1 year of AI: Rockford teachers, students prepare for the future


Yahoo | May 22, 2025

ROCKFORD, Ill. (WTVO) — Rockford Public Schools (RPS) teachers hope they have created more responsibility and awareness around the use of AI after a full school year of students using the technology.
RPS implemented Magic School, an AI learning tool, for use by students and teachers at the beginning of the 2024-2025 school year.
'We have to be realistic. Kids are using AI, whether it's for academics or for social reasons. They're interacting with it,' said RPS Director of Educational Technology Susan Uram. 'So, I do think as educators… We know we have a responsibility to make sure that there is some AI literacy going on.'
Students were able to access chatbots powered by artificial intelligence and learned how to prompt them in a meaningful way. The director explained that the AI is supposed to be used as a tool, not a crutch for learning.
'There's an ethical responsibility around: are you just asking it to generate something for you, or are you asking it for feedback and input?' said Uram. '[Are you] using it as a learning tool, [or] just something to do the learning for you?'
AI is also helping teachers save time by creating the framework of lesson plans.
'It's been kind of a game changer for us in being able to support our teachers and taking the Tier one curriculum, but making it their own and just enhancing it,' said Annie Peiffer.
'You're able to go through the lesson plan and be very targeted with what you are teaching to your students, whether it's students that need a little bit of additional support or students that need a little bit more of a challenge,' said Sara Bruce. 'The AI can help target that lesson plan directly for those students.'
The district is trying to make sure everyone is prepared for the future of AI.
'We know that kids who will even graduate this year need to have AI skills and will enter an AI-influenced world,' Uram explained. 'It's not something that we can slowly adopt. It's something that we need to understand as quickly as possible and start to incorporate into our professional practice for the sake of the students.'
RPS plans to continue using Magic School for the 2025-2026 school year to improve and refine students' ability to use AI to their advantage.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.


Related Articles

The AI Collective Launches Globally to Mobilize Next Generation of AI Innovators and Stewards

Business Wire | 10 minutes ago

SAN FRANCISCO--(BUSINESS WIRE)--The world's largest grassroots AI community today formally detailed its global rebrand as The AI Collective, its official incorporation as a non-profit organization, and the launch of three foundational initiatives to cultivate a collaborative ecosystem for the responsible stewardship of AI. Formerly The GenAI Collective, the organization boasts 70,000+ members across 25+ global chapters and partners including AWS, Meta, Anthropic, Roam, and Product Hunt.

"AI's future depends not just on faster models—but on rebuilding trust, fostering global collaboration, and aligning progress with human values."

This announcement builds on initial excitement sparked by a social media reveal on Monday, June 3rd, and coincides with flagship launch events currently underway in San Francisco, New York City, and across its global chapters.

As the race to develop artificial intelligence accelerates daily, fueled by intense competition and unprecedented investment, the essential work of aligning these powerful systems with human values and societal trust is often dangerously sidelined. The AI Collective is responding directly to this critical gap, positioning itself as the essential human counterpoint in the age of acceleration by building a global community to steward AI, informing decision-makers, and supporting mission-aligned entrepreneurs.

"The ground is shifting beneath us. AI's exponential progress brings immense possibility, but also profound questions about our future, our work, and our very identity – and the institutions we rely on weren't built for this velocity," said Chappy Asel, Co-Founder and Executive Director of The AI Collective. "Trust is the invisible thread holding society together, and right now, that thread is fraying under the strain of misaligned incentives and rapid, uncoordinated change. We believe that in an era of exponential transformation, our greatest strength lies in each other – in creating trusted, in-person spaces to make sense of this moment, ask the deeper questions, and collaboratively shape a future where AI genuinely enhances human flourishing, not just surpasses human capability."

The AI Collective believes that without deliberate intervention focused on trust, openness, and participation, AI's trajectory risks leaving humanity behind. Recognizing that managing this requires bridging the widening gap between technological progress and societal adaptation, the organization is launching three foundational initiatives designed to rebuild trust through action and participation:

• Expanding Global Community & Events: Rebuilding Trust Through In-Person Connection. The AI Collective will rehumanize innovation by launching 100 chapters across six continents by the end of the year. Each chapter will fuse its community's unique vision with global shared values of openness and inclusivity, curating salons, hackathons, demo nights, mentorship circles, and cross-disciplinary mashups to deepen trust and collaboration. The network will converge at Singularity Fest in November 2025 in San Francisco for a multi-day, decentralized celebration attracting 10,000 pioneers across domains for hands-on labs, purpose-driven keynotes, thematic tracks, and community-led activations, ensuring AI's next leaps keep humanity firmly at the center.

• Informing Decision-Makers: Bridging the Gap Between Frontier Insight and Responsible Governance. To counter the risks of institutional blind spots and lagging policy, The AI Collective Institute, the organization's participatory research arm, connects frontier technologists directly with policymakers, industry leaders, and the public. It translates ground-truth insights from the AI ecosystem into practical guidance through open research, equipping frontline leaders to foster responsible innovation and navigate future shock effectively.

• Supporting Mission-Aligned Innovation: Actively Incentivizing Human-Centric AI Development. In pursuit of a values-aligned AI ecosystem driven by the community's unique trust and insight, Collective Investments acts as a dedicated founder-investor matchmaking program under the non-profit umbrella. It identifies and supports promising founders building trustworthy, beneficial AI, connecting them with values-aligned capital allocators (VCs, grants, angels) and providing crucial support to ensure the AI future being built reflects the principles of human flourishing and responsible progress.

The AI Collective is celebrating this evolution with its ongoing flagship events and global chapter celebrations. The organization invites builders, thinkers, policymakers, investors, and pioneers worldwide to join the conversation and contribute to shaping a future where technology serves all of humanity. Read the organization's foundational perspective, "Trust in the Age of Acceleration," learn more and join the community, and follow the journey @_ai_collective on social platforms.

ABOUT THE AI COLLECTIVE

The AI Collective (formerly The GenAI Collective) is the world's largest non-profit, grassroots community dedicated to empowering the AI ecosystem to collaboratively steer AI's future toward trust, openness, and human flourishing. Founded in 2023, The AI Collective has rapidly grown into a global force:

• 70,000+ members, comprising leading founders, researchers, investors, and multidisciplinary operators from OpenAI, Anthropic, Nvidia, Google, Microsoft, Amazon, Databricks, Cohere, and more.
• 200+ AI startups launched or showcased at demo nights, connecting directly with investors and clients; participating companies subsequently raised $72M+ in funding.
• 25+ active chapters with 200+ events hosted, located in major tech hubs globally including New York City, London, Paris, Washington, D.C., Seattle, Bengaluru, and more.
• 40+ leading partners, including Amazon, Anthropic, Andreessen Horowitz, Meta, GitHub, TedAI, Product Hunt, Roam, the Linux Foundation, and top academic institutions.
• A dedicated global team of 100+ volunteer organizers committed to building authentic, impactful community experiences.

Through its focus on in-person connection, participatory research via The AI Collective Institute, and support for mission-aligned innovation via Collective Investments, the organization serves as a vital hub for sense-making, collaboration, and responsible stewardship in the age of artificial intelligence.

Artificial Intelligence Collaboration and Indirect Regulatory Lag

Forbes | 12 minutes ago

Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023, in Washington, DC. The committee held an oversight hearing to examine A.I., focusing on rules for artificial intelligence.

Steve Jobs often downplayed his accomplishments by saying that 'creativity is just connecting things.' Regardless of whether this affects the way you understand his legacy, it is beyond doubt that most innovation comes from interdisciplinary efforts. Everyone agrees that if AI is to exponentially increase collaboration across disciplines, the laws must not lag too far behind the technology. The following explores how a less obvious interpretation of this phrase can help us do what Jobs explained was the logic behind his genius.

The Regulatory Lag

What most people mean by the regulatory lag is that legislation and regulation have difficulty keeping pace with the rate of innovation because an innovation and its consequences are not well understood until well after the product hits the market. While that is true, it only tells half of the story. Technological innovations also put more attenuated branches of the law under pressure to adjust. These are second-order, more indirect legal effects, where whole sets of laws—originally unrelated to the new technology—have to adapt to enable society to maximize the full potential of the innovation.

One classic example comes from the time right after the Internet became mainstream. After digital communication and connectivity became widespread and expedited international communication and commercial relations, nations discovered that barriers to cross-border trade and investment were getting in the way. Barriers such as tariffs and outdated foreign direct investment (FDI) partnership requirements had to be lowered or eliminated if the Internet was to be an effective catalyst for global economic growth.

Neoliberal Reforms

When the internet emerged in the 1990s, much attention went to laws that directly regulated it—such as data privacy, digital speech, and cybersecurity. But some of the most important legal changes were not about the internet itself. They were about removing indirect legal barriers that stood in the way of its broader economic and social potential. Cross-border trade and investment rules, for instance, had to evolve. Tariffs on goods, restrictions on foreign ownership, and outdated service regulations had little to do with the internet as a technology, but everything to do with whether global e-commerce, remote work, and digital entrepreneurship could flourish. These indirect legal constraints were largely overlooked in early internet governance debates, yet their reform was essential to unleashing the internet's full power.

Artificial Intelligence and Indirect Barriers

A comparable story is starting to unfold with artificial intelligence. While much of the discussion of law and AI has focused on algorithmic accountability and data privacy, there is also an opportunity for a larger societal return from AI in its ability to reduce barriers between disciplines. AI is increasing the viability of interdisciplinary work because it can synthesize, translate, and apply knowledge across domains in ways that make cross-field collaboration more essential. Already we are seeing marriages of law and computer science, medicine and machine learning, environmental modeling and language processing.

AI is a general-purpose technology that rewards those who are capable of marrying insights across disciplines. In that sense, the AI era is also the era of interdisciplinary work. Yet the boundary-blurring opportunities triggered by AI are up against legal barriers to entry across disciplines and professions. In many professions, gaining the right to practice or contribute constructively requires navigating a patchwork of licensure regimes and intractable definitions of domain knowledge. While some of these regulations are generally intended to protect public interests, they can also hinder innovation and prevent new interdisciplinary practices from gaining traction. To achieve the full potential of AI-enabled collaboration, many of these legal barriers need to be eliminated—or at least reimagined.

We are starting to see some positive movement. For example, a few states are starting to grant nurse practitioners and physician assistants greater autonomy in clinical decision-making, a step toward cross-disciplinary collaboration between healthcare and AI diagnostics. For now, this is a move in the right direction. In other fields, however, the professional rules of engagement reinforce silos. This must change if we are serious about enabling AI to help us crack complex, interdependent problems. Legislators and regulators cannot focus exclusively on the bark that protects the tree of change; they must also attend to the hidden network of roots that quietly nourishes and sustains it.

Foreign propagandists continue using ChatGPT in influence campaigns

Engadget | 21 minutes ago

Chinese propaganda and social engineering operations have been using ChatGPT to create posts and comments and to drive engagement at home and abroad. OpenAI said it has recently disrupted four Chinese covert influence operations that were using its tool to generate social media posts and replies on platforms including TikTok, Facebook, Reddit and X. The generated comments revolved around topics ranging from US politics to a Taiwanese video game in which players fight the Chinese Communist Party. ChatGPT was used to create social media posts that both supported and decried different hot-button issues to stir up misleading political discourse. Ben Nimmo, principal investigator at OpenAI, told NPR, "what we're seeing from China is a growing range of covert operations using a growing range of tactics."

While OpenAI claimed it also disrupted a handful of operations it believes originated in Russia, Iran and North Korea, Nimmo elaborated on the Chinese operations, saying they "targeted many different countries and topics [...] some of them combined elements of influence operations, social engineering, surveillance."

This is far from the first time this has occurred. In 2023, researchers from cybersecurity firm Mandiant found that AI-generated content had been used in politically motivated online influence campaigns in numerous instances since 2019. In 2024, OpenAI published a blog post outlining its efforts to disrupt five state-affiliated operations across China, Iran and North Korea that were using OpenAI models for malicious purposes. These applications included debugging code, generating scripts and creating content for use in phishing campaigns. That same year, OpenAI said it disrupted an Iranian operation that was using ChatGPT to create long-form political articles about US elections that were then posted on fake news sites posing as both conservative and progressive outlets. The operation was also creating comments to post on X and Instagram through fake accounts, again espousing opposing points of view.

"We didn't generally see these operations getting more engagement because of their use of AI," Nimmo told NPR. "For these operations, better tools don't necessarily mean better outcomes."

This offers little comfort. As generative AI gets cheaper and smarter, it stands to reason that its ability to generate content en masse will make influence campaigns like these easier and more affordable to build, even if their efficacy remains unchanged.
