
Latest news with #StevePadilla

California Senate advances AI regulation bills despite federal pushback

Yahoo · 6 days ago · Health

SAN DIEGO (FOX 5/KUSI) — Two California bills aimed at tightening regulation of artificial intelligence cleared a major hurdle this week, passing the state Senate amid growing tension between state and federal lawmakers over how — and who — should govern the fast-evolving technology. The bills, both authored by Sen. Steve Padilla (D-San Diego), take direct aim at the mental health risks of AI tools and the lack of transparency in automated decision-making systems. But they may soon run up against a proposed federal moratorium that would block states from enacting AI regulations for the next decade, as reported by the Associated Press.

First, there's SB 243. It targets the growing use of AI chatbots marketed as companions for mental health support and emotional wellbeing. The bill would ban the use of addictive engagement tactics and require developers to notify users regularly that the chatbot is not human. 'Tech companies are creating these AI products in a regulatory vacuum,' Padilla said. 'But they have proven they cannot be trusted to minimize the risks they pose to the public on their own. The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails in place to ensure transparency, safety, and accountability.'

Meanwhile, SB 420 takes a broader approach, seeking to establish a comprehensive regulatory framework around AI use in both public and private decision-making systems. The legislation would mandate that individuals be informed when AI tools are used to make decisions about them, require an explanation of how the systems function, and provide a process for human appeal in some cases. It also calls for impact assessments to measure potential data misuse or algorithmic bias. The two bills now move to the State Assembly and, if approved, could be on Governor Gavin Newsom's desk later this year.
The legislative momentum in Sacramento stands in sharp contrast to efforts in Washington, D.C., where a House Republican-backed reconciliation bill would impose a 10-year freeze on state or local AI regulations. Critics argue the federal proposal could stall crucial oversight during a pivotal time for AI development. Padilla has strongly opposed the federal moratorium, stating, 'California has a proud heritage as a home for technological vision while enforcing ethical, safe standards in concert with those values, not in opposition. However, federal mismanagement has left the public vulnerable to the dangers AI poses. California must step in to provide guiderails that create common sense consumer protections while still fostering innovation.'

The state's push for AI accountability comes on the heels of a March report from a working group appointed by Gov. Newsom, offering policy guidance on responsible AI governance. This week, 260 lawmakers from all 50 states — Democrats and Republicans alike — signed a joint letter to Congress urging them to reject the proposed moratorium, highlighting a growing bipartisan consensus around the need for state-level action.

As AI tools become more integrated into daily life, California appears determined to lead the charge — even if it means challenging federal authority.

The Associated Press contributed to this report. Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

California Senate passes bill that aims to make AI chatbots safer

Yahoo · 7 days ago · Health

California lawmakers on Tuesday moved one step closer to placing more guardrails around artificial intelligence-powered chatbots. The Senate passed a bill that aims to make chatbots used for companionship safer after parents raised concerns that virtual characters harmed their children's mental health. The legislation, which now heads to the California State Assembly, shows how state lawmakers are tackling safety concerns surrounding AI as tech companies release more AI-powered tools.

"The country is watching again for California to lead," said Sen. Steve Padilla (D-Chula Vista), one of the lawmakers who introduced the bill, on the Senate floor.

At the same time, lawmakers are trying to balance concerns that they could be hindering innovation. Groups opposed to the bill, such as the Electronic Frontier Foundation, say the legislation is too broad and would run into free speech issues, according to a Senate floor analysis of the bill.

Under Senate Bill 243, operators of companion chatbot platforms would remind users at least every three hours that the virtual characters aren't human. They would also disclose that companion chatbots might not be suitable for some minors. Platforms would also need to take other steps, such as implementing a protocol for addressing suicidal ideation, suicide or self-harm expressed by users. That includes showing users suicide prevention resources. The operators of these platforms would also have to report the number of times a companion chatbot brought up suicidal ideation or actions with a user, along with other requirements.

Dr. Akilah Weber Pierson, one of the bill's co-authors, said she supports innovation but that it must come with "ethical responsibility." Chatbots, the senator said, are engineered to hold people's attention, including children's.
"When a child begins to prefer interacting with AI over real human relationships, that is very concerning," said Sen. Weber Pierson (D-La Mesa).

The bill defines companion chatbots as AI systems capable of meeting the social needs of users. It excludes chatbots that businesses use for customer service.

The legislation garnered support from parents who lost their children after they started chatting with chatbots. One of those parents is Megan Garcia, a Florida mom who sued Google and Character.AI after her son Sewell Setzer III died by suicide last year. In the lawsuit, she alleges the platform's chatbots harmed her son's mental health and failed to notify her or offer help when he expressed suicidal thoughts to these virtual characters.

Character.AI, based in Menlo Park, Calif., is a platform where people can create and interact with digital characters that mimic real and fictional people. The company has said that it takes teen safety seriously and rolled out a feature that gives parents more information about the amount of time their children are spending with chatbots on the platform. Character.AI asked a federal court to dismiss the lawsuit, but a federal judge in May allowed the case to proceed.

This story originally appeared in Los Angeles Times.
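The SB 243 duties described above (a periodic "not human" reminder and a protocol for surfacing crisis resources) can be pictured with a minimal, hypothetical sketch. The bill does not prescribe any implementation; all class and variable names here are illustrative, and the keyword check stands in for whatever detection method a real platform would use.

```python
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(hours=3)      # "at least every three hours"
CRISIS_KEYWORDS = {"suicide", "self-harm"}  # illustrative stand-in for real detection
CRISIS_RESOURCE = "If you are in crisis, call or text 988 (US)."

class CompanionChatSession:
    """Hypothetical compliance layer for a companion chatbot session."""

    def __init__(self):
        self.last_reminder = None
        self.crisis_mentions = 0  # tallied to support the bill's reporting duty

    def process(self, user_message, now=None):
        now = now or datetime.now()
        notices = []
        # Periodic disclosure that the character is not human.
        if self.last_reminder is None or now - self.last_reminder >= REMINDER_INTERVAL:
            notices.append("Reminder: you are chatting with an AI, not a person.")
            self.last_reminder = now
        # Crisis protocol: show prevention resources and count the event.
        if any(k in user_message.lower() for k in CRISIS_KEYWORDS):
            self.crisis_mentions += 1
            notices.append(CRISIS_RESOURCE)
        return notices

session = CompanionChatSession()
t0 = datetime(2025, 6, 1, 9, 0)
print(session.process("hi there", now=t0))                # first message triggers the reminder
print(session.process("a story?", now=t0 + timedelta(hours=1)))  # within 3 hours: no new notices
```

This only illustrates the shape of the obligations; the actual bill text governs what operators would have to do.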


California AI bills advance as Congress considers state-level regulation ban

Axios · 7 days ago · Business

Two California bills aiming to regulate artificial intelligence passed the state Senate this week.

Why it matters: Those bills — both authored by Sen. Steve Padilla (D-San Diego) — are clashing with efforts in D.C. to stop state-level AI legislation. House Republicans' reconciliation bill would prohibit states or localities from regulating AI for a decade.

State of play: One bill, SB 243, aims to counter tech companies' marketing of chatbots as a remedy for loneliness and mental health struggles. It would prohibit operators from relying on addictive practices to boost engagement and require notifications and reminders that the bot is not human.

Between the lines: The other bill, SB 420, seeks to create a regulatory framework for AI systems. It would require that individuals be notified when a public or private automated system is used to make decisions about them, be told how the tool works, and in some cases be able to appeal for review by a person. The bill would also require an impact assessment covering the purpose, the use of data, and the potential for bias in those automated systems.

What they're saying: "Tech companies would have us believe any regulation at all would be disastrous," Padilla said in a statement. "The truth is it would be disastrous to allow tech titans to operate AI without oversight, accountability, or restraint."

What's next: Both bills now move to the Assembly before a potential trip to the governor's desk.

The big picture: All over the country, states are advancing and implementing AI safeguards while Congress considers a national moratorium in the reconciliation bill. Padilla told Axios he "can't think of a more irresponsible or stupid thing to do at this juncture" than for Congress to pass the AI moratorium and stop state lawmakers from regulating the tech.

Threat level: The proposed federal moratorium could hinder California's broader efforts to develop a framework for governing AI models. In March, a working group appointed by Gov. Gavin Newsom released a report outlining guidance for policymakers on how to regulate the burgeoning industry, which Sen. Scott Wiener (D-San Francisco) said strikes "a thoughtful balance between the need for safeguards and the need to support innovation." "AI presents tremendous opportunities to transform our world for the better. At the same time, the rapid pace of technological advancement in AI means policymakers must act with haste to impose reasonable guardrails to mitigate foreseeable risks," Wiener, who introduced AI whistleblower bill SB 53 in February, said in a statement.

The bottom line: California has historically led on tech regulation while Congress gets stuck in debate mode — first on privacy and now on AI. The state has enacted a wide range of bills addressing everything from digital replicas of individuals to election ads, and has more legislation in the works. Today, 260 state lawmakers from both sides of the aisle across all 50 states sent a letter to Congress urging lawmakers to oppose the moratorium.
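The SB 420 obligations summarized above (notify the person that an automated tool was used, explain how it works, offer human review in some cases, and assess data use and bias) can be sketched as a data structure. This is purely illustrative: the bill specifies no schema, and every field name below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    # Hypothetical record of the assessment SB 420 would require.
    purpose: str
    data_used: list
    bias_risks: list

@dataclass
class AutomatedDecision:
    subject: str
    outcome: str
    tool_explanation: str
    human_appeal_available: bool
    assessment: ImpactAssessment

    def notice(self):
        # The kind of disclosure a covered entity might present to the individual.
        lines = [
            f"An automated system was used in a decision about you: {self.outcome}.",
            f"How it works: {self.tool_explanation}",
        ]
        if self.human_appeal_available:
            lines.append("You may request review of this decision by a person.")
        return "\n".join(lines)

decision = AutomatedDecision(
    subject="applicant-123",
    outcome="loan application declined",
    tool_explanation="a scoring model weighing income, debt, and payment history",
    human_appeal_available=True,
    assessment=ImpactAssessment(
        purpose="consumer credit screening",
        data_used=["income", "debt", "payment history"],
        bias_risks=["proxy discrimination via zip code"],
    ),
)
print(decision.notice())
```

Again, this only conveys the shape of the transparency requirements; the bill text itself defines what disclosures would actually be mandated.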

Opinion - Reclaiming critical thinking in the Age of AI

Yahoo · April 27, 2025 · Business

California's Senate Judiciary Committee, with bipartisan support, approved Senate Bill 243 this month, requiring that AI companies 'protect users from the addictive, isolating, and influential aspects of artificial intelligence chatbots.' It is the first bill of its kind in the U.S.

On the day of the bill's hearing, its author, state Sen. Steve Padilla (D-San Diego), held a press conference where he was joined by Megan Garcia, who last year sued the AI company Character.AI, alleging that its chatbot had played a role in her son's suicide. Garcia testified in support of the bill, stating that such chatbots are 'inherently dangerous' and can lead to inappropriate conversations or self-harm. 'Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of new products,' said Padilla.

Similar bills are currently working their way through legislatures in several states. These are vital steps in the right direction. Comparable legislation is urgently needed nationwide. What is at stake? Our kids' safety and emotional development and our capacity for critical thought — perhaps even our democracy.

A 2024 Pew Research poll found that nearly half of Americans reported using AI several times a week, with one in four using it 'almost constantly.' A 2025 Gallup survey revealed that nearly all Americans rely on products that involve AI, even though most aren't aware of it. New research is beginning to illuminate the significant consequences. A 2025 study published in Societies found 'a very strong negative correlation between subjects' use of AI tools and their critical thinking skills.' Younger users were especially affected — a trend many teachers are starting to observe in their students. 'As individuals increasingly offload cognitive tasks to AI tools,' wrote Michael Gerlich, who led the study, 'their ability to critically evaluate information, discern biases, and engage in reflective reasoning diminishes.' That's a siren alert if there ever was one.
Far from perfect (or neutral), AI systems are built by humans and programmed with inherent biases, even if unintentionally. Executives and developers at leading AI companies like OpenAI, Google and Meta fine-tune their chatbots and establish their settings and rules. As we rely on AI to do our thinking, we outsource our individual thoughts to the whims and biases of private corporations and their teams.

Social media companies including Snap and Meta (which owns Facebook, Instagram, WhatsApp and Threads) are now rolling out their own 'AI companions' worldwide. Billions of people, including hundreds of millions of kids and teens, now have an always-available online 'friend' offering them constant validation. That may sound comforting, but it deprives young people of the emotional growth and interpersonal skills they need for real relationships.

AI companions are positioned as trusted friends, but they are programmed to monetize our relationships: to mine, record and expertly analyze everything we say or type. Like high-tech tattletales, they can then feed this information into the data ecosystem, allowing marketers, advertisers and anyone else who pays to target and manipulate us in heretofore unimagined ways. In January, Meta announced it will now program these chatbots with personalized 'memories,' drawing from users' interests, posts and even dietary preferences.

As millions of Americans develop emotional, political and even sexual attachments to AI companions, the promised happiness is not materializing; instead, suffering and unhappiness are mounting. In March, research from MIT Media Lab and OpenAI found that frequent usage of AI chatbots correlated with 'increased loneliness, emotional dependence, and reduced social interaction.' As we increasingly depend on AI to understand the world, we open ourselves up to manipulation by entities that don't have our best interests in mind.
In 2025, the news rating service NewsGuard uncovered a significant threat to AI systems: foreign disinformation campaigns targeting AI training data with deliberate falsehoods. The Russian-linked Pravda Network published 3.6 million articles in 2024 designed to manipulate AI responses and spread propaganda. What happens to democracy when we offload our thinking to chatbots that are actively manipulated by foreign adversaries seeking disruption?

AI is here to stay. And it has the potential to improve our lives in remarkable ways, from curing diseases to ending poverty to achieving scientific breakthroughs and much more. To ensure AI serves us, rather than the other way around, there are several key steps to take right now.

First, transparency is paramount. Either voluntarily or via legislative mandate, large AI and social media companies like Meta, Google and OpenAI must disclose what data they're collecting from us and who they're sharing it with. Nutrition labels on food help us make healthy choices by telling us if something is high in sugar or cholesterol. Similarly, AI 'nutrition labels' can tell us if an AI system is known to have a high amount of political bias or how well it protects our privacy. Crucially, companies can then provide everyone with the ability to opt out of manipulative personalization.

Second, new regulations are required to protect kids, teens and users of all ages from the threats posed by 'AI companions.' Legislation like California's Senate Bill 243 can help prevent AI chatbots from employing addictive engagement techniques and mandate protocols for handling signs of distress or suicide. This kind of targeted legislation deserves national adoption.

Third, new media literacy initiatives are vital. Studies show that teaching students how to spot disinformation can reduce its impact. Several state legislatures are already moving in this direction, incorporating media literacy as a standard part of the K-12 curriculum.
Skills including critical thinking and media literacy in the age of AI ought to be as essential for students nationwide as reading and math. AI is a powerful sword that's sharp on both sides. We can wield it responsibly and protect our kids as long as we retain our ability to think independently, reason cogently and communicate authentically.

Mark Weinstein is a tech thought leader, privacy expert, and one of the inventors of social networking. He is the author of 'Restoring Our Sanity Online: A Revolutionary Social Framework' (Wiley, 2025).
