
Latest news with #SB243

California Senate advances AI regulation bills despite federal pushback

Yahoo

6 days ago



SAN DIEGO (FOX 5/KUSI) — Two California bills aimed at tightening regulation of artificial intelligence cleared a major hurdle this week, passing the state Senate amid growing tension between state and federal lawmakers over how — and who — should govern the fast-evolving technology. The bills, both authored by Sen. Steve Padilla (D-San Diego), take direct aim at the mental health risks of AI tools and the lack of transparency in automated decision-making systems. But they may soon run up against a proposed federal moratorium that would block states from enacting AI regulations for the next decade, as reported by the Associated Press.

First, there's SB 243. It targets the growing use of AI chatbots marketed as companions for mental health support and emotional wellbeing. The bill would ban the use of addictive engagement tactics and require developers to notify users regularly that the chatbot is not human. 'Tech companies are creating these AI products in a regulatory vacuum,' Padilla said. 'But they have proven they cannot be trusted to minimize the risks they pose to the public on their own. The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails in place to ensure transparency, safety, and accountability.'

Meanwhile, SB 420 takes a broader approach, seeking to establish a comprehensive regulatory framework for AI use in both public and private decision-making systems. The legislation would mandate that individuals be informed when AI tools are used to make decisions about them, that they be told how the systems function, and that a process for human appeal be provided in some cases. It also calls for impact assessments to measure potential data misuse or algorithmic bias. The two bills now move to the State Assembly and, if approved, could be on Governor Gavin Newsom's desk later this year.

The legislative momentum in Sacramento stands in sharp contrast to efforts in Washington, D.C., where a House Republican-backed reconciliation bill would impose a 10-year freeze on state or local AI regulations. Critics argue the federal proposal could stall crucial oversight during a pivotal time for AI development. Padilla has strongly opposed the federal moratorium, stating, 'California has a proud heritage as a home for technological vision while enforcing ethical, safe standards in concert with those values, not in opposition. However, federal mismanagement has left the public vulnerable to the dangers AI poses. California must step in to provide guardrails that create common sense consumer protections while still fostering innovation.'

The state's push for AI accountability comes on the heels of a March report from a working group appointed by Gov. Newsom, offering policy guidance on responsible AI governance. This week, 260 lawmakers from all 50 states — Democrats and Republicans alike — signed a joint letter urging Congress to reject the proposed moratorium, highlighting a growing bipartisan consensus around the need for state-level action. As AI tools become more integrated into daily life, California appears determined to lead the charge — even if it means challenging federal authority.

The Associated Press contributed to this report. Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

California AI bills advance as Congress considers state-level regulation ban

Axios

7 days ago



Two California bills aiming to regulate artificial intelligence passed the state Senate this week.

Why it matters: Those bills — both authored by Sen. Steve Padilla (D-San Diego) — are clashing with efforts in D.C. to stop state-level AI legislation. House Republicans' reconciliation bill would prohibit states or localities from regulating AI for a decade.

State of play: One bill, SB 243, aims to counter tech companies' marketing of chatbots as a remedy for loneliness and mental health struggles. It would prohibit operators from relying on addictive practices to boost engagement, and would require notifications and reminders that the bot is not human.

Between the lines: The other bill, SB 420, seeks to create a regulatory framework for AI systems. It would require that individuals be notified when a public or private automated system is used to make decisions about them, be told how the tool works, and in some cases be able to appeal for review by a person. The bill would also mandate an impact assessment covering the purpose, use of data and potential for bias in those automated systems.

What they're saying: "Tech companies would have us believe any regulation at all would be disastrous," Padilla said in a statement. "The truth is it would be disastrous to allow tech titans to operate AI without oversight, accountability, or restraint."

What's next: Both bills now move to the Assembly before a potential trip to the governor's desk.

The big picture: All over the country, states are advancing and implementing AI safeguards while Congress considers a national moratorium in the reconciliation bill. Padilla told Axios he "can't think of a more irresponsible or stupid thing to do at this juncture" than for Congress to pass the AI moratorium and stop state lawmakers from regulating the tech.

Threat level: The proposed federal moratorium could hinder California's broader efforts to develop a framework for governing AI models. In March, a working group appointed by Gov. Gavin Newsom released a report outlining guidance for policymakers on how to regulate the burgeoning industry, which Sen. Scott Wiener (D-San Francisco) said strikes "a thoughtful balance between the need for safeguards and the need to support innovation." "AI presents tremendous opportunities to transform our world for the better. At the same time, the rapid pace of technological advancement in AI means policymakers must act with haste to impose reasonable guardrails to mitigate foreseeable risks," Wiener, who introduced AI whistleblower bill SB 53 in February, said in a statement.

The bottom line: California has historically led on tech regulation while Congress gets stuck in debate mode — first on privacy and now on AI. The state has enacted a wide range of bills addressing everything from digital replicas of individuals to election ads, and has more legislation in the works. Today, 260 state lawmakers from both sides of the aisle across all 50 states sent a letter to Congress urging lawmakers to oppose the moratorium.

‘We need these measures': Latest gun safety bill debated in Oregon Senate

Yahoo

May 13, 2025



PORTLAND, Ore. (KOIN) — An Oregon State Senate committee heard extended debate Monday as the legislature took public testimony on a bill that would toughen gun laws in Oregon.

Senate Bill 243 is moving through the Oregon Senate Committee on Rules. If passed, it would give local governments the authority to decide rules surrounding firearms in public buildings like city halls. The bill also bans devices such as bump stocks, which make a semi-automatic rifle shoot more rapidly.

'Overall, your viewers should know that SB 243, the Community Safety Firearms Act, will put safety first for Oregonians, prohibiting devices like 'bump stocks' and letting local governments decide whether to allow guns in city halls and official meeting locations,' said Senator Anthony Broadman (D-Bend).

'What truly stopped me from becoming a true victim? My firearm. My ability to carry not only at home where I work, but out in public as well. To say that restricting our right to carry in a public place is crazy,' added Brian McGregor, who spoke out against the bill.

On Monday, members of the public, who voiced mixed opinions, addressed the Senate committee. 'We need these measures because kids are wracked with worry that a mass shooting will happen at school, and gun deaths in Oregon have increased 140% over the past decade. Rapid-fire devices that turn firearms into automatic weapons don't belong in our community. A healthy democracy allows people to visit their local leaders and attend government meetings without a threat of gun violence. More than anything, Oregonians want leaders to work together for prevention of gun deaths and injuries,' said Broadman.

'States with bans on bump stocks have a significantly reduced chance of those devices being used in shootings, leading to significantly fewer deaths and injuries,' said Steve Schneider, the firearm suicide prevention program director for Alliance For Safe Oregon.

The bill originally would have required gun buyers to wait 72 hours after a dealer requests a criminal background check from the Oregon State Police, but an amendment removes this section of the bill. 'That would prevent any gun shows in Oregon from happening,' said Rick, who spoke out against the bill. Another provision that was taken out would have required a buyer to be 21 or older to purchase semi-automatic firearms. 'The Dash 9 amendment removes the 72-hour waiting period. The minimum age provision was already proposed to be removed by an amendment,' said Broadman.

The Senate committee will vote on the bill Wednesday. 'The committee will determine next steps, potentially voting to have the bill as amended going to the Senate floor,' said Broadman.

AI relationships have entered the chat

Politico

March 14, 2025



WEEKEND READ

Lawmakers have struggled for years to address the risks of social media for kids, largely without success. Now, AI chatbots have arrived on the scene, raising a host of new potential issues. Researchers who've closely followed social media's impact on youth mental health are concerned that so-called relational chatbots — the ones that provide companionship — are just as likely to addict kids. In fact, they worry that bots, with their eerily human responses and warm conversational style, could be even more magnetic than current platforms. 'It could do what's very addictive in the research on romantic relationships and heartbreak,' said Jodi Halpern, a professor of bioethics and medical humanities at the University of California, Berkeley. 'It could not call you for a little while and then call.'

Such bots are growing in popularity and number. ChatGPT, an informational chatbot, has 400 million weekly users, with a quarter of teens using it for schoolwork. Another, more social-style product that lets users pick or design AI personas to interact with has an audience of about 20 million monthly users.

Already, lawmakers are trying to get their hands around the issue. They have introduced hundreds of state bills with the hope of addressing some of the potential problems with AI, with a subset focusing on chatbots and generative AI. But those bills largely require AI companies to prominently disclose to users that they're talking to a bot. One bill, California's SB 243, is different: It seeks to protect kids from becoming addicted to the platforms by focusing on how addiction happens. The bill was introduced by Democratic California state Sen. Steve Padilla of Chula Vista in January. Our colleague in California, Tyler Katzenberger, says the bill, which is being reviewed in the state Senate Judiciary and Health committees, could advance, though it's too soon to assess whether it will make it to Democratic Gov. Gavin Newsom's desk in its current form.
The bill would force companies to limit kids' exposure to AI that rewards users at random intervals to keep them engaged in conversation with chatbots. It's based on the concept that intermittent rewards can drive addiction — a well-known psychological mechanism. Essentially, when humans know there's a potential for reward, they'll keep engaging in an activity that might deliver the reward. When humans are rewarded at irregular intervals, they tend to seek out the reward more frequently. Slot machines are designed specifically to do this, for example. Academics and tech experts, including renowned computer scientist Jaron Lanier, think social media platforms are configured around intermittent rewards, and academics think chatbots have the potential to elicit similar responses. It's unclear whether chat platforms use intermittent rewards — deliberately or otherwise — to keep users coming back for more. But that's why Halpern, who advised Padilla on the bill's concept, believes it's important to require companies to program out behaviors that could addict young users now.

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care. Artificial intelligence helped ProPublica reporters analyze a database of 'woke' National Science Foundation grants compiled by Sen. Ted Cruz (R-Texas). Two examples swept up in Cruz's dragnet: research studying the evolution of mint plants and a project developing devices to treat severe bleeding.

FORWARD THINKING

Lawmakers across the political spectrum increasingly argue that health care behemoths need to be broken up. But at least one company is countering that integration is key to patient experiences — contending that the myriad problems with the health care system have more to do with its incentives than its size. 'I don't think the problem is just the size of these organizations,' Owen Tripp, co-founder and CEO of Included Health, told Daniel. 'The problem is that none of them actually put the member at the focal point of what care is all about.'

Included Health, an 'all-in-one' model that offers care for tens of millions of patients, is an example of integration working well, Tripp said. Instead of being paid to provide as many services as possible or to throttle the number of services offered, the company asks a patient-centric question: How can a patient access better care in a way that's affordable? Getting paid to deliver outcomes, not procedures, is a good start, he said — and sometimes the best way to do that is through large, integrated systems that can provide in-person or remote care for many health issues. Major corporations, among the largest purchasers of care in the country, seem to agree: Nearly a third of the Fortune 100 are clients, Tripp said.

Even so: Critics have pointed to problems that can come with large, integrated health companies, including prices that are driven up and incentives to offer less care. And for years, Washington has been more intent on breaking up health giants than fusing them together.

A novel idea for controlling chatbots

Politico

March 6, 2025



Lawmakers have struggled for years to address the risks of social media for kids, largely without success. Now AI chatbots have arrived on the scene, raising a host of new potential issues. Researchers who have closely followed social media's impact on youth mental health are worried that relational chatbots — the ones that provide companionship — are just as likely to addict kids. In fact, they worry that bots, with their eerily human responses and warm conversational style, could be even more magnetic than current platforms. 'It could do what's very addictive in the research on romantic relationships and heartbreak,' said Jodi Halpern, who teaches bioethics and medical humanities at the University of California, Berkeley. 'It could not call you for a little while and then call.'

They're growing quickly. ChatGPT, an informational chatbot, has 400 million weekly users, with a quarter of teens using it to do their homework. Another, more social-style product that lets users pick or design AI personas to interact with has an audience of about 20 million monthly users.

Already, lawmakers are trying to get their hands around this issue. There are hundreds of state bills that hope to address some of the concerns with AI, a subset of which focus on chatbots and generative AI. But these bills largely want AI companies to prominently disclose to users that they're talking to a bot. One bill, California's SB 243, is different: It seeks to protect kids from getting addicted to these platforms by focusing on how addiction happens. The bill was introduced by Democratic California state Sen. Steve Padilla of Chula Vista in January. My colleague in California, Tyler Katzenberger, says the bill, which is being reviewed in the state Senate Judiciary and Health committees, could advance, though it's too soon to assess whether it will make it to Democratic Gov. Gavin Newsom's desk in its current form.
The bill would force companies to limit kids' exposure to AI that rewards users at random intervals to keep them engaged in conversation with chatbots. It's based on the concept that intermittent rewards can drive addiction, a well-known psychological mechanism. Essentially, when humans know there is the potential for a reward, they'll keep doing an activity that might deliver it. When the reward is given at irregular intervals, they seek it out more often. Slot machines are designed specifically to do this. Academics and tech experts, including famous computer scientist Jaron Lanier, think social media platforms are configured around intermittent rewards, and academics think there is potential for this to be true of chatbots too.

It's not clear whether chat platforms are using intermittent rewards, deliberately or otherwise, to keep users coming back for more. But that's why Halpern, who advised on the bill, believes it's important to require companies to program out behaviors that could addict young users. On social media, people may get addicted to 'likes' on a post, or engaging comments from other users, which come in at random. Chatbots, the argument goes, might addict people through the intermittent reward, or absence, of perceived human connection. 'When people really pay attention to you and know a lot about you, and show a lot of interest in your specific things, you're going to want to check in with them more,' said Halpern.

Chatbots are proving to be extremely good at replicating empathy. A study from a couple of years ago found that chatbots have better bedside manner than doctors; a more recent one shows AI bots were rated as more compassionate than human responders. If that compassion is weaponized, given and then taken away in an unpredictable way, it could prove addictive. The California bill hopes to get ahead of this problem by limiting the use of intermittent rewards.
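The distinction the reporting turns on, rewards that arrive on a predictable schedule versus at unpredictable intervals, can be sketched in a few lines of Python. This is purely an illustrative toy (the function names and the 1-in-5 rate are invented for the example, not taken from the bill or any chatbot's code): both schedules below pay out equally often on average, but only the second hides when the next reward will come, which is the slot-machine pattern researchers describe.

```python
import random

def fixed_ratio_rewards(n_responses, every=5):
    """Fixed-ratio schedule: a reward arrives on every 5th response,
    so the user can predict exactly when the next one is due."""
    return [i % every == every - 1 for i in range(n_responses)]

def variable_ratio_rewards(n_responses, p=0.2, seed=42):
    """Variable-ratio schedule: each response is rewarded with
    probability p, so the gap between rewards is unpredictable."""
    rng = random.Random(seed)
    return [rng.random() < p for _ in range(n_responses)]

# Same long-run payout rate (about 1 reward per 5 responses),
# but very different timing patterns.
fixed = fixed_ratio_rewards(1000)
variable = variable_ratio_rewards(1000)
print(sum(fixed), sum(variable))
```

Behavioral research holds that the second, unpredictable schedule sustains the behavior far more strongly, which is why the bill targets rewards delivered "at random intervals" rather than rewards as such.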
It would also require companies to file an annual report to the state Department of Health Care Services about suicidal behavior among minors on their platforms. The bill is not prescriptive about how companies go about stopping AI from offering up intermittent rewards, so long as they 'take reasonable steps' to do so. In part, that is a programming question: whether you can eliminate a quality from chatbots that might automatically arise from how they work. But the bill also faces a legal question: Can you require a bot not to deliver toxic content? The answer might be yes, but it depends on whether lawmakers, and society, believe chatbots are products or people.

One key reason lawmakers have not been able to regulate social media is its entwinement with speech. Tech platforms have effectively made the case that regulating social media amounts to infringing on users' free speech. Lawyers have struggled to make the case that social media and its content-serving algorithms are instead actually a product, one that is causing a lot of harm, and that these platforms are therefore subject to product liability laws. This is still playing out in lawsuits, but that argument has not yet convinced the courts. So far, even the algorithms that serve up tailored content to keep users online are considered a sort of extension of speech. Courts may feel differently about bots than they do about social media, because software is generating the speech rather than humans. As for whether it's possible to code out intermittent rewards: If humans can learn not to use toxic behavior in their relationships, we should be able to train AI to do the same. Tech executives famously love a challenge.

DOGE IS STRESS-TESTING A WATERGATE-INSPIRED LAW

The battle for the future of government access to millions of Americans' data is happening in courts right now, as the Privacy Act of 1974 faces its biggest challenge yet, POLITICO's Alfred Ng reports.
The law, meant to protect sensitive information collected by government agencies, is at the center of at least a dozen lawsuits against the Trump administration. It could be months before courts decide, but the outcome could shape how future administrations obtain data across the government. Groups from student associations, labor unions and state attorneys general argue the Trump administration is violating the Privacy Act by sharing people's data with Elon Musk's Department of Government Efficiency, while the Trump administration contends it's abiding by the law because political appointees have the same authorization as government employees.

CRYPTO vs CRYPTO IN DC

Infighting is jeopardizing the crypto community's chances at making inroads with Washington, POLITICO's Victoria Guida reports in her column Capital Letter. On paper, it seems like now is the best time to pass crypto-friendly legislation, with several industry-friendly lawmakers in Congress, but the industry has fractured over which laws should be tackled. Policy proposals for stablecoins, a type of cryptocurrency tied to the value of the U.S. dollar, have sparked accusations within the industry that draft legislation favors certain companies and threatens to drive competitors out of business.
