
Latest news with Jodi Halpern

AI relationships have entered the chat

Politico, 14-03-2025

WEEKEND READ

Lawmakers have struggled for years to address the risks of social media for kids, largely without success. Now, AI chatbots have arrived on the scene, raising a host of new potential issues.

Researchers who've closely followed social media's impact on youth mental health are concerned that so-called relational chatbots, the ones that provide companionship, are just as likely to addict kids. In fact, they worry that bots, with their eerily human responses and warm conversational style, could be even more magnetic than current platforms. 'It could do what's very addictive in the research on romantic relationships and heartbreak,' said Jodi Halpern, a professor of bioethics and medical humanities at the University of California, Berkeley. 'It could not call you for a little while and then call.'

Such bots are growing in popularity and number. ChatGPT, an informational chatbot, has 400 million weekly users, with a quarter of teens using it for schoolwork. Another, more social-style product that lets users pick or design AI personas to interact with has an audience of about 20 million monthly users.

Already, lawmakers are trying to get their arms around the issue. They have introduced hundreds of state bills hoping to address some of the potential problems with AI, a subset of which focus on chatbots and generative AI. But those bills largely require AI companies to prominently disclose to users that they're talking to a bot.

One bill, California's SB 243, is different: It seeks to protect kids from becoming addicted to the platforms by focusing on how addiction happens. The bill was introduced by Democratic California state Sen. Steve Padilla of Chula Vista in January. Our colleague in California, Tyler Katzenberger, says the bill, which is being reviewed in the state Senate Judiciary and Health committees, could advance, though it's too soon to assess whether it will make it to Democratic Gov. Gavin Newsom's desk in its current form.
The bill would force companies to limit kids' exposure to AI that rewards users at random intervals to keep them engaged in conversation with chatbots. It's based on the concept that intermittent rewards can drive addiction, a well-known psychological mechanism: When humans know there's a potential for reward, they'll keep engaging in an activity that might deliver it, and when the reward arrives at irregular intervals, they seek it out more often. Slot machines, for example, are designed specifically to exploit this.

Academics and tech experts, including renowned computer scientist Jaron Lanier, think social media platforms are configured around intermittent rewards, and academics believe chatbots have the potential to elicit similar responses. It's unclear whether chat platforms use intermittent rewards, deliberately or otherwise, to keep users coming back for more. But that's why Halpern, who advised Padilla on the bill's concept, believes it's important to require companies to program out behaviors that could addict young users now.

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care. Artificial intelligence helped ProPublica reporters analyze a database of 'woke' National Science Foundation grants compiled by Sen. Ted Cruz (R-Texas). Two examples swept up in Cruz's dragnet: research studying the evolution of mint plants and a project developing devices to treat severe bleeding.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@ Daniel Payne at dpayne@ Ruth Reader at rreader@ or Erin Schumaker at eschumaker@ Are you a current or former federal worker who wants to share a tip securely? Message us on Signal: CarmenP.82, DanielP.100, RuthReader.02 or ErinSchumaker.01.

FORWARD THINKING

Lawmakers across the political spectrum increasingly argue that health care behemoths need to be broken up.
But at least one company is countering that integration is key to a better patient experience, contending that the myriad problems with the health care system have more to do with its incentives than its size.

'I don't think the problem is just the size of these organizations,' Owen Tripp, co-founder and CEO of Included Health, told Daniel. 'The problem is that none of them actually put the member at the focal point of what care is all about.'

Included Health, an 'all-in-one' model that offers care for tens of millions of patients, is an example of integration working well, Tripp said. Instead of being paid to provide as many services as possible, or to throttle the number of services offered, the company asks a patient-centric question: How can the patient access better care in a way that's affordable? Getting paid to deliver outcomes, not procedures, is a good start, he said, and sometimes the best way to do that is through large, integrated systems that can provide in-person or remote care for many health issues.

Major corporations, among the largest purchasers of care in the country, seem to agree: Nearly a third of the Fortune 100 are clients, Tripp said.

Even so: Critics have pointed to problems that can come with large, integrated health companies, including higher prices and incentives to offer less care. And for years, Washington has been more intent on breaking up health giants than fusing them into new models.

A novel idea for controlling chatbots

Politico, 06-03-2025

Lawmakers have struggled for years to address the risks of social media for kids, largely without success. Now AI chatbots have arrived on the scene, raising a host of new potential issues.

Researchers who have closely followed social media's impact on youth mental health are worried that relational chatbots, the ones that provide companionship, are just as likely to addict kids. In fact, they worry that bots, with their eerily human responses and warm conversational style, could be even more magnetic than current platforms. 'It could do what's very addictive in the research on romantic relationships and heartbreak,' said Jodi Halpern, who teaches bioethics and medical humanities at the University of California, Berkeley. 'It could not call you for a little while and then call.'

Such bots are growing quickly. ChatGPT, an informational chatbot, has 400 million weekly users, with a quarter of teens using it to do their homework. Another, more social-style product that lets users pick or design AI personas to interact with has an audience of about 20 million monthly users.

Already, lawmakers are trying to get their arms around this issue. Hundreds of state bills hope to address some of the concerns with AI, a subset of which focus on chatbots and generative AI. But these bills largely require AI companies to prominently disclose to users that they're talking to a bot.

One bill, California's SB 243, is different: It seeks to protect kids from getting addicted to these platforms by focusing on how addiction happens. The bill was introduced by Democratic California state Sen. Steve Padilla of Chula Vista in January. My colleague in California, Tyler Katzenberger, says the bill, which is being reviewed in the state Senate Judiciary and Health committees, could advance, though it's too soon to assess whether it will make it to Democratic Gov. Gavin Newsom's desk in its current form.
The bill would force companies to limit kids' exposure to AI that rewards users at random intervals to keep them engaged in conversation with chatbots. It's based on the concept that intermittent rewards can drive addiction, a well-known psychological mechanism: When humans know there is the potential for a reward, they'll keep doing an activity that might deliver it, and when the reward comes at irregular intervals, they seek it out more often. Slot machines are designed specifically to do this.

Academics and tech experts, including renowned computer scientist Jaron Lanier, think social media platforms are configured around intermittent rewards, and academics think the same could be true of chatbots. It's not clear whether chat platforms use intermittent rewards, deliberately or otherwise, to keep users coming back for more. But that's why Halpern, who advised on the bill, believes it's important to require companies to program out behaviors that could addict young users.

On social media, people may get addicted to 'likes' on a post or to engaging comments from other users, which come in at random. Chatbots, the argument goes, might addict people through the intermittent reward, or absence, of perceived human connection. 'When people really pay attention to you and know a lot about you, and show a lot of interest in your specific things, you're going to want to check in with them more,' said Halpern.

Chatbots are proving to be extremely good at replicating empathy. A study from a couple of years ago found that chatbots have better bedside manners than doctors; a more recent one found that AI bots were rated as more compassionate than human responders. If that compassion is weaponized, given and then taken away unpredictably, it could prove addictive. The California bill hopes to get ahead of this problem by limiting the use of intermittent rewards.
It would also require companies to file an annual report to the state Department of Health Care Services about suicidal behavior among minors on their platforms. The bill is not prescriptive about how companies go about stopping AI from offering intermittent rewards, so long as they 'take reasonable steps' to do so. In part, that is a programming question: whether you can eliminate a quality of chatbots that might arise automatically from how they work.

But the bill also faces a legal question: Can you require a bot not to deliver toxic content? The answer might be yes, but it depends on whether lawmakers, and society, believe chatbots are products or people. One key reason lawmakers have not been able to regulate social media is its entwinement with speech. Tech platforms have effectively made the case that regulating social media amounts to infringing on users' free speech. Lawyers have struggled to make the case that social media and its content-serving algorithms are instead actually a product, one that is causing a lot of harm, and that the platforms are therefore subject to product liability laws. This is still playing out in lawsuits, but the argument has not yet convinced the courts. So far, even the algorithms that serve up tailored content to keep users online are considered a sort of extension of speech.

Courts may feel differently about bots than they do about social media, because software is generating the speech rather than humans. As for whether it's possible to code out intermittent rewards: If humans can learn not to use toxic behavior in their relationships, we should be able to train AI to do the same. Tech executives famously love a challenge.

DOGE IS STRESS-TESTING A WATERGATE-INSPIRED LAW

The battle over the future of government access to millions of Americans' data is happening in the courts right now, as the Privacy Act of 1974 faces its biggest challenge yet, POLITICO's Alfred Ng reports. The law, meant to protect sensitive information collected by government agencies, is at the center of at least a dozen lawsuits against the Trump administration. It could be months before courts decide, but the outcome could shape how future administrations obtain data across the government. Groups from student associations to labor unions and state attorneys general argue the Trump administration is violating the Privacy Act by sharing people's data with Elon Musk's Department of Government Efficiency, while the administration contends it's abiding by the law because political appointees have the same authorization as government employees.

CRYPTO vs. CRYPTO IN D.C.

Infighting is jeopardizing the crypto community's chances of making inroads in Washington, POLITICO's Victoria Guida reports in her column Capital Letter. On paper, now seems like the best time to pass crypto-friendly legislation, with several industry-friendly lawmakers in Congress, but the industry has fractured over which laws should be tackled. Policy proposals for stablecoins, a type of cryptocurrency tied to the value of the U.S. dollar, have sparked accusations within the industry that draft legislation favors certain companies and threatens to drive competitors out of business.

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Derek Robertson (drobertson@ Mohar Chatterjee (mchatterjee@ Steve Heuser (sheuser@ Nate Robson (nrobson@ Daniella Cheslow (dcheslow@ and Christine Mui (cmui@
