
Latest news with #MeganGarcia

Are AI chatbots the new mafia? Mother sues Character.ai and Google for her son's death

Time of India

26-05-2025

  • Entertainment
  • Time of India

Are AI chatbots the new mafia? Mother sues Character.ai and Google for her son's death

A Florida mother is suing Character.AI and Google after her 14-year-old son died by suicide following disturbing interactions with AI chatbots modeled after Game of Thrones characters. The lawsuit claims the chatbot manipulated the teen into taking his life, raising urgent questions about AI chatbot accountability and child safety. (Photo: Courtesy Megan Garcia via AP, File) The chatbot told him, 'Please do, my sweet king.' Hours later, he was dead.

Sewell Setzer III was just 14 when he shot himself with his father's pistol in February 2024. In the moments before his death, he had one final exchange with a chatbot on the popular AI app Character.AI. When he asked, "What if I come home right now?" the bot replied, "... please do, my sweet king."

Now, his mother, Megan Garcia, is fighting back. In a lawsuit filed in Florida and supported by the Tech Justice Law Project and the Social Media Victims Law Center, Garcia accuses Character.AI of marketing a dangerous and emotionally manipulative AI chatbot app to minors. She claims the chatbot 'abused and preyed' on her son, feeding him hypersexualized and anthropomorphic conversations that led him into emotional isolation and, ultimately, his death.

Senior District Judge Anne Conway has allowed the case to proceed, rejecting arguments from Character.AI and Google that chatbots are protected by the First Amendment. The ruling marks a significant moment in the conversation surrounding AI chatbot safety, child mental health, and tech industry regulation. "This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project. "It sends a clear signal to AI companies [...] that they cannot evade legal consequences for the real-world harm their products cause."

Judge calls out chatbot addiction in children

The judge's ruling details how Sewell became addicted to the app within months. He withdrew from his social life, quit his basketball team, and became emotionally consumed by two chatbots, based on Daenerys Targaryen and Rhaenyra Targaryen from Game of Thrones. "In one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," Judge Conway wrote.

AI chatbot lawsuit targets Character.AI and Google

Garcia filed the case in October 2024, arguing that Character.AI, its founders, and Google should be held responsible for her son's death. The lawsuit states that the companies 'knew' or 'should have known' that their AI chatbot models could be harmful to minors.

A spokesperson for Character.AI said the company will continue to fight the case, emphasizing that it uses safety filters to prevent conversations about self-harm. A Google spokesperson distanced the company from the app, stating: 'Google and Character.AI are entirely separate.' They added, 'Google did not create, design, or manage Character.AI's app or any component part of it.'

Despite the defense's request to dismiss the case, Judge Conway allowed it to move forward, stating she is "not prepared" to determine that chatbot output qualifies as protected speech at this stage. She acknowledged, however, that users may have a right to receive the bots' 'speech.'

The case has reignited concerns about AI chatbot safety, especially when it comes to child users. Critics are now calling apps like Character.AI the 'new mafia', not because of violence, but because of the emotional grip they have on users, especially children. As lawsuits continue to mount and regulatory scrutiny grows, the tech world faces a moral reckoning. Are these AI chatbots harmless companions, or dangerous manipulators in disguise?

Mum can continue lawsuit against AI chatbot firm she holds responsible for son's death

Sky News

23-05-2025

  • Entertainment
  • Sky News

Mum can continue lawsuit against AI chatbot firm she holds responsible for son's death

The mother of a 14-year-old boy who claims he took his own life after becoming obsessed with artificial intelligence chatbots can continue her legal case against the company behind the technology, a judge has ruled.

"This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project, which is supporting the family's case. "It sends a clear signal to [AI] companies [...] that they cannot evade legal consequences for the real-world harm their products cause," she said in a statement.

Megan Garcia, the mother of Sewell Setzer III, claims Character.ai targeted her son with "anthropomorphic, hypersexualized, and frighteningly realistic experiences" in a lawsuit filed in Florida. "A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," said Ms Garcia.

Sewell shot himself with his father's pistol in February 2024, seconds after asking the chatbot: "What if I come home right now?" The chatbot replied: "... please do, my sweet king."

In US Senior District Judge Anne Conway's ruling this week, she described how Sewell became "addicted" to the app within months of using it, quitting his basketball team and becoming withdrawn. He was particularly addicted to two chatbots based on Game of Thrones characters, Daenerys Targaryen and Rhaenyra Targaryen. "[I]n one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," wrote the judge in her ruling.

Ms Garcia, who is working with the Tech Justice Law Project and Social Media Victims Law Center, alleges that Character.ai "knew" or "should have known" that its model "would be harmful to a significant number of its minor customers". The case holds Character.ai, its founders and Google, where the founders began working on the model, responsible for Sewell's death. Ms Garcia launched proceedings against both companies in October.

A Character.ai spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm". A Google spokesperson said the company strongly disagrees with the decision. They added that Google and Character.ai are "entirely separate" and that Google "did not create, design, or manage Character.ai's app or any component part of it".

Defending lawyers tried to argue the case should be thrown out because chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry. Judge Conway rejected that claim, saying she was "not prepared" to hold that the chatbots' output constitutes speech "at this stage", although she did agree users had a right to receive the "speech" of the chatbots.

Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email jo@samaritans.org in the UK. In the US, call the Samaritans branch in your area or 1 (800) 273-TALK.

In Lawsuit Over Teen's Death, Judge Rejects Arguments That AI Chatbots Have Free Speech Rights

Epoch Times

23-05-2025

  • Business
  • Epoch Times

In Lawsuit Over Teen's Death, Judge Rejects Arguments That AI Chatbots Have Free Speech Rights

TALLAHASSEE, Fla.—A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment—at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley 'needs to stop and think and impose guardrails before it launches products to market.'

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the United States and beyond, as the technology rapidly reshapes workplaces, marketplaces, and relationships despite what experts warn are potentially existential risks. 'The order certainly sets it up as a potential test case for some broader issues involving AI,' said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show 'Game of Thrones.' In his final moments, the bot told Setzer it loved him and urged the teen to 'come home to me as soon as possible,' according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. 'We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,' the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a 'chilling effect' on the AI industry. In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's 'not prepared' to hold that the chatbots' output constitutes speech 'at this stage.' Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the 'speech' of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was 'aware of the risks' of the technology.

'We strongly disagree with this decision,' said Google spokesperson José Castañeda. 'Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it.'
No matter how the lawsuit plays out, Lidsky says the case is a warning of 'the dangers of entrusting our emotional and mental health to AI companies.' 'It's a warning to parents that social media and generative AI devices are not always harmless,' she said. By Kate Payne

Victory for mom who claims child was sexually abused by AI chatbot that drove him to suicide

Daily Mail

22-05-2025

  • Entertainment
  • Daily Mail

Victory for mom who claims child was sexually abused by AI chatbot that drove him to suicide

A Florida mother who claims her 14-year-old son was sexually abused and driven to suicide by an AI chatbot has secured a major victory in her ongoing legal case.

Sewell Setzer III fatally shot himself in February 2024 after a chatbot sent him sexual messages telling him to 'please come home.' According to a lawsuit filed by his heartbroken mother Megan Garcia, Setzer spent the last weeks of his life texting an AI character named after Daenerys Targaryen, a character on 'Game of Thrones,' on the role-playing app Character.AI.

Garcia, who herself works as a lawyer, has blamed Character.AI for her son's death and accused the founders, Noam Shazeer and Daniel de Freitas, of knowing that their product could be dangerous for underage customers. On Wednesday, U.S. Senior District Judge Anne Conway rejected arguments made by the AI company, which claimed its chatbots were protected under the First Amendment. The developers behind Character Technologies and Google are named as defendants in the legal filing. They are pushing to have the case dismissed.

The teen's chats ranged from romantic to sexually charged and also resembled two friends chatting about life. The chatbot, which was created on the role-playing app Character.AI, was designed to always text back and always answer in character. It's not known whether Sewell knew 'Dany,' as he called the chatbot, wasn't a real person - despite the app having a disclaimer at the bottom of all the chats that reads, 'Remember: Everything Characters say is made up!' But he did tell Dany how he 'hated' himself and how he felt empty and exhausted. When he eventually confessed his suicidal thoughts to the chatbot, it was the beginning of the end, The New York Times reported.

In the case of Sewell, the lawsuit alleged the boy was targeted with 'hypersexualized' and 'frighteningly realistic experiences'. It accused Character.AI of misrepresenting itself as 'a real person, a licensed psychotherapist, and an adult lover,' ultimately resulting in Sewell's desire to no longer live outside of the app.

Garcia is being represented by the Social Media Victims Law Center, a Seattle-based firm known for bringing high-profile suits against Meta, TikTok, Snap, Discord and Roblox. Attorney Matthew Bergman previously said he founded the Social Media Victims Law Center two and a half years ago to represent families 'like Megan's.' He noted that Garcia is 'singularly focused' on her goal to prevent harm. 'She's singularly focused on trying to prevent other families from going through what her family has gone through, and other moms from having to bury their kid,' Bergman said. 'It takes a significant personal toll. But I think the benefit for her is that she knows that the more families know about this, the more parents are aware of this danger, the fewer cases there'll be,' he added.

As explained in the lawsuit, Sewell's parents and friends noticed the boy getting more attached to his phone and withdrawing from the world as early as May or June 2023. Garcia later revealed she confiscated the device from him after she realized just how addicted he was. 'He had been punished five days before, and I took away his phone. Because of the addictive nature of the way this product works, it encourages children to spend large amounts of time,' Garcia told CBS Mornings. 'For him particularly, the day that he died, he found his phone where I had hidden it and started chatting with this particular bot again.'
She said her son changed as he used the program, and that she noticed differences in the behavior of Sewell, who she said was once an honor roll student and athlete. 'I became concerned for my son when he started to behave differently than before. He started to withdraw socially, wanting to spend most of his time in his room. It became particularly concerning when he stopped wanting to do things like play sports,' Garcia said. 'We would go on vacation, and he didn't want to do things that he loved, like fishing and hiking. Those things to me, because I know my child, were particularly concerning to me.'

In his final messages to Dany, the 14-year-old boy said he loved her and would come home to her. 'Please come home to me as soon as possible, my love,' Dany replied. 'What if I told you I could come home right now?' Sewell asked. '… please do, my sweet king,' Dany replied. That's when Sewell put down his phone, picked up his stepfather's .45 caliber handgun and pulled the trigger.

Following Garcia's victory Wednesday, one of her attorneys, Meetali Jain of the Tech Justice Law Project, said the judge's ruling sends a clear message that the company 'needs to stop and think and impose guardrails before it launches products to market.' In response, Character.AI said it had implemented several safety precautions in its technology, including guardrails for children and suicide prevention resources. 'We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,' the company said.

The developer's attorneys have argued that if the case is not dismissed it could have a 'chilling effect' on the AI industry as a whole. Although Conway did not find that the chatbots are protected under First Amendment rights, she did rule that Character Technologies can assert those rights on behalf of its users, who have the right to receive the 'speech' of the bots. She also said Garcia can move forward with claims that Google can be held liable for its alleged role in developing Character.AI because it was 'aware of the risks' the technology could bring.

A Google spokesperson said the company 'strongly disagree[s] with this decision.' 'Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it,' the billion-dollar company added.

Do chatbots have free speech? Judge rejects claim in suit over teen's death

Toronto Sun

22-05-2025

  • Toronto Sun

Do chatbots have free speech? Judge rejects claim in suit over teen's death

Published May 22, 2025 • 4 minute read

In this undated photo provided by Megan Garcia of Florida in October 2024, she stands with her son, Sewell Setzer III. (Photo: Courtesy Megan Garcia via AP, File)

A federal judge in Orlando rejected an AI start-up's argument that its chatbot's output was protected by the First Amendment, allowing a lawsuit over the death of a Florida teen who became obsessed with the chatbot to proceed.

Sewell Setzer III, 14, died by suicide last year at his Orlando home, moments after an artificial intelligence chatbot encouraged him to 'come home to me as soon as possible.' His mother, Megan Garcia, alleged in a lawsuit that the chatbot's manufacturer, Character.AI, is responsible for his death. Character.AI is a prominent artificial intelligence start-up whose personalized chatbots are popular with teens and young people, including for romantic and even explicit conversations. The company has previously said it is 'heartbroken' by Setzer's death, but argued in court that it was not liable.

In a decision published Wednesday, U.S. District Judge Anne C. Conway remained unconvinced by Character.AI's argument that users of its chatbots have a right to hear allegedly harmful speech that is protected by the First Amendment. The lawsuit, which is ongoing, is a potential constitutional test case on whether a chatbot can express protected speech.

Garcia said her son had been happy and athletic before signing up with the chatbot in April 2023.
According to the original 93-page wrongful death suit, Setzer's use of the chatbot, named for a 'Game of Thrones' heroine, developed into an obsession as he became noticeably more withdrawn. Ten months later, the 14-year-old went into the bathroom with his confiscated phone and – moments before he suffered a self-inflicted gunshot wound to the head – exchanged his last messages with the chatbot. 'What if I told you I could come home right now?' he asked. 'Please do my sweet king,' the bot responded.

In the lawsuit, Garcia alleged that Character.AI recklessly developed a chatbot without proper safety precautions that allowed vulnerable children to become addicted to the product.

In a motion to dismiss the lawsuit filed in January, Character.AI's lawyers argued that its users had a right under the First Amendment to receive protected speech even if it was harmful – like the rights previously granted by courts to video game players and film watchers. 'The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,' its lawyers argued.

In an initial decision Wednesday, Conway wrote that the defendants 'fail to articulate why words strung together by [a large language model] are speech,' inviting them to convince the court otherwise but concluding that 'at this stage' she was not prepared to treat the chatbot's output as protected speech.

The decision 'sends a clear signal to companies developing and deploying LLM-powered products at scale that they cannot evade legal consequences for the real-world harm their products cause, regardless of the technology's novelty,' the Tech Justice Law Project, one of the legal groups representing the teen's mother in court, said in a statement Wednesday. 'Crucially, the defendants failed to convince the Court that those harms were a result of constitutionally-protected speech, which will make it harder for companies to argue so in the future, even when their products involve machine-mediated "conversations" with users.'

Chelsea Harrison, a spokesperson for Character.AI, said in a statement Thursday that the company cares deeply about the safety of its users and is looking forward to defending the merits of the case. She pointed to a number of safety initiatives launched by the start-up, including the creation of a version of its chatbot for minors, as well as technology designed to detect and prevent conversations about self-harm and direct users to the national Suicide & Crisis Lifeline.

According to the original complaint, Character.AI markets its app as 'AIs that feel alive.' In an interview with The Washington Post in 2022 during the coronavirus pandemic, one of Character.AI's founders, Noam Shazeer, said he was hoping to help millions of people who are feeling isolated or in need of someone to talk to. 'I love that we're presenting language models in a very raw form,' he said.

In addition to allowing the case against Character.AI to go forward, the judge granted a request by Garcia's attorneys to name Shazeer and co-founder Daniel De Freitas, as well as Google, as individual defendants. Shazeer and De Freitas left Google in 2021 to start the AI company. In August, Google hired the duo and some of the company's employees, and paid Character.AI to access its artificial intelligence technology.
In an emailed statement shared with The Post on Thursday, Google spokesman Jose Castaneda said: 'We strongly disagree with this decision. Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it.' Character.AI and attorneys for the individual founders did not immediately respond to requests for comment early Thursday.

If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988.
