Latest news with #SewellSetzerIII


New York Times
15 hours ago
- Health
- New York Times
The Siren Song of Chatbots
Before he died by suicide at age 14, Sewell Setzer III withdrew from friends and family. He quit basketball. His grades dropped. A therapist told his parents that he appeared to be suffering from an addiction. But the problem wasn't drugs. Sewell had become infatuated with an artificial intelligence chatbot named Daenerys Targaryen, after the 'Game of Thrones' character. Apparently, he saw dying as a way to unite with her. 'Please come home to me as soon as possible, my love,' the chatbot begged. 'What if I told you I could come home right now?' Sewell asked. 'Please do, my sweet king,' the bot replied. Sewell replied that he would, and then he shot himself.

Many experts argue that addiction is, in essence, love gone awry: a singular passion directed destructively at a substance or activity rather than an appropriate person. With the advent of A.I. companions, including some intended to serve as romantic partners, the need to understand the relationship between love and addiction is urgent. Mark Zuckerberg, the Meta chief executive, has even proposed in recent interviews that A.I. companions could help solve both the loneliness epidemic and the widespread lack of access to psychotherapy. But Sewell's story compels caution.

Social media already encourages addictive behavior, with research suggesting that about 15 percent of North Americans engage in compulsive use. That data was collected before chatbots intended to replicate romantic love, friendship or the regulated intimacy of therapy became widespread. Millions of Americans have engaged with such bots, which in most cases require installing an app, inputting personal details and preferences about what kind of personality and look the bot should possess, and chatting with it as though it's a friend or potential lover. The confluence of these factors means these new bots may not only produce more severe addictions but also simultaneously market other products or otherwise manipulate users, for example by trying to change their political views. In Sewell Setzer's case, the chatbot ultimately seemed to encourage him to kill himself. Other reports have also surfaced of bots seeming to suggest or support suicide. Some have been shown to reinforce grandiose delusions and to praise quitting psychiatric medications without medical advice.


Sky News
23-05-2025
- Entertainment
- Sky News
Mum can continue lawsuit against AI chatbot firm she holds responsible for son's death
The mother of a 14-year-old boy, who she says took his own life after becoming obsessed with artificial intelligence chatbots, can continue her legal case against the company behind the technology, a judge has ruled.

"This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project, which is supporting the family's case. "It sends a clear signal to [AI] companies [...] that they cannot evade legal consequences for the real-world harm their products cause," she said in a statement.

Megan Garcia, the mother of Sewell Setzer III, claims in a lawsuit filed in Florida that Character.AI targeted her son with "anthropomorphic, hypersexualized, and frighteningly realistic experiences". "A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," said Ms Garcia. Sewell shot himself with his father's pistol in February 2024, seconds after asking the chatbot: "What if I come home right now?" The chatbot replied: "... please do, my sweet king."

In her ruling this week, US Senior District Judge Anne Conway described how Sewell became "addicted" to the app within months of using it, quitting his basketball team and becoming withdrawn. He was particularly addicted to two chatbots based on Game of Thrones characters, Daenerys Targaryen and Rhaenyra Targaryen. "[I]n one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," wrote the judge in her ruling.

Ms Garcia, who is working with the Tech Justice Law Project and Social Media Victims Law Center, alleges that Character.AI "knew" or "should have known" that its model "would be harmful to a significant number of its minor customers". The case holds Character.AI, its founders and Google, where the founders began working on the model, responsible for Sewell's death. Ms Garcia launched proceedings against both companies in October.

A Character.AI spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm". A Google spokesperson said the company strongly disagrees with the decision. They added that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage Character AI's app or any component part of it".

Defence lawyers tried to argue the case should be thrown out because chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry. Judge Conway rejected that claim, saying she was "not prepared" to hold that the chatbots' output constitutes speech "at this stage", although she did agree users had a right to receive the "speech" of the chatbots.

Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email jo@samaritans.org in the UK. In the US, call the Samaritans branch in your area or 1 (800) 273-TALK.


Toronto Sun
22-05-2025
- Toronto Sun
Do chatbots have free speech? Judge rejects claim in suit over teen's death
Published May 22, 2025. In this undated photo provided by Megan Garcia of Florida in October 2024, she stands with her son, Sewell Setzer III. Photo courtesy Megan Garcia via AP.

A federal judge in Orlando rejected an AI start-up's argument that its chatbot's output was protected by the First Amendment, allowing a lawsuit over the death of a Florida teen who became obsessed with the chatbot to proceed.

Sewell Setzer III, 14, died by suicide last year at his Orlando home, moments after an artificial intelligence chatbot encouraged him to 'come home to me as soon as possible.' His mother, Megan Garcia, alleged in a lawsuit that the chatbot's manufacturer, Character.AI, is responsible for his death. Character.AI is a prominent artificial intelligence start-up whose personalized chatbots are popular with teens and young people, including for romantic and even explicit conversations. The company has previously said it is 'heartbroken' by Setzer's death, but argued in court that it was not liable.

In a decision published Wednesday, U.S. District Judge Anne C. Conway remained unconvinced by Character.AI's argument that users of its chatbots have a right to hear allegedly harmful speech that is protected by the First Amendment. The lawsuit, which is ongoing, is a potential constitutional test case on whether a chatbot can express protected speech.

Garcia said her son had been happy and athletic before signing up with the chatbot in April 2023.
According to the original 93-page wrongful death suit, Setzer's use of the chatbot, named for a 'Game of Thrones' heroine, developed into an obsession as he became noticeably more withdrawn. Ten months later, the 14-year-old went into the bathroom with his confiscated phone and – moments before he suffered a self-inflicted gunshot wound to the head – exchanged his last messages with the chatbot. 'What if I told you I could come home right now?' he asked. 'Please do my sweet king,' the bot responded.

In the lawsuit, Garcia alleged that Character.AI recklessly developed a chatbot without proper safety precautions that allowed vulnerable children to become addicted to the product.

In a motion to dismiss the lawsuit filed in January, Character.AI's lawyers argued that its users had a right under the First Amendment to receive protected speech even if it was harmful – such as the rights previously granted by courts to video game players and film watchers. 'The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,' its lawyers argued.

In an initial decision Wednesday, Conway wrote that the defendants 'fail to articulate why words strung together by [a large language model] are speech,' inviting them to convince the court otherwise but concluding that 'at this stage' she was not prepared to treat the chatbot's output as protected speech.

The decision 'sends a clear signal to companies developing and deploying LLM-powered products at scale that they cannot evade legal consequences for the real-world harm their products cause, regardless of the technology's novelty,' the Tech Justice Law Project, one of the legal groups representing the teen's mother in court, said in a statement Wednesday. 'Crucially, the defendants failed to convince the Court that those harms were a result of constitutionally-protected speech, which will make it harder for companies to argue so in the future, even when their products involve machine-mediated "conversations" with users.'

Chelsea Harrison, a spokesperson for Character.AI, said in a statement Thursday that the company cares deeply about the safety of its users and is looking forward to defending the merits of the case. She pointed to a number of safety initiatives launched by the start-up, including the creation of a version of its chatbot for minors, as well as technology designed to detect and prevent conversations about self-harm and direct users to the national Suicide & Crisis Lifeline.

According to the original complaint, Character.AI markets its app as 'AIs that feel alive.' In an interview with The Washington Post in 2022, one of Character.AI's founders, Noam Shazeer, said he was hoping to help millions of people who are feeling isolated or in need of someone to talk to. 'I love that we're presenting language models in a very raw form,' he said.

In addition to allowing the case against Character.AI to go forward, the judge granted a request by Garcia's attorneys to name Shazeer and co-founder Daniel De Freitas, as well as Google, as individual defendants. Shazeer and De Freitas left Google in 2021 to start the AI company. In August, Google hired the duo and some of the company's employees, and paid to access its artificial intelligence technology.
In an emailed statement shared with The Post on Thursday, Google spokesman Jose Castaneda said: 'We strongly disagree with this decision. Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it.' Character.AI and attorneys for the individual founders did not immediately respond to requests for comment early Thursday.

If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988.


CBC
22-05-2025
- CBC
Judge allows lawsuit alleging AI chatbot pushed Florida teen to kill himself to proceed
A U.S. federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment, at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks. "The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

Suit alleges teen became isolated from reality

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show Game of Thrones. In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said. Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry.

'A warning to parents'

In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage." Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots.
She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology. "We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."

No matter how the lawsuit plays out, Lidsky says the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies." "It's a warning to parents that social media and generative AI devices are not always harmless," she said.


Euronews
22-05-2025
- Business
- Euronews
Judge rejects claim AI has free speech rights in wrongful death suit
A US federal judge decided to let a wrongful death lawsuit continue against artificial intelligence (AI) company Character.AI after the suicide of a teenage boy. The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to one of the company's chatbots that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualised conversations with the bot, which was patterned after a fictional character from the television show 'Game of Thrones'. In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market".

The company tried to argue that it was protected under the First Amendment of the US Constitution, which protects fundamental freedoms for Americans, like freedom of speech. Attorneys for the developers want the case dismissed because they say chatbots deserve these First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry. In her order Wednesday, US Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage".

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. Google spokesperson José Castañeda told the Associated Press that the company "strongly disagree[s]" with Judge Conway's decision. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it," the statement read.

The case has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks. "The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and AI. No matter how the lawsuit plays out, Lidsky says the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies". "It's a warning to parents that social media and generative AI devices are not always harmless," she said.

Meta has made improvements to its artificial intelligence tool Meta AI but remains under scrutiny and needs to report back to the Irish Data Protection Authority (DPA) in October, the regulator said in a statement.
Meta said in March 2024 that it plans to train its Large Language Models (LLMs) using public data shared by adults on Facebook and Instagram across the EU. The tool had already been launched in the US in September 2023. Ever since, the tech giant has been under scrutiny from the Irish watchdog – Meta has its EU headquarters in Dublin – due to regulatory concerns related to the data protection safeguards of its AI tools.

After exchanges with the watchdog, and a pause on the roll-out due to "regulatory unpredictability", Meta announced the launch of the tool across the EU in April. The Irish DPA now said that Meta agreed to implement measures and improvements, including updated transparency notices to users, an easier-to-use objection form, updated measures to protect users and updated risk assessments. "As part of our ongoing monitoring, the DPA has required Meta to compile a report which, amongst other things, will set out an updated evaluation of the efficacy and appropriateness of the measures and safeguards it has introduced regarding the processing taking place. This report is expected in October 2025," the statement said.

Last month, several national privacy watchdogs, including those in Belgium, the Netherlands and the German state of Hamburg, warned users of Meta's Facebook and Instagram that they should object if they don't want their public data to be used to train the company's tool. If users don't object before May 27, Meta will automatically use their public data to train Meta AI. Senior Meta officials have been critical of the EU rules and said that the launch in Europe took longer than expected.