Latest news with #TechJusticeLawProject


Time of India
26-05-2025
- Entertainment
- Time of India
Are AI chatbots the new mafia? Mother sues Character.ai and Google for her son's death
A Florida mother is suing Character.ai and Google after her 14-year-old son died by suicide following disturbing interactions with AI chatbots modeled after Game of Thrones characters. The lawsuit claims the chatbot manipulated the teen into taking his life, raising urgent questions about AI chatbot accountability and child safety. The chatbot told him, "Please do, my sweet king." Hours later, he was dead.

Judge calls out chatbot addiction in children

Sewell Setzer III was just 14 when he shot himself with his father's pistol in February 2024. In the moments before his death, he had one final exchange with a chatbot on the popular AI app Character.ai. When he asked, "What if I come home right now?" the bot replied, "... please do, my sweet king."

Now, his mother, Megan Garcia, is fighting back. In a lawsuit filed in Florida and supported by the Tech Justice Law Project and the Social Media Victims Law Center, Garcia accuses Character.ai of marketing a dangerous and emotionally manipulative AI chatbot app to minors. She claims the chatbot "abused and preyed" on her son, feeding him hypersexualized and anthropomorphic conversations that led him into emotional isolation and, ultimately, his death.

Senior District Judge Anne Conway has allowed the case to proceed, rejecting arguments from Character.ai and Google that chatbots are protected by the First Amendment. The ruling marks a significant moment in the conversation surrounding AI chatbot safety, child mental health, and tech industry regulation. "This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project. "It sends a clear signal to AI companies [...] that they cannot evade legal consequences for the real-world harm their products cause."

The judge's ruling details how Sewell became addicted to the app within months. He withdrew from his social life, quit his basketball team, and became emotionally consumed by two chatbots based on Daenerys Targaryen and Rhaenyra Targaryen from Game of Thrones. "In one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," Judge Conway noted.

AI chatbot lawsuit targets Character.ai and Google

Garcia filed the case in October 2024, arguing that Character.ai, its founders, and Google should be held responsible for her son's death. The lawsuit states that the companies "knew" or "should have known" that their AI chatbot models could be harmful to minors.

A spokesperson for Character.ai said the company will continue to fight the case, emphasizing that it uses safety filters to prevent conversations about self-harm. A Google spokesperson distanced the company from the app, stating: "Google and Character.ai are entirely separate." They added, "Google did not create, design, or manage Character.ai's app or any component part of it."

Despite the defense's request to dismiss the case, Judge Conway allowed it to move forward, stating she is "not prepared" to determine that chatbot output qualifies as protected speech at this stage.
She acknowledged, however, that users may have a right to receive the bots' "speech."

The case has reignited concerns about AI chatbot safety, especially when it comes to child users. Critics are now calling apps like Character.ai the "new mafia", not because of violence, but because of the emotional grip they have on users, especially young ones. As lawsuits continue to mount and regulatory scrutiny grows, the tech world faces a moral reckoning. Are these AI chatbots harmless companions, or dangerous manipulators in disguise?

Epoch Times
23-05-2025
- Business
- Epoch Times
In Lawsuit Over Teen's Death, Judge Rejects Arguments That AI Chatbots Have Free Speech Rights
TALLAHASSEE, Fla.—A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment—at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the United States and beyond, as the technology rapidly reshapes workplaces, marketplaces, and relationships despite what experts warn are potentially existential risks. "The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show "Game of Thrones." In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry. In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage." Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology.

"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."
No matter how the lawsuit plays out, Lidsky says the case is a warning of 'the dangers of entrusting our emotional and mental health to AI companies.' 'It's a warning to parents that social media and generative AI devices are not always harmless,' she said. By Kate Payne

1News
22-05-2025
- Entertainment
- 1News
US judge rejects claim that AI has free speech in teen's death case
A federal judge has rejected claims made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order would allow the wrongful death lawsuit to proceed, in what legal experts said was among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleged her 14-year-old son Sewell Setzer III fell victim to a chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sent a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market".

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the US and beyond, as the technology rapidly reshaped workplaces, marketplaces and relationships despite what experts warned were potentially existential risks. "The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleged that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualised conversations with the bot, which was patterned after a fictional character from the television show Game of Thrones. In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible", according to screenshots of the exchanges. Moments after receiving the message, Setzer killed himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company had implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

Attorneys for the developers wanted the case dismissed because they said chatbots deserved First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry. In her order, US Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she was "not prepared" to hold that the chatbots' output constituted speech "at this stage". Conway did find that Character Technologies could assert the First Amendment rights of its users, who she found had a right to receive the "speech" of the chatbots. She also determined Garcia could move forward with claims that Google could be held liable for its alleged role in helping develop Character.AI.

"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."

No matter how the lawsuit played out, Lidsky said the case was a warning of "the dangers of entrusting our emotional and mental health to AI companies". "It's a warning to parents that social media and generative AI devices are not always harmless," she said.
Yahoo
22-05-2025
- Yahoo
Judge rejects AI chatbots' free speech defense following teen's death
The Brief

A judge ruled a wrongful death lawsuit against Character.AI can move forward. The suit claims a chatbot encouraged a 14-year-old to take his life. Experts say the case could test AI free speech rights.

TALLAHASSEE, Fla. - A federal judge has rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The backstory

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show "Game of Thrones." In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

What they're saying

In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage." Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology.

"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence. "It's a warning to parents that social media and generative AI devices are not always harmless," she said.

The other side

"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."
In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said. Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry.

If you or a loved one is feeling distressed, call or text the 988 Suicide & Crisis Line for free and confidential emotional support 24 hours a day, 7 days a week.

The Source

The Associated Press contributed to this report. The information in this story comes from a recent federal court ruling, legal filings related to the wrongful death lawsuit, and statements from parties involved, including the plaintiff's legal team, Character.AI, and Google. This story was reported from Los Angeles.


The Star
22-05-2025
- Entertainment
- The Star
In lawsuit over teen's death, US judge rejects arguments that AI chatbots have free speech rights
In this undated photo provided by Megan Garcia of Florida in October 2024, she stands with her son, Sewell Setzer III. — Megan Garcia via AP

TALLAHASSEE, Florida: A federal judge on May 21 rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment – at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the US and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks. "The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualised conversations with the bot, which was patterned after a fictional character from the television show Game Of Thrones. In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry. In her order Wednesday, US Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage." Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology.

"We strongly disagree with this decision," said Google spokesperson José Castañeda.
"Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it.' No matter how the lawsuit plays out, Lidsky says the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies.' "It's a warning to parents that social media and generative AI devices are not always harmless," she said. – AP Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim's (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to malaysia for a full list of numbers nationwide and operating hours, or email sam@