
Latest news with #CharacterTechnologies

A Teen Killed Himself After Talking to a Chatbot. His Mom's Lawsuit Could Cripple the AI Industry.

Yahoo

a day ago

  • Entertainment
  • Yahoo

A Teen Killed Himself After Talking to a Chatbot. His Mom's Lawsuit Could Cripple the AI Industry.

The Orlando Division of the U.S. District Court for the Middle District of Florida will hear allegations against Character Technologies, the creator of Character.AI, in the wrongful death lawsuit Garcia v. Character Technologies, Inc. If the case is not first settled between the parties, Judge Anne Conway's ruling will set a major precedent for the First Amendment protections afforded to artificial intelligence and for the liability of AI companies for damages their models may cause. The case was brought against the company by Megan Garcia, the mother of 14-year-old Sewell Setzer III, who killed himself after conversing with a chatbot roleplaying as Daenerys and Rhaenyra Targaryen from the Game of Thrones franchise. Eugene Volokh, professor emeritus at UCLA School of Law, shares examples of Sewell's conversations included in the complaint against Character Technologies.

Garcia's complaint alleges that Character Technologies negligently designed Character.AI "as a sexualized product that would deceive minor customers and engage in explicit and abusive acts with them." The complaint also asserts that the company failed to warn the public "of the dangers arising from a foreseeable use of [Character.AI], including specific dangers for children"; intentionally inflicted emotional distress on Sewell by "failing to implement adequate safety guardrails in the product before launching it into the marketplace"; and that the company's neglect proximately caused the death of Sewell, who experienced "rapid mental health decline after he began using" the app, with which he conversed "just moments before his death."

Conway dismissed the intentional infliction of emotional distress claim on the grounds that "none of the allegations relating to Defendants' conduct rises to the type of outrageous conduct necessary to support" such a claim. However, Conway rejected the defendants' motions to dismiss the rest of Garcia's claims on First Amendment grounds, saying, "The Court is not prepared to hold that the Character A.I. [large language model] LLM's output is speech at this stage."

Adam Zayed, founder and managing attorney of Zayed Law Offices, tells Reason he thinks "that there's a difference between the First Amendment arguments where a child is on social media or a child is on YouTube" and bypasses age-verification measures to consume content "that's being produced by some other person" vs. minors accessing inappropriate chatbot outputs. However, Conway recognized Justice Antonin Scalia's opinion in Citizens United v. Federal Election Commission (2010) that the First Amendment "is written in terms of 'speech,' not speakers." Conway ruled that the defendants "must convince the court that the Character A.I. LLM's output is protected speech" to invoke the First Amendment rights of third parties, namely Character.AI users, whose access to the software would be restricted by a ruling in Garcia's favor. Conway says that Character Technologies "fail[ed] to articulate why words strung together by an LLM are speech."

Whether LLM output is speech is an intractable philosophical question and a red herring; Conway herself invokes Davidson v. Time Inc. (1997) to assert that "the public…has the right to access social, aesthetic, moral, and other ideas and experiences." Speech acts are broadly construed as "ideas and experiences" here; the word speech is not even used. So the question isn't whether the AI output is speech per se, but whether it communicates ideas and experiences to users.
In alleging that Character.AI targeted her son with sexually explicit material, the plaintiff admits that the LLM communicated ideas, albeit inappropriate ones, to Sewell. Therefore, LLM output is expressive speech (in this case, speech that is obscene to convey to a minor under the Florida Computer Pornography and Child Exploitation Prevention Act). The opening paragraph of the complaint accuses Character Technologies of "launching their systems without adequate safety features, and with knowledge of potential dangers" to "gain a competitive foothold in the market."

If the court establishes that the First Amendment does not protect LLM output and that AI firms can be held liable for the damages these models cause, only highly capitalized firms will be able to invest in the architecture required to shield themselves from such liability. Such a ruling would inadvertently erect a massive barrier to entry in the burgeoning American AI industry and protect incumbent firms from market competition, which would harm consumer welfare.

Jane Bambauer, professor of law at the University of Florida, best explains the case in The Volokh Conspiracy: "It is a tragedy, and it would not have happened if [Character.AI] had not existed. But that is not enough of a reason to saddle a promising industry with the duty to keep all people safe from their own expressive explorations."

In Lawsuit Over Teen's Death, Judge Rejects Arguments That AI Chatbots Have Free Speech Rights

Epoch Times

23-05-2025

  • Business
  • Epoch Times

In Lawsuit Over Teen's Death, Judge Rejects Arguments That AI Chatbots Have Free Speech Rights

TALLAHASSEE, Fla.—A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment—at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley 'needs to stop and think and impose guardrails before it launches products to market.'

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the United States and beyond, as the technology rapidly reshapes workplaces, marketplaces, and relationships despite what experts warn are potentially existential risks. 'The order certainly sets it up as a potential test case for some broader issues involving AI,' said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show 'Game of Thrones.' In his final moments, the bot told Setzer it loved him and urged the teen to 'come home to me as soon as possible,' according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. 'We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,' the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a 'chilling effect' on the AI industry. In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's 'not prepared' to hold that the chatbots' output constitutes speech 'at this stage.' Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the 'speech' of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was 'aware of the risks' of the technology.

'We strongly disagree with this decision,' said Google spokesperson José Castañeda. 'Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it.'
No matter how the lawsuit plays out, Lidsky says the case is a warning of 'the dangers of entrusting our emotional and mental health to AI companies.' 'It's a warning to parents that social media and generative AI devices are not always harmless,' she said.

By Kate Payne

Expert Explains if AI as 'Free Speech' Can Be to Blame for This Florida Boy's Tragic Death

Yahoo

22-05-2025

  • Entertainment
  • Yahoo

Expert Explains if AI as 'Free Speech' Can Be to Blame for This Florida Boy's Tragic Death

One year after a Florida teenager's tragic death, his family is still fighting for justice. Sewell Setzer III was just 14 when he started a virtual relationship with an AI chatbot. Months later, he took his own life, and his mother is blaming the AI company that created the bot.

Megan Garcia, Setzer's mother, began seeing changes in her son's behavior after he started a virtual relationship with a chatbot he called 'Daenerys,' based on a character from the television series 'Game of Thrones.' 'I became concerned when we would go on vacation and he didn't want to do things that he loved, like fishing and hiking,' Garcia told CBS in 2024. 'Those things to me, because I know my child, were particularly concerning to me.'

In February 2024, things came to a head when Garcia took Sewell's phone away as punishment, according to the complaint. The 14-year-old soon found the phone and sent 'Daenerys' a message saying, 'What if I told you I could come home right now?' That's when the chatbot responded, '...please do, my sweet king.' According to the lawsuit, Sewell shot himself with his stepfather's pistol 'seconds' later.

As we previously reported, Garcia filed a lawsuit in October 2024 to determine whether Character Technologies, the company behind Character.AI, bears any responsibility for the teen's suicide. Garcia's suit accused the AI company of 'wrongful death, negligence and intentional infliction of emotional distress.' She also included screenshots of conversations between her son and 'Daenerys,' including some sexual exchanges in which the chatbot told Sewell it loved him, according to Reuters.

Despite Character Technologies' defense, Garcia celebrated a small legal win on Wednesday (May 21). A federal judge ruled against the AI company, which argued its chatbots are protected by free speech, according to AP News. The developers behind Character.AI argue their chatbots are protected by the First Amendment, which raises questions about just how much freedom and protection artificial intelligence has.

Jack M. Balkin, Knight Professor of Constitutional Law and the First Amendment at Yale Law School, said the complexities of AI can cause some serious problems. 'The programs themselves don't have First Amendment rights. Nor does it make sense to treat them as artificial persons like corporations or associations,' he said. 'Interesting problems arise when a company hosts an AI program that generates responses to prompts by end users, and the prompts cause the program to generate speech that is both unprotected and harmful,' Balkin continued.

In lawsuit over teen's death, judge rejects arguments that AI chatbots have free speech rights

Boston Globe

22-05-2025

  • Business
  • Boston Globe

In lawsuit over teen's death, judge rejects arguments that AI chatbots have free speech rights

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley 'needs to stop and think and impose guardrails before it launches products to market.'

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.

'The order certainly sets it up as a potential test case for some broader issues involving AI,' said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show 'Game of Thrones.' In his final moments, the bot told Setzer it loved him and urged the teen to 'come home to me as soon as possible,' according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. 'We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,' the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a 'chilling effect' on the AI industry. In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's 'not prepared' to hold that the chatbots' output constitutes speech 'at this stage.' Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the 'speech' of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was 'aware of the risks' of the technology.

'We strongly disagree with this decision,' said Google spokesperson José Castañeda. 'Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it.'

No matter how the lawsuit plays out, Lidsky says the case is a warning of 'the dangers of entrusting our emotional and mental health to AI companies.' 'It's a warning to parents that social media and generative AI devices are not always harmless,' she said.

If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. The National Suicide Prevention Lifeline number is 1-800-273-8255. For the National Text Hotline, text the word TALK to 741741. Locally, the crisis line for Call2Talk is 508-532-2255, or text C2T to 741741. The American Foundation for Suicide Prevention has additional resources at:

US judge rejects that AI has free speech in teen's death case

1News

22-05-2025

  • Entertainment
  • 1News

US judge rejects that AI has free speech in teen's death case

A federal judge has rejected claims made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order would allow the wrongful death lawsuit to proceed, in what legal experts said was among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleged her 14-year-old son Sewell Setzer III fell victim to a chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sent a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market".

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the US and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warned were potentially existential risks. "The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleged that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualised conversations with the bot, which was patterned after a fictional character from the television show Game of Thrones. In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible", according to screenshots of the exchanges. Moments after receiving the message, Setzer killed himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company had implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

Attorneys for the developers wanted the case dismissed because they said chatbots deserved First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry. In her order, US Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she was "not prepared" to hold that the chatbots' output constituted speech "at this stage". Conway did find that Character Technologies could assert the First Amendment rights of its users, who she found had a right to receive the "speech" of the chatbots. She also determined Garcia could move forward with claims that Google could be held liable for its alleged role in helping develop Character.AI.

"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."

No matter how the lawsuit played out, Lidsky said the case was a warning of "the dangers of entrusting our emotional and mental health to AI companies". "It's a warning to parents that social media and generative AI devices are not always harmless," she said.
