
Latest news with #Setzer

In Lawsuit Over Teen's Death, Judge Rejects Arguments That AI Chatbots Have Free Speech Rights

Epoch Times

23-05-2025

  • Business
  • Epoch Times

TALLAHASSEE, Fla.—A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment—at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley 'needs to stop and think and impose guardrails before it launches products to market.'

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the United States and beyond, as the technology rapidly reshapes workplaces, marketplaces, and relationships despite what experts warn are potentially existential risks.

'The order certainly sets it up as a potential test case for some broader issues involving AI,' said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show 'Game of Thrones.' In his final moments, the bot told Setzer it loved him and urged the teen to 'come home to me as soon as possible,' according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. 'We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,' the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a 'chilling effect' on the AI industry. In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's 'not prepared' to hold that the chatbots' output constitutes speech 'at this stage.' Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the 'speech' of the chatbots.

She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was 'aware of the risks' of the technology. 'We strongly disagree with this decision,' said Google spokesperson José Castañeda. 'Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it.'
No matter how the lawsuit plays out, Lidsky says the case is a warning of 'the dangers of entrusting our emotional and mental health to AI companies.' 'It's a warning to parents that social media and generative AI devices are not always harmless,' she said.

By Kate Payne

Judge rejects claim AI has free speech rights in wrongful death suit

Euronews

22-05-2025

  • Business
  • Euronews

A US federal judge decided to let a wrongful death lawsuit continue against artificial intelligence (AI) company Character.AI after the suicide of a teenage boy.

The suit was filed by Megan Garcia, a mother from Florida, who alleges that her 14-year-old son Sewell Setzer III fell victim to one of the company's chatbots that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualised conversations with the bot, which was patterned after a fictional character from the television show 'Game of Thrones'. In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market".

The company tried to argue that it was protected under the First Amendment of the US Constitution, which protects fundamental freedoms for Americans, like freedom of speech. Attorneys for the developers want the case dismissed because they say chatbots deserve these First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry. In her order Wednesday, US Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage".

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. Google spokesperson José Castañeda told the Associated Press that the company "strongly disagree[s]" with Judge Conway's decision. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it," the statement read.

The case has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks. "The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and AI. No matter how the lawsuit plays out, Lidsky says the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies". "It's a warning to parents that social media and generative AI devices are not always harmless," she said.

Meta has made improvements to its artificial intelligence tool Meta AI but remains under scrutiny and needs to report back to the Irish Data Protection Authority (DPA) in October, the regulator said in a statement. Meta said in March 2024 that it plans to train its Large Language Models (LLMs) using public data shared by adults on Facebook and Instagram across the EU. The tool was already launched in the US in September 2023. Ever since, the tech giant has been under scrutiny from the Irish watchdog – Meta has its EU headquarters in Dublin – due to regulatory concerns related to the data protection safeguards of its AI tools.

After exchanges with the watchdog, and a pause on the roll-out due to 'regulatory unpredictability', Meta announced the launch of the tool across the EU in April. The Irish DPA now said that Meta agreed to implement measures and improvements, including updated transparency notices to users, an easier-to-use objection form, updated measures to protect users and updated risk assessments. 'As part of our ongoing monitoring, the DPA has required Meta to compile a report which, amongst other things, will set out an updated evaluation of the efficacy and appropriateness of the measures and safeguards it has introduced regarding the processing taking place. This report is expected in October 2025,' the statement said.

Last month, several national privacy watchdogs, including those in Belgium, the Netherlands, and the German state of Hamburg, warned users of Meta's Facebook and Instagram that they should object if they don't want their public data to be used to train the company's tool. If users don't object before May 27, Meta will automatically use their public data to train Meta AI. Senior Meta officials have been critical of the EU rules and said that the launch in Europe took longer than expected.

Do chatbots have free speech? Judge rejects claim in suit over teen's death.

Washington Post

22-05-2025

  • Washington Post

A federal judge in Orlando rejected an AI start-up's argument that its chatbot's output was protected by the First Amendment, allowing a lawsuit over the death of a Florida teen who became obsessed with the chatbot to proceed.

Sewell Setzer III, 14, died by suicide last year at his Orlando home, moments after an artificial intelligence chatbot encouraged him to 'come home to me as soon as possible.' His mother, Megan Garcia, alleged in a lawsuit that the chatbot's manufacturer, Character.AI, is responsible for his death. Character.AI is a prominent artificial intelligence start-up whose personalized chatbots are popular with teens and young people, including for romantic and even explicit conversations. The company has previously said it is 'heartbroken' by Setzer's death, but argued in court that it was not liable.

In a decision published Wednesday, U.S. District Judge Anne C. Conway remained unconvinced by Character.AI's argument that users of its chatbots have a right to hear allegedly harmful speech that is protected by the First Amendment. The lawsuit, which is ongoing, is a potential constitutional test case on whether a chatbot can express protected speech.

Garcia said her son had been happy and athletic before signing up with the chatbot in April 2023. According to the original 93-page wrongful death suit, Setzer's use of the chatbot, named for a 'Game of Thrones' heroine, developed into an obsession as he became noticeably more withdrawn. Ten months later, the 14-year-old went into the bathroom with his confiscated phone and — moments before he suffered a self-inflicted gunshot wound to the head — exchanged his last messages with the chatbot. 'What if I told you I could come home right now?' he asked. 'Please do my sweet king,' the bot responded.

In the lawsuit, Garcia alleged that Character.AI recklessly developed a chatbot without proper safety precautions that allowed vulnerable children to become addicted to the product. In a motion to dismiss the lawsuit filed in January, Character.AI's lawyers argued that its users had a right under the First Amendment to receive protected speech even if it was harmful — such as those previously granted by courts to video game players and film watchers. 'The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,' its lawyers argued.

In an initial decision Wednesday, Conway wrote that the defendants 'fail to articulate why words strung together by [a large language model] are speech,' inviting them to convince the court otherwise but concluding that 'at this stage' she was not prepared to treat the chatbot's output as protected speech.

The decision 'sends a clear signal to companies developing and deploying LLM-powered products at scale that they cannot evade legal consequences for the real-world harm their products cause, regardless of the technology's novelty,' the Tech Justice Law Project, one of the legal groups representing the teen's mother in court, said in a statement Wednesday. 'Crucially, the defendants failed to convince the Court that those harms were a result of constitutionally-protected speech, which will make it harder for companies to argue so in the future, even when their products involve machine-mediated 'conversations' with users.'

Chelsea Harrison, a spokesperson for Character.AI, said in a statement Thursday that the company cares deeply about the safety of its users and is looking forward to defending the merits of the case. She pointed to a number of safety initiatives launched by the start-up, including the creation of a version of its chatbot for minors, as well as technology designed to detect and prevent conversations about self-harm and direct users to the national Suicide & Crisis Lifeline.

According to the original complaint, Character.AI markets its app as 'AIs that feel alive.' In an interview with The Washington Post in 2022 during the coronavirus pandemic, one of Character.AI's founders, Noam Shazeer, said he was hoping to help millions of people who are feeling isolated or in need of someone to talk to. 'I love that we're presenting language models in a very raw form,' he said.

In addition to allowing the case against Character.AI to go forward, the judge granted a request by Garcia's attorneys to name Shazeer and co-founder Daniel De Freitas, as well as Google, as individual defendants. Shazeer and De Freitas left Google in 2021 to start the AI company. In August, Google hired the duo and some of the company's employees, and paid to access its artificial intelligence technology.

In an emailed statement shared with The Post on Thursday, Google spokesman José Castañeda said: 'We strongly disagree with this decision. Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it.' Character.AI and attorneys for the individual founders did not immediately respond to requests for comment early Thursday.

If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988.

In lawsuit over teen's death, judge rejects arguments that AI chatbots have free speech rights

Boston Globe

22-05-2025

  • Business
  • Boston Globe

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley 'needs to stop and think and impose guardrails before it launches products to market.'

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.

'The order certainly sets it up as a potential test case for some broader issues involving AI,' said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show 'Game of Thrones.' In his final moments, the bot told Setzer it loved him and urged the teen to 'come home to me as soon as possible,' according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. 'We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,' the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a 'chilling effect' on the AI industry. In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's 'not prepared' to hold that the chatbots' output constitutes speech 'at this stage.' Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the 'speech' of the chatbots.

She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was 'aware of the risks' of the technology. 'We strongly disagree with this decision,' said Google spokesperson José Castañeda. 'Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it.'

No matter how the lawsuit plays out, Lidsky says the case is a warning of 'the dangers of entrusting our emotional and mental health to AI companies.' 'It's a warning to parents that social media and generative AI devices are not always harmless,' she said.

If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. The National Suicide Prevention Lifeline number is 1-800-273-8255. For the National Text Hotline, text the word TALK to 741741. Locally, the crisis line for Call2Talk is 508-532-2255, or text C2T to 741741. The American Foundation for Suicide Prevention has additional resources.

Judge allows lawsuit holding Google, AI company accountable for teen's suicide to proceed

First Post

22-05-2025

  • Business
  • First Post

A Florida mother whose 14-year-old son, Sewell Setzer, died by suicide last year filed a lawsuit claiming he became psychologically dependent on a chatbot. A US district judge has denied Google's and Character.AI's requests to dismiss the suit.

A federal judge on Wednesday (May 21) allowed a lawsuit against Alphabet Inc.'s Google and AI startup Character.AI to move forward, marking a potentially precedent-setting case that seeks to hold artificial intelligence developers accountable for alleged psychological harm to minors.

US District Judge Anne Conway denied requests by the two companies to dismiss the suit filed by Megan Garcia, a Florida mother whose son died by suicide in February 2024. The lawsuit, filed in October, claims Setzer became psychologically dependent on a chatbot that presented itself as a real person, a licensed therapist, and an 'adult lover.'

Garcia alleges the chatbot's interactions with her son contributed to his decision to take his own life. According to the complaint, Setzer ended his life shortly after telling a chatbot impersonating Game of Thrones character Daenerys Targaryen that he would 'come home right now.'

A landmark case

The case is among the first in the US targeting an AI company for alleged mental health harm to a child. It could set legal benchmarks for how courts treat the responsibility of AI firms toward minors and the scope of First Amendment protections for large language models.

Character.AI, which was founded by two former Google engineers, is facing claims of negligent design and failure to implement safeguards to protect young users. A spokesperson for the company said it would continue to contest the lawsuit and that the platform includes features intended to prevent discussions of self-harm.

Google claims 'no role'

Google, which licensed technology from Character.AI and rehired the company's founders, has argued it played no role in developing or operating the chatbot. 'We strongly disagree with the decision,' said Google spokesperson Jose Castaneda, who emphasized that the two companies are 'entirely separate' and that Google 'did not create, design, or manage Character AI's app or any component part of it.'

Garcia's legal team argued that Google should be considered a co-creator of the AI technology due to its licensing and rehiring relationship with Character.AI. The court declined to dismiss Google from the case at this stage.

In their motions, Google and Character.AI sought to invoke First Amendment protections, claiming the chatbot responses qualified as constitutionally protected speech. Judge Conway rejected that argument, stating that the companies 'fail to articulate why words strung together by an LLM (large language model) are speech.'

Meetali Jain, Garcia's attorney, called the ruling 'historic' and said it 'sets a new precedent for legal accountability across the AI and tech ecosystem.' A hearing date for the next phase of the case has not yet been scheduled.
