
Judge allows lawsuit alleging AI chatbot pushed Florida teen to kill himself to proceed
A U.S. federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now.
The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.
Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."
The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.
"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.
WATCH | A warning about AI voice scams:
Manitoba woman warns of AI voice scams
A Manitoba woman is speaking up after getting a phone call she said was an AI scam impersonating a loved one's voice. One expert says the use of artificial intelligence by fraudsters is the latest development in phone scams.
Suit alleges teen became isolated from reality
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show Game of Thrones.
In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.
In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.
"We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.
Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry.
'A warning to parents'
In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage."
Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots.
She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology.
"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character.AI are entirely separate, and Google did not create, design, or manage Character.AI's app or any component part of it."
No matter how the lawsuit plays out, Lidsky says the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies."
"It's a warning to parents that social media and generative AI devices are not always harmless," she said.