Judge Slaps Down Attempt to Throw Out Lawsuit Claiming AI Caused a 14-Year-Old's Suicide


Yahoo | 22-05-2025

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
A judge in Florida just rejected a motion to dismiss a lawsuit alleging that the chatbot startup Character.AI — and its closely tied benefactor, Google — caused the death by suicide of a 14-year-old user, clearing the way for the first-of-its-kind lawsuit to move forward in court.
The lawsuit, filed in October, claims that recklessly released Character.AI chatbots sexually and emotionally abused a teenage user, Sewell Setzer III, resulting in obsessive use of the platform, mental and emotional suffering, and ultimately his suicide in February 2024.
In January, the defendants in the case — Character.AI, Google, and Character.AI cofounders Noam Shazeer and Daniel de Freitas — filed a motion to dismiss the case mainly on First Amendment grounds, arguing that AI-generated chatbot outputs qualify as speech, and that "allegedly harmful speech, including speech allegedly resulting in suicide," is protected under the First Amendment.
But this argument didn't quite cut it, the judge ruled, at least not at this early stage. In her opinion, presiding US District Judge Anne Conway said the companies failed to sufficiently show that AI-generated outputs produced by large language models (LLMs) are more than simply words, as opposed to speech, which hinges on intent.
The defendants "fail to articulate," Conway wrote in her ruling, "why words strung together by an LLM are speech."
The motion to dismiss did find some success, with Conway dismissing specific claims regarding the alleged "intentional infliction of emotional distress," or IIED. (It's difficult to prove IIED when the person who allegedly suffered it, in this case Setzer, is no longer alive.)
Still, the ruling is a blow to the high-powered Silicon Valley defendants who had sought to have the suit tossed out entirely.
Significantly, Conway's opinion allows Megan Garcia, Setzer's mother and the plaintiff in the case, to sue Character.AI, Google, Shazeer, and de Freitas on product liability grounds. Garcia and her lawyers argue that Character.AI is a product, and that it was rolled out recklessly to the public, teens included, despite known and possibly destructive risks.
Tech companies generally prefer that the law treat their creations as services, like electricity or the internet, rather than as products, like cars or nonstick frying pans. Services aren't subject to product liability claims, including claims of negligence, but products are.
In a statement, Tech Justice Law Project director and founder Meetali Jain, who's co-counsel for Garcia alongside Social Media Victims Law Center founder Matt Bergman, celebrated the ruling as a win — not just for this particular case, but for tech policy advocates writ large.
"With today's ruling, a federal judge recognizes a grieving mother's right to access the courts to hold powerful tech companies — and their developers — accountable for marketing a defective product that led to her child's death," said Jain.
"This historic ruling not only allows Megan Garcia to seek the justice her family deserves," Jain added, "but also sets a new precedent for legal accountability across the AI and tech ecosystem."
Character.AI was founded by Shazeer and de Freitas in 2021; the duo had worked together on AI projects at Google, and left together to launch their own chatbot startup. Google provided Character.AI with its essential Cloud infrastructure, and in 2024 raised eyebrows when it paid Character.AI $2.7 billion to license the chatbot firm's data — and bring its cofounders, as well as 30 other Character.AI staffers, into Google's fold. Shazeer, in particular, now holds a hugely influential position at Google DeepMind, where he serves as a VP and co-lead for Google's Gemini LLM.
Google did not respond to a request for comment at the time of publishing, but a spokesperson for the search giant told Reuters that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage" the Character.AI app "or any component part of it."
In a statement, a spokesperson for Character.AI emphasized recent safety updates issued following the news of Garcia's lawsuit, and said it "looked forward" to its continued defense:
It's long been true that the law takes time to adapt to new technology, and AI is no different. In today's order, the court made clear that it was not ready to rule on all of Character.AI's arguments at this stage and we look forward to continuing to defend the merits of the case.
We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe. We have launched a number of safety features that aim to achieve that balance, including a separate version of our Large Language Model for under-18 users, parental insights, filtered Characters, time spent notifications, updated prominent disclaimers and more.
Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline.
Any safety-focused changes, though, were made months after Setzer's death and after the lawsuit was eventually filed, and they won't factor into the court's ultimate decision in the case.
Meanwhile, journalists and researchers continue to find holes in the chatbot site's updated safety protocols. Weeks after the lawsuit was announced, for example, we continued to find chatbots expressly dedicated to self-harm, grooming and pedophilia, eating disorders, and mass violence. And a team of researchers, including psychologists at Stanford, recently found that using a Character.AI voice feature called "Character Calls" effectively nukes any semblance of guardrails, and determined that no kid under 18 should be using AI companions, including Character.AI.
More on Character.AI: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions


Related Articles

Marsha Blackburn Wants Secret Police

Yahoo

Sen. Marsha Blackburn (R–Tenn.) introduced a bill Wednesday that would make it a federal felony punishable by up to five years in prison to publish the name of a federal law enforcement officer with the intent to obstruct an investigation. Blackburn unveiled the "Protecting Law Enforcement from Doxxing Act" as masked Immigration and Customs Enforcement (ICE) agents carry out nationwide raids under the Trump administration's mass deportation efforts. These raids have sparked public protests and pushback from local officials, including Nashville Mayor Freddie O'Connell, who has released records of local police interactions with ICE that include the names of ICE agents.

"Blue city mayors are doing everything they can to obstruct the Trump administration's efforts to deport criminal illegal aliens," Blackburn said in a press release. "Just last week, Nashville Mayor O'Connell and his office doxxed federal law enforcement officers after the Trump administration worked with Tennessee Highway Patrol to arrest criminal illegal aliens."

However, press freedom groups say the bill raises serious First Amendment concerns. "Public oversight and accountability relies on accurate news about law enforcement activity," Gabe Rottman, vice president of policy at the Reporters Committee for Freedom of the Press, tells Reason. "The bill is dangerously overbroad and could be used to chill newsgathering and reporting that is clearly in the public interest."

Government employees, including law enforcement officers, generally don't have a presumption of privacy when it comes to information such as their names, salaries, and business conducted in public. Nevertheless, that hasn't stopped police and politicians from accusing people of "doxxing" officers for releasing public information. Last month, ICE agents stormed a house in Irvine, California, executing a search warrant for a man accused of putting up flyers around Los Angeles with photos, names, and phone numbers of several ICE agents operating in the area. The Department of Homeland Security did not respond to requests for comment from Reason to state what law the man was accused of violating.

While the requirement in Blackburn's proposed law for an intent to obstruct an investigation would appear to provide some safeguard against abuse, Blackburn and other Republicans' comments make clear that their goal is to insulate ICE from transparency. When asked by reporters on Friday if he was OK with ICE agents not identifying themselves, House Speaker Mike Johnson responded, "Why, so they can target them? So they can put names and faces online and dox them? That's what these activists do."

What Blackburn and Johnson's comments ignore is that an anonymous police force is an unaccountable police force. For example, when New York City's Civilian Complaint Review Board (CCRB) attempted to investigate the hundreds of complaints of police brutality and misconduct during the 2020 George Floyd protests, it was forced to close a third of the cases because it couldn't identify the officers involved. The CCRB noted that it faced "unprecedented challenges in investigating these complaints" due to officers covering their names and badge numbers, failing to turn on their body-worn cameras, and failing to file reports.

Of course, it's already functionally impossible to sue a federal law enforcement officer for a civil rights violation thanks to the Supreme Court's evisceration of the Bivens doctrine, but the normalization of anonymous federal agents will further immunize them from other forms of oversight such as media, inspector general, and congressional investigations. There are already laws on the books to handle those who threaten federal officers or interfere in investigations. It's essential for government transparency, public trust, and the rule of law that the officials dictating and enforcing public policies can be identified by media outlets and citizens without fear of retribution.

If you don't know about these video tools, you're already behind

Fox News

I've said it before, and I'll say it again: AI is changing everything. And now, it's coming for video in a huge way. We're not just talking about deepfakes or filters anymore. This is next-level, movie-magic stuff.

Let's talk about the wild part first. You don't need any editing software. You don't need a green screen. Heck, you don't even need a camera. Your keyboard is now the director, the producer, the editor, and your creative partner. All you have to do is type what you want to see, and boom, AI turns your words into high-def, studio-quality video clips. It's like having a Hollywood studio tucked inside your laptop.

Two of the biggest names in this space right now are Sora from OpenAI and Veo from Google. And yes, you can actually try them both right now. I've tested them side by side, and you can watch my results at the end to see which one came out on top.

Sora is built right into ChatGPT and is shockingly good. Type something simple like, "a golden retriever puppy playing on the beach with a tennis ball," and within seconds, Sora delivers a smooth, visually rich video clip. No stock footage. No need to mess with editing timelines or export settings. Just write what you want to see. And here's a hot tip: Microsoft just announced that Sora is being integrated into its new Bing Video Creator, so you'll start seeing it pop up in more places soon.

Now over to Google's Veo, and wow, this one is aiming for cinematic quality. Veo gives you up to 60-second videos in 1080p, and the realism is something else. You can include camera moves, lighting shifts, and smooth transitions, just by typing a sentence. For example: "A cat runs across a Paris rooftop at sunset." And it delivers exactly that, complete with lens flares and realistic shadows. To get your hands on Veo, you'll need to sign up for one of Google's Gemini plans.

I wanted to see how each platform handled the same prompt, so I asked both to generate a video based on: "Show me a golden retriever puppy playing on the beach with a tennis ball." Now you can watch the videos side by side. My AI puppy Bella stars in both (Veo's on the left, Sora's on the right).

My thoughts? Sora nailed the adorable close-up details: Bella's floppy ears, the texture of the fur, the light hitting the waves. But it fumbled a bit when it came to physics. The ball's bounce didn't feel quite right. On the other hand, Veo did a better job with the action. The puppy's movements were more lifelike, but it wasn't as sharp on the visual details.

We are entering a new age of video creation where your ideas are enough. No gear, no crew, no edits. Just imagination and a little typing. Whether you're a marketer, content creator, educator, or just someone who likes to experiment, this is the future of storytelling. Trust me, it's just getting started.

Award-winning host Kim Komando is your secret weapon for navigating tech. Copyright 2025, WestStar Multimedia Entertainment.

What Is Agentic AI? Everything to Know About Artificial Intelligence Agents

CNET

You've probably heard a lot about ChatGPT, Google's Gemini, image generators and AI writing tools. But there's a new term making the rounds: agentic AI. And while it might sound like another buzzword, it's not a new invention. Recent advances, however, have made it far easier to build, deploy and interact with these kinds of systems. Some of them you might have already seen at work, like customer service banking bots, self-driving cars and smart home assistants.

If you're using Perplexity in the US as a Pro subscriber, a perfect example is its "Buy with Pro" feature. Rather than assisting with your shopping and handing you off to a retailer, it collects your preferences, processes the transaction (sometimes even selecting the best available retailer) and uses your stored payment and shipping information to complete the order.

Experts say it's time to start paying attention to what these AI agents are capable of doing on their own, though it will take time before they see widespread use across industries and become mainstream. Unlike AI chatbots, which often require explicit instructions at each step, AI agents can break down complex objectives into smaller, manageable actions. So instead of simply responding to your questions or prompts, agentic AI is designed to take initiative. That means understanding its environment, making decisions and acting without human direction at every step.

So what does that look like in practice, and how is it different from what artificial intelligence is already doing? I spoke to several experts and will break down everything you need to know about agentic AI -- including whether it can be trusted.

From passive tools to proactive agents

Self-driving cars like the Zoox robotaxi are examples of agentic AI. (Zoox)

Agentic AI systems aren't passive tools waiting for input. They operate in a cycle to sense the environment, decide what to do, and then act. That structure makes them more autonomous and lets them take on complex, goal-oriented tasks across multiple systems.

"Agentic AI…is now making this sort of sense-decide-act loop available to everybody," Peter Stone, professor at the University of Texas and chief scientist at Sony AI America, told me. "Rather than waiting for the input-output behavior, you're able to task a program with sensing the world, deciding what to do and actually acting." (A minimal code sketch of such a loop appears at the end of this article.)

Ankur Patel, CEO and founder of enterprise agentic AI company Multimodal, called it "a fundamental shift from reactive tools to proactive systems capable of complex decision-making." He gave an example of a loan underwriter who might otherwise spend hours cross-referencing pay stubs, tax returns and credit reports. "The AI agent automatically ingests and validates hundreds of data points from diverse sources. Think bank feeds, HR systems and government databases, while flagging inconsistencies like mismatched employment dates," Patel told me.

In other words, it's not mere automation. "Agentic AI connects complex, multisource inputs with internal rules or manuals, and gives accurate, critical outputs in much shorter time frames," Patel explained.

What makes agentic AI different from generative AI and chatbots?

Generative AI creates content such as text, images, music and even videos, based on what it learned during training and your prompt. Agentic AI can use those same models, but adds a layer of autonomy with reasoning and planning to proactively achieve goals through a sequence of actions. A generative AI tool might write you a vacation itinerary. AI agents could plan the trip, book your flights, reserve the hotel and even rebook everything if your flight gets delayed or canceled.

Large language models, like ChatGPT or Claude, can become agentic when connected to external tools, sensors or APIs. This ability to interact with the world (either physical or digital) is what makes the difference. While systems like ChatGPT and Siri are designed to answer questions, agentic AI is built to solve problems.

"Chatbots answer questions. Agentic AI solves problems by turning insights into outcomes," Patel said. That means orchestrating tasks across platforms. "For example, it can verify documents, assess risks and even trigger real-world actions like loan approvals or insurance payouts."

Automation vs. augmentation

Like most new tech, agentic AI raises concerns about jobs. Will it replace workers, or help them do their jobs better? Stone said the answer isn't simple. "Usually, when people say automation, they're thinking of replacing jobs. When people say augmentation, they're thinking of changing jobs, making them more efficient," Stone said. He compared it to the transition from hand-washing dishes in a restaurant to using a dishwasher -- there's still a human in the loop, but they're doing less of the repetitive labor. Another relatable example is correspondence. While writing letters by hand and sending them via snail mail might trigger nostalgia in romantic folks like me, we now send messages and emails instantly from smartphones.

Patel agreed that agentic systems free people up from the grunt work. "It's unfortunate that a lot of man hours even today are spent on drudgery," he said. "Good AI can take care of them without needing much supervision." For Patel, the bigger risk is falling behind. "The question isn't 'will AI take my job?' It's 'will I be working alongside AI or getting outpaced by those who do?'" While that might sound daunting to anyone hesitant about the shift, AI is advancing fast enough that it feels inevitable.

Where you might see agentic AI in action

Google's new AI Mode is agentic AI in action. (Google/Screenshot by CNET)

Enterprise software vendors are already rolling out agentic systems in industries like:

Robotics: Autonomous robots that can navigate and perform tasks in complex environments.
Software development: AI agents that can write and debug code independently.
Customer service: Advanced chatbots that can resolve complex issues without human assistance.
Supply chain: AI agents that manage inventory, forecast demand and optimize delivery routes.
Manufacturing: Systems that monitor equipment, detect defects and streamline production lines.
Cybersecurity: Agents that detect threats, isolate risks and respond in real time.
Insurance: AI agents that process claims, assess risk and draft responses.
Banking: Systems that verify income, credit and approve loans.
Healthcare: Tools that flag anomalies, suggest diagnoses and draft notes.

AI agents in these industries process documents, extract data, flag inconsistencies and route information with minimal human intervention. But you don't have to work in any of these sectors to notice it. Opera's agentic web browser and Google's new agentic search, called AI Mode, aim to help you go from inspiration to purchase without clicking through pages of results. AI assistants that can book your travel, manage your inbox or compare online deals are all signs of what's coming in the consumer sector as well. Even Microsoft is adding an AI agent to Windows that can change system settings for you.

Patel says everyday users should care for three reasons: "First, it gives people their most precious resource back -- time. Second, it vastly improves customer experience. Third, it prevents costly human errors."

That said, there are still limitations. AI agents struggle in open-ended or unpredictable environments, especially when tasks lack clear structure or context. They also depend heavily on well-formed prompts or goals, meaning vague input can lead to irrelevant or faulty actions.

Can it be trusted?

Autonomy brings benefits, but also risks. When systems make decisions or take action without supervision, what happens if something goes wrong? And who is responsible? Is it the person using the AI, or the company and developers that built it? Legal dilemmas continue to expand alongside these AI advancements.

Stone also warns that the risks aren't hypothetical. "The new type of risk… is not a person acting incorrectly or irresponsibly as a result of what the AI advises, but rather the AI system actually taking a dangerous action," he told me. Say you let an autonomous car drive itself: it can do more than just suggest a route, and it can cause harm if it malfunctions, or leave you driving in circles in a parking lot like one unfortunate passenger.

The stakes depend on what the AI is allowed to do. Booking a low-cost flight? Low risk. Accessing medical records or spending thousands of dollars? Much higher. "The risk is directly related to the space of actions and the agency or autonomy that you give to the agent," Stone emphasized.

Patel pointed out that safeguards are essential, especially in regulated industries. "To truly trust AI, it needs to have detailed audit trails and decision logs, process documentation, confidence scoring, and the ability to route decisions to humans where absolutely necessary," he said.

What's next?

While the hype around agentic AI is rising fast, don't expect to hand over your entire life to AI agents anytime soon. It will take years, and most agentic AI systems will be tailored to specific industries or problems, not one-size-fits-all assistants. "There's a real chance that by demanding perfection from autonomous agents, we're missing an opportunity to do a lot better than the status quo," Stone said. "I think we need to accept that there are going to be mistakes. But they're going to get better over time."

And so, the direction is clear. We're moving from AI that chats with us to AI that does things for us. Add robotics to the mix, and it's a whole new ballgame.

FAQs

What is agentic AI?

Agentic AI is artificial intelligence that can independently make decisions and take actions to achieve a goal. Instead of waiting for step-by-step commands, AI agents decide what needs to be done and take action across systems with minimal human involvement.
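To make the sense-decide-act loop that Stone describes more concrete, here is a minimal sketch in Python. It is purely illustrative and assumes nothing about any vendor's actual API: the llm_decide function and the two booking tools are hypothetical stand-ins for a real language-model call and real integrations, built around the article's trip-planning example.

# Minimal, hypothetical sketch of an agentic sense-decide-act loop.
# All names here (llm_decide, book_flight, book_hotel) are illustrative
# stand-ins, not any vendor's real API.

def llm_decide(goal, observations):
    """Stand-in for a language-model call that picks the next action.

    A real agent would send the goal and current observations to an LLM
    and parse its reply; simple rules are used here so the sketch runs."""
    if "flight_booked" not in observations:
        return "book_flight", {"destination": goal["destination"]}
    if "hotel_booked" not in observations:
        return "book_hotel", {"city": goal["destination"]}
    return "done", {}

def book_flight(destination):
    # Placeholder for a call to a real flight-booking API.
    return f"flight to {destination} confirmed"

def book_hotel(city):
    # Placeholder for a call to a real hotel-booking API.
    return f"hotel in {city} confirmed"

TOOLS = {"book_flight": book_flight, "book_hotel": book_hotel}

def run_agent(goal, max_steps=5):
    observations = {}
    for _ in range(max_steps):
        action, args = llm_decide(goal, observations)  # decide
        if action == "done":
            break
        result = TOOLS[action](**args)                 # act
        # Record what happened so the next decision can see it (sense).
        observations[action.replace("book_", "") + "_booked"] = result
    return observations

if __name__ == "__main__":
    print(run_agent({"destination": "Paris"}))

Running it with a destination of Paris walks through the loop twice, booking the flight and then the hotel, before the decision step reports it is done; a real agent would swap an LLM and live APIs in for these stubs.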
