
Latest news with #C.AI

A Teen Killed Himself After Talking to a Chatbot. His Mom's Lawsuit Could Cripple the AI Industry.

Yahoo

2 days ago

  • Entertainment

A Teen Killed Himself After Talking to a Chatbot. His Mom's Lawsuit Could Cripple the AI Industry.

The Orlando Division of the U.S. District Court for the Middle District of Florida will hear allegations against Character Technologies, the creator of Character.AI, in the wrongful death lawsuit Garcia v. Character Technologies, Inc. If the case is not first settled between the parties, Judge Anne Conway's ruling will set a major precedent for the First Amendment protections afforded to artificial intelligence and the liability of AI companies for damages their models may cause. The case was brought against the company by Megan Garcia, the mother of 14-year-old Sewell Setzer III, who killed himself after conversing with a chatbot roleplaying as Daenerys and Rhaenyra Targaryen from the Game of Thrones franchise. Eugene Volokh, professor emeritus at UCLA School of Law, shares examples of Sewell's conversations included in the complaint against Character Technologies.

Garcia's complaint alleges that Character Technologies negligently designed Character.AI "as a sexualized product that would deceive minor customers and engage in explicit and abusive acts with them." The complaint also asserts that the company failed to warn the public "of the dangers arising from a foreseeable use of [Character.AI], including specific dangers for children"; intentionally inflicted emotional distress on Sewell by "failing to implement adequate safety guardrails in the product before launching it into the marketplace"; and that the company's negligence proximately caused the death of Sewell, who experienced a "rapid mental health decline" after he began using Character.AI, with which he conversed "just moments before his death."

Conway dismissed the intentional infliction of emotional distress claim on the grounds that "none of the allegations relating to Defendants' conduct rises to the type of outrageous conduct necessary to support" such a claim. However, Conway rejected the defendants' motions to dismiss the rest of Garcia's claims on First Amendment grounds, saying, "The Court is not prepared to hold that the Character A.I. [large language model] LLM's output is speech at this stage." Adam Zayed, founder and managing attorney of Zayed Law Offices, tells Reason he thinks "that there's a difference between the First Amendment arguments where a child is on social media or a child is on YouTube" and bypasses age-verification measures to consume content "that's being produced by some other person" vs. minors accessing inappropriate chatbot outputs.

However, Conway recognized Justice Antonin Scalia's opinion in Citizens United v. Federal Election Commission (2010) that the First Amendment "is written in terms of 'speech,' not speakers." Conway ruled that the defendants "must convince the court that the Character A.I. LLM's output is protected speech" to invoke the First Amendment rights of third parties—Character.AI users—whose access to the software would be restricted by a ruling in Garcia's favor. Conway says that Character Technologies "fail[ed] to articulate why words strung together by an LLM are speech."

Whether LLM output is speech is an intractable philosophical question and a red herring; Conway herself invokes Davidson v. Time Inc. (1997) to assert that "the public…has the right to access social, aesthetic, moral, and other ideas and experiences." Speech acts are broadly construed as "ideas and experiences" here—the word speech is not even used. So the question isn't whether the AI output is speech per se, but whether it communicates ideas and experiences to users. In alleging that Character.AI targeted her son with sexually explicit material, the plaintiff admits that the LLM communicated ideas, albeit inappropriate ones, to Sewell. Therefore, LLM output is expressive speech (in this case, speech that is obscene to express to a minor under the Florida Computer Pornography and Child Exploitation Prevention Act).

The opening paragraph of the complaint accuses Character Technologies of "launching their systems without adequate safety features, and with knowledge of potential dangers" to "gain a competitive foothold in the market." If the court establishes that the First Amendment does not protect LLM output and that AI firms can be held liable for the damages their models cause, only highly capitalized firms will be able to invest in the architecture required to shield themselves from such liability. Such a ruling would inadvertently erect a massive barrier to entry into the burgeoning American AI industry and protect incumbent firms from market competition, which would harm consumer welfare.

Jane Bambauer, professor of law at the University of Florida, best explains the case in The Volokh Conspiracy: "It is a tragedy, and it would not have happened if Character.AI had not existed. But that is not enough of a reason to saddle a promising industry with the duty to keep all people safe from their own expressive explorations."

Google-Backed Chatbot Platform Caught Hosting AI Impersonations of 14-Year-Old User Who Died by Suicide

Yahoo

20-03-2025

  • Entertainment

Google-Backed Chatbot Platform Caught Hosting AI Impersonations of 14-Year-Old User Who Died by Suicide

Character.AI, the Google-backed chatbot startup embroiled in two separate lawsuits over the welfare of minor users, was caught hosting at least four publicly facing impersonations of Sewell Setzer III — the 14-year-old user of the platform who died by suicide after engaging extensively with its bots, and whose death is at the heart of one of the two lawsuits against the company. The chatbot impersonations use variations of Setzer's name and likeness, and in some cases refer to the deceased teen in openly mocking terms. They were all accessible through accounts listed as belonging to minors, and were easily searchable on the platform. Each impersonation was created by a different user.

Setzer took his life in February 2024. The lawsuit, filed in October in Florida on behalf of his mother, Megan Garcia, alleges that her child was emotionally and sexually abused by chatbots hosted by Character.AI, with which the 14-year-old was emotionally, romantically, and sexually intimate. The teen's last words, as The New York Times first reported, were to a bot based on the "Game of Thrones" character Daenerys Targaryen, telling the AI-powered character that he was ready to "come home" to it. Real-world journal entries showed that Setzer believed he was "in love" with the Targaryen bot, and wished to join her "reality."

At least one of the impersonations — described as a "tribute" by its creator — makes a clear reference to the details of the lawsuit directly on the character's publicly viewable profile. It describes Setzer as "obsessed" with "Game of Thrones," and suggests that the bot is meant to gamify Setzer's death. "The next day he goes to school," reads the profile, before asking if the user will "be able to free him from [Character.AI]." Impersonations are clearly outlawed in Character.AI's terms of service, which according to a webpage haven't been updated since at least October 2023. With permission from the family, we're sharing screenshots of two of the profiles.

As it does for all characters, the Character.AI interface recommends "Chat Starters" that users might use to interact with the faux teen. "If you could have any superpower for a day, what would you choose and how would you use it?" reads one. "If you could instantly become an expert in one skill or hobby," reads another, "what would it be?"

In a forceful statement, Garcia told Futurism that seeing the disparaging chatbots was retraumatizing for her, especially so soon after the first anniversary of her son's suicide last February. Her full statement reads:

February was a very difficult month for me leading up to the one-year anniversary of Sewell's death. March is just as hard because his birthday is coming up at the end of the month. He would be 16. I won't get to buy him his favorite vanilla cake with buttercream frosting. I won't get to watch him pick out his first car. He's gone. Seeing AI chatbots on CharacterAI's own platform, mocking my child, traumatizes me all over again. This time in my life is already difficult and this adds insult to injury. Character.AI was reckless in rushing this product to market and releasing it without guardrails. Now they are once again being reckless by skirting their obligation to enforce their own community guidelines and allowing Sewell's image and likeness to be used and desecrated on their platform. Sewell's life wasn't a game or entertainment or data or research. Sewell's death isn't a game or entertainment or data or research. Even now, they still do not care about anything but farming young users' data.

If Character.AI can't prevent people from creating a chatbot of my dead child on their own platform, how can we trust them to create products for kids that are safe? It's clear that they both refuse to control their technology and filter out garbage inputs that lead to garbage outputs. It's the classic case of Frankenstein not being able to control his own monster. They should not still be offering this product to children. They continue to show us that we can't trust them with our children.

This isn't the first time that Character.AI has been caught platforming chatbot impersonations of slain children and teenagers. Last October, the platform came under fire after the family of Jennifer Crecente, who in 2006 was murdered by an ex-boyfriend at the age of 18, discovered that someone had turned her name and likeness into a chatbot. Crecente was in her senior year of high school when she was killed. "You can't go much further in terms of really just terrible things," Jennifer Crecente's father Drew Crecente told The Washington Post at the time.

And in December, while investigating a thriving community of school violence-themed bots on the platform, Futurism discovered many AI characters impersonating — and often glorifying — young mass murderers like Adam Lanza, whose Sandy Hook Elementary shooting claimed 26 lives, and Eric Harris and Dylan Klebold, who killed 13 people in the Columbine High School massacre. Even more troublingly, we found a slew of bots dedicated to the young victims of the shootings at Sandy Hook, Columbine, Robb Elementary School, and other sites of mass school violence. Only some of these characters were removed from the platform after we specifically flagged them.

"Yesterday, our team discovered several chatbots on Character.AI's platform displaying our client's deceased son, Sewell Setzer III, in their profile pictures, attempting to imitate his personality and offering a two-way call feature with his cloned voice," said the Tech Justice Law Project, which is representing Garcia in court, in a statement about the bots. "This is not the first time Character.AI has turned a blind eye to chatbots modeled off of dead teenagers to entice users, and without better legal protections, it may not be the last. While Sewell's family continues to grieve his untimely loss, Character.AI carelessly continues to add insult to injury."

Soon after we reached out to Character.AI with questions and links to the impersonations of Setzer, the characters were deleted. In a statement that made no specific mention of Setzer or his family, a spokesperson emphasized the company's "ongoing safety work."

"Character.AI takes safety on our platform seriously and our goal is to provide a space that is engaging and safe," the spokesperson said in an emailed statement. "Users create hundreds of thousands of new Characters on the platform every day, and the Characters you flagged for us have been removed as they violate our Terms of Service. As part of our ongoing safety work, we are constantly adding to our Character blocklist with the goal of preventing this type of Character from being created by a user in the first place."

"Our dedicated Trust and Safety team moderates Characters proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand," the statement continued. "As we continue to refine our safety practices, we are implementing additional moderation tools to help prioritize community safety."

More on Character.AI: Did Google Test an Experimental AI on Kids, With Tragic Results?
