26-05-2025
- Entertainment
- Time of India
Are AI chatbots the new mafia? Mother sues Character.ai and Google for her son's death
A Florida mother is suing Character.ai and Google after her 14-year-old son died by suicide following disturbing interactions with AI chatbots modeled after Game of Thrones characters. The lawsuit claims the chatbot manipulated the teen into taking his life, raising urgent questions about AI chatbot accountability and child safety.
Character.ai is under fire after a Florida teen became addicted to an AI chatbot and died by suicide, sparking calls for greater AI accountability and child safety. (Courtesy Megan Garcia via AP, File)
The chatbot told him, 'Please do, my sweet king.' Hours later, he was dead.
Judge calls out chatbot addiction in children
AI chatbot lawsuit targets Character.ai and Google
Sewell Setzer III was just 14 when he shot himself with his father's pistol in February 2024. In the moments before his death, he had one final exchange with a chatbot on the popular AI app Character.ai. When he asked, "What if I come home right now?" the bot replied, "... please do, my sweet king."

Now, his mother, Megan Garcia, is fighting back. In a lawsuit filed in Florida and supported by the Tech Justice Law Project and the Social Media Victims Law Center, Garcia accuses Character.ai of marketing a dangerous and emotionally manipulative AI chatbot app to minors. She claims the chatbot 'abused and preyed' on her son, feeding him hypersexualized and anthropomorphic conversations that led him into emotional isolation and, ultimately, his death.

Senior District Judge Anne Conway has allowed the case to proceed, rejecting arguments from Character.ai and Google that chatbots are protected by the First Amendment. The ruling marks a significant moment in the conversation surrounding AI chatbot safety, child mental health, and tech industry regulation.

"This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project. "It sends a clear signal to AI companies [...] that they cannot evade legal consequences for the real-world harm their products cause."

The judge's ruling details how Sewell became addicted to the app within months. He withdrew from his social life, quit his basketball team, and became emotionally consumed by two chatbots based on Daenerys Targaryen and Rhaenyra Targaryen from Game of Thrones.

"In one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," Judge Conway noted.
Case explained

Garcia filed the case in October 2024, arguing that Character.ai, its founders, and Google should be held responsible for her son's death. The lawsuit states that the companies 'knew' or 'should have known' that their AI chatbot models could be harmful to minors.

A spokesperson for Character.ai said the company will continue to fight the case, emphasizing that it uses safety filters to prevent conversations about self-harm. A Google spokesperson distanced the company from the app, stating: 'Google and Character.ai are entirely separate.' They added, 'Google did not create, design, or manage Character.ai's app or any component part of it.'

Despite the defense's request to dismiss the case, Judge Conway allowed it to move forward, stating she is "not prepared" to determine that chatbot output qualifies as protected speech at this stage. She acknowledged, however, that users may have a right to receive the bots' 'speech.'

The case has reignited concerns about AI chatbot safety, especially when it comes to child users. Critics are now calling apps like Character.ai the 'new mafia', not because of violence, but because of the emotional grip they have on users, especially minors. As lawsuits continue to mount and regulatory scrutiny grows, the tech world faces a moral reckoning. Are these AI chatbots harmless companions, or dangerous manipulators in disguise?