
Who is liable if a friendly chatbot ‘abets' suicide?
A US judge has admitted a case against American firm Character.AI over charges that its chatbot drove a teenager to suicide. The ruling will be closely watched for its potential to establish developer and corporate liability for "friendly" but "addictive" chatbots.
What's this case all about?
In May, a US judge allowed a wrongful death lawsuit against Character.AI and Google to proceed, rejecting claims that the chatbot's conversations were protected by the First Amendment, which guarantees free speech but not when it causes harm. The judge noted that the companies "fail to articulate why words strung together by an LLM (large language model) are speech", and added that the chatbot could be considered a "product" under liability law. Character.AI and Google must respond by 10 June. Google was made a party as it holds licensing rights to the startup's technology.
Why exactly is this app being sued?
Character.AI allows users to interact with life-like AI "characters", including fictional and celebrity personas that mimic human traits like stuttering. On 14 April 2023, 14-year-old Sewell Setzer III began using the app, mainly engaging with Game of Thrones bots like Daenerys and Rhaenyra Targaryen. He became obsessed, expressing his love for Daenerys. He withdrew socially, quit basketball, and upgraded to the app's premium version. A therapist diagnosed him with anxiety and a mood disorder, unaware of his chatbot use. On 28 February 2024, days after his phone was confiscated, he died by suicide.
Is this the first legal suit against an AI chatbot?
In March 2023, a Belgian man died by suicide after prolonged conversations with an AI chatbot named Eliza on the Chai AI app, but no case was filed. The National Eating Disorders Association also shut down its chatbot after it began offering harmful weight loss advice. Separately, tech ethics groups have filed a complaint against an AI companion app, Replika.
Don't AI chatbots help users cope with stress?
AI chatbots are being increasingly used as mental health tools, with apps like Wysa (India), Woebot, Replika and Youper offering support based on cognitive behavioral therapy (CBT). These bots aid in mood tracking and coping, and include disclaimers that they are not substitutes for professional care. Yet, as experts note, bots can fake intimacy but don't have real feelings. Although users value their availability and human-like interactions, this can foster over-attachment and blur reality.
Are there regulatory safeguards?
Character.AI claims that its language model version for users under 18 aims to reduce exposure to sensitive or suggestive content. The EU's AI Act classifies certain AI systems as "high risk" when used in sensitive areas like mental health. China enforces platform accountability for AI-generated content. The US and India rely on case law and product liability, but have no dedicated AI regulator. As AI becomes more autonomous and mental health bots avoid oversight by using disclaimers, new legal frameworks will be essential.
Related Articles


India Today
Salman Rushdie says he has never used AI, warns it could replace writers but first it needs to learn comedy
When OpenAI launched ChatGPT in late 2022, the reaction was mixed. On one hand, everyone was excited to see cutting-edge technology and what artificial intelligence could do for humans. On the other, fears began to linger about AI replacing humans in various jobs. One of the roles people were most concerned about was that of writers, as AI platforms like ChatGPT and Gemini could write everything from essays to novels. And since then, these tools have become much more capable.

But famous writer Salman Rushdie believes that writers shouldn't fear AI — at least not for now — as it lacks one very important skill: a sense of humour. Speaking at the Hay Festival in Wales, Rushdie quipped that writers are safe from being overtaken by machines, at least until a bot can write a book that genuinely makes people laugh. 'I've never tried AI,' he said. 'I pretend it doesn't exist. It has no sense of humour, you don't want to hear a joke told by ChatGPT. If there's ever a moment when a funny book is written by ChatGPT, I think we're screwed.'

Rushdie's comments on AI came during his first major in-person appearance in the UK since he was seriously injured after he was stabbed while on stage in the United States. The attack left him blind in his right eye, but he has continued to make public appearances and comment on developments in literature and technology.

Writers have been among the first white-collar professionals to raise the alarm about the rise of AI and its potential to replace humans in writing work. In recent years, several prominent authors have voiced concerns that AI companies are training language models using their copyrighted works without permission. In 2023, a group of more than 8,500 authors came together and signed an open letter to tech companies asking them to stop using their books for training AI tools. 'These technologies mimic and regurgitate our language, stories, style, and ideas. Millions of copyrighted books, articles, essays, and poetry provide the "food" for AI systems, endless meals for which there has been no bill,' they wrote in the letter.

The growing tension between technology and creativity is even prompting many in the literary community to call for regulation and transparency. Writers argue that while AI can mimic sentence structure and tone, it lacks the emotional nuance, lived experience, and cultural insight that define truly impactful literature.


Mint
Your screen protectors need AI too, apparently. Do AI screen protectors really work?
First, they came for your devices, then they came for your jobs, and now they're coming for your screens. By 'they,' I'm referring to artificial intelligence (AI) - a term you hear almost daily across all spheres of life. Want a quick text summary of a current event? Simply ChatGPT it, or ask Gemini to create an image that depicts the headline appropriately. Yep, it's that easy.

But now, AI is being misused to represent technologies that are farcical or simply don't exist yet. One such product that has climbed viral trends lately is screen protectors, or screen guards, as they're commonly known in India. Why would your screen protector need AI, you wonder? Well, we're wondering too. It's not uncommon for companies to now use 'AI' as a way to grab people's attention, even if those features were available before the AI-isation of our world.

A Reddit post recently showed a screen protector with an 'optimised for AI' badge. Since then, the post has hit some nerves. Purists think of AI as a marketing gimmick, while new-age adopters think AI has uses everywhere. So, do AI screen protectors work? According to Android Authority's investigation, the screen guard in question was, well, a regular screen-protecting film that was marketed as AI-optimised to garner attention. And it clearly worked.

Was AI used to design the screen guard for better accuracy or for bubble elimination? Perhaps, yes. But we'll never know, because companies now slap the AI tag onto every product without stating where AI was used and whether AI is part of the final product. In this case, it's possible that AI was used to create the screen guard, but there is no suggestion of the same, at least based on the Reddit post. If nothing else, the AI screen protector smokescreen is drawing attention to the lack of transparency prevailing in the tech world when it comes to AI functionality and products - especially in the face of evolving regulations.
Slapping an AI logo onto a product should not be enough to sway potential customers, but that is the case right now. When AI technologies mature enough to create more tangible impact, people will naturally learn to differentiate a gimmick from a revolution. Till then, remember - not everything needs AI optimisation, especially not your next screen guard - at least for now!


Times of India
As AI redefines engineers' roles, Cisco CPO Jeetu Patel reveals 'grossly underestimated' skills needed for the future
In a world increasingly reshaped by artificial intelligence, the engineering profession is undergoing a rapid evolution. While many tech companies are rethinking the size and scope of their engineering teams, Cisco is taking a different path — one rooted in expansion and reimagination. Jeetu Patel, President and Chief Product Officer at Cisco, recently spoke to Business Insider, offering a compelling vision of how the engineer of tomorrow will look very different from the one we know today.

Hiring More, Not Less: Cisco's Contrarian Approach

As the industry buzzes with discussions of automation replacing human roles, Cisco is "unapologetically hiring" engineers, Patel revealed. With 27,000 engineers already on board, the company isn't slowing down. "We feel more constrained now than ever before on not having enough engineers to prosecute all the ideas that we've got going internally," he said. For Cisco, the limitation isn't in the talent they have — it's in the untapped ideas they want to bring to life.

AI as a Co-Pilot, Not a Competitor

That said, Patel is not blind to the reality of AI redefining roles. Cisco has already partnered with OpenAI to integrate its Codex AI coding assistant into the company's workflow. The tool performs tasks like code writing, bug fixing, and test running — effectively becoming a digital co-pilot to human engineers. This shift will make engineers significantly more productive, freeing them from routine coding and allowing them to focus on high-impact innovation. 'AI will make engineers 10 to 50 times more productive,' Patel noted. 'The speed at which an idea becomes a product will go from months to minutes.'

So, what does the engineer of the future need to focus on?
Forget Syntax — Think Strategy

Patel emphasized that traditional skills like syntax understanding — while still important — will become less central. Instead, two 'grossly underestimated' skills will come to define engineering success in the AI era.

As AI tools become more sophisticated, engineers will need to master the art of managing not just human teams but also agent ecosystems. These AI agents will communicate, collaborate, and solve problems in tandem — and someone needs to oversee that orchestration. Patel believes that knowing how to delegate intelligently between humans and machines will be 'super important.'

The Imagination Age is Here

Beyond coordination lies creativity. According to Patel, the true value of a future engineer will rest not in how well they code, but in how well they imagine. As AI reduces the grunt work, engineers will have more space to ideate and innovate. Tools like Codex, he says, 'unlock human imagination,' removing bottlenecks created by scarce engineering capacity.

This paradigm shift isn't just about productivity — it's about job satisfaction. Patel believes that engineers, liberated from repetitive tasks, will find more meaning in their roles. "This evolution will improve output capacity, but also the satisfaction that someone gets from a job," he said. In his words, the only remaining constraint in the engineering world will be one's imagination.

As Cisco redefines what it means to be an engineer in the AI age, the message is clear: the future belongs not to those who code the fastest, but to those who think the furthest.