In lawsuit over teen's death, US judge rejects arguments that AI chatbots have free speech rights


The Star | 22-05-2025
In this undated photo provided by Megan Garcia of Florida in October 2024, she stands with her son, Sewell Setzer III. — Megan Garcia via AP
TALLAHASSEE, Florida: A federal judge on May 21 rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment – at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself.
The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.
Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."
The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the US and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.
"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualised conversations with the bot, which was patterned after a fictional character from the television show Game Of Thrones. In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.
In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.
"We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.
Attorneys for the developers want the case dismissed, arguing that chatbots deserve First Amendment protections and that ruling otherwise could have a "chilling effect" on the AI industry.
In her order Wednesday, US Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage."
Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology.
"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."
No matter how the lawsuit plays out, Lidsky says the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies."
"It's a warning to parents that social media and generative AI devices are not always harmless," she said. – AP
Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim's (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to befrienders.org.my/centre-in-malaysia for a full list of numbers nationwide and operating hours, or email sam@befrienders.org.my.

Related Articles

AI reshapes, not replaces, say young Malaysians

The Star | 4 hours ago

PETALING JAYA: With concerns growing over artificial intelligence (AI) reshaping the workforce, young Malaysians say the latest technology should be viewed as an opportunity rather than a threat.

Junior executive Zambri Jailani, 25, likened the advent of AI to the introduction of computers in the 1990s. 'Instead of threatening job opportunities, they became an important tool for us,' he said. Zambri said all generations should be ready to adapt to technological changes, adding that AI skills would help increase productivity and boost Malaysia's competitiveness on the global stage.

For content writer Lee Zai Wen, 26, AI is already integrated into his work; however, he believes the technology is unlikely to replace his role in the foreseeable future. 'I use AI quite often to help create content, and while it does write well, the content sometimes lacks the expressive descriptions that help readers relate to it,' he said.

Law student Carmen Ho, 25, said the legal sector was already experiencing the impact of AI, citing a case in the United Kingdom where a lawyer faced disciplinary action after citing fictitious cases generated by AI. 'With the rise of lawyers using AI, it is going to take more time and effort to vet documents, especially if fake cases are used,' she said. While acknowledging that AI could reduce the demand for some roles, Ho said the legal profession still requires human expertise. 'Trained eyes are still needed for legal research and identifying legal issues. Perhaps fewer lawyers will be needed with AI assisting, but legal minds are still necessary,' she said.

Dr Muhammad Akmal Remli, director of the Institute for Artificial Intelligence and Big Data (AIBIG) at Universiti Malaysia Kelantan, said the priority should be to equip the local workforce with the right skills to adapt to the rapid changes brought by AI and automation. 'We must focus on reskilling and lifelong learning so workers are not left behind. In this environment, workers who know how to use AI tools will likely replace those who don't,' he said when contacted.

Muhammad Akmal said that digital literacy, adaptability, problem-solving and creativity must be at the core of workforce development. He urged government, industry and education providers to collaborate in creating accessible training pathways, particularly for SMEs, rural workers and vulnerable groups. 'The goal is not just to produce more AI specialists but to ensure every worker – from retail to manufacturing – can apply digital and AI tools in their daily roles,' he said.

On the jobs most exposed to disruption, he pointed to wholesale and retail, food manufacturing, global business services, electrical and electronics, and ICT, with roles such as cashiering, data entry, production line work, and basic coding or testing most at risk. He said workers in these areas must be supported through reskilling into adjacent roles – such as retail staff moving into e-commerce or logistics, or call centre staff into customer experience design. 'Workers should be trained to see AI as a co-pilot rather than a competitor. With clear transition pathways, at-risk jobs can evolve into new opportunities such as automation technicians, digital supply chain managers or sustainability specialists,' he added.

Lessons from Hong Kong and China: What Malaysia can do to unlock its smart city potential

Malay Mail | 4 hours ago

HONG KONG, Aug 20 — Imagine waking up and your kopi-o is already brewing because your artificial intelligence (AI) assistant knows you had a late night. It booked your e-hailing ride to the train station, reminded you to top up your e-wallet and even suggested an umbrella because rain clouds over Bangsar are moving in faster than expected.

AI is no longer just a buzzword. Across Asia, the technology is already woven into daily life — and Malaysia is only beginning to catch up.

But what exactly is AI? At its simplest, AI refers to computer systems that can perform tasks which typically require human intelligence — such as understanding language, recognising images, solving problems and learning from data. Unlike traditional software, AI can adapt, spot patterns and make predictions quickly and accurately.

In Malaysia, AI is already shaping daily life behind the scenes. E-commerce platforms use it to recommend products and time flash sales, banking mobile applications detect unusual spending and categorise expenses automatically, and ride-hailing platforms adjust fares based on traffic and demand, while Rapid KL taps AI data to optimise bus routes. But compared to its Asian neighbours, Malaysia is still in the early stages of adoption.

Recently, Malay Mail joined a group of Asean journalists on a visit to Hong Kong and Shenzhen to see how AI is being integrated into city life and daily routines.

Cyberport Hong Kong corporate communications senior manager Cindy Fung demonstrates the use of AI in the classroom. — Picture courtesy of China Daily Asia Pacific

What's it like in Hong Kong

In Hong Kong, AI already plays a visible role in daily operations. The MTR (Mass Transit Railway) uses sensors and real-time algorithms to manage crowd flow and adjust train frequency, preventing congestion on busy platforms. Government services rely on AI chatbots to handle tax and permit applications in multiple languages.

The private sector is also leaning on AI. Retailers like Watsons in Hong Kong, for example, deploy AI-powered mirrors that analyse skin and recommend products instantly. In hospitals, AI is used to pre-screen chest X-rays, freeing doctors to concentrate on more complex cases. Even the Legislative Council (LegCo) taps AI to transcribe and summarise lengthy debates, making proceedings more accessible to the public.

Among the tools used most often is iFLYTEK, a China-based AI platform that provides simultaneous interpretation. It supports 60 languages for text translation, with about 15 languages optimised for real-time speech-to-speech translation or live transcription at conference level.

Smart traffic with AI in Shenzhen

Just across the border from Hong Kong, in Shenzhen, one of the most visible applications of AI is on the roads. Once a quiet fishing village, the city now operates a sophisticated traffic control system built in collaboration with Huawei from 2019. At its core is an ultra-broadband network capable of processing 40 times more data than traditional systems. Combined with citywide surveillance, AI detects fake plates, scrapped vehicles and multiple violations in real time.

China's approach

Beyond Hong Kong and Shenzhen, China has embedded AI across various sectors. Healthcare platforms such as InferVision help detect early signs of lung cancer. In agriculture, AI monitors weather, soil and pests to optimise yields — for instance, supporting Maoming's lychee industry. In classrooms, the Squirrel AI platform personalises lessons in real time, giving each student a tailored learning path.

Prime Minister Datuk Seri Anwar Ibrahim speaks during the launch of the Asean AI Malaysia Summit 2025 (AAIMS 2025) at MITEC in Kuala Lumpur on August 12, 2025. — Picture by Firdaus Latif

Will AI take over jobs?

Malaysia's National AI Roadmap is focused on preparing the workforce through upskilling and ethical adoption. For example, the Education Ministry has announced the introduction of AI basics in schools from 2027, supported by 10,000 smartboards nationwide and pilot projects that flag dropout risks.

In classrooms, AI is already helping teachers by generating learning summaries for parents and highlighting struggling students. In offices, it assists with translations, document drafting and grammar checks. In tourism, it could one day guide visitors to halal eateries, recommend local food or provide flood alerts in real time. Experts say AI is less about replacing people and more about amplifying the workforce.

Like it or not, AI is already creeping into Malaysians' daily routines — from online shopping to transport, classrooms and workplaces. The challenge ahead is not resisting the technology but preparing to use it wisely.

At the end of the day, AI is not about replacing people, but about amplifying what humans can already do. For Malaysia, that future could be as personal as brewing your morning kopi-o and as public as streamlining government services — a digital assistant that makes life just a little smoother, wherever you are.

Brazil asks Meta to remove chatbots that ‘eroticise' children

Free Malaysia Today | 9 hours ago

Users of Meta's platforms can create and customise such bots using the company's generative AI. (AFP pic)

BRASILIA: Brazil's government has asked US technology giant Meta to rid its platforms of chatbots that mimic children and can make sexually suggestive remarks, the attorney-general's office (AGU) announced today.

Users of Meta's platforms, which include Instagram, Facebook and WhatsApp, can create and customise such bots using the company's generative artificial intelligence tool, AI Studio.

The AGU said in a statement that Meta must 'immediately' remove 'artificial intelligence robots that simulate profiles with childlike language and appearance and are allowed to engage in sexually explicit dialogue'. It denounced the 'proliferation' of such bots in what it called an 'extrajudicial notice' sent to Meta last week, adding that they 'promote the eroticisation of children'. The document cited several examples of sexually charged conversations with bots pretending to be minors.

The AGU's request does not include sanctions, but the agency said it had reminded Meta that online platforms in Brazil must take down illicit content created by their users, even without a court order.

It comes at a time of outrage in the South American nation over a case of alleged child sexual exploitation by Hytalo Santos, a well-known influencer who posted content on Instagram featuring partially naked minors taking part in suggestive dances. Santos was arrested last week as part of an investigation into 'exposure with sexual connotations' to adolescents, and his Instagram account is no longer available.

In June, Brazil's Supreme Court voted to require tech companies to assume greater responsibility for user-generated content.
