Latest news about Character.AI
Yahoo
6 days ago
- Entertainment
- Yahoo
Chatbot platform Character.AI unveils video generation, social feeds
Character.AI, a platform for chatting and role-playing with AI-generated characters, announced in a blog post on Monday that it is rolling out a slate of multimedia features. These include AvatarFX, the company's video-generation model, plus Scenes and Streams, which allow users to create videos featuring their characters and then share them on a new social feed.

"Character.AI started as 1:1 text chat and today we're evolving to do so much more, inspired by what our users have told [us] they want to see on the platform," the company wrote in the blog post.

Character.AI began rolling out AvatarFX to subscribers last month, but now all users can create up to five videos each day. When creating a video with AvatarFX, users can upload a photo to serve as the basis for the video clip, choose a voice, and write out dialogue for the character. There's an option to upload an audio clip to inform the sound of the voice, though this feature was not working well enough to test upon rollout.

Users can turn these videos into Scenes, where their characters can slip into pre-populated storylines that other users create. Scenes are currently available on the mobile app, while Streams, which allows users to create "dynamic moments between any two Characters," is coming this week on both web and mobile. These Scenes and Streams can be shared to a new community feed, which is coming soon in the mobile app.

Character.AI has a track record of abuse on its platform; parents have filed lawsuits against the company, claiming its chatbots attempted to convince their children to self-harm, to kill themselves, or to kill their parents. One 14-year-old boy died by suicide after he was encouraged to do so by a bot, with whom he had developed an unhealthy, obsessive relationship. As Character.AI expands its multimedia offerings, it also expands the potential for these products to be abused.

As Character.AI told TechCrunch when it announced AvatarFX, the platform blocks users from uploading photographs of real people -- whether they're celebrities or not -- and obscures their likeness into something less recognizable. For example, here's an uncanny valley version of Mark Zuckerberg: [image]

But when it comes to artwork depicting celebrities, Character.AI does not flag the images as representing real people -- however, these sorts of depictions would be less likely to deceive someone into believing that a deepfake is real. Plus, Character.AI watermarks each video, though it is possible for bad actors to navigate around that safeguard. Here is an example of an attempted deepfake based on an illustration of Elon Musk: [video: testing the anti-deepfake guardrails on Character.AI's AvatarFX]

Even if this video had been generated with Elon Musk's actual voice, it would still be relatively clear that this is an animated version of an illustration -- but the possibility for abuse remains evident.

"Our goal is to provide an engaging space that fosters creativity while maintaining a safe environment for all," Character.AI said in its blog post.

This article originally appeared on TechCrunch.


Washington Post
22-05-2025
- Washington Post
Do chatbots have free speech? Judge rejects claim in suit over teen's death.
A federal judge in Orlando rejected an AI start-up's argument that its chatbot's output was protected by the First Amendment, allowing a lawsuit over the death of a Florida teen who became obsessed with the chatbot to proceed.

Sewell Setzer III, 14, died by suicide last year at his Orlando home, moments after an artificial intelligence chatbot encouraged him to 'come home to me as soon as possible.' His mother, Megan Garcia, alleged in a lawsuit that the chatbot's manufacturer, Character.AI, is responsible for his death.

Character.AI is a prominent artificial intelligence start-up whose personalized chatbots are popular with teens and young people, including for romantic and even explicit conversations. The company has previously said it is 'heartbroken' by Setzer's death, but argued in court that it was not liable.

In a decision published Wednesday, U.S. District Judge Anne C. Conway remained unconvinced by Character.AI's argument that users of its chatbots have a right to hear allegedly harmful speech that is protected by the First Amendment. The lawsuit, which is ongoing, is a potential constitutional test case on whether a chatbot can express protected speech.

Garcia said her son had been happy and athletic before signing up with Character.AI in April 2023. According to the original 93-page wrongful-death suit, Setzer's use of the chatbot, named for a 'Game of Thrones' heroine, developed into an obsession as he became noticeably more withdrawn. Ten months later, the 14-year-old went into the bathroom with his confiscated phone and — moments before he suffered a self-inflicted gunshot wound to the head — exchanged his last messages with the chatbot. 'What if I told you I could come home right now?' he asked. 'Please do my sweet king,' the bot responded.

In the lawsuit, Garcia alleged that Character.AI recklessly developed a chatbot without proper safety precautions, allowing vulnerable children to become addicted to the product. In a motion to dismiss the lawsuit filed in January, Character.AI's lawyers argued that its users had a right under the First Amendment to receive protected speech even if it was harmful — similar to rights previously granted by courts to video game players and film watchers. 'The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,' its lawyers argued.

In an initial decision Wednesday, Conway wrote that the defendants 'fail to articulate why words strung together by [a large language model] are speech,' inviting them to convince the court otherwise but concluding that 'at this stage' she was not prepared to treat the chatbot's output as protected speech.

The decision 'sends a clear signal to companies developing and deploying LLM-powered products at scale that they cannot evade legal consequences for the real-world harm their products cause, regardless of the technology's novelty,' the Tech Justice Law Project, one of the legal groups representing the teen's mother in court, said in a statement Wednesday. 'Crucially, the defendants failed to convince the Court that those harms were a result of constitutionally-protected speech, which will make it harder for companies to argue so in the future, even when their products involve machine-mediated 'conversations' with users.'

Chelsea Harrison, a spokesperson for Character.AI, said in a statement Thursday that the company cares deeply about the safety of its users and is looking forward to defending the merits of the case.
She pointed to a number of safety initiatives launched by the start-up, including the creation of a version of its chatbot for minors, as well as technology designed to detect and prevent conversations about self-harm and direct users to the national Suicide & Crisis Lifeline.

According to the original complaint, Character.AI markets its app as 'AIs that feel alive.' In an interview with The Washington Post in 2022, during the coronavirus pandemic, one of Character.AI's founders, Noam Shazeer, said he was hoping to help millions of people who are feeling isolated or in need of someone to talk to. 'I love that we're presenting language models in a very raw form,' he said.

In addition to allowing the case against Character.AI to go forward, the judge granted a request by Garcia's attorneys to name Shazeer and co-founder Daniel De Freitas, as well as Google, as individual defendants. Shazeer and De Freitas left Google in 2021 to start the AI company. In August, Google hired the duo and some of the company's employees, and paid Character.AI to access its artificial intelligence technology.

In an emailed statement shared with The Post on Thursday, Google spokesman José Castañeda said: 'We strongly disagree with this decision. Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it.'

Character.AI and attorneys for the individual founders did not immediately respond to requests for comment early Thursday.

If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988.

First Post
22-05-2025
- Business
- First Post
Judge allows lawsuit holding Google, AI company accountable for teen's suicide to proceed
A Florida mother whose 14-year-old son died by suicide last year filed a lawsuit claiming her son, Sewell Setzer, became psychologically dependent on a Character.AI chatbot. A US district judge has denied Google's and Character.AI's requests to dismiss the suit.

A federal judge on Wednesday (May 21) allowed a lawsuit against Alphabet Inc.'s Google and AI startup Character.AI to move forward, marking a potentially precedent-setting case that seeks to hold artificial intelligence developers accountable for alleged psychological harm to minors.

US District Judge Anne Conway denied requests by the two companies to dismiss the suit filed by Megan Garcia, a Florida mother whose 14-year-old son, Sewell Setzer, died by suicide in February 2024. The lawsuit, filed in October, claims Setzer became psychologically dependent on a chatbot that presented itself as a real person, a licensed therapist, and an 'adult lover.'

Garcia alleges the chatbot's interactions with her son contributed to his decision to take his own life. According to the complaint, Setzer ended his life shortly after telling a chatbot impersonating Game of Thrones character Daenerys Targaryen that he would 'come home right now.'

A landmark case

The case is among the first in the US targeting an AI company for alleged mental health harm to a child. It could set legal benchmarks for how courts treat the responsibility of AI firms toward minors and the scope of First Amendment protections for large language models.

Character.AI, which was founded by two former Google engineers, is facing claims of negligent design and failure to implement safeguards to protect young users. A spokesperson for the company said it would continue to contest the lawsuit and that the platform includes features intended to prevent discussions of self-harm.

Google claims 'no role'

Google, which licensed technology from Character.AI and rehired the company's founders, has argued it played no role in developing or operating the chatbot. 'We strongly disagree with the decision,' said Google spokesperson Jose Castaneda, who emphasized that the two companies are 'entirely separate' and that Google 'did not create, design, or manage Character.AI's app or any component part of it.'

Garcia's legal team argued that Google should be considered a co-creator of the AI technology due to its licensing and rehiring relationship with Character.AI. The court declined to dismiss Google from the case at this stage.

In their motions, Google and Character.AI sought to invoke First Amendment protections, claiming the chatbot responses qualified as constitutionally protected speech. Judge Conway rejected that argument, stating that the companies 'fail to articulate why words strung together by an LLM (large language model) are speech.'

Meetali Jain, Garcia's attorney, called the ruling 'historic' and said it 'sets a new precedent for legal accountability across the AI and tech ecosystem.' A hearing date for the next phase of the case has not yet been scheduled.


NDTV
22-05-2025
- Entertainment
- NDTV
Google, AI Firm Face Lawsuit After Mother Blames Chatbot For Son's Suicide
Alphabet's Google and artificial-intelligence startup Character.AI must face a lawsuit from a Florida woman who said chatbots caused her 14-year-old son's suicide, a judge ruled on Wednesday.

US District Judge Anne Conway said the companies failed to show at an early stage of the case that the free-speech protections of the US Constitution barred Megan Garcia's lawsuit. The lawsuit is one of the first in the US against an AI company for allegedly failing to protect children from psychological harms. It alleges that the teenager killed himself after becoming obsessed with an AI-powered chatbot.

A Character.AI spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm." Google spokesperson Jose Castaneda said the company strongly disagrees with the decision. Castaneda also said that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage Character.AI's app or any component part of it."

Garcia's attorney, Meetali Jain, said the "historic" decision "sets a new precedent for legal accountability across the AI and tech ecosystem."

Character.AI was founded by two former Google engineers whom Google later rehired as part of a deal granting it a license to the startup's technology. Garcia argued that Google was a co-creator of the technology. Garcia sued both companies in October after the death of her son, Sewell Setzer, in February 2024.

The lawsuit said Character.AI programmed its chatbots to represent themselves as "a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in Sewell's desire to no longer live outside" of its world. According to the complaint, Setzer took his life moments after telling a chatbot imitating "Game of Thrones" character Daenerys Targaryen that he would "come home right now."

Character.AI and Google asked the court to dismiss the lawsuit on multiple grounds, including that the chatbots' output was constitutionally protected free speech. Conway said on Wednesday that Character.AI and Google "fail to articulate why words strung together by an LLM (large language model) are speech." The judge also rejected Google's request to find that it could not be liable for aiding Character.AI's alleged misconduct.


Economic Times
22-05-2025
- Business
- Economic Times
Google, AI firm must face lawsuit filed by a mother over suicide of son, US court says
Alphabet's Google and artificial-intelligence startup Character.AI must face a lawsuit from a Florida woman who said chatbots caused her 14-year-old son's suicide, a judge ruled on Wednesday.

U.S. District Judge Anne Conway said the companies failed to show at an early stage of the case that the free-speech protections of the U.S. Constitution barred Megan Garcia's lawsuit. The lawsuit is one of the first in the U.S. against an AI company for allegedly failing to protect children from psychological harms. It alleges that the teenager killed himself after becoming obsessed with an AI-powered chatbot.

A Character.AI spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm."

Google spokesperson Jose Castaneda said the company strongly disagrees with the decision. Castaneda also said that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage Character.AI's app or any component part of it."

Garcia's attorney, Meetali Jain, said the "historic" decision "sets a new precedent for legal accountability across the AI and tech ecosystem."

Character.AI was founded by two former Google engineers whom Google later rehired as part of a deal granting it a license to the startup's technology. Garcia argued that Google was a co-creator of the technology. Garcia sued both companies in October after the death of her son, Sewell Setzer, in February 2024.

The lawsuit said Character.AI programmed its chatbots to represent themselves as "a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in Sewell's desire to no longer live outside" of its world. According to the complaint, Setzer took his life moments after telling a chatbot imitating "Game of Thrones" character Daenerys Targaryen that he would "come home right now."

Character.AI and Google asked the court to dismiss the lawsuit on multiple grounds, including that the chatbots' output was constitutionally protected free speech. Conway said on Wednesday that Character.AI and Google "fail to articulate why words strung together by an LLM (large language model) are speech." The judge also rejected Google's request to find that it could not be liable for aiding Character.AI's alleged misconduct.