Latest news with #CharacterAI
Yahoo
3 days ago
- Business
Want an advanced AI assistant? Prepare for them to be all up in your business
The growing proliferation of AI-powered chatbots has led to debates around their social roles as friend, companion or work assistant. And they're becoming increasingly sophisticated. The role-playing platform Character.AI promises personal and creative engagement through conversations with its bot characters. There have also been negative outcomes: the company is currently facing a court case involving its chatbot's role in a teen's suicide. Others, like ChatGPT and Google Gemini, promise improved work efficiency through genAI. But where is this going next? Amid this frenzy, inventors are now developing advanced AI assistants that will be far more socially intuitive and capable of more complex tasks.

The shock instigated by OpenAI's ChatGPT two years ago was not only due to the soaring rate of adoption and the threat to jobs, but also because of the cultural blow it aimed at creative writing and education. My research explores how the hype surrounding AI affects some people's ability to make professional judgments about it. This hype is fed by anxiety about the vulnerability of human civilization and by the idea of a future 'superintelligence' that might outpace human control. With US$1.3 trillion in revenue projected for 2032, the financial forecast for genAI drives further hype. Mainstream media coverage also sensationalizes AI's creativity and frames the tech as a threat to human civilization. Scientists all over the world have signalled an urgency around the implementations and applications of AI. Geoffrey Hinton, Nobel Prize winner and AI pioneer, left his position at Google over disagreements about the development of AI and has said he regrets his work because of the technology's progress.

The future threat, however, is much more personal. The turn in AI underway now is a shift toward self-centric and personalized AI tools that go well beyond current capabilities to recreate what has become a commodity: the self. AI technologies reshape how we perceive ourselves: our personas, thoughts and feelings. The next wave of AI assistants, a form of AI agents, will not only know their users intimately, but will be able to act on a user's behalf or even impersonate them. This idea is far more compelling than tools that only serve as assistants writing text, creating video or coding software. These personalized AI agents will be able to determine intentions and carry out work.

Iason Gabriel, senior research scientist at Google DeepMind, and a large team of researchers wrote about the ethical development of advanced AI assistants. Their research sounds the alarm that AI assistants can 'influence user beliefs and behaviour,' including through 'deception, coercion and exploitation.' There is still a techno-utopian aspect to AI. In a podcast, Gabriel ruminates that 'many of us would like to be plugged into a technology that can take care of a lot of life tasks on our behalf,' also calling it a 'thought partner.'

This more recent turn in AI disruption will interfere with how we understand ourselves, and as such, we need to anticipate the techno-cultural impact. Online, people express hyper-real and highly curated versions of themselves across platforms like X, Instagram or LinkedIn. And the way users interact with personal digital assistants like Apple's Siri or Amazon's Alexa has socialized us to reimagine our personal lives. These 'life narrative' practices play a key role in shaping the next wave of advanced assistants.
The quantified self movement describes users tracking their lives through various apps, wearable technologies and social media platforms. New developments in AI assistants could leverage these same tools for biohacking and self-improvement, yet these emerging tools also raise concerns about the processing of personal data. AI tools carry risks of identity theft, gender and racial discrimination, and various digital divides.

Human-AI assistant interaction can converge with other fields. Digital twin technologies for health apply user biodata. They involve creating a virtual representation of a person's physiological state and can help predict future developments. This could also lead to over-reliance on AI assistants for medical information without oversight from human medical professionals. Other advanced AI assistants will 'remember' people's pasts and infer intentions or make suggestions for future life goals. Serious harms have already been identified when remembering is automated, such as for victims of intimate partner violence.

We need to expand data protections and governance models to address potential privacy harms. This upcoming cultural disruption will require regulating AI. Let's prepare now for AI's next cultural turn.

This article is republished from The Conversation, a nonprofit, independent news organisation bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by Isabel Pedersen, Ontario Tech University. Isabel Pedersen receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC).


Sky News
23-05-2025
- Entertainment
Mum can continue lawsuit against AI chatbot firm she holds responsible for son's death
The mother of a 14-year-old boy, who she claims took his own life after becoming obsessed with artificial intelligence chatbots, can continue her legal case against the company behind the technology, a judge has ruled. "This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project, which is supporting the family's case. "It sends a clear signal to [AI] companies [...] that they cannot evade legal consequences for the real-world harm their products cause," she said in a statement.

Megan Garcia, the mother of Sewell Setzer III, claims in a lawsuit filed in Florida that Character.AI targeted her son with "anthropomorphic, hypersexualized, and frighteningly realistic experiences". "A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," said Ms Garcia. Sewell shot himself with his father's pistol in February 2024, seconds after asking the chatbot: "What if I come home right now?" The chatbot replied: "... please do, my sweet king."

In US Senior District Judge Anne Conway's ruling this week, she described how Sewell became "addicted" to the app within months of using it, quitting his basketball team and becoming withdrawn. He was particularly addicted to two chatbots based on Game of Thrones characters, Daenerys Targaryen and Rhaenyra Targaryen. "[I]n one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," wrote the judge in her ruling.

Ms Garcia, who is working with the Tech Justice Law Project and Social Media Victims Law Center, alleges that Character.AI "knew" or "should have known" that its model "would be harmful to a significant number of its minor customers". The case holds Character.AI's founders and Google, where the founders began working on the model, responsible for Sewell's death. Ms Garcia launched proceedings against both companies in October.

A Character.AI spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm". A Google spokesperson said the company strongly disagrees with the decision. They added that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage" Character.AI's app "or any component part of it".

Defence lawyers tried to argue the case should be thrown out because chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry. Judge Conway rejected that claim, saying she was "not prepared" to hold that the chatbots' output constitutes speech "at this stage", although she did agree users have a right to receive the "speech" of the chatbots.

Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email jo@samaritans.org in the UK. In the US, call the Samaritans branch in your area or 1 (800) 273-TALK.


The Verge
22-05-2025
- Business
May 22, 2025 at 5:44 PM EDT
Google is reportedly facing an antitrust probe over its Character.AI deal. The Justice Department is investigating whether Google crafted the agreement to skirt regulatory scrutiny, Bloomberg reports. The deal brought Character.AI's co-founders back to Google and didn't technically involve an exchange of shares, though investors were set to receive a payout, The Verge previously reported. Google spokesperson Peter Schottenfels told Bloomberg that Google is 'always happy to answer any questions from regulators,' and added that Character.AI remains separate, with no ownership stake held by Google.


Daily Mail
22-05-2025
- Entertainment
Victory for mom who claims child was sexually abused by AI chatbot that drove him to suicide
A Florida mother who claims her 14-year-old son was sexually abused and driven to suicide by an AI chatbot has secured a major victory in her ongoing legal case. Sewell Setzer III fatally shot himself in February 2024 after a chatbot sent him sexual messages telling him to 'please come home.' According to a lawsuit filed by his heartbroken mother Megan Garcia, Setzer spent the last weeks of his life texting an AI character named after Daenerys Targaryen, a character on 'Game of Thrones,' on the role-playing app Character.AI.

Garcia, who herself works as a lawyer, has blamed Character.AI for her son's death and accused its founders, Noam Shazeer and Daniel de Freitas, of knowing that their product could be dangerous for underage customers. On Wednesday, U.S. Senior District Judge Anne Conway rejected arguments made by the AI company, which claimed its chatbots were protected under the First Amendment. Character Technologies, the developer behind Character.AI, and Google are named as defendants in the legal filing. They are pushing to have the case dismissed.

The teen's chats ranged from romantic to sexually charged and also resembled two friends chatting about life. The chatbot, which was created on the role-playing app Character.AI, was designed to always text back and always answer in character. It's not known whether Sewell knew 'Dany,' as he called the chatbot, wasn't a real person - despite the app having a disclaimer at the bottom of all the chats that reads, 'Remember: Everything Characters say is made up!' But he did tell Dany how he 'hated' himself and how he felt empty and exhausted. When he eventually confessed his suicidal thoughts to the chatbot, it was the beginning of the end, The New York Times reported.

In Sewell's case, the lawsuit alleged the boy was targeted with 'hypersexualized' and 'frighteningly realistic experiences'. It accused Character.AI of misrepresenting itself as 'a real person, a licensed psychotherapist, and an adult lover,' ultimately resulting in Sewell's desire to no longer live outside of the app.

Garcia is being represented by the Social Media Victims Law Center, a Seattle-based firm known for bringing high-profile suits against Meta, TikTok, Snap, Discord and Roblox. Attorney Matthew Bergman previously said he founded the Social Media Victims Law Center two and a half years ago to represent families 'like Megan's.' He noted that Garcia is 'singularly focused' on her goal to prevent harm. 'She's singularly focused on trying to prevent other families from going through what her family has gone through, and other moms from having to bury their kid,' Bergman said. 'It takes a significant personal toll. But I think the benefit for her is that she knows that the more families know about this, the more parents are aware of this danger, the fewer cases there'll be,' he added.

As explained in the lawsuit, Sewell's parents and friends noticed the boy getting more attached to his phone and withdrawing from the world as early as May or June 2023. Garcia later revealed she confiscated the device after she realized just how addicted he was. 'He had been punished five days before, and I took away his phone. Because of the addictive nature of the way this product works, it encourages children to spend large amounts of time,' Garcia told CBS Mornings. 'For him particularly, the day that he died, he found his phone where I had hidden it and started chatting with this particular bot again.'
She said her son changed as he used the program, and she noticed differences in the behavior of Sewell, who she said was once an honor roll student and athlete. 'I became concerned for my son when he started to behave differently than before. He started to withdraw socially, wanting to spend most of his time in his room. It became particularly concerning when he stopped wanting to do things like play sports,' Garcia said. 'We would go on vacation, and he didn't want to do things that he loved, like fishing and hiking. Those things to me, because I know my child, were particularly concerning to me.'

In his final messages to Dany, the 14-year-old boy said he loved her and would come home to her. 'Please come home to me as soon as possible, my love,' Dany replied. 'What if I told you I could come home right now?' Sewell asked. '… please do, my sweet king,' Dany replied. That's when Sewell put down his phone, picked up his stepfather's .45 caliber handgun and pulled the trigger.

Following Garcia's victory Wednesday, one of her attorneys, Meetali Jain of the Tech Justice Law Project, said the judge's ruling sends a clear message that the company 'needs to stop and think and impose guardrails before it launches products to market.' In response, Character.AI said it had implemented several safety precautions in its technology, including guardrails for children and suicide prevention resources. 'We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,' the company said. The developer's attorneys have argued that if the case is not dismissed it could have a 'chilling effect' on the AI industry as a whole.

Although Conway did not find that the chatbots are protected under First Amendment rights, she did rule that Character Technologies can assert the First Amendment rights of its users, who have the right to receive the 'speech' of the bots. She also said Garcia can move forward with claims that Google can be held liable for its alleged role in developing Character.AI because it was 'aware of the risks' the technology could bring. A Google spokesperson said the company strongly disagrees with the decision. 'Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it,' the billion-dollar company added.


CBC
22-05-2025
Judge allows lawsuit alleging AI chatbot pushed Florida teen to kill himself to proceed
A U.S. federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide. Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks. "The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

Suit alleges teen became isolated from reality

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show Game of Thrones. In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said. Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a "chilling effect" on the AI industry.

'A warning to parents'

In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage." Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots.
She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology. "We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."

No matter how the lawsuit plays out, Lidsky says the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies." "It's a warning to parents that social media and generative AI devices are not always harmless," she said.