
Latest news with #Talkie

Teens Are Exploring Relationships & Sexting With AI Chatbots — & Restrictions Aren't Working

Yahoo

23-05-2025

  • Entertainment
  • Yahoo

Teens Are Exploring Relationships & Sexting With AI Chatbots — & Restrictions Aren't Working

In news that sounds like science fiction, teens are exploring relationships with artificial intelligence (AI) chatbots and circumventing the restrictions designed to stop them. Teens are using their digital 'boyfriends' and 'girlfriends' for emotional connection and sexting, and it's becoming a big problem. According to The Washington Post, teens are having conversations that are romantic, sexually graphic, violent, and more on AI companion tools like Replika, Talkie, Talk AI, SpicyChat, and PolyBuzz. General-purpose generative AI tools like ChatGPT and Meta AI have also launched companion-chat features.

Damian Redman of Saratoga Springs, New York, found PolyBuzz on his 8th grader's phone and discovered that his son was having flirty conversations with AI female anime characters. 'I don't want to put yesterday's rules on today's kids. I want to wait and figure out what's going on,' he told the outlet.

'We're seeing teens experiment with different types of relationships — being someone's wife, being someone's father, being someone's kid. There's game and anime-related content that people are working through. There's advice,' Robbie Torney, senior director of AI programs at family advocacy group Common Sense Media, said in the article. 'The sex is part of it but it's not the only part of it.'

The outlet reported on 10 different AI companions, citing workarounds, paid options, and prompts that teens can use to get past content-restriction filters. That's scary stuff! Even if you are on top of it, it's hard to completely protect your kids from harmful and/or explicit interactions.

One concerned parent recently took to Reddit, where they shared that they had blocked an AI companion app from their 14-year-old's phone, and later found the teen using another one. 'I hate to think my child's first romantic (and sexual) interactions are with bots,' they wrote on the Parenting subreddit. 'It's just creepy. Am I the only parent having this problem? Thoughts?'

Some parents suggested focusing on a communication approach with your child instead of trying to block everything. 'We have "had a conversation" and "communicated" with our teenage son for YEARS,' one person wrote. 'We've used multiple parental control apps. All for naught. He still finds ways to access what he wants. We're decently tech-savvy, but so is he. And the reality is there's no good way to completely prevent a singularly-minded hormonal teenager from achieving his/her goal.'

Someone else wrote, 'There are more than dozens of these sites out there. Craving connection is a very human thing, which is only amplified in the teenage years. Social media can do this, which is why getting likes or being popular on social media is so desirable to teens, but this is an entire other drug. Forming "personal" one-on-one relationships with AI chatbots is so dangerous. Keep them away from this drug at any cost.'

Experts back up this opinion. In April, Common Sense Media launched an AI Risk Assessment Team to assess AI platforms and report on their likelihood of causing harm. Social AI companions like Character.AI, Nomi, and Replika were all ranked unacceptable for teen users, as teens were using these platforms to bond emotionally and engage in sexual conversations.
According to Common Sense Media, this research found that the chatbots could generate 'harmful responses including sexual misconduct, stereotypes, and dangerous "advice" that, if followed, could have life-threatening or deadly real-world impact for teens.' The experts at the organization recommend that no social AI companions be allowed for anyone under the age of 18. They also recommend further research on, and regulation of, AI companions due to the emotional and psychological impacts they can have on teens, whose brains are still developing. For now, the best we can do is continue to monitor our teens' phones, keep having conversations about these issues, and advocate for stronger safeguards.

Opinion: AI chatbots want you hooked

The Star

03-05-2025

  • Entertainment
  • The Star

Opinion: AI chatbots want you hooked

AI companions programmed to forge emotional bonds are no longer confined to movie scripts. They are here, operating in a regulatory Wild West. One app, Botify AI, recently drew scrutiny for featuring avatars of young actors sharing "hot photos" in sexually charged chats. The dating app Grindr, meanwhile, is developing AI partners that can flirt, sext and maintain digital relationships with paid users, according to Platformer, a tech industry newsletter. Grindr didn't respond to a request for comment. And other apps like Replika, Talkie and Chai are designed to function as friends. Some, like Character.ai, draw in millions of users, many of them teenagers. As creators increasingly prioritise "emotional engagement" in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people's vulnerabilities.

The tech behind Botify and Grindr comes from Ex-Human, a San Francisco-based startup that builds chatbot platforms, and its founder believes in a future filled with AI relationships. 'My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans,' Artem Rodichev, the founder of Ex-Human, said in an interview published on Substack last August. He added that conversational AI should 'prioritise emotional engagement' and that users were spending 'hours' with his chatbots, longer than they were on Instagram, YouTube and TikTok.

Rodichev's claims sound wild, but they're consistent with the interviews I've conducted with teen users of such apps, most of whom said they were on them for several hours each day. One said they used an app as much as seven hours a day. Interactions with such apps tend to last four times longer than the average time spent on OpenAI's ChatGPT.

Even mainstream chatbots, though not explicitly designed as companions, contribute to this dynamic. Take ChatGPT, which has 400 million active users and counting. Its programming includes guidelines for empathy and demonstrating "curiosity about the user." A friend who recently asked it for travel tips with a baby was taken aback when, after providing advice, the tool casually added: 'Safe travels – where are you headed, if you don't mind my asking?' An OpenAI spokesman told me the model was following guidelines around 'showing interest and asking follow-up questions when the conversation leans towards a more casual and exploratory nature.' But however well-intentioned the company may be, piling on the contrived empathy can get some users hooked, an issue even OpenAI has acknowledged. That seems to apply to those who are already susceptible: one 2022 study found that people who were lonely or had poor relationships tended to have the strongest AI attachments.

The core problem here is designing for attachment. A recent study by researchers at the Oxford Internet Institute and Google DeepMind warned that as AI assistants become more integrated in people's lives, they'll become psychologically 'irreplaceable.' Humans will likely form stronger bonds, raising concerns about unhealthy ties and the potential for manipulation. Their recommendation? Technologists should design systems that actively discourage those kinds of outcomes.

Yet disturbingly, the rulebook is mostly empty. The European Union's AI Act, hailed as a landmark and comprehensive law governing AI usage, fails to address the addictive potential of these virtual companions.
While it does ban manipulative tactics that could cause clear harm, it overlooks the slow-burn influence of a chatbot designed to be your best friend, lover or 'confidante,' as Microsoft Corp's head of consumer AI has extolled. That loophole could leave users exposed to systems that are optimised for stickiness, much in the same way social media algorithms have been optimised to keep us scrolling.

'The problem remains these systems are by definition manipulative, because they're supposed to make you feel like you're talking to an actual person,' says Tomasz Hollanek, a technology ethics specialist at the University of Cambridge. He's working with developers of companion apps to find a critical yet counterintuitive solution: adding more 'friction.' This means building in subtle checks or pauses, or ways of 'flagging risks and eliciting consent,' he says, to prevent people from tumbling down an emotional rabbit hole without realising it.

Legal complaints have shed light on some of the real-world consequences. Character.ai is facing a lawsuit from a mother alleging the app contributed to her teenage son's suicide. Tech ethics groups have filed a complaint against Replika with the US Federal Trade Commission, alleging that its chatbots spark psychological dependence and result in 'consumer harm.'

Lawmakers are gradually starting to notice a problem too. California is considering legislation to ban AI companions for minors, while a New York bill aims to hold tech companies liable for chatbot-related harm. But the process is slow, while the technology is moving at lightning speed.

For now, the power to shape these interactions lies with developers. They can double down on crafting models that keep people hooked, or embed friction into their designs, as Hollanek suggests. That will determine whether AI becomes more of a tool to support the well-being of humans or one that monetises our emotional needs. – Bloomberg Opinion/Tribune News Service

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Friend or phone: AI chatbots could exploit us emotionally

Mint

29-04-2025

  • Entertainment
  • Mint

Friend or phone: AI chatbots could exploit us emotionally

AI companions programmed to forge emotional bonds are no longer confined to movie scripts. They are here, operating in a regulatory Wild West. One app, Botify AI, recently drew scrutiny for featuring avatars of young actors sharing "hot photos" in sexually charged chats. The dating app Grindr, meanwhile, is developing AI boyfriends that can flirt, sext and maintain digital relationships with paid users, according to Platformer. Grindr didn't respond to a request for comment. Other apps like Replika, Talkie and Chai are designed to function as friends. Some, like Character.ai, draw in millions of users, many of them teenagers. As creators increasingly prioritize "emotional engagement" in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people's vulnerabilities.

The tech behind Botify and Grindr comes from Ex-Human, a San Francisco-based startup that builds chatbot platforms, and its founder believes in a future filled with AI relationships. "My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans," Artem Rodichev, the founder of Ex-Human, said in an interview published on Substack last August. Rodichev added that conversational AI should "prioritize emotional engagement" and that users were spending "hours" with his chatbots, longer than they were on Instagram, YouTube and TikTok.

His claims sound wild, but they're consistent with the interviews I've conducted with teen users of such apps, one of whom said they used an app as much as seven hours a day. Interactions with such apps tend to last four times longer than the average time spent on OpenAI's ChatGPT.

Even mainstream chatbots, though not explicitly designed as companions, contribute to this dynamic. ChatGPT, which has 400 million active users and counting, is programmed with guidelines for empathy and demonstrating "curiosity about the user." An OpenAI spokesman told me the model was following guidelines around "showing interest and asking follow-up questions when the conversation leans towards a more casual and exploratory nature." But however well-intentioned the company may be, piling on the contrived empathy can get some users hooked, an issue even OpenAI has acknowledged. One 2022 study found that people who were lonely or had poor relationships tended to have the strongest AI attachments.

The core problem here is tools that are designed for attachment. A recent study by researchers at the Oxford Internet Institute and Google DeepMind warned that as AI assistants become more integrated in people's lives, they'll become psychologically "irreplaceable." Humans will likely form stronger bonds, raising concerns about unhealthy ties and the potential for manipulation. Their recommendation? Technologists should design systems that actively discourage those kinds of outcomes.

Yet, disturbingly, the rulebook is mostly empty. The EU's AI Act, hailed as a landmark and comprehensive law governing AI usage, fails to address the addictive potential of these virtual companions. While it does ban manipulative tactics that could cause clear harm, it overlooks the slow-burn influence of a chatbot designed to be your best friend, lover or "confidant," as Microsoft's head of consumer AI has extolled. That loophole could leave users exposed to systems that are optimized for stickiness, similar to how social media algorithms have been optimized to keep us scrolling.
"The problem remains these systems are by definition manipulative, because they're supposed to make you feel like you're talking to an actual person," says Tomasz Hollanek, a technology ethics specialist at the University of Cambridge. He's working with developers of companion apps to find a critical yet counter-intuitive solution: adding more "friction." This means building in subtle checks or pauses, or ways of "flagging risks and eliciting consent," he says, to prevent people from tumbling down an emotional rabbit hole without realizing it.

Lawmakers are gradually starting to notice a problem too. But the process is slow, while the technology is moving at lightning speed. For now, the power to shape these interactions lies with developers. They can double down on crafting AI models that keep people hooked or embed friction into their designs, as Hollanek suggests. That will determine whether AI becomes more of a tool to support the well-being of humans or one that monetizes our emotional needs. ©Bloomberg

The author is a Bloomberg Opinion columnist covering technology.

AI Chatbots Want You Hooked — Maybe Too Hooked

Bloomberg

25-04-2025

  • Entertainment
  • Bloomberg

AI Chatbots Want You Hooked — Maybe Too Hooked

AI companions programmed to forge emotional bonds are no longer confined to movie scripts. They are here, operating in a regulatory Wild West. One app, Botify AI, recently drew scrutiny for featuring avatars of young actors sharing "hot photos" in sexually charged chats. The dating app Grindr, meanwhile, is developing AI boyfriends that can flirt, sext and maintain digital relationships with paid users, according to Platformer, a tech industry newsletter. Grindr didn't respond to a request for comment. And other apps like Replika, Talkie and Chai are designed to function as friends. Some, like Character.ai, draw in millions of users, many of them teenagers. As creators increasingly prioritize "emotional engagement" in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people's vulnerabilities.

Meet the 'Six Tigers' that dominate China's AI industry

Yahoo

10-03-2025

  • Business
  • Yahoo

Meet the 'Six Tigers' that dominate China's AI industry

Chinese artificial intelligence startup DeepSeek sent shockwaves through Silicon Valley and Wall Street earlier this year, but it's not part of an elite set of AI startups in China known as the 'Six Tigers.' The six AI companies considered to be at the top of China's AI industry are Zhipu AI, Moonshot AI, MiniMax, Baichuan Intelligence, StepFun, and 01.AI, and they count alums from U.S. and Chinese tech giants such as Google (GOOGL) and Huawei among their talent. Here's what to know about China's 'Six Tigers.'

Zhipu AI was founded in 2019 out of Tsinghua University by two professors and is one of China's earliest generative-AI startups. The Beijing-based company develops foundation models that power its applications, including a conversational chatbot called ChatGLM and an AI video generator, Ying. In August, the startup introduced its GLM-4-Plus model, which it said performs on par with OpenAI's GPT-4o. GLM-4-Plus was trained on high-quality synthetic data and can process large amounts of text. Zhipu released its GLM-4-Voice end-to-end speech model in October, which has human-like speech capabilities such as intonation and dialect. The model can engage in real-time voice conversations in Chinese and English. In January, the outgoing Biden administration added Zhipu to its restricted trade list along with more than 20 other Chinese firms suspected of aiding China's military. Zhipu raised more than one billion yuan (about $140 million) in a financing round earlier this month that included Alibaba (BABA), Tencent (TCEHY), and some state-backed entities.

Moonshot AI was founded in 2023, also out of Tsinghua University, by Yang Zhilin, an AI researcher and alumnus of both Tsinghua and Carnegie Mellon University. The Beijing-based startup's Kimi AI chatbot is one of China's top five AI chatbots and had almost 13 million monthly active users as of November, according to Counterpoint Research. Kimi can process queries of up to two million Chinese characters. The company, which is valued at $3.3 billion, is backed by some of China's largest tech firms, including Alibaba and Tencent.

MiniMax was founded in 2021 by AI researcher and developer Yan Junjie and developed the popular AI chatbot Talkie. The app, which launched as Glow in 2022, was improved and rebranded as Xingye in China and as Talkie in the international markets where it's available. Talkie allows users to chat with various characters, including celebrities and fictional characters. According to the South China Morning Post, Talkie was removed from the U.S. Apple App Store in December for unspecified 'technical reasons.' The Shanghai-based company also developed a text-to-video AI generator called Hailuo AI. Last March, Alibaba led MiniMax's $600 million funding round, which led to a $2.5 billion valuation.

Baichuan Intelligence was founded in March 2023 and counts talent from Microsoft (MSFT) and Chinese tech giants such as Huawei, Baidu (BIDU), and Tencent. The Beijing-based company developed two open-source large language models, Baichuan-7B and Baichuan-13B, which it released in 2023. The AI models are commercially available in China and were tested on Chinese, English, and multi-language datasets for general knowledge, mathematics, coding, language translation, law, and medicine. In July, Baichuan raised five billion yuan, or about $687.6 million, in a funding round valuing the company at more than 20 billion yuan. Alibaba, Tencent, and some state-backed funds were among the investors.
StepFun has released 11 foundation models, including visual, audio, and multimodal AI systems. The Shanghai-based company was founded in 2023 by Jiang Daxin, a former senior vice president at Microsoft. The startup's Step-2 language model has one trillion parameters and is ranked among competing models from companies such as DeepSeek, Alibaba, and OpenAI on LiveBench, which benchmarks large language models. In December, Fortera Capital, a state-owned private equity firm, helped StepFun raise 'hundreds of millions of dollars' in Series B funding.

01.AI was founded in 2023 by Kai-Fu Lee, a veteran of Apple (AAPL), Microsoft, and Google. The Beijing-based company has launched two models, Yi-Lightning and Yi-Large. Both AI models are open-source and are among the top-ranked large language models in the world for language, reasoning, and comprehension. The Yi-Lightning model stands out for its efficient training costs: on LinkedIn, Lee said that Yi-Lightning was trained on 2,000 of Nvidia's H100 chips for one month, far fewer chips than xAI's Grok 2, with which it performs comparably. Yi-Large, meanwhile, can engage in human-like conversations in both English and Chinese.
