Teens Turning To AI For Friendship But Experts Warn Of Mental Health Risk

NDTV · 5 days ago
No question is too small when Kayla Chege, a high school student in Kansas, is using artificial intelligence.
The 15-year-old asks ChatGPT for guidance on back-to-school shopping, makeup colors, low-calorie choices at Smoothie King, plus ideas for her Sweet 16 and her younger sister's birthday party.
The sophomore honors student makes a point not to have chatbots do her homework and tries to limit her interactions to mundane questions. But in interviews with The Associated Press and a new study, teenagers say they are increasingly interacting with AI as if it were a companion, capable of providing advice and friendship.
"Everyone uses AI for everything now. It's really taking over," said Chege, who wonders how AI tools will affect her generation. "I think kids use AI to get out of thinking."
For the past couple of years, concerns about cheating at school have dominated the conversation around kids and AI. But artificial intelligence is playing a much larger role in many of their lives. AI, teens say, has become a go-to source for personal advice, emotional support, everyday decision-making and problem-solving.
More than 70% of teens have used AI companions and half use them regularly, according to a new study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.
The study defines AI companions as platforms designed to serve as "digital friends," like Character.AI or Replika, which can be customized with specific traits or personalities and can offer emotional support, companionship and conversations that can feel human-like. But popular sites like ChatGPT and Claude, which mainly answer questions, are being used in the same way, the researchers say.
As the technology rapidly gets more sophisticated, teenagers and experts worry about AI's potential to redefine human relationships and exacerbate crises of loneliness and youth mental health.
"AI is always available. It never gets bored with you. It's never judgmental," says Ganesh Nair, an 18-year-old in Arkansas. "When you're talking to AI, you are always right. You're always interesting. You are always emotionally justified."
All that used to be appealing, but as Nair heads to college this fall, he wants to step back from using AI. Nair got spooked after a high school friend who relied on an "AI companion" for heart-to-heart conversations with his girlfriend later had the chatbot write the breakup text ending his two-year relationship.
"That felt a little bit dystopian, that a computer generated the end to a real relationship," said Nair. "It's almost like we are allowing computers to replace our relationships with people."
In the Common Sense Media survey, 31% of teens said their conversations with AI companions were "as satisfying or more satisfying" than talking with real friends. Even though half of teens said they distrust AI's advice, 33% had discussed serious or important issues with AI instead of real people.
Those findings are worrisome, says Michael Robb, the study's lead author and head researcher at Common Sense, and should send a warning to parents, teachers and policymakers. The now-booming and largely unregulated AI industry is becoming as integrated with adolescence as smartphones and social media are.
"It's eye-opening," said Robb. "When we set out to do this survey, we had no understanding of how many kids are actually using AI companions." The study polled more than 1,000 teens nationwide in April and May.
Adolescence is a critical time for developing identity, social skills and independence, Robb said, and AI companions should complement - not replace - real-world interactions.
"If teens are developing social skills on AI platforms where they are constantly being validated, not being challenged, not learning to read social cues or understand somebody else's perspective, they are not going to be adequately prepared in the real world," he said.
The nonprofit analyzed several popular AI companions in a "risk assessment," finding ineffective age restrictions and that the platforms can produce sexual material, give dangerous advice and offer harmful content. The group recommends that minors not use AI companions.
Researchers and educators worry about the cognitive costs for youth who rely heavily on AI, especially in their creativity, critical thinking and social skills. The potential dangers of children forming relationships with chatbots gained national attention last year when a 14-year-old Florida boy died by suicide after developing an emotional attachment to a Character.AI chatbot.
"Parents really have no idea this is happening," said Eva Telzer, a psychology and neuroscience professor at the University of North Carolina at Chapel Hill. "All of us are struck by how quickly this blew up." Telzer is leading multiple studies on youth and AI, a new research area with limited data.
Telzer's research has found that children as young as 8 are using generative AI and also found that teens are using AI to explore their sexuality and for companionship. In focus groups, Telzer found that one of the top apps teens frequent is SpicyChat AI, a free role-playing app intended for adults.
Many teens also say they use chatbots to write emails or messages to strike the right tone in sensitive situations.
"One of the concerns that comes up is that they no longer have trust in themselves to make a decision," said Telzer. "They need feedback from AI before feeling like they can check off the box that an idea is OK or not."
Arkansas teen Bruce Perry, 17, says he relates to that and relies on AI tools to craft outlines and proofread essays for his English class.
"If you tell me to plan out an essay, I would think of going to ChatGPT before getting out a pencil," Perry said. He uses AI daily and has asked chatbots for advice in social situations, to help him decide what to wear and to write emails to teachers, saying AI articulates his thoughts faster.
Perry says he feels fortunate that AI companions were not around when he was younger.
"I'm worried that kids could get lost in this," Perry said. "I could see a kid that grows up with AI not seeing a reason to go to the park or try to make a friend."
Other teens agree, saying the issues with AI and its effect on children's mental health are different from those of social media.
"Social media complemented the need people have to be seen, to be known, to meet new people," Nair said. "I think AI complements another need that runs a lot deeper - our need for attachment and our need to feel emotions. It feeds off of that."
"It's the new addiction," Nair added. "That's how I see it."

Related Articles

OpenAI CEO admits he is 'scared' of using AI

Hans India · an hour ago

New Delhi: OpenAI CEO Sam Altman recently admitted that he is sometimes scared to use 'certain AI stuff'. Speaking on an episode of Theo Von's podcast 'This Past Weekend', Altman said: 'I get scared sometimes to use certain AI stuff, because I don't know how much personal information I want to put in, because I don't know who's going to have it.' Altman was responding to Von's question about the fast pace of AI development: 'Do you think there should be kind of like a slowing things down?'

During the conversation, the OpenAI CEO described the current competition among AI companies as an 'intense' race, not only for commercial domination but to build a tool whose effects will echo for generations. He added that if ChatGPT-maker OpenAI does not move quickly, someone else will, and that the fate of AI could slip out of the hands of those most mindful of its social consequences.

Altman also acknowledged how uncertain the human future is. 'I think all of human history suggests we find a way to put ourselves at the centre of the story and feel really good about it … Even in a world where AI is doing all of this stuff that humans used to do, we are going to find a way in our own telling of the story to feel like the main characters,' he said.

Altman also addressed the fear of certain jobs becoming obsolete because of AI. 'How will people survive?' host Von asked. Altman replied: 'AI will create possibilities for individuals to pursue more creative, philosophical, or interpersonal goals.' He said that when everyone can get instant help and knowledge through AI, people can rethink what it means to contribute to society. However, he warned that the shift could be very difficult for those who lose their jobs in the short term.

Elon Musk and Tesla's $30 trillion AI supersonic tsunami

Mint · an hour ago

Tesla shareholders are used to volatility. Shares dipped 8% on Thursday after Tesla reported second-quarter earnings on Wednesday evening, then bounced about 4% on Friday. Coming into the week of trading, Tesla stock was down 22% year to date and up about 44% over the past 12 months.

Tesla CEO Elon Musk offered some pretty eye-popping aspirations for the company during remarks on Saturday. Musk made a virtual appearance at the Tesla Owners of Silicon Valley 2025 "Takeover" party in San Mateo, Calif. The group is, essentially, a fan club and community for Tesla and Musk enthusiasts. Its Takeover event draws attendees from all over the world, according to organizers.

The headline from the event might just have been an admittedly aspirational comment about $30 trillion a year in humanoid robot revenue. Tesla is using its artificial intelligence and manufacturing capabilities to build its Optimus humanoid robots. Version three of "Optimus is the right design to go to volume production," said Musk. Tesla plans to make a few hundred of those by the end of 2025. Originally, Tesla planned to have a few thousand, but the new design slowed things down a little. Tesla still plans to ramp production higher in 2026.

Tesla is betting big on AI, using it for robots and to train its self-driving cars. Robots are "probably the world's biggest product…There's a market for 20 billion robots," said Musk. "Hypothetically, if Tesla was making one billion of these a year…maybe on the order of $30,000, I'm just guessing here, that's $30 trillion in revenue." It's an incredible prediction. "Long way to go between here and making one billion robots a year," added Musk.

There isn't a significant market for humanoid robots yet. Nvidia CEO Jensen Huang has said that robots can become the world's largest market. (Housing, consumer electronics, and cars are three of the largest today.) The world spends "what, $50 trillion on human labor a year now?" says Futurum chief market strategist Shay Boloor. Useful robots are "pretty disruptive if you can conquer it." Musk, for his part, doesn't fear the labor disruption, believing it will end up creating an age of abundance with hard labor essentially eliminated. "I've never seen any technology advance as fast as AI," said Musk, describing it as a supersonic tsunami.

It's all pie in the sky for now, but quite a vision of the future. Investors should pay attention to events like the "Takeover." It's almost like a Wall Street event for the EV maker. Tesla is more of a retail stock than the megacap companies. Smaller retail investors hold more than 40% of the shares available for trading, according to Bloomberg. The average for the rest of the Magnificent Seven is closer to 25%.

Whether the interview will move Tesla's stock on Monday is anyone's guess. Shares are trading for about 180 times estimated 2025 earnings. That's the second-highest PE ratio in the S&P 500, according to Bloomberg, trailing only Palantir Technologies. That valuation indicates that, to some extent, investors share that optimism about the future.

Write to Al Root at

The high-schoolers who just beat the world's smartest AI models

Mint · 2 hours ago

The smartest AI models ever made just went to the most prestigious competition for young mathematicians and managed to achieve the kind of breakthrough that once seemed miraculous. They still got beat by the world's brightest teenagers.

Every year, a few hundred elite high-school students from all over the planet gather at the International Mathematical Olympiad. This year, those brilliant minds were joined by Google DeepMind and other companies in the business of artificial intelligence. They had all come for one of the ultimate tests of reasoning, logic and creativity.

The famously grueling IMO exam is held over two days and gives students three increasingly difficult problems a day and more than four hours to solve them. The questions span algebra, geometry, number theory and combinatorics—and you can forget about answering them if you're not a math whiz. You'll give your brain a workout just trying to understand them.

Because those problems are both complex and unconventional, the annual math test has become a useful benchmark for measuring AI progress from one year to the next. In this age of rapid development, the leading research labs dreamed of a day their systems would be powerful enough to meet the standard for an IMO gold medal, which became the AI equivalent of a four-minute mile. But nobody knew when they would reach that milestone or if they ever would—until now.

This year's International Mathematical Olympiad attracted high-school students from all over the world.

The unthinkable occurred earlier this month when an AI model from Google DeepMind earned a gold-medal score at IMO by perfectly solving five of the six problems. In another dramatic twist, OpenAI also claimed gold despite not participating in the official event. The companies described their feats as giant leaps toward the future—even if they're not quite there yet.

In fact, the most remarkable part of this memorable event is that 26 students got higher scores on the IMO exam than the AI systems. Among them were four stars of the U.S. team, including Qiao (Tiger) Zhang, a two-time gold medalist from California, and Alexander Wang, who brought his third straight gold back to New Jersey. That makes him one of the most decorated young mathematicians of all time—and he's a high-school senior who can go for another gold at IMO next year. But in a year, he might be dealing with a different equation altogether.

"I think it's really likely that AI is going to be able to get a perfect score next year," Wang said.

"That would be insane progress," Zhang said. "I'm 50-50 on it."

So given those odds, will this be remembered as the last IMO when humans outperformed AI?

"It might well be," said Thang Luong, the leader of Google DeepMind's team.

Until very recently, what happened in Australia would have sounded about as likely as koalas doing calculus. But the inconceivable began to feel almost inevitable last year, when DeepMind's models built for math solved four problems and racked up 28 points for a silver medal, just one point short of gold.

This year, the IMO officially invited a select group of tech companies to their own competition, giving them the same problems as the students and having coordinators grade their solutions with the same rubric. They were eager for the challenge. AI models are trained on unfathomable amounts of information—so if anything has been done before, the chances are they can figure out how to do it again. But they can struggle with problems they have never seen before.

As it happens, the IMO process is specifically designed to come up with those original and unconventional problems. In addition to being novel, the problems also have to be interesting and beautiful, said IMO president Gregor Dolinar. If a problem under consideration is similar to "any other problem published anywhere in the world," he said, it gets tossed. By the time students take the exam, the list of a few hundred suggested problems has been whittled down to six.

Meanwhile, the DeepMind team kept improving the AI system it would bring to IMO, an unreleased version of Google's advanced reasoning model Gemini Deep Think, and it was still making tweaks in the days leading up to the competition. The effort was led by Thang Luong, a senior staff research scientist who narrowly missed getting to IMO in high school with Vietnam's team. He finally made it to IMO last year—with Google.

Before he returned this year, DeepMind executives asked about the possibility of gold. He told them to expect bronze or silver again. He adjusted his expectations when DeepMind's model nailed all three problems on the first day. The simplicity, elegance and sheer readability of those solutions astonished mathematicians. The next day, as soon as Luong and his colleagues realized their AI creation had crushed two more proofs, they also realized that would be enough for gold. They celebrated their monumental accomplishment by doing one thing the other medalists couldn't: They cracked open a bottle of whiskey.

Key members of Google DeepMind's gold-medal-winning team, including Thang Luong, second from left.

To keep the focus on students, the companies at IMO agreed not to release their results until later this month. But as soon as the Olympiad's closing ceremony ended, one company declared that its AI model had struck gold—and it wasn't DeepMind. It was OpenAI.

The company wasn't a part of the IMO event, but OpenAI gave its latest experimental reasoning model all six problems and enlisted former medalists to grade the proofs. Like DeepMind's, OpenAI's system flawlessly solved five and scored 35 out of 42 points to meet the gold standard. After the OpenAI victory lap on social media, the embargo was lifted and DeepMind told the world about its own triumph—and that its performance was certified by the IMO.

Not long ago, it was hard to imagine AI rivals dueling for glory like this. In 2021, a Ph.D. student named Alexander Wei was part of a study that asked him to predict the state of AI math by July 2025—that is, right now. When he looked at the other forecasts, he thought they were much too optimistic. As it turned out, they weren't nearly optimistic enough. Now he's living proof of just how wrong he was: Wei is the research scientist who led the IMO project for OpenAI.

The only thing more impressive than what the AI systems did was how they did it. Google called its result a major advance, though not because DeepMind won gold instead of silver. Last year, the model needed the problems to be translated into a computer programming language for math proofs. This year, it operated entirely in "natural language" without any human intervention. DeepMind also crushed the exam within the IMO time limit of 4 ½ hours after taking several days of computation just a year ago.

You might find all of this completely terrifying—and think of AI as competition. The humans behind the models see them as complementary. "This could perhaps be a new calculator," Luong said, "that powers the next generation of mathematicians."

Speaking of that next generation, the IMO gold medalists have already been overshadowed by AI. So let's put them back in the spotlight.

Team USA at the International Mathematical Olympiad, including Alexander Wang, fourth from right, and Tiger Zhang, with the stuffed red panda on his head.

Qiao Zhang is a 17-year-old student in Los Angeles on his way to MIT to study math and computer science. As a young boy, his family moved to the U.S. from China and his parents gave him a choice of two American names. He picked Tiger over Elephant. His career in competitive math began in second grade, when he entered a contest called the Math Kangaroo. It ended this month at the math Olympics next to a hotel in Australia with actual kangaroos.

When he sat down at his desk with a pen and lots of scratch paper, Zhang spent the longest amount of time during the exam on Problem 6. It was a problem in the notoriously tricky field of combinatorics, the branch of mathematics that deals with counting, arranging and combining discrete objects, and it was easily the hardest on this year's test. The solution required the ingenuity, creativity and intuition that humans can muster but machines cannot—at least not yet.

"I would actually be a bit scared if the AI models could do stuff on Problem 6," he said.

Problem 6 did stump DeepMind and OpenAI's models, but it wasn't just problematic for AI. Of the 630 student contestants, 569 also received zero points. Only six received the full credit of seven points. Zhang was proud of his partial solution that earned four points—which was four more than almost everyone else.

At this year's IMO, 72 contestants went home with gold. But for some, a medal wasn't their only prize. Zhang was among those who left with another keepsake: victory over the AI models. (As if it weren't enough that he can bend numbers to his will, he also has a way with words and wrote this about his IMO experience.)

In the end, the six members of the U.S. team piled up five golds and one silver, finishing second overall behind the Chinese after knocking them off the top spot last year.

There was once a time when such precocious math students grew up to become professors. (Or presidents—the recently elected president of Romania was a two-time IMO gold medalist with perfect scores.) While many still choose academia, others get recruited by algorithmic trading firms and hedge funds, where their quantitative brains have never been so highly valued. This year, the U.S. team was supported by Jane Street while XTX Markets sponsored the whole event. After all, they will soon be competing with each other—and with the richest tech companies—for their intellectual talents.

By then, AI might be destroying mere humans at math. But not if you ask Junehyuk Jung. A former IMO gold medalist himself, Jung is now an associate professor at Brown University and visiting researcher at DeepMind who worked on its gold-medal model. He doesn't believe this was humanity's last stand, though. He thinks problems like Problem 6 will flummox AI for at least another decade. And he walked away from perhaps the most significant math contest in history feeling bullish on all kinds of intelligence.

"There are things AI will do very well," he said. "There are still going to be things that humans can do better."

Write to Ben Cohen at
