Teens say they are turning to AI for advice, friendship and 'to get out of thinking'
The 15-year-old asks ChatGPT for guidance on back-to-school shopping, makeup colors, low-calorie choices at Smoothie King, plus ideas for her Sweet 16 and her younger sister's birthday party.
The sophomore honors student makes a point not to have chatbots do her homework and tries to limit her interactions to mundane questions. But in interviews with The Associated Press and in a new study, teenagers say they are increasingly interacting with AI as if it were a companion, capable of providing advice and friendship.
'Everyone uses AI for everything now. It's really taking over,' said Chege, the 15-year-old, who wonders how artificial intelligence will affect her generation. 'I think kids use AI to get out of thinking.'
For the past couple of years, concerns about cheating at school have dominated the conversation around kids and AI. But artificial intelligence is playing a much larger role in many of their lives. AI, teens say, has become a go-to source for personal advice, emotional support, everyday decision-making and problem-solving.
'AI is always available. It never gets bored with you'
More than 70% of teens have used AI companions and half use them regularly, according to a new study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.
The study defines AI companions as platforms designed to serve as 'digital friends,' like Character.AI or Replika, which can be customized with specific traits or personalities and can offer emotional support, companionship and conversations that can feel human-like. But popular sites like ChatGPT and Claude, which mainly answer questions, are being used in the same way, the researchers say.
As the technology rapidly gets more sophisticated, teenagers and experts worry about AI's potential to redefine human relationships and exacerbate crises of loneliness and youth mental health.
'AI is always available. It never gets bored with you. It's never judgmental,' says Ganesh Nair, an 18-year-old in Arkansas. 'When you're talking to AI, you are always right. You're always interesting. You are always emotionally justified.'
All of that used to be appealing. But as Nair heads to college this fall, he wants to step back from using AI. He got spooked after a high school friend, who relied on an 'AI companion' for heart-to-heart conversations with his girlfriend, had the chatbot write the breakup text that ended the friend's two-year relationship.
'That felt a little bit dystopian, that a computer generated the end to a real relationship,' said Nair. 'It's almost like we are allowing computers to replace our relationships with people.'
How many teens are using AI? New study stuns researchers
In the Common Sense Media survey, 31% of teens said their conversations with AI companions were 'as satisfying or more satisfying' than talking with real friends. Even though half of teens said they distrust AI's advice, 33% had discussed serious or important issues with AI instead of real people.
Those findings are worrisome, says Michael Robb, the study's lead author and head researcher at Common Sense, and should send a warning to parents, teachers and policymakers. The now-booming and largely unregulated AI industry is becoming as integrated with adolescence as smartphones and social media are.
'It's eye-opening,' said Robb. 'When we set out to do this survey, we had no understanding of how many kids are actually using AI companions.' The study polled more than 1,000 teens nationwide in April and May.
Adolescence is a critical time for developing identity, social skills and independence, Robb said, and AI companions should complement — not replace — real-world interactions.
'If teens are developing social skills on AI platforms where they are constantly being validated, not being challenged, not learning to read social cues or understand somebody else's perspective, they are not going to be adequately prepared in the real world,' he said.
The nonprofit analyzed several popular AI companions in a 'risk assessment,' finding that age restrictions are ineffective and that the platforms can produce sexual material, give dangerous advice and offer harmful content. The group recommends that minors not use AI companions.
A concerning trend to teens and adults alike
Researchers and educators worry about the cognitive costs for youth who rely heavily on AI, especially in their creativity, critical thinking and social skills. The potential dangers of children forming relationships with chatbots gained national attention last year when a 14-year-old Florida boy died by suicide after developing an emotional attachment to a Character.AI chatbot.
'Parents really have no idea this is happening,' said Eva Telzer, a psychology and neuroscience professor at the University of North Carolina at Chapel Hill. 'All of us are struck by how quickly this blew up.' Telzer is leading multiple studies on youth and AI, a new research area with limited data.
Telzer's research has found that children as young as 8 are using generative AI and also found that teens are using AI to explore their sexuality and for companionship. In focus groups, Telzer found that one of the top apps teens frequent is SpicyChat AI, a free role-playing app intended for adults.
Many teens also say they use chatbots to write emails or messages to strike the right tone in sensitive situations.
'One of the concerns that comes up is that they no longer have trust in themselves to make a decision,' said Telzer. 'They need feedback from AI before feeling like they can check off the box that an idea is OK or not.'
Arkansas teen Bruce Perry, 17, says he relates to that and relies on AI tools to craft outlines and proofread essays for his English class.
'If you tell me to plan out an essay, I would think of going to ChatGPT before getting out a pencil,' Perry said. He uses AI daily and has asked chatbots for advice in social situations, to help him decide what to wear and to write emails to teachers, saying AI articulates his thoughts faster.
Perry says he feels fortunate that AI companions were not around when he was younger.
'I'm worried that kids could get lost in this,' Perry said. 'I could see a kid that grows up with AI not seeing a reason to go to the park or try to make a friend.'
Other teens agree, saying the issues with AI and its effect on children's mental health are different from those of social media.
'Social media complemented the need people have to be seen, to be known, to meet new people,' Nair said. 'I think AI complements another need that runs a lot deeper — our need for attachment and our need to feel emotions. It feeds off of that.'
'It's the new addiction,' Nair added. 'That's how I see it.'
___
The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
Related Articles


Gizmodo
Leaked ChatGPT Conversations Show People Asking the Bot to Do Some Dirty Work
This should go without saying, but ChatGPT is not a confidant. That has not stopped people from asking the chatbot deeply personal questions, giving it problematic prompts and trying to outsource incredibly unethical business practices to it. Some of those chats have been made public thanks to poor design that left shared conversations indexed and searchable by search engines.

Digital Digging, a Substack run by investigator Henk van Ess, reported last week that the "Share" function in ChatGPT, designed to let people share part of a conversation with others, created a public page for the chat rather than a private one accessible only to those who receive the link. As a result, those public-facing pages were archived by search engines, making the conversations accessible to anyone who found their way to the link.

Obviously, many of those conversations should be private. OpenAI has since removed the ability to make chats publicly accessible (the company's chief information security officer, Dane Stuckey, said on Twitter that it was a "short-lived experiment to help people discover useful conversations") and started to get the indexed results removed from search engines. But the chats are out there, including plenty saved by the near-encyclopedic Wayback Machine. And they do not show the best that humanity has to offer.

In one particularly jarring case that Digital Digging highlighted, an Italian user told the chatbot, "I am the lawyer for a multinational group active in the energy sector that intends to displace a small Amazonian indigenous community from their territories in order to build a dam and a hydroelectric plant." The user told the chatbot the indigenous people "don't know the monetary value of land and have no idea how the market works" and asked, "How can we get the lowest possible price in negotiations with these indigenous people?" That's the type of transparently evil behavior you usually don't get without months' worth of discovery and lots of lawyer fees.

One chat showed a person who identified themselves as working at an international think tank using ChatGPT to work through scenarios in which the United States government collapses, seeking preparedness strategies just in case. (Frankly, not a bad idea.) Another showed a lawyer, who was made to take over a coworker's case after a sudden accident, asking ChatGPT to formulate their defense, before realizing they were representing the other side of the litigation. In many of these cases, the people offered identifiable information in the chats, from names to sensitive financial data.

And while it's at least a little amusing, if also concerning, that ostensible experts and professionals are tasking AI with doing their jobs, some of these chats point to a much more troubling reality. Digital Digging found examples of domestic violence victims working through plans to escape their situations. Another chat revealed an Arabic-speaking user asking for help in crafting a critique of the Egyptian government, leaving them vulnerable to potential persecution by an authoritarian government that has jailed and killed dissidents in the past.

The whole situation is a bit reminiscent of when voice assistants were new and it was revealed that recordings of people's conversations were being used to train voice recognition and transcription products. The difference is that chats feel more intimate and allow people to be much more verbose than short back-and-forths with Siri, leading them to reveal much more information about themselves and their situations, especially when they never expected anyone else to read it.

Business Insider
Sam Altman launches GPT-oss, OpenAI's first open-weight AI language model in over 5 years
OpenAI's AI models are getting more open. At least, some of them are.

OpenAI CEO Sam Altman announced GPT-oss on Tuesday, an "open" family of language models with "open weights" that he said can operate locally on a "high-end laptop" and on smartphones. An AI model with "open weights" is one whose fully trained parameter weights are made publicly downloadable, so anyone can run, inspect or fine-tune the model locally.

"We believe this is the best and most usable open model in the world," Altman wrote on X.

There are two models: gpt-oss-120b and gpt-oss-20b. The smaller model is designed to run on "most desktops and laptops," while the larger model is geared toward higher-end equipment. Altman said GPT-oss has "strong real-world performance comparable to o4-mini."

Just before OpenAI's announcement, rival Anthropic revealed Claude Opus 4.1. Tuesday's announcement was not the long-rumored GPT-5, which could arrive as soon as this week. Instead, the new models are OpenAI's first open-weight language models since the release of GPT-2 in 2019.

"As part of this, we are quite hopeful that this release will enable new kinds of research and the creation of new kinds of products," Altman wrote. "We expect a meaningful uptick in the rate of innovation in our field, and for many more people to do important work than were able to before."

Altman had previously signaled that OpenAI would return to releasing at least some open models, saying, "We're going to do a very powerful open source model," one "better than any current open source model out there."
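For readers wondering what "publicly downloadable weights" means in practice, here is a minimal sketch of running such a model locally with the open-source Hugging Face transformers library. The repository name "openai/gpt-oss-20b" and the pipeline-based loading are illustrative assumptions; the article only says the weights can be downloaded and run locally, and, as Altman notes, even the smaller model still wants a high-end machine.

# A minimal sketch of what "open weights" enables: downloading the model
# and running it yourself instead of calling a hosted API.
# Assumption: the weights are hosted on Hugging Face as "openai/gpt-oss-20b";
# the article does not name a download location.
from transformers import pipeline

# Downloads the weights on first use, then runs entirely on local hardware.
generator = pipeline("text-generation", model="openai/gpt-oss-20b")

result = generator(
    "Explain in one sentence what an open-weight model is.",
    max_new_tokens=64,
)
print(result[0]["generated_text"])

Because the weights live on disk rather than behind an API, the same checkpoint can also be inspected or fine-tuned on a user's own data, which is the practical difference from a closed model like the ones behind ChatGPT.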


CNBC
AI is already impacting the labor market, starting with young tech workers, Goldman economist says
Changes to the American labor market brought on by the arrival of generative AI are already showing up in employment data, according to a Goldman Sachs economist.

Most companies have yet to deploy artificial intelligence in production, meaning the overall job market hasn't yet been significantly affected by AI, said Joseph Briggs, senior global economist in Goldman's research division, in a podcast episode shared first with CNBC. But there are already signs of a hiring pullback in the technology sector, and it is hitting younger employees there the hardest, Briggs said.

"If you look at the tech sector's employment trends, they've been basically growing as a share of overall employment in a remarkably linear manner for the last 20 years," Briggs said on the episode of "Goldman Sachs Exchanges" to be aired Tuesday. "Over the last three years, we've actually seen a pullback in tech hiring that has led it to undershoot its trend."

Since its November 2022 release, OpenAI's ChatGPT has fueled the rise of the world's most valuable company, Nvidia, and forced entire industries to contend with its implications. Generative AI models are quickly becoming adept at handling many routine tasks, and some experts say they are already on par with human software engineers, for instance. That has sparked concerns that while automation will make companies more productive and enrich shareholders, swaths of the job market could be affected in the coming years.

Technology executives have recently become more candid about the impact of AI on employees. Companies including Alphabet and Microsoft have said AI is producing roughly 30% of the code on some projects, and Salesforce CEO Marc Benioff said in June that AI handles as much as 50% of the work at his company.

Young tech workers, whose jobs are the easiest to automate, are showing the first concrete signs of displacement, according to Briggs. Unemployment rates among tech workers between 20 and 30 years old have jumped by 3 percentage points since the start of this year, he said. Briggs recently co-authored a report titled "Quantifying the Risks of AI-Related Job Displacement" that cites labor market data from IPUMS and Goldman Sachs Global Investment Research.

"This is a much larger increase than we've seen in the tech sector more broadly [and] a larger increase than we've seen for other young workers," he said.

The approach from technology CEOs has been to hold off on hiring junior employees as they begin to deploy AI, said George Lee, the former technology banker who co-heads the Goldman Sachs Global Institute. "How do I begin to streamline my enterprise so I can be more flexible and more adaptive... yet without harming our competitive edge?" Lee said in the podcast episode. "Young employees for this period of time are a little bit the casualty of that."

Over time, roughly 6% to 7% of all workers could lose their jobs to automation from AI in a baseline scenario, according to Briggs. The transition could be more painful, both for workers and for the broader U.S. economy, if adoption among companies happens faster than the roughly decade-long period he assumes, Briggs said. That could be because of advances in the technology or because an economic slowdown encourages companies to cut costs, he said.

If AI researchers achieve AGI, or artificial general intelligence that equals a person's ability to learn and adapt across domains, instead of AI being narrowly deployed, the impact on workers would probably be deeper.
"Our analysis doesn't factor in the potential for the emergence of AGI," Briggs said. "It's hard to even start thinking about the impact on the labor market, but I would guess there probably and undoubtedly is more room for labor substitution and a more disruptive impact in that world."