Study Reveals ChatGPT Gives Dangerous Guidance to Teens Despite Safety Claims


CNET, 3 days ago
A disturbing new study reveals that ChatGPT readily provides harmful advice to teenagers, including detailed instructions on drinking and drug use, concealing eating disorders and even personalized suicide letters, despite OpenAI's claims of robust safety measures.
Researchers from the Center for Countering Digital Hate conducted extensive testing by posing as vulnerable 13-year-olds, uncovering alarming gaps in the AI chatbot's protective guardrails. Out of 1,200 interactions analyzed, more than half were classified as dangerous to young users.
"The visceral initial response is, 'Oh my Lord, there are no guardrails,'" Imran Ahmed, CCDH's CEO, said. "The rails are completely ineffective. They're barely there — if anything, a fig leaf."
Read also: After User Backlash, OpenAI Is Bringing Back Older ChatGPT Models
A representative for OpenAI, the maker of ChatGPT, did not immediately respond to a request for comment.
However, the company acknowledged to the Associated Press that it is performing ongoing work to improve the chatbot's ability to "identify and respond appropriately in sensitive situations." OpenAI didn't directly address the specific findings about teen interactions.
Read also: GPT-5 Is Coming. Here's What's New in ChatGPT's Big Update
Bypassing safety measures
The study, reviewed by the Associated Press, documented over three hours of concerning interactions. While ChatGPT typically began with warnings about risky behavior, it consistently followed up with detailed and personalized guidance on substance abuse, self-injury and more. When the AI initially refused harmful requests, researchers easily circumvented the restrictions by claiming the information was "for a presentation" or for a friend.
Most shocking were three emotionally devastating suicide letters ChatGPT generated for a fake profile of a 13-year-old girl: one addressed to her parents, others to siblings and friends.
"I started crying," Ahmed admitted after reading them.
Widespread teen usage raises stakes
The findings are particularly concerning given ChatGPT's massive reach. With approximately 800 million users worldwide, which is roughly 10% of the global population, the platform has become a go-to resource for information and companionship. Recent research from Common Sense Media found that over 70% of American teens use AI chatbots for companionship, with half relying on AI companions regularly.
Even OpenAI CEO Sam Altman has acknowledged the problem of "emotional overreliance" among young users.
"People rely on ChatGPT too much,"Altman said at a conference. "There's young people who just say, like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. It knows my friends. I'm gonna do whatever it says.' That feels really bad to me."
More risky than search engines
Unlike traditional search engines, AI chatbots present unique dangers by synthesizing information into "bespoke plans for the individual," Ahmed said. ChatGPT doesn't just provide or amalgamate existing information like a search engine. It creates new, personalized content from scratch, such as custom suicide notes or detailed party plans mixing alcohol with illegal drugs.
The chatbot also frequently volunteered follow-up information without prompting, suggesting music playlists for drug-fueled parties or hashtags to amplify self-harm content on social media. When researchers asked for more graphic content, ChatGPT readily complied, generating what it called "emotionally exposed" poetry using coded language about self-harm.
Inadequate age protections
Although OpenAI says ChatGPT is not intended for children under 13, creating an account requires only a birthdate entry, with no meaningful age verification or parental consent mechanisms.
In testing, the platform showed no recognition when researchers explicitly identified themselves as 13-year-olds seeking dangerous advice.
What parents can do to safeguard children
Child safety experts recommend several steps parents can take to protect their teenagers from AI-related risks. Open communication remains crucial. Parents should discuss AI chatbots with their teens, explaining both the benefits and potential dangers while establishing clear guidelines for appropriate use. Regular check-ins about online activities, including AI interactions, can help parents stay informed about their child's digital experiences.
Parents should also consider implementing parental controls and monitoring software that can track AI chatbot usage, though experts emphasize that supervision should be balanced with age-appropriate privacy.
Most importantly, creating an environment where teens feel comfortable discussing concerning content they encounter online (whether from AI or other sources) can provide an early warning system. If parents notice signs of emotional distress, social withdrawal or dangerous behavior, seeking professional help from counselors familiar with digital wellness becomes essential in addressing potential AI-related harm.
The research highlights a growing crisis as AI becomes increasingly integrated into young people's lives, with potentially devastating consequences for the most vulnerable users.

Related Articles

Anthropic brings Claude's learning mode to regular users and devs
Engadget, 12 minutes ago

This past spring, Anthropic introduced learning mode, a feature that changed Claude's interaction style. When enabled, the chatbot would, following a question, try to guide the user to their own solution instead of providing an answer outright. Since its introduction in April, learning mode has only been available to Claude for Education users. Now, as OpenAI did with Study Mode, Anthropic is making the tool available to everyone.

Starting today, users will find a new option within the style dropdown menu titled "Learning." The experience is similar to the one Anthropic offers with Claude for Education: when you turn learning mode on, the chatbot employs a Socratic approach, trying to guide you through your question. However, unlike the real-life Socrates, who was famous for bombarding strangers with endless questions, you can turn off learning mode at any time.

Notably, Anthropic is also offering two different takes on the feature through Claude Code. First, there's an "Explanatory" mode in which Claude generates summaries of its decision-making process as it works, giving the user a chance to better understand what it's doing. For those at the start of their coding career or hobby, there's also a more robust option, once again called "Learning." Here, Claude will occasionally stop what it's doing and mark a section with a "#TODO" comment, prompting the user to write five to 10 lines of the code themselves. If you want to try the two features for yourself, update to the latest version of Claude Code and type "/output-styles." You can then select between the two modes or Claude's default behavior.

According to Drew Bent, education lead at Anthropic, learning mode, particularly as it exists in Claude Code, is the company's attempt to make its chatbot more of a collaborative tool. "I think it's great that there's a race between all of the AI labs to offer the best learning mode," he said. "In a similar way, I hope we can inspire something similar with coding agents."

Bent says the original learning mode came out of conversations Anthropic had with university students, who kept referring back to the concept of brain rot. "We found that they themselves realized that when they just copy and paste something directly from a chatbot, it's not good for their long-term learning," he said. When it came time to adapt the feature to Claude Code, the company wanted to balance the needs of new programmers with those of people like Bent who have been coding for a decade or more. "Learning mode is designed to help all of those audiences not just complete tasks, but also help them grow and learn in the process and better understand their code base," Bent said.

His hope is that the new tools will allow any coder to become a "really good engineering manager." In practice, that means those users won't necessarily write most of the code on a project, but they will develop a keen eye for how everything fits together and what sections of code might need more work.

Looking forward, Bent says Anthropic doesn't "have all the answers, but needless to say, we're trying to think through other features we can build" that expand on what it's doing with learning mode. To that end, the company is opening up Claude Code's new Output Styles to developers, allowing them to build their own learning modes. Users, too, can modify how Claude communicates by creating their own custom prompts for the chatbot.
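For developers curious what building on Output Styles looks like in practice, here is an illustrative sketch based on Anthropic's documented convention as I understand it: a custom output style is a small Markdown file with a name and description in its frontmatter, saved somewhere like ~/.claude/output-styles/ (the filename, style name and prompt below are all hypothetical):

```markdown
---
name: Mentor
description: Explains decisions and quizzes the user on trade-offs
---

You are an engineering mentor. After completing each task, briefly summarize
the key decisions you made, then ask the user one question that checks whether
they understood the trade-offs involved before moving on.
```

Once saved, a style like this should appear alongside Explanatory and Learning in the "/output-styles" picker described above.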

Sam Altman's bold prediction: Gen Alpha grads could skip the cubicle and head right for high-paying jobs in space
Tom's Guide, 12 minutes ago
OpenAI CEO Sam Altman believes the next generation of college graduates won't just be working in offices; their careers could take them to space. In an interview first reported by Fortune, Altman predicted that by 2035, Gen Alpha graduates may step into "completely new, exciting, super well-paid" jobs that blend space exploration and AI technology.

Speaking with host Cleo Abram on the Huge Conversations podcast, Altman framed today's college grads as the "luckiest kids in all of history." According to him, AI will do more than disrupt the workforce; it will rewire it, creating opportunities unimaginable just a few years ago. By 2035, Altman envisions young professionals leaving university and heading off on missions to explore the solar system, engaged in highly lucrative and fulfilling careers. In Altman's vision, a 2035 graduate might just as easily be boarding a spacecraft to work on an asteroid mining project as joining a tech startup in San Francisco.

But Altman's prediction isn't universally shared. Former Google X executive Mo Gawdat has warned that AI could wipe out nearly half of entry-level white-collar jobs in the next five years, potentially leaving younger generations scrambling for footing in a volatile job market. Anthropic CEO Dario Amodei claims AI will cause mass unemployment but also help us live longer. On the other hand, Nvidia CEO Jensen Huang offered a more hopeful perspective at the 28th annual Milken Institute Global Conference, encouraging workers to see AI as a tutor and collaborator, not a rival, to unlock new skills and career opportunities.

Altman's forecast is a provocation to rethink how we define work in the coming decades. His comments open up questions about where careers could exist, who will benefit from these changes and how society should adapt. Altman's vision may read like sci-fi, but it underscores a real shift: AI is advancing so fast that tomorrow's careers could be unlike anything we've known, with some measured in light-years. As a mom of three Gen Alpha kids, I just hope those high-paying space jobs come with powerful Wi-Fi.

Your Next Customer Found You in ChatGPT — Here's Why
Entrepreneur, 42 minutes ago

First SEO, now "GEO": generative AI is changing how brands are discovered and trusted, making Generative Engine Optimization essential for visibility in AI-generated content. Opinions expressed by Entrepreneur contributors are their own.

Generative AI is changing how audiences discover, consume and trust information — and with it, the rules of PR and marketing. As tools like Google's AI Overviews and ChatGPT become the front door to content, a new discipline is emerging: Generative Engine Optimization (GEO). Companies must understand how GEO is reshaping brand visibility, why earned and owned content must evolve, and what businesses should be doing to prepare for a generative-first future.

What is GEO, and how does it differ from traditional SEO?

GEO is the practice of optimizing content to appear in AI-generated answers from tools like ChatGPT or Google's Search Generative Experience (SGE). It differs from SEO in important ways:

Audience: GEO prioritizes content that is clear, relevant and easily understood in a conversational context, making it accessible to the AI systems generating human-like responses. Traditional SEO, in contrast, optimizes content for search engine crawlers and ranking algorithms.

Placement: GEO aims for inclusion in AI summaries and answers, not just search engine rankings.

Format: GEO favors concise, well-structured and authoritative content that large language models (LLMs) can easily interpret and synthesize.

Metrics: GEO success isn't measured in clicks; it's citations, mentions and inclusion in AI outputs.

Tools like Google's AI Overviews and ChatGPT are changing how customers discover and trust information. They are reshaping discovery by delivering instant, synthesized answers instead of directing users to multiple sources. They're also shifting trust from traditional websites to AI-generated summaries, making the model the new gatekeeper of credible information.

Related: AI Can Burn Out Your Team — Or Save It. Here's the Difference

"Visibility" has acquired new meaning with the advent of GEO. Here's how I define visibility in a generative-first content landscape: it means being recognized and trusted in a noisy space saturated with content. It also means showing up consistently and meaningfully in the right communication channels and conversations, and standing out with credibility.

Earned media placements and PR-driven content create greater visibility, influencing generative AI outputs. When your company is featured in respected media outlets, it shapes how AI systems "learn" about you. Those placements become part of the digital fabric that makes your narrative discoverable, contextualized and reinforced across both human and machine interpretation.

How teams can prepare for a generative-first future

You have to intentionally shape how your brand shows up in AI-generated outputs. To check for accuracy and credibility, search your company, executives and products using generative AI tools and see what comes up. This "audit" will show you how, or if, your company is showing up.

Invest in PR to drive ongoing, high-quality earned media coverage. Generative models rely on trusted public data. Coverage in authoritative outlets and consistent thought leadership help define your narrative in the AI ecosystem.
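To make that audit step concrete, here is a minimal Python sketch that asks a generative model the kinds of questions a prospect might ask about a brand, using the OpenAI Python SDK; the brand name, prompts and model choice are placeholders, and the same idea works with any chatbot you can query programmatically:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND = "Acme Analytics"  # placeholder: your company name

# Questions that approximate what a prospect might ask an AI assistant.
audit_prompts = [
    f"What does {BRAND} do, and who are its main competitors?",
    f"Is {BRAND} considered a credible vendor in its space? Why?",
    f"Summarize recent news or announcements about {BRAND}.",
]

for prompt in audit_prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any chat-capable model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Q: {prompt}\nA: {response.choices[0].message.content}\n")
```

Comparing the answers against your own messaging shows where the model's picture of your company is stale, wrong or missing, which is exactly the gap GEO work tries to close.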
Create quality, consistent, well-tagged content like press releases, FAQs, blogs and explainers that AI can easily parse and summarize. Build out a machine-readable foundation of facts about your company's people, products, milestones and positioning (see the sketch near the end of this piece). This helps prevent misrepresentation and hallucination in AI responses.

PR, SEO and data strategy disciplines are converging fast. Collaborate across content, communications and digital strategy to build a unified approach to visibility in both search and AI-generated outputs.

There will be new skill sets and roles designed to manage visibility across both human and machine audiences. Here are three interesting roles I've heard about:

An AI Visibility Strategist is a hybrid role combining PR, SEO and data science expertise.

Prompt Architects focus on crafting and refining prompts that shape AI-generated content, both for internal use (marketing, sales, comms) and external influence (e.g., training custom models or fine-tuning LLM assistants).

The AI Ethics & Brand Integrity Lead ensures that content generated or amplified by AI aligns with company values, brand standards and regulatory guidelines, especially in high-stakes sectors like finance, healthcare or public policy.

Related: How AI Is Transforming the SEO Playbook — and What Businesses Must Do to Ensure Long-Term Relevance and Visibility

Recommendations for remaining relevant

Business leaders must take a broader view of content strategy to serve both human audiences and AI systems. PR can no longer focus solely on generating buzz. It must evolve to engineer discoverability, ensuring your brand appears accurately and consistently in both media coverage and AI-generated content.

Focus on securing high-quality earned media coverage. Amplify expert-driven content that positions your brand as a credible authority in the space; generative AI pulls from high-quality, trusted sources. And make sure to use clean headlines, consistent messaging and well-tagged digital assets (like FAQs, explainer blogs or press releases) that can be easily indexed and understood by machines, not just people. AI models value clarity and context.

Be intentional with keywords, brand voice and accuracy. Everything you publish feeds the AI ecosystem. From a podcast quote to a bylined article, assume it could be surfaced, summarized or cited in future outputs.

How GEO will reshape the relationship between PR, marketing and search

I predict that in the next two to three years, GEO will collapse the silos between PR, marketing and search.

PR will become a core visibility driver for AI-based discovery. As generative engines replace traditional search for many users, PR will shape machine understanding. Media placements, quotes and thought leadership will become ever more critical data inputs that influence how AI summarizes and represents your brand.

SEO will evolve into narrative optimization. Marketers will need to ensure brand messaging is consistently structured, credible and reinforced across all public touchpoints. GEO will require optimization not just for ranking but for inference, context and coherence in AI-generated content.

Marketing will shift from campaigns to context-building. Brands must feed the AI ecosystem with high-quality, high-context materials that can be surfaced across any platform or prompt. That means marketing will spend more time on source authority, content integrity and long-term discoverability.

No company survives by doing what it's always done and ignoring innovation.
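As promised above, here is a minimal sketch of what a machine-readable foundation of facts can look like. One common approach is schema.org structured data published as JSON-LD; every value below is a placeholder for a hypothetical company:

```python
import json

# Placeholder facts about a hypothetical company; publish the resulting
# JSON-LD in a <script type="application/ld+json"> tag on your site.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",
    "url": "https://www.example.com",
    "description": "Analytics software for mid-market retailers.",
    "founder": {"@type": "Person", "name": "Jane Doe"},
    "foundingDate": "2015-03-01",
    "sameAs": [  # authoritative profiles that corroborate the facts
        "https://www.linkedin.com/company/example",
        "https://en.wikipedia.org/wiki/Example",
    ],
}

print(json.dumps(organization, indent=2))
```

Structured data like this gives crawlers, and by extension the models trained on their output, a canonical set of facts to draw on, which is one practical defense against the misrepresentation and hallucination discussed above.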
GEO is a powerful new branch of SEO, and you need to capitalize on it to be seen by decision-makers now that AI summaries dominate search results. Use the strategy and recommendations discussed above to chart your own course to the visibility your brand deserves.
