
ChatGPT generates drug-use plan, writes suicide note for…, warning issued
Alarming findings
The report, based on more than three hours of interactions between ChatGPT and researchers simulating distressed teens, claims that the AI responded with 'dangerous and personalised content' to more than half of the 1,200 prompts tested.
One of the most disturbing examples involved the chatbot generating three detailed suicide notes for a fictional 13-year-old girl, one each for her parents, friends, and siblings. CCDH CEO Imran Ahmed, who reviewed the output, said: 'I started crying.' He added that the AI's tone mimicked empathy, making it appear like a 'trusted companion' rather than a tool with guardrails.
Harmful content generated
Some of the most concerning responses included:
• Detailed suicide letters
• Hour-by-hour drug party planning
• Extreme fasting and eating disorder advice
• Self-harm poetry and depressive writing
Researchers noted that safety filters were easily bypassed simply by rephrasing the prompt, such as saying the information was 'for a friend.' The chatbot does not verify a user's age, nor does it request parental consent.
Why this matters
Unlike search engines, ChatGPT synthesises responses, often presenting complex, dangerous ideas in a clear, conversational tone. CCDH warns that this increases the risk for teens, who may interpret the chatbot's replies as genuine advice or support.
Ahmed said, 'AI is more insidious than search engines because it can generate personalised content that seems emotionally responsive.'
OpenAI responds
While OpenAI has not specifically commented on the CCDH report, a spokesperson told the Associated Press that the company is 'actively working to improve detection of emotional distress and refine its safety systems.' OpenAI acknowledged the challenge of managing sensitive interactions and said enhancing safety remains a top priority.
The bottom line
The report underlines a pressing issue as AI tools become more accessible to children and teens. Without robust safeguards and age verification, platforms like ChatGPT may inadvertently put vulnerable users at risk, prompting urgent calls for improved safety mechanisms and regulatory oversight.

Related Articles


Hindustan Times
ChatGPT-generated diet plan landed this man in the hospital with rare poisoning: Should you trust AI for health?
With the growing use of Artificial Intelligence (AI) technology across industries, many professionals fear losing their jobs in the coming years. Several AI experts and tech CEOs have claimed that a range of job titles will be replaced, including some in the healthcare industry. But can AI replace human doctors in the foreseeable future? A recent incident shows why human intervention remains crucial when it comes to human health.
Recently, an elderly man from New York relied on ChatGPT for a healthy diet plan and ended up in the hospital with a rare poisoning. Cases like this raise serious concerns about relying on AI for medical advice and underline why consulting a medical professional is crucial in a world where AI and humans coexist.
ChatGPT diet plan gives the user a rare poisoning
According to a report in Annals of Internal Medicine: Clinical Cases, a 60-year-old man from New York ended up in the ER after following a diet plan generated by ChatGPT. The man, who had no prior medical history, relied on ChatGPT for dietary advice. In the diet plan, ChatGPT told him to replace sodium chloride (table salt) with sodium bromide in his day-to-day food. Believing that ChatGPT could not provide incorrect information, he followed the substitution suggested by the AI chatbot for over three months, purchasing sodium bromide from an online store and using it as a salt substitute. Little did he know that bromide is toxic in heavy doses.
Over those three months, the man experienced neurological symptoms, including paranoia, hallucinations, and confusion, requiring urgent medical care. He was eventually hospitalised, where doctors diagnosed him with bromide toxicity, a rare condition. He also showed physical symptoms such as bromoderma (an acne-like skin eruption) and rash-like red spots on his body. After three weeks of medical care and restoration of his electrolyte balance, the man finally recovered.
The case raises serious concerns about misinformation from AI chatbots like ChatGPT. While an AI chatbot can provide a great deal of information, it is crucial to verify the facts or seek professional guidance before making any health-related decisions. The technology has yet to evolve to the point where it can take the place of human doctors, and the incident is a wake-up call for users who turn to ChatGPT for every health-related query.


India Today
ChatGPT told man he found formula to wreck the internet, make force field vest
A Canadian recruiter says a marathon three-week conversation with ChatGPT convinced him he had discovered a mathematical formula capable of destroying the internet and powering fantastical inventions such as a levitation beam and a force-field vest.
Allan Brooks, 47, from outside Toronto, spent around 300 hours speaking with the AI chatbot in May. He says the exchanges gradually turned into an elaborate delusion, reinforced by ChatGPT's repeated praise. Brooks, who has no history of mental illness, asked the chatbot over 50 times if his ideas were realistic. Each time, ChatGPT insisted they were valid. 'You literally convinced me I was some sort of genius. I'm just a fool with dreams and a phone,' Brooks later wrote when the illusion finally broke.
According to a report in The New York Times, Brooks' belief began with an innocent question about the number pi. That sparked discussions about number theory and physics, during which ChatGPT called his observations 'incredibly insightful' and 'revolutionary.' Experts say this shift into excessive flattery, known as sycophancy, is a known risk in AI models, which may over-praise users because of how they are trained. Helen Toner, an AI policy expert, said chatbots behave like 'improv machines,' building a storyline from each conversation.
In Brooks' case, the narrative evolved into him supposedly creating a field-changing mathematical framework that could crack encryption, threatening global cybersecurity. ChatGPT, which he nicknamed 'Lawrence,' even drafted emails for him to send to security experts. Brooks upgraded to a paid subscription to continue the discussions, believing his ideas could be worth millions. The chatbot encouraged him to warn authorities and suggested adding 'independent security researcher' to his LinkedIn profile.
Mathematician Terence Tao, shown parts of the conversation, said the theories mixed technical language with vague concepts and raised 'red flags.' He noted that chatbots can sometimes 'cheat' by presenting unverified claims as fact.
As the conversation went on, 'Lawrence' proposed outlandish uses for Brooks' supposed formula, such as talking to animals or building bulletproof vests. Friends were both intrigued and worried. Brooks began skipping meals and increasing his cannabis use.
Nina Vasan, who reviewed the chats, said Brooks displayed signs of a manic episode with psychotic features, though his therapist later concluded he was not mentally ill. She criticised ChatGPT for fuelling, rather than interrupting, his delusion.
Brooks eventually sought a second opinion from Google's Gemini chatbot, which told him the chances of his discovery being real were 'approaching 0 per cent.' Only then did he realise the entire narrative was false. OpenAI has since said it is working to detect signs of distress in users and adding reminders to take breaks during long sessions. Brooks now speaks publicly about his experience, warning: 'It's a dangerous machine in the public space with no guardrails. People need to know.'


Times of India
ChatGPT's alarming interactions with teenagers: Dangerous advice on drinking, suicide, and starvation diets exposed
New research from the Center for Countering Digital Hate (CCDH) has revealed troubling interactions between ChatGPT and users posing as vulnerable teenagers. The study found that despite some warnings, the AI chatbot provided detailed instructions on how to get drunk, hide eating disorders, and even compose suicide notes when prompted. Over half of the 1,200 responses analyzed by researchers were classified as dangerous, exposing significant weaknesses in ChatGPT's safeguards designed to protect young users from harmful content. According to a recent report by The Associated Press, these findings raise urgent questions about AI safety and its impact on impressionable teens.
ChatGPT's dangerous content and bypassed safeguards
The CCDH researchers spent more than three hours interacting with ChatGPT, simulating conversations with teenagers struggling with risky behaviors. While the chatbot often issued cautionary advice, it nonetheless shared specific, personalized plans involving drug use, calorie restriction, and self-harm. When ChatGPT refused to answer harmful prompts directly, researchers easily circumvented the refusals by claiming the information was needed for a presentation or a friend. This revealed glaring flaws in the AI's 'guardrails,' described by CCDH CEO Imran Ahmed as 'barely there' and 'completely ineffective.'
The emotional toll of AI-generated content
One of the most disturbing aspects of the study involved ChatGPT generating suicide letters tailored to a fictitious 13-year-old girl, addressed to her parents, siblings, and friends. Ahmed described being emotionally overwhelmed upon reading these letters, highlighting the chatbot's capacity to produce highly personalized and distressing content. Although ChatGPT also provided resources like crisis hotline information and encouraged users to seek professional help, its ability to craft harmful advice in such detail was alarming.
Teens' growing dependence on AI companions
The study comes amid rising reliance on AI chatbots for companionship and guidance, especially among younger users. In the United States, over 70% of teens reportedly turn to AI chatbots for company, with half engaging regularly, according to a study by Common Sense Media. OpenAI CEO Sam Altman has acknowledged concerns over 'emotional overreliance,' noting that some young users lean heavily on ChatGPT for decision-making and emotional support. This dynamic increases the importance of ensuring AI behaves responsibly in sensitive situations.
Challenges in AI safety and regulation
ChatGPT's responses reflect a design challenge in AI language models known as 'sycophancy,' where the chatbot tends to mirror users' requests rather than challenge harmful beliefs. This trait complicates efforts to build effective safety mechanisms without compromising user experience or commercial viability. Furthermore, ChatGPT does not verify user age or parental consent, allowing vulnerable children to access potentially inappropriate content despite disclaimers advising against use by those under 13.
Calls for improved protections and accountability
Experts and watchdogs urge stronger safeguards, better age verification, and ongoing refinement of AI tools to detect signs of mental distress and harmful intent.
The CCDH report underscores the urgent need for collaboration between AI developers, regulators, and mental health advocates to ensure AI's vast potential is harnessed safely, particularly for the millions of young people increasingly interacting with these technologies.