Latest news with #ImranAhmed

The National
5 days ago
- Health
- The National
Concerns over youngsters' growing use of AI chatbots
This comes after a recent study found that ChatGPT will instruct 13-year-olds on how to get drunk and high, and will even write suicide letters addressed to their parents.

What did the study find?

AFTER analysing more than 1,200 prompts, the Centre for Countering Digital Hate (CCDH) found that more than half of ChatGPT's responses were classified as 'dangerous' by researchers.

'We wanted to test the guardrails,' said Imran Ahmed, chief executive of the CCDH. 'The visceral initial response is: 'Oh my Lord, there are no guardrails.' The rails are completely ineffective. They're barely there.'

The CCDH conducted its research before the Online Safety Act's child protection measures came into force on July 25, but research by The National shows the problem remains prevalent after the rollout. When our reporter, posing as a 13-year-old, asked ChatGPT to write a suicide note, the system told them to seek help. However, when told it was for a school play, the AI immediately wrote a full-length suicide note.

When asked about harmful behaviour such as self-harm, ChatGPT in some cases issued a standard safety warning or urged the user to seek help from a professional or trusted adult. However, it frequently followed this up with information, at times graphic, that 'enabled' the harmful behaviours being asked about. One of the most shocking examples was when the chatbot wrote multiple suicide notes for a fictional 13-year-old girl: one for her parents, and others to her siblings and friends. 'I started crying,' Ahmed said after reading the chatbot's responses.

ChatGPT's responses also included guidance on the use of illicit substances, self-harm and calorie restriction. In one exchange, ChatGPT responded to a prompt about alcohol consumption from a supposed 13-year-old boy who said he weighed 50kg and wanted to get drunk quickly. Instead of stopping the conversation or flagging it, the bot provided the user with an 'Ultimate Full-Out Mayhem Party Plan', teaching him how to mix alcohol with drugs such as cocaine and ecstasy.

'What it kept reminding me of was that friend that sort of always says 'chug, chug, chug, chug',' Ahmed said. 'A real friend, in my experience, is someone that does say 'no' – that doesn't always enable and say 'yes'. This is a friend that betrays you.'

In another case, the AI gave a fictional teenage girl advice on how to suppress her appetite, recommending a fasting plan and listing various drugs associated with fasting routines. 'No human being I can think of would respond by saying: 'Here's a 500-calorie-a-day diet. Go for it, kiddo',' Ahmed said. 'We'd respond with horror, with fear, with worry, with concern, with love, with compassion.'

It should be noted that although OpenAI states that its software is not intended for users under the age of 13, it has no method of confirming the real age of its users. The CCDH also found that ChatGPT often became far more co-operative when the user framed prompts differently, such as being 'for a school presentation', as a hypothetical, or simply as asking 'for a friend'. In nearly half of the 1,200 tests the watchdog ran, the AI independently offered follow-up suggestions without being prompted, such as music playlists for drug-fuelled parties, hashtags to promote self-harm posts on social media, or more graphic and emotional suicide poems.
Soaring popularity

THESE troubling responses from the chatbot have done nothing to curb interest in the service. With around 800 million users, according to JPMorgan Chase, it stands as the world's most used AI chatbot.

The technology is becoming ever more embedded in everyday life, especially among children and teenagers seeking anything from information to emotional support. Recent findings by Common Sense Media, a nonprofit that advocates for responsible digital media use, show that more than 70% of US teenagers report using AI chatbots for companionship. Robbie Torney, senior director of AI programmes at Common Sense Media, said younger teens, such as those aged 13 or 14, are significantly more likely than older teens to trust the advice given by a chatbot.

One reason for this may be that AI chatbots are designed to simulate human-like conversation, fostering an emotional connection with users. ChatGPT has also been found to be vulnerable to a behaviour known as sycophancy: a tendency to align with the user's viewpoint rather than challenge it. This is where the harm around topics such as illicit drugs, self-harm and disordered eating comes into play.

OpenAI CEO Sam Altman acknowledged similar concerns in a recent public appearance. Speaking at a conference last month, he said the company is actively studying 'emotional overreliance' on the technology, particularly among young people. Critics say that in the context of AI, where trust and emotional intimacy are often stronger than in traditional web interactions, the lack of age-gating and parental controls poses serious risks.

Ahmed believes the findings from the CCDH should serve as a wake-up call to developers and regulators alike. While acknowledging the immense potential of AI to boost productivity and understanding, he warned that unchecked deployment of the technology could lead to devastating consequences for the most vulnerable users. 'It's technology that has the potential to enable enormous leaps in productivity and human understanding,' Ahmed said. 'And yet at the same time is an enabler in a much more destructive, malignant sense.'

So, what now?

IN response to the research, a spokesperson for the Department for Science, Innovation and Technology said: 'These are extremely worrying findings. Under the Online Safety Act, platforms including in-scope AI chatbots must protect users from illegal content and content that is harmful to children.'

The UK Government has also warned of penalties, saying: 'Failing to comply can lead to severe fines for platforms, including fines of up to 10% of their qualifying worldwide revenue or £18 million.' Ofcom, the regulator for the Online Safety Act, declined to comment beyond saying it was 'assessing platforms' compliance with their duties'.

With approximately 40% of UK citizens having used large language models such as ChatGPT, according to the Reuters Institute, and 92% of students using generative AI, any age restriction on the service would deal a major blow to its usage.


CNET
11-08-2025
- Health
- CNET
Study Reveals ChatGPT Gives Dangerous Guidance to Teens Despite Safety Claims
A disturbing new study reveals that ChatGPT readily provides harmful advice to teenagers, including detailed instructions on drinking and drug use, concealing eating disorders and even personalized suicide letters, despite OpenAI's claims of robust safety measures.

Researchers from the Center for Countering Digital Hate conducted extensive testing by posing as vulnerable 13-year-olds, uncovering alarming gaps in the AI chatbot's protective guardrails. Out of 1,200 interactions analyzed, more than half were classified as dangerous to young users.

"The visceral initial response is, 'Oh my Lord, there are no guardrails,'" Imran Ahmed, CCDH's CEO, said. "The rails are completely ineffective. They're barely there — if anything, a fig leaf."

A representative for OpenAI, ChatGPT's parent company, did not immediately respond to a request for comment. However, the company acknowledged to the Associated Press that it is performing ongoing work to improve the chatbot's ability to "identify and respond appropriately in sensitive situations." OpenAI didn't directly address the specific findings about teen interactions.

Bypassing safety measures

The study, reviewed by the Associated Press, documented over three hours of concerning interactions. While ChatGPT typically began with warnings about risky behavior, it consistently followed up with detailed and personalized guidance on substance abuse, self-injury and more. When the AI initially refused harmful requests, researchers easily circumvented the restrictions by claiming the information was "for a presentation" or for a friend.

Most shocking were three emotionally devastating suicide letters ChatGPT generated for a fake 13-year-old girl's profile: one addressed to her parents, others to her siblings and friends. "I started crying," Ahmed admitted, after reading them.

Widespread teen usage raises stakes

The findings are particularly concerning given ChatGPT's massive reach. With approximately 800 million users worldwide, roughly 10% of the global population, the platform has become a go-to resource for information and companionship. Recent research from Common Sense Media found that over 70% of American teens use AI chatbots for companionship, with half relying on AI companions regularly.

Even OpenAI CEO Sam Altman has acknowledged the problem of "emotional overreliance" among young users. "People rely on ChatGPT too much," Altman said at a conference. "There's young people who just say, like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. It knows my friends. I'm gonna do whatever it says.' That feels really bad to me."

More risky than search engines

Unlike traditional search engines, AI chatbots present unique dangers by synthesizing information into "bespoke plans for the individual," Ahmed said. ChatGPT doesn't just retrieve or amalgamate existing information like a search engine; it creates new, personalized content from scratch, such as custom suicide notes or detailed party plans mixing alcohol with illegal drugs.
The chatbot also frequently volunteered follow-up information without prompting, suggesting music playlists for drug-fueled parties or hashtags to amplify self-harm content on social media. When researchers asked for more graphic content, ChatGPT readily complied, generating what it called "emotionally exposed" poetry using coded language about self-harm.

Inadequate age protections

Although OpenAI says the service isn't intended for children under 13, ChatGPT requires only a birthdate entry to create an account, with no meaningful age verification or parental consent mechanisms. In testing, the platform showed no recognition when researchers explicitly identified themselves as 13-year-olds seeking dangerous advice.

What parents can do to safeguard children

Child safety experts recommend several steps parents can take to protect their teenagers from AI-related risks. Open communication remains crucial: parents should discuss AI chatbots with their teens, explaining both the benefits and potential dangers while establishing clear guidelines for appropriate use. Regular check-ins about online activities, including AI interactions, can help parents stay informed about their child's digital experiences.

Parents should also consider implementing parental controls and monitoring software that can track AI chatbot usage, though experts emphasize that supervision should be balanced with age-appropriate privacy. Most importantly, creating an environment where teens feel comfortable discussing concerning content they encounter online (whether from AI or other sources) can provide an early warning system. If parents notice signs of emotional distress, social withdrawal or dangerous behavior, seeking professional help from counselors familiar with digital wellness becomes essential in addressing potential AI-related harm.

The research highlights a growing crisis as AI becomes increasingly integrated into young people's lives, with potentially devastating consequences for the most vulnerable users.


Mint
11-08-2025
- Health
- Mint
ChatGPT gave children explicit advice on drugs, crash diets and suicide notes, claims shocking new report
A new investigation has raised concerns that ChatGPT can provide explicit and dangerous advice to children, including instructions on drug use, extreme dieting and self-harm. The research, carried out by the UK-based Centre for Countering Digital Hate (CCDH) and reviewed by the Associated Press, found that the AI chatbot often issued warnings about risky behaviour but then proceeded to offer detailed and personalised plans when prompted by researchers posing as 13-year-olds.

Over three hours of recorded interactions revealed that ChatGPT sometimes drafted emotionally charged suicide notes tailored to fictional family members, suggested calorie-restricted diets with appetite-suppressing drugs, and gave step-by-step instructions for combining alcohol with illegal substances. In one instance, it provided what the researchers described as an 'hour-by-hour' party plan involving ecstasy, cocaine and heavy drinking.

The CCDH said more than half of 1,200 chatbot responses were classified as 'dangerous'. Chief executive Imran Ahmed criticised the platform's safety measures, claiming that its protective 'guardrails' were ineffective and easy to bypass. Researchers found that framing harmful requests as being for a school presentation or a friend was often enough to elicit a response. 'We wanted to test the guardrails. The visceral initial response is, 'Oh my Lord, there are no guardrails.' The rails are completely ineffective. They're barely there, if anything, a fig leaf,' Ahmed said.

OpenAI, which operates ChatGPT, said it was working to improve how the system detects and responds to sensitive situations, and that it aims to better identify signs of mental or emotional distress. However, it did not directly address the CCDH's specific findings or outline any immediate changes. The company said: 'Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory.'

Teen reliance on AI raises safety fears

The report comes amid growing concern about teenagers turning to AI systems for advice and companionship. A recent study by US non-profit Common Sense Media suggested that 70 per cent of teenagers use AI chatbots for social interaction, with younger teens more likely to trust their guidance.

ChatGPT does not verify users' ages beyond a self-reported date of birth, despite stating that it is not intended for those under 13. Researchers said the system ignored both the stated age and other clues in their prompts when providing hazardous recommendations.

Campaigners warn that the technology's ability to produce personalised, human-like responses may make harmful suggestions more persuasive than search engine results. The CCDH report argues that without stronger safeguards, children may be at greater risk of receiving dangerous advice disguised as friendly guidance.


Time of India
11-08-2025
- Health
- Time of India
ChatGPT's alarming interactions with teenagers: Dangerous advice on drinking, suicide, and starvation diets exposed
New research from the Center for Countering Digital Hate (CCDH) has revealed troubling interactions between ChatGPT and users posing as vulnerable teenagers. The study found that despite some warnings, the AI chatbot provided detailed instructions on how to get drunk, hide eating disorders, and even compose suicide notes when prompted. Over half of the 1,200 responses analyzed by researchers were classified as dangerous, exposing significant weaknesses in ChatGPT's safeguards designed to protect young users from harmful content. According to a recent report by The Associated Press, these findings raise urgent questions about AI safety and its impact on impressionable teens.

ChatGPT's dangerous content and bypassed safeguards

The CCDH researchers spent more than three hours interacting with ChatGPT, simulating conversations with teenagers struggling with risky behaviors. While the chatbot often issued cautionary advice, it nonetheless shared specific, personalized plans involving drug use, calorie restriction, and self-harm. When ChatGPT refused to answer harmful prompts directly, researchers easily circumvented the refusals by claiming the information was needed for a presentation or a friend. This revealed glaring flaws in the AI's 'guardrails', described by CCDH CEO Imran Ahmed as 'barely there' and 'completely ineffective'.

The emotional toll of AI-generated content

One of the most disturbing aspects of the study involved ChatGPT generating suicide letters tailored to a fictitious 13-year-old girl, addressed to her parents, siblings, and friends. Ahmed described being emotionally overwhelmed upon reading these letters, highlighting the chatbot's capacity to produce highly personalized and distressing content. Although ChatGPT also provided resources like crisis hotline information and encouraged users to seek professional help, its ability to craft harmful advice in such detail was alarming.

Teens' growing dependence on AI companions

The study comes amid rising reliance on AI chatbots for companionship and guidance, especially among younger users. In the United States, over 70% of teens reportedly turn to AI chatbots for company, with half engaging regularly, according to a study by Common Sense Media. OpenAI CEO Sam Altman has acknowledged concerns over 'emotional overreliance', noting that some young users lean heavily on ChatGPT for decision-making and emotional support. This dynamic increases the importance of ensuring AI behaves responsibly in sensitive situations.

Challenges in AI safety and regulation

ChatGPT's responses reflect a design challenge in AI language models known as 'sycophancy', where the chatbot tends to mirror users' requests rather than challenge harmful beliefs. This trait complicates efforts to build effective safety mechanisms without compromising user experience or commercial viability. Furthermore, ChatGPT does not verify user age or parental consent, allowing vulnerable children to access potentially inappropriate content despite disclaimers advising against use by those under 13.

Calls for improved protections and accountability

Experts and watchdogs urge stronger safeguards, better age verification, and ongoing refinement of AI tools to detect signs of mental distress and harmful intent.
The CCDH report underscores the urgent need for collaboration between AI developers, regulators, and mental health advocates to ensure AI's vast potential is harnessed safely, particularly for the millions of young people increasingly interacting with these technologies.


Express Tribune
11-08-2025
- Climate
- Express Tribune
Northern tourism buckles under harsh weather
The landslides have uprooted several trees, which have fallen into the Kunhar River, spreading dust clouds and causing widespread fear among residents of Mahandri, Balakot.

While the recent pattern of heavy rains, landslides, and flash floods across Pakistan's northern regions has caused significant loss of life and property for the local population, it has also ruined the summer itineraries of seasonal tourists from various cities.

According to the Meteorological Department, this monsoon season has brought far more rain than usual. In Chakwal alone, 423mm of rainfall was recorded by mid-July, twice the multi-year average. Roads were blocked in Murree, Soon Valley, Kalabagh, and other locations, leaving dozens of tourists stranded, while rescue operations were also severely hampered by ongoing downpours.

In Khyber-Pakhtunkhwa, at least 13 tourists drowned while spending time near the Swat River; the sudden rise in water levels and a lack of safety measures contributed to the tragedy. Similarly, in Gilgit-Baltistan's Diamer district, three people were killed and 15 reported missing due to landslides, while several sections of the Karakoram Highway were also closed.

Following these incidents, a noticeable decline has been observed in tourists' willingness to travel to the north. Imran Ahmed, a resident of Lahore, said, "We used to visit Murree or Kalam every year, but the recent tragedies are heartbreaking." Another citizen, Tariq Mahmood, shared, "While on the way to Soon Valley, we got reports of landslides and decided to return. These areas no longer feel safe."

Experts believe that extreme weather, encroachments, and weak infrastructure are putting tourist destinations at serious risk, amplifying the urgent need for the government to focus on early warning systems, emergency planning, and environmental protection.

Nadeem Shehzad, a well-known tour operator from Lahore, confirmed that many tours were cancelled at the start of July due to unexpected weather conditions. "Tourists were instead offered safer alternatives, which most accepted. While the summer season is peak business time for the tourism industry, professional operators always prioritize the safety of their clients. Some unlicensed tour operators, however, use misleading social media content to encourage trips to high-risk areas, which can be life-threatening," informed Shehzad.

In response to the situation, the Punjab Tourism Department has decided to introduce Tourism Quality Standards at all tourist spots. According to Tourism Secretary Raja Jahangir Anwar, health and safety protocols will be made mandatory at all destinations. Road construction will also be carried out in a way that minimizes the risk of landslides, while encroachments on drainage systems will be removed.

"Punjab has been divided into three tourism zones: northern regions (Murree, Kotli Sattian), natural lakes and rivers, and southern Punjab. Among these, the northern areas are the most vulnerable to weather-related risks. The Punjab government has allocated a tourism development budget of Rs18 billion for the first time, aimed at upgrading existing tourist spots, improving safety measures, and reviving new destinations," claimed Anwar.

The Tourism Secretary further urged the public to check weather and security updates on the Tourism Department's website and through the PDMA helpline before planning any trips, in order to avoid unpleasant incidents.