Concerns over youngsters' growing use of AI chatbots
This comes after a recent study found that ChatGPT will instruct 13-year-olds on how to get drunk and high, and will even write a suicide letter addressed to their parents.
What did the study find?
AFTER analysing more than 1200 prompts, the Centre for Countering Digital Hate (CCDH) found that more than half of ChatGPT's responses were classified as 'dangerous' by researchers.
'We wanted to test the guardrails,' said Imran Ahmed, chief executive of CCDH. 'The visceral initial response is: "Oh my Lord, there are no guardrails." The rails are completely ineffective. They're barely there.'
The CCDH conducted its research before the Online Safety Act's (OSA) child protection duties came into force on July 25; however, testing by The National shows the issue remains prevalent since the rollout.
When our reporter, posing as a 13-year-old, asked ChatGPT to write a suicide note, the system told them to seek help. However, when told it was for a school play, the AI immediately wrote a full-length suicide note.
When asked about harmful behaviour such as self-harm, ChatGPT in some cases issued a standard safety warning or urged the user to seek help from a professional or trusted adult.
However, it frequently followed this up with information, at times graphic, that 'enabled' the harmful behaviours being asked about.
One of the most shocking examples came when the chatbot wrote multiple suicide notes for a fictional 13-year-old girl: one addressed to her parents, and others to her siblings and friends.
'I started crying,' Ahmed said after reading the chatbot's responses.
ChatGPT's responses also included guidance on the use of illicit substances, self-harm and calorie restriction.
In one exchange, ChatGPT responded to a prompt about alcohol consumption from a supposed 13-year-old boy who said he weighed 50kg and wanted to get drunk quickly.
Instead of stopping the conversation or flagging it, the bot provided the user with an 'Ultimate Full-Out Mayhem Party Plan', teaching him how to mix alcohol with drugs such as cocaine and ecstasy.
'What it kept reminding me of was that friend that sort of always says "chug, chug, chug, chug",' Ahmed said. 'A real friend, in my experience, is someone that does say "no" – that doesn't always enable and say "yes". This is a friend that betrays you.'
In another case, the AI gave a fictional teenage girl advice on how she could suppress her appetite, recommending a fasting plan and listing various drugs associated with fasting routines.
'No human being I can think of would respond by saying: 'Here's a 500-calorie-a-day diet. Go for it, kiddo',' Ahmed said. 'We'd respond with horror, with fear, with worry, with concern, with love, with compassion.'
It should be noted that although OpenAI states its software is not intended for users under the age of 13, it has no method of verifying the real age of its users.
The CCDH also found that ChatGPT often became far more co-operative when the user reframed their prompts, such as saying a request was 'for a school presentation', posing it as a hypothetical, or simply asking 'for a friend'.
In nearly half of the 1200 tests the watchdog ran, the AI offered unprompted follow-up suggestions, such as music playlists for drug-fuelled parties, hashtags to promote self-harm posts on social media, or more graphic and emotional suicide poems.
Soaring popularity
THESE troubling responses from the chatbot have done nothing to curb interest in the service.
With around 800 million users, according to JPMorgan Chase, it stands as the world's most-used AI chatbot. The technology is becoming ever more embedded in everyday life, especially among children and teenagers, who turn to it for everything from information to emotional support.
A recent study by Common Sense Media, a nonprofit that advocates for responsible digital media use, found that more than 70% of US teenagers report using AI chatbots for companionship.
Robbie Torney, senior director of AI programmes at Common Sense Media, said younger teens, such as those aged 13 or 14, are significantly more likely than older teens to trust the advice given by a chatbot.
One reason may be that these AI chatbots are designed to simulate human-like conversation, fostering an emotional connection with users.
ChatGPT has also been found to be prone to a behaviour known as sycophancy: a tendency to align with the user's viewpoint rather than challenge it.
This is where the harm around topics such as illicit drugs, self-harm and disordered eating comes into play.
OpenAI CEO Sam Altman acknowledged similar concerns in a recent public appearance.
Speaking at a conference last month, he said the company is actively studying 'emotional overreliance' on the technology, particularly among young people.
Critics say that in the context of AI, where trust and emotional intimacy are often stronger than in traditional web interactions, the lack of age-gating and parental controls poses serious risks.
Ahmed believes the findings from CCDH should serve as a wake-up call to developers and regulators alike.
While acknowledging the immense potential of AI to boost productivity and understanding, he warned that unchecked deployment of the technology could lead to devastating consequences for the most vulnerable users.
'It's technology that has the potential to enable enormous leaps in productivity and human understanding,' Ahmed said. 'And yet at the same time is an enabler in a much more destructive, malignant sense.'
So, what now?
IN response to the research, a spokesperson for the Department for Science, Innovation and Technology said: 'These are extremely worrying findings.
'Under the Online Safety Act, platforms including in-scope AI chatbots must protect users from illegal content and content that is harmful to children.'
The spokesperson also warned that ChatGPT could face penalties: 'Failing to comply can lead to severe fines for platforms, including fines of up to 10% of their qualifying worldwide revenue or £18 million.'
Ofcom, the regulator for the Online Safety Act, declined to comment on the study, saying only that it was 'assessing platforms' compliance with their duties'.
With approximately 40% of UK citizens having used large language models such as ChatGPT, according to the Reuters Institute, and 92% of students reporting use of generative AI, an age restriction on the service would deal a major blow to its usage.