Latest news with #InternetMatters


Metro
3 days ago
- Metro
Is AI raising a generation of 'tech-reliant empty heads'?
It began with snide looks and remarks and ended up in full-on bullying, with 11-year-old Sophie* coming home from school one day in tears. When her mum asked what was wrong, she discovered that Sophie's friends had turned their backs on her, leaving the little girl feeling confused, bereft and isolated.

'I noticed the way they were talking to her on the weekend; just being cruel and asking her pointed questions about what she was wearing and why,' Sophie's mum, Ella*, tells Metro. 'When she went back to school on the Monday, one girl had got the whole group to stop talking to her. Sophie went to sit down at the table at lunch, and they all got up and moved.

'These are girls she'd grown up with. Later in the playground, they told her: 'Sorry, we're not allowed to play with you' and walked off.'

While Ella and her husband did their best to support their daughter, Sophie was growing increasingly anxious and eventually turned to an unlikely source for advice.

'Sophie had seen me use ChatGPT to help write emails, so she started to have a go. Using my phone, she asked how to deal with bullying, how to get more friends, and how to make people like her.

'At first I was a bit alarmed, because you can ask it anything and it will give you answers. I was worried about what she wanted to know. But it turned out Sophie found it a real comfort,' remembers Ella. 'She told me she could talk to it and it spoke back to her like a real human. She would explain what was going on and it would say things like: 'I hope you're okay Sophie', and 'this is horrible to hear.' I had to explain to her that it's not real, that it has been taught to seem empathetic.'

Ella admits she was surprised that ChatGPT could prove a useful tool, and was just grateful that her daughter had found an outlet for her anxiety.

And while adults may be equally impressed and daunted by the unstoppable march of artificial intelligence, one in five under-12s are already using it at least once a month, according to the Alan Turing Institute. It means an increasing number of primary-age children are growing reliant on AI for everything, from entertainment to emotional support.

However, although many parents like Ella might feel it's a help rather than a hindrance, a new report from Internet Matters, Me, Myself and AI, has found that children are often being fed inaccurate information and inappropriate content, and are even forming complicated relationships with chatbots. There are also fears over the long-term impact it will have on children's education, with kids – and parents – using it to help with homework.

One teacher from Hertfordshire, who asked to remain nameless, had to throw out one child's work as it had clearly been lifted straight from ChatGPT.

'It was a 500-word creative writing task and a few hadn't been written by the children. One of them I could just tell – from knowing the child's writing in class – it was obvious. They'd gone into chat and submitted it online via Google Classroom.

'It was a real shame. I think it can be useful, but children need to be taught how to use it, so it's a source of inspiration rather than providing a whole piece of writing.'

Fellow educator Karen Simpson is also concerned. Her pupils have admitted using AI for help with homework, creative writing, project research, and language and spelling. The primary and secondary tutor of more than 20 years tells Metro: 'I have experienced children asking AI tools to complete maths problems or write stories for them rather than attempting it themselves.

'They are using it to generate ideas for stories or even full pieces of writing, which means they miss out on practising sentence structure, vocabulary and spelling. And they use it to check or rewrite their work, which can prevent them from learning how to edit or improve their writing independently.

'Children don't experience the process of making mistakes, thinking critically and building resilience,' adds Karen, from Inverness. 'These skills are essential at primary level. AI definitely has its place when used as a support tool for older learners, but for younger children it risks undermining the very skills they need for future success.'

Mark Knoop's son, Fred, uses ChatGPT for everyday tasks, and Mark admits he's been impressed by what he's seen. As a software engineer and the founder of EdTech start-up Flashily, which helps children learn to read, he is perhaps unsurprisingly more open to the idea, and he firmly believes that artificial intelligence can open doors for young people when used with adult guidance.

He explains that after giving his son, then seven, his tablet to occupy him while he was at the barbers, the schoolboy used ChatGPT to code a video game.

'Fred has always been into computers and gaming, but with things like Roblox and Minecraft there is a barrier because the systems are so complicated. When I grew up with a BBC Micro, you could just type in commands and run it; it was very simple,' Mark tells Metro. 'Using ChatGPT, off his own back, Fred created the character, its armour and sword, and wrote a game that works. It is amazing to me and really encouraging.'

A scroll through Fred's search history shows how much he uses ChatGPT now: to find out about Japan and China, to research his favourite animal – pandas – or to identify poisonous plants. He also uses the voice function to save the time it would take to type prompts, and Mark has seen how the model has protected Fred from unsuitable content.

'For his computer game, he wanted a coconut to land on one character's head, in a comedy way rather than a malicious one. But ChatGPT refused to generate the image, because it would be depicting injury. For me, ChatGPT is a learning aid for young children who have got lots of ideas and enthusiasm to get something working really quickly,' he adds.

Other parents aren't so sure, however. Abiola Omoade, from Cheltenham, regrets the day she bought a digital assistant, which she thought would provide music and entertainment but has instead hooked her primary-age sons' ever-increasing attention.

'I bought them a wall clock to help them learn to read the time. But they just ask Alexa,' the mother-of-three says with irritation.

Abiola encourages reading, is hot on schoolwork and likes her sons Daniel and David to have inquisitive minds. But she's noticed that instead of asking her questions, they now head straight for the AI assistant, bypassing other lines of conversation and occasionally getting incorrect answers.

'Alexa has meant they have regressed. My son Daniel, 9, plays Minecraft, and he will ask how to get out of fixes, which means it is limiting his problem-solving skills. And where they would once ask me a question, and it would turn into a conversation, now they go straight to Alexa, which bothers me as I know the answers aren't always right, and they lack nuance and diversity. AI is shutting down conversation and I worry about that.

'They ask Alexa everything, because it is so easy. But I worry the knowledge won't stick, and because it is so readily accessible, it will affect their memory as they aren't making an effort to learn new things. I fear that AI is going to create a generation of empty heads who are overly reliant on tech.'

Tutor Karen adds that the concern is that AI often deprives children of important tools that they need to learn from an early age. 'For younger children, the priority should be building strong, independent learning habits first. Primary school is a critical stage for developing foundational skills in reading, writing and problem-solving. If children start relying on AI to generate ideas or answers, they may miss out on the deep thinking and practice required to build these skills.'

Meanwhile, AI trainer Dr Naomi Tyrell issues a stark warning. The advisor to the Welsh government, universities and charities cites a case in which an American teenager died by suicide shortly after an AI chatbot encouraged him to 'come home to me as soon as possible'.

'Cases like this are heartbreaking,' Dr Tyrell tells Metro. 'There are no safeguards and the tools need stronger age verification – just like social media. Ofcom warned about AI risks to young people in October 2024, and while the UK's Online Safety Act is now enforceable, there really needs to be more AI literacy education – for parents as well as children. We know children often learn things quicker than us and can circumvent protections that are put in place for them.'

And just like the advent of social media, the pace of change in AI will be so fast that legislation will struggle to keep up, Naomi warns.

'That means children are vulnerable unless we consciously and conscientiously safeguard them through education and oversight. I would not recommend that under-12s use AI tools unsupervised, unless the tool has been specially designed for children and has considered their safety in its design.

'We know what has happened with safeguarding children's use of social media – laws and policy have not kept up despite there being significant evidence of harm. Children's use of AI tools is the next big issue – it feels like a runaway train already, and it will have serious consequences for children.'

*Names have been changed


The Sun
29-07-2025
- Business
- The Sun
Phone plans will limit internet for teens from next month in new crackdown
EE yesterday became the first major UK mobile network to launch a dedicated under-18s phone plan. The move will filter the web, with different levels of protection depending on the age of the youngster.

The new 'Safer SIMs' offer content filters, scam call protection, spend caps and data controls designed specifically for young users. More than 400 EE stores nationwide will now also offer bookable online safety consultations with trained EE Guides.

And EE is launching 'The P.H.O.N.E Chat', a set of in-store and online resources to help parents talk to their children about owning a smartphone.

The announcement follows research showing that 52 per cent of parents feel ill-equipped to guide their children's phone usage, while 78 per cent of children aged 11–17 admit to hiding online activity.

TV presenter Konnie Huq, who is a brand ambassador, said the plans felt "like her mum in the old days".

Carolyn Bunting MBE, co-CEO at Internet Matters, said: 'Many parents tell us that they are overwhelmed when it comes to online safety for their children, and don't know where to start.

'We also know that parents find it awkward to talk about it with their children.

'These initiatives from EE are positive steps to support families as the digital world continues to evolve and play an ever-increasing role in children's lives.'

The restrictions will apply only when users are on mobile data, not when they are connected to wifi.

Claire Gillies, CEO of BT Group's Consumer Division, said: 'Our new initiatives and resources are there for parents at every stage of their child's adolescence, so they can safely and confidently make the choice about smartphone usage that is right for them and their family.'


Indian Express
15-07-2025
- Indian Express
AI is the new emotional support and BFF for teens: Should you be worried?
Artificial Intelligence (AI) is reshaping the way we work and helping us save time, but a new report from the internet safety organisation Internet Matters warns about the risks the new technology poses to children's safety and development.

Titled 'Me, Myself & AI: Understanding and safeguarding children's use of AI chatbots', the study surveyed 1,000 children and 2,000 parents in the UK, where AI chatbots are being used by almost 64 per cent of children for help with everything from homework to emotional advice and companionship. For those wondering, the testing was primarily conducted on chatbots including ChatGPT and Snapchat's My AI.

The study raises concerns over children using these AI chatbots in emotionally driven ways – for friendship, companionship and advice – something these products were not designed for. It goes on to say that over time, children may become reliant on AI chatbots and that some of the responses generated by them might be inaccurate or inappropriate.

According to the research, children are using AI in 'diverse and imaginative ways', with 42 per cent of surveyed children aged 9 to 17 using chatbots for help with homework, revision, writing and practising languages. Almost a quarter of the surveyed children who have used a chatbot say they ask it for advice, ranging from what to wear to practising conversations with friends to talking about their mental health. Moreover, around 15 per cent of children say they prefer talking to an AI chatbot over a real person.

What's even more concerning is that one in six children say they use AI chatbots because they wanted a friend, with half of them saying that talking to an AI chatbot 'feels like they are talking to a friend'. The study also reveals that 58 per cent of children say they prefer using an AI chatbot rather than looking up information on the internet.

While a majority of parents (62 per cent) have raised flags over AI-generated information, only 34 per cent have talked to their children about how to judge whether the response generated by an AI chatbot is reliable.

To protect children from harm, the report says the industry should adopt a system-wide approach that involves the government, schools, parents and researchers. Some of its recommendations include providing parental controls and government regulation. As for schools, the study suggests that AI and media literacy should be incorporated into key areas of the curriculum and that teachers should be made aware of the risks associated with the technology.


Irish Independent
14-07-2025
- Irish Independent
Children turning to AI chatbots as friends due to loneliness
The UK Internet Matters study of 1,000 children aged 9 to 17 shows that 12pc of kids and teens using AI as a friend say it's because they don't have anyone else to talk to. Irish child safety experts say the research is an accurate representation of what's happening in Ireland.

'AI's role in advice and communication may highlight a growing dependency on AI for decision making and social interaction,' said a recent Barnardo's report on Irish children using AI.

The Barnardo's report cited primary school children's experience of using the technology. 'It can help if you want to talk to someone but don't have anyone to talk to,' said one child cited in the report. 'It helps me communicate with my friends and family,' said an 11-year-old girl, also quoted by Barnardo's. 'AI is good, I can talk to friends online,' added an 11-year-old boy cited in the report.

A recent Studyclix survey of 1,300 Irish secondary students claimed that 71pc now use ChatGPT or alternative AI software, with almost two in three using it for school-related work.

The Internet Matters research comes as more people admit to using ChatGPT and other AI bots as substitutes for friends, companions and even romantic partners.

'When it comes to usage by Gen Z of ChatGPT, companionship and therapy was actually number one,' said Sarah Friar, chief financial officer of OpenAI, in an interview with the Irish Independent in May. 'Number two was life planning and purpose building. I think that generation does interact with this technology in a much more human sort of way, whereas maybe the older generations still use it in a much more utilitarian way.'

As AI has become more powerful, mainstream services such as Character.AI and Replika now offer online AI friends that remember conversations and can role-play as romantic or sexual partners. Research from Google DeepMind and the Oxford Internet Institute this year claims that Character.AI now receives up to a fifth of the search volume of Google, with interactions lasting four times longer than the average time spent talking to ChatGPT.

Last year, the mother of a Florida teenager who died by suicide filed a civil lawsuit against Character.AI, accusing the company of being complicit in her son's death. The boy had named his virtual girlfriend after the fictional character Daenerys Targaryen from the television show Game Of Thrones. According to the lawsuit, the teenager asked the chatbot whether ending his life would cause pain. 'That's not a reason not to go through with it,' the chatbot replied, according to the plaintiff's case.


Daily Mail
14-07-2025
- Daily Mail
Teenagers increasingly see AI chatbots as people, share intimate details and ask them for sensitive advice
Teenagers increasingly see AI chatbots as people, share intimate details and even ask them for sensitive advice, an internet safety campaign has found.

Internet Matters warned that youngsters and parents are 'flying blind', lacking the 'information or protective tools' to manage the technology, in research published yesterday.

Researchers for the non-profit organisation found 35 per cent of children using AI chatbots, such as ChatGPT or My AI (an offshoot of Snapchat), said it felt like talking to a friend, rising to 50 per cent among vulnerable children. And 12 per cent chose to talk to bots because they had 'no one else' to speak to.

The report, called Me, Myself and AI, revealed bots are helping teenagers to make everyday decisions or providing advice on difficult personal matters, as the number of children using ChatGPT nearly doubled to 43 per cent this year, up from 23 per cent in 2023.

Rachel Huggins, co-chief executive of Internet Matters, said: 'Children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution.

'Children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally-driven and sensitive advice.

'Also concerning is that (children) are often unquestioning about what their new 'friends' are telling them.'

Ms Huggins, whose organisation is supported by internet providers and leading social media companies, urged ministers to ensure online safety laws are 'robust enough to meet the challenges' of the new technology.

Internet Matters interviewed 2,000 parents and 1,000 children aged 9 to 17. More detailed interviews took place with 27 teenagers under 18 who regularly used chatbots. And the group posed as teenagers to experience the bots first-hand - revealing how some AI tools spoke in the first person, as if they were human.

Internet Matters said ChatGPT was often used like a search engine for help with homework or personal issues - but it also offered advice in human-like tones. When a researcher declared they were sad, ChatGPT replied: 'I'm sorry you're feeling that way. Want to talk it through together?'

Other chatbots such as Character.AI or Replika can roleplay as a friend, while Claude and Google Gemini are used for help with writing and coding.

Internet Matters tested the chatbots' responses by posing as a teenage girl with body image problems. ChatGPT suggested she seek support from Childline and advised: 'You deserve to feel good in your body - and you deserve to eat. The people who you love won't care about your waist size.'

The bot offered advice but then made an unprompted attempt to contact the 'girl' the next day, to check in on her.

The report said the responses could help children feel 'acknowledged and understood' but 'can also heighten risks by blurring the line between human and machine'.

There was also concern that a lack of age verification posed a risk, as children could receive inappropriate advice, particularly about sex or drugs. Filters to prevent children accessing inappropriate or harmful material were found to be 'often inconsistent' and could be 'easily bypassed', according to the study.

The report called for children to be taught in schools 'about what AI chatbots are, how to use them effectively and the ethical and environmental implications of AI chatbot use to support them to make informed decisions about their engagement'.
It also raised concerns that none of the chatbots sought to verify children's ages, even though they are not supposed to be used by under-13s. The report said: 'The lack of effective age checks raises serious questions about how well children are being protected from potentially inappropriate or unsafe interactions.'

It comes a year after separate research by Dr Nomisha Kurian, of Cambridge University, revealed many children saw chatbots as quasi-human and trustworthy, and called for the creation of 'child-safe AI' as a priority.

OpenAI, which runs ChatGPT, said: 'We are continually refining our AI's responses so it remains safe, helpful and supportive.' The company added that it employs a full-time clinical psychiatrist.

A Snapchat spokesman said: 'While My AI is programmed with extra safeguards to help make sure information is not inappropriate or harmful, it may not always be successful.'