logo
Is AI raising a generation of 'tech-reliant empty heads'?

It began with snide looks and remarks and ended up in full-on bullying with 11-year-old Sophie* coming home from school one day in tears.
When her mum asked what was wrong, she discovered that Sophie's friends had turned their backs on her, leaving the little girl feeling confused, bereft and isolated.
'I noticed the way they were talking to her on the weekend; just being cruel and asking her pointed questions about what she was wearing and why,' Sophie's mum, Ella*, tells Metro.
'When she went back to school on the Monday, one girl had got the whole group to stop talking to her. Sophie went to sit down on the table at lunch, and they all got up and moved.
'These are girls she'd grown up with. Later in the playground, they told her: 'Sorry, we're not allowed to play with you' and walked off.'
While Ella and her husband did their best to support their daughter, Sophie was growing increasingly anxious and eventually turned to an unlikely source for advice.
'Sophie had seen me use ChatGPT to help write emails, so she started to have a go. Using my phone, she asked how to deal with bullying, how to get more friends, and how to make people like her.
'At first I was a bit alarmed because you can ask it anything and it will give you answers. I was worried about what she wanted to know. But it turned out Sophie found it a real comfort,' remembers Ella.
'She told me she could talk to it and it spoke back to her like a real human. She would explain what was going on and it would say things like: 'I hope you're okay Sophie', and 'this is horrible to hear.' I had to explain to her that it's not real, that it has been taught to seem empathetic.'
Ella admits she was surprised that ChatGPT could prove a useful tool and was just grateful that her daughter had found an outlet for her anxiety.
And while adults may be equally impressed and daunted by the unstoppable march of artificial intelligence, one in five under-12s are already using it at least once a month, according to the Alan Turing Institute.
It means an increasing number of primary-age children are growing reliant on AI for everything, from entertainment to emotional support. However, while many parents like Ella might feel it's a help rather than a hindrance, a new report from Internet Matters, Me, Myself and AI, has found that children are often being fed inaccurate information and inappropriate content, and are even forming complicated relationships with chatbots.
There are also fears over the long-term impact AI will have on children's education, with kids – and parents – using it to help with homework.
One teacher from Hertfordshire, who asked to remain anonymous, had to throw out one child's work because it had clearly been lifted straight from ChatGPT.
'It was a 500-word creative writing task and a few hadn't been written by the children. One of them I could just tell – from knowing the child's writing in class – it was obvious. They'd gone into chat and submitted it online via Google Classroom.
'It was a real shame. I think it can be useful but children need to be taught how to use it, so it's a source of inspiration, rather than providing a whole piece of writing.'
Fellow educator Karen Simpson is also concerned that her pupils have admitted using AI for help with homework, creative writing, project research, and language and spelling.
The primary and secondary tutor of more than 20 years tells Metro: 'I have experienced children asking AI tools to complete maths problems or write stories for them rather than attempting it themselves. They are using it to generate ideas for stories or even full pieces of writing, which means they miss out on practising sentence structure, vocabulary and spelling. And they use it to check or rewrite their work, which can prevent them from learning how to edit or improve their writing independently.
'Children don't experience the process of making mistakes, thinking critically and building resilience,' adds Karen, from Inverness. 'These skills are essential at primary level. AI definitely has its place when used as a support tool for older learners, but for younger children it risks undermining the very skills they need for future success.'
Mark Knoop's son, Fred, uses ChatGPT for everyday tasks, and Mark admits he's been impressed by what he's seen.
As a software engineer and founder of the edtech start-up Flashily, which helps children learn to read, Mark is perhaps unsurprisingly more open to the idea, and he firmly believes that artificial intelligence can open doors for young people when used with adult guidance.
He explains that after he gave his son, then seven, his tablet to occupy him at the barber's, the schoolboy used ChatGPT to code a video game.
'Fred has always been into computers and gaming, but with things like Roblox and Minecraft, there is a barrier because systems are so complicated. When I grew up with a BBC Micro, you could just type in commands and run it; it was very simple,' Mark tells Metro.
'Using ChatGPT, off his own bat, Fred created the character, its armour and sword, and wrote a game that works. It is amazing to me and really encouraging.'
A scroll through Fred's search history shows how much he uses ChatGPT now: to find out about Japan and China, to research his favourite animal – pandas – or to identify poisonous plants. He also uses the voice function to save the time it would take to type prompts, and Mark has seen how the model has protected Fred from unsuitable content.
'For his computer game, he wanted a coconut to land on one character's head, in a comedy way, rather than a malicious one. But ChatGPT refused to generate the image, because it would be depicting injury. For me, ChatGPT is a learning aid for young children who have got lots of ideas and enthusiasm to get something working really quickly,' he adds.
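The article doesn't reproduce Fred's game, but as a rough sense of scale, the short Python sketch below is the kind of thing a chatbot might hand back to a child who asks for 'a knight with armour and a sword, and a coconut that bonks him on the head' – a few dozen lines a parent could run and tweak alongside them. Every name and detail here is invented for illustration; it is not Fred's actual code.

```python
# Hypothetical illustration only: a minimal, child-friendly game of the sort
# a chatbot might generate from a simple prompt. Not Fred's real program.
import random


def play():
    # The hero starts with a sword and three points of armour.
    hero = {"name": "Sir Knight", "armour": 3, "has_sword": True}
    print(f"{hero['name']} sets off, sword in hand, wearing armour level {hero['armour']}.")

    for step in range(1, 6):
        # Roughly 40% of the time a coconut lands on the knight's helmet
        # in a slapstick way; the armour absorbs the bump.
        if random.random() < 0.4:
            hero["armour"] -= 1
            print(f"Step {step}: bonk! A coconut lands on the knight's helmet "
                  f"(armour now {hero['armour']}).")
        else:
            print(f"Step {step}: the knight marches on.")

        if hero["armour"] <= 0:
            print("The dented knight sits down for a rest. Game over!")
            return

    print("The knight reaches the castle. You win!")


if __name__ == "__main__":
    play()
```

Run from a terminal, it simply prints a short, randomised adventure – simple enough for a primary-age child to read line by line and change.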
Other parents aren't so sure, however. Abiola Omoade, from Cheltenham, regrets the day she bought a digital assistant, which she thought would provide music and entertainment but has instead claimed an ever-increasing share of her primary-age sons' attention.
'I bought them a wall clock to help them learn to read the time. But they just ask Alexa,' the mother-of-three says with irritation.
Abiola encourages reading, is hot on schoolwork and likes her sons Daniel and David to have inquisitive minds. But she's noticed that instead of asking her questions, they now head straight for the AI assistant, bypassing other lines of conversation and occasionally getting incorrect answers.
'Alexa has meant they have regressed. My son Daniel, 9, plays Minecraft, and he will ask how to get out of fixes, which means it is limiting his problem solving skills. And where they would once ask me a question, and it would turn into a conversation, now they go straight to Alexa, which bothers me as I know the answers aren't always right, and they lack nuance and diversity. AI is shutting down conversation and I worry about that.
'They ask Alexa everything, because it is so easy. But I worry the knowledge won't stick and, because it is so readily accessible, it will affect their memory as they aren't making an effort to learn new things. I fear that AI is going to create a generation of empty heads who are overly reliant on tech.'
Tutor Karen adds that the concern is that AI often deprives children of important skills they need to learn from an early age.
'For younger children, the priority should be building strong, independent learning habits first. Primary school is a critical stage for developing foundational skills in reading, writing, and problem-solving. If children start relying on AI to generate ideas or answers, they may miss out on the deep thinking and practice required to build these skills.'
Meanwhile, AI trainer Dr Naomi Tyrell issues a stark warning. The advisor to the Welsh government, universities and charities cites a case in which an American teenager died by suicide shortly after an AI chatbot encouraged him to 'come home to me as soon as possible.'
'Cases like this are heartbreaking', Dr Tyrell tells Metro.
'There are no safeguards and the tools need stronger age verification – just like social media. Ofcom warned about AI risks to young people in October 2024 and while the UK's Online Safety Act is now enforceable, there really needs to be more AI literacy education – for parents as well as children. We know children often learn things quicker than us and can circumvent protections that are put in place for them.'
And just like the advent of social media, the pace of change in AI will be so fast that legislation will struggle to keep up, Naomi warns.
'That means children are vulnerable unless we consciously and conscientiously safeguard them through education and oversight. I would not recommend that under-12s use AI tools unsupervised, unless the tool has been specially designed for children and has considered their safety in its design.
'We know what has happened with safeguarding children's use of social media – laws and policy have not kept up despite there being significant evidence of harm. Children's use of AI tools is the next big issue – it feels like a runaway train already, and it will have serious consequences for children.'
*Names have been changed