
'Don't ask what AI can do for us, ask what it is doing to us': are ChatGPT and co harming human intelligence?
Today, most of us wouldn't need 15 minutes to ponder such a question. We'd get the answer instantly by turning to AI tools such as Google Gemini, ChatGPT or Siri. Offloading cognitive effort to artificial intelligence has become second nature, but with mounting evidence that human intelligence is declining, some experts fear this impulse is driving the trend.
Of course, this isn't the first time that new technology has raised concerns. Studies already show how mobile phones distract us, social media damages our fragile attention spans and GPS has rendered our navigational abilities obsolete. Now, here comes an AI co-pilot to relieve us of our most cognitively demanding tasks – from handling tax returns to providing therapy and even telling us how to think.
Where does that leave our brains? Free to engage in more substantive pursuits or wither on the vine as we outsource our thinking to faceless algorithms?
'The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence,' says psychologist Robert Sternberg at Cornell University, who is known for his groundbreaking work on intelligence, 'but that it already has.'
The argument that we are becoming less intelligent draws from several studies. Some of the most compelling are those that examine the Flynn effect – the observed increase in IQ over successive generations throughout the world since at least 1930, attributed to environmental factors rather than genetic changes. But in recent decades, the Flynn effect has slowed or even reversed.
In the UK, James Flynn himself showed that the average IQ of a 14-year-old dropped by more than two points between 1980 and 2008. Meanwhile, the Programme for International Student Assessment (PISA), a global study, shows an unprecedented drop in maths, reading and science scores across many regions, with young people also showing poorer attention spans and weaker critical thinking.
Nevertheless, while these trends are empirical and statistically robust, their interpretations are anything but. 'Everyone wants to point the finger at AI as the boogeyman, but that should be avoided,' says Elizabeth Dworak, at Northwestern University Feinberg School of Medicine, Chicago, who recently identified hints of a reversal of the Flynn effect in a large sample of the US population tested between 2006 and 2018.
Intelligence is far more complicated than that, and probably shaped by many variables. Micronutrients such as iodine are known to affect brain development and intellectual abilities, and changes in prenatal care, years of education, pollution, pandemics and technology all influence IQ, making it difficult to isolate the impact of a single factor. 'We don't act in a vacuum, and we can't point to one thing and say, "That's it,"' says Dworak.
Still, while AI's impact on overall intelligence is challenging to quantify (at least in the short term), concerns about cognitive offloading diminishing specific cognitive skills are valid – and measurable.
When considering AI's impact on our brains, most studies focus on generative AI (GenAI) – the tool that has allowed us to offload more cognitive effort than ever before. Anyone who owns a phone or a computer can access almost any answer, write any essay or computer code, produce art or photography – all in an instant. There have been thousands of articles written about the many ways in which GenAI has the potential to improve our lives, through increased revenues, job satisfaction and scientific progress, to name a few. In 2023, Goldman Sachs estimated that GenAI could boost annual global GDP by 7% over a 10-year period – an increase of roughly $7tn.
The fear comes, however, from the fact that automating these tasks deprives us of the opportunity to practise those skills ourselves, weakening the neural architecture that supports them. Just as neglecting our physical workouts leads to muscle deterioration, outsourcing cognitive effort atrophies neural pathways.
One of our most vital cognitive skills at risk is critical thinking. Why consider what you admire about a British author when you can get ChatGPT to reflect on that for you?
Research underscores these concerns. Michael Gerlich at SBS Swiss Business School in Kloten, Switzerland, tested 666 people in the UK and found a significant correlation between frequent AI use and lower critical-thinking skills – with younger participants who showed higher dependence on AI tools scoring lower in critical thinking compared with older adults.
Similarly, a study by researchers at Microsoft and Carnegie Mellon University in Pittsburgh, Pennsylvania, surveyed 319 people in professions that use GenAI at least once a week. While it improved their efficiency, it also inhibited critical thinking and fostered long-term overreliance on the technology, which the researchers predict could result in a diminished ability to solve problems without AI support.
'It's great to have all this information at my fingertips,' said one participant in Gerlich's study, 'but I sometimes worry that I'm not really learning or retaining anything. I rely so much on AI that I don't think I'd know how to solve certain problems without it.' Indeed, other studies have suggested that the use of AI systems for memory-related tasks may lead to a decline in an individual's own memory capacity.
This erosion of critical thinking is compounded by the AI-driven algorithms that dictate what we see on social media. 'The impact of social media on critical thinking is enormous,' says Gerlich. 'To get your video seen, you have four seconds to capture someone's attention.' The result? A flood of bite-size messages that are easily digested but don't encourage critical thinking. 'It gives you information that you don't have to process any further,' says Gerlich.
When we are served information rather than acquiring knowledge through cognitive effort, the ability to critically analyse the meaning, impact, ethics and accuracy of what we have learned is easily neglected in favour of what appears to be a quick and perfect answer. 'To be critical of AI is difficult – you have to be disciplined. It is very challenging not to offload your critical thinking to these machines,' says Gerlich.
Wendy Johnson, who studies intelligence at Edinburgh University, sees this in her students every day. She emphasises that it is not something she has tested empirically but believes that students are too ready to substitute independent thinking with letting the internet tell them what to do and believe.
Without critical thinking, it is difficult to ensure that we consume AI-generated content wisely. It may appear credible, particularly as you become more dependent on it, but don't be fooled. A 2023 study in Science Advances showed that, compared with humans, GPT-3 produces not only information that is easier to understand but also more compelling disinformation.
Why does that matter? 'Think of a hypothetical billionaire,' says Gerlich. 'They create their own AI and they use that to influence people because they can train it in a specific way to emphasise certain politics or certain opinions. If there is trust and dependency on it, the question arises of how much it is influencing our thoughts and actions.'
AI's effect on creativity is equally disconcerting. Studies show that AI tends to help individuals produce more creative ideas than they can generate alone. However, across the whole population, AI-concocted ideas are less diverse, which ultimately means fewer 'Eureka!' moments.
Sternberg captures these concerns in a recent essay in the Journal of Intelligence: 'Generative AI is replicative. It can recombine and re-sort ideas, but it is not clear that it will generate the kinds of paradigm-breaking ideas the world needs to solve the serious problems that confront it, such as global climate change, pollution, violence, increasing income disparities, and creeping autocracy.'
To ensure that you maintain your ability to think creatively, you might want to consider how you engage with AI – actively or passively. Research by Marko Müller from the University of Ulm in Germany shows a link between social media use and higher creativity in younger people but not in older generations. Digging into the data, he suggests this may be to do with the difference in how people who were born in the era of social media use it compared with those who came to it later in life. Younger people seem to benefit creatively from idea-sharing and collaboration, says Müller, perhaps because they're more open with what they share online compared with older users, who tend to consume it more passively.
Alongside what happens while you use AI, you might spare a thought for what happens after you use it. Cognitive neuroscientist John Kounios from Drexel University in Philadelphia explains that, just like anything else that is pleasurable, our brain gets a buzz from having a sudden moment of insight, fuelled by activity in our neural reward systems. These mental rewards help us remember our world-changing ideas and also modify our immediate behaviour, making us less risk averse – all of which is thought to drive further learning, creativity and opportunities. But insights generated from AI don't seem to have such a powerful effect in the brain. 'The reward system is an extremely important part of brain development, and we just don't know what the effect of using these technologies will have downstream,' says Kounios. 'Nobody's tested that yet.'
There are other long-term implications to consider. Researchers have only recently discovered that learning a second language, for instance, helps delay the onset of dementia by around four years, yet in many countries, fewer students are applying for language courses. Giving up a second language in favour of AI-powered instant-translation apps might be the reason, but none of these can – so far – claim to protect your future brain health.
As Sternberg warns, we need to stop asking what AI can do for us and start asking what it is doing to us. Until we know for sure, the answer, according to Gerlich, is to 'train humans to be more human again – using critical thinking, intuition – the things that computers can't yet do and where we can add real value.'
We can't expect the big tech companies to help us do this, he says. No developer wants to be told their program works too well, that it makes it too easy for a person to find an answer. 'So it needs to start in schools,' says Gerlich. 'AI is here to stay. We have to interact with it, so we need to learn how to do that in the right way.' If we don't, we risk making not just ourselves redundant, but our cognitive abilities too.