‘Don't ask what AI can do for us, ask what it is doing to us': are ChatGPT and co harming human intelligence?

Yahoo, 15 May 2025
Imagine for a moment you are a child in 1941, sitting the common entrance exam for public schools with nothing but a pencil and paper. You read the following: 'Write, for no more than a quarter of an hour, about a British author.'
Today, most of us wouldn't need 15 minutes to ponder such a question. We'd get the answer instantly by turning to AI tools such as Google Gemini, ChatGPT or Siri. Offloading cognitive effort to artificial intelligence has become second nature, but with mounting evidence that human intelligence is declining, some experts fear this impulse is driving the trend.
Of course, this isn't the first time that new technology has raised concerns. Studies already show how mobile phones distract us, social media damages our fragile attention spans and GPS has rendered our navigational abilities obsolete. Now, here comes an AI co-pilot to relieve us of our most cognitively demanding tasks – from handling tax returns to providing therapy and even telling us how to think.
Where does that leave our brains? Free to engage in more substantive pursuits or wither on the vine as we outsource our thinking to faceless algorithms?
'The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence,' says psychologist Robert Sternberg at Cornell University, who is known for his groundbreaking work on intelligence, 'but that it already has.'
The argument that we are becoming less intelligent draws from several studies. Some of the most compelling are those that examine the Flynn effect – the observed increase in IQ over successive generations throughout the world since at least 1930, attributed to environmental factors rather than genetic changes. But in recent decades, the Flynn effect has slowed or even reversed.
In the UK, James Flynn himself showed that the average IQ of a 14-year-old dropped by more than two points between 1980 and 2008. Meanwhile, global study the Programme for International Student Assessment (PISA) shows an unprecedented drop in maths, reading and science scores across many regions, with young people also showing poorer attention spans and weaker critical thinking.
Nevertheless, while these trends are empirical and statistically robust, their interpretations are anything but. 'Everyone wants to point the finger at AI as the boogeyman, but that should be avoided,' says Elizabeth Dworak, at Northwestern University Feinberg School of Medicine, Chicago, who recently identified hints of a reversal of the Flynn effect in a large sample of the US population tested between 2006 and 2018.
Intelligence is far more complicated than that, and is probably shaped by many variables. Micronutrients such as iodine are known to affect brain development and intellectual abilities; likewise, changes in prenatal care, the number of years in education, pollution, pandemics and technology all influence IQ, making it difficult to isolate the impact of any single factor. 'We don't act in a vacuum, and we can't point to one thing and say, "That's it,"' says Dworak.
Still, while AI's impact on overall intelligence is challenging to quantify (at least in the short term), concerns about cognitive offloading diminishing specific cognitive skills are valid – and measurable.
When considering AI's impact on our brains, most studies focus on generative AI (GenAI) – the tool that has allowed us to offload more cognitive effort than ever before. Anyone who owns a phone or a computer can access almost any answer, write any essay or computer code, produce art or photography – all in an instant. There have been thousands of articles written about the many ways in which GenAI has the potential to improve our lives, through increased revenues, job satisfaction and scientific progress, to name a few. In 2023, Goldman Sachs estimated that GenAI could boost annual global GDP by 7% over a 10-year period – an increase of roughly $7tn.
The fear comes, however, from the fact that automating these tasks deprives us of the opportunity to practise those skills ourselves, weakening the neural architecture that supports them. Just as neglecting our physical workouts leads to muscle deterioration, outsourcing cognitive effort atrophies neural pathways.
One of our most vital cognitive skills at risk is critical thinking. Why consider what you admire about a British author when you can get ChatGPT to reflect on that for you?
Research underscores these concerns. Michael Gerlich at SBS Swiss Business School in Kloten, Switzerland, tested 666 people in the UK and found a significant correlation between frequent AI use and lower critical-thinking skills – with younger participants who showed higher dependence on AI tools scoring lower in critical thinking compared with older adults.
Similarly, a study by researchers at Microsoft and Carnegie Mellon University in Pittsburgh, Pennsylvania, surveyed 319 people in professions that use GenAI at least once a week. While the technology improved their efficiency, it also inhibited critical thinking and fostered long-term overreliance, which the researchers predict could result in a diminished ability to solve problems without AI support.
'It's great to have all this information at my fingertips,' said one participant in Gerlich's study, 'but I sometimes worry that I'm not really learning or retaining anything. I rely so much on AI that I don't think I'd know how to solve certain problems without it.' Indeed, other studies have suggested that the use of AI systems for memory-related tasks may lead to a decline in an individual's own memory capacity.
This erosion of critical thinking is compounded by the AI-driven algorithms that dictate what we see on social media. 'The impact of social media on critical thinking is enormous,' says Gerlich. 'To get your video seen, you have four seconds to capture someone's attention.' The result? A flood of bite-size messages that are easily digested but don't encourage critical thinking. 'It gives you information that you don't have to process any further,' says Gerlich.
By being served information rather than acquiring that knowledge through cognitive effort, the ability to critically analyse the meaning, impact, ethics and accuracy of what you have learned is easily neglected in the wake of what appears to be a quick and perfect answer. 'To be critical of AI is difficult – you have to be disciplined. It is very challenging not to offload your critical thinking to these machines,' says Gerlich.
Wendy Johnson, who studies intelligence at Edinburgh University, sees this in her students every day. She emphasises that it is not something she has tested empirically but believes that students are too ready to substitute independent thinking with letting the internet tell them what to do and believe.
Without critical thinking, it is difficult to ensure that we consume AI-generated content wisely. It may appear credible, particularly as you become more dependent on it, but don't be fooled. A 2023 study in Science Advances showed that, compared with humans, GPT-3 produces not only information that is easier to understand but also more compelling disinformation.
* * *
Why does that matter? 'Think of a hypothetical billionaire,' says Gerlich. 'They create their own AI and they use that to influence people because they can train it in a specific way to emphasise certain politics or certain opinions. If there is trust and dependency on it, the question arises of how much it is influencing our thoughts and actions.'
AI's effect on creativity is equally disconcerting. Studies show that AI tends to help individuals produce more creative ideas than they can generate alone. However, across the whole population, AI-concocted ideas are less diverse, which ultimately means fewer 'Eureka!' moments.
Sternberg captures these concerns in a recent essay in the Journal of Intelligence: 'Generative AI is replicative. It can recombine and re-sort ideas, but it is not clear that it will generate the kinds of paradigm-breaking ideas the world needs to solve the serious problems that confront it, such as global climate change, pollution, violence, increasing income disparities, and creeping autocracy.'
To ensure that you maintain your ability to think creatively, you might want to consider how you engage with AI – actively or passively. Research by Marko Müller from the University of Ulm in Germany shows a link between social media use and higher creativity in younger people but not in older generations. Digging into the data, he suggests this may be to do with the difference in how people who were born in the era of social media use it compared with those who came to it later in life. Younger people seem to benefit creatively from idea-sharing and collaboration, says Müller, perhaps because they're more open with what they share online compared with older users, who tend to consume it more passively.
Alongside what happens while you use AI, you might spare a thought to what happens after you use it. Cognitive neuroscientist John Kounios from Drexel University in Philadelphia explains that, just like anything else that is pleasurable, our brain gets a buzz from having a sudden moment of insight, fuelled by activity in our neural reward systems. These mental rewards help us remember our world-changing ideas and also modify our immediate behaviour, making us less risk averse – this is all thought to drive further learning, creativity and opportunities. But insights generated from AI don't seem to have such a powerful effect in the brain. 'The reward system is an extremely important part of brain development, and we just don't know what the effect of using these technologies will have downstream,' says Kounios. 'Nobody's tested that yet.'
There are other long-term implications to consider. Researchers have only recently discovered that learning a second language, for instance, helps delay the onset of dementia for around four years, yet in many countries, fewer students are applying for language courses. Giving up a second language in favour of AI-powered instant-translation apps might be the reason, but none of these can – so far – claim to protect your future brain health.
As Sternberg warns, we need to stop asking what AI can do for us and start asking what it is doing to us. Until we know for sure, the answer, according to Gerlich, is to 'train humans to be more human again – using critical thinking, intuition – the things that computers can't yet do and where we can add real value.'
We can't expect the big tech companies to help us do this, he says. No developer wants to be told their program works too well, or that it makes it too easy for a person to find an answer. 'So it needs to start in schools,' says Gerlich. 'AI is here to stay. We have to interact with it, so we need to learn how to do that in the right way.' If we don't, we won't just make ourselves redundant, but our cognitive abilities too.

Related Articles

How AI could bring back American exceptionalism

Axios

Investors are flocking to Europe, not for vacation, but for returns. But without the market power of artificial intelligence companies, they may have to quickly come back to America.

Why it matters: Much of Europe's outperformance this year stems from a weakening dollar, not stronger fundamentals. Without the AI boom that is fueling the resurgence in U.S. stocks, the old world may struggle to keep up.

What they're saying: "People want to be in the U.S. markets in the AI trade," Stuart Kaiser, head of U.S. equity strategy at Citi, told Axios. "It's a market you have to be involved in."

Zoom out: The American stock market has outpaced gains in global equities for the last 15-plus years. Europe saw a 17.9% total return in U.S. dollar terms in the first half of 2025, but just 8.8% in local currencies, according to Vanguard. That gap signals the rally was largely driven by currencies and requires a catch-up on fundamentals to continue driving its growth.

By the numbers: Nvidia alone is worth an amount equal to 14% of the total U.S. GDP, according to Robert Ruggirello, chief investment officer of Brave Eagle Wealth Management. "Not owning it is…painful," he wrote in a note.

Of note: The European index was outperforming the S&P 500 for most of 2025 until recently. S&P 500 tech stocks are now outperforming both the broader U.S. and European markets.

Zoom in: U.S. firms are embracing AI at scale; Europe is behind. European firms lag U.S. peers by 45% to 75% on AI adoption, according to McKinsey research last fall. Over the past 50 years, the U.S. has created 241 companies worth over $10 billion from scratch, while Europe has created just 14, Andrew McAfee of MIT told the Wall Street Journal.

Between the lines: The slower AI momentum in Europe reflects regulatory pressure, higher corporate taxes and fragmented markets, barriers the U.S. lacks. Even European AI successes often funnel into U.S. markets: DeepMind, the British AI firm behind Gemini, sold to Alphabet in 2014.

Reality check: Strategists still see growth opportunities in Europe and beyond, given fiscal stimulus and potentially better economic growth. Trends like the shift to global assets can "last a lot longer than you think," Ryan Detrick, chief market strategist at the Carson Group, told Axios. We're only about seven months into this rotation.

The bottom line: AI is powering the American stock market. If you're seeking diversification, that may be hard to find in the U.S. indices.

I Asked ChatGPT What ‘Generational Wealth' Really Means — and How To Start Building It

Yahoo

The term 'generational wealth' gets thrown around a lot these days, but what does it actually mean? And more importantly, how can regular Americans start building it? GOBankingRates asked ChatGPT for a comprehensive breakdown, and its response was both enlightening and surprisingly actionable.

Defining Generational Wealth: ChatGPT's Take

When ChatGPT was asked to define generational wealth, it explained it as 'assets and financial resources that are passed down from one generation to the next, providing ongoing financial stability and opportunities for future family members.' But it went deeper, explaining that true generational wealth isn't just about leaving money behind; it's about creating a financial foundation that can grow and sustain multiple generations.

The AI emphasized that generational wealth is more than just inheritance money. It's about creating a system where each generation can build upon the previous one's success, creating a compounding effect that grows over time. This includes not just financial assets, but also financial knowledge, business relationships and strategic thinking skills.

ChatGPT's Blueprint for Building Generational Wealth

When asked for a practical roadmap, ChatGPT provided a comprehensive strategy broken down into actionable steps.

Start With Financial Education

ChatGPT emphasized that generational wealth begins with financial literacy — not just for yourself, but for your entire family. Here is what it recommended:

• Teach children about money management from an early age.
• Create family financial discussions and goal-setting sessions.
• Ensure all family members understand investment principles.
• Build a culture of financial responsibility.

It stressed that many wealthy families fail to maintain their wealth across generations because they don't adequately prepare their children with the knowledge and mindset needed to manage money effectively.

Build a Diversified Investment Portfolio

ChatGPT recommended a multi-asset approach to wealth building:

• Real estate investments for appreciation and passive income
• Stock market investments through index funds and individual stocks
• Business ownership or equity stakes
• Alternative investments like real estate investment trusts or commodities.

It explained that diversification is crucial because different asset classes perform differently in various economic conditions. This approach helps protect wealth from market volatility while providing multiple income streams.

Establish Legal Protection Structures

The AI strongly emphasized the importance of estate planning tools as well. Here are a few it highlighted:

• Wills and trusts to control asset distribution
• Life insurance policies to provide immediate liquidity
• Business succession planning for family enterprises
• Tax optimization strategies to minimize transfer costs.

ChatGPT explained that without proper legal structures, wealth can be decimated by taxes, legal disputes or poor decision-making by inexperienced heirs. It stressed that these structures must be created while you're alive and able to make strategic decisions.

Consider Dynasty Trusts

For families with substantial assets, ChatGPT recommended exploring dynasty trusts. It explained these as vehicles that can preserve wealth across multiple generations while providing tax benefits. These trusts can potentially last forever in certain states, creating a truly perpetual wealth-building vehicle.

Overcoming Common Obstacles

ChatGPT identified several barriers to building generational wealth as well. First, it acknowledged that starting from different financial positions affects strategy: those with limited resources need to focus first on building basic wealth before thinking about generational strategies.

ChatGPT also warned against increasing spending as income grows. The AI suggested automating savings and investments to prevent lifestyle inflation from derailing wealth-building efforts.

It also highlighted the complexity of tax planning for generational wealth, noting that improper planning can result in significant tax penalties that erode wealth transfer. This makes professional guidance particularly important for families with substantial assets, and the cost of professional advice is typically far outweighed by the value created through proper planning.

Starting Small: ChatGPT's Practical First Steps

For those just beginning, ChatGPT provided a few accessible starting points:

• Build an emergency fund (three to six months' worth of expenses).
• Maximize employer 401(k) matching.
• Start a Roth IRA for tax-free growth.
• Purchase adequate life insurance.
• Create a basic will.
• Begin investing in index funds.
• Consider real estate when financially ready.

It emphasized that these steps can be started by anyone, regardless of income level, and that the key is consistency over time.

The Importance of Values and Purpose

One of ChatGPT's most interesting insights was about the importance of instilling values and purpose alongside wealth. The AI explained that families with strong values and a clear sense of purpose are more likely to maintain their wealth across generations. This can include teaching children about responsibility and work ethic and involving family members in charitable activities.

It also noted that generational wealth isn't primarily about the amount you leave behind. It's about creating a financial foundation and knowledge system that empowers future generations to build upon your efforts. The process of building generational wealth requires patience, discipline and strategic thinking, but the AI emphasized that with the right approach, any family can begin building wealth that will benefit generations to come. The key is to start now, stay consistent and always keep the long-term vision in mind.

How to spot AI writing — 5 telltale signs to look for

Tom's Guide

AI writing is everywhere now, flooding social media, websites and emails, so you're probably encountering it more than you realize. That email you just received, the product review you're reading, or the Reddit post that sounds oddly corporate might all be generated by chatbots like ChatGPT, Gemini or Claude. The writing often appears polished, maybe too polished, hitting every point perfectly while maintaining an unnaturally enthusiastic tone throughout. While AI detectors promise to catch machine-generated text, they're often unreliable and miss the subtler signs that reveal when algorithms have done the heavy lifting. You don't need fancy software or expensive tools to spot it. The clues are right there in the writing itself.

There's nothing wrong with using AI to improve your writing. These tools excel at checking grammar, suggesting better word choices and helping with tone, especially if English isn't your first language. AI can help you brainstorm ideas, overcome writer's block or polish rough drafts. The key difference is using AI to enhance your own knowledge and voice rather than having it generate everything from scratch. The problems arise when people let AI do all the thinking and just copy-paste whatever it produces without adding their own insights, and that's when you start seeing the telltale signs below.

AI writing tools consistently rely on the same attention-grabbing formulae. You'll see openings like "Have you ever wondered...", "Are you struggling with..." or "What if I told you..." followed by grand promises. This happens because AI models learn from countless blog posts and marketing copy that use these exact patterns. Real people mix it up more: they might jump straight into a story, share a fact, or just start talking about the topic without all the setup. When you spot multiple rhetorical questions bunched together or openings that feel interchangeable across different topics, you're likely reading AI-generated content.

You'll also see phrases like "many studies show", "experts agree" or "a recent survey found" without citations of actual sources. AI tends to speak in generalities like "a popular app" or "leading industry professionals" instead of naming specific companies or real people. Human writers naturally include concrete details: actual brand names, specific statistics and references to particular events or experiences they've encountered. When content lacks these specific, verifiable details, it's usually because AI doesn't have access to real, current information or personal experience.

AI writing often sounds impressive at first glance but becomes hollow when you examine it closely. You'll find excessive use of business jargon like "game-changing", "cutting-edge", "revolutionary" and "innovative" scattered throughout without any explanation of what these terms actually mean. The writing might use sophisticated vocabulary but fail to communicate ideas clearly. A human expert will tell you exactly why one method works better than another, or admit when something is kind of a pain to use. If the content feels like it was written to impress rather than inform, AI likely played a major role.

AI writing also maintains an unnaturally consistent, enthusiastic tone throughout entire pieces. Every sentence flows smoothly into the next, problems are always simple to solve and there's rarely any acknowledgment that things can be complicated or frustrating. Real people get frustrated, go off on tangents and have strong opinions. Human writing naturally varies in tone: sometimes confident, sometimes uncertain, occasionally annoyed or conversational. When content sounds relentlessly positive and avoids any controversial takes, you're probably reading AI-generated material.

Finally, the lack of real experience shows up most clearly in missing practical insight. AI might correctly explain the basics of complex topics, but it often misses the practical complications that anyone who's actually done the thing knows about. The advice sounds textbook-perfect but lacks the "yeah, but in reality..." insights that make content actually useful. Human experts naturally include caveats, mention common pitfalls, or explain why standard advice doesn't always work in practice. When content presents complex topics as straightforward without acknowledging the messy realities, it's usually because real expertise is missing.

People love to point at em dashes as proof of AI writing, but that's unfair to a perfectly good punctuation mark. Writers have used em dashes for centuries to add drama, create pauses or insert extra thoughts into sentences. The real issue isn't that AI uses them, it's how AI uses them incorrectly. You'll often see AI throwing in em dashes where a semicolon would work better, or using them to create false drama in boring sentences. Real writers use em dashes purposefully to enhance their meaning, while AI tends to sprinkle them in as a lazy way to make sentences sound more sophisticated. Before you dismiss something as AI-written just because of punctuation, check whether those dashes actually serve a purpose or if they're just there for show.
