Latest news with #RobertSternberg


The Guardian
19-04-2025
- Science
‘Don't ask what AI can do for us, ask what it is doing to us': are ChatGPT and co harming human intelligence?
Imagine for a moment you are a child in 1941, sitting the common entrance exam for public schools with nothing but a pencil and paper. You read the following: 'Write, for no more than a quarter of an hour, about a British author.' Today, most of us wouldn't need 15 minutes to ponder such a question. We'd get the answer instantly by turning to AI tools such as Google Gemini, ChatGPT or Siri.

Offloading cognitive effort to artificial intelligence has become second nature, but with mounting evidence that human intelligence is declining, some experts fear this impulse is driving the trend. Of course, this isn't the first time that new technology has raised concerns. Studies already show how mobile phones distract us, social media damages our fragile attention spans and GPS has rendered our navigational abilities obsolete. Now, here comes an AI co-pilot to relieve us of our most cognitively demanding tasks – from handling tax returns to providing therapy and even telling us how to think. Where does that leave our brains? Free to engage in more substantive pursuits, or to wither on the vine as we outsource our thinking to faceless algorithms?

'The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence,' says psychologist Robert Sternberg at Cornell University, who is known for his groundbreaking work on intelligence, 'but that it already has.'

The argument that we are becoming less intelligent draws on several studies. Some of the most compelling are those that examine the Flynn effect – the observed increase in IQ over successive generations throughout the world since at least 1930, attributed to environmental factors rather than genetic changes. But in recent decades, the Flynn effect has slowed or even reversed. In the UK, James Flynn himself showed that the average IQ of a 14-year-old dropped by more than two points between 1980 and 2008.
Meanwhile, the Programme for International Student Assessment (PISA), a global study, shows an unprecedented drop in maths, reading and science scores across many regions, with young people also showing poorer attention spans and weaker critical thinking. Nevertheless, while these trends are empirical and statistically robust, their interpretations are anything but.

'Everyone wants to point the finger at AI as the boogeyman, but that should be avoided,' says Elizabeth Dworak, at Northwestern University Feinberg School of Medicine, Chicago, who recently identified hints of a reversal of the Flynn effect in a large sample of the US population tested between 2006 and 2018. Intelligence is far more complicated than that, and probably shaped by many variables: micronutrients such as iodine are known to affect brain development and intellectual abilities, while changes in prenatal care, years in education, pollution, pandemics and technology all influence IQ, making it difficult to isolate the impact of any single factor. 'We don't act in a vacuum, and we can't point to one thing and say, 'That's it,'' says Dworak.

Still, while AI's impact on overall intelligence is challenging to quantify (at least in the short term), concerns about cognitive offloading diminishing specific cognitive skills are valid – and measurable.

When considering AI's impact on our brains, most studies focus on generative AI (GenAI) – the tool that has allowed us to offload more cognitive effort than ever before. Anyone who owns a phone or a computer can access almost any answer, write any essay or computer code, produce art or photography – all in an instant. There have been thousands of articles written about the many ways in which GenAI has the potential to improve our lives, through increased revenues, job satisfaction and scientific progress, to name a few.
In 2023, Goldman Sachs estimated that GenAI could boost annual global GDP by 7% over a 10-year period – an increase of roughly $7tn. The fear, however, comes from the fact that automating these tasks deprives us of the opportunity to practise those skills ourselves, weakening the neural architecture that supports them. Just as neglecting our physical workouts leads to muscle deterioration, outsourcing cognitive effort atrophies neural pathways. One of our most vital cognitive skills at risk is critical thinking. Why consider what you admire about a British author when you can get ChatGPT to reflect on that for you?

Research underscores these concerns. Michael Gerlich at SBS Swiss Business School in Kloten, Switzerland, tested 666 people in the UK and found a significant correlation between frequent AI use and lower critical-thinking skills, with younger participants who showed higher dependence on AI tools scoring lower in critical thinking than older adults. Similarly, a study by researchers at Microsoft and Carnegie Mellon University in Pittsburgh, Pennsylvania, surveyed 319 people in professions that use GenAI at least once a week. While GenAI improved their efficiency, it also inhibited critical thinking and fostered long-term overreliance on the technology, which the researchers predict could result in a diminished ability to solve problems without AI support.

'It's great to have all this information at my fingertips,' said one participant in Gerlich's study, 'but I sometimes worry that I'm not really learning or retaining anything. I rely so much on AI that I don't think I'd know how to solve certain problems without it.' Indeed, other studies have suggested that the use of AI systems for memory-related tasks may lead to a decline in an individual's own memory capacity.

This erosion of critical thinking is compounded by the AI-driven algorithms that dictate what we see on social media.
'The impact of social media on critical thinking is enormous,' says Gerlich. 'To get your video seen, you have four seconds to capture someone's attention.' The result? A flood of bite-size messages that are easily digested but don't encourage critical thinking. 'It gives you information that you don't have to process any further,' says Gerlich. By being served information rather than acquiring that knowledge through cognitive effort, the ability to critically analyse the meaning, impact, ethics and accuracy of what you have learned is easily neglected in favour of what appears to be a quick and perfect answer. 'To be critical of AI is difficult – you have to be disciplined. It is very challenging not to offload your critical thinking to these machines,' says Gerlich.

Wendy Johnson, who studies intelligence at Edinburgh University, sees this in her students every day. She emphasises that it is not something she has tested empirically, but she believes that students are too ready to substitute independent thinking with letting the internet tell them what to do and believe.

Without critical thinking, it is difficult to ensure that we consume AI-generated content wisely. It may appear credible, particularly as you become more dependent on it, but don't be fooled. A 2023 study in Science Advances showed that, compared with humans, GPT-3 produces not only information that is easier to understand but also more compelling disinformation. Why does that matter? 'Think of a hypothetical billionaire,' says Gerlich. 'They create their own AI and they use that to influence people because they can train it in a specific way to emphasise certain politics or certain opinions. If there is trust and dependency on it, the question arises of how much it is influencing our thoughts and actions.'

AI's effect on creativity is equally disconcerting. Studies show that AI tends to help individuals produce more creative ideas than they can generate alone.
However, across the whole population, AI-concocted ideas are less diverse, which ultimately means fewer 'Eureka!' moments. Sternberg captures these concerns in a recent essay in the Journal of Intelligence: 'Generative AI is replicative. It can recombine and re-sort ideas, but it is not clear that it will generate the kinds of paradigm-breaking ideas the world needs to solve the serious problems that confront it, such as global climate change, pollution, violence, increasing income disparities, and creeping autocracy.'

To ensure that you maintain your ability to think creatively, you might want to consider how you engage with AI – actively or passively. Research by Marko Müller from the University of Ulm in Germany shows a link between social media use and higher creativity in younger people but not in older generations. Digging into the data, he suggests this may be to do with the difference in how people born in the era of social media use it compared with those who came to it later in life. Younger people seem to benefit creatively from idea-sharing and collaboration, says Müller, perhaps because they're more open with what they share online compared with older users, who tend to consume it more passively.

Alongside what happens while you use AI, you might spare a thought for what happens after you use it. Cognitive neuroscientist John Kounios from Drexel University in Philadelphia explains that, just like anything else that is pleasurable, our brain gets a buzz from having a sudden moment of insight, fuelled by activity in our neural reward systems. These mental rewards help us remember our world-changing ideas and also modify our immediate behaviour, making us less risk averse – all of which is thought to drive further learning, creativity and opportunities. But insights generated with AI don't seem to have such a powerful effect in the brain.
'The reward system is an extremely important part of brain development, and we just don't know what the effect of using these technologies will have downstream,' says Kounios. 'Nobody's tested that yet.'

There are other long-term implications to consider. Researchers have only recently discovered that learning a second language, for instance, helps delay the onset of dementia by around four years, yet in many countries, fewer students are applying for language courses. Giving up a second language in favour of AI-powered instant-translation apps might be the reason, but none of these can – so far – claim to protect your future brain health.

As Sternberg warns, we need to stop asking what AI can do for us and start asking what it is doing to us. Until we know for sure, the answer, according to Gerlich, is to 'train humans to be more human again – using critical thinking, intuition – the things that computers can't yet do and where we can add real value.' We can't expect the big tech companies to help us do this, he says. No developer wants to be told their program works too well, making it too easy for a person to find an answer. 'So it needs to start in schools,' says Gerlich. 'AI is here to stay. We have to interact with it, so we need to learn how to do that in the right way.' If we don't, we won't just make ourselves redundant – we'll make our cognitive abilities redundant, too.

Wall Street Journal
03-04-2025
How I Realized AI Was Making Me Stupid—and What I Do Now
I first suspected artificial intelligence was eating my brain while writing an email about my son's basketball coach. I wanted to complain to the local rec center – in French – that the coach kept missing classes. As an American reporter living in Paris, I've come to speak French pretty well, but the task was still a pain. I described the situation, in English, to ChatGPT. Within seconds, the bot churned out a French email that sounded both resolute and polite. I changed a few words and sent it.

I soon tasked ChatGPT with drafting complex French emails to my kids' school. I asked it to summarize long French financial documents. I even began asking it to dash off casual-sounding WhatsApp messages to French friends, emojis and all. After years of building up my ability to articulate nuanced ideas in French, AI had made this work optional. I felt my brain get a little rusty. I was surprised to find myself grasping for the right words to ask a friend for a favor over text. But life is busy. Why not choose the easy path?

AI developers have promised their tools will liberate humans from the drudgery of repetitive brain labor. They will unshackle our minds to think big. They will give us space to be more creative. But what if freeing our minds actually ends up making them lazy and weak?

'With creativity, if you don't use it, it starts to go away,' Robert Sternberg, a Cornell University professor of psychology, told me. Sternberg, who studies human creativity and intelligence, argues that AI has already taken a toll on both.

Smartphones are already blamed for what some researchers call 'digital dementia.' In study after study, scientists have shown that people who regularly rely on digital help for some tasks can lose the capacity to do them alone. The more we use GPS, the worse we become at finding our way on our own. The more we rely on our stored contacts, the less likely we are to know the phone numbers of close friends, or even our spouse's.
Most of us don't worry about not learning phone numbers anymore, if we're old enough to have ever learned them at all. But what happens when we start outsourcing core parts of our thinking to a machine? Such as understanding a text well enough to summarize it. Or finding the words that best express a thought. Is there a way to use these new AI tools without my brain becoming mush?

Like AI itself, research into its cognitive effects is in its infancy, but early results are inauspicious. A study published in January in the journal Societies found that frequent use of AI tools such as ChatGPT correlated with reduced critical thinking, particularly among younger users. In a new survey of knowledge workers, Microsoft researchers found that those with more confidence in generative AI engaged in less critical thinking when using it.

'Tools like GPS and generative AI make us cognitively lazy,' said Louisa Dahmani, a neuroscientist at Massachusetts General Hospital, who in 2020 showed that habitual use of GPS navigation reduces one's spatial memory. 'While it's possible to use these tools in a mindful manner, I think that most of us will take the path of least resistance,' she told me.

Adopting tools for brain work – a process called cognitive offloading – has largely been an engine of human progress. Ever since Sumerians scratched their debts into clay tablets, people have been using stone, papyrus and paper to outsource their memories and conceptions of everything from theorems to shopping lists.

Opportunities for cognitive offloading have multiplied lately. Paper calendars have long kept appointments; digital ones send alerts when they are happening. Calculators add up numbers; Excel spreadsheets balance whole budgets. Generative AI promises to boost our productivity further. Workers are increasingly using it to write emails, transcribe meetings or even – shhh – summarize those way-too-long documents your boss sends.
By late last year, around a quarter of all corporate press releases were likely written with AI help, according to a preprint paper led by Stanford Ph.D. students.

But these short-term gains may have long-term costs. George Roche, co-founder of Bindbridge, an AI molecular-discovery startup, told me he uploads several scientific papers a day, on topics from botany to chemistry, to an AI chatbot. It has been a boon, allowing Roche to stay on top of far more research than he could before. Yet this ease has begun to trouble him. 'I'm outsourcing my synthesis of information,' Roche told me. 'Am I going to lose that ability? Am I going to get less sharp?'

Hemant Taneja, chief executive of Silicon Valley venture-capital firm General Catalyst, which has invested in AI companies including Anthropic and Mistral AI, concedes that while AI technology offers real benefits, it may also compromise our thinking skills. 'Our ability to ask the right questions is going to weaken if we don't practice,' Taneja said.

These risks could be greater for young people if they start offloading to AI cognitive skills that they haven't yet honed for themselves. Yes, some studies show that AI tutors can help students if used well. But a Wharton School study last year found that high-school math students who studied with an AI chatbot that was willing to provide answers to math problems trailed a group of bot-free students on the AI-free final exam. 'There is a possible cyberpunk dystopian future where we become stupid and computers do all the thinking,' Richard Heersmink, a philosopher of technology at Tilburg University in the Netherlands, told me.

Let's not panic just yet. Humans have a history of issuing dire predictions about new technologies that later prove to be misplaced. More than 2,400 years ago, Socrates reportedly suggested that writing itself would 'produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory.'
It would be hard to suggest, however, that the benefits of writing and reading don't outweigh the costs. Since then, new technologies, from the printing press to the knitting machine to the telegraph, have all provoked objections about their impact on individuals and society – with varying degrees of prescience. But there is no stopping progress. With the AI future on our doorstep, what do scientists say we ought to do to keep our minds spry?

The basic principle is use it or lose it. Writing is a good way to practice thinking and reasoning precisely because it is hard. 'The question is what skills do we think are important and what skills do we want to relinquish to our tools,' said Hamsa Bastani, a professor at the Wharton School and an author of that study on the effects of AI on high-school math students. Bastani told me she uses AI to code, but makes sure to check its work and does some of her own coding too. 'It's like forcing yourself to take the stairs instead of taking the elevator.'

Mark Maitland, a senior partner at the consulting firm Simon-Kucher, said that although his staff now uses AI transcriptions of meetings, he asks his team to take handwritten notes, too, given research showing that taking notes by hand leads to better recall. 'It's easy to become lazy if you think something else is doing it for you,' Maitland told me.

I'm now leaning into mental effort in my own life, too. That means I make myself turn off the GPS in unfamiliar places. I take handwritten notes when I want to remember something. I also resist my kids' demands to ask ChatGPT for a made-up story and encourage them to create their own instead. I've even started writing my own French-language emails and WhatsApp messages again. At least most of the time. I'm still busy, after all.

Sam Schechner is a technology reporter in The Wall Street Journal's Paris bureau.