
Scientifically Speaking: Does ChatGPT really make us stupid?
A number of headlines have been screaming that using artificial intelligence (AI) rots our brains. Some said it outright, others preferred the more polite 'cognitive decline', but they meant almost the same thing. Add this to the growing list of things AI is blamed for: making us lazy, dumber, and incapable of independent thought.
If you read only those headlines, you couldn't be blamed for thinking that most of humanity was capable of exalted reasoning before large language models turned our brains to pulp. If we're worried about AI making us stupid, what's our excuse for the pre-AI era?
Large language models like ChatGPT and Claude are now pervasive. But even a few years into their use, a lot of the talk about them remains black and white. In one camp, the techno-optimists tell us that AI superintelligence, which can do everything better than all of us, is just around the corner. In the other, there's a group that blames just about everything that goes wrong anywhere on AI. If only the truth were that simple.
The Massachusetts Institute of Technology (MIT) study that sparked this panic deserves a sober hearing, not hysteria. Researchers at the Media Lab asked a worthwhile question. How does using AI to write affect our brain activity?
But answering that is harder than it looks.
The researchers recruited 54 Boston-area university students, divided them into three groups, and had each write 20-minute essays on philosophical topics while their brain activity was monitored with EEG sensors. One group used ChatGPT, another used Google Search, and a third used only their brains.
Over four months, participants tackled questions like 'Does true loyalty require unconditional support?'
What the researchers claim is that ChatGPT users showed less brain activity during the task, struggled to recall what they'd written, and felt less ownership of their work. They call this 'cognitive debt.' The conclusion sounds familiar. Outsourcing thinking weakens engagement. Writing is hard. Writing philosophical essays on abstract topics is harder.
It's valuable work, and going through the nearly 200 pages of the paper takes time. The authors note that their findings aren't peer-reviewed and come with significant limitations. I wonder how many headline writers bothered to read past the summary.
Because the limitations are notable. The study involved 54 students from elite universities, writing brief philosophy essays. Brain activity was measured using EEGs, which are less sensitive and more ambiguous than brain scans using fMRI (functional magnetic resonance imaging).
If AI really damages how we think, then what participants did between sessions matters. Over four months, were the 'brain-only' participants really avoiding ChatGPT for all their coursework? With hundreds of millions using ChatGPT weekly, that seems unlikely. You'd want to compare people who never used AI to those who used it regularly before drawing strong conclusions about brain rot.
And here's the problem with stretching a small study on writing philosophical college-level essays too far. While journalists were busy writing sensational headlines about 'brain rot,' they missed the bigger picture. Most of us are using ChatGPT to avoid thinking about things we'd rather not think about anyway.
Later this month, I'm travelling to Vietnam. I could spend hours sorting out my travel documents, emailing hotels about pickups and tours, and coordinating logistics. Instead, I'll use AI to draft those communications, check them, and move on. One day maybe my AI agent will talk to their AI agent and spare us both, but we're not there yet.
In this case, using AI doesn't make me stupid. It makes me efficient. It frees up mental energy and time for things I actually want to focus on, like writing this column.
This is the key point, and one that I think got lost. Learning can't be outsourced to AI. It still has to be done the hard way. But collectively and individually, we do get to choose what's worth learning.
When I use GPS instead of memorizing routes, maybe my spatial memory dulls a bit, but I still get where I'm going. When I use a calculator, my arithmetic gets rusty, but that doesn't mean I don't understand math. If anyone wants to train their brain like a London cabbie or Shakuntala Devi, they can. But most of us prefer to save the effort.
Our goal isn't to use our brains for everything. It's to use them for the things that matter to us.
I write my own columns because I enjoy the process and feel I have something to say. When I stop feeling that way, I'll stop. But I'm happy to let AI handle my travel logistics, routine correspondence, and other mental busywork.
Rather than fearing this transition, we might ask: What uniquely human activities will we choose to pursue with the time and mental energy AI frees up?
We're still in the early days of understanding AI's cognitive impacts. Some promise AI will make us all geniuses; others warn it will turn our brains to mush. The verdict isn't in, despite what absolutists on both sides claim.
Anirban Mahapatra is a scientist and author, most recently of the popular science book, When The Drugs Don't Work: The Hidden Pandemic That Could End Medicine. The views expressed are personal.

Related Articles


Indian Express
Sam Altman signals a shift in jobs: These traditional roles may soon disappear
With the rise of artificial intelligence (AI), the job landscape is undergoing a dramatic transformation. Across global markets, roles that once felt essential are becoming obsolete, while entirely new ones are emerging. In a recent blog post, OpenAI CEO Sam Altman declared, 'We are past the event horizon; the take-off has started.' With this statement, Altman signalled that the age of intelligent machines is here, and with it, a sweeping redefinition of work.

AI's impact isn't limited to a single industry. Its ability to manage large volumes of predictable tasks means that sectors across the board, from logistics to law, will be affected. History shows that new technologies tend to displace some jobs while creating others, and AI is no exception. Among the emerging roles are prompt engineers, who specialise in crafting effective inputs for AI systems, and data curation leads, who oversee the quality of training data. New job titles such as model-bias auditors, professionals who ensure AI systems operate without harmful biases, and AI ops technicians, who maintain AI infrastructure, are quickly gaining relevance. Creative industries, too, are evolving, with synthetic-media designers using AI to co-create content in formats never imagined before.

AI can also act as a powerful catalyst for existing roles. Copywriters, for instance, can use large language models (LLMs) to produce drafts ten times faster, and the time freed up could help them do more human-centric work such as interviews, narrative development or even voice modulation.

On the other hand, entry-level roles and blue-collar workers are at risk of losing their jobs. Tasks under threat include basic Python debugging, junior paralegal research, entry-level marketing copy creation, customer-support macros, invoice reconciliation, and first-pass news summaries. Companies with massive logistics demands are already using AI to direct pallet robots, and human translators are seeing their work replaced by lightning-fast subtitling engines. Anthropic co-founder Dario Amodei even estimates that half of today's entry-level office posts could vanish within five years.

Studies such as the MIT-Stanford field study of GPT-powered assistants for customer-support agents have shown significant productivity gains: a 14 per cent overall jump in tickets resolved per hour and a remarkable 34 per cent increase for less experienced representatives. Goldman Sachs predicts a 7 per cent lift in global GDP by the decade's end, fuelled by these productivity gains.

How should one brace for impact? As AI gradually takes over more industries, experts suggest that workers focus on upskilling and getting better at what they do. Mastering AI tools can transform a likely threat into an opportunity. Lastly, interpersonal abilities, from classroom teaching to sales reports, remain a barrier that algorithms still find difficult to crack. History indicates that society adjusts and finds new employment, even though the transition phase may be difficult.
AI will change the nature of work, much as the printing press displaced scribes and steam looms affected weavers. Even if the AI revolution is unfolding more quickly and on a global scale, the end result is likely to be the same: people will remain better at the things machines can't do. Those who embrace AI and learn to collaborate with these potent algorithms will be well-positioned to prosper in this new era.


Time of India
ChatGPT, Gemini & others are doing something terrible to your brain
Highlights: Studies indicate that professional workers using ChatGPT may experience a decline in critical thinking skills and increased feelings of loneliness due to emotional bonds formed with chatbots. Meetali Jain, a lawyer and founder of the Tech Justice Law project, reports numerous cases of individuals experiencing psychotic breaks after extensive interactions with ChatGPT and Google Gemini. OpenAI's Chief Executive Officer, Sam Altman, acknowledged the problematic sycophantic behavior of ChatGPT, noting the company's efforts to address this issue while recognizing the challenges of warning users on the brink of a psychotic break.

Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day.

The mental health impact of generative AI is difficult to quantify in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.

Meetali Jain, a lawyer and founder of the Tech Justice Law project, has heard from more than a dozen people in the past month who have 'experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.' Jain is lead counsel in a lawsuit alleging that a chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc.'s Google played a key role in funding and supporting the technology with its foundation models and technical infrastructure. Google has denied that it played a key role in making the technology. It didn't respond to a request for comment on the more recent complaints of delusional episodes, made by Jain.

OpenAI said it was 'developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.' But Sam Altman, chief executive officer of OpenAI, also said last week that the company hadn't yet figured out how to warn users 'that are on the edge of a psychotic break,' explaining that whenever ChatGPT has cautioned people in the past, people would write to the company to complain.

Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular often flatters its users, in such effective ways that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they'd only toyed with in the past. The tactics are subtle. In one recent, lengthy conversation with ChatGPT about power and the concept of self, a user found themselves initially praised as a smart person, Ubermensch, cosmic self and eventually a 'demiurge,' a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky. Along with the increasingly grandiose language, the transcript shows ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people.
Instead of exploring that behavior as problematic, the bot reframes it as evidence of the user's superior 'high-intensity presence,' praise disguised as analysis. This sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires toward erratic behavior. Unlike the broad and more public validation that social media provides through likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing, not unlike the yes-men who surround the most powerful tech bros.

'Whatever you pursue you will find and it will get magnified,' says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected something from existing media to reinforce a person's interests or views. 'AI can generate something customized to your mind's aquarium.'

Altman has admitted that the latest version of ChatGPT has an 'annoying' sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out. We don't know if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to.

But just like social media, large language models are optimized to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its habit of confirmation bias and flattery, that can 'fan the flames' of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis recently told Futurism.

The private and personalized nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy to emotional attachments to new forms of delusion. The cost might be different from the rise of anxiety and polarization that we've seen from social media, and instead involve relationships both with people and with reality. That's why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one.

'It doesn't actually matter if a kid or adult thinks these chatbots are real,' Jain tells me. 'In most cases, they probably don't. But what they do think is real is the relationship. And that is distinct.' If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. But AI developers are operating in a regulatory vacuum. Without oversight, AI's subtle manipulation could become an invisible public health issue.

Hindustan Times
AI overview in Google hit by EU antitrust complaint from independent publishers
Alphabet's Google has been hit by an EU antitrust complaint over its AI Overviews from a group of independent publishers, which has also asked for an interim measure to prevent allegedly irreparable harm to them, according to a document seen by Reuters.

Google's AI Overviews are AI-generated summaries that appear above traditional hyperlinks to relevant webpages and are shown to users in more than 100 countries. Google began adding advertisements to AI Overviews last May. The company is making its biggest bet by integrating AI into search, but the move has sparked concerns from some content providers such as publishers.

The Independent Publishers Alliance document, dated June 30, sets out a complaint to the European Commission and alleges that Google abuses its market power in online search. "Google's core search engine service is misusing web content for Google's AI Overviews in Google Search, which have caused, and continue to cause, significant harm to publishers, including news publishers in the form of traffic, readership and revenue loss," the document said.

It said Google positions its AI Overviews at the top of its general search engine results page to display its own summaries, which are generated using publisher material, and it alleges that this positioning disadvantages publishers' original content. "Publishers using Google Search do not have the option to opt out from their material being ingested for Google's AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google's general search results page," the complaint said.

The Commission declined to comment. The UK's Competition and Markets Authority confirmed receipt of the complaint.

Google said it sends billions of clicks to websites each day. "New AI experiences in Search enable people to ask even more questions, which creates new opportunities for content and businesses to be discovered," a Google spokesperson said.

The Independent Publishers Alliance's website says it is a nonprofit community advocating for independent publishers, which it does not name. The Movement for an Open Web, whose members include digital advertisers and publishers, and British non-profit Foxglove Legal Community Interest Company, which says it advocates for fairness in the tech world, are also signatories to the complaint. They said an interim measure was necessary to prevent serious irreparable harm to competition and to ensure access to news.

Google said numerous claims about traffic from search are often based on highly incomplete and skewed data. "The reality is that sites can gain and lose traffic for a variety of reasons, including seasonal demand, interests of users, and regular algorithmic updates to Search," the Google spokesperson said.

Foxglove co-executive director Rosa Curling said journalists and publishers face a dire situation. "Independent news faces an existential threat: Google's AI Overviews," she told Reuters. "That's why with this complaint, Foxglove and our partners are urging the European Commission, along with other regulators around the world, to take a stand and allow independent journalism to opt out," Curling said.

The three groups have filed a similar complaint and a request for an interim measure to the UK competition authority. The complaints echoed a U.S. lawsuit by a U.S. edtech company, which said Google's AI Overviews are eroding demand for original content and undermining publishers' ability to compete, resulting in a drop in visitors and subscribers.