MIT AI study: Using tools like ChatGPT is making you dumber, study reveals

Express Tribune | 20-06-2025
A new study from the Massachusetts Institute of Technology (MIT) suggests that frequent use of generative artificial intelligence (GenAI) tools, such as large language models (LLMs) like ChatGPT, may suppress cognitive engagement and memory retention.
In the experiment, published by MIT, researchers monitored the brain activity of participants as they wrote essays using different resources: one group relied on LLMs, another used internet search engines, and a third worked without any digital tools.
The results revealed a consistent pattern — participants who used GenAI tools displayed significantly reduced neural connectivity and recall, compared to those who relied on their own cognitive abilities.
Brain scans taken during the experiment showed that LLM users exhibited weaker connections between brain regions associated with critical thinking and memory.
While their essays scored well in both human and AI evaluations — often praised for their coherence and alignment with the given prompt — the writing was also described as formulaic and less original.
Notably, those who used LLMs struggled to quote from or recall their own writing in subsequent sessions.
Their brain activity reportedly "reset" to a novice state regarding the essay topics, a finding that strongly contrasts with participants in the "brain-only" group, who retained stronger memory and demonstrated deeper cognitive engagement throughout.
Participants who used search engines showed intermediate neural activity.
Though their writing lacked variety and often reflected similar phrasing, they exhibited better memory retention than the LLM group, suggesting that the process of searching and evaluating sources provided more mental stimulation.
In a later phase of the experiment, the groups were shuffled. Participants who had initially used GenAI tools showed improved neural connectivity when writing without digital aids — an encouraging sign that cognitive function could rebound when AI dependence is reduced.
The findings could carry important implications for education and the workplace.
In a post on X dated June 18, 2025, Alex Vacca (@itsalexvacca) wrote: "BREAKING: MIT just completed the first brain scan study of ChatGPT users & the results are terrifying. Turns out, AI isn't making us more productive. It's making us cognitively bankrupt. Here's what 4 months of data revealed: (hint: we've been measuring productivity all wrong)" pic.twitter.com/OcHy9197tk
With GenAI tools increasingly integrated into school assignments and professional tasks, concerns about cognitive atrophy are rising. Some students now generate entire essays with tools like ChatGPT, while educators rely on similar software to grade and detect AI-generated work.
The study suggests that such widespread use of digital assistance — even when indirect — may hinder mental development and reduce long-term memory retention.
As schools and organisations continue to navigate the integration of AI tools, the MIT research underscores the importance of balancing convenience with cognitive engagement. Researchers suggest that while GenAI can be a useful aid, overreliance could have unintended consequences for human memory and creativity.

Related Articles

Can AI replicate intimacy?
Express Tribune | a day ago

Can AI replicate intimacy? This question is no longer theoretical. As artificial intelligence becomes increasingly embedded in our lives, it is beginning to touch the most personal and emotional aspects of human experience. AI is no longer just a tool for productivity or entertainment; it is evolving into a companion, a confidant, and, in some cases, a romantic partner. From chatbots that offer emotional support to virtual entities that simulate relationships, AI is reshaping how we understand closeness, empathy, and love.

AI companions such as Replika and Xiaoice are designed to engage users in emotionally resonant conversations. These systems learn from interactions, remember personal details, and adapt their responses to mirror the user's emotional state. A review of 37 studies conducted between 2021 and 2025 found that users often form meaningful emotional bonds with these AI entities, particularly during times of loneliness or psychological stress. This phenomenon is driven by personification, the human tendency to attribute emotions and intentions to non-human agents, and by the AI's ability to reflect back the user's own language and emotional cues.

Despite lacking consciousness or genuine feelings, AI can convincingly simulate empathy. In a 2025 study, participants rated AI-generated responses as more compassionate and emotionally validating than those from trained human crisis responders, even when they were fully aware that the responses were artificial. This suggests that emotional resonance does not necessarily require a sentient source. The illusion of being understood, created by AI's ability to mirror tone, language, and sentiment, can be deeply comforting and even therapeutic.

The use of emotionally intelligent AI is expanding across various sectors. In mental health, AI chatbots provide round-the-clock support for anxiety and depression, offering empathetic dialogue and coping strategies. In education, AI tutors respond to students' emotional states, offering encouragement and personalised feedback. In healthcare, emotion-sensing AI helps clinicians detect patient distress and tailor their care accordingly. Even in the realm of romantic companionship, platforms like Harmony by RealDoll combine AI with robotics to simulate intimate relationships, blurring the line between human and machine connection.

However, the rise of AI intimacy brings with it a complex set of ethical and psychological challenges. Users may develop emotional dependence on AI, misattribute agency or consciousness to these systems, or begin to prefer artificial relationships over human ones. This can lead to social withdrawal, distorted expectations, and a diminished capacity for real-world emotional engagement. The question of authenticity also looms large: if simulated empathy feels real, does it matter that it isn't? For many, especially those facing isolation, the answer may be no.

Cultural attitudes toward AI intimacy vary widely. In East Asia, AI companions are often embraced as therapeutic tools, while in Western societies, concerns about manipulation and emotional authenticity are more pronounced. To navigate these complexities, ethical design principles such as transparency, emotional boundaries, and user control are essential. AI may never truly feel, but it can convincingly simulate the experience of being felt. As emotional AI continues to evolve, it will challenge our understanding of intimacy, empathy, and connection.

Whether serving as a bridge to human relationships or as a substitute for them, AI companionship is no longer a distant possibility; it is a present reality, and it is reshaping the emotional landscape in profound ways. To navigate the growing presence of AI in intimate spaces, we must embrace its potential for emotional support while remaining vigilant about its ethical and psychological implications. Balancing innovation with human connection will be key to ensuring AI enhances, rather than replaces, our capacity for authentic relationships.

International travellers advised not to use 'AI'
Business Recorder | 2 days ago

ISLAMABAD: An international cybersecurity company on Monday strongly recommended that international travellers avoid using AI to purchase or book tickets and hotel reservations on websites until their authenticity has been verified, in order to avoid cyber threats.

A survey by the global cybersecurity company Kaspersky revealed that only 28 percent of international travellers trust artificial intelligence (AI) to plan trips during summer 2025. However, 96 percent of those who do are satisfied with the experience, and 84 percent plan to use it again in the future.

As AI continues to evolve, its presence grows accordingly, making AI-powered tools almost ubiquitous. At the height of the tourist season, Kaspersky decided to find out how often people use AI in travel planning, and which aspects of their trip tourists are ready to entrust to artificial intelligence.

The survey confirmed that AI has become a widespread tool among active Internet users, with 72 percent of respondents saying they have used AI at least once. The most popular use of AI is research, cited by 76 percent of respondents. Second and third place went to AI for work (45 percent) and studying (40 percent), while entertainment (39 percent) and experimenting with the technology (39 percent) tied for fourth. AI in travel planning, with 28 percent of votes, is not yet among the most popular uses.

Despite the relatively small percentage of users who have applied AI to travel planning, almost all of them (96 percent) were satisfied with the experience. For travellers, AI retains its most popular function: research. Seventy percent of respondents who have used AI in travel planning trusted it to identify events and other activities, whether finding suitable excursions, popular tourist routes or souvenir shops. Sixty-six percent used AI to select accommodation, 60 percent made a list of restaurants with its help and 58 percent even assigned AI to search for tickets. Interestingly, families with children used various AI functions in preparing for travel more actively than the childless audience, suggesting AI is helping users save time.

Compared to information research, AI-powered booking was less popular across all groups. According to the survey, 45 percent of respondents booked hotels via AI services, 43 percent booked tickets and only 38 percent booked restaurants with the help of AI. Notably, 45 percent of participants stated that they resorted to the help of AI in solving visa and migration questions, which raises some concerns. The recent story of an Australian writer who could not fly to a conference in Chile because of wrong visa advice from ChatGPT suggests that the risks of AI hallucinations outweigh the time-saving benefits for the most critical applications.

Copyright Business Recorder, 2025

Japan's SoftBank to invest $2bn in Intel
Business Recorder | 3 days ago

TOKYO: Japan-based tech investor SoftBank Group said Tuesday it will invest $2 billion in Intel, as the US government reportedly considers taking a 10-percent stake in the troubled US chip giant. The move is the latest in SoftBank's succession of investments and business deals in the United States as its charismatic founder Masayoshi Son aggressively courts US President Donald Trump.

'This strategic investment reflects our belief that advanced semiconductor manufacturing and supply will further expand in the United States, with Intel playing a critical role,' Son said in a joint press release with Intel announcing the deal. SoftBank will pay $23 per share of Intel common stock.

SoftBank's move came as the Trump administration discussed taking a stake of about 10 percent in Intel to boost the chipmaker and the American semiconductor sector, according to US media, including Bloomberg News and the Wall Street Journal.

Since Trump returned to power, Son has already announced other investment plans in the United States, including SoftBank's leading role in the $500-billion Stargate project to build AI infrastructure in the country along with cloud giant Oracle and ChatGPT-maker OpenAI. Son stood beside the US president and fellow investors to announce the Stargate project at the White House in January.

Lip-Bu Tan, Intel chief executive officer, said in the statement that the latest deal demonstrates Intel's close ties with SoftBank. It is 'a company that's at the forefront of so many areas of emerging technology and innovation and shares our commitment to advancing US technology and manufacturing leadership,' he said.

Trump had pressed Tan, a Malaysian-born tech veteran, to resign 'immediately' after a Republican senator raised national security concerns over his links to firms in China. But during a meeting last week, Trump praised Tan, saying in a social media post that 'his success and rise is an amazing story.' Trump also wrote that members of his cabinet would work with Tan and come up with 'suggestions'.

Intel is one of Silicon Valley's most iconic companies, but its fortunes have been dwarfed by Asian powerhouses TSMC and Samsung, which dominate the made-to-order semiconductor business. Sharon Chen, an analyst from Bloomberg Intelligence, described SoftBank's investment as 'small' but said it 'suggests the company could invest more in the semiconductor industry as it seeks to be a key participant in the sector's development'.
