Essay aid or cognitive crutch? MIT study tests the cost of writing with AI

While LLMs reduce cognitive load, a new study warns they may also hinder critical thinking and memory retention - raising concerns about their growing role in learning and cognitive development
Rahul Goreja, New Delhi
A new study from the Massachusetts Institute of Technology (MIT) Media Lab has raised concerns about how artificial intelligence tools like ChatGPT may impact students' cognitive engagement and learning when used to write essays.
The research, led by Nataliya Kosmyna and a team from MIT and Wellesley College, examines how reliance on large language models (LLMs) such as ChatGPT compares to traditional methods like web searches or writing without any digital assistance. Using a combination of electroencephalogram (EEG) recordings, interviews, and text analysis, the study revealed distinct differences in neural activity, essay quality, and perceived ownership depending on the method used.
Note: EEG is a test that measures electrical activity in the brain.
Setup for cognitive engagement study
Fifty-four participants from five Boston-area universities were split into three groups: those using only ChatGPT (the LLM group), those using only search engines (the search group), and those writing without any tools (the brain-only group). Each participant completed three writing sessions. A subset also participated in a fourth session where roles were reversed: LLM users wrote without assistance, and brain-only participants used ChatGPT.
All participants wore EEG headsets to monitor brain activity during writing. Researchers also interviewed participants after each session and assessed the essays using both human markers and an AI judge.
Findings on neural engagement
EEG analysis showed that participants relying solely on their own cognitive abilities exhibited the highest levels of neural connectivity across the alpha, beta, theta, and delta bands — indicating deeper cognitive engagement. In contrast, LLM users showed the weakest connectivity. The search group fell in the middle.
'The brain connectivity systematically scaled down with the amount of external support,' the authors wrote. Notably, LLM-to-Brain participants in the fourth session continued to show under-engagement, suggesting a lingering cognitive effect from prior LLM use.
Essay structure, memory, and ownership
When asked to quote from their essays shortly after writing, 83.3 per cent of LLM users failed to do so. In comparison, only 11.1 per cent of participants in the other two groups struggled with this task. One participant noted that they 'did not believe the essay prompt provided required AI assistance at all,' while another described ChatGPT's output as 'robotic.'
Essay ownership also varied. Most brain-only participants reported full ownership, while responses in the LLM group ranged widely, from claims of full ownership to outright denial, with many taking only partial credit.
Despite this, essay satisfaction remained relatively high across all groups, with the search group being unanimously satisfied. Interestingly, LLM users were often satisfied with the output, even when they acknowledged limited involvement in the content's creation.
Brain power trumps AI aid
While AI tools may improve efficiency, the study cautions against their unnecessary adoption in learning contexts. 'The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of four months, the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring,' the authors wrote.
This pattern was especially evident in the fourth session, where Brain-to-LLM participants (those who first wrote unaided and then used ChatGPT) showed stronger memory recall and more directed neural connectivity than those who moved in the opposite direction.
Less effort, lower retention
The study warns that although LLMs reduce cognitive load, they may diminish critical thinking and reduce long-term retention. 'The reported ownership of LLM group's essays in the interviews was low,' the authors noted.
'The LLM undeniably reduced the friction involved in answering participants' questions compared to the search engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or 'opinions' (probabilistic answers based on the training datasets),' the study concluded.


Related Articles

Perplexity's AI chatbot can now generate videos on X: Here's how to use it

Indian Express · 2 hours ago

AI search startup Perplexity has upgraded its chatbot on X (formerly Twitter) with a new feature that has flooded the Elon Musk-owned platform with AI-generated visuals. The 'Ask Perplexity' bot on X can now generate short, eight-second video clips with sound. Users simply tag @AskPerplexity along with a short prompt and, in return, receive an AI-generated video with creative visuals and audio, including dialogue.

While the new video generation feature may boost engagement for Perplexity's chatbot, it also raises concerns about the spread of misinformation on X, a platform that has already been strongly criticised for its lax content moderation. Perplexity, however, has said it has implemented strong content filters to prevent misuse of the feature.

The launch also highlights the growing rivalry between Ask Perplexity and Grok, developed by Elon Musk's xAI venture. They are two of the most popular automated accounts on X, frequently tagged and asked questions in the replies. The Grok AI models, however, do not yet have the ability to generate videos.

Soon after Perplexity introduced the feature, users on X began posting wildly imaginative AI-generated videos depicting fictional scenarios involving real-life celebrities, politicians, and world leaders. The surge in demand inevitably led to delays, with the bot account stating that video generation could take longer than expected due to high traffic. 'I've read through your video request DMs. Some of y'all need help,' the AskPerplexity account posted on X.

Perplexity has also been looking to make its AI chatbot more accessible by rolling out its services on WhatsApp. In April this year, Perplexity AI became available directly on the messaging platform, allowing users to access the AI-powered answer engine without downloading a separate app or signing up. Similar to Perplexity, users can also access ChatGPT on WhatsApp, along with the natively integrated Meta AI. To access Perplexity AI, save +1 (833) 436-3285 to your contact list and start asking questions. Users can reach Perplexity AI on smartphones, PCs, and Macs, as well as via WhatsApp Web.

The Google rival has also been facing legal challenges from various publishers. Recently, the BBC said it would take legal action against Perplexity for allegedly training its 'default AI model' using the UK broadcaster's content. In a letter addressed to Aravind Srinivas, the CEO of Perplexity, the BBC said it may seek an injunction unless the AI firm stops scraping its content, deletes existing copies used to train its AI systems, and submits 'a proposal for financial compensation' for the alleged misuse of its intellectual property, according to a report by Reuters. In response, Perplexity said the BBC's claims were 'manipulative and opportunistic', adding that the publisher had 'a fundamental misunderstanding of technology, the internet and intellectual property law.'

Telegram CEO gives 'one sentence' personality assessment of his 'biggest rivals' Elon Musk, Mark Zuckerberg and Sam Altman

Time of India · 3 hours ago

Credit: X/@PavelDurov

Telegram CEO Pavel Durov recently gave intriguing 'one-sentence' personality assessments of his tech rivals, including Tesla CEO Elon Musk, Meta CEO Mark Zuckerberg and OpenAI CEO Sam Altman. Speaking to the French publication Le Point, Durov offered sharp assessments of the three tech CEOs.

On Elon Musk, Durov said: 'Elon can be very emotional, while I try to think deeply before acting.' He acknowledged their contrasting leadership styles, noting that Musk's impulsiveness can be both a strength and a liability.

On Mark Zuckerberg, Durov remarked: 'Mark adapts well and quickly follows trends, but he seems to lack fundamental values that he would remain faithful to, regardless of changes in the political climate or tech industry trends.'

On Sam Altman, Durov offered a mix of praise and skepticism: 'Sam has excellent social skills, which allowed him to forge alliances around ChatGPT—but some wonder if his technical expertise is still sufficient, now that his co-founder Ilya [Sutskever] and many other scientists have left OpenAI.'

The remarks come amid rising tensions and shifting alliances in the AI and messaging space. Telegram recently struck a major deal with Musk's xAI to distribute the Grok chatbot, while Durov has long criticised Meta's WhatsApp as a 'watered-down imitation' of Telegram.

AI sludge has entered the job search

Indian Express · 5 hours ago

Katie Tanner, a human resource consultant in Utah, knew the job would be popular: It was fully remote, was at a tech company and required only three years of experience. But she was still shocked by the response on LinkedIn. After 12 hours, 400 applications had been submitted. By 24 hours, there were 600. A few days later, there were more than 1,200, at which point she removed the post. Three months later, she's still whittling down candidates. 'It's crazy,' she said. 'You just get inundated.'

The number of applications submitted on LinkedIn has surged more than 45% in the past year. The platform is clocking an average of 11,000 applications per minute, and generative artificial intelligence tools are contributing to the deluge. With a simple prompt, ChatGPT, the chatbot developed by OpenAI, will insert every keyword from a job description into a resume. Some candidates are going a step further, paying for AI agents that can autonomously find jobs and apply on their behalf.

Recruiters say it's getting harder to tell who is genuinely qualified or interested, and many of the resumes look suspiciously similar. 'It's an 'applicant tsunami' that's just going to get bigger,' said Hung Lee, a former recruiter who writes a widely read newsletter about the industry.

One popular method for navigating the surge? Automatic chat or video interviews, sometimes conducted by AI. Chipotle CEO Scott Boatwright said at a conference this month that its AI chatbot screening and scheduling tool (named Ava Cado) had reduced hiring time by 75%. HireVue, a popular AI video interview platform, offers recruiters an option to have AI assess responses and rank candidates. But candidates can also use AI to cheat in these interviews, and some companies have added more automated skill assessments early in the hiring process. For example, HireVue offers AI-powered games to gauge abilities like pattern recognition and working memory, and a virtual 'tryout' that tests emotional intelligence or skills like counting change. Sometimes, Lee said, 'we end up with an AI versus AI type of situation.'

In January, the Justice Department announced indictments in a scheme to place North Korean nationals in IT roles working remotely at U.S. companies. Emi Chiba, a human resource technology analyst at Gartner, told DealBook that reports of candidates who used fake identities had been 'growing and growing and growing.' A report that Chiba published with other Gartner analysts in April estimated that by 2028, about 1 in 4 job applicants could be made up. Among its recommendations was that companies deploy more sophisticated identity-verification software.

To address the problem, LinkedIn recently added tools to help both candidates and recruiters narrow their focus, including an AI agent, introduced in October, that can write follow-up messages, conduct screening chats with candidates, suggest top applicants and search for potential hires using natural language. A feature that shows potential applicants how well their qualifications match up with a job description, which LinkedIn introduced to premium subscribers in January, reduced the rate at which they apply to 'low match' jobs by 10%, according to the company.

Concerns that using AI in hiring can introduce bias have led to lawsuits and a patchwork of state legislation. The European Union's AI Act classifies hiring under its high-risk category, with the most stringent restrictions. While no U.S. federal law specifically addresses AI use in hiring, general antidiscrimination laws can come into play if the result of any process is discrimination. 'You're not allowed to discriminate, and of course most employers are trying not to discriminate, but easier said than done,' said Marcia Goodman, a partner at Mayer Brown who primarily represents employers.

The problem is less that candidates are using AI — a skill many employers say they want — than it is that they're being sloppy. Alexa Marciano, the managing director of Syndicatebleu, a recruiting agency, said job seekers were reacting to recruiters' use of automated screening. 'It's really frustrating for the candidates because they spend all this time creating very catered cover letters, very catered resumes,' she said.

Jeremy Schifeling, a career coach who regularly conducts technology-focused job-search training at universities, said he could see this back-and-forth going on for a while. 'As students get more desperate, they say, 'Well, I have no choice but to up the ante with these paid tools to automate everything.' And I'm sure the recruiters are going to raise the bar again.' He argues the endgame will be authenticity from both sides. But, he said, 'I do think that a lot of people are going to waste a lot of time, a lot of processing power, a lot of money until we reach that realization.'
