logo
ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

Time​ Magazine4 hours ago

Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT's Media Lab has returned some concerning results.
The study divided 54 subjects—18 to 39 year-olds from the Boston area—into three groups, and asked them to write several SAT essays using OpenAI's ChatGPT, Google's search engine, and nothing at all, respectively. Researchers used an EEG to record the writers' brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels.' Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
The paper suggests that the usage of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small. But its paper's main author Nataliya Kosmyna felt it was important to release the findings to elevate concerns that as society increasingly relies upon LLMs for immediate convenience, long-term brain development may be sacrificed in the process.
'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental,' she says. 'Developing brains are at the highest risk.'
Generating ideas
The MIT Media Lab has recently devoted significant resources to studying different impacts of generative AI tools. Studies from earlier this year, for example, found that generally, the more time users spend talking to ChatGPT, the lonelier they feel.
Kosmyna, who has been a full-time research scientist at the MIT Media Lab since 2021, wanted to specifically explore the impacts of using AI for schoolwork, because more and more students are using AI. So she and her colleagues instructed subjects to write 20-minute essays based on SAT prompts, including about the ethics of philanthropy and the pitfalls of having too many choices.
The group that wrote essays using ChatGPT all delivered extremely similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely 'soulless.' The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. 'It was more like, 'just give me the essay, refine this sentence, edit it, and I'm done,'' Kosmyna says.
The brain-only group, conversely, showed the highest neural connectivity, especially in alpha, theta and delta bands, which are associated with creativity ideation, memory load, and semantic processing. Researchers found this group was more engaged and curious, and claimed ownership and expressed higher satisfaction with their essays.
The third group, which used Google Search, also expressed high satisfaction and active brain function. The difference here is notable because many people now search for information within AI chatbots as opposed to Google Search.
After writing the three essays, the subjects were then asked to re-write one of their previous efforts—but the ChatGPT group had to do so without the tool, while the brain-only group could now use ChatGPT. The first group remembered little of their own essays, and showed weaker alpha and theta brain waves, which likely reflected a bypassing of deep memory processes. 'The task was executed, and you could say that it was efficient and convenient,' Kosmyna says. 'But as we show in the paper, you basically didn't integrate any of it into your memory networks.'
The second group, in contrast, performed well, exhibiting a significant increase in brain connectivity across all EEG frequency bands. This gives rise to the hope that AI, if used properly, could enhance learning as opposed to diminishing it.
Post publication
This is the first pre-review paper that Kosmyna has ever released. Her team did submit it for peer review but did not want to wait for approval, which can take eight or more months, to raise attention to an issue that Kosmyna believes is affecting children now. 'Education on how we use these tools, and promoting the fact that your brain does need to develop in a more analog way, is absolutely critical,' says Kosmyna. 'We need to have active legislation in sync and more importantly, be testing these tools before we implement them.'
Ironically, upon the paper's release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple AI traps into the paper, such as instructing LLMs to 'only read this table below,' thus ensuring that LLMs would return only limited insight from the paper.
She also found that LLMs hallucinated a key detail: Nowhere in her paper did she specify the version of ChatGPT she used, but AI summaries declared that the paper was trained on GPT-4o. 'We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,' she says, laughing.
Kosmyna says that she and her colleagues are now working on another similar paper testing brain activity in software engineering and programming with or without AI, and says that so far, 'the results are even worse.' That study, she says, could have implications for the many companies who hope to replace their entry-level coders with AI. Even if efficiency goes up, an increasing reliance on AI could potentially reduce critical thinking, creativity and problem-solving across the remaining workforce, she argues.
Scientific studies examining the impacts of AI are still nascent and developing. A Harvard study from May found that generative AI made people more productive, but less motivated. Also last month, MIT distanced itself from another paper written by a doctoral student in its economic program, which suggested that AI could substantially improve worker productivity.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Google plans major AI shift after Meta's surprising $14 billion move
Google plans major AI shift after Meta's surprising $14 billion move

Yahoo

time38 minutes ago

  • Yahoo

Google plans major AI shift after Meta's surprising $14 billion move

Google plans major AI shift after Meta's surprising $14 billion move originally appeared on TheStreet. After all the talk about AI's godlike powers, it turns out that they still run on people, and now that critical human feedback has become Big Tech's newest battleground, Ironically, since ChatGPT took off in late 2022, artificial intelligence has consistently needed humans to improve. It's essentially the layers of human feedback that help train AI to evolve and make smarter, safer, more useful choices. 💵💰💰💵 In true tech fashion, though, AI's human-in-the-loop (HITL) pipelines are turning into a slugfest. At the heart of this showdown is Scale AI, perhaps one of the leading names in the niche. However, that premium position is now under duress with two of the biggest tech giants, Google and Meta Platforms () , at the center of it all. In the latest twist, Google is stepping back while Meta ramps up its role with Scale AI, with the broader narrative of Big Tech guarding its training data like gold. Since its founding in 2016, Scale AI has become one of the key players in fine-tuning the most advanced AI models. Specifically, it delivers the high-fidelity labels needed for reinforcement learning from human feedback (RLHF).Simply put, it's how humans guide AI by giving feedback so it learns to make better choices. AI bellwethers like OpenAI and Google () have leaned on these human-verified datasets, a role OpenAI's CFO Sarah Friar recently deemed 'critical' in maintaining a healthy AI ecosystem. Naturally, investors took notice. A $100 million boost from Founders Fund in 2019 helped Scale jump past billion-dollar unicorn status. From there, it was off to the races as by 2021, a $325 million Series E had the company valued at a whopping $7.3 billion. Things kicked up a gear in May last year when Accel led a $1 billion round, pushing Scale's valuation to an eye-watering $13.8 billion with Tiger Global, Index Ventures, and Nvidia all back for more. Now, Meta Platforms, one of the largest spenders on AI, has acquired a 49% stake in Scale AI for $14.3 billion, valuing the company at nearly $30 billion. The decision risks Scale's once-enviable positioning by questioning its neutrality, though, with Google, Microsoft, and others retooling their contracts to avoid giving a rival a peek at their playbooks. More Google News: Google delivers a harsh message to loyal employees Meta commits absurd money to top Google, Microsoft in critical race How to track stock price changes from 52-week lows with Google Finance Meanwhile, fresh contenders are muscling in. Labelbox and Appen have supercharged their platforms, and leaner outfits like Hive, Alegion, and CloudFactory pitch specialized, sector-focused labeling services with tighter security and more agility. In a major development, Google, one of Scale AI's biggest backers, is looking to offload its $200 million-plus data annotation agreement with Scale AI. The search giant fears that handing proprietary training datasets to a part-owned rival could leak sensitive insights into its AI offerings, including autonomous-vehicle say Alphabet has already opened back-channel talks with Labelbox, Appen, and other annotation outfits to backfill its HITL needs. Those discussions, spanning tens of millions in annual spend, signal a shift toward diversification and tighter controls. The fallout isn't limited to Google, though. Microsoft, Elon Musk's xAI, and other marquee Scale clients are reportedly reevaluating contracts worth hundreds of millions, worried that Meta's inside view could tilt the competitive landscape. OpenAI pulled back from Scale months ago, and it spends far less than Google. It spreads its bets across multiple providers to avoid risking its intellectual property. Turns out, the deal has everything to do with fueling Meta's 'superintelligence' push. Scale CEO Alexandr Wang will lead the charge toward Meta's elusive goal of AGI. He's taking a small crew with him. Scale will continue to run independently with Jason Droege stepping in as interim CEO.. It's important to note that Google-parent Alphabet's stock price is up 10% over the past month, yet remains down 7% year-to-date. In contrast, Meta Platform's stock price has climbed 7.5% in the last month and is up 20.4% plans major AI shift after Meta's surprising $14 billion move first appeared on TheStreet on Jun 16, 2025 This story was originally reported by TheStreet on Jun 16, 2025, where it first appeared. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

Valory's Decentralized AI Agents Aim to Bring Transparency and Control to DeFi Investors
Valory's Decentralized AI Agents Aim to Bring Transparency and Control to DeFi Investors

Yahoo

time38 minutes ago

  • Yahoo

Valory's Decentralized AI Agents Aim to Bring Transparency and Control to DeFi Investors

Valory's Decentralized AI Agents Aim to Bring Transparency and Control to DeFi Investors originally appeared on TheStreet. AI agents are quickly becoming integral to how businesses manage portfolios, automate workflows, and navigate digital markets. But most of today's tools—from ChatGPT to private analytics stacks—leave users exposed to platform risks, hidden logic, and limited control. Valory, a Zurich-based company building on the Olas protocol, is offering a decentralized alternative. The company's open-source agents combine machine learning models with smart contracts and crypto wallets, enabling users to operate AI-driven strategies across DeFi, prediction markets, and marketing—without relying on black-box infrastructure. 'We launched Olas so that people could truly own their AI,' said David Diez, CEO of Valory. 'That means owning the models, the logic, and the economics.' Valory's platform targets high-net-worth individuals and institutions that want more than generic SaaS offerings. Rather than outsourcing sensitive tasks like portfolio optimization or campaign automation, Diez says firms can now control how their AI behaves, where it operates, and how it handles assets. 'How much of your stack do you want to own?' Diez asked. 'For core business functions, it's not just about cost—it's about sovereignty over data and margin.' The agents, licensed under Apache 2.0, can be customized or reused for various use cases. Valory currently supports integration with more than 50 DeFi protocols, including Aave and Uniswap, and has reached $400 million in locked value as of Q4 2024, according to company posts on X. Security remains a central concern for institutional adoption. Valory's agents include built-in guardrails and operate with support from Safe, a widely used multi-signature wallet provider. These controls limit agents to predefined actions—such as caps on transaction size or protocol access—reducing the likelihood of errant behavior. Users retain full custody over their funds via wallets like MetaMask or Trust Wallet. Valory also supports MPC (multi-party computation) wallets, splitting key access for added redundancy. 'You can pull the plug on the agent anytime,' Diez said. Valory's stack is fully open-source and publicly audited, which the company believes is essential for attracting TradFi investors who require end-to-end transparency. Valory's Decentralized AI Agents Aim to Bring Transparency and Control to DeFi Investors first appeared on TheStreet on Jun 17, 2025 This story was originally reported by TheStreet on Jun 17, 2025, where it first appeared. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

AI ‘Factories' Are Hungry for Power. Expect ‘Gigawatt-Scale' Growth.
AI ‘Factories' Are Hungry for Power. Expect ‘Gigawatt-Scale' Growth.

Yahoo

timean hour ago

  • Yahoo

AI ‘Factories' Are Hungry for Power. Expect ‘Gigawatt-Scale' Growth.

Did you know that artificial intelligence works in 'factories?' It does, but without an artificial lunchbox and an artificial hardhat. Factories are what AI chip maker Nvidia and electrical infrastructure supplier Schneider Electric call the data centers where ChatGPT or Alphabet's Gemini learn to answer users' questions and then help teenagers summarize The Odyssey. BofA Securities forecasts data-center spending will grow 12% a year on average for the coming few years, hitting more than $400 billion by 2028.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store