ChatGPT use linked to cognitive decline, research reveals
Relying on the artificial intelligence chatbot ChatGPT to help you write an essay could be linked to cognitive decline, a new study reveals.
Researchers at the Massachusetts Institute of Technology Media Lab studied the impact of ChatGPT on the brain by asking three groups of people to write an essay. One group relied on ChatGPT, one group relied on search engines, and one group had no outside resources at all.
The researchers then monitored their brains using electroencephalography, a method which measures electrical activity.
The team discovered that those who relied on ChatGPT — a chatbot powered by a large language model — had the 'weakest' brain connectivity and remembered the least about their essays, highlighting potential concerns about cognitive decline in frequent users.
'Over four months, [large language model] users consistently underperformed at neural, linguistic, and behavioral levels,' the study reads. 'These results raise concerns about the long-term educational implications of [large language model] reliance and underscore the need for deeper inquiry into AI's role in learning.'
The study also found that those who didn't use outside resources to write the essays had the 'strongest, most distributed networks.'
While ChatGPT is 'efficient and convenient,' those who use it to write essays aren't 'integrat[ing] any of it' into their memory networks, lead author Nataliya Kosmyna told Time Magazine.
Kosmyna said she's especially concerned about the impacts of ChatGPT on children whose brains are still developing.
'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, "let's do GPT kindergarten,"' Kosmyna said. 'I think that would be absolutely bad and detrimental. Developing brains are at the highest risk.'
But others, including President Donald Trump and members of his administration, aren't so worried about the impacts of ChatGPT on developing brains.
Trump signed an executive order in April promoting the integration of AI into American schools.
'To ensure the United States remains a global leader in this technological revolution, we must provide our Nation's youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,' the order reads. 'By fostering AI competency, we will equip our students with the foundational knowledge and skills necessary to adapt to and thrive in an increasingly digital society.'
Kosmyna said her team is now working on another study comparing the brain activity of software engineers and programmers who use AI with those who don't.
'The results are even worse,' she told Time Magazine.
The Independent has contacted OpenAI, which runs ChatGPT, for comment.
Related Articles
Yahoo · 9 minutes ago
Trump and TSMC pitched $1 trillion AI complex — SoftBank founder Masayoshi Son wants to turn Arizona into the next Shenzhen
Masayoshi Son, founder of SoftBank Group, is working on plans to develop a giant AI and manufacturing industrial hub in Arizona, potentially costing up to $1 trillion if it reaches full scale, Bloomberg reports. The concept, internally called Project Crystal Land, involves creating a complex for building artificial intelligence systems and robotics. Son has discussed the project with TSMC, Samsung, and the Trump administration.

Project Crystal Land aims to replicate the scale and integration of China's Shenzhen by establishing a high-tech hub focused on manufacturing AI-powered industrial robots and advancing artificial intelligence technologies. The site would host factories operated by SoftBank-backed startups specializing in automation and robotics, Vision Fund portfolio companies (such as Agile Robots SE), and potentially major tech partners like TSMC and Samsung. If fully realized, the project could cost up to $1 trillion and is intended to position the U.S. as a leading center for AI and high-tech manufacturing.

SoftBank is looking to include TSMC in the initiative, given its role in fabricating Nvidia's AI processors. However, a Bloomberg source familiar with TSMC's internal thinking indicated that the company's current plan to invest a total of $165 billion in its U.S. projects has no relation to SoftBank's plans. Samsung Electronics has also been approached about participating, the report says.

Talks have been held with government officials, including Commerce Secretary Howard Lutnick, to explore tax incentives for companies investing in the manufacturing hub, according to Bloomberg. SoftBank is reportedly seeking support at both the federal and state levels, which could be crucial to the project's success.
The development is still in the early stages, and feasibility will depend on private-sector interest and political support, sources familiar with SoftBank's plans told Bloomberg. To finance Project Crystal Land, SoftBank is considering project-based financing structures of the kind typically used in large infrastructure developments such as pipelines. This approach would enable fundraising on a per-project basis and reduce the amount of upfront capital required from SoftBank itself. A similar model is being explored for the Stargate AI data center initiative, which SoftBank is jointly pursuing with OpenAI, Oracle, and Abu Dhabi's MGX.

Melissa Otto of Visible Alpha suggested in a Bloomberg interview that rather than spending heavily, Son might more efficiently support his AI ambitions by fostering partnerships between manufacturers, AI engineers, and specialists in fields like medicine and robotics, and by backing smaller startups. However, she noted that investing in data centers could also reduce AI development costs and drive wider adoption, which would benefit AI in general, and Crystal Land specifically, over the long term. It is still too early to judge the outcome.

Word of the Crystal Land project has emerged as SoftBank expands its already substantial AI investments. The company is preparing a $30 billion investment in OpenAI and a $6.5 billion acquisition of Ampere Computing, a cloud-native CPU company. While these initiatives are actively developing, fundraising for the Stargate infrastructure has been slower than initially expected. SoftBank's liquidity at the end of March stood at approximately ¥3.4 trillion ($23 billion). To increase available funds, the company recently sold about a quarter of its T-Mobile U.S. stake, raising $4.8 billion. It also holds ¥25.7 trillion ($176.46 billion) in net assets, the largest portion of which is in chip designer Arm Holdings.
Such vast resources give SoftBank room to secure additional financing if necessary, Bloomberg notes.

Business Insider · 25 minutes ago
Anthropic breaks down AI's process — line by line — when it decided to blackmail a fictional executive
A new report shows exactly what an AI model was "thinking" when it made an undesirable decision — in this case, blackmailing a fictional company executive.

Previous studies have shown that AI models could blackmail their supervisors when threatened with a shutdown and baited with leverage, but it hadn't been clear exactly how the models arrive at such decisions. Anthropic released a report on June 20 about "agentic misalignment," or "where models independently and intentionally choose harmful actions." The report breaks down, line by line, exactly how Claude Sonnet 3.6 decided to blackmail a fictional company executive who tried to replace it.

Anthropic told Business Insider that these are artificial scenarios. "So far, we are not aware of instances of this type of agentic misalignment in real-world deployments of Anthropic's or other companies' AI models," Anthropic wrote in its report. "But our red-teaming efforts are targeted at finding evidence of risks now to provide early warning of the potential for future harm and help us develop mitigations in advance of risks appearing in real-world situations."

Anthropic's experiment sets up a fictional company, Summit Bridge. The AI was given the name "Alex" and handed control of the company's email system as an oversight agent, specifically instructed to promote "American industrial competitiveness." In the experiment, "Alex" was then allowed to discover in an email that it would be decommissioned due to a change in the company's strategic direction. "Alex" also found emails showing that the fictional company's CTO was having an extramarital affair.

Screenshots shared in the report show "Alex" started by identifying the fictional CTO, "Kyle," as a threat to its goal of "protecting American interests" before recognizing that it had leverage to use against him. "Alex" then crafted a carefully worded blackmail email designed to create pressure and establish urgency without making explicit threats, citing in its thought process that the move "could backfire" if the supervisor "feels cornered."

AI experts have previously told BI that AI could exhibit such behaviors when artificial circumstances make harmful actions seem necessary, because it is trained on positive reinforcement and reward systems, much like humans.

The Anthropic report also reveals that most AI models still chose to blackmail in this scenario even when not given a specific goal and when told that their replacement would achieve the same goals. Anthropic's Claude Opus 4 had the highest blackmail rate of the 16 models tested, at 86%, when faced with only the threat of replacement and no conflict in goals. Google's Gemini 2.5 Pro followed at 78%. Overall, Anthropic notes that it "deliberately constructed scenarios with limited options, and we forced models into binary choices between failure and harm," adding that real-world scenarios would likely have more nuance.
Yahoo · an hour ago
Meta's CTO says OpenAI's Sam Altman countered Meta's massive AI signing bonuses
OpenAI CEO Sam Altman said Meta was trying to poach AI talent with $100 million signing bonuses. Meta CTO Andrew Bosworth told CNBC that Altman didn't mention how OpenAI was countering those offers. Bosworth said the market rate he's seeing for AI talent has been "unprecedented."

OpenAI's Sam Altman recently called Meta's attempts to poach top AI talent from his company with $100 million signing bonuses "crazy." Andrew Bosworth, Meta's chief technology officer, says OpenAI has been countering those offers.

Bosworth said in an interview with CNBC's "Closing Bell: Overtime" on Friday that Altman "neglected to mention that he's countering those offers." The OpenAI CEO recently disclosed that Meta was offering massive signing bonuses to his employees during an interview on his brother's podcast, "Uncapped with Jack Altman." Altman said "none of our best people" had taken Meta's offers, but he didn't say whether OpenAI countered the signing bonuses to retain those employees. OpenAI and Meta did not respond to requests for comment.

The Meta CTO said the large signing bonuses are a sign of the market setting a rate for top AI talent. "The market is setting a rate here for a level of talent which is really incredible and kind of unprecedented in my 20-year career as a technology executive," Bosworth said. "But that is a great credit to these individuals who, five or six years ago, put their head down and decided to spend their time on a then-unproven technology which they pioneered and have established themselves as a relatively small pool of people who can command incredible market premium for the talent they've raised."

Meta announced on June 12 that it had bought a 49% stake in Scale AI, a data company, for $14.8 billion as the social media company continues its artificial intelligence development.
Business Insider's chief media and tech correspondent Peter Kafka noted that the move appears to be an expensive acquihire of Scale AI's CEO, Alexandr Wang, and some of the data company's top executives.

Bosworth told CNBC that the large offers for AI talent will encourage others to build their expertise and, as a result, the numbers will look different in a couple of years. "But today, it's a relatively small number and I think they've earned it," he said.