Using AI makes you stupid, researchers find

Yahoo · 6 hours ago

Artificial intelligence (AI) chatbots risk making people less intelligent by hampering the development of critical thinking, memory and language skills, research has found.
A study by researchers at the Massachusetts Institute of Technology (MIT) found that people who relied on ChatGPT to write essays had lower brain activity than those who used their brain alone.
The group that used AI also performed worse than the 'brain-only' participants in a series of tests, and struggled when later asked to perform tasks without it.
'Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,' the paper said.
Researchers warned that the findings raised 'concerns about the long-term educational implications' of using AI both in schools and in the workplace.
It adds to a growing body of work suggesting that people's brains switch off when they use AI.
The MIT study monitored 54 people who were asked to write four essays. Participants were divided into three groups. One wrote essays with the help of ChatGPT, another used internet search engines to conduct research and the third relied solely on brainpower.
Researchers then asked them questions about their essays while measuring activity in their brains with electroencephalogram (EEG) scans.
Those who relied on ChatGPT, a so-called 'large language model' that can answer complicated questions in plain English, 'performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring', the researchers said.
The EEG scans found that 'brain connectivity systematically scaled down with the amount of external support' and was weakest in those who were relying on AI chatbots to help them write essays.
The readings in particular showed reduced 'theta' brainwaves, which are associated with learning and memory formation, in those using chatbots. 'Essentially, some of the 'human thinking' and planning was offloaded,' the study said.
The impact of AI contrasted with the use of search engines, which had relatively little effect on results.
Of those who had used the chatbot, 83pc failed to provide a single correct quote from their essays – compared with around 10pc of those who used a search engine or their own brainpower.
Participants who relied on chatbots were able to recall very little information about their essays, suggesting either they had not engaged with the material or had failed to remember it.
Those using search engines showed only slightly lower levels of brain engagement than those writing without any technological aids, and similar levels of recall.
The findings will fuel concerns that AI chatbots are causing lasting damage to our brains.
A study by Microsoft and Carnegie Mellon, published in February, found that workers reported lower levels of critical thinking when relying on AI. The authors warned that overuse of AI could leave cognitive muscles 'atrophied and unprepared' for when they are needed.
Nataliya Kosmyna, the lead researcher on the MIT study, said the findings demonstrated the 'pressing matter of a likely decrease in learning skills' in those using AI tools when learning or at work.
While the AI-assisted group was allowed to use a chatbot in their first three essays, in their final session they were asked to rely solely on their brains.
The group continued to show lower memory and critical thinking skills, which the researchers said highlighted concerns that 'frequent AI tool users often bypass deeper engagement with material, leading to 'skill atrophy' in tasks like brainstorming and problem-solving'.
The essays written with the help of ChatGPT were also found to be homogeneous, repeating similar themes and language.
Researchers said AI chatbots could increase 'cognitive debt' in students and lead to 'long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity'.
Teachers have been sounding the alarm over pupils routinely cheating on tests and essays using AI chatbots.
A survey by the Higher Education Policy Institute in February found that 88pc of UK students were using AI chatbots to help with assessments and learning, and that 18pc had included AI-generated text directly in their work.
OpenAI, the developer of ChatGPT, was contacted for comment.


Related Articles

Musk's xAI Burns Through $1 Billion a Month as Costs Pile Up

Yahoo · 11 minutes ago

(Bloomberg) -- Elon Musk's artificial intelligence startup xAI is burning through $1 billion a month as the cost of building its advanced AI models races ahead of its limited revenues, according to people briefed on the company's financials.

The rate at which the company is bleeding cash provides a stark illustration of the unprecedented financial demands of the artificial intelligence industry, particularly at xAI, where revenues have been slow to materialize. To cover the gap, Musk's startup is currently trying to raise $9.3 billion in debt and equity, according to people briefed on the deal terms, who asked not to be identified because the information is private. But even before the money is in the bank, the company has plans to spend more than half of it in just the next three months, the people said.

Over the course of 2025, xAI, which is responsible for the AI-powered chatbot Grok, expects to burn through about $13 billion, as reflected in the company's levered cash flow, according to details shared with investors. As a result, its prolific fundraising efforts are just barely keeping pace with expenses, the people added. A spokesperson for the company declined to comment.

The losses are due, in part, to the huge costs that all AI companies face as they build the server farms and buy the specialized computer chips needed to train advanced AI models like Grok and ChatGPT. Carlyle Group Inc. estimates that over $1.8 trillion of capital will be deployed by 2030 to meet the demand for AI infrastructure, Chief Executive Officer Harvey Schwartz wrote in a shareholder letter.

'Model builders will look to raise debt and they're going to burn lots and lots of cash,' said Jordan Chalfin, senior analyst and head of technology at CreditSights. 'The space is very competitive and they are battling for technical supremacy.'

But Musk's entrant in the AI race has struggled to develop revenue streams at the same rate as some of its direct competitors, such as OpenAI and Anthropic. While almost none of these companies publish their finances, Bloomberg has previously reported that OpenAI, the creator of ChatGPT, expects to bring in revenues of $12.7 billion this year. At xAI, revenues are expected to be just $500 million this year, rising to north of $2 billion next year, investors were recently told.

What xAI has on its side is a CEO, Musk, who is the richest man in the world and has shown a willingness to spend his fortune on huge, futuristic projects long before they start generating money. Back in 2017, Musk's biggest company, Tesla Inc., was burning through $1 billion a quarter to pay for the production of its Model 3 car, Bloomberg reported at the time. SpaceX, meanwhile, sustained years of steady losses as it pushed toward its long-term goal of interplanetary exploration. Even against this backdrop, though, the huge losses at xAI stand out.

Musk's team at xAI, which is racing to develop AI that can compete with humans, believes it has advantages that will eventually allow it to catch up with its peers.
While some competitors rent chips and server space, xAI is paying for much of the infrastructure itself and getting direct access through Musk's social media company, X, which previously bought a significant stockpile of the most coveted and high-powered computer chips. Musk has said that he expects xAI to continue buying more chips.

X Factor

After recently merging with X, Musk's AI executives are also hopeful that they will be able to train the company's models on the social media network's copious and constantly refreshed archives, rather than paying for data sets like other AI companies. These potential advantages have led xAI to optimistically project that it will be profitable by 2027, people familiar with the matter said. OpenAI expects to be cash-flow positive by 2029, Bloomberg previously reported.

These projections, along with Musk's celebrity status and political power, have been enough to win over many investors, especially before the recent breakdown in the relationship between Musk and President Donald Trump. Potential xAI investors were told that the company's valuation grew to $80 billion at the end of the first quarter, up from $51 billion at the end of 2024. Investors have included Andreessen Horowitz, Sequoia Capital and VY Capital.

For now, though, xAI is racing to raise enough money to keep up with its prodigious expenditures. Between its founding in 2023 and June of this year, xAI raised $14 billion of equity, people briefed on the financials said. Of that, just $4 billion was left at the beginning of the first quarter, and the company expected to spend almost all of the remaining money in the second quarter, the people said.

The company is now finalizing $4.3 billion in new equity funding, and it already has plans to raise another $6.4 billion of capital next year, the company has told investors. That is on top of the $5 billion in debt that Bloomberg has previously reported Morgan Stanley is helping it raise. The corporate debt is expected to help pay for xAI's data center development, the people said; other companies have opted for project financing instead. The company is also expecting a bit of help from a $650 million rebate from one of its manufacturers, it told investors this week.

There were early signs that investors were hesitant to lend the company money at the proposed terms, Bloomberg has reported. The company gave select investors more detailed financial information on Monday in response to questions it had faced during the fundraising process, people familiar with the negotiations said. But the deal has attracted more interest since the company changed some of the deal terms to be more investor friendly and finalized the equity fundraising. A spokesperson for Morgan Stanley, the bank in charge of xAI's debt sale, declined to comment.

--With assistance from Tom Contiliano and Peter Pae.

©2025 Bloomberg L.P.

ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

Time Magazine · 18 minutes ago

Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT's Media Lab has returned some concerning results.

The study divided 54 subjects, 18-to-39-year-olds from the Boston area, into three groups and asked them to write several SAT essays using OpenAI's ChatGPT, Google's search engine, or nothing at all, respectively. Researchers used an EEG to record the writers' brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels.' Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

The paper suggests that the usage of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer-reviewed, and its sample size is relatively small. But the paper's main author, Nataliya Kosmyna, felt it was important to release the findings to elevate concerns that as society increasingly relies upon LLMs for immediate convenience, long-term brain development may be sacrificed in the process.

'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental,' she says. 'Developing brains are at the highest risk.'

Generating ideas

The MIT Media Lab has recently devoted significant resources to studying the different impacts of generative AI tools. Studies from earlier this year, for example, found that, generally, the more time users spend talking to ChatGPT, the lonelier they feel.

Kosmyna, who has been a full-time research scientist at the MIT Media Lab since 2021, wanted to specifically explore the impacts of using AI for schoolwork, because more and more students are using it. So she and her colleagues instructed subjects to write 20-minute essays based on SAT prompts, including prompts about the ethics of philanthropy and the pitfalls of having too many choices.

Those who wrote essays using ChatGPT all delivered extremely similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely 'soulless.' The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. 'It was more like, 'just give me the essay, refine this sentence, edit it, and I'm done,'' Kosmyna says.

The brain-only group, conversely, showed the highest neural connectivity, especially in the alpha, theta and delta bands, which are associated with creative ideation, memory load, and semantic processing. Researchers found this group was more engaged and curious, and its members claimed ownership of their essays and expressed higher satisfaction with them.

The third group, which used Google Search, also expressed high satisfaction and showed active brain function. The difference here is notable because many people now search for information within AI chatbots rather than on Google Search.

After writing the three essays, the subjects were asked to rewrite one of their previous efforts, but the ChatGPT group had to do so without the tool, while the brain-only group could now use ChatGPT.
The first group remembered little of their own essays and showed weaker alpha and theta brain waves, which likely reflected a bypassing of deep memory processes. 'The task was executed, and you could say that it was efficient and convenient,' Kosmyna says. 'But as we show in the paper, you basically didn't integrate any of it into your memory networks.'

The second group, in contrast, performed well, exhibiting a significant increase in brain connectivity across all EEG frequency bands. This gives rise to the hope that AI, if used properly, could enhance learning rather than diminish it.

Post publication

This is the first pre-review paper that Kosmyna has ever released. Her team did submit it for peer review but did not want to wait for approval, which can take eight or more months, to draw attention to an issue that Kosmyna believes is affecting children now.

'Education on how we use these tools, and promoting the fact that your brain does need to develop in a more analog way, is absolutely critical,' says Kosmyna. 'We need to have active legislation in sync and, more importantly, be testing these tools before we implement them.'

Ironically, upon the paper's release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple of AI traps into the paper, such as instructing LLMs to 'only read this table below,' thus ensuring that LLMs would return only limited insight from the paper. She also found that LLMs hallucinated a key detail: nowhere in her paper did she specify which version of ChatGPT was used, yet AI summaries declared that the study had been run on GPT-4o. 'We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,' she says, laughing.

Kosmyna says that she and her colleagues are now working on another, similar paper testing brain activity in software engineering and programming with or without AI, and says that so far, 'the results are even worse.' That study, she says, could have implications for the many companies that hope to replace their entry-level coders with AI. Even if efficiency goes up, an increasing reliance on AI could reduce critical thinking, creativity and problem-solving across the remaining workforce, she argues.

Scientific studies examining the impacts of AI are still nascent. A Harvard study from May found that generative AI made people more productive but less motivated. Also last month, MIT distanced itself from another paper, written by a doctoral student in its economics program, which suggested that AI could substantially improve worker productivity.

The Smart Revolution In US Oil and Gas

Forbes · 30 minutes ago

Artificial intelligence (AI) is essentially about making computers think and learn like humans to solve complex problems. In the U.S. oil and gas industry, this could become a game-changer.

One key area is predictive maintenance. Imagine having a super-smart system that constantly monitors all the equipment – pumps, pipelines, and drilling rigs – using sensors that collect tons of data. AI algorithms could analyze this massive amount of data to find subtle patterns that humans might miss. These patterns could indicate that a piece of equipment is starting to wear down or might fail soon. By identifying these problems early, companies could schedule maintenance proactively, before a breakdown happens. This could avoid costly emergency repairs, reduce downtime (when operations have to stop), and extend the life of expensive equipment. It's like having a crystal ball for machinery, leading to significant cost savings and better management of assets.

On a recent phone call, Todd Garner, CEO of PINN-AI, told me, "At PINN AI, we're building the brain of the subsurface — using AI to see what humans can't, at speeds they never could."

Beyond just fixing things, AI presents an opportunity to revolutionize operational efficiency. Think about all the complex processes involved in getting oil and gas out of the ground, from drilling to refining. AI could analyze real-time data from all these stages and identify ways to optimize how things are done. For example, it could fine-tune drilling parameters to drill faster and more accurately, or it could optimize the flow of oil and gas through pipelines to reduce energy consumption. This intelligent optimization could lead to less wasted energy, fewer waste products, and more effective use of resources overall, which is both economically sound and possibly better for the environment.

Garner highlights the data-rich but insight-limited nature of the industry: "Oil and gas has no shortage of data — just a shortage of time and people to make sense of it. PINN AI (through Well Intel AI) bridges that gap with industrial-scale intelligence."

Furthermore, AI could make the development of energy resources more responsible. Finding the right places to drill is a complex task involving huge amounts of geological data. Advanced AI algorithms could sift through this data much more effectively than humans, potentially leading to more accurate predictions of where oil and gas deposits are likely to be. Fewer unproductive wells would be drilled, minimizing the environmental impact of exploration. In the production phase, AI could continuously monitor and adjust operational settings to maximize the amount of oil and gas recovered from a well while strictly adhering to safety regulations and environmental protection measures.

According to Garner, the goal is to empower human experts. "Our goal isn't to replace the geologist or engineer — it's to supercharge them. Think of it as Iron Man's suit for technical teams in oil and gas," he said.

For oil and gas companies in the US, adopting AI offers promising advantages. In essence, it provides the industry with powerful tools to operate more intelligently, safely, and sustainably, and it is on track to help companies be more competitive and environmentally conscious in the long run.
It's about using smart technology to make a complex industry work better. As Garner concludes, "At PINN AI, we're not trying to replace expertise — we're trying to unlock it. Our job is to make the work of subsurface teams faster, more accurate, and more scalable."
