
Is ChatGPT Making Us Stupid?
Two research studies suggest that heavy use of AI is not only a game changer, but an alarming threat to humanity's ability to solve problems, communicate with one another, and perhaps to thrive.
In boardrooms and classrooms, coffee shops and cubicles, the same question keeps coming up: Is ChatGPT making us smarter, or is it making us intellectually lazy—maybe even stupid?
There's no question that generative artificial intelligence is a game-changer. ChatGPT drafts our emails, answers our questions, and completes our sentences. For students, it's become the new CliffsNotes. For professionals, a brainstorming device. For coders, a potential job killer. In record time, it has become a productivity enhancer for almost everything. But what is it doing to our brains?
As someone who has spent his career helping clients anticipate and prepare for the future, I believe this question deserves our attention. With any new technology, concerns inevitably arise about its impact. When calculators were first introduced, people worried that students would lose their ability to perform basic arithmetic or mental math. When GPS arrived, some fretted that we would lose our innate sense of direction. And when the internet bloomed, people grew alarmed that easy access to information would erode our capacity for concentration and contemplation.
'Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, is what often gets shortchanged by internet grazing,' noted technology writer Nicholas Carr in a prescient 2008 Atlantic article, 'Is Google Making Us Stupid?'
Today, Carr's question needs to be asked anew – but of a different techno-innovation. Just-released research studies are helping us understand what's going on when we allow ChatGPT to think for us.
What Happens to the Brain on ChatGPT?
Researchers at MIT invited fifty-four participants to write essays across four sessions, divided into three groups: one using ChatGPT, one using Google, and one using only their brainpower. In the final session, the groups switched roles. What these researchers found should make all of us pause.
Participants who used ChatGPT consistently produced essays that scored lower in originality and depth than those who used search or wrote unaided. More strikingly, brain imaging revealed a decline in cognitive engagement in ChatGPT users. Brain regions associated with attention, memory, and higher-order reasoning were noticeably less active.
The MIT researchers introduced the concept of "cognitive debt"—the subtle but accumulating cost to our mental faculties when we outsource too much of our thinking to AI. 'Just as relying on a GPS dulls our sense of direction, relying on AI to write and reason can dull our ability to do those very things ourselves,' notes the MIT report. 'That's a debt that compounds over time.'
The second study, published in the peer-reviewed Swiss journal Societies, is titled 'AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.' It broadens the lens from a lab experiment to everyday life.
Researchers surveyed 666 individuals across a range of ages and educational backgrounds to explore how often people rely on AI tools—and how that reliance affects their ability to think critically. The findings revealed a strong negative correlation between frequent AI use and critical thinking performance. Those who often turned to AI for tasks like writing, researching, or decision-making exhibited lower metacognitive awareness and weaker analytical reasoning. This wasn't limited to any one demographic, but younger users and those with lower educational attainment were particularly affected.
What's more, the study confirmed that over-reliance on AI encourages 'cognitive offloading'—our tendency to let external tools do the work our brains used to do. While cognitive offloading isn't new (we've done it for centuries with calculators and calendars), AI takes it to a whole new level. 'When your assistant can "think" for you, you may stop thinking altogether,' the report notes.
Are We Letting the Tool Use Us?
These studies aren't anti-AI. Neither am I. I use ChatGPT daily. As a futurist, I see ChatGPT and similar tools as transformational breakthroughs—the printing press of the 21st century. They unlock productivity, unleash creativity, and lower barriers to knowledge.
But just as the printing press didn't eliminate the need to learn to read, ChatGPT doesn't absolve us of the responsibility to think. And that is the danger today: that people will stop doing their own thinking.
These studies are preliminary, and further research is needed. Still, there is enough evidence to suggest that heavy use of AI is not only a game changer but an alarming threat to humanity's ability to solve problems, communicate with one another, and perhaps to thrive. Part of the answer lies in integrating metacognitive strategies—thinking about thinking—into education, workplace training, and even product design. In other words, don't just use AI—engage with it. The line we must walk is the one between augmentation and abdication. Are we using AI to elevate our thinking? Or are we turning over the keys to robots?
Here are four ideas for using this new technology while keeping our cognitive edge sharp:
The danger isn't that ChatGPT will replace us. But it can make us stupid—if we let it replace our thinking instead of enriching it. The difference lies in how we use it, and more importantly, how aware we are while using it. The danger is that we'll stop developing the parts of ourselves that matter most—because it's faster and easier to let the machine do it. Let's not allow that to happen.