The hidden cost of AI: Cognitive lethargy
The world stands at a technological crossroads: the unprecedented proliferation of large language models (LLMs) such as ChatGPT is reshaping the fabric of daily life. These tools have permeated how people live, work, learn, and play. Over 378 million people worldwide are estimated to be active users of AI tools, including LLMs such as ChatGPT, Gemini, Claude, and Copilot.
The use of large language models surged globally in 2025, with hundreds of millions of people relying on these tools daily for academic, personal, and professional purposes. Given this rapid growth, it is crucial to understand the cognitive implications of widespread LLM use in educational and informational contexts. A growing body of research suggests that, although these tools enhance the accessibility and personalization of education, prolonged and frequent reliance on them for information reduces critical thinking capacity. The integration of LLMs into learning ecosystems therefore presents a complex duality.
Recent research from the Massachusetts Institute of Technology (MIT) has raised concerns for the education sector, educators, and learners. The study suggests that increased use of AI systems may threaten human intellectual development and autonomy. Because LLMs present users with singular responses, they can inadvertently discourage lateral thinking and independent judgment. Instead of remaining active seekers of knowledge, users drift toward passive consumption of AI-generated content. In the long run, this shift can lead to superficial engagement, weakened critical thinking, reduced long-term memory formation, and a shallower understanding of material. It can also erode decision-making skills and create a false perception that learning is effortless and simplified, sapping student motivation and interest in independent research.
Increased use of ChatGPT can affect student learning, performance, perception of learning, and higher-order thinking. The MIT research suggests that while AI tools can enhance productivity, they may also promote a form of metacognitive laziness. When students rely heavily on digital tools for information gathering, the fundamental principle of research inquiry is compromised. Users also risk falling into echo chambers: self-reinforcing information bubbles in which contradictory evidence is filtered out, undermining the foundation of academic discourse and debate. Furthermore, the sophisticated functioning of these algorithms can leave users unaware of information gaps in their research, degrading the standard of scholarly outcomes.
These findings carry implications for stakeholders across the education sector and beyond. The professor's role is evolving from source of knowledge to facilitator and guide. Curricula must adapt to digital literacy and changing learning patterns, with a focus on security and safety. Technological developments call for new approaches to monitoring academic integrity. Students, for their part, must adopt a balanced approach to protect their higher-order thinking.
Artificial intelligence is here to stay; it will touch every sector, creating new career opportunities while displacing traditional employment pathways. Evidence suggests that, left unchecked, it risks turning learners into mere editors of AI-generated text rather than genuine creators and thinkers. While advanced technologies such as artificial intelligence hold immense potential to enhance human learning and access to vast volumes of information, they can also affect cognitive development, long-term memory building, and intellectual independence, and they therefore demand caution and critical consideration.