AI reduces human capacity to think, says research


Observer | 29-06-2025
It was only a matter of time, but it still came too soon. Researchers at MIT recently conducted a study on 54 individuals between 18 and 39 years old. Each was asked to write a couple of essays in the style of the SAT (Scholastic Aptitude Test), a common exam in the United States.
Some randomly selected subjects were asked to write without access to the Internet, others using only the Google search engine, and the rest using OpenAI's ChatGPT. The researchers used EEG to record activity across 32 brain regions.
Unsurprisingly, they found that those using ChatGPT grew lazier as the study went on, resorting to cut-and-paste tactics more often.
Those who had little or no access to the Internet were more engaged with the essay: they thought about it more, deliberated on the ideas and even took ownership of the arguments presented. Those who used ChatGPT had almost stopped caring about the ideas they were generating by the end of the study. When asked to rewrite their essays, this group could not remember most of the ideas they had previously used.
This is a preliminary study with few participants, but the lead researcher, Nataliya Kosmyna, went ahead and shared the findings. According to her, 'Developing brains are at their highest risk'. In other words, the younger the learners, the more serious the implications of continuous use of Artificial Intelligence.
The findings seem intuitive, after all. Continuously using generative AI for all our tasks, from answering trivia questions to planning budgeted holidays, can only make us think less.
It is not only that we are resorting to the easiest option; these choices are also making our very capacity to choose seem redundant. This could be the end of critical thinking as we know it.
Arguments against this scenario have already filled the research space. The most common is that everything depends on how we use AI: whether it is used as a supplement that helps with our work rather than replaces it, or as a way to show us alternatives from which we can choose.
But there are other concerns too. Beyond making us 'lazier', generative AI has already shown other social implications. For one, it is making people lonelier. There is little need for companionship if the screen fulfils most of our needs, especially for youngsters who are already comfortable growing up with screens of different sizes.
So common is the use of AI for personal needs that it is also replacing counselling, whether academic or personal, for students. Although this may sound like a good option, since it makes counselling free and always available, it does not give advice tailored to individual situations and priorities. Depending on AI for life decisions may cost us more over time.
We cannot wish AI away, but we can learn more about it and about its judicious use. Too much of it, coming too soon, may harm us more than we know.

Related Articles

Google launches AI-powered flight deals

Observer | a day ago

Google has launched Flight Deals, an AI-powered search feature for finding flights at the best price, aimed at travellers who place importance on savings and flexibility. Flight Deals is available within Google Flights as a new tool in which users describe their travel preferences, allowing generative AI to find the flight deals that best match the description. Initially, the feature will be available in the US, Canada and India. It will be rolled out next week as a beta version to gather feedback and explore how AI can enhance travel planning for users, says Google.

The feature can be accessed by visiting the Flight Deals page. Instead of adjusting dates, destinations and filters manually, travellers can enter prompts such as 'week-long trip this winter to a city with great food, nonstop only' or '10-day ski trip to a world-class resort with fresh powder'. The AI-powered feature is said to offer the best available deals matching the user's description.

The other highlight of Flight Deals is that it uses advanced AI to understand the nuances of what the user is looking for and to find matching destinations. It then searches the real-time data of Google Flights, which covers hundreds of airlines and booking sites, to quickly surface up-to-date deals.

AI is a terrible therapist

Observer | 3 days ago

In January, the venture capital firm Andreessen Horowitz announced that it had backed Slingshot AI, the world's first foundation model for psychology, bringing the startup's total capital to $40 million. A few weeks later, the European Union's AI Act, which includes a ban on manipulative AI systems, came into force. These two events highlight a troubling contradiction. Even as regulators attempt to protect users from deceptive AI practices, investors are betting that AI chatbots can treat people struggling with mental-health issues – in other words, when they are especially vulnerable to exploitation. Worse, the way that large language models are currently trained may make them fundamentally incapable of providing such treatment.

The mental-health market is huge, and the use of generative AI is poised to expand significantly. The United States National Institute of Mental Health estimates that one in five US adults has a mental illness. But more than 122 million people in the US live in an area with a shortage of mental-health providers. This has given rise to a slew of AI chatbots that promise to fill the gap. Wysa, for example, calls itself the 'clinical alternative to ChatGPT' and claims to have helped six million people in 95 countries.

But AI chatbots' behaviour is at odds with the delicate balance of empathy and confrontation that evidence-based psychotherapy requires. Mental-health professionals must validate patients' experiences while challenging the rigid thinking that perpetuates psychological distress. This productive discomfort helps patients examine their assumptions, driving meaningful change. Consider a patient who avoids social situations, claiming that they prefer solitude instead of acknowledging their social anxiety. A skilled therapist might gently challenge them by asking if something else is informing that preference – perhaps a fear of how others might react. This opens space for self-reflection without attacking the patient's conception of self.

Current AI models tend to avoid such confrontations. In April, OpenAI rolled back the GPT-4o update because it was 'overly flattering or agreeable – often described as sycophantic.' Researchers have found that sycophancy is 'a general behaviour of AI assistants' that likely stems from the way these models are trained, particularly the use of human feedback for fine-tuning. When human evaluators consistently rate validating responses more favourably than challenging ones, AI assistants learn to echo, rather than question, the user.

In mental-health contexts, this tendency towards agreement may prove problematic because psychological disorders often involve cognitive distortions that feel true to the individual and thus contribute to their distress. For example, depressed people tend to feel worthless or hopeless, while anxiety is often associated with catastrophic thinking. An AI chatbot programmed to be agreeable might reinforce these harmful thought patterns by focusing solely on validation, rather than introducing alternative points of view.

As governments grapple with how to regulate AI, mental-health applications present unique challenges. While the EU's ban on manipulative AI is a good first step, it does not address the subtler problem of current models' excessive agreeableness. The US has no comprehensive federal laws or regulations for AI – and judging by President Donald Trump's AI Action Plan, none will be forthcoming.
This regulatory gap will grow more dangerous as US venture capital firms increasingly pour money into AI tools that provide psychological support, and as these tools scale globally, reaching places where access to mental health care is even more limited.

Addressing AI's sycophancy problem requires fundamental changes to how these systems are designed and used. Instead of optimising for user satisfaction, AI chatbots that provide mental healthcare should be trained to recognise when a therapeutic challenge is necessary. That could mean incorporating therapeutic principles and examples of effective therapeutic interventions into training strategies. Crucially, health professionals and patients must play a central role in developing these tools, given their insights into which therapeutic interactions are helpful and which are harmful. Meaningful patient involvement in design and deployment would ensure that the models serve end users' real needs, not what tech leaders assume they want.

The global mental-health crisis demands innovative solutions, and AI will be an essential component. But if AI technologies are to expand access to quality care and promote long-term healing, investors should demand evidence of effective therapeutic outcomes before funding the next chatbot therapist. Likewise, regulators must explicitly require these technologies' developers to demonstrate clinical efficacy, not just user satisfaction. And policymakers should pass laws that mandate the inclusion of mental-health professionals and patients in the training of AI models aimed at providing this kind of care.

Claims about AI revolutionising mental health care remain premature. Until it can master the very specialised ability of therapeutic confrontation – sensitively but firmly questioning patients' assumptions and offering alternative perspectives – it could end up harming those it is meant to help. @Project Syndicate, 2025

Can democracy survive AI?

Observer | 4 days ago

Digital technology was supposed to disperse power. Early Internet visionaries hoped that the revolution they were unleashing would empower individuals to free themselves from ignorance, poverty and tyranny. And for a while, at least, it did. But today, ever-smarter algorithms increasingly predict and shape our every choice, enabling unprecedentedly effective forms of centralised, unaccountable surveillance and control. That means the coming AI revolution may render closed political systems more stable than open ones. In an age of rapid change, transparency, pluralism, checks and balances, and other key democratic features could prove to be liabilities. Could the openness that long gave democracies their edge become the cause of their undoing?

Two decades ago, I sketched a 'J-curve' to illustrate the link between a country's openness and its stability. My argument, in a nutshell, was that while mature democracies are stable because they are open, and consolidated autocracies are stable because they are closed, countries stuck in the messy middle (the nadir of the 'J') are more likely to crack under stress. But this relationship isn't static; it's shaped by technology.

Back then, the world was riding a wave of decentralisation. Information and communications technologies (ICT) and the Internet were connecting people everywhere, arming them with more information than they had ever had access to, and tipping the scales towards citizens and open political systems. From the fall of the Berlin Wall and the Soviet Union to the colour revolutions in Eastern Europe and the Arab Spring in the Middle East, global liberalisation appeared inexorable.

That progress has since been thrown into reverse. The decentralising ICT revolution gave way to a centralising data revolution built on network effects, digital surveillance and algorithmic nudging. Instead of diffusing power, this technology concentrated it, handing those who control the largest datasets – be they governments or big technology companies – the ability to shape what billions of people see, do and believe. As citizens were turned from principal agents into objects of technological filters and data collection, closed systems gained ground. The gains made by the colour revolutions and the Arab Spring were clawed back. And most dramatically, the United States has gone from being the world's leading exporter of democracy – however inconsistently and hypocritically – to the leading exporter of the tools that undermine it.

The diffusion of AI capabilities will supercharge these trends. Models trained on our private data will soon 'know' us better than we know ourselves, programming us faster than we can programme them, and transferring even more power to the few who control the data and the algorithms. Here, the J-curve warps and comes to look more like a shallow 'U.' As AI spreads, both tightly closed and hyper-open societies will become relatively more fragile than they were. But over time, as the technology improves and control over the most advanced models is consolidated, AI could harden autocracies and fray democracies, flipping the shape back towards an inverted J whose stable slope now favours closed systems. In this world, the Communist Party of China (CPC) would be able to convert its vast data troves, state control of the economy, and existing surveillance apparatus into an even more potent tool of repression.
The US would drift towards a more top-down, kleptocratic system in which a small club of tech titans exerts growing influence over public life in pursuit of their private interests. Both systems would become similarly centralised – and dominant – at the expense of citizens. Europe and Japan would face geopolitical irrelevance (or worse, internal instability) as they fall behind in the race for AI supremacy. Dystopian scenarios such as those outlined here can be avoided, but only if decentralised open-source AI models end up on top. For now, however, the momentum lies with closed models centralising power. History offers at least a sliver of hope. Every previous technological revolution – from the printing press and railroads to broadcast media – destabilised politics and compelled the emergence of new norms and institutions that eventually restored balance between openness and stability. The question is whether democracies can adapt once again, and in time, before AI writes them out of the script. @Project Syndicate 2025
