Thinking capped: How generative AI may be quietly dulling our brains
A recent study has raised fresh concerns about what generative AI may be doing to the brains of humans. While the study is preliminary and limited in scope, involving just 54 subjects aged 18 to 34, it found that those who used ChatGPT for writing essays (as part of the research experiment) showed measurably lower brain activity than their peers who didn't. 'Writing without (AI) assistance increased brain network interactions across multiple frequency bands, engaging higher cognitive load, stronger executive control, and deeper creative processing,' it found.
Various experts in India, too, reiterate concerns about overdependence on AI, to the point where people outsource even thinking to it. Specialists who study the human brain call this 'cognitive offloading', which, they caution, can diminish critical thinking and reasoning capability while also building a sense of social isolation – in effect, dragging humans into an 'idiot trap'.
Training the brain to be lazy
'We now rely on AI for tasks we used to do ourselves — writing essays, solving problems, even generating ideas,' says Nitin Anand, additional professor of clinical psychology at the National Institute of Mental Health and Neuro Sciences (Nimhans), Bengaluru. 'That means less practice in critical thinking, memory recall, and creative reasoning.'
This dependence, he adds, is also weakening people's ability to delay gratification. 'AI tools are designed for speed. They answer instantly. But that trains people to expect quick solutions everywhere, reducing patience and long-term focus.'
Anand warns that this behavioural shift is feeding into a pattern of digital addiction, which he characterises by the 4Cs: craving, compulsion, loss of control, and consequences.
'When someone cannot stop checking their phone, feels restless without it, and suffers in real life because of it — that's addiction,' he says, adding that the threat of technology addiction has increased manifold with something as adaptive and customisable as AI.
Children and adolescents are particularly at risk, says Pankaj Kumar Verma, consultant psychiatrist and director of Rejuvenate Mind Neuropsychiatry Clinic, New Delhi.
'Their prefrontal cortex — the brain's centre for planning, attention, and impulse control — is still developing,' he explains. 'Constant exposure to fast-changing AI content overstimulates neural circuits, leading to short attention spans, poor impulse control, and difficulty with sustained focus.'
The effects don't stop at attention
'We're seeing a decline in memory retention and critical thinking, simply because people don't engage deeply with information anymore,' Verma adds. Even basic tasks like asking for directions or speaking to others are being replaced by AI, increasing social isolation, he says.
Much of this harks back to the time when landlines came to be replaced by smartphones. Landline users rarely needed a phonebook — numbers of friends, family, and favourite shops were memorised by heart. But with mobile phones offering a convenient 'contacts' list, memory was outsourced. Today, most people can barely remember three-odd numbers unaided.
With AI, such cognitive shifts will likely become more pronounced, the experts say. What looks like convenience today might well be shaping a future where essential human skills quietly fade away.
Using AI without losing ourselves
Experts agree that the solution is not to reject AI, but to regulate its use with conscious boundaries and real-world grounding. Verma advocates for structured rules around technology use, especially in homes with children and adolescents.
'Children, with underdeveloped self-regulation, need guidance,' he says. 'We must set clear boundaries and model balanced behaviour. Without regulation, we risk overstimulating developing brains.'
To prevent digital dependence, Anand recommends simple, yet effective, routines that can be extended to AI use. The 'phone basket ritual', for instance, involves setting aside all devices in a common space at a fixed hour each day — usually in the evening — to create a screen-free window for family time or rest.
He also suggests 'digital fasting': unplugging from all screens for six to eight hours once a week to reset attention and reduce compulsive use.
'These habits help reclaim control from devices and re-train the brain to function independently,' he says. Perhaps digital fasting can be extended to 'AI fasting' during work and school assignments, allowing the brain to engage in cognitive activities.
Pratishtha Arora, chief executive officer of Social and Media Matters, a digital rights organisation, highlights the essential role of parental responsibility in shaping children's digital lives.
'Technology is inevitable, but how we introduce it matters,' she says. 'The foundation of a child's brain is laid early. If we outsource that to screens, the damage can be long-term.'
She also emphasises the need to recognise children's innate skills and interests rather than plunging them into technology at an early age.
Shivani Mishra, AI researcher at the Indian Institute of Technology Kanpur, cautions against viewing AI as a replacement for human intelligence. 'AI can assist, but it cannot replace human creativity or emotional depth,' she says. Like most experts, she too advises that AI should be used to reduce repetitive workload, 'and free up space for thinking, not to avoid thinking altogether'.
The human cost
According to Mishra, the danger lies not in what AI can do, but in how much we delegate to it, often without reflection.
Both Anand and Verma share concerns about how its unregulated use could stunt core human faculties. Anand reiterates that unchecked dependence could erode the brain's capacity to delay gratification, solve problems, and tolerate discomfort.
'We're at risk of creating a generation of young people who are highly stimulated but poorly equipped to deal with the complexities of real life,' Verma says.
The way forward, the experts agree, lies in responsible development, creating AI systems grounded in ethics, transparency, and human values. Research in AI ethics must be prioritised not just for safety, but also to preserve what makes us human in the first place, they advise.
The question is not whether AI will shape the future; it is already doing so. It is whether humans will remain conscious architects of that future or passive participants in it.
Writing without AI assistance leads to higher cognitive load engagement, stronger executive control, and deeper creative processing
Writing with AI assistance reduces overall neural connectivity and shifts the dynamics of information flow
Large language model (LLM) users noted a diminishing inclination to evaluate the output critically
Participants in the brain-only group reported higher satisfaction and demonstrated higher brain connectivity compared to other groups
Essays written with the help of an LLM carried less significance or value for the participants, who spent less time on writing and mostly failed to quote from their own essays

Related Articles


Time of India
Brutal CEO cut 80% of workers who rejected AI - 2 years later he says he would do it again
When most leaders cautiously tested AI, Eric Vaughan, the CEO of IgniteTech, took a gamble that shocked the tech world. He replaced nearly 80% of his staff in a bold move to make artificial intelligence the company's foundation. His decision, controversial yet transformative, shows the brutal reality of adapting to disruption. Vaughan's story shows that businesses must change their culture, not just their technology, to thrive in the AI era. In early 2023, Vaughan faced one of his toughest decisions. Convinced that artificial intelligence was not just a tool but an existential shift for every business, he dismantled his company's traditional structure.

Why did IgniteTech face resistance to AI?
When Vaughan first pushed the company to use AI, he spent a lot of money on training. Mondays turned into "AI Mondays," reserved for learning new skills, trying out new tools, and starting pilot projects. IgniteTech paid for employees to take AI-related courses and even brought in outside experts to help with adoption, as per a report by Fortune. However, resistance emerged rapidly. Surprisingly, the most pushback came from technical employees, not sales or marketing. Many were doubtful about what AI could do, focusing on its limitations rather than its potential. Some openly refused to take part, while others worked against the projects. Vaughan said the resistance was so strong that it was almost sabotage. His experience is backed up by research: according to a 2025 report on enterprise AI adoption, one in three workers said they were resisting or even sabotaging AI projects, usually because they were afraid of losing their jobs or were frustrated with tools that weren't fully developed, as per a report by Fortune.

How did Vaughan rebuild the company?
Vaughan concluded that believing in AI was not up for debate. Instead of making his existing employees change, he started hiring new people who shared his vision, calling these new hires "AI innovation specialists." This change affected every department, including sales and finance, as per a report by Fortune. Thibault Bridel-Bertomeu, IgniteTech's new chief AI officer, was a key hire. After he joined, Vaughan reorganized the company so that every division reported to AI. This centralization stopped work from being duplicated and made collaboration easier, a common problem when companies adopt AI. The change was expensive, disruptive, and emotionally draining, but Vaughan says it had to be done. "It was harder to change minds than to add skills," he said, as per a report by Fortune.

What can other companies learn from this?
Even though it hurt, IgniteTech got a lot of benefits. By the end of 2024, it had released two AI solutions that were still in the patent process. One of them was Eloquens AI, an email automation platform. Revenue remained in the nine-figure range, with profit margins near 75% Ebitda. During the chaos, the company even made a big purchase. Vaughan's story teaches a crucial lesson: using AI is as much about culture as it is about technology. While companies like Ikea focus on augmenting workers instead of replacing them, Vaughan chose radical restructuring to ensure alignment. Both approaches show how hard it is for businesses to find a balance between trust and innovation.

FAQs
Why did Eric Vaughan fire so many people at IgniteTech? He thought that people who didn't want to use AI would hurt the company's future, so he decided to rebuild with people who shared his vision.
What happened after IgniteTech's AI overhaul? The company introduced new AI products, set up a central AI division, and made more money, even though the change was hard.


Time of India
Free GPT-5 access heats up AI rivalry: Who wins, who loses?
The past fortnight has been incredible for the buzzing AI space, particularly the open source AI domain. First with the coming of Perplexity Comet, and now with the release of GPT-5, the sector is taking breakneck competition to another level. OpenAI has unveiled GPT-5, its most sophisticated artificial intelligence model yet, and the best part is that it's now available for free to all ChatGPT users—albeit with some usage limits. For the first time, free-tier users can access a reasoning-capable AI model—GPT-5—unlocking advanced capabilities such as multi-step reasoning, better understanding of complex prompts, and significantly improved accuracy. When users hit their usage cap, the system seamlessly pivots to a lighter but still capable version called GPT-5 Mini. Plus-tier subscribers enjoy elevated limits, while Pro and Team users gain unlimited access, including access to GPT-5 Pro, a version tailored for deep, intricate reasoning tasks.

GenAI race gets hotter
"This is a very happening space. Sam Altman sees GPT-5 as a step toward ethical AI but warns against over-reliance. Elon Musk, on the other hand, warns of OpenAI's growing influence, citing geopolitical risks. There are other leaders who call it solid but overhyped, exposing LLM limits and pushing hybrid AI approaches," says Gopi Thangavel, Group CIO at Larsen & Toubro, commenting on the development.

"This can be the way to crush the competition -- a strategy used by most. For example, Perplexity is free for India. Moreover, OpenAI has removed other models as they see GPT-5 as more stable and advanced compared to previous models. If you see, they timed it well -- giving previous lighter models such as OSS 20b and 120b for free and then launching GPT-5. Besides, GPT-5 is much better than previous models in coding," says Gaurav Rawat, an AI leader with a leading financial sector company who also has experience across the auto and pharma sectors.

Built on the GPT-4 model, GPT-5 offers more detailed and thoughtful answers when analysing intricate tasks such as science questions, coding, information synthesis, or financial analysis. It also returns more accurate and faster responses than its past models. The new model can work with various data forms, including images and text, at the same time. In practice, this means someone can ask the model a question, put up a picture, or even share a document, and GPT-5 can comprehend and respond without the need for separate tools. Such a capability wasn't there before.

Free GPT-5 sustainable?
"This development is reflective of the competition in this space, which is huge. Other than a few, LLMs are struggling to make money. They spend a lot developing them but are unable to get it back," says Avik Sarkar, Senior Research Fellow & Visiting Faculty at the Indian School of Business.

"Nothing is ever truly free — when it's free, you are the product. OpenAI's decision to make GPT‑5 accessible to everyone is a smart play: offer a preview, gauge user engagement, shape future pricing, grow the customer base — and at the same time, turn up the competitive heat. It's the same psychology as letting readers preview a few Kindle pages before buying the book — just at an AI scale. This move will fortify Microsoft's ecosystem and influence. But make no mistake — GPT‑5's capabilities will reshape the competitive landscape.
The real question: Which rival will be the first to reveal what they've been hiding under the hood?" comments Rajendra Deshpande, former CIO at Intelenet Global Services.


Time of India
Future-proofing APAC: Building the skills for an AI-powered economy
By John Lombard

Artificial Intelligence (AI) is no longer a distant vision—it is now the operational backbone of industries across the Asia-Pacific (APAC) region. From predictive analytics in manufacturing to generative AI (GenAI) in customer service, AI adoption is reshaping economic structures. Yet, adoption is far from uniform. Advanced economies such as Japan, Singapore, South Korea, and China are deploying AI at scale, while others are still building foundational infrastructure and digital readiness. One of the most pressing challenges is not technological capability but human capability. Without a skilled workforce, AI adoption risks creating a gap between potential and performance—a high-speed engine without the tracks to run on.

The skills gap in an accelerating AI landscape
According to NTT DATA's recent Global GenAI Report, nearly 70% of organizations view AI as a game changer, and almost two-thirds plan to significantly invest in GenAI over the next two years. But investment alone is insufficient. The talent pool trained to design, deploy, and govern AI systems is lagging behind. In many organizations, innovation in AI is outpacing workforce readiness and governance frameworks. Employees often feel underprepared—not due to resistance, but because training, role-specific upskilling, and AI literacy have not kept pace with technological change. Cultural diversity, language barriers, and varied education systems across APAC further complicate the creation of a region-wide skilled AI workforce.

To bridge this divide, enterprises need structured AI and GenAI talent development frameworks that are scalable, measurable, and adaptable to evolving technologies. Such frameworks should:
• Provide foundational AI literacy for all employees, regardless of role.
• Offer role-based practical training for professionals in technical and non-technical functions.
• Develop certified specialists with deep domain expertise in AI deployment.
• Cultivate strategic leaders capable of driving AI innovation and governance at an enterprise scale.
This tiered approach allows organizations to embed AI capabilities at every operational layer—from frontline staff to the C-suite. Importantly, in-house trainers and industry-specific learning modules can make training more relevant and impactful.

In the enterprise technology context, AI skills cannot be siloed. They need to intersect with other capabilities such as:
• Cybersecurity – safeguarding AI models and data pipelines from vulnerabilities.
• Data Science & Machine Learning – building and refining predictive and generative models.
• Cloud Computing – enabling scalable AI deployments across geographies.
• Ethical AI Governance – ensuring accuracy, bias mitigation, and regulatory compliance.
These competencies will be essential as AI becomes integrated into everything from supply chain systems to customer engagement platforms.

Responsible AI adoption
Expanding AI use also means expanding accountability. Governance models must address risks such as bias in outputs, misinformation, intellectual property concerns, and data leakage. Organizations should invest in transparent governance frameworks, clear audit trails, and training that empowers employees to identify and mitigate AI risks. Responsible AI adoption is not just about compliance—it is about building trust with employees, customers, and regulators. In a region as diverse as APAC, this trust will be a competitive differentiator.
The leadership imperative
The role of leadership in AI transformation extends beyond technology investment. Executives must actively participate in upskilling, signal commitment to continuous learning, and ensure inclusivity in training initiatives. By aligning AI innovation agendas with workforce development strategies, leaders can create sustainable adoption rather than short-term experimentation. Ultimately, the future of APAC's AI economy will depend on how effectively the region matches technological advances with human capabilities. Leaders who act today—investing in both AI systems and the people who operate them—will define the competitive, ethical, and sustainable AI landscape of tomorrow.

The author is CEO, APAC, NTT DATA. The views expressed are solely those of the author and ETCIO does not necessarily subscribe to them. ETCIO shall not be responsible for any damage caused to any person/organization directly or indirectly.