
AI and the cost of 'optimised' learning
In a course I teach at a liberal arts university, I asked students to write a reflective essay about a personal cultural experience that changed them. What I got back was unsettling — far too many reflective pieces had the polished, impersonal sheen of AI. The sentences were smooth, the tone perfectly inoffensive, but missing the raw, uneven edges of real student writing. When I asked a few of them about their process, some admitted using AI tools like ChatGPT as a 'starting point,' others as an 'editor,' and a few simply shrugged, 'It gave me the answer.' The underlying sentiment was clear: why struggle when you can get it done by AI?
Not just in my classroom
Professors everywhere are facing a generation of students who carry instant 'answers' in their pockets, bypassing the struggle that deep thinking, reflection and real learning demand. AI isn't just helping with assignments anymore — it's writing discussion posts, solving problem sets, even drafting essays before class. What we're seeing is not just a technological shift — it's a cultural one. But what do we make of this shift from thinking to outsourcing thought?
Since 2012, standardised assessments across high-income countries have revealed a troubling phenomenon: a measurable decline in reasoning and problem-solving abilities among young adults. The data is stark: 25% of adults in developed economies, and a staggering 35% in the US, now struggle with basic mathematical reasoning. According to Financial Times journalist John Burn-Murdoch's piercing analysis, 'Have Humans Passed Peak Brain Power?', this decline is due neither to biology nor to environment. It's something more insidious: how technology is reshaping our cognitive capacities.
Where once we immersed ourselves in deep reading and reflective analysis, we now live in the age of the scroll. Algorithmically curated feeds dictate our attention, fragmenting our thoughts into 280-character conclusions and ten-second clips. Fewer than half of Americans read even a single book in 2022. This isn't just a change in habit; it's a shift in the architecture of our cognition. We are witnessing a silent, collective decline in attention span, memory, and conceptual depth. And this crisis is now bleeding into education.
The Gyankunj Case
These concerns are not limited to elite university campuses. In a study I conducted with my professor and a colleague in Gujarat, evaluating the Gyankunj programme — a flagship initiative to integrate technology into government school classrooms — we found that students exposed to smartboards and digital content actually fared worse in mathematics and writing than their peers in classrooms without digital tools.
The reasons were sobering. Teachers had not been adequately trained in using these technologies. Mathematics, which requires cognitive scaffolding and immediate feedback, suffered because the teacher was reduced to a passive facilitator of pre-designed content. Writing, an intensely human process involving revisions, suggestions, and encouragement, became mechanical. What we observed was not enhanced learning, but the opposite — a disconnect between medium and method.
This points to a deeper malaise: techno-optimism. There's a growing belief, often fuelled by venture capital and consultancy jargon, that algorithms can fix education. That AI tutors, avatars, and dashboards can replace the 'inefficiencies' of human teaching. That every child's mind can be optimised, like a logistics chain.
Learning Is Human
But pedagogy is not content delivery. It is a relational, embodied, and context-rich process. It depends on trust, dialogue, spontaneity, eye contact, missteps and encouragement. No AI system, no matter how sophisticated, can replicate the chemistry of a teacher who senses a student's confusion and adapts — not by code, but by care.
AI is now entering primary education spaces as well. I have seen prototypes where storybooks are narrated by AI voices, children's drawings are corrected by algorithms, and writing prompts are generated automatically. But what happens to play-based learning? To dirtying one's hands with clay, engaging with textures, shapes, and emotions? Indian educators like Gandhi, Tagore, and Gijubhai Badheka emphasised the necessity of experiential, tactile learning in the early years. Sri Aurobindo, similarly, held that education must arise from the svabhava of the child, guided by the inner being — not imposed templates. Can an algorithm, however sophisticated, grasp this uniqueness?
J Krishnamurti, in his talks on education, famously questioned whether any system, however well-designed, could ever nurture freedom. For him, true learning happened in freedom from fear, not in efficient content delivery. If AI's omnipresence in classrooms creates an atmosphere where mistakes are quickly corrected, paths auto-completed, and creativity constrained by what's already been done, are we not curbing the learner's inward growth? In reducing learning to clicks, nudges, and 'correct answers', are we not slowly extinguishing the inner flame?
Walking the Tightrope
And yet — let me be clear — I am neither a techno-sceptic nor a techno-romantic. Used thoughtfully, AI has made certain forms of learning more accessible and visual. Diagrams, simulations, and language-support systems have helped many students grasp complex ideas. AI can assist teachers in planning. It can support students with special needs. But it should remain a tool, never the foundation. A servant of learning, not its substitute.
When we raise children in screen-first environments, we risk creating what Jonathan Haidt (2024) now identifies as an anxious generation: digitally fluent but emotionally fragmented, constantly grappling with overexposure to screens, metrics, and digital surveillance. So, we have to ask:
Are we preparing students not to be wiser, but simply more optimised?
Not more reflective, but more 'prompt ready'?
Not more social, but increasingly isolated behind screens and 'smart' interfaces?
The challenge ahead is not technological. It is existential. Will we nurture depth, or distraction? Freedom, or feedback loops? A sense of self, or a sense of being constantly scored?
Disclaimer
Views expressed above are the author's own.
