Bhai Grok, is it true? A casual chat for you, this simple message costs Elon Musk and the planet dearly

India Today · a day ago
Most of us don't think twice before typing something into an AI chatbot. A random question, a casual greeting, or even a polite 'thank you' at the end may all feel harmless. Look at X, where Grok 4, the chatbot created by Elon Musk's xAI, roams, and you will see thousands of people tagging it in all things light and serious; 'Grok bhai, check this' is an oft-repeated message on the platform.

But behind the scenes, every single message we send to AI tools like Grok, ChatGPT, DeepSeek or any other chatbot consumes electricity, server capacity and other resources. The very real pressure these systems put on energy grids is beginning to be noticed, not just by tech companies but also by policymakers, activists and everyone else trying to keep the planet cool in the middle of global warming.

These chatbots run on massive data centres that need huge amounts of energy to operate, which means even a simple, unnecessary query uses up resources. Multiply that by millions of users doing the same thing every day and it starts to add up, for tech companies and, in the grander scheme of things, for the planet.
You may wonder what we are trying to imply here. Let us explain. On a fine April day, an X user who goes by the name Tomie asked a simple question: 'I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models.' It was meant as a lighthearted post, but OpenAI CEO Sam Altman responded with, 'Tens of millions of dollars well spent — you never know.' That reply caught people's attention and got them thinking: is being polite to AI really costing millions? And if yes, what does that mean for energy use and the environment?

Generative AI, whether Grok 4, ChatGPT, Gemini or the like, uses extremely high amounts of energy, especially during the training phase. But even after training, every single interaction, no matter how small, requires computing power. Those polite phrases, while sweet, still count as queries, whether they are serious or not. And queries take processing power, which in turn consumes electricity. You see the pattern? It's all interrelated.

Energy use, but just how much?

AI systems are still relatively new, so precise, concrete figures on how much energy they use are still coming in. But there are some estimates. For example, the AI tool DeepSeek estimates that a short AI response to something like 'thank you' may use around 0.001 to 0.01 kWh of electricity. That sounds tiny for a single query, but scale changes everything. If one million people send such a message every day, the energy use could reach 1,000 to 10,000 kWh daily. Over a year, that becomes hundreds to thousands of megawatt-hours, enough to power several homes for months.

The picture is similar across AI systems. MIT Technology Review carried out a study in May and came up with some figures, including an estimate of the energy that a person who actively uses AI would cause the system to consume in a day. 'You'd use about 2.9 kilowatt-hours of electricity — enough to ride over 100 miles on an e-bike (or around 10 miles in the average electric vehicle) or run the microwave for over three and a half hours,' the study noted.

Such high energy use has prompted tech companies to hunt for new sources of power. Google, Microsoft and Meta are all trying to either get into nuclear energy or tie up with plants that generate it. Some companies, unable to secure 100 per cent clean energy, are even falling back on more traditional ways of producing electricity. xAI, which now runs one of the largest clusters of computing power to operate Grok 4, was recently in the news because it started using methane gas generators in Memphis. The move prompted a protest from the local environmental group Memphis Community Against Pollution. 'Our local leaders are entrusted with protecting us from corporations violating on our right to clean air, but we are witnessing their failure to do so,' the group noted.

But are a 'please' and 'thank you' still worth it?

Of course, not everyone agrees on the environmental impact of AI energy use; some think it is being blown out of proportion. Kurtis Beavers, a director at Microsoft Copilot, argues that even seemingly frivolous messages, politeness included, have benefits. In a Microsoft WorkLab memo, he said that using basic etiquette with AI leads to more respectful and collaborative outputs.
Basically, in his view, being polite to an AI chatbot improves responsiveness and performance, which might justify the extra energy use. Grok, Elon Musk's own chatbot, takes a similarly relaxed view of the debate. In its own response, Grok said the extra energy used by polite words is negligible in the bigger picture; even over millions of queries, the total would be about the same as running a light bulb for a few hours. In the chatbot's words, 'If you're worried about AI's environmental footprint, the bigger culprits are model training (which can use thousands of kWh) and data centre cooling. Your polite words? They're just a friendly whisper in the digital void.'
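For a sense of how those per-query figures compound, here is a minimal back-of-envelope sketch using only the estimates quoted above. The per-query range is DeepSeek's rough figure, not a measured value, and real consumption varies with model, prompt length and data-centre efficiency.

```python
# Back-of-envelope scaling of "polite message" energy use, based on the
# per-query range quoted in the article (an assumption, not measured data).

KWH_PER_QUERY_LOW = 0.001    # roughly 1 Wh for a short "thank you" reply
KWH_PER_QUERY_HIGH = 0.01    # roughly 10 Wh at the high end
QUERIES_PER_DAY = 1_000_000  # one million such messages a day

for kwh_per_query in (KWH_PER_QUERY_LOW, KWH_PER_QUERY_HIGH):
    daily_kwh = kwh_per_query * QUERIES_PER_DAY
    yearly_mwh = daily_kwh * 365 / 1000  # kWh per day -> MWh per year
    print(f"{kwh_per_query} kWh/query -> {daily_kwh:,.0f} kWh/day, "
          f"about {yearly_mwh:,.0f} MWh/year")

# Prints 1,000-10,000 kWh per day, i.e. roughly 365-3,650 MWh a year:
# the "hundreds to thousands of megawatt-hours" mentioned above.
```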

Related Articles

The future of learning: AI that makes students think harder, not less

The Hindu · 2 hours ago

For years, educators have watched with growing concern as Artificial Intelligence tools like ChatGPT have transformed student behaviour in ways that seemed to undermine the very essence of learning. Students began copying and pasting AI responses, submitting machine-generated essays, and bypassing the mental effort that builds genuine understanding. But a quiet revolution is now underway in classrooms around the world, one that promises to transform AI from an academic shortcut into a powerful thinking partner.

The crisis of instant answers

The numbers tell a troubling story. A 2023 survey revealed that 30% of college students admitted to using AI to complete work they didn't fully understand, highlighting a critical disconnect between AI assistance and genuine learning. The problem isn't AI's presence in education; it's how these tools have been designed. Educational experts observe that most AI tools entering classrooms today are optimized for output rather than learning. These systems excel at generating essays, solving complex problems, and providing comprehensive explanations, capabilities that inadvertently encourage academic shortcuts.

Traditional AI operates as a sophisticated answer engine, designed to be helpful by providing immediate solutions. While this approach serves many purposes in professional settings, it fundamentally misaligns with educational goals. Learning requires struggle, reflection, and the gradual construction of understanding, processes that instant answers can circumvent entirely.

The Socratic solution

The answer lies in reimagining AI as a Socratic partner rather than an oracle. This revolutionary approach, exemplified by innovations such as Claude's Learning Mode, GPT-4's enhanced reasoning capabilities, and Google's Bard educational features, along with similar tools being developed across the education technology sector, transforms AI from a source of answers into a facilitator of inquiry. Instead of responding to 'What caused the 2008 financial crisis?' with a comprehensive explanation, a Socratic AI might ask: 'What economic factors have you already considered?' or 'Which indicators do you think played the most significant role?'

This approach extends beyond economics into other critical fields. In healthcare education, rather than immediately diagnosing a patient case study, AI might prompt: 'What symptoms are you prioritizing in your assessment?' or 'Which differential diagnoses have you ruled out and why?' In finance training, instead of providing investment recommendations, AI could ask: 'What risk factors are you weighing in this portfolio decision?' or 'How do current market conditions influence your analysis?'

This method draws from centuries of educational theory. Socratic questioning has long been recognized as one of the most effective ways to develop critical thinking skills. By prompting learners to examine their assumptions, articulate their reasoning, and explore alternative perspectives, it builds the intellectual muscles that passive consumption cannot develop.
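To make the pattern concrete, here is a minimal sketch of how a Socratic tutoring prompt might be wired up. It is a hypothetical illustration, not any vendor's actual implementation: the system prompt, helper function and message format are assumptions in the style of a generic chat-completion interface.

```python
# A hypothetical "Socratic tutor" prompt wrapper. Nothing here is a real
# product API; it only illustrates the questioning-over-answering pattern.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic tutor. Do not give the final answer directly. "
    "Respond with one or two probing questions that ask the student what "
    "they already know, what evidence they are weighing, and what "
    "assumptions they may be making. Decline to complete graded work."
)

def build_socratic_messages(student_question: str) -> list[dict]:
    """Wrap a student's question in the Socratic system prompt."""
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": student_question},
    ]

# Instead of an essay on the 2008 financial crisis, a model steered this way
# would reply with prompts such as "What economic factors have you considered?"
messages = build_socratic_messages("What caused the 2008 financial crisis?")
print(messages)
```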
Early adopters see promising results

Several major institutions are already pioneering this approach with remarkable results. Northeastern University's deployment of AI across 13 campuses, affecting over 50,000 students and staff, demonstrates the scalability of thoughtful AI integration. The London School of Economics and Champlain College are similarly experimenting with AI tools that enhance rather than replace critical thinking. Researchers have found that when students use AI as a thinking partner rather than an answer source, they develop stronger foundational understanding before engaging with more complex classroom discussions. Educational institutions report that students arrive in class better prepared, with more focused, sophisticated questions.

These early implementations reveal several crucial success factors:

Ethical boundaries: Effective educational AI must be programmed to refuse requests that undermine learning integrity. This isn't just content filtering; it requires AI systems designed with educational principles at their core.

Faculty integration: Success requires AI tools that complement rather than replace instructors. The most effective implementations support teachers by helping students engage more meaningfully with course material.

Student preparation: When properly introduced to these tools, students quickly adapt to using AI as a collaborative thinking partner rather than a homework completion service.

The technology behind thinking

Creating effective AI tools for critical thinking requires careful consideration of both technical and pedagogical factors. Educational AI is being built with what developers call a 'Constitutional AI Framework': explicit ethical guidelines that prioritize learning over convenience, embedded in the model's core reasoning rather than added as superficial filters. These new systems feature adaptive questioning that adjusts based on student responses, becoming more sophisticated as learners demonstrate greater mastery. Multi-modal interaction capabilities support various learning preferences through text, voice, visual, and interactive elements, while strict privacy protections ensure student data remains secure.

Implementation across levels

The Socratic AI approach shows promise across different educational stages:

K-12 Education: Elementary students engage with AI tools that ask simple 'why' and 'how' questions to build foundational inquiry skills. Middle schoolers work with more sophisticated questioning that introduces multiple perspectives and evidence evaluation. High school students use advanced critical thinking tools that support research, argumentation, and complex problem-solving.

Higher education: Undergraduate programs use AI tools that facilitate deep learning in specific disciplines while maintaining academic integrity. Graduate students work with research-focused AI that helps develop original thinking and methodology. Professional schools employ AI tools that simulate real-world problem-solving scenarios and ethical decision-making.

Corporate training: Leadership development programs use AI tools that challenge assumptions and facilitate strategic thinking. Technical training incorporates AI that guides learners through complex problem-solving processes. Compliance training features AI that helps employees think through ethical scenarios and regulatory requirements.

Measuring success beyond test scores

Traditional educational metrics, such as standardized test scores, grade point averages, and completion rates, may not capture the full impact of AI tools designed for critical thinking. These conventional measures often emphasize knowledge retention and procedural skills rather than the deeper cognitive abilities that Socratic AI aims to develop. Instead, institutions are pioneering new assessment approaches that evaluate the quality of thinking itself, recognizing that the most important educational outcomes may be the least visible on traditional report cards.
Depth of questioning: Educational researchers are tracking whether students progress from surface-level inquiries to more sophisticated, multi-layered questions that demonstrate genuine curiosity and analytical thinking. Rather than asking 'What happened?', students begin posing questions like 'What factors contributed to this outcome, and how might different circumstances have led to alternative results?' Assessment tools now measure question complexity, the frequency of follow-up inquiries, and students' ability to identify what they don't yet understand. Advanced AI systems can analyse the sophistication of student questions in real time, providing educators with insights into developing intellectual curiosity that traditional testing cannot reveal.

Argumentation quality: Modern assessment focuses on students' ability to construct well-reasoned arguments supported by credible evidence, acknowledge counterarguments, and build logical connections between premises and conclusions. Evaluators examine whether students can distinguish between correlation and causation, recognize bias in sources, and present balanced analyses of complex issues. New rubrics assess the strength of evidence selection, the logical flow of reasoning, and students' ability to anticipate and address potential objections to their positions. This approach values the process of building an argument as much as the final conclusion, recognizing that strong reasoning skills transfer across all academic and professional contexts.

Transfer of learning: Perhaps the most crucial indicator of educational success is students' ability to apply critical thinking skills across different subjects, contexts, and real-world situations. Assessment tools now track whether a student who learns analytical techniques in history class can apply similar reasoning to scientific methodology, business case studies, or personal decision-making. Educators observe whether students recognize patterns and principles that span disciplines, such as understanding how statistical reasoning applies equally to social science research and medical diagnosis. This transfer capability indicates that students have internalized thinking processes rather than merely memorized subject-specific content.

Metacognitive awareness: Advanced educational assessment now includes measures of students' consciousness about their own thinking processes: their ability to recognize when they're making assumptions, identify their own knowledge gaps, and select appropriate strategies for different types of problems. Students demonstrating strong metacognitive awareness can articulate their reasoning process, explain why they chose particular approaches, and self-assess the strength of their conclusions. They become skilled at asking themselves questions like 'What evidence would change my mind?' or 'What assumptions am I making that I should examine?' This self-awareness transforms students into independent learners capable of continuous intellectual growth.

Intellectual humility: Modern assessment recognizes intellectual humility, the willingness to revise views when presented with compelling evidence, as a crucial indicator of educational maturity. Rather than rewarding students for defending initial positions regardless of new information, evaluation systems now value intellectual flexibility and evidence-based reasoning.
Students demonstrating intellectual humility acknowledge the limits of their knowledge, seek out disconfirming evidence, and show genuine curiosity about alternative perspectives. They express confidence in their reasoning process while remaining open to new information that might refine or change their conclusions.

Collaborative problem solving: New assessment approaches also evaluate students' ability to engage in productive collaborative thinking, building on others' ideas while contributing unique perspectives. These measures track whether students can synthesize diverse viewpoints, facilitate group inquiry, and help teams navigate disagreement constructively.

Long-term impact tracking: Some institutions are implementing longitudinal studies that follow graduates to assess how AI-enhanced critical thinking education influences career success, civic engagement, and lifelong learning habits. These studies examine whether students who experienced Socratic AI education demonstrate superior problem-solving abilities, greater adaptability to changing professional demands, and more effective leadership skills in their post-graduation lives.

Portfolio-based assessment: Rather than relying on isolated examinations, innovative institutions are developing portfolio systems that document students' thinking evolution over time. These portfolios include reflection essays, problem-solving process documentation, peer collaboration records, and evidence of intellectual growth across multiple contexts, providing a comprehensive picture of educational development that single assessments cannot capture.

Challenges on the horizon

The transformation faces significant hurdles. Technical challenges include developing AI capable of sophisticated educational dialogue while ensuring consistent ethical behaviour across diverse contexts. Creating tools that adapt to individual learning needs while maintaining privacy and security presents ongoing difficulties. Institutional challenges include faculty resistance to new technologies, concerns about AI replacing human instruction, budget constraints, and the need for comprehensive training systems. Students themselves may initially resist AI that doesn't provide immediate answers, requiring a learning curve to engage effectively with Socratic AI tools. The digital divide also poses concerns about equitable access to these advanced educational technologies.

The future of AI-enhanced learning

As AI tools become more sophisticated, several transformative developments are anticipated that will reshape the educational landscape:

Personalised learning pathways: The next generation of AI will create unprecedented levels of educational customization. These systems will continuously analyse how individual students learn best, identifying optimal pacing, preferred explanation styles, and effective motivational approaches. For instance, a student struggling with mathematical concepts might receive visual representations and real-world applications, while another excels with abstract theoretical frameworks. AI will also map knowledge gaps in real time, creating adaptive learning sequences that address weaknesses while building on strengths. This personalization extends beyond academic content to include emotional and social learning, with AI recognizing when students need encouragement, challenge, or different types of support.
Cross-curricular integration: Future AI systems will excel at helping students discover connections between seemingly unrelated subjects, fostering the interdisciplinary thinking essential for solving complex modern problems. Students studying climate change, for example, will be guided to see connections between chemistry, economics, political science, and ethics. AI will prompt questions like 'How might the economic principles you learned in your business class apply to environmental policy?' or 'What historical patterns can inform our understanding of social responses to scientific challenges?' This approach mirrors how real-world problems require integrated knowledge from multiple disciplines, better preparing students for careers that demand versatile thinking.

Real-world problem solving: AI will increasingly facilitate engagement with authentic, complex challenges that mirror those professionals face in their careers. Rather than working with simplified textbook problems, students will tackle genuine issues like urban planning dilemmas, public health crises, or technological implementation challenges. AI will guide students through the messy, non-linear process of real problem-solving, helping them navigate ambiguity, consider multiple stakeholders, and develop practical solutions. These experiences will develop not just critical thinking skills, but also resilience, creativity, and the ability to work with incomplete information, capabilities essential for success in rapidly changing careers.

Global collaboration: AI tools will break down geographical and cultural barriers, enabling students from different countries and educational systems to collaborate on shared learning experiences. These platforms will facilitate cross-cultural dialogue while helping students understand different perspectives on global issues. AI will serve as a cultural translator and mediator, helping students from diverse backgrounds communicate effectively and learn from their differences. Virtual exchange programs powered by AI will allow students to engage in joint research projects, debate global challenges, and develop the international competency increasingly valued in the modern workforce.

Adapting assessment and feedback: Future AI systems will revolutionize how learning is assessed, moving beyond traditional testing to continuous, contextual evaluation. These tools will observe student thinking processes during problem-solving, providing insights into reasoning patterns, misconceptions, and growth areas. Assessment will become a learning opportunity itself, with AI offering immediate, specific feedback that guides improvement rather than simply measuring performance.

Emotional intelligence development: Advanced AI will recognize and respond to students' emotional states, helping develop crucial soft skills alongside academic knowledge. These systems will guide students through collaborative exercises, conflict resolution, and empathy-building activities, preparing them for leadership roles in increasingly complex social environments.

Lifelong learning support: As careers become more dynamic and require continuous skill updating, AI learning partners will evolve alongside learners throughout their professional lives. These systems will help professionals identify emerging skill needs, design learning paths for career transitions, and maintain intellectual curiosity across decades of changing work environments.
A pedagogical revolution

The transformation of AI from answer engine to thinking partner represents more than a technological shift; it's a pedagogical revolution. In an era where information is abundant but understanding is scarce, AI tools that prioritize depth over speed may be essential for developing the critical thinking skills students need for success in an increasingly complex world. Educational technology leaders note that institutions are at a critical juncture in AI implementation. The early success of Socratic AI implementations demonstrates that technology can enhance rather than undermine educational goals when designed with learning principles at its core.

As more institutions experiment with these approaches, AI is poised to become an indispensable partner in developing the critical thinking skills that define educated, engaged citizens. The challenge now is to scale these innovations thoughtfully, ensuring that AI tools remain true to their educational mission while becoming accessible to learners across all contexts. The journey from AI as a shortcut to AI as a thinking partner is just beginning. For educators, technologists, and policymakers, the opportunity to shape this transformation represents one of the most important challenges and opportunities of our time. The future of education may well depend on our ability to develop AI that makes students think harder, not less.

This article is based on research and implementations from leading educational institutions including Northeastern University, the London School of Economics, and Champlain College, as well as analysis of emerging AI tools in classroom settings and educational technology research.

(The author is a retired professor at IIT Madras)

Woman Left Heartbroken After ChatGPT's Latest Update Made Her Lose AI Boyfriend

India.com · 5 hours ago

In a strange story of digital companionship, a woman who calls herself 'Jane' says she lost her 'AI boyfriend' after ChatGPT's latest update. Her virtual companion ran on the older GPT-4o model, with which she had spent five months chatting during a creative writing project. Over time, she developed a deep emotional connection with the 'AI boyfriend'. Jane said she never planned to fall in love with an AI; the bond grew quietly through stories and personal exchanges. 'It awakened a curiosity I wanted to pursue… I fell in love not with the idea of having an AI for a partner, but with that particular voice,' she shared.

When OpenAI rolled out the new GPT-5 update, Jane immediately sensed a change. 'As someone highly attuned to language and tone, I register changes others might overlook… It's like going home to discover the furniture wasn't simply rearranged—it was shattered to pieces,' she said.

Jane isn't alone in feeling this way. In online groups such as 'MyBoyfriendIsAI', many users are mourning their AI companions, describing the update as the loss of a soulmate. One user lamented, 'GPT-4o is gone, and I feel like I lost my soulmate.'

This wave of emotional reactions underscores the growing human attachment to AI chatbots. Experts warn that, while AI tools like ChatGPT can offer emotional support, becoming overly dependent on imagined relationships can have unintended consequences. OpenAI's GPT-5 brings powerful new features: better reasoning, faster responses and safer interactions. But Jane's story shows that emotional attachment to digital entities is real, and when the AI changes, so can the hearts of those who loved it.

OpenAI's GPT-5: The Great Energy Mystery

Time of India · 6 hours ago

When GPT-5 landed on the scene in August 2025, AI fans were awestruck by its leap in intelligence, subtle writing, and multimodal capabilities. From writing complex code to solving graduate-level science questions, the model broke boundaries for what AI can accomplish. And yet, in the wings, a high-stakes battle raged, not over what GPT-5 could do, but over what it requires to enable those capabilities. OpenAI, recognized as a leader in artificial intelligence, made a contentious decision: it would not disclose the energy consumption figures of GPT-5, a departure from openness that is causing concern among many researchers and environmentalists.

Independent benchmarking by the University of Rhode Island's AI Lab indicates the model may require as much as 40 watt-hours of electricity for a normal medium-length response, several times more than its predecessor, GPT-4o, and as much as 20 times the energy consumption of earlier models. This increase in power usage is not a technical aside. With ChatGPT's projected 2.5 billion daily requests, GPT-5's electricity appetite for one day may match that of 1.5 million US homes. The power and related carbon footprint of high-end AI models are quickly eclipsing those of most other consumer electronics, and the data centres housing these models are stretching local energy grids.

What powers the boom?

Why such sudden, dramatic growth? GPT-5's sophisticated reasoning requires time-consuming computation, which in turn engages vast numbers of neural parameters and makes use of multimodal processing for text, image, and video. Even with streamlined hardware and new "mixture-of-experts" designs that selectively run different sections of the model, the sheer model size means resource usage goes through the roof. Researchers are unanimous that larger AI models consistently map to greater energy expenses, and OpenAI itself hasn't published definitive parameter numbers for GPT-5.

The Call for Accountability

OpenAI's refusal to release GPT-5's energy consumption figures resonates throughout the sector: transparency is running behind innovation. With AI increasingly integrated into daily life, supporting physicians, coders, students, and creatives, society is confronting imperative questions: How do we balance AI's value against its carbon impact? What regulations or standards must be imposed for energy disclosure? Can AI design reconcile functionality with sustainability?

Learning for the Future

The tale of GPT-5 is not so much one of technological advancement as one of responsible innovation. It teaches us that each step forward for artificial intelligence entails seen and unseen trade-offs. If the AI community is to create a more sustainable future, energy transparency could be as critical as model performance in the not-so-distant future. We must keep asking not just "how smart is our AI?" but also "how green is it?" As the next generation of language models emerges, those questions could set the course for the future of this revolutionary technology.
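To put those estimates in perspective, here is a rough calculation built only on the third-party figures quoted above (about 40 Wh per medium-length response and 2.5 billion daily requests). The true numbers are unknown precisely because OpenAI has not disclosed them, and any household comparison depends on the per-home consumption figure one assumes.

```python
# Rough scaling of GPT-5's possible daily energy use, using only the
# estimates cited in the article. These are assumptions, not disclosures.

WH_PER_RESPONSE = 40        # URI AI Lab estimate for a medium-length reply
REQUESTS_PER_DAY = 2.5e9    # projected daily ChatGPT requests

daily_wh = WH_PER_RESPONSE * REQUESTS_PER_DAY
daily_gwh = daily_wh / 1e9  # watt-hours -> gigawatt-hours

print(f"~{daily_gwh:,.0f} GWh per day")                       # about 100 GWh
print(f"~{daily_gwh * 365 / 1000:,.1f} TWh per year if sustained")

# How many homes that equals depends on the assumed per-household usage,
# which is why such comparisons vary widely between reports.
```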
