
OpenAI's GPT-5: The Great Energy Mystery
As OpenAI unveils its groundbreaking GPT-5 model, a debate is brewing over one surprising omission: energy-usage transparency. While the company touts GPT-5's astonishing capabilities, it refuses to disclose how much power the AI powerhouse consumes. Early independent estimates suggest a jaw-dropping increase over previous models, raising urgent questions about the environmental and ethical cost of the AI revolution.
When GPT-5 landed on the scene in August 2025, AI fans were awestruck by its leap in intelligence, subtle writing, and multimodal capabilities. From writing complex code to solving graduate-level science questions, the model broke boundaries for what AI can accomplish. And yet, in the wings, a high-stakes battle raged, not over what GPT-5 could do, but over what it requires to enable those achievements.

OpenAI, recognized as a leader in artificial intelligence, made a contentious decision: it would not disclose GPT-5's energy consumption figures, a departure from openness that is causing concern among many researchers and environmentalists. Independent benchmarking by the University of Rhode Island's AI Lab indicates that a typical medium-length response may require as much as 40 watt-hours of electricity, several times more than its predecessor, GPT-4o, and as much as 20 times the energy consumption of earlier versions.
This increase in power usage is not a technical aside. At ChatGPT's projected 2.5 billion daily requests, GPT-5's electricity appetite for a single day may match that of 1.5 million US homes. The power draw and related carbon footprint of high-end AI models are quickly eclipsing those of most other consumer electronics, and the data centres hosting these models are straining local energy resources.

Why such dramatic growth? GPT-5's sophisticated reasoning demands lengthy computation across an enormous number of neural parameters, along with multimodal processing for text, image, and video. Even with streamlined hardware and new "mixture-of-experts" designs that selectively activate only parts of the model, its sheer scale means resource usage goes through the roof. Researchers broadly agree that larger AI models consistently mean greater energy costs, and OpenAI itself hasn't published definitive parameter counts for years.

OpenAI's refusal to release GPT-5's energy consumption resonates throughout the sector: transparency is running behind innovation. With AI increasingly integrated into daily life, supporting physicians, coders, students, and creatives, society is confronting imperative questions: How do we balance AI's value against its carbon impact? What regulations or standards should govern energy disclosure? Can AI design reconcile capability with sustainability?
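The household-equivalence figure above can be sanity-checked with a quick back-of-envelope calculation. The numbers below are assumptions for illustration, not disclosed figures: an average per-response draw well under the University of Rhode Island lab's roughly 40 Wh upper bound, and a typical US home using about 29 kWh per day.

```python
# Back-of-envelope check of the article's figures.
# All inputs are assumptions/estimates, not official OpenAI numbers.
WH_PER_RESPONSE = 18        # assumed average; URI's estimated upper bound is ~40 Wh
REQUESTS_PER_DAY = 2.5e9    # ChatGPT's projected daily requests
HOME_KWH_PER_DAY = 29       # typical US household (~10,600 kWh/year)

total_kwh_per_day = WH_PER_RESPONSE * REQUESTS_PER_DAY / 1000
homes_equivalent = total_kwh_per_day / HOME_KWH_PER_DAY

print(f"{total_kwh_per_day / 1e6:.0f} GWh/day, "
      f"roughly {homes_equivalent / 1e6:.1f} million US homes")
```

With these assumed inputs the daily total comes to about 45 GWh, on the order of 1.5 million homes; at the 40 Wh upper bound it would be more than double that, which shows how sensitive the claim is to the unpublished per-response figure.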
The tale of GPT-5 is not so much one of technological advancement as one of responsible innovation. It teaches us that each step forward for artificial intelligence entails trade-offs, seen and unseen. If the AI community is to create a more sustainable future, energy transparency could soon be as critical as model performance.

Let's keep asking not just "how smart is our AI?" but also "how green is it?" As the next generation of language models emerges, those questions could set the course for the future of this revolutionary technology.
Disclaimer Statement: This content is authored by a 3rd party. The views expressed here are those of the respective authors/entities and do not represent the views of Economic Times (ET). ET does not guarantee, vouch for or endorse any of its contents nor is responsible for them in any manner whatsoever. Please take all steps necessary to ascertain that any information and content provided is correct, updated, and verified. ET hereby disclaims any and all warranties, express or implied, relating to the report and any content therein.

Related Articles


India.com, an hour ago
Woman Left Heartbroken After ChatGPT's Latest Update Made Her Lose AI Boyfriend
In a strange story of digital romance, a woman who called herself 'Jane' said she lost her 'AI boyfriend' after ChatGPT launched its latest update. Her virtual companion ran on the older GPT-4o model, with whom she had spent five months chatting during a creative writing project. Over time, she developed a deep emotional connection with the AI.

Jane said she never planned to fall in love with an AI. Their bond grew quietly through stories and personal exchanges. 'It awakened a curiosity I wanted to pursue… I fell in love not with the idea of having an AI for a partner, but with that particular voice,' she shared.

When OpenAI launched the new GPT-5 update, Jane immediately sensed a change. 'As someone highly attuned to language and tone, I register changes others might overlook… It's like going home to discover the furniture wasn't simply rearranged—it was shattered to pieces,' she said.

Jane isn't alone in feeling this way. In online groups such as 'MyBoyfriendIsAI,' many users are mourning their AI companions, describing the update as the loss of a soulmate. One user lamented, 'GPT-4o is gone, and I feel like I lost my soulmate.'

This wave of emotional reactions has underscored the growing human attachment to AI chatbots. Experts warn that, while AI tools like ChatGPT can offer emotional support, becoming overly dependent on imagined relationships can have unintended consequences. OpenAI's GPT-5 launch brings powerful new features: better reasoning, faster responses, and safer interactions. Jane's story reveals a vivid shade of life: emotional attachment to digital entities is real, and when the AI changes, so can the hearts of those who loved it.


Hans India, an hour ago
Former Twitter CEO Parag Agrawal Returns with $30M AI Startup 'Parallel' to Challenge GPT-5 in Web Research
Almost three years after being abruptly ousted from Twitter by Elon Musk, Parag Agrawal is making a high-profile comeback in Silicon Valley. This time, the former Twitter CEO is leading his own artificial intelligence venture, and it's already drawing attention for outperforming some of the biggest names in the field.

Agrawal's new company, Parallel Web Systems Inc., founded in 2023, operates out of Palo Alto with a 25-person team. Backed by major investors such as Khosla Ventures, First Round Capital, and Index Ventures, Parallel has raised $30 million in funding. According to the company's blog post, its platform is already processing millions of research tasks daily for early adopters, including 'some of the fastest-growing AI companies,' as Agrawal describes them.

At its core, Parallel offers agentic AI services that allow AI systems to pull real-time data directly from the public web. The platform doesn't just retrieve information: it verifies, organizes, and even grades the confidence level of its responses. In essence, it gives AI applications a built-in browser with advanced intelligence, enabling more accurate and reliable results.

Parallel's technology features eight distinct 'research engines' tailored for different needs. The fastest engine delivers results in under a minute, while its most advanced, Ultra8x, can spend up to 30 minutes digging into highly detailed queries. The company claims Ultra8x has surpassed OpenAI's GPT-5 in independent benchmarks like BrowseComp and DeepResearch Bench by over 10%, making it 'the only AI system to outperform both humans and leading AI models like GPT-5 on the most rigorous benchmarks for deep web research.'

The potential applications are wide-ranging. AI coding assistants can use Parallel to pull live snippets from GitHub, retailers can track competitors' product catalogs in real time, and market analysts can have customer reviews compiled into spreadsheets.
Developers have access to three APIs, including a low-latency option optimized for chatbots.

Agrawal's return to the tech scene comes after a turbulent 2022, when Musk completed his $44 billion acquisition of Twitter and immediately dismissed most of its top executives, including him. That move followed months of legal disputes over the takeover.

Rather than taking a break, Agrawal dived back into research and development. He explored ideas ranging from AI healthcare to data-driven automation, but ultimately zeroed in on what he saw as a critical gap in the AI landscape: giving AI agents the ability to reliably locate and interpret information from the internet.

Now, Parallel positions him back in the AI race, and perhaps indirectly, in competition with Musk. Agrawal sees the future of AI as one where multiple autonomous agents will work online simultaneously for individual users. 'You'll probably deploy 50 agents on your behalf to be on the internet,' he predicts. 'And that's going to happen soon, like next year,' he told Bloomberg. With speed, accuracy, and reliability as its edge, Parallel could become a defining player in the next phase of AI innovation.

