
How Much Energy Does AI Use? The People Who Know Aren't Saying
Jun 19, 2025 6:00 AM
A growing body of research attempts to put a number on AI's energy use—even as the companies behind the most popular models keep their carbon emissions a secret.
Photograph: Bloomberg/Getty Images
'People are often curious about how much energy a ChatGPT query uses,' Sam Altman, the CEO of OpenAI, wrote in an aside in a long blog post last week. The average query, Altman wrote, uses 0.34 watt-hours of energy: 'About what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes.'
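Taken at face value, the comparisons check out. Here is a quick sanity check of the arithmetic, assuming a roughly 1.2-kilowatt oven element and a 10-watt LED bulb (both appliance ratings are assumptions for illustration, not OpenAI figures):

```python
# Sanity check of Altman's comparison: 0.34 Wh per query.
# Appliance wattages are illustrative assumptions, not OpenAI figures.
QUERY_WH = 0.34   # watt-hours per "average" query, per Altman's blog post
OVEN_W = 1200     # assumed electric oven element
LED_W = 10        # assumed high-efficiency LED bulb

oven_seconds = QUERY_WH / OVEN_W * 3600   # Wh / W = hours; x3600 -> seconds
led_minutes = QUERY_WH / LED_W * 60       # Wh / W = hours; x60 -> minutes

print(f"Oven: {oven_seconds:.1f} s, LED bulb: {led_minutes:.1f} min")
# -> Oven: 1.0 s, LED bulb: 2.0 min, consistent with the post's framing
```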
For a company with 800 million weekly active users (and growing), the question of how much energy all these queries consume is becoming increasingly pressing. But experts say Altman's figure doesn't mean much without more public context from OpenAI about how it arrived at the calculation—including the definition of an 'average' query, whether or not it includes image generation, and whether or not Altman is counting additional energy use, like that from training AI models and cooling OpenAI's servers.
As a result, Sasha Luccioni, the climate lead at AI company Hugging Face, doesn't put too much stock in Altman's number. 'He could have pulled that out of his ass,' she says. (OpenAI did not respond to a request for more information about how it arrived at this number.)
As AI takes over our lives, it's also promising to transform our energy systems, supercharging carbon emissions right as we're trying to fight climate change. Now, a new and growing body of research is attempting to put hard numbers on just how much carbon we're actually emitting with all of our AI use.
This effort is complicated by the fact that major players like OpenAI disclose little environmental information. In an analysis submitted for peer review this week, Luccioni and three coauthors make the case for more environmental transparency in AI models. Using data from OpenRouter, a leaderboard of large language model (LLM) traffic, they find that 84 percent of LLM use in May 2025 went to models with zero environmental disclosure. That means consumers are overwhelmingly choosing models with completely unknown environmental impacts.
'It blows my mind that you can buy a car and know how many miles per gallon it consumes, yet we use all these AI tools every day and we have absolutely no efficiency metrics, emissions factors, nothing,' Luccioni says. 'It's not mandated, it's not regulatory. Given where we are with the climate crisis, it should be top of the agenda for regulators everywhere.'
As a result of this lack of transparency, Luccioni says, the public is being exposed to estimates that make no sense but are taken as gospel. You may have heard, for instance, that the average ChatGPT request takes 10 times as much energy as the average Google search. Luccioni and her colleagues trace this claim to a public remark that John Hennessy, the chairman of Google's parent company Alphabet, made in 2023.
A claim made by a board member from one company (Google) about the product of another company to which he has no relation (OpenAI) is tenuous at best—yet, Luccioni's analysis finds, this figure has been repeated again and again in press and policy reports. (As I was writing this piece, I got a pitch with this exact statistic.)
'People have taken an off-the-cuff remark and turned it into an actual statistic that's informing policy and the way people look at these things,' Luccioni says. 'The real core issue is that we have no numbers. So even the back-of-the-napkin calculations that people can find, they tend to take them as the gold standard, but that's not the case.'
One way to peek behind the curtain for more accurate information is to work with open source models. Some tech giants, including OpenAI and Anthropic, keep their models proprietary—meaning outside researchers can't independently verify their energy use. But other companies make parts of their models publicly available, allowing researchers to more accurately gauge their emissions.
A study published Thursday in the journal Frontiers in Communication evaluated 14 open-source large language models, including two Meta Llama models and three DeepSeek models, and found that some used as much as 50 percent more energy than other models in the dataset when responding to the researchers' prompts. The 1,000 benchmark prompts submitted to the LLMs included questions on topics such as high school history and philosophy; half were formatted as multiple choice, allowing only one-word answers, while half were submitted as open prompts, allowing a freer format and longer answers. Reasoning models, the researchers found, generated far more thinking tokens—measures of the internal reasoning a model produces on the way to its answer, and a hallmark of higher energy use—than more concise models. These models, perhaps unsurprisingly, were also more accurate on complex topics. (They also had trouble with brevity: During the multiple-choice phase, the more complex models would often return answers of multiple tokens, despite explicit instructions to answer only from the range of options provided.)
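The paper's exact measurement harness isn't detailed here, but energy accounting for open-weight models is typically done by wrapping generation in a software power meter. Below is a minimal sketch using the open source codecarbon tracker and Hugging Face transformers; the model name and prompt are illustrative placeholders, and this is not necessarily the study's setup:

```python
# Minimal sketch of per-prompt energy measurement for an open-weight model.
# Not the study's actual harness; model name and prompt are placeholders.
from codecarbon import EmissionsTracker
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed example model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

tracker = EmissionsTracker(project_name="llm-energy-probe")
tracker.start()
inputs = tok("Who wrote the Critique of Pure Reason?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
emissions_kg = tracker.stop()  # estimated kg CO2e for this single generation

print(tok.decode(outputs[0], skip_special_tokens=True))
print(f"Estimated emissions: {emissions_kg * 1000:.3f} g CO2e")
```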
Maximilian Dauner, a PhD student at the Munich University of Applied Sciences and the study's lead author, says he hopes AI use will evolve toward matching queries to less-energy-intensive models wherever they suffice. He envisions a process where smaller, simpler questions are automatically directed to smaller models that still provide accurate answers. 'Even smaller models can achieve really good results on simpler tasks, and don't have that huge amount of CO2 emitted during the process,' he says.
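The routing idea Dauner describes can be sketched in a few lines. Everything below is hypothetical: the difficulty heuristic and the model names are stand-ins, and a production router would use a trained classifier rather than a word count.

```python
# Hypothetical query router: send short/simple prompts to a small model,
# reserving the large reasoning model for genuinely hard questions.
def looks_complex(prompt: str) -> bool:
    # Crude stand-in for a real difficulty classifier (assumption).
    triggers = ("prove", "derive", "step by step", "explain why")
    return len(prompt.split()) > 40 or any(t in prompt.lower() for t in triggers)

def route(prompt: str) -> str:
    # Model names are placeholders, not specific products.
    return "large-reasoning-model" if looks_complex(prompt) else "small-efficient-model"

print(route("What year did WWII end?"))           # -> small-efficient-model
print(route("Prove that sqrt(2) is irrational"))  # -> large-reasoning-model
```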
Some tech companies already do this. Google and Microsoft have previously told WIRED that their search features use smaller models when possible, which can also mean faster responses for users. But generally, model providers have done little to nudge users toward using less energy. How quickly a model answers a question, for instance, has a big impact on its energy use—but that's not explained when AI products are presented to users, says Noman Bashir, the Computing & Climate Impact Fellow at MIT's Climate and Sustainability Consortium.
'The goal is to provide all of this inference the quickest way possible so that you don't leave their platform,' he says. 'If ChatGPT suddenly starts giving you a response after five minutes, you will go to some other tool that is giving you an immediate response.'
However, there are myriad other considerations to take into account when calculating the energy use of complex AI queries—it's not just theoretical; the conditions under which queries actually run in the real world matter. Bashir points out that physical hardware makes a difference when calculating emissions. Dauner ran his experiments on an Nvidia A100 GPU, but Nvidia's H100 GPU—which was specially designed for AI workloads and which, according to the company, is becoming increasingly popular—is much more energy-intensive.
Physical infrastructure also makes a difference when talking about emissions. Large data centers need cooling systems, lighting, and networking equipment, all of which add energy use; their loads often follow diurnal cycles, dipping at night when query volume is lower. And they are hooked up to very different grids—some overwhelmingly powered by fossil fuels, others by renewables—depending on their locations.
Bashir compares studies that look at emissions from AI queries without factoring in data center needs to lifting up a car, hitting the gas, and counting revolutions of a wheel as a way of doing a fuel-efficiency test. 'You're not taking into account the fact that this wheel has to carry the car and the passenger,' he says.
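To make Bashir's point concrete: a per-query energy number only becomes an emissions number once data center overhead (commonly expressed as power usage effectiveness, or PUE) and the carbon intensity of the local grid are folded in. A back-of-the-envelope sketch, where the PUE and grid-intensity values are illustrative assumptions rather than measured figures:

```python
# Back-of-the-envelope: per-query emissions under different assumptions.
QUERY_WH = 0.34   # Altman's per-query figure (scope unknown, as noted above)
PUE = 1.2         # assumed data center overhead multiplier
# Assumed grid carbon intensities in g CO2 per kWh (illustrative only):
GRIDS = {"coal-heavy": 800, "mixed": 370, "mostly renewable": 50}

for grid, g_per_kwh in GRIDS.items():
    grams = QUERY_WH / 1000 * PUE * g_per_kwh
    print(f"{grid}: {grams:.3f} g CO2 per query")
# Under these assumptions, the same query differs ~16x in emissions
# depending on the grid alone.
```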
Perhaps most crucially for our understanding of AI's emissions, open source models like the ones Dauner used in his study represent a fraction of the AI models used by consumers today. Training a model and updating deployed models takes a massive amount of energy—figures that many big companies keep secret. It's unclear, for example, whether the light bulb statistic about ChatGPT from OpenAI's Altman takes into account all the energy used to train the models powering the chatbot. Without more disclosure, the public is simply missing much of the information needed to start understanding just how much this technology is impacting the planet.
'If I had a magic wand, I would make it mandatory for any company putting an AI system into production, anywhere, around the world, in any application, to disclose carbon numbers,' Luccioni says.
Paresh Dave contributed reporting.

Related Articles


Digital Trends
Your Meta AI chats are not really a secret. Here's how to keep them private
At this point, it shouldn't come as a surprise that discussing your deepest secrets and personal issues with an AI chatbot is not a good idea. And when that chatbot is made by Meta, the company behind Facebook and Instagram (with all its sordid history with user data privacy), there is even more reason to be cautious. But it seems a lot of users are oblivious to the risks and are, in turn, exposing themselves in the worst possible ways.

Your chatbot interactions with Meta AI — from seeking trip suggestions to jazzing up an image — are publicly visible in the app's endlessly scrolling vertical Discover feed. I installed the app a day ago, and in less than 10 minutes of using it, I had already come across people sharing their entire resume, complete with their address, phone number, qualifications, and more, on the main feed page. Some had asked the Meta AI chatbot for trip ideas in Bangkok involving strip clubs, while others had weirdly specific demands regarding a certain skin condition. Users on social media have also documented the utterly chaotic nature of their app's Discover feed. An expert at the Electronic Privacy Information Center told WIRED that people are sharing everything from medical history to court details.

How to plug the Meta AI app's privacy holes?

Of course, an app that doesn't offer granular controls and a more explicit setup flow regarding chat privacy is a disaster waiting to happen. The Meta AI app clearly fumbled on this front and puts the onus of course correction on users. If you have the app installed, follow these steps:

1. Open the Meta AI app and tap the round profile icon in the top-right corner to open the Settings dashboard.
2. On the Settings page, tap Data & Privacy, then Manage your information on the next page.
3. Tap the option that says 'Make all public prompts visible to only you' and select 'Apply to all' in the pop-up window.
4. If you are concerned about previous AI chats that contained sensitive information, clear the past log by tapping the 'Delete all prompts' option on the same page.
5. Go back to the Data & Privacy section and tap 'Suggesting your prompts on other apps.' On the next page, disable the toggles for Instagram and Facebook.

If you have already shared Meta AI interactions publicly, tap the notepad icon in the bottom tray to see the entire history. On the chat record page, tap any past interaction to open it, then tap the three-dot menu button in the top-right corner. A pop-up tray offers options to either delete the chat or make it private so that no other Meta AI users can see it in their Discover feed. As a general rule of thumb, don't discuss personal or identifiable information with the chatbot, and avoid sharing pictures for creative edits.

Why is it deeply problematic?

When the Meta AI app was introduced in April, the company said its Discover feed was 'a place to share and explore how others are using AI.' Right now, it's brimming with all kinds of weird requests. A healthy few appear fixated on finding free dating and fun activity ideas; others concern career and relationship woes, finding love in foreign lands, and skin issues in intimate parts of the body.
'Facebook's 'Meta AI' literally just puts everyone's private conversations directly on a public For You page what the actual fuck lol' — Daniel (@danielgothits), June 12, 2025

Here is the worst part. The only meaningful warning appears when you are about to post your AI creation (or interaction) to the feed. The pop-up message says 'Feed is public' at the top, with the 'Post to feed' button underneath. According to Business Insider, that warning was not always visible and was only added after the public outcry. But it appears that a lot of people are not aware of what the 'Post to Feed' button actually does. To them, it might come across as referring to their own feed, where their Meta AI chats are catalogued in an orderly fashion for their eyes only, the way chats appear in other chatbot apps such as ChatGPT and Gemini.

Another risk is exposure during setup. When the app picks up account information from the Facebook and/or Instagram app installed on your phone, the text boxes are dynamic, which means you can go ahead and change the username. Notably, there is no 'edit' or 'change' signal, so to an ordinary person it would simply appear as if Meta AI extracted the correct username from their pre-installed social app. It's not too different from the seamless sign-up experience in apps that let users log in with a Google Account or Apple ID.

'Wild things are happening on Meta's AI app. The feed is almost entirely boomers who seem to have no idea their conversations with the chatbot are posted publicly. They get pretty personal (see second pic, which I anonymized).' — Justine Moore (@venturetwins), June 11, 2025

When I first installed the app on my iPhone 16 Pro, it automatically identified the Instagram account logged in on the phone. I tapped the button with my username plastered over it and was taken directly to the main page of the Meta AI app, where I could jump straight into the Discover feed. There was no warning about privacy, or about how a log of my data would be shared or even made public. If you don't want your AI prompts to appear in the public Discover feed, you have to manually enable an option in the app's settings, as described above.

The flow is slightly different on Android, where you see a small 'chats will be public' notice during the initial setup. That message appears only once, and on no other page. Just like on iOS, you must manually enable the option to keep your chats out of the Discover feed and to stop chat prompts from appearing inside Instagram and Facebook.

If you absolutely must use Meta AI, you can already summon it in WhatsApp, Instagram, and Facebook. In those apps, you can ask Meta AI random questions, have it create images, or give your pictures a fun makeover, among other things. Be warned, however, that AI still struggles with hallucination, and you should double-check whatever information the chatbot serves you.


Android Authority
Google Photos' upcoming Remix feature could launch with a video upgrade (APK teardown)
TL;DR:
- Google Photos' upcoming Remix feature, internally codenamed 'Bluejay,' will likely support video editing.
- When it launches, the feature will use generative AI to transform videos into various styles, with options potentially including 'I'm feeling lucky,' 'Subtle movements,' and 'Go wild.'

Creating AI photos and videos is all the rage these days, but many people are also using AI to remix their existing photos and videos. For instance, the Studio Ghibli trend went viral recently, and people had a lot of fun reimagining themselves in the popular art style. We've previously spotted Google Photos working to incorporate this generative AI use case with the upcoming Remix feature. While the feature has yet to launch, we've now spotted clues indicating it will work for both photos and videos.

You're reading an Authority Insights story on Android Authority. Discover Authority Insights for more exclusive reports, app teardowns, leaks, and in-depth tech coverage you won't find anywhere else. An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.

Google Photos v7.34 includes code indicating that the upcoming Remix feature could also support video edits:

Remix
Bluejay Video
I'm Feeling Lucky. Subtle Movements. Go Wild.

Here, 'bluejay' is the working codename for the Remix feature, and in the second string it is used as a placeholder for the marketing name 'Remix' (which is still a work in progress and may or may not be the final marketing name). While the Remix feature for photos could offer styles like claymation and anime, the Remix video feature would come with its own style suggestions. We could spot these three:

- Random style: I am feeling lucky
- Subtle style: Subtle Movements
- Wild style: Go Wild

As is the theme with the Remix feature, the Remix video feature will also likely use generative AI to transform your video into these different styles. Google has yet to announce the Remix feature, and it's still very much a work in progress. We don't know if or when it will roll out to users. We'll keep you updated when we learn more.
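Teardown work of this kind typically starts by decoding the APK and searching its string resources. Below is a minimal sketch, assuming the APK has already been decoded with a tool like apktool; the file paths and output directory name are hypothetical.

```python
# Search a decoded APK's string resources for Remix/"bluejay" entries.
# Assumes something like `apktool d photos.apk -o photos_src` was run first.
import xml.etree.ElementTree as ET
from pathlib import Path

for strings_xml in Path("photos_src/res").glob("values*/strings.xml"):
    root = ET.parse(strings_xml).getroot()
    for s in root.findall("string"):
        text = (s.text or "") + s.get("name", "")
        if "bluejay" in text.lower() or "remix" in text.lower():
            print(f"{strings_xml}: {s.get('name')} = {s.text!r}")
```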


Gizmodo
SpaceX Starship Explodes in Spectacular Fireball at Texas Test Facility
Are we on Mars yet? The upper stage prototype, designated Ship 36, exploded shortly before midnight local time on June 18 during routine preparations for an upcoming test flight. SpaceX is in the midst of preparing for Starship's next fully integrated test, known as Flight 10. The last several tests haven't gone well, but this prototype never even left the ground.

The explosion, captured in spectacular footage, took place at SpaceX's Massey facility, a test site located several miles from the launch mount at Starbase, Texas. The 171-foot-tall (51-meter) Starship upper stage 'experienced a major anomaly while on a test stand at Starbase,' SpaceX said in a statement on X. 'A safety clear area around the site was maintained throughout the operation and all personnel are safe and accounted for.'

The Starbase team is coordinating with local authorities to manage the aftermath of the incident, SpaceX said, adding that, while the company reported no threat to nearby communities, it urged the public to steer clear of the area as safety measures are carried out.

SpaceX CEO Elon Musk chimed in a few hours after the incident on X, brushing it off as 'Just a scratch.' He elaborated further this morning, saying the early data 'suggests a nitrogen COPV [composite overwrapped pressure vessel] in the payload bay failed below its proof pressure,' and that if this proves to be the case, it's the 'first time ever for this design.' A COPV is a lightweight tank made of composite fibers wrapped around a thin liner to store high-pressure fluids, according to NASA.

No further details are known, but as SpaceNews points out, Starship was being prepared for a static fire test, and the explosion happened before the rocket had a chance to fire its Raptor engines. A June 18 advisory from the Federal Aviation Administration pointed to June 29 as a potential date for Flight 10, but that seems unlikely now.

SpaceX is in a bit of a slump right now, with this incident adding to a pile of recent setbacks. Flight 7, in January 2025, experienced a propellant leak and fire triggered by unexpected vibrations in the propulsion system. In March, Flight 8 was cut short by a hardware failure in one of the Raptor engines, while Flight 9 in May successfully reached space, but a leak led to loss of control and the vehicle broke apart during reentry.

SpaceX's Starship megarocket is built to carry people and cargo to the Moon, Mars, and other destinations around the solar system. It's a key part of NASA's Artemis program, which aims to land astronauts on the Moon by 2027, and central to Elon Musk's goal of colonizing Mars.