Why Some AI Models Spew 50 Times More Greenhouse Gas to Answer the Same Question

Gizmodo · June 19, 2025
Like it or not, large language models have quickly become embedded into our lives. And due to their intense energy and water needs, they might also be causing us to spiral even faster into climate chaos. Some LLMs, though, might be releasing more planet-warming pollution than others, a new study finds.
Queries made to some models generate up to 50 times more carbon emissions than others, according to a new study published in Frontiers in Communication. Unfortunately, and perhaps unsurprisingly, models that are more accurate tend to have the biggest energy costs.
It's hard to estimate just how bad LLMs are for the environment, but some studies have suggested that training ChatGPT used up to 30 times more energy than the average American uses in a year. What isn't known is whether some models have steeper energy costs than their peers as they're answering questions.
Researchers from the Hochschule München University of Applied Sciences in Germany evaluated 14 LLMs ranging from 7 to 72 billion parameters—the levers and dials that fine-tune a model's understanding and language generation—on 1,000 benchmark questions about various subjects.
LLMs convert each word, or part of a word, in a prompt into a string of numbers called a token. Some LLMs, particularly reasoning LLMs, also insert special 'thinking tokens' into the input sequence to allow for additional internal computation and reasoning before generating output. This conversion and the subsequent computations that the LLM performs on the tokens use energy and release CO2.
The scientists compared the number of tokens generated by each of the models they tested. Reasoning models, on average, created 543.5 thinking tokens per question, whereas concise models required just 37.7 tokens per question, the study found. In the ChatGPT world, for example, GPT-3.5 is a concise model, whereas GPT-4o is a reasoning model.
This reasoning process drives up energy needs, the authors found. 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach,' study author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, said in a statement. 'We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.'
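The token gap behind that finding can be sanity-checked with simple arithmetic. A minimal sketch using the per-question averages quoted above (543.5 thinking tokens for reasoning models, 37.7 tokens for concise models); the function and dictionary names are illustrative, not from the study:

```python
# Per-question token averages reported in the study; the ratio gives a
# rough sense of the extra computation (and thus energy) per answer.
AVG_TOKENS = {"reasoning": 543.5, "concise": 37.7}

def relative_token_overhead(model_a: str, model_b: str) -> float:
    """Ratio of average tokens generated per question by two model types."""
    return AVG_TOKENS[model_a] / AVG_TOKENS[model_b]

ratio = relative_token_overhead("reasoning", "concise")
print(f"Reasoning models generate ~{ratio:.1f}x more tokens per question")
# → Reasoning models generate ~14.4x more tokens per question
```

Note that the 50x emissions gap quoted in the study exceeds this ~14x token gap, since emissions also depend on model size and per-token compute, not token count alone.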
The more accurate the models were, the more carbon emissions they produced, the study found. The reasoning model Cogito, which has 70 billion parameters, reached up to 84.9% accuracy—but it also produced three times more CO2 emissions than similarly sized models that generate more concise answers.
'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,' said Dauner. 'None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly.' CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.
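The trade-off Dauner describes can be pictured as a filter over (accuracy, emissions) pairs. In the sketch below, Cogito's 84.9% accuracy is from the article; every other model name and all CO2 figures are invented placeholders, since the study's full per-model results aren't reproduced here:

```python
# Each entry: (model name, accuracy % on the 1,000 questions,
# total grams of CO2 equivalent emitted answering them).
# Only Cogito's accuracy comes from the article; the rest is hypothetical.
models = [
    ("cogito-70b",      84.9, 1341),  # CO2 figure hypothetical
    ("concise-model-a", 62.0,  120),  # entirely hypothetical
    ("concise-model-b", 78.5,  473),  # entirely hypothetical
]

def low_emission_models(entries, co2_cap=500):
    """Models that kept total emissions under the cap (in g CO2e)."""
    return [m for m in entries if m[2] < co2_cap]

# Consistent with the quoted finding: none of the sub-500 g models
# clears 80% accuracy.
for name, acc, co2 in low_emission_models(models):
    print(f"{name}: {acc}% accuracy, {co2} g CO2e")
```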
Another factor was subject matter. Questions that required detailed or complex reasoning, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, according to the study.
There are some caveats, though. Emissions depend heavily on how local energy grids are structured and on which models are examined, so it's unclear how generalizable these findings are. Still, the study authors said they hope the work will encourage people to be 'selective and thoughtful' about their LLM use.
'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner said in a statement.

Related Articles

Who Are Those Fantastic SuperAgers And Why Do They Stay Healthy?

Forbes

Everyone wants to stay fully independent as we age. A few people over age 80 do, and they don't suffer from memory loss either. They are called 'super agers' because they do not suffer the same mental and physical declines as almost everyone else. Researchers have studied them for decades, and we do have some answers about what makes them different from everyone else. Maybe we all wish we could be super agers ourselves.

What Is A Super Ager?

The term describes someone over 80 with an exceptional memory, one at least as good as the memories of people 20 to 30 years younger. They don't get Alzheimer's disease. They think clearly and participate in many things most folks their age cannot do. As reported in AARP, those who seem to defy the normal changes of aging have a few things in common. Here are some of them, also reflected in research from other reports elsewhere in the U.S.

1. Thicker brains

We don't know whether it is hereditary or not, but in this rare group the part of the brain that governs thinking, memory and decision-making is thicker, sometimes thicker even than it is in most people in their 50s and 60s. It resists shrinking.

2. Supersize memory cells in their brains

These larger-than-average nerve cells resist the protein deposits associated with cognitive decline and Alzheimer's disease. The protein plaques and tangles may be present in the brains of super agers, as seen in those who donated their brains to studies after death, but they do not seem to cause damage.

3. More 'social intelligence' brain cells

These particular kinds of brain cells have been linked to social intelligence and awareness. They help facilitate rapid communication across the brain, providing an enhanced ability to navigate the outside world. As I see it, that would suggest greater levels of independence in aging, an elusive thing most want but few over 80 actually achieve.
What If You Aren't Born With That Kind Of Brain?

There is plenty of research on preventing dementia to tell us that lifestyle is perhaps the most important feature of aging well, whether you are destined to be a super ager or not. Most super agers are very active and maintain healthy habits. One of my favorite examples is the late Dr. Ruth Westheimer, the famous sex therapist. She was a TV personality, the author of numerous books and, as she used to say, a nonstop talker. Her belief was that talking exercises the brain. OK, we know some nonstop talkers who are definitely not exercising their brains, so that's not all there is to keeping one's brain active. But being silent and withdrawn is certainly not helpful.

A healthy lifestyle clearly includes regular exercise, even walking; it doesn't take running marathons. And food choices are part of this as well. More recent data shows a link between consuming ultra-processed foods (sweets, packaged snacks, etc.) and cognitive decline. The better choices, as we are often told, include fish, vegetables, fruits and whole grains, while avoiding junk food and excess. Other resources emphasize the importance of social connections in aging to help ward off dementia, getting enough sleep, and managing stress. Easier said than done! And one thing the super agers who have been studied also have in common is an unwavering willingness and ability to constantly challenge themselves.

The Takeaway

Not everyone will have a super ager's remarkably different brain structure. But everyone can look at their good health habits and aim to do what they do. They stay active in many ways and socially engaged, and everyone can make that an achievable goal.

Amazon-Backed (AMZN) Anthropic Just Announced a Big Upgrade

Business Insider

Amazon-backed (AMZN) Anthropic is making a big upgrade to its Claude Sonnet 4 AI model by increasing the context window for enterprise API customers to one million tokens. For context, that is enough to handle about 750,000 words and 75,000 lines of code. This is a major jump from the previous 200,000-token limit and more than double OpenAI's GPT-5, which offers 400,000 tokens. The new long-context feature will also be available through Anthropic's cloud partners, including Amazon's Bedrock and Google (GOOGL) Cloud's Vertex AI.

This comes as OpenAI's new GPT-5 model offers lower pricing and high coding performance. Nevertheless, Anthropic's product lead, Brad Abrams, says that he is pleased with the firm's API growth and expects the longer context to bring 'a lot of benefit' to coding platforms. Interestingly, unlike OpenAI, which earns most of its revenue from ChatGPT subscriptions, Anthropic focuses on selling AI through APIs, which makes developer-focused platforms critical to its business. The company also recently updated its largest model, Claude Opus 4.1, to improve its coding capabilities.

It is worth noting that a larger context window can make AI far better at tasks like software engineering, where seeing the entire project leads to more accurate results. Abrams said it also improves Claude's ability to work on 'agentic' coding tasks, complex problems that require the AI to keep track of its steps over minutes or hours. In addition, for API users sending prompts over 200,000 tokens, costs will rise to $6 per million input tokens and $22.50 per million output tokens, up from $3 and $15, respectively.

What Is the Price Target for AMZN Stock?
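The pricing change is easy to work through with the figures quoted here. A hypothetical cost estimator, assuming the article's rates ($6/$22.50 per million input/output tokens above 200,000 input tokens, $3/$15 below); this is an illustrative sketch, not Anthropic's official billing logic:

```python
# Prompts over this many input tokens trigger long-context pricing,
# per the figures quoted in the article.
LONG_CONTEXT_THRESHOLD = 200_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one API call under the quoted rates."""
    if input_tokens > LONG_CONTEXT_THRESHOLD:
        in_rate, out_rate = 6.00, 22.50   # long-context pricing
    else:
        in_rate, out_rate = 3.00, 15.00   # standard pricing
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

# Filling the full 1M-token window with a modest 4K-token reply:
print(f"${estimate_cost(1_000_000, 4_000):.2f}")  # → $6.09
```

So a single maxed-out prompt costs a few dollars in input alone, which is why the long-context tier is aimed at enterprise coding workloads rather than casual use.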
Turning to Wall Street, analysts have a Strong Buy consensus rating on Amazon stock based on 43 Buys and one Hold assigned in the past three months. Furthermore, the average AMZN stock price target of $264.40 per share implies 19.1% upside potential from current levels.

Apple says the App Store is 'fair and free of bias' in response to Musk's legal threats

Engadget

Apple has denied Elon Musk's accusation that it's favoring OpenAI in its App Store rankings and making it impossible for other AI companies to reach the top. In a statement sent to Bloomberg, Apple said the App Store is "designed to be fair and free of bias." The company's spokesperson explained that the App Store features "thousands of apps through charts, algorithmic recommendations and curated lists selected by experts using objective criteria." They added: "Our goal is to offer safe discovery for users and valuable opportunities for developers, collaborating with many to increase app visibility in rapidly evolving categories."

xAI founder Elon Musk accused Apple of "unequivocal antitrust violation" by favoring OpenAI in a post on X, warning that his company "will take immediate legal action." In a separate post, he asked Apple why it "[refuses] to put either X or Grok in [its] 'Must Have' section." X, he said, is "the #1 news app in the world," while Grok is ranked number five among all apps. "Are you playing politics? What gives?" he continued. Musk didn't provide evidence to back his accusations. It's also worth noting that Chinese AI app DeepSeek reached the top of Apple's free app rankings back in January, overtaking even ChatGPT. And as X's own Community Notes, added to Musk's post hours after it went up, pointed out, Perplexity reached the top of overall rankings in India's App Store back in July. Both apps reached the top of their respective lists well after Apple and OpenAI announced their partnership last year.

OpenAI CEO Sam Altman responded to Musk's accusation as well. He said it's a "remarkable claim," given that he has heard allegations that Musk manipulates "X to benefit himself and his own companies and harm his competitors and people he doesn't like." In response, Musk posted: "Scam Altman lies as easily as he breathes."
