
How your AI prompts could harm the environment
Sustainability
Climate change
Economy
Sign up for CNN's Life, But Greener newsletter. Our limited newsletter series guides you on how to minimize your personal role in the climate crisis — and reduce your eco-anxiety.
Whether it's answering work emails or drafting wedding vows, generative artificial intelligence tools have become a trusty copilot in many people's lives. But a growing body of research shows that for every problem AI solves, hidden environmental costs are racking up.
Each word in an AI prompt is broken down into smaller chunks called tokens, each represented by a number known as a 'token ID,' and sent to massive data centers — some larger than football fields — powered by coal or natural gas plants. There, stacks of large computers generate responses through dozens of rapid calculations.
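For the curious, here is a minimal sketch of that first step, turning a prompt into token IDs, using the open-source tiktoken library; the 'cl100k_base' encoding and the sample prompt are illustrative choices, not details from the study.

```python
# Minimal illustration of how a prompt becomes token IDs before it is sent off
# to a data center. The encoding name and prompt are illustrative assumptions.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Please draft a short, polite reply to my colleague's email."
token_ids = encoding.encode(prompt)

print(token_ids)                   # a list of integers, one or more per word
print(len(token_ids))              # how many tokens the model has to process
print(encoding.decode(token_ids))  # round-trips back to the original text
```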
The whole process can take up to 10 times more energy to complete than a regular Google search, according to a frequently cited estimate by the Electric Power Research Institute.
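To make that comparison concrete, here is a back-of-the-envelope calculation; the per-search figure of roughly 0.3 watt-hours and the usage level are assumptions chosen for illustration, with the AI figure simply applying the 'up to 10 times' estimate above.

```python
# Back-of-the-envelope comparison. Both the per-search energy figure and the
# number of daily prompts are illustrative assumptions, not measured values.
energy_per_search_wh = 0.3                       # assumed energy per web search, in watt-hours
energy_per_ai_prompt_wh = energy_per_search_wh * 10  # the "up to 10 times more" estimate

prompts_per_day = 20  # an illustrative usage level
print(f"Searches:   ~{prompts_per_day * energy_per_search_wh:.0f} Wh per day")
print(f"AI prompts: ~{prompts_per_day * energy_per_ai_prompt_wh:.0f} Wh per day")
```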
So, for each prompt you give AI, what's the damage? To find out, researchers in Germany tested 14 large language model (LLM) AI systems by asking them both free-response and multiple-choice questions. Complex questions produced up to six times more carbon dioxide emissions than questions with concise answers.
In addition, 'smarter' LLMs with more reasoning abilities produced up to 50 times more carbon emissions than simpler systems to answer the same question, the study reported.
'This shows us the tradeoff between energy consumption and the accuracy of model performance,' said Maximilian Dauner, a doctoral student at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study published Wednesday.
Typically, these smarter, more energy-intensive LLMs have tens of billions more parameters — the learned numerical weights and biases used to process token IDs — than smaller, more concise models.
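A rough sense of why parameter count matters: a common rule of thumb in the machine learning literature (an assumption here, not a figure from the study) is that generating a single token costs on the order of two floating-point operations per parameter, so the compute per token grows roughly in proportion to model size.

```python
# Rule-of-thumb sketch: compute per generated token scales with parameter count.
# The ~2 FLOPs-per-parameter figure and the model sizes are illustrative assumptions.
def flops_per_token(num_parameters: float) -> float:
    return 2.0 * num_parameters

small_model_params = 7e9    # a hypothetical 7-billion-parameter model
large_model_params = 70e9   # a hypothetical 70-billion-parameter "smarter" model

ratio = flops_per_token(large_model_params) / flops_per_token(small_model_params)
print(f"The larger model does ~{ratio:.0f}x more work for every token it generates")
```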
'You can think of it like a neural network in the brain. The more neuron connections, the more thinking you can do to answer a question,' Dauner said.
Complex questions require more energy in part because of the lengthy explanations many AI models are trained to provide, Dauner said. If you ask an AI chatbot to solve an algebra question for you, it may take you through the steps it took to find the answer, he said.
'AI expends a lot of energy being polite, especially if the user is polite, saying 'please' and 'thank you,'' Dauner explained. 'But this just makes their responses even longer, expending more energy to generate each word.'
For this reason, Dauner suggests users be more straightforward when communicating with AI models. Specify the length of the answer you want and limit it to one or two sentences, or say you don't need an explanation at all.
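When a model is called through an API rather than a chat window, the same advice can be enforced directly by capping the response length. The sketch below uses the OpenAI Python SDK as one example; the model name is an illustrative placeholder, not a recommendation from the researchers.

```python
# Capping response length when calling a model via an API, one way to apply
# Dauner's advice. The model name is a placeholder assumption.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "In one sentence, no explanation: what causes seasons on Earth?",
    }],
    max_tokens=40,  # hard limit on generated tokens, which limits the energy spent on the reply
)
print(response.choices[0].message.content)
```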
Most important, Dauner's study highlights that not all AI models are created equal, said Sasha Luccioni, the climate lead at AI company Hugging Face, in an email. Users looking to reduce their carbon footprint can be more intentional about which model they choose for which task.
'Task-specific models are often much smaller and more efficient, and just as good at any context-specific task,' Luccioni explained.
If you are a software engineer who solves complex coding problems every day, an AI model suited for coding may be necessary. But for the average high school student who wants help with homework, relying on powerful AI tools is like using a nuclear-powered digital calculator.
Even within the same AI company, different model offerings can vary in their reasoning power, so research what capabilities best suit your needs, Dauner said.
When possible, Luccioni recommends going back to basic sources — online encyclopedias and phone calculators — to accomplish simple tasks.
Putting a number on the environmental impact of AI has proved challenging.
The study noted that energy consumption can vary based on the user's proximity to local energy grids and the hardware used to run AI models. That's partly why the researchers chose to represent carbon emissions within a range, Dauner said.
Furthermore, many AI companies don't share information about their energy consumption — or details like server size or optimization techniques that could help researchers estimate energy use — said Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, who studies AI's water consumption.
'You can't really say AI consumes this much energy or water on average — that's just not meaningful. We need to look at each individual model and then (examine what it uses) for each task,' Ren said.
One way AI companies could be more transparent is by disclosing the amount of carbon emissions associated with each prompt, Dauner suggested.
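Such a disclosure could be as simple as multiplying the energy a prompt consumed by the carbon intensity of the local grid; in the sketch below, both numbers are illustrative assumptions rather than values reported by any provider.

```python
# Illustrative per-prompt carbon estimate. Both inputs are assumptions,
# not disclosed figures from any AI company.
energy_per_prompt_kwh = 0.003    # assume ~3 watt-hours of electricity per prompt
grid_carbon_g_per_kwh = 400      # assume ~400 g CO2e emitted per kWh of electricity

co2_grams = energy_per_prompt_kwh * grid_carbon_g_per_kwh
print(f"~{co2_grams:.1f} g CO2e per prompt under these assumptions")
```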
'Generally, if people were more informed about the average (environmental) cost of generating a response, people would maybe start thinking, 'Is it really necessary to turn myself into an action figure just because I'm bored?' Or 'do I have to tell ChatGPT jokes because I have nothing to do?'' Dauner said.
Additionally, as more companies push to add generative AI tools to their systems, people may not have much choice in how or when they use the technology, Luccioni said.
'We don't need generative AI in web search. Nobody asked for AI chatbots in (messaging apps) or on social media,' Luccioni said. 'This race to stuff them into every single existing technology is truly infuriating, since it comes with real consequences to our planet.'
With less information available about AI's resource usage, consumers have less choice, Ren said, adding that regulatory pressure for more transparency is unlikely to come to the United States anytime soon. Instead, the best hope for more energy-efficient AI may lie in the cost savings of using less energy.
'Overall, I'm still positive about (the future). There are many software engineers working hard to improve resource efficiency,' Ren said. 'Other industries consume a lot of energy too, but it's not a reason to suggest AI's environmental impact is not a problem. We should definitely pay attention.'
Related Articles


The Verge
UK government suggests deleting files to save water
Can deleting old emails and photos help the UK tackle ongoing drought this year? That's the hope, according to recommendations for the public included in a press release today from the National Drought Group. There are far bigger steps companies and policymakers can take to conserve water, of course, but drought has gotten bad enough for officials to urge the average person to consider how their habits might help or hurt the situation. And the proliferation of data centers is raising concerns about how much water it takes to power servers and keep them cool.

'Simple, everyday choices – such as turning off a tap or deleting old emails – also really helps the collective effort to reduce demand and help preserve the health of our rivers and wildlife,' Helen Wakeham, Environment Agency Director of Water, said in the press release.

The Environment Agency didn't immediately respond to an inquiry from The Verge about how much water it thought deleting files might save, nor how much water data centers that store files or train AI use in the UK's drought-affected areas. A small data center has been estimated to use upwards of 25 million liters of water per year if it relies on old-school cooling methods that allow water to evaporate.

To be sure, tech companies have worked for years to find ways to minimize their water use by developing new cooling methods. Microsoft, for example, has tried placing a data center at the bottom of the sea and submerging servers in fluorocarbon-based liquid baths. Generating electricity for energy-hungry data centers also uses up more water, since fossil fuel power plants and nuclear reactors need water for cooling and to turn turbines using steam, an issue that transitioning to more renewable energy can help to address.

August ushered in the UK's fourth heatwave of the summer, exacerbating what was already the driest six months leading to July since 1976. Five regions of the UK have officially declared drought, according to the release, while another six areas are in the midst of 'prolonged dry weather.'

The National Drought Group says pleas to residents to save water have made a difference. Water demand dropped by 20 percent from a July 11th peak in the Severn Trent area after 'water-saving messaging,' according to the release. Plugging leaks is another major concern: fixing a leaking toilet can prevent 200 to 400 liters of water from being wasted each day.


Fast Company
AI data wars push Reddit to block the Wayback Machine
As the battle to train artificial intelligence models becomes more intense and Reddit's rich content library becomes more valuable, the social media giant has taken steps to block the Internet Archive from indexing its pages. While the Wayback Machine has historically recorded all Reddit pages, comments and user profiles, the company has put limits on what the system can scrape. Moving forward, it will only be permitted to archive the site's home page, which shows popular posts and news headlines of the day, but no user comments or post history.

The action comes as Reddit has become increasingly protective of the content on its site. Reddit, in May, announced it had struck a deal with OpenAI to use its content to help train ChatGPT. It previously announced a similar deal with Google – and blocked other search engines from crawling the site after that deal unless they struck financial agreements with Reddit as well. AI companies that are less well-financed, however, have reportedly been using the Internet Archive to scrape the site's previous posts and train their large language models on that content.

Reddit spokesperson Tim Rathschmidt, in a statement, told Fast Company: 'Internet Archive provides a service to the open web, but we've been made aware of instances where AI companies violate platform policies, including ours, and scrape data from the Wayback Machine. Until they're able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content) we're limiting some of their access to Reddit data to protect redditors.'

Reddit shares were higher Tuesday, gaining more than 3% in midday trading, hitting $228. Year to date, the company's stock is up 38%.

Reddit's legal battles meet its AI ambitions

In June, Reddit sued Anthropic, claiming the AI company behind the Claude chatbot was scraping the Reddit site. 'In July 2024, Anthropic claimed, in response to Reddit's public protests regarding Anthropic's misuse of Reddit content, that it had blocked its bots from accessing Reddit. Not so,' the suit reads. 'Anthropic's bots continued to hit Reddit's servers over one hundred thousand times. … Unlike its competitors, Anthropic has refused to agree to respect Reddit users' basic privacy rights, including removing deleted posts from its systems.' (Anthropic has denied the accusations.)

Reddit's latest defensive act against AI scraping comes as the company is focusing more on its own AI initiatives. Last December, the company rolled out Reddit Answers, an AI-powered tool that summarizes conversations and posts on the site, letting users bypass traditional search engines. That AI product is now used by six million people, the company said in its second-quarter earnings announcement, up from one million in the first quarter.

Reddit is planning to use that momentum, as well as the significant use of its own internal search engine (which the company says serves 70 million users per week), to challenge Google and other popular search tools.

'The world and the internet are rapidly changing, and I believe Reddit has a once-in-a-generation opportunity,' CEO Steve Huffman said on the company's earnings call. 'We're unifying [search and Reddit Answers] into a single search experience. We're going to bring that front and center in the app. So, whether you're a new user opening the app for the first time or returning user opening the app, that search box will be present immediately for users who open the app looking for something specific.'
While Reddit's efforts in the search space will include AI components, the company said it hopes to differentiate itself from the growing number of AI search engines by highlighting the human component.


Harvard Business Review
Accelerating Data-Driven Transformation in the Hybrid Cloud Webinar
Featuring Jean-Philippe Player, Field CTO, Cloudera; Anu Mohan, Vice President, Data Strategy, Cloudera; Jaidev Karthickeyan, Sr. Director & Global Head, Value Advisory, Cloudera; and Alex Clemente, Harvard Business Review Analytic Services (HBR-AS) Many companies are trying to become more data-driven, so they can make better, faster decisions and boost their profitability. And many also want to integrate AI into all aspects of their operations and strategy. But organizations tend to prioritize data production over data consumption, putting more resources into building and managing AI data pipelines than educating employees on how to apply insights from data and AI into their decision-making and daily workstream. The goals of becoming more data-driven and incorporating AI into their business operations and strategy require some degree of data transformation.