Thinking AI models like ChatGPT emit '50 times more CO2' but still give wrong answers

Daily Record · 9 hours ago

The more an AI service thinks, the more carbon it emits.
Artificial intelligence is a tool used by millions of people the world over. AI refers to computer systems performing tasks that typically require human intelligence, such as learning, problem-solving, and decision-making.
From homeowners asking ChatGPT for renovation advice, to the software revealing what Scottish homes could look like in the next 25 years, engaging with AI can be helpful and eye-opening, but can also come with serious risks.

A recent study from MIT found that using ChatGPT for essay writing can negatively impact cognitive engagement and memory recall, compared with writing unaided.

But it's not just people AI can affect; it can also damage the environment. Another study, analysing different types of AI, found a marked difference in CO2 output depending on the model.
A query typed into a large language model (LLM), such as ChatGPT, requires energy and produces CO2 emissions. How much depends on the model, the subject matter, and the user.
Researchers compared 14 models and found that complex answers cause more emissions than simple answers. Meanwhile, models that provide more accurate answers also produce more emissions.
How does asking an AI a question produce CO2 emissions? No matter which question we ask, the model will come up with an answer, the researchers in Germany explained.
To produce this information - regardless of whether that answer is correct or not - the model uses tokens. Tokens are words or parts of words that are converted into a string of numbers that can be processed by the LLM.
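
For readers curious what that conversion looks like, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer; the study does not name a specific tokenizer, so the library and encoding here are illustrative assumptions.

```python
# Illustrative only: the study does not specify a tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several recent OpenAI models

text = "Thinking AI models emit more CO2."
token_ids = enc.encode(text)                 # text -> integer token IDs
print(token_ids)
print([enc.decode([t]) for t in token_ids])  # each ID maps back to a word or word-piece
```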

This conversion, as well as other computing processes, produces CO2 emissions. Many users, however, are unaware of the substantial carbon footprint associated with these technologies.
With that in mind, researchers measured and compared CO2 emissions of different, already trained, LLMs using a set of standardised questions.
"The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach," explained first author Maximilian Dauner.

"Explicit reasoning processes significantly drive up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models."
'Thinking' AI causes the most emissions. Reasoning models, on average, created 543.5 'thinking' tokens per question, whereas concise models required just 37.7 tokens per question.

Thinking tokens are additional tokens that reasoning LLMs generate before producing an answer. A higher token footprint always means higher CO2 emissions.
It doesn't, however, mean the resulting answers are more correct. This is because elaborate detail does not always equal correctness.
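
A quick back-of-envelope calculation with the study's reported averages shows the scale of the token gap; the per-question counts are from the article, while the final emissions ratio also depends on model size and answer length.

```python
# Average tokens per question, as reported in the study.
REASONING_TOKENS = 543.5  # 'thinking' tokens generated by reasoning models
CONCISE_TOKENS = 37.7     # tokens required by concise models

ratio = REASONING_TOKENS / CONCISE_TOKENS
print(f"Reasoning models generate ~{ratio:.1f}x more tokens per question")
# ~14.4x more tokens before the answer itself; combined with longer answers
# and larger models, this is how an up-to-50x emissions gap can arise.
```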

Subject matter also made a significant difference to CO2 emissions. Questions requiring lengthy reasoning, such as abstract algebra or philosophy, led to up to six times higher emissions than questions on more straightforward subjects, such as high school history.
The most accurate model was the Cogito model with 70 billion parameters, which reached 84.9 per cent accuracy. It produced three times more CO2 emissions than similarly sized models that generated concise answers.
All is not lost, though. If you are a tech enthusiast but also climate-conscious, you can, to an extent, control the CO2 emissions caused by AI by adjusting how you use the technology, the researchers said.

"Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power," Dauner pointed out.
Choice of model can make a big difference in CO2 emissions. For example, having DeepSeek R1 answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York.
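
To put that equivalence into per-question terms: assuming roughly one tonne of CO2 per passenger for a London-New York round trip (a commonly cited estimate, not a figure from the study), the arithmetic works out as follows.

```python
# FLIGHT_KG is an assumed figure, not from the study.
FLIGHT_KG = 1000        # kg CO2 per passenger, round-trip London-New York
QUESTIONS = 600_000     # DeepSeek R1 questions quoted in the article

per_question_g = FLIGHT_KG * 1000 / QUESTIONS
print(f"~{per_question_g:.1f} g CO2 per question")  # ~1.7 g per question
```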
Meanwhile, OpenAI's ChatGPT consumes 500 ml of water for every five to 50 prompts it answers, according to Shaolei Ren, a researcher at the University of California, Riverside.
"If users know the exact CO2 cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective about when and how they use these technologies," Dauner said.

Related Articles

Small business AI use is lagging, but one firm is channeling Sherlock Holmes and knocking out 'grunt work'

NBC News · 14 minutes ago

Chris Schwegmann is getting creative with how artificial intelligence is being used in law. At Dallas-based boutique law firm Lynn Pinker Hurst & Schwegmann, he sometimes asks AI to channel Supreme Court Chief Justice John Roberts or Sherlock Holmes.

Schwegmann said after uploading opposing counsel's briefs, he'll ask legal technology platform Harvey to assume the role of a legal mind like Roberts to see how the chief justice would think about a particular problem. Other times, he will turn to a fictional character like Holmes, unlocking a different frame of mind.

'Harvey, ChatGPT ... they know who those folks are, and can approach the problem from that mindset,' he said. 'Once we as lawyers get outside those lanes, when we are thinking more creatively involving other branches of science, literature, history, mythology, that sometimes generates some of the most interesting ideas that can then be put, using proper legal judgement, in a framework that works to solve a legal problem.'

It's just one example of how smaller businesses are putting AI to work to punch above their weight, and new data shows there's an opportunity for much more implementation in the future. Only 24% of owners in the recent Small Business and Technology Survey from the National Federation of Independent Business said they are using AI, including ChatGPT, Canva and Copilot, in some capacity. Notably, 98% of those using it said AI has so far not impacted the number of employees at their firms.

At his trial litigation firm of 50 attorneys, Schwegmann said AI is resolving work in days that would sometimes take weeks, and said the technology isn't replacing workers at the firm. It has freed up associate lawyers from doing 'grunt work,' he said, and also means more senior-level partners have the time to mentor younger attorneys because everyone has more time.

The NFIB survey found AI use varied based on the size of the small business. For firms with employees in the single digits, uptake was at 21%. At firms with fifty or more workers, AI implementation was at nearly half of all respondents. 'The data show clearly that uptake for the smallest businesses lags substantially behind their larger competitors. ... With a little attention from all the relevant stakeholders, a more equal playing field is possible,' the NFIB report said.

For future AI use, 63% of all small employers surveyed said the utilization of the technology in their industry in the next five years will be important to some degree; 12% said it will be extremely important and 15% said it will not be important at all. Some of the most common uses in the survey were for communications, marketing and advertising, predictive analysis and customer service.

'We still have the need for the independent legal judgment of our associate lawyers and our partners — it hasn't replaced them, it just augments their thinking,' Schwegmann said. 'It makes them more creative and frees their time to do what lawyers do best, which is strategic thought and creative problem solving.'

The NFIB data echoes a recent survey from Reimagine Main Street, a project of Public Private Strategies Institute in partnership with PayPal. Reimagine surveyed nearly 1,000 small businesses with annual revenue between $25,000 and $50,000 and also found that a quarter had already started integrating AI into daily workflows.

Schwegmann said at his firm, AI is helping to even the playing field.
'One of the things Harvey lets us do is review, understand and incorporate and respond much faster than we would prior to the use of these kinds of AI tools,' he said. 'No longer does a party have an advantage because they can paper you to death.'

Federal judge rules copyrighted books are fair use for AI training

NBC News · 28 minutes ago

A federal judge has sided with Anthropic in a major copyright ruling, declaring that artificial intelligence developers can train on published books without authors' consent. The decision, filed Monday in the U.S. District Court for the Northern District of California, sets a precedent that training AI systems on copyrighted works constitutes fair use.

Though it doesn't guarantee other courts will follow, Judge William Alsup's ruling marks the first of dozens of ongoing copyright lawsuits to give an answer on fair use in the context of generative AI. It's a question that's been raised by creatives across various industries for years since generative AI tools exploded into the mainstream, allowing users to easily produce art from models trained on copyrighted work — often without the human creator's knowledge or permission.

AI companies have been hit with a slew of copyright lawsuits from media companies, music labels and authors since 2023. Artists have signed multiple open letters urging government officials and AI developers to constrain the unauthorized use of copyrighted works. In recent years, companies have also increasingly inked licensing deals with AI developers to dictate terms of use for their artists' works.

Alsup on Monday ruled on a lawsuit filed last August by three authors — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — who claimed that Anthropic ignored copyright protections when it pirated millions of books and digitized purchased books to feed into its large language models, which helped train them to generate human-like text responses.

'The copies used to train specific LLMs were justified as a fair use,' Alsup wrote in the ruling. 'Every factor but the nature of the copyrighted work favors this result. The technology at issue was among the most transformative many of us will see in our lifetimes.'

His decision stated that Anthropic's use of the books to train its models, including versions of its flagship AI model Claude, was 'exceedingly transformative' enough to fall under fair use. Fair use, as defined by the Copyright Act, takes into account four factors: the purpose of the use, what kind of copyrighted work is used (creative works get stronger protection than factual works), how much of the work was used, and whether the use hurts the market value of the original work.

'We are pleased that the Court recognized that using 'works to train LLMs was transformative — spectacularly so,'' Anthropic said in a statement, quoting the ruling. 'Consistent with copyright's purpose in enabling creativity and fostering scientific progress, 'Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.''

Bartz and Johnson did not immediately respond to requests for comment. Graeber declined to comment.

Alsup noted, however, that all of the authors' works contained 'expressive elements' earning them stronger copyright protection, a factor that points against fair use, although not enough to sway the overall ruling. He also added that while making digital copies of purchased books was fair use, downloading pirated copies for free did not constitute fair use.

But aside from the millions of pirated copies, Alsup wrote, copying entire works to train AI models was 'especially reasonable' because the models didn't reproduce those copies for public access, and doing so 'did not and will not displace demand' for the original books.
His ruling stated that although AI developers can legally train AI models on copyrighted works without permission, they should obtain those works through legitimate means that don't involve pirating or other forms of theft. Despite siding with the AI company on fair use, Alsup wrote that Anthropic will still face trial for the pirated copies it used to create its massive central library of books used to train AI. 'That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft,' Alsup wrote, 'but it may affect the extent of statutory damages.'

Nvidia CEO Huang sells $15 million worth of stock, first sale of $873 million plan

NBC News · 2 hours ago

Nvidia CEO Jensen Huang sold 100,000 shares of the chipmaker's stock on Friday and Monday, according to a filing with the U.S. Securities and Exchange Commission. The sales are worth nearly $15 million at Tuesday's opening price.

The transactions are the first sales under Huang's plan to sell as many as 600,000 shares of Nvidia through the end of 2025. The plan, announced in March, would be worth $873 million at Tuesday's opening price.

The Nvidia founder still owns more than 800 million Nvidia shares, according to Monday's SEC filing. Huang has a net worth of about $126 billion, ranking him 12th on the Bloomberg Billionaires Index. The 62-year-old chief executive sold about $700 million in Nvidia shares last year under a prearranged plan, too.

Nvidia stock is up more than 800% since December 2022, shortly after OpenAI's ChatGPT was first released to the public. That launch drew attention to Nvidia's graphics processing units, or GPUs, which were needed to develop and power the artificial intelligence service. The company's chips remain in high demand, holding the majority of the AI chip market, and Nvidia has introduced two subsequent generations of its AI GPU technology.

Nvidia continues to grow. Its stock is up 9% this year, even as the company faces export control issues that could limit foreign markets for its AI chips. In May, the company reported first-quarter earnings that showed revenue growing 69% on an annual basis to $44 billion during the quarter.
