
Latest news with #Cogito

AI chatbots using reason emit more carbon than those responding concisely, study finds

Economic Times

8 hours ago

  • Science
  • Economic Times

AI chatbots using reason emit more carbon than those responding concisely, study finds

A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history.

"The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany, said. "We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models," Dauner added.

The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots), including DeepSeek and Cogito, process information before responding to 1,000 benchmark questions -- 500 multiple-choice and 500 subjective. Each model responded to 100 questions on each of the five subjects chosen for the analysis -- philosophy, high school world history, international law, abstract algebra, and high school mathematics.

"Zero-token reasoning traces appear when no intermediate text is needed (e.g. Cogito 70B reasoning on certain history items), whereas the maximum reasoning burden (6,716 tokens) is observed for the Deepseek R1 7B model on an abstract algebra prompt," the authors wrote. Tokens are virtual objects created by conversational AI when processing a user's prompt in natural language. More tokens lead to increased carbon dioxide emissions.

Chatbots equipped with an ability to reason, or 'reasoning models', produced 543.5 'thinking' tokens per question, whereas concise models -- producing one-word answers -- required just 37.7 tokens per question, the researchers found. Thinking tokens are additional ones that reasoning models generate before producing an answer, they explained. However, more thinking tokens do not necessarily guarantee correct responses; as the team noted, elaborate detail is not always essential for correctness.

Dauner said, "None of the models that kept emissions below 500 grams of CO₂ equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly." "Currently, we see a clear accuracy-sustainability trade-off inherent in (large-language model) technologies," the author added.

The most accurate performance was seen in the reasoning model Cogito, with nearly 85 per cent accuracy in responses, whilst producing three times more carbon dioxide emissions than similar-sized models generating concise answers.

"In conclusion, while larger and reasoning-enhanced models significantly outperform smaller counterparts in terms of accuracy, this improvement comes with steep increases in emissions and computational demand," the authors wrote. "Optimising reasoning efficiency and response brevity, particularly for challenging subjects like abstract algebra, is crucial for advancing more sustainable and environmentally conscious artificial intelligence technologies," they wrote.
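To make the token figures above concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes emissions scale roughly linearly with the number of generated tokens; the per-token energy and grid carbon-intensity constants are illustrative placeholders, not values from the study, and only the 543.5 and 37.7 tokens-per-question figures come from the article.

```python
# Back-of-the-envelope comparison of a reasoning model vs. a concise model,
# using the per-question token counts reported in the article.
# ENERGY_PER_TOKEN_KWH and GRID_INTENSITY_G_PER_KWH are assumed placeholder
# values for illustration only; they are NOT taken from the study.

ENERGY_PER_TOKEN_KWH = 2e-6      # assumed energy per generated token (kWh)
GRID_INTENSITY_G_PER_KWH = 480   # assumed grid carbon intensity (g CO2e per kWh)

def co2e_grams(tokens_per_question: float, questions: int = 1000) -> float:
    """Estimate grams of CO2 equivalent for answering `questions` prompts,
    assuming emissions grow linearly with generated tokens."""
    return tokens_per_question * questions * ENERGY_PER_TOKEN_KWH * GRID_INTENSITY_G_PER_KWH

reasoning = co2e_grams(543.5)  # 'thinking' tokens per question (from the article)
concise = co2e_grams(37.7)     # tokens per question for concise models (from the article)

print(f"reasoning model: {reasoning:.1f} g CO2e over 1,000 questions")
print(f"concise model:   {concise:.1f} g CO2e over 1,000 questions")
print(f"ratio:           {reasoning / concise:.1f}x")  # about 14x from thinking tokens alone
```

With these placeholder constants, the gap attributable to thinking tokens alone is roughly 14-fold; the up-to-50-fold figure quoted in the study also reflects differences in model size, answer length and hardware, which this sketch deliberately ignores.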

AI chatbots that reason emit more carbon than ones with simple reply: Study

Business Standard

8 hours ago

  • Science
  • Business Standard

AI chatbots that reason emit more carbon than ones with simple reply: Study

A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history.

"The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany, said. "We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models," Dauner added.

The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots), including DeepSeek and Cogito, process information before responding to 1,000 benchmark questions -- 500 multiple-choice and 500 subjective. Each model responded to 100 questions on each of the five subjects chosen for the analysis -- philosophy, high school world history, international law, abstract algebra, and high school mathematics.

"Zero-token reasoning traces appear when no intermediate text is needed (e.g. Cogito 70B reasoning on certain history items), whereas the maximum reasoning burden (6,716 tokens) is observed for the Deepseek R1 7B model on an abstract algebra prompt," the authors wrote. Tokens are virtual objects created by conversational AI when processing a user's prompt in natural language. More tokens lead to increased carbon dioxide emissions.

Chatbots equipped with an ability to reason, or 'reasoning models', produced 543.5 'thinking' tokens per question, whereas concise models -- producing one-word answers -- required just 37.7 tokens per question, the researchers found. Thinking tokens are additional ones that reasoning models generate before producing an answer, they explained. However, more thinking tokens do not necessarily guarantee correct responses; as the team noted, elaborate detail is not always essential for correctness.

Dauner said, "None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly." "Currently, we see a clear accuracy-sustainability trade-off inherent in (large-language model) technologies," the author added.

The most accurate performance was seen in the reasoning model Cogito, with nearly 85 per cent accuracy in responses, whilst producing three times more carbon dioxide emissions than similar-sized models generating concise answers.

"In conclusion, while larger and reasoning-enhanced models significantly outperform smaller counterparts in terms of accuracy, this improvement comes with steep increases in emissions and computational demand," the authors wrote. "Optimising reasoning efficiency and response brevity, particularly for challenging subjects like abstract algebra, is crucial for advancing more sustainable and environmentally conscious artificial intelligence technologies," they wrote.

AI chatbots using reason emit more carbon than those responding concisely, study finds

Indian Express

9 hours ago

  • Science
  • Indian Express

AI chatbots using reason emit more carbon than those responding concisely, study finds

A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history.

'The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,' first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany, said. 'We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models,' Dauner added.

The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots), including DeepSeek and Cogito, process information before responding to 1,000 benchmark questions — 500 multiple-choice and 500 subjective. Each model responded to 100 questions on each of the five subjects chosen for the analysis — philosophy, high school world history, international law, abstract algebra, and high school mathematics.

'Zero-token reasoning traces appear when no intermediate text is needed (e.g. Cogito 70B reasoning on certain history items), whereas the maximum reasoning burden (6,716 tokens) is observed for the Deepseek R1 7B model on an abstract algebra prompt,' the authors wrote. Tokens are virtual objects created by conversational AI when processing a user's prompt in natural language. More tokens lead to increased carbon dioxide emissions.

Chatbots equipped with an ability to reason, or 'reasoning models', produced 543.5 'thinking' tokens per question, whereas concise models — producing one-word answers — required just 37.7 tokens per question, the researchers found. Thinking tokens are additional ones that reasoning models generate before producing an answer, they explained. However, more thinking tokens do not necessarily guarantee correct responses; as the team noted, elaborate detail is not always essential for correctness.

Dauner said, 'None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly.' 'Currently, we see a clear accuracy-sustainability trade-off inherent in (large-language model) technologies,' the author added.

The most accurate performance was seen in the reasoning model Cogito, with nearly 85 per cent accuracy in responses, whilst producing three times more carbon dioxide emissions than similar-sized models generating concise answers.

'In conclusion, while larger and reasoning-enhanced models significantly outperform smaller counterparts in terms of accuracy, this improvement comes with steep increases in emissions and computational demand,' the authors wrote. 'Optimising reasoning efficiency and response brevity, particularly for challenging subjects like abstract algebra, is crucial for advancing more sustainable and environmentally conscious artificial intelligence technologies,' they wrote.

AI chatbots using reason emit more carbon than those responding concisely, study finds

Time of India

10 hours ago

  • Science
  • Time of India

AI chatbots using reason emit more carbon than those responding concisely, study finds

A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history.

"The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany, said. "We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models," Dauner added.

The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots), including DeepSeek and Cogito, process information before responding to 1,000 benchmark questions -- 500 multiple-choice and 500 subjective. Each model responded to 100 questions on each of the five subjects chosen for the analysis -- philosophy, high school world history, international law, abstract algebra, and high school mathematics.

"Zero-token reasoning traces appear when no intermediate text is needed (e.g. Cogito 70B reasoning on certain history items), whereas the maximum reasoning burden (6,716 tokens) is observed for the Deepseek R1 7B model on an abstract algebra prompt," the authors wrote. Tokens are virtual objects created by conversational AI when processing a user's prompt in natural language. More tokens lead to increased carbon dioxide emissions.

Chatbots equipped with an ability to reason, or 'reasoning models', produced 543.5 'thinking' tokens per question, whereas concise models -- producing one-word answers -- required just 37.7 tokens per question, the researchers found. Thinking tokens are additional ones that reasoning models generate before producing an answer, they explained. However, more thinking tokens do not necessarily guarantee correct responses; as the team noted, elaborate detail is not always essential for correctness.

Dauner said, "None of the models that kept emissions below 500 grams of CO₂ equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly." "Currently, we see a clear accuracy-sustainability trade-off inherent in (large-language model) technologies," the author added.

The most accurate performance was seen in the reasoning model Cogito, with nearly 85 per cent accuracy in responses, whilst producing three times more carbon dioxide emissions than similar-sized models generating concise answers.

"In conclusion, while larger and reasoning-enhanced models significantly outperform smaller counterparts in terms of accuracy, this improvement comes with steep increases in emissions and computational demand," the authors wrote. "Optimising reasoning efficiency and response brevity, particularly for challenging subjects like abstract algebra, is crucial for advancing more sustainable and environmentally conscious artificial intelligence technologies," they wrote.

AI chatbots using reason emit more carbon than those responding concisely, study finds

Mint

11 hours ago

  • Science
  • Mint

AI chatbots using reason emit more carbon than those responding concisely, study finds

New Delhi, Jun 19 (PTI) A study found that carbon emissions from chat-based generative AI can be six times higher when responding to complex prompts, like abstract algebra or philosophy, compared to simpler prompts, such as high school history.

"The environmental impact of questioning trained (large-language models) is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, Germany, said. "We found that reasoning-enabled models produced up to 50 times more (carbon dioxide) emissions than concise response models," Dauner added.

The study, published in the journal Frontiers in Communication, evaluated how 14 large-language models (which power chatbots), including DeepSeek and Cogito, process information before responding to 1,000 benchmark questions -- 500 multiple-choice and 500 subjective. Each model responded to 100 questions on each of the five subjects chosen for the analysis -- philosophy, high school world history, international law, abstract algebra, and high school mathematics.

"Zero-token reasoning traces appear when no intermediate text is needed (e.g. Cogito 70B reasoning on certain history items), whereas the maximum reasoning burden (6,716 tokens) is observed for the Deepseek R1 7B model on an abstract algebra prompt," the authors wrote. Tokens are virtual objects created by conversational AI when processing a user's prompt in natural language. More tokens lead to increased carbon dioxide emissions.

Chatbots equipped with an ability to reason, or 'reasoning models', produced 543.5 'thinking' tokens per question, whereas concise models -- producing one-word answers -- required just 37.7 tokens per question, the researchers found. Thinking tokens are additional ones that reasoning models generate before producing an answer, they explained. However, more thinking tokens do not necessarily guarantee correct responses; as the team noted, elaborate detail is not always essential for correctness.

Dauner said, "None of the models that kept emissions below 500 grams of CO₂ equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly." "Currently, we see a clear accuracy-sustainability trade-off inherent in (large-language model) technologies," the author added.

The most accurate performance was seen in the reasoning model Cogito, with nearly 85 per cent accuracy in responses, whilst producing three times more carbon dioxide emissions than similar-sized models generating concise answers.

"In conclusion, while larger and reasoning-enhanced models significantly outperform smaller counterparts in terms of accuracy, this improvement comes with steep increases in emissions and computational demand," the authors wrote. "Optimising reasoning efficiency and response brevity, particularly for challenging subjects like abstract algebra, is crucial for advancing more sustainable and environmentally conscious artificial intelligence technologies," they wrote.
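The accuracy-sustainability trade-off described above can be thought of as filtering models by an emissions budget and asking what accuracy remains attainable. The sketch below is purely illustrative: the model names and numbers are hypothetical placeholders, and only the 500-gram budget and 80 per cent figure echo the quote in the article.

```python
# Illustration of an accuracy-vs-emissions budget filter.
# The entries below are hypothetical placeholders, NOT the study's data;
# only the 500 g CO2e budget and the 80% reference point come from the article.

from dataclasses import dataclass

@dataclass
class ModelResult:
    name: str            # hypothetical model label
    accuracy_pct: float  # accuracy over the 1,000 benchmark questions
    co2e_grams: float    # total g CO2e for answering all 1,000 questions

results = [  # made-up values for illustration only
    ModelResult("concise-7B", 61.0, 120.0),
    ModelResult("concise-70B", 78.5, 460.0),
    ModelResult("reasoning-7B", 72.0, 1400.0),
    ModelResult("reasoning-70B", 84.9, 1300.0),
]

BUDGET_G = 500.0  # emissions budget from the quote in the article

within_budget = [r for r in results if r.co2e_grams <= BUDGET_G]
best = max(within_budget, key=lambda r: r.accuracy_pct)
print(f"Best accuracy under {BUDGET_G:.0f} g CO2e: {best.name} at {best.accuracy_pct:.1f}%")
```

With these made-up entries, the best model that fits the budget tops out below 80 per cent accuracy, mirroring the trade-off the authors report.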
