Latest news with #languagemodels
Yahoo
8 hours ago
- Business
- Yahoo
Hugging Face Co-Founder Challenges AI Optimists: 'Models Can't Ask Original Scientific Questions'
Thomas Wolf, co-founder and chief science officer at Hugging Face, has cast doubt on the belief that current artificial intelligence systems will lead to major scientific breakthroughs. Wolf told Fortune that today's large language models, or LLMs, excel at providing answers but fall short when it comes to formulating original questions. 'In science, asking the question is the hard part,' he said. 'Once the question is asked, often the answer is quite obvious, but the tough part is really asking the question, and models are very bad at asking great questions.'

Wolf's comments were in response to a blog post by Anthropic CEO Dario Amodei, who argues that artificial intelligence could compress a century's worth of scientific breakthroughs into just a few years. Wolf said he initially found the post compelling but became skeptical after rereading it. 'It was saying AI is going to solve cancer, and it's going to solve mental health problems—it's going to even bring peace into the world. But then I read it again and realized there's something that sounds very wrong about it, and I don't believe that,' he told Fortune.

San Francisco-based Anthropic is backed by tech giants including Amazon.com Inc. (NASDAQ:AMZN) and Alphabet Inc. (NASDAQ:GOOG, GOOGL), and is also known for its Claude family of AI models. For Wolf, the core issue lies in how LLMs are trained. In another blog post, Wolf argues that today's AI systems are built to predict likely outcomes, acting as "yes-men on servers": capable of mimicking human responses but incapable of challenging assumptions or generating original ideas.
"To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask," Wolf wrote. He noted that real scientific progress often comes from paradigm shifts—like Copernicus proposing heliocentrism or the invention of CRISPR-based gene editing—rather than from answering existing questions.

Wolf also questioned how AI performance is measured today. In his blog post, he pointed to benchmarks like Humanity's Last Exam and Frontier Math, which test how well AI models can answer complex but well-defined questions. "These are exactly the kinds of exams where I excelled," Wolf wrote, referencing his academic background. "But real scientific breakthroughs come not from answering known questions, but from asking challenging new ones and questioning previous ideas." He argued that AI needs to demonstrate the ability to challenge its training data, take counterfactual approaches, and identify new research directions from incomplete information.

Using the board game Go as an analogy, Wolf said the landmark 2016 victory of DeepMind's AlphaGo over world champions made headlines but was not revolutionary. "Move 37, while impressive, is still essentially a straight-A student answer to the question posed by the rules of the game of Go," he wrote in his blog. "An Einstein-level breakthrough in Go would involve inventing the rules of Go itself."

Hugging Face is a prominent open-source platform in the AI community, known for its collaborative development of open-source machine learning models and tools. The company is backed by investors including Sequoia Capital and Lux Capital, and it plays a leading role in developing transparent and accessible AI systems.
Wolf concluded that while current models are useful as assistants, true scientific progress requires a different kind of intelligence—one that can formulate disruptive questions rather than repeat what is already known.

This article, "Hugging Face Co-Founder Challenges AI Optimists: 'Models Can't Ask Original Scientific Questions'," originally appeared on Benzinga. © 2025 Benzinga. Benzinga does not provide investment advice. All rights reserved.
Yahoo
5 days ago
- Science
- Yahoo
Scientists Just Found Something Unbelievably Grim About Pollution Generated by AI
Tech companies are hellbent on pushing out ever more advanced artificial intelligence models — but there appears to be a grim cost to that progress. In a new study in the journal Frontiers in Communication, German researchers found that large language models (LLMs) that provide more accurate answers use exponentially more energy — and hence produce more carbon — than their simpler and lower-performing peers. In other words, the findings are a grim sign of things to come for the environmental impacts of the AI industry: the more accurate a model is, the higher its toll on the climate.

"Everyone knows that as you increase model size, typically models become more capable, use more electricity and have more emissions," Allen Institute for AI researcher Jesse Dodge, who didn't work on the German research but has conducted similar analysis of his own, told the New York Times.

The team examined 14 open-source LLMs of various sizes — they were unable to access the inner workings of commercial offerings like OpenAI's ChatGPT or Anthropic's Claude — and fed them 500 multiple-choice questions plus 500 "free-response questions." Crunching the numbers, the researchers found that big, more accurate models such as DeepSeek produce the most carbon compared to chatbots with smaller digital brains. So-called "reasoning" chatbots, which break problems down into steps in their attempts to solve them, also produced markedly more emissions than their simpler brethren. There were occasional LLMs that bucked the trend — Cogito 70B achieved slightly higher accuracy than DeepSeek, but with a modestly smaller carbon footprint, for instance — but the overall pattern was stark: the more reliable an AI's outputs, the greater its environmental harm.

"We don't always need the biggest, most heavily trained model to answer simple questions," Maximilian Dauner, a German doctoral student and lead author of the paper, told the NYT.
"Smaller models are also capable of doing specific things well. The goal should be to pick the right model for the right task."

That brings up an interesting point: do we really need AI in everything? When you go on Google, those annoying AI summaries pop up, no doubt generating pollution for a result that you never asked for in the first place. Each individual query might not count for much, but when you add them all up, the effects on the climate could be immense. OpenAI CEO Sam Altman, for example, recently enthused that a "significant fraction" of the Earth's total power production should eventually go to AI.