Latest news with #TheGentleSingularity


Time of India
6 hours ago
- Business
- Time of India
Sam Altman just dropped a big AI prediction for 2026; experts are skeptical
Sam Altman AI prediction for 2026

In a fresh, thought-provoking article entitled "The Gentle Singularity," OpenAI CEO Sam Altman has again made a daring prediction: by 2026, artificial intelligence systems will be able to produce "novel insights." The claim may sound ambiguous on its own, yet in the context of OpenAI's evolving objectives and the wider competitive landscape of AI research, it carries real weight. Altman, famous for his futurist vision and bold declarations regarding artificial general intelligence (AGI), presents this forecast as part of an extended evolution of human society, one in which AI remakes work, energy, innovation, and even scientific breakthroughs. His essay, released June 10, 2025, is more than a prediction; it reads as an unofficial guide to where OpenAI and its competitors are going next.

The race towards AI systems that come up with genuinely new ideas, instead of merely recombining existing information, is becoming the next arms race in technology. With major breakthroughs from Google DeepMind, Anthropic, and upstarts such as Lila Sciences and FutureHouse, the stakes are high not just for corporate dominance but for the future of science itself.

What Sam Altman really said: AI with "novel insights" by 2026

Altman's essay describes his vision for the "gentle singularity," an era in which AGI does not destroy or dismantle civilization but instead becomes a productive, evolutionary partner to humans. Perhaps the most remarkable of his predictions is this:

"By 2026, we will likely see the arrival of systems that can figure out novel insights."

This matter-of-fact-sounding sentence has huge implications.
First of all, it indicates that OpenAI is approaching a level of model capability where artificial intelligence can move past summarization, prediction, or pattern matching and start to think creatively. This is the distinction between an AI that simply reports data and one that adds meaningfully to human knowledge. OpenAI President and co-founder Greg Brockman recently supported this idea when announcing the o3 and o4-mini models in April 2025. "These were the first models scientists used to create useful new ideas in their field," he declared, suggesting that OpenAI is already developing the kind of AI Altman describes in his essay.

What does "novel insight" really mean

A "novel insight" in AI is the capability of a model to generate new, valuable, and previously unconsidered ideas or hypotheses. This capability stretches well beyond the current repertoire of AI chatbots and large language models, which largely repackage known facts.

An AI race for survival: Other technology giants are not lagging behind

Altman's essay does not stand alone. Over the past few months, a host of major players have taken strategic actions in line with the premise of "novel insight" generation:
- Google DeepMind recently released a paper on AlphaEvolve, a coding AI agent that produced innovative solutions to challenging math problems, including approaches even accomplished researchers had not tried.
- Anthropic, maker of Claude AI, established a research grant program in May 2025 to support projects that use AI to generate scientific hypotheses and conduct experiments.
- FutureHouse, backed by ex-Google CEO Eric Schmidt, says its AI assisted in an actual scientific breakthrough, though details are kept under wraps.
- Lila Sciences, a startup founded by former OpenAI researcher Kenneth Stanley, has raised $200 million to create a lab dedicated to teaching AI models to ask more intelligent scientific questions, a basic prerequisite for insight.
Why 'novel insights' from AI might still be useless, experts say

Even as the hype sets in, top AI minds remain hesitant. Hugging Face Chief Science Officer Thomas Wolf recently contended that existing models cannot pose genuinely new questions, a precondition for any scientific breakthrough. Likewise, Kenneth Stanley himself concedes that the problem is "fundamentally difficult": it is not merely a matter of computational horsepower but of modeling what humans find "interesting" or "meaningful." Creativity is not just computation but also intuition and judgment, areas where AI still lags. This means that even if models can generate new hypotheses, it remains unclear how useful, testable, or correct those insights will be.


Hindustan Times
2 days ago
- Hindustan Times
Every ChatGPT query you make uses water, and Sam Altman has revealed the exact figure
Generative AI has now become part and parcel of our daily lives. Many organisations rely on chatbots like ChatGPT, and on the LLMs behind them, to get work done through agents and other AI tooling. But how much water does each of your ChatGPT queries use? OpenAI's boss and the man behind ChatGPT, Sam Altman, recently revealed the figure himself in a blog post titled 'The Gentle Singularity', while discussing data centre production, automation, and the cost of intelligence eventually converging towards the cost of electricity. Altman notes that people often ask how much energy a ChatGPT query uses. In terms of electricity, it is about 0.34 watt-hours, roughly what an oven would use in a little over one second, or what a high-efficiency light bulb would use in a couple of minutes. As for water, Altman says a query uses about 0.000085 gallons (0.00032 litres), in real-world terms about one-fifteenth of a teaspoon. Using ChatGPT 1,000 times would therefore consume approximately 0.32 litres of water. It has long been pointed out that running these AI models requires large data centres and substantial compute, and with that comes more electricity, more power generation, and, of course, water; this is where the larger carbon footprint comes into play. That said, while Altman cites these figures in his official blog post, it is not clear how he arrived at them or whether OpenAI has conducted research on the question.
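The per-query figures above can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below takes only the numbers Altman cites (0.34 Wh of electricity and 0.000085 US gallons of water per query) and scales them; the function name `usage_for` is an illustrative label, not anything from OpenAI.

```python
# Back-of-the-envelope arithmetic using the figures Altman cites in
# "The Gentle Singularity": ~0.34 Wh and ~0.000085 US gallons per query.
GALLONS_TO_LITRES = 3.785411784  # exact definition of the US gallon

ENERGY_WH_PER_QUERY = 0.34
WATER_GAL_PER_QUERY = 0.000085
WATER_L_PER_QUERY = WATER_GAL_PER_QUERY * GALLONS_TO_LITRES  # ~0.00032 L

def usage_for(queries: int) -> tuple[float, float]:
    """Return (total energy in Wh, total water in litres) for a query count."""
    return queries * ENERGY_WH_PER_QUERY, queries * WATER_L_PER_QUERY

energy_wh, water_l = usage_for(1000)
print(f"1,000 queries: {energy_wh:.0f} Wh, {water_l:.2f} L of water")
```

Scaling the per-query water figure by 1,000 gives roughly 0.32 litres, matching the article's estimate.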

Yahoo
2 days ago
- Business
- Yahoo
Sam Altman thinks AI will have 'novel insights' next year
In a new essay published Tuesday called "The Gentle Singularity," OpenAI CEO Sam Altman shared his latest vision for how AI will change the human experience over the next 15 years. The essay is a classic example of Altman's futurism: hyping up the promise of AGI — and arguing that his company is quite close to the feat — while simultaneously downplaying its arrival. The OpenAI CEO frequently publishes essays of this nature, cleanly laying out a future in which AGI disrupts our modern conception of work, energy, and the social contract. But often, Altman's essays contain hints about what OpenAI is working on next. At one point in the essay, Altman claimed that next year, in 2026, the world will "likely see the arrival of [AI] systems that can figure out novel insights." While this is somewhat vague, OpenAI executives have recently indicated that the company is focused on getting AI models to come up with new, interesting ideas about the world. When announcing OpenAI's o3 and o4-mini AI reasoning models in April, co-founder and President Greg Brockman said these were the first models that scientists had used to generate new, helpful ideas. Altman's blog post suggests that in the coming year, OpenAI itself may ramp up its efforts to develop AI that can generate novel insights. OpenAI certainly wouldn't be the only company focused on this effort — several of OpenAI's competitors have shifted their focus to training AI models that can help scientists come up with new hypotheses, and thus, novel discoveries about the world. In May, Google released a paper on AlphaEvolve, an AI coding agent that the company claims has generated novel approaches to complex math problems. Another startup backed by former Google CEO Eric Schmidt, FutureHouse, claims its AI agent tool has been capable of making a genuine scientific discovery. In May, Anthropic launched a program to support scientific research.
If successful, these companies could automate a key part of the scientific process and potentially break into massive industries such as drug discovery, materials science, and other fields with science at their core. This wouldn't be the first time Altman has tipped his hand about OpenAI's plans in a blog. In January, Altman wrote another blog post suggesting that 2025 would be the year of agents. His company then proceeded to ship its first three AI agents: Operator, Deep Research, and Codex. But getting AI systems to generate novel insights may be harder than making them agentic. The broader scientific community remains somewhat skeptical of AI's ability to generate genuinely original insights. Earlier this year, Hugging Face's Chief Science Officer Thomas Wolf wrote an essay arguing that modern AI systems cannot ask great questions, which is key to any great scientific breakthrough. Kenneth Stanley, a former OpenAI research lead, also previously told TechCrunch that today's AI models cannot generate novel hypotheses. Stanley is now building out a team at Lila Sciences, a startup that raised $200 million to create an AI-powered laboratory specifically focused on getting AI models to come up with better hypotheses. This is a difficult problem, according to Stanley, because it involves giving AI models a sense of what is creative and interesting. Whether OpenAI truly creates an AI model capable of producing novel insights remains to be seen. Still, Altman's essay may feature something familiar: a preview of where OpenAI is likely headed next. This article originally appeared on TechCrunch.