Sam Altman says the energy needed for an average ChatGPT query can power a lightbulb for a few minutes
In a blog post on Tuesday about the impact AI tools will have on the future, Altman referenced the energy and resources consumed by OpenAI's chatbot, ChatGPT.
"People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes," Altman wrote.
"It also uses about 0.000085 gallons of water; roughly one-fifteenth of a teaspoon," he continued.
Altman wrote that he expects energy to "become wildly abundant" in the 2030s. Energy and the limits of human intelligence have been "fundamental limiters on human progress for a long time," he added.
"As data center production gets automated, the cost of intelligence should eventually converge to near the cost of electricity," he wrote.
OpenAI did not respond to a request for comment from Business Insider.
This is not the first time Altman has predicted that AI will become cheaper to use.
In February, Altman wrote on his blog that the cost of using AI will drop by 10 times every year.
"You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period," Altman wrote.
"Moore's law changed the world at 2x every 18 months; this is unbelievably stronger," he added.
Tech companies hoping to dominate in AI have been considering using nuclear energy to power their data centers.
In September, Microsoft signed a 20-year deal with Constellation Energy to reactivate a dormant nuclear plant at Three Mile Island.
In October, Google said it had struck a deal with Kairos Power, a nuclear energy company, to build three small modular nuclear reactors. The reactors, which will provide up to 500 megawatts of electricity, are set to be ready by 2035.
Google's CEO, Sundar Pichai, said in an interview with Nikkei Asia published in October that the search giant wants to achieve net-zero emissions across its operations by 2030. He added that besides looking at nuclear energy, Google was considering solar energy.

Related Articles


San Francisco Chronicle
28 minutes ago
AI chatbots need more books to learn from. These libraries are opening their stacks
CAMBRIDGE, Mass. (AP) — Everything ever said on the internet was just the start of teaching artificial intelligence about humanity. Tech companies are now tapping into an older repository of knowledge: the library stacks.

Nearly one million books published as early as the 15th century — and in 254 languages — are part of a Harvard University collection being released to AI researchers Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library.

Cracking open the vaults to centuries-old tomes could be a data bonanza for tech companies battling lawsuits from living novelists, visual artists and others whose creative works have been scooped up without their consent to train AI chatbots.

'It is a prudent decision to start with public domain data because that's less controversial right now than content that's still under copyright,' said Burton Davis, a deputy general counsel at Microsoft. Davis said libraries also hold 'significant amounts of interesting cultural, historical and language data' that's missing from the past few decades of online commentary that AI chatbots have mostly learned from.

Supported by 'unrestricted gifts' from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries around the world on how to make their historic collections AI-ready in a way that also benefits libraries and the communities they serve.

'We're trying to move some of the power from this current AI moment back to these institutions,' said Aristana Scourtas, who manages research at Harvard Law School's Library Innovation Lab. 'Librarians have always been the stewards of data and the stewards of information.'

Harvard's newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper. One of the earlier works is from the 1400s — a Korean painter's handwritten thoughts about cultivating flowers and trees. The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organized by generations of librarians.

It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems. 'A lot of the data that's been used in AI training has not come from original sources,' said the data initiative's executive director, Greg Leppert, who is also chief technologist at Harvard's Berkman Klein Center for Internet & Society. This book collection goes 'all the way back to the physical copy that was scanned by the institutions that actually collected those items,' he said.

Before ChatGPT sparked a commercial AI frenzy, most AI researchers didn't think much about the provenance of the passages of text they pulled from Wikipedia, from social media forums like Reddit and sometimes from deep repositories of pirated books. They just needed lots of what computer scientists call tokens — units of data, each of which can represent a piece of a word.

Harvard's new AI training collection has an estimated 242 billion tokens, an amount that's hard for humans to fathom, but it's still just a drop of what's being fed into the most advanced AI systems. Facebook parent company Meta, for instance, has said the latest version of its AI large language model was trained on more than 30 trillion tokens pulled from text, images and videos.
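To make "tokens" concrete: a tokenizer splits text into sub-word units that a model consumes. A small sketch using OpenAI's open-source tiktoken library (the choice of encoding here is illustrative), plus the per-page average implied by the article's own figures:

```python
import tiktoken  # pip install tiktoken

# Encode a sentence into tokens; each ID is one sub-word unit.
enc = tiktoken.get_encoding("cl100k_base")
sample = "Librarians have always been the stewards of data."
ids = enc.encode(sample)
print(ids)                 # list of integer token IDs
print(len(ids), "tokens")  # a word is often 1-2 tokens

# Rough density implied by the article: 242B tokens / 394M pages.
print(f"~{242e9 / 394e6:.0f} tokens per scanned page")  # ~614
```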
Meta is also battling a lawsuit from comedian Sarah Silverman and other published authors who accuse the company of stealing their books from 'shadow libraries' of pirated works.

Now, with some reservations, the real libraries are standing up. OpenAI, which is also fighting a string of copyright lawsuits, donated $50 million this year to a group of research institutions including Oxford University's 400-year-old Bodleian Library, which is digitizing rare texts and using AI to help transcribe them.

When the company first reached out to the Boston Public Library, one of the biggest in the U.S., the library made clear that any information it digitized would be for everyone, said Jessica Chapel, its chief of digital and online services. 'OpenAI had this interest in massive amounts of training data. We have an interest in massive amounts of digital objects. So this is kind of just a case that things are aligning,' Chapel said.

Digitization is expensive. It's been painstaking work, for instance, for Boston's library to scan and curate dozens of New England's French-language newspapers that were widely read in the late 19th and early 20th century by Canadian immigrant communities from Quebec. Now that such text is of use as training data, it helps bankroll projects that librarians want to do anyway.

'We've been very clear that, "Hey, we're a public library,"' Chapel said. 'Our collections are held for public use, and anything we digitized as part of this project will be made public.'

Harvard's collection was already digitized starting in 2006 for another tech giant, Google, in its controversial project to create a searchable online library of more than 20 million books. Google spent years beating back legal challenges from authors to its online book library, which included many newer and copyrighted works. The case was finally settled in 2016 when the U.S. Supreme Court let stand lower court rulings that rejected copyright infringement claims.

Now, for the first time, Google has worked with Harvard to retrieve public domain volumes from Google Books and clear the way for their release to AI developers. Copyright protections in the U.S. typically last for 95 years, and longer for sound recordings.

How useful all of this will be for the next generation of AI tools remains to be seen as the data gets shared Thursday on the Hugging Face platform, which hosts datasets and open-source AI models that anyone can download. The book collection is more linguistically diverse than typical AI data sources. Fewer than half the volumes are in English, though European languages still dominate, particularly German, French, Italian, Spanish and Latin.

A book collection steeped in 19th century thought could also be 'immensely critical' for the tech industry's efforts to build AI agents that can plan and reason as well as humans, Leppert said. 'At a university, you have a lot of pedagogy around what it means to reason,' Leppert said. 'You have a lot of scientific information about how to run processes and how to run analyses.'

At the same time, there's also plenty of outdated data, from debunked scientific and medical theories to racist narratives. 'When you're dealing with such a large data set, there are some tricky issues around harmful content and language,' said Kristi Mukk, a coordinator at Harvard's Library Innovation Lab, who said the initiative is trying to provide guidance about mitigating the risks of using the data, to 'help them make their own informed decisions and use AI responsibly.'
Yahoo
32 minutes ago
If I Could Invest $1,000 in Any Growth Stock, It Would Be This One
- Alphabet is trading below the market when comparing its forward price-to-earnings ratio to the S&P 500.
- Advertising accounted for about 75% of Alphabet's total revenue in the first quarter of 2025.
- Google Cloud will be Alphabet's main growth driver for the foreseeable future.

This year has been rocky for the U.S. stock market. Between the Trump administration's tariff plans (and subsequent backtracks), recession fears, and overall uncertainty, the stock market has been more volatile than usual. Due to the uncertainty, investors have been heading toward value and dividend stocks, shifting away from the growth stocks that have been so popular in recent years. And while this is a strategic move to minimize risk, all isn't lost with growth stocks.

One growth stock in particular that I would consider is Alphabet (NASDAQ: GOOG)(NASDAQ: GOOGL). With a market cap of more than $2 trillion (as of June 9), Alphabet may not seem like your typical growth stock, but it checks the boxes. And right now, it's a bargain worth considering.

It's been a tough start to the year for Alphabet, down more than 8% through June 9 and essentially flat over the past 12 months. Of course, this isn't ideal for shareholders, but it does make the stock a lot more attractive for those looking to add shares or make their first purchase. At around 18 times forward earnings, Alphabet's stock is trading below the market (compared to the S&P 500) and is much cheaper than peers like Apple, Microsoft, Amazon, and Meta. Trading at a relatively low value alone doesn't make Alphabet's stock a buy, but it does make the upside far outweigh the downside.

Alphabet has consistently been one of the top money-making businesses in the world. With subsidiaries that include Google, YouTube, Android, Waymo, and dozens of others, it's easy to see why. For perspective, Alphabet made $90.2 billion in revenue in the first quarter (up 12% year over year), more than companies like FedEx, Johnson & Johnson, and Taiwan Semiconductor Manufacturing Company have made in their last four quarters combined.

Despite the billions Alphabet makes, it's hard to ignore the concentration of the company's revenue streams. Advertising, which includes Google Search and YouTube ads, accounted for more than 74% ($66.9 billion) of Alphabet's total revenue. As advertising goes, so goes Alphabet's business. That alone isn't the problem, but some people fear that artificial intelligence (AI) tools could lead to reduced use of Google Search, potentially impacting its core business model. It may have some impact, but I don't think it will be significant, especially as Google incorporates its own AI tools and finds ways to monetize them.

Although Google advertising is Alphabet's bread and butter, the company's main growth driver right now is Google Cloud. In Q1, Google Cloud made $12.3 billion in revenue, up 28% year over year. Arguably more impressive than the revenue growth is the operating income, which grew 142% year over year to $2.2 billion. It takes a lot of scale for cloud computing businesses to be profitable because they have high fixed costs for things like data centers, servers, and other infrastructure. It appears that Google Cloud has reached that scale.

Google Cloud (12%) is firmly behind Amazon Web Services (30%) and Microsoft Azure (21%) in market share, and will likely remain in the third spot for the foreseeable future. However, it can still be a productive business for Alphabet.
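The scale argument for Google Cloud can be checked against the quarter's own numbers. A quick calculation (illustrative arithmetic, not investment analysis):

```python
# Q1 2025 figures quoted in the article, in billions of dollars.
total_revenue = 90.2    # up 12% year over year
ad_revenue = 66.9
cloud_revenue = 12.3    # up 28% year over year
cloud_op_income = 2.2   # up 142% year over year

print(f"Ad share of revenue:    {ad_revenue / total_revenue:.1%}")       # ~74.2%
print(f"Cloud operating margin: {cloud_op_income / cloud_revenue:.1%}")  # ~17.9%
# Operating income growing roughly five times faster than revenue is the
# scale effect described above: fixed infrastructure costs spread over a
# growing revenue base.
```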
Even in the third spot, the cloud pie is expected to grow large enough that Google Cloud could still make a meaningful contribution to Alphabet's financials. Although antitrust scrutiny and court rulings could reshape Alphabet's business down the road, the company is well positioned to remain a dominant force in tech for years to come. You likely won't regret investing $1,000 in the stock when you look back years from now.

Before you buy stock in Alphabet, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Alphabet wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years.

Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $649,102!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $882,344!*

Now, it's worth noting Stock Advisor's total average return is 996% — a market-crushing outperformance compared to 174% for the S&P 500. Don't miss out on the latest top 10 list, available when you join.

*Stock Advisor returns as of June 9, 2025

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Stefon Walters has positions in Apple and Microsoft. The Motley Fool has positions in and recommends Alphabet, Amazon, Apple, FedEx, Meta Platforms, Microsoft, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Johnson & Johnson and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.

If I Could Invest $1,000 in Any Growth Stock, It Would Be This One was originally published by The Motley Fool


Business Wire
an hour ago
DATA POEM Launches POEM365, The World's First Large Causal AI Model (LCM)
NEW YORK--(BUSINESS WIRE)-- DATA POEM, an AI-first company that pioneered the field of Horizontal Causal Intelligence, today announced the official launch of POEM365. The product marks the debut of the world's first and only Large Causal AI Model (LCM), designed to transform how enterprise organizations with over $500M in revenue across automotive, CPG, consumer durables, retail, QSR, e-commerce, and other sectors leverage data for mission-critical decision making.

POEM365 launches with a foundation built on more than 250 billion integrated transaction records spanning 15,000 brands and $5 trillion in spend. Rigorously tested across varied datasets, DATA POEM's large causal architecture demonstrates industry-leading accuracy, exceeding the performance of major tech platforms. Notably, DATA POEM established new performance benchmarks, surpassing the records set by the top contenders in the M5 Forecasting Competition, which had stood since the competition took place in 2020. While those competitors deployed over 200 specialized models, DATA POEM attained its results with a single, unified model. Its causal architecture delivered significantly lower error rates, surpassing the 0.52 accuracy benchmark that major tech companies have struggled to exceed. The large causal architecture also outperforms all large AI time-series models on their benchmark datasets, including TimesFM by Google, Chronos by Amazon, Moirai by Salesforce, TTM by IBM, and the latest benchmark leader, Toto by Datadog, across all time horizons and accuracy metrics.

POEM365: Unlocking True Enterprise Intelligence

POEM365 is trained on a broad, multivariate data foundation using DATA POEM's proprietary causal architecture, 'Fount'. This technology represents a paradigm shift from fragmented analytics to unified horizontal intelligence across deeply siloed functions, building an enterprise consciousness that allows enterprises to think and act as one. It uniquely enables organizations to gain a unified understanding of their operations, achieving Enterprise Consciousness:

- From What to Why and How: POEM365 doesn't just describe what happened; it reveals why it happened and prescribes how to respond.
- From Correlation to Causation: While traditional analytics identify correlations, POEM365 uncovers true causal relationships, leading to more impactful strategies.
- From Silos to Horizontal Integration: Unlike point solutions and AI enablers, POEM365 spans all enterprise functions, creating a unified intelligence layer.
- From Insight to Action: POEM365 bridges the gap between understanding and implementation, providing prescriptive and precise intervention recommendations.
- Total Enterprise Orchestration: The POEM365 model architecture orchestrates planning and optimization across every organizational function and all three critical time horizons: strategic long-term (18-36 months), annual, and monthly agile cycles.

"DATA POEM's proprietary causal architecture ensures the 365 platform can not only process but truly understand the data at scale, allowing for mission-critical business decisions to be made with unprecedented confidence," said Bharath Gaddam, CEO and founder of DATA POEM. "This represents a monumental leap forward in the field of artificial intelligence, which has mostly relied on surface-level statistical correlations. This architecture applies transfer learning to understand non-linear data patterns and relationships, synergies and halo effects with near real-time agility."

Core Differentiators & Advanced Capabilities

Intuitive Intelligence: POEM365 makes complex horizontal data analysis easily accessible to a wide range of stakeholders through an intuitive natural language interface and a specialized agent swarm, the Causal Poets. These include:

- Planning Intelligence Agent: Specializes in forecasting, scenario modeling, and planning.
- Optimization Intelligence Agent: Focuses on finding optimal solutions across complex decision spaces.
- Research Analyst Agent: Conducts deep investigations into complex business problems and works as an insights provider or research analyst.

Enterprise-Grade Security: Built for enterprise environments, POEM365 runs in an enterprise's private cloud and offers robust security features such as hardware-level encryption, zero data exfiltration, and SOC 2 Type II compliance. Both the organization's data and intelligence stay within its private cloud.

Industry-Specific Large Models: Unlike generic AI, POEM365 deploys industry-specific models trained on sector-unique data patterns. This specialization enables solutions like POEM365^Auto, POEM365^CPG, POEM365^Retail, POEM365^Durables, POEM365^Hospitality, POEM365^Ecomm, POEM365^Electronics, POEM365^Fashion, and POEM365^QSR to deliver unmatched precision. The specialized vertical solutions merge seamlessly with horizontal causal intelligence, delivering both industry depth and enterprise-wide understanding in a single, unified platform. POEM365 and its coordinated team of special AI agents, the Causal Poets, empower users to easily engage with their data, receiving real-time insights and course correction far beyond what traditional AI-enabled platforms provide.

Customizable Deployment with a Suite of Solutions: Studio, Verse, and Rhyme

- Studio: The most comprehensive enterprise suite delivers a fully customizable solution that lets users tailor the POEM365 model with granular data to generate more precise intelligence.
- Verse: Enables users to deploy targeted POEM agents as specialized business function modules, providing specialized capabilities exactly when and where they are needed.
- Rhyme: Provides pre-configured POET essentials as a foundational package, offering standardized planning and optimization capabilities with core business intelligence for organizations getting started with enterprise-wide planning. Includes research agent insights for automated data analysis and basic predictive recommendations.

Since 2019, DATA POEM has been revolutionizing causal intelligence for top brands, establishing itself as a leader in pushing AI beyond mere correlation to true causation. The launch of POEM365 represents a pivotal first step in DATA POEM's mission, with its vision extending far beyond current enterprise applications. The company is continuously developing cutting-edge causal models and expanding its proprietary architecture, 'Fount', with future innovations set to span many industries and empower more profound transformations.

For more information about DATA POEM and POEM365, please visit the company's website.

DATA POEM is an AI-first company that pioneered the field of Horizontal Causal Intelligence. Leveraging neural networks since its inception, DATA POEM's mission is to empower humankind by answering the fundamental "Why" of the world through cutting-edge technology. With POEM365, DATA POEM is revolutionizing enterprise decision-making by moving AI beyond correlation to true causation, delivering unprecedented accuracy and unified intelligence across deeply siloed functions.
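The correlation-versus-causation distinction at the heart of the announcement can be illustrated with a toy simulation (a generic sketch of the concept, not DATA POEM's method; all variables here are hypothetical): a hidden confounder makes two series correlate even though forcing one has no effect on the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A confounder (e.g., hot weather) drives both series.
weather = rng.normal(size=n)
ice_cream = 2.0 * weather + rng.normal(size=n)
electricity = 3.0 * weather + rng.normal(size=n)

# Observational data shows a strong correlation...
print(np.corrcoef(ice_cream, electricity)[0, 1])  # ~0.85

# ...but under an intervention do(ice_cream), the value is set
# externally, severing its link to weather, and the correlation
# with electricity use vanishes: there is no causal effect.
ice_cream_forced = rng.normal(size=n)
print(np.corrcoef(ice_cream_forced, electricity)[0, 1])  # ~0.0
```

A causal model aims to predict the second number from the first kind of data, which is something correlation alone cannot do.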