The AI Revolution Isn't Possible Without an Energy Revolution

Time Magazine · 5 hours ago

OpenAI CEO Sam Altman took the stand before Congress on Capitol Hill to testify on the future of AI regulation. During his hearing, he spelled out a blunt truth about the limits of how far we can take AI: "The cost of AI will converge to the cost of energy."
AI is often framed as a purely digital phenomenon, operating seamlessly in the intangible realm of code and algorithms. But behind every image generated and every response crafted lies a significant and measurable energy cost. The technology we all use relies on minerals, chips, semiconductors, and data centers where our data is churned and processed; it requires energy to power and the extraction of scarce minerals to build. So, as we think about accelerating technology, what lies ahead is not just a computational challenge, but an infrastructural and ecological one.
Despite AI's promise as a technology with seemingly unlimited potential, there is a very real limiting factor to its growth. In his testimony, Altman skipped the fluff: "Eventually, chips, network gear... will be made by robots, and we'll make that very efficient and cheaper and cheaper, but an electron is an electron." This is a fundamental economic principle that will shape AI's future. As AI manufacturing processes become increasingly automated and optimized, the variable costs of hardware production will steadily decline. What remains immutable is the physics of computation itself: the energy required to power these systems. In a mature AI economy, the marginal cost of intelligence will approach the marginal cost of electricity. This creates a direct relationship between energy innovation and AI capabilities; regions with abundant, reliable, and affordable energy will gain decisive advantages in computational power.
Energy is the primary limiting factor to innovation. Some estimates suggest the U.S. will need up to 90 additional gigawatts of capacity to power data centers, roughly the output of 90 nuclear power plants. The nations or companies that can generate clean energy at scale will effectively set the ceiling on what's possible in artificial intelligence. There are two steps the U.S. must take to build an energy-efficient AI infrastructure and lead the global AI race: measure AI emissions at scale, and treat energy policy and AI policy as intertwined rather than as separate interests.
Carbon footprint represents the most comprehensive metric we have for quantifying the true future cost of AI systems. However, OpenAI and other major companies do not disclose how carbon-intensive their models are, leaving users in the dark about the energy behind their AI use. Despite the lack of transparency, data scientists have found fairly precise ways to estimate the carbon cost of various popular models. In the first week after OpenAI released its image generation tool, users created 700 million images. Each image uses about 7 watt-hours of electricity. Together, that's roughly 5 million kilowatt-hours, about the same amount of energy 24,000 U.S. homes use in a week. Organizations that measure and optimize for carbon efficiency now will gain crucial advantages as regulations tighten and energy costs fluctuate. As energy becomes the limiting factor in AI advancement, carbon intensity directly translates to economic competitiveness. While today's market may not fully price environmental externalities, forward-thinking policies and market pressures increasingly will.
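The arithmetic behind that comparison is easy to check. Here is a minimal back-of-the-envelope sketch in Python; it uses the figures above and assumes an average U.S. household consumption of roughly 10,800 kWh per year (about 208 kWh per week), which is an assumption rather than a figure from the article:

```python
# Back-of-the-envelope check of the figures above (a sketch, not OpenAI data).
IMAGES_GENERATED = 700_000_000      # images created in the first week (from the article)
WH_PER_IMAGE = 7                    # estimated watt-hours per image (from the article)

# Assumption: an average U.S. home uses roughly 10,800 kWh per year,
# i.e. about 208 kWh per week.
HOME_KWH_PER_WEEK = 10_800 / 52

total_kwh = IMAGES_GENERATED * WH_PER_IMAGE / 1_000   # convert Wh to kWh
homes_equivalent = total_kwh / HOME_KWH_PER_WEEK

print(f"Total energy: {total_kwh / 1e6:.1f} million kWh")          # ~4.9 million kWh
print(f"Weekly usage of roughly {homes_equivalent:,.0f} U.S. homes")  # ~24,000 homes
```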
By implementing carbon accounting and reporting frameworks for AI operations today, companies and governments can establish the measurement infrastructure needed to drive meaningful optimization. These metrics provide essential visibility into not just where energy is being consumed, but how efficiently it's being utilized across model training, inference, and supporting infrastructure.
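As a rough illustration of what such an accounting layer could look like, the sketch below multiplies metered energy for each workload category by an assumed grid carbon intensity. The category names, energy figures, and intensity value are illustrative assumptions, not measurements from any particular provider or reporting framework:

```python
# Minimal sketch of a carbon-accounting ledger for AI workloads.
# All numbers below are hypothetical placeholders for illustration only.

GRID_KG_CO2_PER_KWH = 0.4   # assumed grid carbon intensity (kg CO2e per kWh)

measured_energy_kwh = {
    "model_training": 120_000,             # hypothetical metered energy per month
    "inference": 450_000,
    "supporting_infrastructure": 80_000,   # cooling, networking, storage
}

# Convert metered energy into estimated emissions per workload category.
report = {
    workload: {
        "energy_kwh": kwh,
        "emissions_kg_co2e": kwh * GRID_KG_CO2_PER_KWH,
    }
    for workload, kwh in measured_energy_kwh.items()
}

for workload, row in report.items():
    print(f"{workload:>28}: {row['energy_kwh']:>9,.0f} kWh  "
          f"{row['emissions_kg_co2e']:>9,.0f} kg CO2e")
```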
I recently worked with a major advertising technology company, and the experience illustrated this principle in action. When the company migrated from CPUs to GPUs, our measurements revealed a 62% reduction in carbon emissions and 55% less water usage: significant efficiency gains achieved through precise measurement and targeted infrastructure changes. This clearly demonstrated that another path forward is possible, and it starts with tracking and optimizing. Starting this measurement process immediately allows organizations to identify optimization opportunities, establish performance baselines, and develop the organizational capabilities needed for sustainable AI leadership.
According to the Carbon Disclosure Project, "Failure to tackle climate-related risks in supply chains costs nearly three times more than the actions required to mitigate these risks." Businesses and governments investing in AI infrastructure today must also invest in ways that sustain innovation for generations to come. As incredibly ambitious AI projects take hold, we have no option but to supercharge innovation to support the most efficient models. The race for artificial intelligence supremacy may ultimately transform into a race for energy supremacy, with the most significant breakthroughs coming not from Silicon Valley's software engineers but from innovations we can make in using AI more sustainably.
In a Senate Commerce Committee hearing with Altman in May, Senator Ted Budd (R-NC) said, "The ability for the U.S. to deploy new energy generation capacity and upgrade its grid is in many ways the key to the race against China. Energy is how we can win and it's also how we can lose." According to the International Energy Agency, China is already a global leader in renewable energy, set to account for almost half of the world's total renewable power by the end of the decade. While the U.S. is hyper-focused on competing with China's AI models, to compete effectively and lead in global AI we also need to lead in operational AI capability. Energy and AI are inextricably linked, and our policy must reflect that.
And while transitioning to renewable energy is crucial for sustainable AI development, scaling renewable energy infrastructure to meet AI's exponentially growing demands will require decades of coordinated investment and policy alignment. We simply cannot afford to wait that long. To adequately respond to the rush of demand today, we can start to spot and mitigate inefficiencies by first capturing the carbon and water footprint of generative AI.
The Artificial Intelligence Environmental Impacts Act of 2024, introduced by Senator Ed Markey of Massachusetts, is one of the first bills that sets out to align our AI ambitions with our energy realities, calling for the development of comprehensive measurement and reporting tools that account for the full range of AI's environmental impacts.
Without concerted action now, we risk embedding wasteful practices into the foundations of an economy increasingly powered by AI. We must adopt a policy-forward strategy that incentivizes energy-efficient data centers through consistent measurement and tracking, and that demands efficiency wherever it can be found.
Last week, Sam Altman laid out his vision for how AI can transform humanity by the 2030s, and he reiterated the possibilities that lie ahead: "[Intelligence and energy] have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else."
There's no doubt that the cost of AI is energy. Let's make sure it's sustainable.

Related Articles

Torc Joins the Stanford Center for AI Safety to Conduct Joint Research on AI Safety for Level 4 Autonomous Trucking

Business Wire · 33 minutes ago

BLACKSBURG, Va.--(BUSINESS WIRE)-- Torc, a pioneer in commercializing self-driving class 8 trucks, today announced its membership with the Stanford Center for AI Safety, which conducts state-of-the-art research to help ensure the safety of AI, specifically machine learning, for use in autonomous trucking applications. This membership marks a significant milestone in Torc's ongoing commitment to ensuring the safety and reliability of its autonomous trucking solutions as the company prepares for market entry in 2027.

The membership enables Torc to sponsor, collaborate in, and coauthor research with the Stanford Center for AI Safety, giving the company direct access to research findings as they happen. Access to the center's research symposiums, seminars, and other member benefits also helps Torc apply Stanford's extensive AI safety research in its efforts to significantly enhance the safety protocols of machine learning models within its autonomous driving systems.

"Torc is proud to join the Stanford Center for AI Safety, reinforcing our mission to deliver safe, scalable, and trustworthy autonomous solutions," said Steve Kenner, Chief Safety Officer at Torc. "This membership aligns with our commitment to advancing rigorous safety practices in AI development and supports our goal of providing highly reliable technology to our customers."

The Stanford Center for AI Safety's research focuses on developing robust safety protocols and advanced machine learning techniques to mitigate risks in autonomous systems. As a member of the center, Torc can leverage published research to continue to address critical safety challenges in autonomous driving applications. Ultimately, Torc will work to continue to enhance the reliability and safety of its machine learning models toward the company's goal of fully commercializing autonomous trucks for long-haul applications in the U.S. in 2027.

"Collaborating with members in our affiliates program allows us to apply our research in AI safety to real-world challenges," commented Duncan Eddy, Director of the Stanford Center for AI Safety. "Our work with Torc will include efforts to enhance the safety and reliability of autonomous driving systems, ultimately contributing to the advancement of this transformative technology."

For more information on Torc, please visit the company's website.

About Torc

Torc, headquartered in Blacksburg, Virginia, is an independent subsidiary of Daimler Truck AG, a global leader and pioneer in trucking. Founded in 2005 at the birth of the self-driving vehicle revolution, Torc has over 20 years of experience in pioneering safety-critical, self-driving applications. Torc offers a complete self-driving vehicle software and integration solution and is currently focusing on commercializing autonomous trucks for long-haul applications in the U.S. In addition to its Blacksburg headquarters and engineering offices in Austin, Texas, and Montreal, Canada, Torc has a fleet operations facility in the Dallas-Fort Worth area in Texas to support the company's productization and commercialization efforts, as well as a presence in Ann Arbor, Michigan, to take advantage of the autonomous and automotive talent base in that region. Torc's purpose is driving the future of freight with autonomous technology. As the world's leading autonomous trucking solution, we empower exceptional employees, deliver a focused, hub-to-hub autonomous truck product, and provide our customers with the safest, most reliable, and cost-efficient solution to the market.

TENEX.AI and WitnessAI Partner to Deliver Managed Security and Compliance Services for Enterprise AI

Business Wire · 33 minutes ago

MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)-- WitnessAI, creator of the first enablement platform for safe AI use, and TENEX.AI, the AI-native cybersecurity company transforming enterprise security, today announced a partnership to provide managed security and compliance services for enterprises using AI for employee productivity, customer effectiveness, and operational security. For enterprises seeking to use GenAI to improve productivity and competitive position, TENEX.AI will provide operational and policy expertise to ensure ongoing success, using the WitnessAI Safe AI Enablement Platform. "Together, we're enabling enterprises to adopt AI quickly and safely, combining deep visibility, policy control, and expert-managed security."

According to a recent study by McKinsey, AI has "potential impact poised to surpass even the biggest innovations of the past, from the printing press to the automobile," and three times more employees are using AI for their work than their employers imagine. Organizations are investing heavily to capitalize on the opportunities AI brings. To do so, these same organizations need technology solutions and operational expertise. Together, WitnessAI and TENEX.AI deliver this combination.

WitnessAI provides software to securely observe, control, and protect employee and customer use of AI applications. WitnessAI analytics help organizations understand risks from AI usage, threats to users and information, and areas for ROI and operational improvements. TENEX.AI provides managed services and trusted expertise for enterprises looking to securely adopt AI and use AI to protect their people and data. TENEX helps enterprises navigate the complexities of security, compliance, and the safe use of artificial intelligence.

"Enterprise interest in safe AI adoption is off the charts, and organizations are looking for expertise beyond the technology itself," said Rick Caccia, CEO, WitnessAI. "TENEX.AI helps to fully solve the AI adoption challenge by delivering services around the software. We help enterprises use AI to move fast but stay safe together."

"Secure AI adoption isn't just a security problem; regulatory compliance, operational excellence, and organizational improvement are key factors in ensuring success," said Eric Foster, CEO, TENEX.AI. "Combining our expert services and WitnessAI's visibility and policy controls solves the whole problem for enterprise security and privacy teams, enabling customers to become industry leaders."

Both companies are already working with global organizations to secure and govern AI usage. They are backed by leading investors, including Andreessen Horowitz, Ballistic Ventures, Shield Capital, and Google Ventures, firms that have had notable success picking major security winners.

About WitnessAI

WitnessAI enables safe and effective adoption of enterprise AI through security and governance guardrails for public and private LLMs. The WitnessAI Secure AI Enablement Platform provides visibility of employee AI use, control of that use via AI-oriented policy, and protection of that use via data and topic security. Learn more at the WitnessAI website.

About TENEX.AI

TENEX.AI is a cybersecurity company leveraging advanced artificial intelligence and human expertise to transform enterprise security. Backed by Andreessen Horowitz (a16z) and Shield Capital, TENEX's flagship offering is a next-generation Managed Detection and Response (MDR) service, transforming how organizations detect and respond to threats. With deep expertise in Google and Microsoft security ecosystems and state-of-the-art AI capabilities, TENEX empowers enterprises to strengthen threat detection, agility, and resilience while maximizing the value of their security investments.

The Senate is expected to pass this crypto bill without addressing Trump's investments

Fast Company · 41 minutes ago

The Senate is expected to approve legislation Tuesday that would regulate a form of cryptocurrency known as stablecoins, the first of what is expected to be a wave of crypto legislation from Congress that the industry hopes will bolster its legitimacy and reassure consumers.

The fast-moving legislation, which will be sent to the House for potential revisions, comes on the heels of a 2024 campaign cycle in which the crypto industry ranked among the top political spenders in the country, underscoring its growing influence in Washington and beyond.

Eighteen Democratic senators have shown support for the legislation as it has advanced, siding with the Republican majority in the 53–47 Senate. If passed, it would become the second major bipartisan bill to advance through the Senate this year, following the Laken Riley Act on immigration enforcement in January.

Still, most Democrats oppose the bill. They have raised concerns that the measure does little to address President Donald Trump's personal financial interests in the crypto space.

"We weren't able to include certainly everything we would have wanted, but it was a good bipartisan effort," said Sen. Angela Alsobrooks, D-Md., on Monday. She added, "This is an unregulated area that will now be regulated."

Known as the GENIUS Act, the bill would establish guardrails and consumer protections for stablecoins, a type of cryptocurrency typically pegged to the U.S. dollar. The acronym stands for "Guiding and Establishing National Innovation for U.S. Stablecoins."

It's expected to pass Tuesday, since it only requires a simple majority vote, and it already cleared its biggest procedural hurdle last week in a 68–30 vote. But the bill has faced more resistance than initially expected.

There is a provision in the bill that bans members of Congress and their families from profiting off stablecoins. But that prohibition does not extend to the president and his family, even as Trump builds a crypto empire from the White House. Trump hosted a private dinner last month at his golf club with top investors in a Trump-branded meme coin. His family holds a large stake in World Liberty Financial, a crypto project that provides yet another avenue where investors are buying in and enriching the president's relatives. World Liberty has launched its own stablecoin, USD1.

The administration is broadly supportive of crypto's growth and its integration into the economy. Treasury Secretary Scott Bessent last week said the legislation could help push the U.S. stablecoin market beyond $2 trillion by the end of 2028.

Brian Armstrong, CEO of Coinbase, the nation's largest crypto exchange and a major advocate for the bill, has met with Trump and praised his early moves on crypto. This past weekend, Coinbase was among the more prominent brands that sponsored a parade in Washington commemorating the Army's 250th anniversary, an event that coincided with Trump's 79th birthday.

But the crypto industry emphasizes that it views the legislative effort as bipartisan, pointing to champions on each side of the aisle. "The GENIUS Act will be the most significant digital assets legislation ever to pass the U.S. Senate," Senate Banking Committee Chair Tim Scott, R-S.C., said ahead of a key vote last week. "It's the product of months of bipartisan work."

The bill did hit one rough patch in early May, when a bloc of Senate Democrats who had previously supported the bill reversed course and voted to block it from advancing. That prompted new negotiations involving Senate Republicans, Democrats, and the White House, which ultimately produced the compromise version expected to win passage Tuesday. "There were many, many changes that were made. And ultimately, it's a much better deal because we were all at the table," Alsobrooks said.

Still, the bill leaves unresolved concerns over presidential conflicts of interest, an issue that remains a source of tension within the Democratic caucus. Sen. Elizabeth Warren, D-Mass., has been among the most outspoken as the ranking member on the Senate Banking Committee, warning that the bill creates a "super highway" for Trump corruption. She has also warned that the bill would allow major technology companies, such as Amazon and Meta, to launch their own stablecoins.

If the stablecoin legislation passes the Senate on Tuesday, it still faces several hurdles before reaching the president's desk. It must clear the narrowly held Republican majority in the House, where lawmakers may try to attach a broader market structure bill, sweeping legislation that could make passage through the Senate more difficult. Trump has said he wants stablecoin legislation on his desk before Congress breaks for its August recess, now just under 50 days away.
