
Could Lack Of DOGE Hamper Countries From Capturing The AI Bounty?
[Image: a robot plugging into a power source; ChatGPT-generated illustration]
Energy is often treated like a commodity. Priced. Traded. Measured in barrels, kilowatt-hours, or BTUs.
But that framing misses the bigger picture. Energy isn't just something we consume. It's the invisible foundation beneath every step of human progress. It fuels industry, powers cities, and increasingly, drives the digital revolution. Without it, prosperity doesn't just stall. It doesn't exist.
Over the last century, access to energy has reshaped the human condition. In 1990, around 1.9 billion people, or 35 percent of the global population, lived in extreme poverty, surviving on less than two dollars a day. Today, that figure is down to 782 million. That drop didn't happen by accident. Expanding access to electricity and fuel enabled industrial growth, agricultural productivity, and urbanisation—all pillars of economic uplift.
Now, we're entering a new chapter. One marked by rising population, accelerating technological breakthroughs, and surging demand for power. By 2050, an estimated 1.1 billion people will enter the global middle class. At the same time, the global population will approach 10 billion, with much of the growth concentrated in regions like sub-Saharan Africa.
As living standards rise, so will the need for energy-intensive goods, services, and infrastructure. The U.S. Energy Information Administration projects a 135% increase in global energy consumption by mid-century.
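To put that projection in perspective, here is a rough sanity check on what a 135% increase by mid-century implies as an annual rate. The 25-year horizon (roughly 2025 to 2050) is an assumption for illustration, not a figure from the EIA.

```python
# Back-of-the-envelope arithmetic: a 135% increase means consumption
# grows to 2.35x its starting level. What compound annual growth rate
# does that imply over an assumed 25-year horizon?

total_growth = 2.35   # 135% increase => 2.35x the starting level
years = 25            # assumed horizon, ~2025 to 2050

annual_rate = total_growth ** (1 / years) - 1
print(f"Implied annual growth: {annual_rate:.1%}")
```

A compounding rate of around 3.5% a year sounds modest, but sustained for a quarter century it more than doubles global demand.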
The systems that underpin this growth, whether economic, industrial or digital, are grounded in materials that require massive energy input. Vaclav Smil outlines four of these pillars in How the World Really Works: ammonia, steel, concrete, and plastic. These aren't optional; they are the non-negotiables of development. From feeding the world to building its infrastructure, they represent the baseline for progress. And producing them, at scale, still relies on fossil fuels.
At the same time, a new force is reshaping energy demand across the globe: artificial intelligence. Behind every large language model or real-time inference engine lies a dense web of data centers, chips, storage, and cooling infrastructure. These physical systems, often invisible to users, require tremendous amounts of electricity to function.
Electricity that must come from somewhere.
The International Energy Agency estimates that global data center energy use could rise by up to 128% by 2026, adding the equivalent of Sweden's entire electricity consumption at the low end, or Germany's at the high end, to the global grid.
In Ireland, data centers already use more than 20% of the country's electricity. In the U.S., data centers now account for more than 10% of total electricity use in at least five states. And S&P Global projects that U.S. data center usage will nearly double between 2024 and 2028, hitting 530 terawatt-hours, more electricity than Texas produced in 2022.
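The S&P Global projection above (usage nearly doubling between 2024 and 2028) can be translated into an implied annual growth rate. The 2024 baseline below is an assumption, taken as roughly half of the projected 530 TWh, purely to make the arithmetic concrete.

```python
# If U.S. data center electricity use roughly doubles over four years,
# what annual growth rate does that imply? Illustrative arithmetic only.

start_twh = 265   # assumed 2024 baseline, ~half of the projected figure
end_twh = 530     # projected 2028 usage cited above
years = 4

cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")
```

Doubling in four years works out to roughly 19% growth per year, an order of magnitude faster than overall electricity demand has historically grown in mature grids.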
But where will all those terawatt-hours come from?
[Chart: leading countries by number of data centers. Source: Cloudscene via Statista]
[Chart: top 10 countries by electricity capacity. Source: IRENA]
The U.S. leads by a large margin, both in energy production and the concentration of AI data centers. However, as the data shows, the growing energy demands of AI and data centers in regions like the U.S. and China will strain existing infrastructure. While these regions have the capacity to generate the energy needed, many data centers still rely on outdated, inefficient systems, which will only exacerbate the challenge.
But will bureaucracy hinder countries from reaching their AI potential? Regulatory hurdles, delays in grid development, and challenges in resource allocation could slow efforts to meet the rising demand. Governments may struggle to keep pace with AI's rapid growth, impeding progress on critical infrastructure updates and the scaling of sustainable energy solutions.
As this demand surges, it won't be limited to tech giants. With AI becoming more accessible, energy demand will rise across industries. Yet much of today's infrastructure is ill-equipped for the task. Many data centers still rely on inefficient cooling systems and legacy hard drives.
Building the next generation of AI systems will require a shift toward more sustainable hardware: high-capacity SSDs, decentralised storage that taps into idle capacity, and liquid cooling systems that use less water and power. Companies like Nvidia are already leading the way with Blackwell GPUs, which Nvidia claims are up to 25 times more energy efficient than their predecessors.
But efficiency introduces its own paradox. William Stanley Jevons observed in the 1800s that increased efficiency in resource use often leads to higher total consumption, a concept now known as the 'Jevons Paradox'. AI is already showing signs of following this pattern. When DeepSeek released its R1 model, it claimed to use 11 times fewer computing resources than Meta's Llama and cost just $6 million to train, compared to Llama's $60 million.
Yet that drop in cost sparked market concerns: would cheaper, more efficient models accelerate development, drive up energy use, and add further pressure to power grids? It's not guaranteed that every AI model will follow this path, but the trend is clear. As Microsoft CEO Satya Nadella put it, 'Jevons Paradox strikes again.'
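The mechanism behind the Jevons Paradox is simple arithmetic: if usage grows faster than efficiency improves, total consumption still rises. The sketch below uses hypothetical numbers (an 11x efficiency gain, as DeepSeek claimed, paired with an assumed 20x growth in query volume) purely to make the dynamic concrete.

```python
# Toy illustration of the Jevons Paradox: each query gets much cheaper
# in energy terms, yet total energy use rises because demand grows
# faster than efficiency improves. All numbers are hypothetical.

energy_per_query_before = 1.0      # arbitrary energy units per query
efficiency_gain = 11               # e.g. an 11x more efficient model
energy_per_query_after = energy_per_query_before / efficiency_gain

queries_before = 1_000_000
usage_growth = 20                  # assumed demand growth as access widens
queries_after = queries_before * usage_growth

total_before = energy_per_query_before * queries_before
total_after = energy_per_query_after * queries_after

print(f"Total energy ratio: {total_after / total_before:.2f}")
```

With these numbers, per-query energy falls by 91% while total energy use still climbs by about 82%. Efficiency alone does not guarantee lower aggregate demand.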
This tension between efficiency and demand underscores the complexity of today's energy challenge. Countries like India and China, home to nearly three billion people, are still in the process of building. Their industrial revolutions are far from over, and their need for reliable, affordable power is only accelerating. Shutting off fossil fuels isn't an option when the priority is lifting hundreds of millions into the middle class.
The systems driving global economic, industrial, and digital growth are built on materials that demand immense energy. The four pillars of modern life depend on it, and so does innovation. From data centers to semiconductors, building the future takes power. Simply put, progress needs energy.
In that light, the countries best positioned to meet the AI-driven surge in energy demand may not be the ones with the most ambitious targets, but the ones with abundant energy resources and the ability for government approvals to move quickly. In the race to power the future, bureaucracy, not technology, might be the biggest bottleneck.