Tech firms say deals for power give new life to nuclear plants at risk of going offline


NZ Herald · 5 days ago
Meta signed a 20-year agreement for the power flowing from a large legacy reactor in Illinois.
Microsoft struck a deal to restart a reactor at Pennsylvania's Three Mile Island plant, next to the unit that was shut down after a partial meltdown in 1979.
And last month Amazon locked up power from a 42-year-old nuclear plant down the Susquehanna River in the same state.
Tech companies are scouring the nation for other geriatric nuclear plants to power their AI dreams, according to interviews with nuclear industry officials and company earnings calls.
Their interest is focused on the roughly two dozen operating plants in unregulated markets, which are in many cases free to sell power to the highest bidder. They make up about half of the 54 plants still operating in the US.
The tech firms say the deals give new life to plants at risk of going offline or that have already been shut down.
Contracts that lock in rates for decades are attractive to plant operators, and the electricity flows without directly generating new carbon emissions.
Critics say Silicon Valley's nuclear spree will make it more likely that consumers will face electricity rate hikes or shortages in coming years as the US faces soaring demand for power - driven in part by new data centres.
By locking up ageing nuclear plants instead of building new power generation, tech firms could leave communities to fall back on fossil fuels, extending the life of polluting coal and gas plants.
A few years ago, nuclear energy struggled to compete with cheaper renewables and natural gas, but all power sources are now in greater demand.
Contracts with tech firms can offer nuclear plant operators as much as double the market rate for electricity.
Jackson Morris, a director with the environmental advocacy group Natural Resources Defense Council, said tapping nuclear energy allows companies to keep pledges to use carbon-free power, but 'doesn't do anything to solve for the impact they're having on consumers'.
'They're insulating themselves from their own impact,' he said.
Amazon, Google, Meta, and Microsoft declined to answer questions about which additional nuclear plants they may be seeking to buy power from, as well as the potential impacts of such purchases on other ratepayers and the environment. Amazon founder Jeff Bezos owns the Washington Post.
All of the companies say they mitigate the impact of their energy use on other customers by working with utilities to shield those customers from funding infrastructure that serves only data centres, and by investing in bringing new clean technologies to the power grid.
Tech firms say their data centres will eventually be powered by a new generation of cheaper but more sophisticated nuclear reactors, to be designed with help from AI.
However, the technology has been stymied by engineering issues, supply chain challenges and regulatory hurdles. Google and Microsoft are also investing in fusion energy, which is even less proven.
Controls, monitors and indicator lights fill the main control room at Three Mile Island last year. Photo / Wesley Lapointe, The Washington Post
'It turns out it is hard to go from all of that fancy new technology on a spreadsheet to an actual piece of infrastructure that isn't run with analogue controls,' said Ted Nordhaus, co-founder of the Breakthrough Institute, a California-based energy think-tank.
'Right now there is not much else to do other than try to squeeze every electron you can out of the existing nuclear fleet.'
Chain reaction
Energy companies that own nuclear plants are thrilled by the tech industry's recent interest, calling it a springboard for nuclear power's resurgence.
New Jersey power company PSEG told investors in February that it is in talks with tech firms about selling large amounts of power directly from its nuclear reactors on what is known as the Artificial Island complex in Delaware Bay.
Company chief executive Ralph LaRossa said in April that requests from data centres for new power from the utility have exploded over the past year, jumping 16-fold to 6.4 gigawatts, enough electricity to power several million homes.
In Texas, energy company Vistra says it is in talks with tech firms interested in buying energy from the Comanche Peak nuclear plant, near Fort Worth, and possibly others it owns in Ohio and Pennsylvania.
'I think we will see more large deals,' said Dan Eggers, executive vice-president at Constellation Energy, which owns or partially owns 13 nuclear energy complexes across the country.
Constellation has already rezoned land next to the Byron Clean Energy Centre, a nuclear plant in Illinois, so tech companies can build data centres there.
It is seeking similar changes at the campus of the Calvert Cliffs nuclear plant in Maryland on Chesapeake Bay. The company says it is also contemplating new deals with tech companies for long-term nuclear power contracts in Pennsylvania and New York.
Lawmakers and regulators in some communities are concerned data centre nuclear deals could increase costs for other ratepayers and weaken the power grid.
Some Maryland lawmakers want to ban Constellation from inviting data centre construction alongside Calvert Cliffs, which produces nearly 40% of the state's electricity. A report from the state's Public Service Commission warns that siphoning energy from the plant away from the power grid for data centres could destabilise the system.
The Calvert Cliffs nuclear power plant in Lusby, Maryland, is seen in 2011. Photo / Jonathan Newton, The Washington Post
'In addition to being costly to replace a large nuclear plant, the quality of the generation … would be difficult to replace,' the report says. Unlike solar or wind facilities, nuclear power provides round-the-clock electricity when the plants are operating, in any weather.
In many cases, nuclear power that gets redirected to tech companies would be backfilled on the power grid with gas or coal generation.
Nuclear industry officials say the solution is not restricting deals, but building more plants.
'It is short sighted to say we will just ignore all this demand over the next few years and tell these companies to get their power somewhere else, when this could set us up for a lot of growth in the industry,' said Benton Arnett, senior director of markets and policy at the Nuclear Energy Institute, an industry group.
But even nuclear executives working with tech firms acknowledge that pulling zero-emissions nuclear energy away from other customers will have an impact on the climate and can be out of sync with the ambitious commitments tech firms have made to reduce their carbon footprint.
'A growing list of people are realising they can't have everything they want,' said Robert Coward, principal officer at MPR Associates, one of the nuclear industry's leading technical services firms.
Critical mass
The scramble by tech firms to secure more nuclear energy quickly has led Silicon Valley companies to some unexpected places.
They include a dormant construction site in South Carolina, where plans to build a Three Mile Island-size nuclear plant were abandoned in 2017, after the developer burned through US$9 billion on a project that struggled with cost overruns and engineering setbacks. Local ratepayers were saddled with the bill.
Federal prosecutors in 2020 secured prison sentences for executives involved with the project for lying to investors and ratepayers about its viability.
Now, several big tech companies are among those that have expressed interest in bringing the VC Summer nuclear project back to life, according to testimony from officials at utility Santee Cooper, after it invited proposals for restarting the project.
A utility spokesperson would not say if there are tech companies among the three or four proposals she said are finalists for a potential deal.
Tech firms are also eyeing a revival of Duane Arnold Energy Centre in Iowa, a 1970s vintage nuclear plant majority-owned by NextEra that was mothballed in 2020 after a fierce storm damaged its cooling towers, according to company earnings calls.
The repairs were initially deemed too costly, but data centres have shifted the economics of nuclear energy, and NextEra is mulling a reboot to serve the facilities.
'If we continue to see the kind of prices Microsoft is willing to pay for nuclear power from Three Mile Island, these type of deals become a solid economic proposition,' said Carly Davenport, a utilities analyst at Goldman Sachs.
She said estimates show the tech company is paying as much as twice the going rate on the open market, and locking in for a 20-year contract.
Duane Arnold is one of the last retired plants intact enough to restart.
Many of the retired plants in the US have already been dismantled.
But tech companies are finding ways to squeeze more juice out of active reactors in the ageing national fleet, pursuing reactor 'uprates' from federal regulators that allow increased output.
Using that tactic, nuclear power companies aim to increase the output of the existing US fleet by the equivalent of three large new reactors.
As more deals involving ageing reactors emerge, consumer advocates and environmental groups are growing concerned about the impact on everyday ratepayers and the planet.
Amazon reconfigured its deal in Pennsylvania after federal regulators rejected it, expressing concern about the effects on consumer electricity bills.
The company had proposed routing power from the plant directly to nearby data centres, allowing it to avoid paying usage fees for the electric grid.
A caution sign warns of radioactive exposure on the turbine deck at Three Mile Island, which is being renamed Crane Clean Energy Centre. Photo / Wesley Lapointe, The Washington Post
The online retailer last month announced a deal with plant owner Talen in which it agreed to pay grid fees, a contract that will effectively lock up a large chunk of existing power generation at a time when the Mid-Atlantic power grid desperately needs more energy.
The deal is notable because it puts an existing nuclear plant on sound economic footing for another decade of emissions-free power generation, said former federal energy commissioner Allison Clements.
However, Amazon is also removing supply from the grid just as demand from AI and other uses such as electric cars and air conditioners is spiking.
'There isn't enough power on the grid,' Clements said, and the increased load forecast by analysts, utilities and grid operators cannot be met by existing power sources. 'There's not enough room on the system.'