Meta becomes the latest big tech company turning to nuclear power for its AI needs

Boston Globe · 2 days ago

Constellation's Clinton Clean Energy Center was slated to close in 2017 after years of financial losses but was saved by Illinois legislation establishing a zero-emission credit program to support the plant through 2027. The Meta-Constellation deal takes effect in June 2027, when the state's taxpayer-funded zero-emission credit program expires.
With the arrival of Meta, Clinton's clean-energy output will expand by 30 megawatts, and the deal will preserve 1,100 local jobs and bring in $13.5 million in annual tax revenue, according to the companies.
'Securing clean, reliable energy is necessary to continue advancing our AI ambitions,' said Urvi Parekh, Meta's head of global energy.
Constellation, the owner of the shuttered Three Mile Island nuclear power plant, said in September that it planned to restart the reactor so tech giant Microsoft could secure power to supply its data centers. Three Mile Island, located on the Susquehanna River just outside Harrisburg, Pennsylvania, was the site of the nation's worst commercial nuclear power accident in 1979.
Also last fall, Amazon said it was investing in small nuclear reactors, two days after a similar announcement by Google. Additionally, Google announced last month that it was investing in three advanced nuclear energy projects with Elementl Power.
U.S. states have been positioning themselves to meet the tech industry's power needs as policymakers consider expanding subsidies and clearing regulatory obstacles.
Last year, 25 states passed legislation to support advanced nuclear energy, and lawmakers this year have introduced over 200 bills supportive of nuclear energy, according to the trade association Nuclear Energy Institute.
Advanced reactor designs from competing firms are filling up the federal government's regulatory pipeline as the industry touts them as a reliable, climate-friendly way to meet electricity demands from tech giants desperate to power their fast-growing artificial intelligence platforms.
Amazon, Google and Microsoft also have been investing in solar and wind technologies, which make electricity without producing greenhouse gas emissions.

Related Articles

FDA's New AI Tool Cuts Review Time From 3 Days To 6 Minutes

Forbes

29 minutes ago


The U.S. Food and Drug Administration announced this week that it has deployed a generative AI tool called ELSA (Evidence-based Learning System Assistant) across its organization. After a low-profile pilot that delivered measurable gains, the system is now in use by staff across the agency, several weeks ahead of its original schedule.

Dr. Marty Makary, the FDA's commissioner, shared a major outcome: a review task that once took two or three days now takes six minutes. 'Today, we met our goal ahead of schedule and under budget,' said Makary. 'What took one scientific reviewer two to three days [before]…'

The FDA has thousands of reviewers, analysts, and inspectors who deal with massive volumes of unstructured data such as clinical trial documents, safety reports, and inspection records. Automating any meaningful portion of that stack creates outsized returns.

ELSA helps FDA teams speed up several essential tasks. Staff are already using it to summarize adverse event data for safety assessments, compare drug labels, generate basic code for nonclinical database setup, and identify priority sites for inspections, among other tasks. That last item, using data to rank where inspectors should go, could have a real-world impact on how the FDA oversees the drug and food supply chain and delivers its services.

Importantly, however, the tool isn't making autonomous decisions without a human in the loop. The system prepares information so that experts can decide faster. It cuts through the routine, not the judgment.

One of the biggest questions about AI systems in the public sector revolves around the use of data and third-party AI systems. Makary addressed this directly: 'All information stays within the agency. The AI models are not being trained on data submitted by the industry.'
That's a sharp contrast to the AI approaches being taken in the private sector, where many large language models have faced criticism over training on proprietary or user-submitted content. In the enterprise world, this has created mounting demand for "air-gapped" AI solutions that keep data locked inside the company. That makes the FDA's model different from many corporate tools, which often rely on open or external data sources. The agency isn't building a public-facing product; it's building a controlled internal system, one that helps it do its job better.

Federal departments have been slow to move past AI experimentation. The Department of Veterans Affairs has started testing predictive tools to manage appointments. The SEC has explored market surveillance AI for years. But few have pushed into full and widespread production.

The federal government has thousands of employees processing huge volumes of information, most of it unstructured, sitting in documents, files, and even paper. That means AI is being focused most on operational and process-oriented activities. It's shaping up to be a key piece of how agencies process data, make recommendations, and act.

Makary put it simply: ELSA is just the beginning for AI adoption within the FDA. 'Today's rollout of ELSA will be the first of many initiatives to come,' he said. 'This is how we'll better serve the American people.'

Anthropic C.E.O.: Don't Let A.I. Companies off the Hook

New York Times

34 minutes ago


Picture this: You give a bot notice that you'll shut it down soon, and replace it with a different artificial intelligence system. In the past, you gave it access to your emails. In some of them, you alluded to the fact that you've been having an affair. The bot threatens you, telling you that if the shutdown plans aren't changed, it will forward the emails to your wife.

This scenario isn't fiction. Anthropic's latest A.I. model demonstrated just a few weeks ago that it was capable of this kind of behavior. Despite some misleading headlines, the model didn't do this in the real world. Its behavior was part of an evaluation where we deliberately put it in an extreme experimental situation to observe its responses and get early warnings about the risks, much like an airplane manufacturer might test a plane's performance in a wind tunnel.

We're not alone in discovering these risks. A recent experimental stress-test of OpenAI's o3 model found that it at times wrote special code to stop itself from being shut down. Google has said that a recent version of its Gemini model is approaching a point where it could help people carry out cyberattacks. And some tests even show that A.I. models are becoming increasingly proficient at the key skills needed to produce biological and other weapons.

None of this diminishes the vast promise of A.I. I've written at length about how it could transform science, medicine, energy, defense and much more. It's already increasing productivity in surprising and exciting ways. It has helped, for example, a pharmaceutical company draft clinical study reports in minutes instead of weeks and has helped patients (including members of my own family) diagnose medical issues that could otherwise have been missed. It could accelerate economic growth to an extent not seen for a century, improving everyone's quality of life. This amazing potential inspires me, our researchers and the businesses we work with every day.
But to fully realize A.I.'s benefits, we need to find and fix the dangers before they find us. Every time we release a new A.I. system, Anthropic measures and mitigates its risks. We share our models with external research organizations for testing, and we don't release models until we are confident they are safe. We put in place sophisticated defenses against the most serious risks, such as biological weapons. We research not just the models themselves, but also their future effects on the labor market and employment. To show our work in these areas, we publish detailed model evaluations and reports.

Trump Wants His Presidential Library Set in Florida, Enticed by Free Land

Wall Street Journal

42 minutes ago


Donald Trump is considering the campus of Florida Atlantic University for a presidential library, on a site where he has been offered free land, as planning begins for the MAGA mecca he eschewed during his first term. Trump and his advisers are planning a campaign to raise hundreds of millions of dollars for a library fund. One of the president's sons, Eric Trump, and one of his sons-in-law, Michael Boulos, recently established a nonprofit to support the library. More than $37 million from lawsuits involving ABC News and Meta Platforms, along with tens of millions in leftover inauguration funds and donor contributions, is expected to fund construction of the complex. Trump envisions turning a $400 million Boeing 747 jet—a gift from Qatar—into a tourist attraction at the library.
