Pa. Public Utility Commission sets hearing on AI data centers' impacts on electricity


Yahoo · March 29, 2025

The Susquehanna Steam Electric Station is partly owned by Allegheny Electric Cooperative Inc. (Photo via U.S. Nuclear Regulatory Commission)
Data centers used for artificial intelligence and other aspects of the online world are likely to have a significant impact on Pennsylvania's energy economy in coming years, according to the head of the state Public Utility Commission.
The PUC unanimously approved Chairperson Stephen DeFrank's motion Thursday to hold a hearing April 24 on how to protect consumers while harnessing opportunities for economic growth, technological advancement, electricity market stability, and national security.
'Balancing concerns like these is one of the primary mandates of the commission,' DeFrank said in the motion, adding that the commission would be required to provide non-discriminatory access to public utilities while guarding against undue burdens and costs for existing customers and risk to electric utilities.
At least two large data center projects are in the works that have the potential to affect Pennsylvania's electrical grid.
Constellation Energy announced in September a $1.6 billion investment to restart its nuclear power plant at Three Mile Island in Dauphin County. The deal would provide carbon-free electricity to supply power for software giant Microsoft's AI data centers in the region.
This month, Amazon Web Services announced the $650 million purchase of a data center with 1,200 acres of land adjacent to the Susquehanna Steam Electric Station nuclear power plant near Berwick, Luzerne County. AWS plans to build a data center campus that would consume as much energy as 900,000 homes.
Jon Gordon, wholesale markets manager for Advanced Energy United, an association of clean energy providers, told the Capital-Star the prospect of many large data centers being located in a region introduces new variables into the economy.
'Forecasting energy demand used to be relatively straightforward. Demand grew in a straight line with economic activity,' he said.
The demand from data centers could affect electricity supply and reliability when added to already increased demand from electric vehicles and heating, the retirement of fossil-fuel generating stations, the push to increase the amount of carbon-neutral energy being used, and delays in getting clean energy generation online, Gordon said.
'We don't know how speculative these projects are and how many are actually going to come online,' he said. 'In the energy industry, we need to figure out which ones are actually going to get built.'
DeFrank noted such projects may require upgrades to electrical distribution systems to connect to the grid. The PUC must ensure that if utilities pay for improvements to serve projects that ultimately do not materialize, current ratepayers are not left with the bill.
'In such cases, questions may arise about who will ultimately bear these stranded costs. Providing certainty may mean requiring deposits or other financial security, minimum contract terms, or some sort of breakage or termination fee for loads that decide not to proceed with construction,' DeFrank's motion said.
The PUC also needs to provide large electricity users with certainty regarding how long it will take and how much it will cost to connect to the grid. Users willing to pay for and build system upgrades may be able to connect faster. But utilities may take a conservative approach to large projects, DeFrank said.
Next month's hearing will be in Harrisburg and include panels representing electricity distribution companies, industrial customers and advocates.


Related Articles

ChatGPT's Sam Altman sends strong 2-word message on the future
Miami Herald · 34 minutes ago

As the AI boom continues to take over both the tech industry and the news cycle, there's one thing that's for sure: it's scaring a lot of people. AI is a technically complex topic that can be difficult to explain to the average person, but there's one sentiment that isn't hard to explain at all: the concept that AI might take your job.

So rather than try to understand AI's capabilities, or why every major tech company from Meta to Google to Nvidia is pouring billions of dollars into developing it, most people are going to zero in on the part that's personally applicable to them.

Related: Cathie Wood has a bold take on AI stealing your job

Some voices in the tech space have tried to present an opposite take on the whole "AI making you jobless" rhetoric. Ark Invest CEO Cathie Wood said in a recent tweet, "History shows that new technologies create many more jobs than they displace. We do not think that this time will be different."

OpenAI's Sam Altman is easily the AI movement's biggest figurehead, thanks to ChatGPT's runaway success. The company hit three million paid ChatGPT subscribers as of June, a sign that people are flocking to it in droves - and away from search engines. Research firm Gartner has even predicted that by 2026, traditional search engine volume will drop 25%.

Now Altman has penned a blog post addressing the topic of AI and how it's changing our world. It's a refreshing take that, for once, will give you some hope about the future of your career. Altman's post emphasizes that, compared to any time that has come before, the 2030s can be described with two powerful words: "wildly different."

Altman offers a reality check, saying, "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far, it's much less weird than it seems like it should be."

"We do not know how far beyond human-level intelligence we can go, but we are about to find out," he continued.

More Tech Stocks:
  • Palantir gets great news from the Pentagon
  • Analyst has blunt words on Trump's iPhone tariff plans
  • OpenAI teams up with legendary Apple exec

The OpenAI CEO doesn't hesitate to say that his company has recently built systems that are "smarter than people in many ways, and are able to significantly amplify the output of people using them." Altman also says ChatGPT is "already more powerful than any human who has ever lived," a phrase that may feel threatening to some, considering that LLMs are not human to begin with. But Altman sees even more ahead, predicting that AI will significantly mold our future.

Related: Microsoft has good news for Elon Musk, bad news for Sam Altman

"In the 2030s, intelligence and energy - ideas, and the ability to make ideas happen - are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else."

Altman also acknowledged that, yes, many jobs will go away as AI continues to evolve, but that won't be the end of the story. "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything," he says.
"There will be very hard parts like whole classes of jobs going away, but on the other hand, the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before. We probably won't adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big." Altman also points out a key asset of humanity that AI cannot duplicate, saying, "People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don't care very much about machines." Related: OpenAI teams up with legendary Apple exec The Arena Media Brands, LLC THESTREET is a registered trademark of TheStreet, Inc.

'This is coming for everyone': A new kind of AI bot takes over the web
Washington Post · 37 minutes ago

People are replacing Google search with artificial intelligence tools like ChatGPT, a major shift that has set a new kind of bot loose on the web. To offer users a tidy AI summary instead of Google's '10 blue links,' companies such as OpenAI and Anthropic have started sending out bots to retrieve and recap content in real time. They are scraping webpages, loading relevant content into the AI's memory, and 'reading' far more content than a human ever would.

Sam Altman Says AI Has Already Gone Past The Event Horizon But No Worries Since AGI And ASI Will Be A Gentle Singularity
Forbes · an hour ago

Speculating on the future of AI, including artificial general intelligence (AGI) and artificial superintelligence (ASI).

In today's column, I examine a newly posted blog piece by Sam Altman that has generated quite a bit of hubbub and controversy within the AI community. As the CEO of OpenAI, Sam Altman is considered an AI luminary, whose viewpoint on the future of AI carries an enormous amount of weight. His latest online commentary contains some eyebrow-raising indications about the current and upcoming status of AI, including aspects partially coated in AI-speak and other insider terminology that require mindful interpretation and translation. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might only be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

In a new posting on his personal blog dated June 10, 2025, entitled 'The Gentle Singularity,' the famed AI prognosticator laid out his view of where AI stands and where it is headed. There's a whole lot in there to unpack. His upbeat-worded opinion piece contains commentary about many undecided considerations, such as the ill-defined and indeterminate AI event horizon, the impacts of artificial superintelligence, various touted dates that suggest when we can expect things to really take off, hazy thoughts about the nature of the AI singularity, and much more. Let's briefly explore the mainstay elements.

A big question facing those who are deeply into AI is whether we are on the right track to attain AGI and ASI. Maybe we are, maybe we aren't. Sam Altman's reference to the AI event horizon alludes to the existing pathway that we are on, and he unequivocally states that, in his opinion, we have not only reached the event horizon but are avidly past it already. As espoused, the takeoff has started. Just to note, that's a claim embodying immense boldness and brashness, and not everyone in AI concurs with that viewpoint.

Consider these vital facets. First, in favor of that perspective, some insist that the advent of generative AI and large language models (LLMs) vividly demonstrates that we are now absolutely on the path toward AGI/ASI.
The incredible semblance of natural language fluency exhibited by the computational capabilities of contemporary LLMs seems to be a sure sign that the road ahead must lead to AGI/ASI. However, not everyone is convinced that LLMs constitute the appropriate route. There are qualms that we are already witnessing headwinds on how much generative AI can be further extended; see my coverage at the link here. Perhaps we are nearing a severe roadblock, and continued efforts will not get us any further bang for the buck. Worse still, we might be off-target and going in the wrong direction altogether.

Nobody can say for sure whether we are on the right path or not. It is a guess. Well, Sam Altman has planted a flag that we are incontrovertibly on the right path and that we've already zipped down the roadway quite a distance. Cynics might find this a self-serving perspective, since it reinforces and reaffirms the direction that OpenAI is currently taking. Time will tell, as they say.

Another consideration in the AI field is that perhaps there will be a kind of singularity that serves as a key point at which AGI or ASI will readily begin to emerge and keenly showcase that we have struck gold in terms of being on the right pathway. For my detailed explanation of the postulated AI singularity, see the link here.

Some believe that the AI singularity will be a nearly instantaneous split-second affair, happening faster than the human eye can observe. One moment we will be working stridently on pushing AI forward, and then, bam, the singularity occurs. It is envisioned as a type of intelligence explosion, whereby intelligence rapidly begets more intelligence. After the singularity happens, AI will be leaps and bounds better than it just was. In fact, it could be that we will have a fully complete AGI or ASI due to the singularity. One second earlier, we had plain AI, while an instant later we amazingly have AGI or ASI in our midst, like a rabbit out of a hat.

Perhaps, though, the singularity will be a long and drawn-out activity. There are those who speculate the singularity might get started and then take minutes, hours, or days to run its course. The time factor is unknown. Maybe the AI singularity will take months, years, decades, centuries, or longer to gradually unfurl. Additionally, there might not be anything resembling a singularity at all, and we've just concocted some zany theory that has no basis in reality.

Sam Altman's posting seems to suggest that the AI singularity is already underway (or maybe happening in 2030 or 2035) and that it will be a gradually emerging phenomenon, rather than an instantaneous one. Interesting conjecture.

Right now, efforts to forecast when AGI and ASI are going to be attained are generally based on putting a finger up into the prevailing AI winds and wildly gauging potential dates. Please be aware that the hypothesized dates have very little evidentiary basis to them. There are many highly vocal AI luminaries making brazen AGI/ASI date predictions. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. See my analysis of those dates at the link here.

A somewhat quieter approach to the gambit of date guessing is via the use of surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040.
Depending on how you interpret Sam Altman's latest blog post, it isn't clear whether AGI is happening by 2030 or 2035, or whether it is ASI instead of AGI, since he refers to superintelligence, which might be his way of expressing ASI or maybe AGI. There is a muddiness in differentiating AGI from ASI. Indeed, I've previously covered his changing definitions associated with AGI and ASI, i.e., the moving of the cheese, at the link here. We'll know how things turned out in presumably a mere 5 to 10 years. Mark your calendars accordingly.

An element of the posting that has especially raised the hackles of AI ethicists is that the era of AGI and ASI seems to be portrayed as solely uplifting and joyous. We are in a gentle singularity. That's certainly happy news for the world at large. Utopia awaits. There is a decidedly other side to that coin.

AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as 'P(doom),' which means the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI or x-risk.

The other camp entails the so-called AI accelerationists. They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity's problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans ever make, but that's good in the sense that AI will invent things we never could have envisioned.

No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times. For my in-depth analysis of the two camps, see the link here. You can readily discern which camp the posting sides with, namely roses and fine wine.

It is important to carefully assess the myriad pronouncements and proclamations being made about the future of AI. Oftentimes, the wording appears to brazenly assert that the future is utterly known and predictable. With a sense of flair and confidence, many of these prognostications can easily be misread as somehow a bushel of facts and knowns, rather than a bundle of opinions and conjecture.

Franklin D. Roosevelt wisely stated: 'There are as many opinions as there are experts.' Keep your eyes and ears open and be prudently mindful of all prophecies concerning the future of AI. You'll be immeasurably glad you were cautious and alert.
