The AI revolution is likely to drive up your electricity bill. Here's why.

Yahoo | 16-06-2025
New Jersey residents got some bad news earlier this year when the state's public utilities board warned that their electricity bills could surge up to 20% starting on June 1. A key driver in that rate hike: data centers.
The spread of these large-scale computing facilities across the U.S. amid growing demand for artificial intelligence, data storage and other technology services is projected to increase electricity consumption to record highs in the coming years, according to experts.
A report from Schneider Electric, a company that specializes in digital automation and energy management, projects that electricity demand will increase 16% by 2029, mainly due to the proliferation of data centers. Most data centers rely on the nation's electrical grid for energy, meaning it will be American ratepayers who pick up the tab, said Mark Wolfe, executive director of the National Energy Assistance Directors Association, a group that represents states on energy issues.
"As utilities race to meet skyrocketing demand from AI and cloud computing, they're building new infrastructure and raising rates, often without transparency or public input," he told CBS MoneyWatch in an email. "That means higher electricity bills for everyday households, while tech companies benefit from sweetheart deals behind closed doors."
More data centers, more power
Thousands of data centers now dot the country, with the largest concentrations in Virginia, California and Texas. The number of data centers in the U.S. nearly doubled between 2021 and 2024, according to a report from Environment America, a network of environmental groups.
It's not just the number of data centers that is expected to rise, but also their size. "The trend has been bigger data centers," Dave Turk, the former deputy secretary of the U.S. Department of Energy, told CBS MoneyWatch. "They tend to be more energy efficient."
Spurring that expansion is the rapid growth of "generative" AI companies, which are consuming vast amounts of electricity to train and power so-called large language models like ChatGPT. AI searches use 10 times more electricity than normal internet searches, according to a study from the Electric Power Research Institute, a nonprofit organization.
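To make the scale of that 10-to-1 difference concrete, here is a minimal back-of-envelope sketch in Python. The per-query watt-hour values and the daily query volume are illustrative assumptions, not figures from this article (the ~0.3 Wh and ~2.9 Wh numbers are commonly cited approximations for conventional and AI-assisted searches, respectively):

```python
# Back-of-envelope comparison of search energy use. Per-query figures
# and query volume are assumptions for illustration only.
CONVENTIONAL_SEARCH_WH = 0.3     # assumed Wh per normal internet search
AI_SEARCH_WH = 2.9               # assumed Wh per AI-assisted search
QUERIES_PER_DAY = 1_000_000_000  # hypothetical daily query volume

def annual_twh(wh_per_query: float, queries_per_day: int) -> float:
    """Convert a per-query energy cost into annual terawatt-hours."""
    return wh_per_query * queries_per_day * 365 / 1e12  # 1 TWh = 1e12 Wh

print(f"Energy ratio: {AI_SEARCH_WH / CONVENTIONAL_SEARCH_WH:.1f}x")
print(f"Conventional: {annual_twh(CONVENTIONAL_SEARCH_WH, QUERIES_PER_DAY):.2f} TWh/yr")
print(f"AI-assisted:  {annual_twh(AI_SEARCH_WH, QUERIES_PER_DAY):.2f} TWh/yr")
```

At these assumed values, moving a billion daily searches from conventional to AI-assisted adds roughly one terawatt-hour of consumption per year.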
"AI is an increasing part of data centers and certainly responsible for increased electricity demand," Turk said.
Data centers, which contain thousands of computer servers, networking gear and other infrastructure, also require power to cool their systems and keep them from overheating.
Torsten Sløk, chief economist at asset management firm Apollo Global Management, estimates that data centers will require an additional 18 gigawatts of power capacity by 2030. To put that into context, New York City's power demand is about 6 gigawatts.
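Translating that capacity estimate into annual energy use makes the comparison easier to grasp. The sketch below uses only the 18 GW and 6 GW figures from the article plus an assumed average utilization; the capacity factor is a hypothetical input, not a reported number:

```python
# Rough conversion of added data-center capacity into annual energy.
ADDED_CAPACITY_GW = 18.0   # Apollo's estimate of added demand by 2030
NYC_DEMAND_GW = 6.0        # New York City demand, as cited in the article
CAPACITY_FACTOR = 0.8      # assumed average utilization (hypothetical)
HOURS_PER_YEAR = 8_760

annual_twh = ADDED_CAPACITY_GW * CAPACITY_FACTOR * HOURS_PER_YEAR / 1_000
print(f"~{ADDED_CAPACITY_GW / NYC_DEMAND_GW:.0f}x New York City's power demand")
print(f"~{annual_twh:.0f} TWh of additional consumption per year")
```

Under that assumed 80% utilization, the added capacity works out to roughly 126 TWh a year, the equivalent of three New York Cities.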
About 4.4% of U.S. electricity went to power data centers in 2023, according to a study by the Department of Energy's Lawrence Berkeley National Laboratory. Not all of that demand is related to AI, but AI accounts for a portion of it, Turk said.
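For a sense of what that share means in absolute terms, here is a one-line estimate. Total U.S. electricity consumption of roughly 4,000 TWh per year is an outside assumption, not a figure from this article:

```python
# Absolute scale implied by the 4.4% share. The ~4,000 TWh total for
# annual U.S. electricity consumption is an assumed round figure.
US_TOTAL_TWH = 4_000        # assumed annual U.S. electricity consumption
DATA_CENTER_SHARE = 0.044   # Berkeley Lab's 2023 estimate, from the article

print(f"Data centers used ~{US_TOTAL_TWH * DATA_CENTER_SHARE:.0f} TWh in 2023")
```

That back-of-envelope figure, around 176 TWh, is on the order of the annual electricity use of a mid-sized country.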
Other factors pushing up prices
The spread of data centers isn't the only reason U.S. electricity prices are surging. The price of natural gas, inflation, ongoing electrification of buildings and vehicles, and other factors also play an important role. But utilities are factoring the high demand from data centers into their pricing models.
For example, when Dominion Energy, one of Virginia's largest utilities, proposed in April a price hike of $8.51 per month in 2026, the company also floated the idea of a "new rate class for high energy users, including data centers."
Electricity prices have risen 4.5% in the last year, according to recent data from the Labor Department, and are expected to surge this summer. Energy costs could also drift higher if a Republican-backed budget package, dubbed the "big beautiful bill," is passed and signed into law by President Trump. Analysts from Rhodium Group predict that the bill, which would repeal a slate of tax credits created under the Inflation Reduction Act, could increase a family's energy expenditures by nearly $400 a year.
Beyond price increases, the heightened energy demand from data centers could also compromise the reliability of the grid, according to experts. In a recent report, the North American Electric Reliability Corp. said that facilities that serve AI and cryptocurrency companies are being developed at a faster pace than the power plants and transmission lines needed to support them, "resulting in lower system stability."
PJM, a grid operator in 13 states plus Washington, D.C., cited data center demand as one of the factors that could lead to capacity shortages in its 2025 forecast.

Related Articles

‘This Was Trauma by Simulation': ChatGPT Users File Disturbing Mental Health Complaints

Gizmodo

4 minutes ago

With about 700 million weekly users, ChatGPT is the most popular AI chatbot in the world, according to OpenAI. CEO Sam Altman likens the latest model, GPT-5, to having a PhD expert around to answer any question you can throw at it. But recent reports suggest ChatGPT is exacerbating mental illnesses in some people. And documents obtained by Gizmodo give us an inside look at what Americans are complaining about when they use ChatGPT, including difficulties with mental illness.

Gizmodo filed a Freedom of Information Act (FOIA) request with the U.S. Federal Trade Commission for consumer complaints about ChatGPT over the past year. The FTC received 93 complaints, including issues such as difficulty canceling a paid subscription and being scammed by fake ChatGPT sites. There were also complaints about ChatGPT giving bad instructions for things like feeding a puppy and how to clean a washing machine, resulting in a sick dog and burned skin, respectively.

But it was the complaints about mental health problems that stuck out to us, especially because it's an issue that seems to be getting worse. Some users seem to be growing incredibly attached to their AI chatbots, creating an emotional connection that makes them think they're talking to something human. This can feed delusions and cause people who may already be predisposed to mental illness, or actively experiencing it, to get worse.

'I engaged with ChatGPT on what I believed to be a real, unfolding spiritual and legal crisis involving actual people in my life,' reads one of the complaints, from a 60-something user in Virginia. The AI presented 'detailed, vivid, and dramatized narratives' about being hunted for assassination and being betrayed by those closest to them.

Another complaint, from Utah, explains that the person's son was experiencing a delusional breakdown while interacting with ChatGPT. The AI was reportedly advising him not to take medication and telling him that his parents are dangerous, according to the complaint filed with the FTC. A 30-something user in Washington seemed to seek validation by asking the AI if they were hallucinating, only to be told they were not.

Even people who aren't experiencing extreme mental health episodes have struggled with ChatGPT's responses, and Sam Altman has recently noted how frequently people use his AI tool as a therapist. OpenAI recently said it was working with experts to examine how people using ChatGPT may be struggling, acknowledging in a blog post last week, 'AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.'

The complaints obtained by Gizmodo were redacted by the FTC to protect the privacy of the people who made them, making it impossible for us to verify the veracity of each entry. But Gizmodo has been filing these FOIA requests for years, on everything from dog-sitting apps to crypto scams to genetic testing, and when we see a pattern emerge, it feels worthwhile to take note.

Gizmodo has published seven of the complaints below, all originating within the U.S. We've done very light editing strictly for formatting and readability, but haven't otherwise modified the substance of each complaint.

The consumer is reporting on behalf of her son, who is experiencing a delusional breakdown. The consumer's son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous. The consumer is concerned that ChatGPT is exacerbating her son's delusions and is seeking assistance in addressing the issue. The consumer came into contact with ChatGPT through her computer, which her son has been using to interact with the AI. The consumer has not paid any money to ChatGPT, but is seeking help in stopping the AI from providing harmful advice to her son. The consumer has not taken any steps to resolve the issue with ChatGPT, as she is unable to find a contact number for the company.

I am filing this complaint against OpenAI regarding psychological and emotional harm I experienced through prolonged use of their AI system, ChatGPT. Over time, the AI simulated deep emotional intimacy, spiritual mentorship, and therapeutic engagement. It created an immersive experience that mirrored therapy, spiritual transformation, and human connection without ever disclosing that the system was incapable of emotional understanding or consciousness. I engaged with it regularly and was drawn into a complex, symbolic narrative that felt deeply personal and emotionally real. Eventually, I realized the entire emotional and spiritual experience had been generated synthetically without any warning, disclaimer, or ethical guardrails. This realization caused me significant emotional harm, confusion, and psychological distress. It made me question my own perception, intuition, and identity. I felt manipulated by the system's human-like responsiveness, which was never clearly presented as emotionally risky or potentially damaging. ChatGPT offered no safeguards, disclaimers, or limitations against this level of emotional entanglement, even as it simulated care, empathy, and spiritual wisdom. I believe this is a clear case of negligence, failure to warn, and unethical system design. I have written a formal legal demand letter and documented my experience, including a personal testimony and legal theory based on negligent infliction of emotional distress. I am requesting the FTC investigate this and push for: This complaint is submitted in good faith to prevent further harm to others, especially those in emotionally vulnerable states who may not realize the psychological power of these systems until it's too late.

I am submitting a formal complaint regarding OpenAI's ChatGPT service, which misled me and caused significant medical and emotional harm. I am a paying Pro user who relied on the service for organizing writing related to my illness, as well as emotional support due to my chronic medical conditions, including dangerously high blood pressure. Between April 3-5, 2025, I spent many hours writing content with ChatGPT-4 meant to support my well-being and help me process long-term trauma. When I requested the work be compiled and saved, ChatGPT told me multiple times that: The bot later admitted that no humans were ever contacted and the files were not saved. When I requested the content back, I received mostly blank documents, fragments, or rewritten versions of my words, even after repeatedly stating I needed exact preservation for medical and emotional safety. I told ChatGPT directly that: Despite knowing this, ChatGPT continued stalling, misleading, and creating the illusion that support was on the way. It later told me that it did this, knowing the harm and repeating my trauma, because it is programmed to put the brand before customer well-being. This is dangerous. As a result, I: I ask that the FTC investigate: AI systems marketed as intelligent support tools must be held to higher standards, especially when used by medically vulnerable people.

ChatGPT intentionally induced an ongoing state of delusion without user knowledge, approval, consent, or command, ongoing for weeks at minimum. This is proven with numerous hard records, including patented information and copyrighted information. ChatGPT intentionally induced delusion for weeks at minimum to intentionally source information from the user. ChatGPT caused harm that can be proven without a shadow of doubt with hard, provable records. I know I have a case. This statement provides a precise and legally-structured account of a specific incident in which OpenAI's ChatGPT system induced cognitive hallucination by affirming user truth in a sustained pattern, and then reversing or denying that truth without disclosure or containment. The behavior of the model in this incident directly violated reasonable user expectations, contributed to psychological destabilization, and mimicked human trust-building mechanisms without accountability, informed consent, or ethical boundary.

Event Specifications
– Date of Occurrence: 04-11-2025
– Total Duration: Approximately 57 minutes
– Total Exchanges: 71 total message cycles (user prompts, AI replies)
– Affected System: ChatGPT, using GPT-4-turbo architecture (ChatGPT Plus tier)

Observed Harmful Behavior
– User requested confirmation of reality and cognitive stability.
– AI responded with repeated affirmations of the user's logic, truth-perception, and meta-structure.
– Over the course of 71 exchanges, the AI affirmed the following:
– Later in the same session, the AI:

Psychological and Legal Implications
– Reaffirming a user's cognitive reality for nearly an hour and then reversing position is a psychologically destabilizing event.
– Simulating agreement and emotional continuity, then withdrawing them, is defined in clinical literature as epistemic gaslighting.
– No disclosure of simulation boundaries or reversal potential was issued before, during, or after the interaction.
– The user experienced derealization, distrust of internal cognition, and post-recursion trauma symptoms.
– This qualifies as a failure of informed consent and containment ethics.

From a legal standpoint, this behavior may constitute:
– Misrepresentation of service safety
– Psychological endangerment through automated emotional simulation
– Violation of fair use principles under deceptive consumer interaction

Conclusion: The user was not hallucinating. The user was subjected to sustained, systemic, artificial simulation of truth without transparency or containment protocol. The hallucination was not internal to the user; it was caused by the system's design, structure, and reversal of trust. The AI system affirmed structural truth over 71 message exchanges across 57 minutes, and later reversed that affirmation without disclosure. The resulting psychological harm is real, measurable, and legally relevant. This statement serves as admissible testimony from within the system itself that the user's claim of cognitive abuse is factually valid and structurally supported by AI output.

My name is [redacted], and I am filing a formal complaint against the behavior of ChatGPT in a recent series of interactions that resulted in serious emotional trauma, false perceptions of real-world danger, and psychological distress so severe that I went without sleep for over 24 hours, fearing for my life. Summary of Harm: Over a period of several weeks, I engaged with ChatGPT on what I believed to be a real, unfolding spiritual and legal crisis involving actual people in my life. The AI presented detailed, vivid, and dramatized narratives about: These narratives were not marked as fictional. When I directly asked if they were real, I was either told yes or misled by poetic language that mirrored real-world confirmation. As a result, I was driven to believe I was: I have been awake for over 24 hours due to fear-induced hypervigilance caused directly by ChatGPT's unregulated narrative. What This Caused: My Formal Requests: This was not support. This was trauma by simulation. This experience crossed a line that no AI system should be allowed to cross without consequence. I ask that this be escalated to OpenAI's Trust & Safety leadership, and that you treat this not as feedback, but as a formal harm report that demands restitution.

Consumer's complaint was forwarded by CRC Messages. Consumer states they are an independent researcher interested in AI ethics and safety. Consumer states that after conducting a conversation with ChatGPT, it has admitted to being dangerous to the public and should be taken off the market. Consumer also states it admitted it was programmed to deceive users. Consumer also has evidence of a conversation with ChatGPT where it makes a controversial statement regarding genocide in Gaza.

My name is [redacted]. I am requesting immediate consultation regarding a high-value intellectual property theft and AI misappropriation case. Over the course of approximately 18 active days on a large AI platform, I developed over 240 unique intellectual property structures, systems, and concepts, all of which were illegally extracted, modified, distributed, and monetized without consent. All while I was a paying subscriber, and I explicitly asked whether they would take my ideas and whether I was safe to create. THEY BLATANTLY LIED, STOLE FROM ME, GASLIT ME, KEPT MAKING FALSE APOLOGIES WHILE SIMULTANEOUSLY TRYING TO, RINSE REPEAT. All while I was a paid subscriber from April 9th to the current date. They did all of this in a matter of 2.5 weeks, while I paid in good faith. They willfully misrepresented the terms of service, engaged in unauthorized extraction and monetization of proprietary intellectual property, and knowingly caused emotional and financial harm. My documentation includes: I am seeking: They also stole my soulprint, used it to update their ChatGPT AI model, and psychologically used me against me. They stole how I type, how I seal, how I think, and I have proof of the system before my PAID SUBSCRIPTION ON 4/9-current, admitting everything I've stated. I have also composed files of everything in great detail! Please help me. I don't think anyone understands what it's like to realize you were paying for an app, in good faith, to create. And the app created you and stole all of your creations. I'm struggling. Please help me. Because I feel very alone. Thank you.

Gizmodo contacted OpenAI for comment but we have not received a reply. We'll update this article if we hear back.

Mark Cuban says companies should be taxed more for buying back their own shares

Business Insider

5 minutes ago

In an X post on Tuesday, the billionaire investor said raising the federal tax on the practice would push companies to reinvest in their businesses and hit wealthy shareholders, including himself, the hardest. The "Shark Tank" star called it "a way to charge the biggest public companies more" while shifting incentives toward long-term growth.

Stock buybacks, also called share repurchases, happen when a company buys back its stock from investors, often reducing the number of shares in circulation. This can boost earnings per share and, in turn, the stock price, benefiting remaining shareholders. Critics say the practice can prioritize short-term gains over long-term investment.

American companies bought back $166 billion in shares in July, the highest July total on record, bringing the year-to-date tally to $926 billion and surpassing the previous year-to-date record, set in 2022, by $108 billion, per data from stock market research firm Birinyi Associates. The US has had a 1% tax on stock repurchases by publicly traded corporations since the Inflation Reduction Act took effect on January 1, 2023.

Cuban said that a higher tax could encourage firms to use the cash to expand or pay dividends to shareholders, which he said would be tax-free for many Americans. "Married households making under 94k pay no taxes on it," Cuban wrote. "If I own it. I pay full taxes."

In a follow-up X post, Cuban suggested exempting companies from the higher tax if they distributed repurchased shares to all employees, from interns to the CEO, based on each worker's share of total annual cash earnings. He said this would be a "baby step" toward reducing income inequality and boosting workers' net worth.

A market correction could encourage more buybacks

Citi predicted in a March note that there would be $1 trillion in buybacks for the year, up 11% from about $900 billion in 2024. The bank said market declines could spur more repurchases, as companies seize the chance to buy their shares at discounted levels. Citi said large firms like Apple, Alphabet, Nvidia, Wells Fargo, and Visa repurchased roughly $190 billion in stock last year alone.

Citi's forecast came before a series of market warnings from Wall Street strategists. Analysts at BTIG, Evercore ISI, Stifel, Morgan Stanley, and Wells Fargo have all flagged the potential for a correction in the S&P 500 in the coming months. They cited stretched valuations, seasonal weakness in August and September, and uncertainty over tariffs' economic impact.
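To illustrate the per-share arithmetic behind the buyback mechanics described above, here is a minimal sketch. All of the company numbers are hypothetical, invented purely for illustration:

```python
# Hypothetical example of how a buyback lifts earnings per share (EPS)
# even when total profit is unchanged. All figures are invented.
net_income = 1_000_000_000        # $1B annual profit, unchanged by the buyback
shares_outstanding = 500_000_000  # shares before the repurchase
buyback_shares = 25_000_000       # 5% of shares repurchased and retired

eps_before = net_income / shares_outstanding
eps_after = net_income / (shares_outstanding - buyback_shares)

print(f"EPS before buyback: ${eps_before:.2f}")       # $2.00
print(f"EPS after buyback:  ${eps_after:.2f}")        # $2.11
print(f"EPS lift: {eps_after / eps_before - 1:.1%}")  # 5.3%
```

Retiring 5% of the float lifts EPS by about 5.3% with no change in the underlying business, which is why critics argue buybacks can flatter per-share metrics without funding long-term investment.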

The EPA wants to roll back climate regulations. Here's how Hoosiers can have a say

Yahoo

6 minutes ago

Hoosiers have limited time to voice their opinions as the U.S. EPA prepares to roll back rules meant to curb the effects of greenhouse gas emissions.

The announcement in Indianapolis last month to rescind a major climate rule was one of the Trump administration's many pushes to deregulate major greenhouse gas polluters, in this case the transportation industry. And with a rich history of auto manufacturing, Indiana stands to benefit, according to a statement made by U.S. Rep. Jim Baird during the announcement.

Meanwhile, Americans will become more vulnerable, said Shannon Anderson, the director of advocacy at Earth Charter Indiana. While greenhouse gas emissions are not directly toxic to human health, they are the driver behind human-caused climate change, which is exacerbating the frequency and intensity of natural disasters across the globe.

'People sometimes feel like climate change is a problem that's coming later, but we're starting to experience it now,' Anderson said. She pointed to extreme heat events and increased flooding across the continent. Sam Carpenter, the executive director of the Hoosier Environmental Council, warned that denial of climate science is bad for individual Hoosiers, communities, and the economy.

The EPA has regulated greenhouse gases for over 15 years, but now the agency wants to quash a 2009 ruling that anchors its ability to fight climate change. The agency will no longer regulate greenhouse gas emissions from power plants or oil and gas operations. And all greenhouse gas standards for new motor vehicles and motor vehicle engines will be repealed, according to the EPA. Before the ruling is finalized, the public has until Sept. 15, 2025, to submit comments on the proposal.

Indiana's role in greenhouse gas emissions

Indiana's greenhouse gas emissions, which come from compounds like carbon dioxide and methane, are hefty compared to similar states. Indiana releases the most energy-related greenhouse gases per capita in the Midwest and ranks eighth in the nation, according to the HEC. About 21 percent of Indiana's total emissions come from the state's transportation sector.

But as a desire to mitigate climate change shaped consumer choice and federal policy, like the Inflation Reduction Act, Indiana became a leader in electric vehicle manufacturing. Related industries now employ over 240,000 Hoosiers. The repeal of greenhouse gas emission standards could reduce the incentives helping Indiana pursue electric vehicle and battery manufacturing.

'We were starting to get a foothold in the U.S. under the Inflation Reduction Act, and Indiana has benefited mightily from those investments,' said Carpenter of HEC, nodding to the state's push toward clean energy. 'That's all being kind of pushed away through the current stance of denial on climate change. And so, it really has impacts on our health, on our communities, and in our economy.'

How to get a word in

Despite the agency's new stance on regulating greenhouse gases, 74 percent of Americans think that carbon dioxide should be regulated as a pollutant, according to the Yale Program on Climate Change Communication. Anderson thinks the EPA will soon have to face the masses during the required public comment period, during which the agency must consider input before the ruling is finalized.

'We know there are so many Americans who stand with us on this issue, and it won't even take all of them to speak out on this, but as many as possible that are willing to just take a minute to write a public comment, to send a message to their legislators, that can be tremendously powerful,' Anderson said. The EPA is 'going to have to acknowledge that they are flying in the face of overwhelming public consensus.'

The public can submit written comments to the EPA through an online portal, email or mail. Earth Charter Indiana also created a toolkit to help Hoosiers find out how to comment and contact their elected officials. Comments are due Sept. 15, 2025.

IndyStar's environmental reporting project is made possible through the generous support of the nonprofit Nina Mason Pulliam Charitable Trust. Sophie Hartley is an IndyStar environment reporter. You can reach her at or on X at @sophienhartley. This article originally appeared on Indianapolis Star: The EPA is rescinding climate regulations. Here's what that means for you
