Lawmakers advance historic bill that could reshape how data centers operate nationwide: 'Looking at it in a way that would protect our resources'

Yahoo | 26-04-2025

With the rapid rise of artificial intelligence and large language models like ChatGPT, data centers are popping up across the country to power our increasingly digital lives. But these centers require enormous resources to operate, especially electricity and water.
A new Minnesota bill could become the national standard for holding tech giants accountable.
The proposed legislation would set a new precedent for how data centers powering our digital world can operate sustainably. Facilities would need to report their energy use and water withdrawals publicly, pay conservation fees, and source at least 65% of their energy from carbon-free sources, according to Government Technology.
That's a big shift. Data centers use billions of gallons of water for cooling and consume vast amounts of electricity.
NPR has reported that the average data center uses about 300,000 gallons of water a day, and that larger facilities can use around 1 million gallons. With 11 large-scale centers proposed across Minnesota, and only one currently under construction, concerns are rising about whether the state's grid and water supply can support them sustainably.
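Putting those figures together gives a rough sense of the cumulative draw. The short sketch below is purely illustrative: it assumes, for the sake of the estimate, that all 11 proposed facilities are built and withdraw water at the NPR-reported rates, which are national figures rather than values for these specific projects.

```python
# Back-of-envelope estimate of cumulative daily water demand for the
# 11 proposed Minnesota data centers. Per-facility rates are the
# NPR-reported national figures, used here only as illustrative assumptions.

AVERAGE_CENTER_GAL_PER_DAY = 300_000   # NPR: average data center
LARGE_CENTER_GAL_PER_DAY = 1_000_000   # NPR: larger data centers
PROPOSED_CENTERS = 11                  # large-scale centers proposed in Minnesota

low = PROPOSED_CENTERS * AVERAGE_CENTER_GAL_PER_DAY
high = PROPOSED_CENTERS * LARGE_CENTER_GAL_PER_DAY

print(f"If all 11 ran at the average rate:      {low:,} gallons/day")
print(f"If all 11 ran at the large-center rate: {high:,} gallons/day")
# Prints roughly 3,300,000 to 11,000,000 gallons per day under these assumptions.
```

Even the low end of that range, roughly 3.3 million gallons a day, helps explain why water supply has become a central question in the debate.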
Critically, those centers cumulatively could demand more electricity than all Minnesota households combined, according to Government Technology. That has sparked debate over how the state will balance development with its 2040 goal of achieving carbon-free energy.
Rep. Patty Acomb, the top Democrat on the House Energy Committee and sponsor of the bill, said the objective is to ensure responsible growth. "I just wanted to make sure we were looking at it in a way that would protect our resources, protect utility ratepayers, and allow it to be a thriving industry," she told the Star Tribune.
Environmental groups have applauded the effort, while the Minnesota Chamber of Commerce and the Data Center Coalition oppose the bill.
If passed, this legislation could mark a turning point in how high-tech industries manage natural resources, benefiting local communities and helping protect Minnesota's environment for generations to come.
To curb the heavy water and energy use associated with data centers, some companies and researchers elsewhere are exploring alternatives.
Microsoft is working on a way to eliminate the need for water during chip-level cooling. Meanwhile, researchers at the University of Missouri have devised a new cooling system to control temperatures and make data centers more energy-efficient.


