OpenAI CEO sparks controversy after refusing to disclose information about latest AI model: 'It's more critical than ever to address'

Yahoo · a day ago
ChatGPT's popularity exploded upon its November 2022 debut, with the chatbot amassing a staggering 100 million active users in two months, but concerns about the technology's drain on resources have risen in tandem.
Quantifying the environmental impact of generative artificial intelligence (AI) and large language models (LLMs) like ChatGPT has been an elusive endeavor, and as The Guardian observed, OpenAI and CEO Sam Altman have been frustratingly opaque about the energy demands of the company's newest offering, GPT-5.
What's happening?
Per The Guardian, as of mid-2023, a simple query using ChatGPT used "about as much electricity as an incandescent bulb consumes in 2 minutes."
Research published as a preprint in March 2023 looked at ChatGPT's water usage, asserting that training the GPT-3 model could "directly evaporate 700,000 liters of clean freshwater, but such information has been kept a secret."
On August 7, ChatGPT's parent company OpenAI introduced GPT-5, its latest and most feature-heavy offering.
GPT-5 represented "a significant leap in intelligence over all our previous models, featuring state-of-the-art performance across coding, math, writing, health, visual perception, and more," OpenAI claimed as the model was unveiled.
The Guardian's coverage pointed out that ChatGPT's usage of resources would likely increase in conjunction with its capabilities, adding that OpenAI had been markedly silent on the subject over the past five years.
While OpenAI hasn't been forthcoming, experts have weighed in on what they suspect is a thirstier, more energy-hungry ChatGPT model.
"A more complex model like GPT-5 consumes more power both during training and during inference … I can safely say that it's going to consume a lot more power than GPT-4," said University of Illinois professor Rakesh Kumar, who researches AI and energy usage.
Why is GPT-5's energy usage concerning?
ChatGPT's leap from a million to 100 million users wasn't a blip. Back in March, TechCrunch analyzed usage trends in the years since the chatbot's November 2022 introduction.
"By November 2023, ChatGPT had reached another milestone of 100 million weekly active users, which grew to 300 million by December 2024, then 400 million in February 2025," the outlet explained.
Citing initial calculations from the University of Rhode Island's AI lab, The Guardian reported on Friday, August 8, that a single GPT-5 query could require an amount of energy that "would correspond to burning that incandescent bulb for 18 minutes."
Put another way, GPT-5 could use as much daily energy as 1.5 million households in the United States.
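
Those two figures can be cross-checked with simple arithmetic. The sketch below is a rough illustration, assuming a standard 60-watt incandescent bulb and an average US household consumption of about 29 kilowatt-hours per day; neither value comes from The Guardian's reporting, so treat the output as a ballpark check rather than official data.

```python
# Back-of-envelope check of the bulb and household comparisons above.
# Assumed values (not from The Guardian's reporting):
BULB_WATTS = 60              # a standard incandescent bulb
HOUSEHOLD_KWH_PER_DAY = 29   # rough average US household consumption

# Mid-2023 estimate: one ChatGPT query ~ the bulb burning for 2 minutes.
query_wh_2023 = BULB_WATTS * 2 / 60    # 2 Wh per query

# University of Rhode Island estimate for GPT-5: 18 bulb-minutes per query.
query_wh_gpt5 = BULB_WATTS * 18 / 60   # 18 Wh per query

# Daily consumption of 1.5 million US households, converted to watt-hours.
households_wh_per_day = 1.5e6 * HOUSEHOLD_KWH_PER_DAY * 1000

# Query volume at which GPT-5 would match those households' daily usage.
implied_queries_per_day = households_wh_per_day / query_wh_gpt5

print(f"Per query: {query_wh_2023:.0f} Wh (mid-2023) vs {query_wh_gpt5:.0f} Wh (GPT-5), "
      f"a {query_wh_gpt5 / query_wh_2023:.0f}x jump")
print(f"Implied queries per day: {implied_queries_per_day:.1e}")  # ~2.4e9
```

Under those assumptions, a GPT-5 query works out to roughly 18 watt-hours, nine times the mid-2023 estimate, and the household comparison implies on the order of 2.4 billion queries per day, broadly consistent with publicly reported ChatGPT usage volumes.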
University of California, Riverside, AI researcher Shaolei Ren said GPT-5's energy requirements "should be orders of magnitude higher than that for GPT-3" based on its size alone.
What can be done about AI's environmental impact?
Although AI researchers can make credible estimates, experts called for responsible corporate disclosures.
"It's more critical than ever to address AI's true environmental cost. We call on OpenAI and other developers to use this moment to commit to full transparency by publicly disclosing GPT-5's environmental impact," said University of Rhode Island professor Marwan Abdelatti.

Related Articles

4 Things Schools Need To Consider When Designing AI Policies
Forbes · 12 minutes ago

Artificial intelligence has moved from Silicon Valley boardrooms into homes and classrooms across America. A recent Pew Research Center study reveals that 26% of American teenagers now use AI tools for schoolwork—twice the number from two years prior. Many schools are rushing to establish AI policies. The result? Some are creating more confusion than clarity by focusing solely on preventing cheating while ignoring the broader educational opportunities AI presents. The challenge shouldn't be about whether to allow AI in schools—it should be about how to design policies that strike a balance between academic integrity and practical preparation for an AI-driven future. Here are four essential considerations for effective school AI policies.

1. Address Teacher AI Use, Not Just Student Restrictions

The most significant oversight in current AI policies? They focus almost exclusively on what students can't do while completely ignoring teacher usage. This creates confusion and sends mixed messages to students and families. Most policies spend paragraphs outlining student restrictions but fail to answer basic questions about educator usage: Can teachers use AI to create lesson plans? Are educators allowed to use AI for generating quiz questions or providing initial feedback on essays? What disclosure requirements exist when teachers use AI-generated content?

When schools prohibit students from using AI while allowing teachers unrestricted access, the message becomes hypocritical. Students notice when their teacher presents an AI-generated quiz while simultaneously forbidding them from using AI for research. Parents wonder why their children face strict restrictions while educators operate without clear guidelines. If students are required to disclose AI usage in assignments, teachers should identify when they've used AI for lesson materials. This consistency builds trust and models responsible AI integration.

2. Include Students in AI Policy Development

Most AI policies are written by administrators who haven't used ChatGPT for homework or witnessed peer collaboration with AI tools. This top-down approach creates rules that students either ignore or circumvent entirely. When we built AI guidelines for WITY, our AI teen entrepreneurship platform at WIT - Whatever It Takes, we worked directly with students. The result? Policies that teens understand and respect because they helped create them. Students bring critical information about real-world AI use that administrators often miss. They are aware of which platforms their classmates use, how AI supports various subjects, and where current rules create confusion. When students participate in policy creation, compliance increases significantly because the rules feel collaborative rather than punitive.

3. Balance AI Guardrails With Innovation Opportunities

Many AI policies resemble legal warnings more than educational frameworks. Fear-based language teaches students to view AI as a threat rather than a powerful tool requiring responsible use. Effective policies reframe restrictions as learning opportunities. Instead of "AI cannot write your essays," try "AI can help you brainstorm and organize ideas, but your analysis and voice should drive the final work." Schools that blanket-ban AI usage miss opportunities to prepare students for careers where AI literacy will be essential.

AI access can also vary dramatically among students. While some have premium ChatGPT subscriptions and access to the latest tools, others may rely solely on free versions or school-provided resources. Without addressing this gap, AI policies can inadvertently increase educational inequality.

4. Build AI Literacy Into Curriculum and Family Communication

In an AI-driven economy, rules alone don't prepare students for a future where AI literacy is necessary. Schools must teach students to think critically about AI outputs, understand the bias in AI systems, and recognize the appropriate applications of AI across different contexts. Parents often feel excluded from AI conversations at school, creating confusion about expectations. This is why schools should explain their AI policies in plain language, provide examples of responsible use, and offer resources for parents who want to support responsible AI use at home. When families understand the educational rationale behind AI integration—including teacher usage and transparency requirements—they become partners in developing responsible use habits rather than obstacles to overcome.

AI technology changes rapidly, making static policies obsolete within months. Schools should schedule annual policy reviews that include feedback from students, teachers, and parents about both student and teacher AI usage.

AI Policy Assessment Checklist

School leaders should evaluate their current policies against these seven criteria:

• Teacher Guidelines: Do policies clearly state when and how teachers can use AI? Are disclosure requirements consistent between students and educators?
• Student Input: Have students participated in creating these policies? Do rules reflect actual AI usage patterns among teens?
• Equity of Access: Can all students access the same AI tools, or do policies create advantages for families with premium subscriptions?
• Family Communication: Can parents easily understand the policies? Are expectations clear for home use? Are there opportunities for parent workshops?
• Innovation Balance: Do policies encourage responsible experimentation, or do they focus only on restrictions? Does the policy prepare students for the AI-driven workforce?
• Regular Updates: Is there a scheduled review process as AI technology evolves? Does the school welcome feedback from students, teachers, and parents?
• Skills Development: Do policies include plans for teaching AI literacy alongside restrictions? Who is teaching this class or workshop?

Moving Forward: AI Leadership

The most effective approach treats students as partners, not adversaries. When teens help create the rules they'll follow, when teachers model responsible usage, and when families understand the educational reasoning behind policies, AI becomes a learning tool rather than a source of conflict. Schools that embrace this collaborative approach will produce graduates who understand how to use AI ethically and effectively—exactly the capabilities tomorrow's economy demands.

White House AI czar David Sacks says 'AI psychosis' is similar to the 'moral panic' of social media's early days
Business Insider · 12 minutes ago

AI can create a diet plan, organize a calendar, and provide answers to an endless variety of burning questions. Can it also cause a psychiatric breakdown? David Sacks, the White House official spearheading America's AI policies, doesn't think so.

President Donald Trump's AI and crypto czar discussed "AI psychosis" during an episode of the "All-In Podcast" published Friday. While most people engage with chatbots without a problem, a small number of users say the bots have encouraged delusions and other concerning behavior. For some, ChatGPT serves as an alternative to professional therapists. A psychiatrist earlier told Business Insider that some of his patients exhibiting what's been described as "AI psychosis," a nonclinical term, used the technology before experiencing mental health issues, "but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities."

During the podcast, Sacks doubted the whole concept of "AI psychosis." "I mean, what are we talking about here? People doing too much research?" he asked. "This feels like the moral panic that was created over social media, but updated for AI."

Sacks then referred to a recent article featuring a psychiatrist, who said they didn't believe using a chatbot inherently induced "AI psychosis" if there aren't other risk factors — including social and genetic — involved. "In other words, this is just a manifestation or outlet for pre-existing problems," Sacks said. "I think it's fair to say we're in the midst of a mental health crisis in this country." Sacks attributed the crisis instead to the COVID-19 pandemic and related lockdowns. "That's what seems to have triggered a lot of these mental health declines," he said.

After several reports of users suffering mental breaks while using ChatGPT, OpenAI CEO Sam Altman addressed the issue on X after the company rolled out the highly anticipated GPT-5. "People have used technology, including AI, in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," Altman wrote. "Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot." Earlier this month, OpenAI introduced safeguards in ChatGPT, including a prompt encouraging users to take breaks after long conversations with the chatbot. The update will also change how the chatbot responds to users asking about personal challenges.

I'm a software engineer who spent nearly 7 years at Twitter before joining OpenAI. Here's how I got hired and why I love it.
Yahoo · 28 minutes ago

• Jigar Bhati transitioned from Twitter to OpenAI for a startup-like environment.
• He said his work experience in infrastructure and scaling helped him get hired at OpenAI.
• Bhati said OpenAI still feels like a startup, with a focus on productivity and collaboration.

This is an as-told-to essay based on a conversation with Jigar Bhati, a member of technical staff at OpenAI. He's worked at the company since 2023 and previously worked as a software engineer at Twitter. This story has been edited for length and clarity.

I was working at Twitter for almost seven years and was looking for a change in general. Twitter was a big tech company, and it was handling millions of users already. I wanted to join a company that was a bit smaller in scale and work in a startup-like environment.

When ChatGPT launched, software engineers were in awe. So I was already using it before joining OpenAI. I was fascinated by the product itself. There was definitely that connection with the product that, as a software engineer, you often don't feel. I could also see there was a huge opportunity and huge impact to be created as part of joining OpenAI in general.

And if I look back at the past two years of my journey, I think that's mostly true. It's been exciting seeing the user base grow over two years. I got to work on and lead some critical projects, which was really great for my own professional development. And you're always working with smart people. OpenAI hires probably the best talent out there. You're contributing while also learning from different people, so there's exponential growth in your professional career as well.

Building up the right work experience is key

My career at Twitter and before that had been focused on infrastructure capabilities and handling large-scale distributed systems, which was something the team was looking for when I interviewed with OpenAI. When I joined OpenAI, there were a lot of challenges with respect to scaling infrastructure. So my expertise has helped steer some of those challenges in the right direction. Having that relevant experience helps when you're interviewing for any team. So one of the best things is to find the right team that has the closest match for you.

Having public references, for me, helped. I had some discussions and conference talks and met a lot of people as part of that conference networking. That also helps you stay on top of state-of-the-art technological advancements, which helps in whatever work stream you're working with.

I think OpenAI is probably one of the fastest-growing companies in the world, so you really need to move fast as part of that environment. You're shipping, or delivering, daily or weekly. And as part of that, you really need to have that cultural fit with the company. When you're interviewing, I think you need to be passionate about working at OpenAI and solving hard problems.

In general, it's important for professionals to have a good career trajectory so that they can showcase how they have solved real, hard problems at a large scale. I think that goes a long way. Starting your career at a startup and starting at a larger tech company are both fine; it's up to the student's interests and cultural fit as well. OpenAI is not just a place for experienced engineers. OpenAI also hires new grads and interns. I've definitely seen people entering as part of that pipeline. And it's amazing to see how they enjoy working at OpenAI and how they come up with new ideas and new capabilities and new suggestions.

But whichever place you end up in, I think it's important to have good growth prospects professionally and to be able to ship products. At any company you can create impact for the company and yourself, and build that career trajectory.

OpenAI still feels like a startup

One of the most exciting things is that OpenAI still operates like it operated two years ago. It still feels like a startup, even though we may have scaled up the number of engineers, the number of products, and the size of the user base. It's still very much a startup-like environment, and there's a big push for productivity and collaboration. The velocity and the productivity you get working at OpenAI are definitely much higher than at some of the other companies I've worked with. That makes things really exciting, because you get to ship products on a continuous basis, and you get into that habit of shipping daily rather than weekly or monthly or yearly.

It feels great to be working in AI right now. In addition to having a connection with the product, it makes things very interesting when you are able to work from within the company, shaping the direction of a product that will be used by the entire world. With GPT-5, for instance, we had early access to it and could steer the direction of its launch. Every engineer is empowered to provide feedback that will help improve the model, add new capabilities, and make it more usable and user-friendly for all 700 million users out there. I think that's pretty amazing and speaks to the kind of impact you can make as part of the company.

Read the original article on Business Insider
