
Every ChatGPT query you make uses water, and Sam Altman has revealed the exact figure
Generative AI has become part and parcel of our daily lives. Many organisations now rely on chatbots like ChatGPT, and on the large language models (LLMs) behind them, to get work done through agents and other AI tooling. But OpenAI's boss and the man behind ChatGPT, Sam Altman, has now revealed just how much water each of your ChatGPT queries consumes.
The figure appears in a blog post by Altman himself, titled 'The Gentle Singularity', in a passage about automated data centre production and the cost of intelligence eventually converging to near the cost of electricity.
Altman writes that people are often curious about how much energy a ChatGPT query uses. In terms of electricity, it is about 0.34 watt-hours, roughly what an oven would use in a little over one second, or a high-efficiency light bulb in a couple of minutes. As for water, Altman says a query consumes about 0.000085 gallons (0.00032 litres), which in real-world terms is about one-fifteenth of a teaspoon.
At that rate, for example, 1,000 ChatGPT queries would consume approximately 0.32 litres of water.
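To make the arithmetic concrete, here is a minimal sketch in Python that scales Altman's per-query figures. The only inputs are the numbers quoted above; the 1,000-query example mirrors the one in this article:

```python
# Per-query figures as quoted from Altman's blog post "The Gentle Singularity".
ENERGY_WH_PER_QUERY = 0.34        # watt-hours of electricity per query
WATER_GAL_PER_QUERY = 0.000085    # US gallons of water per query
LITRES_PER_GALLON = 3.78541       # US gallon to litre conversion

def footprint(queries: int) -> tuple[float, float]:
    """Return (energy in kWh, water in litres) for a given number of queries."""
    energy_kwh = queries * ENERGY_WH_PER_QUERY / 1000
    water_litres = queries * WATER_GAL_PER_QUERY * LITRES_PER_GALLON
    return energy_kwh, water_litres

# Example from the article: 1,000 queries.
kwh, litres = footprint(1_000)
print(f"1,000 queries: {kwh:.2f} kWh, {litres:.2f} L of water")
# -> 1,000 queries: 0.34 kWh, 0.32 L of water
```

By these figures, even a million queries would amount to roughly 340 kWh and about 322 litres of water; the scale concern comes from aggregating across the enormous daily query volume rather than from any single request.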
It has long been pointed out that running these AI models requires large data centres and enormous compute power. All of that demands more electricity, more power generation, and, of course, water for cooling. This is where AI's larger carbon footprint comes into play.
That said, while Altman does cite these figures in his official blog post, he does not explain how they were derived or whether they are based on internal research at OpenAI.

Related Articles


Time of India
39 minutes ago
Temasek joins Microsoft, BlackRock and MGX to develop AI infrastructure
Temasek has joined a consortium backed by Microsoft, BlackRock and tech investment company MGX to invest in and expand artificial intelligence infrastructure, according to BlackRock's investor day presentation slides on Thursday. The Singapore state investment company has joined the AI Infrastructure Partnership (AIP), a group that also includes BlackRock's Global Infrastructure Partners, the slides showed.

AIP, formed in September with a goal to initially invest more than $30 billion in AI-related projects, is one of the world's largest efforts to invest in the data centres and energy facilities needed to power AI applications such as ChatGPT. It aims to mobilise up to $100 billion, including debt financing, for such investments, which will focus on the United States.

Temasek's participation comes after the Kuwait Investment Authority joined AIP earlier in June. The sovereign wealth fund of Kuwait was the first non-founder financial anchor investor to join the consortium, which also counts partners including Nvidia and billionaire Elon Musk's xAI.

"Temasek's investment in the AI Infrastructure Partnership reflects our focus on the big shifts and trends of the future," Ravi Lambah, Temasek's head of strategic initiatives, said in an email to Reuters. "AI is potentially the most transformative and impactful technology for all sectors and businesses," he added.

Temasek did not disclose financial details of the investment. The global investment company had a net portfolio value of S$389 billion ($304 billion) as of March 31, 2024, according to its website.


Indian Express
an hour ago
The Browser Company launches AI web browser 'Dia' in invite-only beta
Dia, a new AI web browser developed by The Browser Company, was launched in beta on Wednesday, June 11. The browser provides users with an in-built AI interface and is currently available by invite only.

'Dia gets more personalized with every tab you open. This is 100x more context than ChatGPT, automatically. And we believe it changes what's possible with AI,' Josh Miller, the CEO of The Browser Company, said in a post on X.

With Dia, the company is looking to enable easier access to AI tools without visiting sites like ChatGPT, Perplexity, and Claude. Existing Arc members will also be able to access Dia, and current Dia users will be able to send invites to other users, the company said.

For years, web browsers have been the main way in which people access and use the internet. However, the rise of AI has posed an existential threat to traditional web browsers, prompting companies to race toward integrating AI directly into the browsing experience.

Dia is built on top of Chromium, the open source browser project led by Google. The interface looks familiar, with a URL bar for users to type in their search queries. However, this URL bar can also serve as the interface to prompt an in-built AI chatbot, which can carry out searches on the web and summarise uploaded files. It is also capable of automatically switching between chat and search modes, according to a report by TechCrunch. In addition, Dia appears to have screen awareness, as users can ask questions about the tabs they have open. The AI chatbot can also generate a document with text based on the content from those tabs.

Dia becomes more personalised with more use, as per the company. 'Every time you open a new tab, Dia's memory automatically takes notes for you […] These notes are encrypted & stored locally. We don't take notes on sensitive pages like banking,' Miller said. It also has an opt-in feature called History, where users can choose to give the browser access to seven days of their browsing history in order to receive more personalised answers to their search queries. Another feature, called Skills, provides short strings of AI-generated code that can serve as shortcuts for various settings.

Last year, The Browser Company launched a web browser called Arc. While it gained popularity among a select group of users, the firm could not scale the browser effectively, with its complicated interface acting as a barrier to mass adoption. Since then, the company has reportedly been working on a browser with AI at its heart.

Other companies have also sought to integrate AI features into their web browsers. For instance, Google recently announced that it is bringing Gemini to Chrome users in the US, in addition to AI Mode and AI Overviews in Google Search. Opera Neon, on the other hand, has started giving users access to AI agents capable of autonomously building mini-applications and carrying out other minor tasks on their behalf.


Time of India
an hour ago
AI chatbots need more books to learn from. These libraries are opening their stacks
Everything ever said on the internet was just the start of teaching artificial intelligence about humanity. Tech companies are now tapping into an older repository of knowledge: the library stacks.

Nearly one million books published as early as the 15th century, and in 254 languages, are part of a Harvard University collection being released to AI researchers Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library.

Cracking open the vaults to centuries-old tomes could be a data bonanza for tech companies battling lawsuits from living novelists, visual artists and others whose creative works have been scooped up without their consent to train AI chatbots.

"It is a prudent decision to start with public domain data because that's less controversial right now than content that's still under copyright," said Burton Davis, a deputy general counsel at Microsoft. Davis said libraries also hold "significant amounts of interesting cultural, historical and language data" that's missing from the past few decades of online commentary that AI chatbots have mostly learned from.

Supported by "unrestricted gifts" from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries around the world on how to make their historic collections AI-ready in a way that also benefits libraries and the communities they serve.

"We're trying to move some of the power from this current AI moment back to these institutions," said Aristana Scourtas, who manages research at Harvard Law School's Library Innovation Lab. "Librarians have always been the stewards of data and the stewards of information."

Harvard's newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper. One of the earlier works is from the 1400s: a Korean painter's handwritten thoughts about cultivating flowers and trees. The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organized by generations of librarians.

It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems. "A lot of the data that's been used in AI training has not come from original sources," said the data initiative's executive director, Greg Leppert, who is also chief technologist at Harvard's Berkman Klein Center for Internet & Society. This book collection goes "all the way back to the physical copy that was scanned by the institutions that actually collected those items," he said.

Before ChatGPT sparked a commercial AI frenzy, most AI researchers didn't think much about the provenance of the passages of text they pulled from Wikipedia, from social media forums like Reddit and sometimes from deep repositories of pirated books. They just needed lots of what computer scientists call tokens: units of data, each of which can represent a piece of a word. Harvard's new AI training collection has an estimated 242 billion tokens, an amount that's hard for humans to fathom but still just a drop compared with what's being fed into the most advanced AI systems.
Facebook parent company Meta, for instance, has said the latest version of its AI large language model was trained on more than 30 trillion tokens pulled from text, images and videos. Meta is also battling a lawsuit from comedian Sarah Silverman and other published authors who accuse the company of stealing their books from "shadow libraries" of pirated works.

Now, with some reservations, the real libraries are standing up. OpenAI, which is also fighting a string of copyright lawsuits, donated $50 million this year to a group of research institutions including Oxford University's 400-year-old Bodleian Library, which is digitizing rare texts and using AI to help transcribe them. When the company first reached out to the Boston Public Library, one of the biggest in the U.S., the library made clear that any information it digitized would be for everyone, said Jessica Chapel, its chief of digital and online services.

"OpenAI had this interest in massive amounts of training data. We have an interest in massive amounts of digital objects. So this is kind of just a case that things are aligning," Chapel said.

Digitization is expensive. It's been painstaking work, for instance, for Boston's library to scan and curate dozens of New England's French-language newspapers that were widely read in the late 19th and early 20th centuries by Canadian immigrant communities from Quebec. Now that such text is of use as training data, it helps bankroll projects that librarians want to do anyway. "We've been very clear that, 'Hey, we're a public library,'" Chapel said. "Our collections are held for public use, and anything we digitized as part of this project will be made public."

Harvard's collection was already digitized, starting in 2006, for another tech giant, Google, in its controversial project to create a searchable online library of more than 20 million books. Google spent years beating back legal challenges from authors to its online book library, which included many newer and copyrighted works. The dispute was finally settled in 2016 when the U.S. Supreme Court let stand lower court rulings that rejected copyright infringement claims. Now, for the first time, Google has worked with Harvard to retrieve public domain volumes from Google Books and clear the way for their release to AI developers. Copyright protections in the U.S. typically last for 95 years, and longer for sound recordings.

How useful all of this will be for the next generation of AI tools remains to be seen as the data gets shared Thursday on the Hugging Face platform, which hosts datasets and open-source AI models that anyone can download. The book collection is more linguistically diverse than typical AI data sources: fewer than half the volumes are in English, though European languages still dominate, particularly German, French, Italian, Spanish and Latin.

A book collection steeped in 19th century thought could also be "immensely critical" for the tech industry's efforts to build AI agents that can plan and reason as well as humans, Leppert said. "At a university, you have a lot of pedagogy around what it means to reason," Leppert said. "You have a lot of scientific information about how to run processes and how to run analyses." At the same time, there's also plenty of outdated data, from debunked scientific and medical theories to racist narratives.
"When you're dealing with such a large data set, there are some tricky issues around harmful content and language," said Kristi Mukk, a coordinator at Harvard's Library Innovation Lab who said the initiative is trying to provide guidance about mitigating the risks of using the data, to "help them make their own informed decisions and use AI responsibly."