
OpenAI Hits $10B in Annual Revenue, More Than Double Last Year's
One of the world's biggest and most popular AI companies, OpenAI, has reached a new milestone, announcing that it has hit $10 billion in annual revenue for the first time. That is nearly double last year's revenue, underscoring the company's massive growth this year on the strength of its well-known features and technology.
The latest figure combines the different parts of OpenAI's business, centering on its consumer products for individual users and businesses, all of which the company grew further this year.

OpenAI Hits $10 Billion in Annual Revenue
CNBC reported that OpenAI has reached $10 billion in annual recurring revenue (ARR) this year, signaling massive growth in only a few years. OpenAI launched ChatGPT only in late 2022, making the chatbot less than three years old, yet in that short span the company and its other products have hit a whopping milestone.
It is important to note that OpenAI's annual recurring revenue last year reached only about $5.5 billion, meaning the company has nearly doubled its ARR in its latest numbers for the year.
That said, CNBC noted that the new milestone is a major one for the company, which is known to have lost around $5 billion last year. It also follows OpenAI's successful $40 billion funding round in March.

What Is Next for OpenAI?
The Information reported that, after this $10 billion ARR achievement, the company plans to grow its revenue to more than ten times its current level, targeting $125 billion by 2029.
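A quick back-of-the-envelope sketch, using only the figures reported above, puts that target in perspective; the assumption of steady compound growth through 2029 is illustrative only and not something OpenAI has published.

```python
# Back-of-the-envelope check of the reported revenue figures. The ARR numbers
# and the 2029 target come from the article; steady growth is assumed only
# for illustration.
prior_arr = 5.5e9        # reported ARR last year (~$5.5B)
current_arr = 10e9       # reported ARR this year (~$10B)
target_2029 = 125e9      # reported 2029 revenue target (~$125B)

print(f"Year-over-year growth: {current_arr / prior_arr:.2f}x")       # ~1.82x
print(f"Multiple needed by 2029: {target_2029 / current_arr:.1f}x")   # 12.5x
implied_cagr = (target_2029 / current_arr) ** (1 / 4) - 1             # over four years
print(f"Implied annual growth if steady: {implied_cagr:.0%}")         # ~88% per year
```

In other words, reaching $125 billion by 2029 would mean sustaining growth close to the roughly 1.8x OpenAI just posted, every year for four more years.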
OpenAI previously revealed in March that its weekly active users had jumped to 500 million. Moreover, the company shared earlier this month that it now has three million paying business users.

OpenAI's Latest Advancements
This year has been a massive one for OpenAI, with significant upgrades and new features across its different experiences, particularly its world-renowned chatbot, ChatGPT. Earlier this year, OpenAI released a free AI image generator in ChatGPT, letting users create AI-generated images directly on the platform without needing to turn to DALL-E or other tools.
Moreover, the company has also debuted the ChatGPT Memory Boost feature, which lets the chatbot recall past conversations and take them into account in new or existing ones. OpenAI said Memory Boost allows ChatGPT to tailor responses and other content based on previous chats, giving it context and making it better suited to what a user needs.
OpenAI has more in store for users this year, with plans to deliver major upgrades and new capabilities across its products, especially ChatGPT, and to grow its user base further. The company currently enjoys a significant number of daily active users and partners, which has driven its latest revenue milestone for the year.
Originally published on Tech Times

Related Articles


Int'l Business Times
Every ChatGPT Question Comes at a Cost: Enough Power for a Lightbulb and a Teaspoon of Water
In an increasingly digital world, the convenience of artificial intelligence like ChatGPT has become commonplace. Yet behind each seemingly effortless query lies a hidden environmental toll. From the energy powering vast data centres to the surprising amount of water used for cooling, the true cost of our AI interactions extends far beyond a simple internet connection. OpenAI CEO Sam Altman has now shed more light on the environmental footprint of AI tools like ChatGPT. In a recent blog post, he revealed that an average ChatGPT query uses just 0.000085 gallons of water, which he described as 'roughly one-fifteenth of a teaspoon.'

The Environmental Cost of ChatGPT

This insight comes amidst growing concerns over how much energy and water artificial intelligence systems consume. Altman's recent blog post, while discussing the future of AI, also presented specific figures regarding ChatGPT's resource consumption. 'People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes,' Altman wrote.

'This was less than almost every estimate I have seen: according to the latest Sam Altman post, the average ChatGPT query uses about the same amount of power as the average Google search in 2009 (the last time they released a per-search number)… 0.0003 kWh,' Ethan Mollick (@emollick) wrote on June 10, 2025.

'It also uses about 0.000085 gallons of water, roughly one-fifteenth of a teaspoon,' Altman added. One-fifteenth of a teaspoon per query might appear tiny for one person, yet given the billions of queries AI systems handle daily, the total effect becomes substantial. OpenAI did not explain how these figures were arrived at. 'As data centre production gets automated, the cost of intelligence should eventually converge to near the cost of electricity,' Altman continued. However, this perspective on electricity cost also underscores the larger environmental concerns now under close examination.

Increased Focus on AI's Environmental Burden

As AI becomes more widely adopted, experts and researchers are concerned about its environmental burden. Studies earlier this year forecast that AI could potentially use more electricity than Bitcoin mining by late 2025. Water consumption also presents a major worry, particularly for data centres that rely on water-based cooling. According to a 2024 report from The Washington Post, creating a 100-word email with GPT-4 might use roughly one bottle of water, depending on the data centre's location and cooling approach. This demonstrates how environmental effects can differ depending on the infrastructure. Altman suggests that, in time, the expense of producing intelligence via AI will nearly match that of electricity. Until then, discussions concerning AI's environmental impact will only grow more vocal.

Every Drop Counts: AI's Thirst for Water

This isn't the only occasion Altman has foreseen AI becoming more affordable to run. In February, Altman stated in a blog post that AI's operating expenses would fall tenfold yearly. 'You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period,' Altman wrote. 'Moore's law changed the world at 2x every 18 months; this is unbelievably stronger,' he added.
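The per-query figures quoted above are easy to sanity-check with basic unit conversions. This is a minimal sketch using only the numbers reported in this article; the 10-watt LED rating is an assumed stand-in for Altman's 'high-efficiency lightbulb'.

```python
import math

# Per-query figures quoted in the article (Altman's blog post and Ethan
# Mollick's comparison to a 2009 Google search). The 10 W LED rating is an
# assumption for illustration.
GALLON_ML, TEASPOON_ML = 3785.41, 4.92892

water_gallons = 0.000085
teaspoons = water_gallons * GALLON_ML / TEASPOON_ML
print(f"Water per query: {teaspoons:.3f} tsp (about 1/{1 / teaspoons:.0f} of a teaspoon)")

query_wh = 0.34                      # energy per ChatGPT query
led_watts = 10                       # assumed LED bulb rating
print(f"A {led_watts} W LED would run for {query_wh / led_watts * 60:.1f} minutes on that")

google_2009_wh = 0.0003 * 1000       # 0.0003 kWh per 2009 Google search
print(f"Versus a 2009 Google search: {query_wh / google_2009_wh:.2f}x the energy")

# Altman's cost comparison: a ~150x drop in price per token between GPT-4
# (early 2023) and GPT-4o (mid-2024), versus Moore's law's 2x every 18 months.
print(f"A 150x drop is about {math.log2(150):.1f} doublings of cost-efficiency")
```

The conversions line up with the quoted descriptions: about one-fifteenth of a teaspoon of water, a couple of minutes of lightbulb time, and roughly the energy of a 2009 Google search.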
Big Tech's Ambitious Energy Drive

Leading tech firms in the AI race are exploring nuclear energy as a power source for their data centres. In September, Microsoft finalised a 20-year agreement with Constellation Energy to restart a dormant nuclear plant at Three Mile Island. In October, Google revealed it had partnered with Kairos Power, a nuclear energy company, to produce three small modular nuclear reactors. These reactors, which can supply up to 500 megawatts of electricity, are expected to be operational by 2035.

'Google, Amazon, and Microsoft have signed deals for nuclear energy projects to power their AI and data capabilities. Google's agreement with Kairos to buy power from multiple small modular reactors is a world first with 500MW planned across 6-7 reactors,' Works in Progress (@WorksInProgMag) wrote on November 13, 2024.

In an October interview, Google's CEO, Sundar Pichai, told Nikkei Asia that the search giant aims for net-zero emissions across its entire operations by 2030. He further stated that Google also assessed solar energy beyond just nuclear power. 'It was a very ambitious target, and we are still going to be working very ambitiously towards it. Obviously, the trajectory of AI investments has added to the scale of the task needed,' Pichai said.

Originally published on IBTimes UK


DW
AI art can't match human creativity, yet — researchers
Generative AI models are bad at representing things that require human senses, like smell and touch. Their creativity is 'hollow and shallow,' say experts.

Anyone can sit down with an artificial intelligence (AI) program, such as ChatGPT, to write a poem, a children's story, or a screenplay. It's uncanny: the results can seem quite "human" at first glance. But don't expect anything with much depth or sensory "richness", as researchers explain in a new study. They found that the Large Language Models (LLMs) that currently power generative AI tools are unable to represent the concept of a flower in the same way that humans do. In fact, the researchers suggest that LLMs aren't very good at representing any 'thing' that has a sensory or motor component, because they lack a body and any organic human experience.

"A large language model can't smell a rose, touch the petals of a daisy or walk through a field of wildflowers. Without those sensory and motor experiences, it can't truly represent what a flower is in all its richness. The same is true of some other human concepts," said Qihui Xu, lead author of the study at Ohio State University, US.

The study suggests that AI's poor ability to represent sensory concepts like flowers might also explain why it lacks human-style creativity. "AI doesn't have rich sensory experiences, which is why AI frequently produces things that satisfy a kind of minimal definition of creativity, but it's hollow and shallow," said Mark Runco, a cognitive scientist at Southern Oregon University, US, who was not involved in the study. The study was published in the journal Nature Human Behaviour on June 4, 2025.

AI poor at representing sensory concepts

The more scientists probe the inner workings of AI models, the more they find just how different their 'thinking' is from that of humans. Some say AIs are so different that they are more like alien forms of intelligence. Yet objectively testing the conceptual understanding of AI is tricky. If computer scientists open up an LLM and look inside, they won't necessarily understand what the millions of numbers changing every second really mean.

Xu and colleagues aimed to test how well LLMs can 'understand' things based on sensory characteristics. They did this by testing how well LLMs represent words with complex sensory meanings, measuring factors such as how emotionally arousing a thing is, whether you can mentally visualize it, and how far it involves movement or action-based representations. For example, they analyzed the extent to which humans experience flowers by smelling them, or by using actions from the torso, such as reaching out to touch a petal. These ideas are easy for us to grasp, since we have intimate knowledge of our noses and bodies, but they are harder for LLMs, which lack a body. Overall, LLMs represent words well, but those words lack the connection to the senses and motor actions that we experience as humans. When it comes to words tied to things we see, taste or interact with using our bodies, that is where AI fails to convincingly capture human concepts.
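As a rough illustration of the kind of comparison described above, here is a minimal sketch that correlates hypothetical model ratings with hypothetical human norms on a couple of sensory dimensions. The words, dimensions, and numbers are invented placeholders, not data from the study, and this is not the authors' method or code.

```python
# Illustrative sketch only: invented words, dimensions, and ratings, not data
# or code from the Nature Human Behaviour study.
from scipy.stats import spearmanr

# Hypothetical human norm ratings (0-5 scale) on two sensory dimensions.
human_norms = {
    "smell":  {"rose": 4.8, "daisy": 3.9, "keyboard": 0.4, "thunder": 0.6},
    "visual": {"rose": 4.9, "daisy": 4.7, "keyboard": 4.2, "thunder": 3.1},
}

# Hypothetical ratings elicited from a text-only language model.
model_ratings = {
    "smell":  {"rose": 3.1, "daisy": 2.8, "keyboard": 1.9, "thunder": 1.5},
    "visual": {"rose": 4.6, "daisy": 4.5, "keyboard": 4.0, "thunder": 3.3},
}

# For each dimension, compare the model's word ratings against human norms.
for dimension, norms in human_norms.items():
    words = sorted(norms)
    human = [norms[w] for w in words]
    model = [model_ratings[dimension][w] for w in words]
    rho, _ = spearmanr(human, model)
    print(f"{dimension:>7}: Spearman correlation with human norms = {rho:.2f}")
```

The study itself makes this kind of comparison at a much larger scale, across many words and many sensory and motor dimensions.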
What's meant by 'AI art is hollow'

AI creates representations of concepts and words by analyzing patterns in the dataset used to train it. This idea underlies every algorithm or task, from writing a poem to predicting whether an image of a face is you or your neighbor. Most LLMs are trained on text data scraped from the internet, but some are also trained on visual data, such as still images and videos. Xu and colleagues found that LLMs with visual training showed some similarity to human representations in visual-related dimensions, and they beat LLMs trained on text alone. But this test was limited to visual learning; it excluded other human sensations, like touch or hearing. This suggests that the more sensory information an AI model receives as training data, the better it can represent sensory aspects.

AI keeps learning and improving

The authors noted that LLMs are continually improving and said it was likely that AI will get better at capturing human concepts in the future. Xu said that when future LLMs are augmented with sensor data and robotics, they may be able to actively make inferences about, and act upon, the physical world. But independent experts DW spoke to suggested the future of sensory AI remains unclear. "It's possible an AI trained on multisensory information could deal with multimodal sensory aspects without any problem," said Mirco Musolesi, a computer scientist at University College London, UK, who was not involved in the study.

However, Runco said that even with more advanced sensory capabilities, AI will still understand things like flowers completely differently to humans. Our human experience and memory are tightly linked with our senses; it is a brain-body interaction that stretches beyond the moment. The smell of a rose or the silky feel of its petals, for example, can trigger joyous memories of your childhood or lustful excitement in adulthood. AI programs do not have a body, memories or a 'self'. They lack the ability to experience the world or interact with it as animals and humans do, which, said Runco, means "the creative output of AI will still be hollow and shallow."

Edited by: Zulfikar Abbany

