
ChatGPT-Maker OpenAI Seeks $500B Valuation in Share Sale
For context, OpenAI reportedly secured $8.3 billion in new funding earlier this week at a $300 billion valuation. This investment is part of the company's larger $40 billion fundraising target for 2025.
More About OpenAI's Latest Share Sale
According to market reports, investors such as Thrive Capital are exploring the purchase of employee-held shares in OpenAI's secondary stock sale. However, the talks are private, and sources requested anonymity as they are not authorized to speak publicly.
If the deal moves forward, it would boost OpenAI's valuation by about two-thirds from its earlier $300 billion mark set during a $40 billion funding round led by SoftBank Group (SFTBY). This would further cement its position as one of the world's most highly valued private companies.
ChatGPT Crosses 700 Million Weekly Users
OpenAI's main product, ChatGPT, recently crossed a major milestone of 700 million weekly users, up from 500 million in March. The company's annual revenue also reportedly reached $12 billion. That figure is more than double the $5.5 billion reported in December 2024, driven by strong demand for its AI tools from both consumers and enterprise users.
On the other hand, OpenAI's rapid growth is leading to higher expenses. The company now expects to spend $8 billion in 2025, about $1 billion more than earlier estimates. Much of this cost is going toward building infrastructure, like renting computer chips and setting up data centers to handle its expanding AI models.
Which Is the Best AI Stock to Buy, According to Analysts?
Investors looking to buy into OpenAI in 2025 may be let down, as the company remains private and doesn't offer its shares to the public.
Still, those interested in the AI space can consider other major AI-related stocks. To help with this, we've used TipRanks' stock comparison tool to compare several top AI companies. Users can also conduct further research to identify the most promising stock based on analyst ratings and other insights.

Related Articles


Scientific American
AI Took on the Math Olympiad—But Mathematicians Aren't Impressed
A defining memory from my senior year of high school was a nine-hour math exam with just six questions. Six of the top scorers won slots on the U.S. team for the International Math Olympiad (IMO), the world's longest-running math competition for high school students. I didn't make the cut, but I became a tenured mathematics professor anyway.

This year's olympiad, held last month on Australia's Sunshine Coast, had an unusual sideshow. While 110 students from around the world went to work on complex math problems using pen and paper, several AI companies quietly tested new models in development on a computerized approximation of the exam. Right after the closing ceremonies, OpenAI and later Google DeepMind announced that their models earned (unofficial) gold medals for solving five of the six problems. Researchers like Sébastien Bubeck of OpenAI celebrated these models' successes as a 'moon landing moment' for the industry. But are they? Is AI going to replace professional mathematicians? I'm still waiting for the proof.

The hype around this year's AI results is easy to understand, because the olympiad is hard. To wit, in my senior year of high school, I set aside calculus and linear algebra to focus on olympiad-style problems, which were more of a challenge. Plus, the cutting-edge models still in development did much better on the exam than the commercial models already out there: in a parallel contest, Gemini 2.5 Pro, Grok 4, o3 high, o4-mini high and DeepSeek R1 all failed to produce a single completely correct solution. This shows that AI models are getting smarter, their reasoning capabilities improving rather dramatically. Yet I'm still not worried.
The latest models just got a good grade on a single test, as did many of the students, and a head-to-head comparison isn't entirely fair. The models often employ a 'best-of-n' strategy, generating multiple solutions and then grading themselves to select the strongest. This is akin to having several students work independently, then get together to pick the best solution and submit only that one. If the human contestants were allowed this option, their scores would likely improve too.

Other mathematicians are similarly cautioning against the hype. IMO gold medalist Terence Tao (currently a mathematician at the University of California, Los Angeles) noted on Mastodon that what AI can do depends on what the testing methodology is. IMO president Gregor Dolinar said that the organization 'cannot validate the methods [used by the AI models], including the amount of compute used or whether there was any human involvement, or whether the results can be reproduced.'

Besides, IMO exam questions don't compare to the kinds of questions professional mathematicians try to answer, where it can take nine years, rather than nine hours, to solve a problem at the frontier of mathematical research. As Kevin Buzzard, a mathematics professor at Imperial College London, said in an online forum, 'When I arrived in Cambridge UK as an undergraduate clutching my IMO gold medal I was in no position to help any of the research mathematicians there.' These days, it can take more than one lifespan to acquire the right expertise for mathematical research.

Like many of my colleagues, I've been tempted to try 'vibe proving': having a math chat with an LLM as one would with a colleague, asking 'Is it true that...' followed by a technical mathematical conjecture. The chatbot often then supplies a clearly articulated argument that, in my experience, tends to be correct when it comes to standard topics but subtly wrong at the cutting edge.
For example, every model I've asked has made the same subtle mistake in assuming that the theory of idempotents behaves the same for weak infinite-dimensional categories as it does for ordinary ones, something that human experts (trust me on this) in my field know to be false. I'll never trust an LLM, which at its core is just predicting what text will come next in a string of words based on what's in its dataset, to provide a mathematical proof that I can't verify myself.

The good news is, we do have an automated mechanism for determining whether proofs can be trusted. Relatively recent tools called 'proof assistants' are software programs (they don't use AI) designed to check whether a logical argument proves the stated claim. They are increasingly attracting attention from mathematicians like Tao, Buzzard and myself who want more assurance that our own proofs are correct. And they offer the potential to help democratize mathematics and even improve AI safety.

Suppose I received a letter, in unfamiliar handwriting, from Erode, a city in Tamil Nadu, India, purporting to contain a mathematical proof. Maybe its ideas are brilliant, or maybe they're nonsensical. I'd have to spend hours carefully studying every line, making sure the argument flowed step by step, before I'd be able to determine whether the conclusions are true or false. But if the mathematical text were written in an appropriate computer syntax instead of natural language, a proof assistant could check the logic for me. A human mathematician would then only need to understand the meaning of the technical terms in the theorem statement.

In the case of Srinivasa Ramanujan, a generational mathematical genius who did hail from Erode, an expert did take the time to carefully decipher his letter. In 1913 Ramanujan wrote to the British mathematician G. H. Hardy with his ideas.
Luckily, Hardy recognized Ramanujan's brilliance and invited him to Cambridge to collaborate, launching the career of one of the all-time mathematical 'greats.'

What's interesting is that some of the AI IMO contestants submitted their answers in the language of the Lean computer proof assistant so that the computer program could automatically check for errors in their reasoning. A start-up called Harmonic posted formal proofs generated by its model for five of the six problems, and ByteDance achieved a silver-medal-level performance by solving four of the six problems. But the questions had to be written to accommodate the models' language limitations, and the models still needed days to work them out.

Still, formal proofs are uniquely trustworthy. While so-called 'reasoning' models are prompted to break problems down into pieces and explain their 'thinking' step by step, the output is as likely to be an argument that sounds logical but isn't as it is to constitute a genuine proof. By contrast, a proof assistant will not accept a proof unless it is fully precise and fully rigorous, justifying every step in its chain of reasoning. In some circumstances, a hand-waving or approximate solution is good enough, but when mathematical accuracy matters, we should demand that AI-generated proofs are formally verifiable.

Not every application of generative AI is so black and white, where humans with the right expertise can determine whether the results are correct or incorrect. In life, there is a lot of uncertainty, and it's easy to make mistakes. As I learned in high school, one of the best things about math is that you can prove definitively that some ideas are wrong. So I'm happy to have an AI try to solve my personal math problems, but only if the results are formally verifiable. And we aren't quite there yet.
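To give a flavor of what a proof assistant actually checks, here is a minimal illustrative sketch in Lean 4 (my own toy example, not one of the competition proofs). Lean accepts a theorem only when every step is mechanically justified; alter or delete a proof term and the file simply no longer compiles.

```lean
-- A proof assistant verifies statements like these mechanically.
-- `rfl` succeeds only when both sides compute to the same value.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- Commutativity of addition on the natural numbers, justified by
-- a library lemma rather than by hand-waving.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This is what makes a formally verified proof trustworthy in a way that a plausible-sounding chain of natural-language 'reasoning' is not: the checker, not the reader, carries the burden of catching gaps.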


Digital Trends
ChatGPT-5 launch live: follow the build-up to OpenAI's major livestream event
The much-anticipated arrival of ChatGPT-5, the next major upgrade to OpenAI's ChatGPT AI model, looks like it will happen today. OpenAI posted a not-so-cryptic message on X, hinting at the new model's arrival with the number five replacing the 's' in livestream, while also confirming the time and date for the livestream launch. ChatGPT-5 is tipped to dramatically change the way you use AI, with a host of new features expected to be announced.

When is the ChatGPT-5 launch livestream? The ChatGPT-5 launch event will start at 10am PT / 1pm ET today.

LIVE5TREAM THURSDAY 10AM PT — OpenAI (@OpenAI) August 6, 2025

How to follow the ChatGPT-5 launch live: We will be running our ChatGPT-5 live blog during the build-up to the event, and throughout the launch livestream itself, keeping you in the know every step of the way. OpenAI hasn't revealed details on exactly where you'll be able to watch the launch event just yet, but we'd recommend keeping a close eye on our ChatGPT-5 launch live blog.


Android Authority
OpenAI's GPT-5 leaks, hinting at better math and coding abilities
TL;DR
- Details about OpenAI's upcoming GPT-5 model have leaked.
- GitHub accidentally published details of the upcoming model and its four variants in a blog post, which was later withdrawn.
- The leak points to better reasoning and improved agentic capabilities that may also come to ChatGPT after the model's release.

OpenAI's long-awaited GPT-5 models are expected to arrive very soon, and are likely to be available through APIs before ChatGPT. But a recent leak has spoiled the surprise, revealing some of the features we can expect to see with the upcoming release.

Details about GPT-5 were revealed through an accidental blog post by GitHub, discovered by a Reddit user (via The Verge). The now-deleted blog post spoke about GPT-5's leaps in reasoning, coding abilities, and the overall user experience compared to the existing GPT-4, GPT-4.1, and GPT-4o models. The leak revealed that the newer models offer better responses with shorter prompts, display clearer thinking, and allow for better collaboration with and assistance to all users.

The webpage, which can still be viewed through an archived version, further details four variants of GPT-5: the standard GPT-5 model for 'logic and multi-step tasks,' GPT-5-mini for cost-effective deployments, GPT-5-nano for high-speed query responses, and GPT-5-chat for integration with multimodal chat-based workflows in enterprise settings.

The leaked document also emphasizes better agentic capabilities in GPT-5, but does not reveal specifics. Notably, OpenAI released proper agentic capabilities in ChatGPT last month, with the option to delegate tasks that run on OpenAI's servers rather than on the device you use to interact with the chatbot. While there hasn't been an official announcement from OpenAI about GPT-5's release, the completeness of GitHub's blog post indicates the launch is around the corner.
In fact, OpenAI recently posted on X about a livestream scheduled for later today (i.e., Thursday) at 10 AM PT (1 PM ET). In the announcement, OpenAI replaced the 'S' in 'Livestream' with '5,' strongly suggesting a GPT-5 announcement.

Despite the lack of details from OpenAI, co-founder Sam Altman has teased some of the model's capabilities. Speaking recently on Theo Von's podcast, Altman described a 'weird feeling' while testing an upcoming model: the AI quickly accomplished a task that he couldn't understand, which made him feel 'useless.' In contrast, two early GPT-5 testers recently told Reuters that the leap from GPT-4 to GPT-5 didn't feel as massive as the previous upgrade from GPT-3.

While we're unsure which camp we fall into, we don't quite see signs of the singularity (aka AGI) just yet, but we hope to learn more soon after OpenAI's livestream later today.