ChatGPT o3-pro is only available on $200+ plans – here's what you're missing
If you purchase an independently reviewed product or service through a link on our website, BGR may receive an affiliate commission.
OpenAI just released a new ChatGPT model that's smarter and more reliable than its previous best reasoning models. ChatGPT o3-pro joins the list of AI model options in the app, replacing o1-pro. As exciting as the new model is, however, most ChatGPT users don't have access to it… even if they pay for the Plus plan.
If you're on ChatGPT Free or ChatGPT Plus ($20/month), you won't get access to OpenAI's new reasoning model. ChatGPT o3-pro is rolling out to the $200/month ChatGPT Pro tier. ChatGPT Team users also have access to o3-pro, and Enterprise and Edu users will get the upgrade soon.
There's some good news for ChatGPT Plus users, too. OpenAI has significantly reduced the costs for o3.
As someone who chats with ChatGPT o3 almost exclusively, I definitely appreciate the change. It's not like I often worried about running out of o3 chats, but it did happen. It's good to see OpenAI bring down the cost of running its frontier models, because cheaper operation is what translates into better rate limits for Plus users.
OpenAI explained in its release notes that 'like o1-pro, o3-pro is a version of our most intelligent model, o3, designed to think longer and provide the most reliable responses.'
o3-pro will excel in the same areas as o1-pro, including math, science, and coding.
Like o3, o3-pro has access to various tools available in ChatGPT, including online search, file support, reasoning with visual prompts, coding (Python), and memory. It's not quite on par with o3, though. ChatGPT o3-pro doesn't have temporary chats for now, and it can't use the 4o image generation tool or the Canvas feature.
What really matters here are the performance improvements, and o3-pro excels in all benchmarks OpenAI conducted. That's not surprising for a new frontier model. OpenAI wouldn't add the 'pro' suffix without ensuring o3-pro outperforms o3.
The company also says that o3-pro is routinely favored by reviewers:
In expert evaluations, reviewers consistently prefer o3-pro over o3 in every tested category and especially in key domains like science, education, programming, business, and writing help. Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, instruction-following, and accuracy.
OpenAI's tests show an average win rate of 64% in favor of o3-pro when compared to o3.
As I said before, I'm a ChatGPT Plus user who's quite happy with what I get for that monthly $20 fee. I can't justify going Pro, just as I can't downgrade to ChatGPT Free.
I've been using o3 more and more lately, even if I have to fight with the AI sometimes. Naturally, I wondered whether I really needed the slightly better o3-pro performance and the reduced hallucination rate (aka improved accuracy).
I don't think I'll miss much for now, and this o3-pro review that Sam Altman retweeted does a great job explaining where o3-pro shines and why ChatGPT Plus users might not need it. Here's a longer snippet that includes the detail Altman cited:
But then I took a different approach. My co-founder Alexis and I took the time to assemble a history of all our past planning meetings at Raindrop, all our goals, even recorded voice memos, and then asked o3-pro to come up with a plan.
We were blown away. It spit out the exact kind of concrete plan and analysis I've always wanted an LLM to create, complete with target metrics, timelines, priorities, and strict instructions on what to cut.
The plan o3 gave us was plausible, reasonable. But the plan o3-pro gave us was specific and grounded enough that it actually changed how we're thinking about our future.
This is hard to capture in an eval.
That sounds amazing, but it's also something I don't need right now. I recommend reading the entire review to see the differences between o3 and o3-pro and decide for yourself.
While I won't get o3-pro anytime soon, I'm glad to hear that operating costs for ChatGPT o3 queries have dropped significantly.
Altman said on X that OpenAI has reduced the price of o3 by 80%. OpenAI's Kevin Weil tweeted that the company has doubled the rate limits for o3 in the Plus tier. That might not match the 80% drop in costs, but it's still a big improvement.
The weekly limit for ChatGPT o3 chats now sits at 100 messages for ChatGPT Plus, Team, and Enterprise users.
Developers will appreciate the price drop the most. ChatGPT o3 input/output is now priced at $2/$8 per 1 million tokens, down from $10/$40.
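For developers who want to sanity-check that figure, here's a minimal Python sketch (not an official OpenAI example) comparing what a single o3 API request costs under the old and new rates. The token counts are made-up illustration values; the per-million-token prices are the ones quoted above.

```python
# Minimal sketch: compare the cost of one o3 API request under the old and
# new per-million-token rates quoted in this article. The token counts are
# hypothetical illustration values, not real usage numbers.

def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Dollar cost of a single request, given per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Hypothetical request: 5,000 input tokens, 2,000 output tokens.
old_cost = request_cost(5_000, 2_000, input_rate=10.0, output_rate=40.0)
new_cost = request_cost(5_000, 2_000, input_rate=2.0, output_rate=8.0)

print(f"Old o3 pricing: ${old_cost:.3f}")                         # $0.130
print(f"New o3 pricing: ${new_cost:.3f}")                         # $0.026
print(f"Savings:        {100 * (1 - new_cost / old_cost):.0f}%")  # 80%
```

Because the input and output rates both dropped by the same factor of five, the saving works out to 80% regardless of how a request splits between input and output tokens.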