
GPT-5's rollout fell flat for consumers, but the AI model is gaining where it matters most
Nearly three years after ChatGPT's debut made OpenAI a consumer phenomenon, the company is chasing where the real money is: the enterprise.
Last week's rollout of GPT-5, OpenAI's newest artificial intelligence model, was rocky. Critics bashed its less intuitive feel, ultimately leading the company to restore its legacy GPT-4o model for paying chatbot customers.
But GPT-5 isn't about the consumer. It's OpenAI's effort to crack the enterprise market, where rival Anthropic has enjoyed a head start.
One week in, startups like Cursor, Vercel, and Factory say they've already made GPT-5 the default model in key products and tools, touting its faster setup, better results on complex tasks, and lower price.
Some companies said GPT-5 now matches or beats Claude on code and interface design, a space Anthropic once dominated.
Box, another enterprise customer, has been testing GPT-5 on long, logic-heavy documents. CEO Aaron Levie told CNBC the model is a "breakthrough," saying it performs with a level of reasoning that prior systems couldn't match.
Behind the scenes, OpenAI has built out its own enterprise sales team — more than 500 people under COO Brad Lightcap — operating independently of Microsoft, which has been the startup's lead investor and key cloud partner. Customers can access GPT models through Microsoft Azure or go directly to OpenAI, which controls the API and product experience.
Still, the economics are brutal. The models are expensive to run, and both OpenAI and Anthropic are spending big to lock in customers, with OpenAI on track to burn $8 billion this year.
That's part of why both Anthropic and OpenAI are courting new capital.
OpenAI is exploring a secondary stock sale that could value the company around $500 billion and said ChatGPT is nearing 700 million weekly users.
Anthropic is seeking fresh funding at a potential $170 billion valuation.
GPT-5 is significantly cheaper than Anthropic's top-end Claude Opus 4.1 — by a factor of seven and a half, in some cases — but OpenAI is spending huge amounts on infrastructure to sustain that edge.
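The pricing gap can be made concrete with a rough cost sketch. The per-token rates below are assumptions drawn from the vendors' public list prices around launch; they are not stated in this article and may change, so treat this as an illustration rather than a quote:

```python
# Rough per-request cost comparison. The rates below are ASSUMED
# launch-era list prices (USD per 1M tokens), not figures from the
# article; check the vendors' pricing pages for current numbers.
PRICES = {
    "gpt-5": (1.25, 10.00),            # (input, output)
    "claude-opus-4.1": (15.00, 75.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single API call at the assumed list prices."""
    inp_rate, out_rate = PRICES[model]
    return (input_tokens * inp_rate + output_tokens * out_rate) / 1_000_000

# Example: a coding task with 20k input tokens and 4k output tokens.
gpt5 = request_cost("gpt-5", 20_000, 4_000)
opus = request_cost("claude-opus-4.1", 20_000, 4_000)
print(f"GPT-5: ${gpt5:.4f}  Opus 4.1: ${opus:.4f}  ratio: {opus / gpt5:.1f}x")
```

At this input/output mix the gap lands around 9x; the "seven and a half" figure in the article corresponds to the output-token rates alone, while the assumed input-token rates sit further apart still.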
For OpenAI, it's a push to win customers now, get them locked in and build a real business on the back of that loyalty.
Cursor, still a major Anthropic customer, is now steering new users to OpenAI. The company's co-founder and CEO Michael Truell underscored the change during OpenAI's launch livestream, describing GPT-5 as "the smartest coding model we've ever tried."
Truell said the change applies only to new sign-ups, as existing Cursor customers will continue using Anthropic as their default model. Cursor maintains a committed-revenue contract with Anthropic, which has built its business on dominating the enterprise layer.
As of June, enterprise customers account for about 80% of Anthropic's revenue, with annualized revenue growing 17x year-over-year, said a person familiar with the matter who requested anonymity in order to discuss company data. Anthropic added $3 billion in revenue in just the past six months — including $1 billion in June alone — and has already signed triple the number of eight- and nine-figure deals this year compared with all of 2024, the person said.
Anthropic said its enterprise footprint extends far beyond tech.
Claude powers tools for Amazon Prime, Alexa, and AIG, and is used by top players in pharma, retail, aviation, and professional services. The company is embedded across Amazon Web Services, GCP, Snowflake, Databricks, and Palantir — and its deals tend to expand fast.
Average customer spend has grown more than fivefold over the past year, with over half of business clients now using multiple Claude products, the person said.
Excluding its two largest customers, revenue for the rest of the business has grown more than elevenfold year-over-year, the person said.
Even with that broad reach, OpenAI is gaining ground with enterprise customers.
GPT-5 API usage has surged since launch, with the model now processing more than twice as much coding and agent-building work, and reasoning use cases jumping more than eightfold, said a person familiar with the matter who requested anonymity in order to discuss company data.
Enterprise demand is rising sharply, particularly for planning and multi-step reasoning tasks.
GPT-5's traction over the past week shows how quickly loyalties can shift when performance and price tip in OpenAI's favor.
AI-powered coding platform Qodo recently tested GPT-5 against top-tier models including Gemini 2.5, Claude Sonnet 4, and Grok 4, and said in a blog post that it led in catching coding mistakes.
The model was often the only one to catch critical issues, such as security bugs or broken code, suggesting clean, focused fixes and skipping over code that didn't need changing, the company said. Weaknesses included occasional false positives and some redundancy.
Vercel, a cloud platform for web applications, has made GPT-5 the default in its new open-source "vibe coding" platform — a system that turns plain-English prompts into live, working apps. It also rolled GPT-5 into its in-dashboard Agent, where the company said it's been especially good at juggling complex tasks and thinking through long instructions.
"While there was a lot of competition already in AI models, Claude was just owning this space. It was by far the best coding model. It was not even close," said Malte Ubl, CTO of Vercel. "OpenAI was just not in the game."
That changed with GPT-5.
"They at least caught up," Ubl said. "They're better at some stuff, they're worse at other stuff."
He said GPT-5 stood out for early-stage prototyping and product design, calling it more creative than Anthropic's Claude Sonnet.
"Traditionally, you have to optimize for the new model, and we saw really good results from the start," he said about the ease of integration.
JetBrains has adopted GPT-5 as the default in its AI Assistant and in Kineto, a new no-code tool for building websites and apps, after finding it could generate simple, single-purpose tools more quickly from user prompts. Developer platform Factory said it collaborated closely with OpenAI to make GPT-5 the default for its tools.
"When it comes to getting a really good plan for implementing a complex coding solution, GPT-5 is a lot better," said Matan Grinberg, CEO of Factory. "It's a lot better at planning and having coherence over its plan over a long period of time."
Grinberg added that GPT-5 integrates well with Factory's multi-agent platform: "It just plays very nicely with a lot of these high-level details that we're managing at the same time as the low-level implementation details."
Pricing flexibility was a major factor in Factory's decision to default to GPT-5, as well.
"Pricing is mostly what our end users care about," said Grinberg, adding that cheaper inference now makes customers more comfortable experimenting. Instead of second-guessing whether a question is worth the cost, they can "shoot from the hip more readily" and explore ideas without hesitation.
Anton Osika, co-founder and CEO of Lovable, whose AI-powered tool lets anyone build working software without writing a line of code, said his team spent weeks beta testing GPT-5 before it officially launched and was "super happy" with the improvement.
"What we found is that it's more powerful. It's smarter in many complex use cases," Osika said, adding that the new model is "more prone to take actions and reflect on the action it takes" and "spends more time to make sure it really gets it right."
Box's Levie said the biggest gains for him showed up in enterprise workflows that have nothing to do with writing code. His team has been testing the model for weeks on complex, real-world business data — from hundred-page lease agreements to product roadmaps — and found that it excelled at problems that tripped up earlier AI systems.
Levie added that for corporate use, where AI agents run in the background to execute tasks, those step-change improvements are critical, and can turn GPT-5 into a real breakthrough for work automation.
"GPT-5 has performed unbelievably well — certainly OpenAI's best model — and in many of our tests it's the best available," he said.
