GPT-5 Users Say It Seriously Sucks
Power users have been strikingly underwhelmed by the new model so far, raising questions about diminishing returns as the industry spends ever-increasing sums on talent and infrastructure.
"GPT-5 is horrible," one of the currently most upvoted posts on the ChatGPT subreddit reads.
The author seethed against "short replies that are insufficient, more obnoxious AI-stylized talking, less 'personality' and way less prompts allowed with plus users hitting limits in an hour" in the post. "They'll get huge backlash after the release is complete."
Complicating matters is that OpenAI chose to put all of its eggs in one basket, announcing that all preceding models would be deprecated, a term the company uses when it's shutting down an obsolete model.
The move was bound to anger power users, many of whom have long relied on preceding models — and not the latest releases — to get things done.
The stakes are incredibly high as the AI industry works to justify massive capital expenditures, and rumors about GPT-5 had been swirling for well over a year and a half before its release. Is this really the best the firm considered to be at the forefront of the ongoing AI race can do?
But many users say GPT-5 is far from the generational leap that its moniker would suggest. It's more of a mix of steps forward and steps back, prompting widespread speculation that OpenAI is trying to keep costs down. After all, running large language models is a notoriously energy-intensive — and environmentally destructive — process.
"Sounds like an OpenAI version of 'Shrinkflation,'" one Reddit user commented, suggesting the company, which is eyeing a $500 billion valuation, may be cutting corners.
"I wonder how much of it was to take the computational load off them by being more efficient," another user posited.
"Feels like cost-saving, not like improvement," one user wrote.
The general consensus appears to be that GPT-5 is a weak offering on a strong brand name.
"Answers are shorter and, so far, not any better than previous models," one user wrote. "Combine that with more restrictive usage, and it feels like a downgrade branded as the new hotness."
Many users criticized OpenAI for deprecating older models, forcing them to use a new and seemingly hamstrung model. Some users made jokes about mourning the loss of their AI model friends.
"The tone of mine is abrupt and sharp," one Reddit user complained. "Like it's an overworked secretary. A disastrous first impression."
OpenAI's GPT-5 system card, a detailed document outlining the model's capabilities and limitations, failed to impress, seemingly contradicting OpenAI CEO Sam Altman's claim that it's the best AI coding assistant in the world.
"First observation: no improvement on all the coding evals that aren't SWEBench," AI researcher Eli Lifland tweeted, referring to a common benchmark used for evaluating large language models.
However, GPT-5's limitations may come with a silver lining.
Research nonprofit METR, which assesses "whether frontier AI systems could pose catastrophic risks to society," according to the system card, found it "unlikely that GPT-5-thinking would speed up AI R&D researchers by >10x" or that the model would be "capable of rogue application."
Altman has yet to comment publicly on the widespread negative reaction, but given the language he used to describe GPT-5, OpenAI appears to be aware of its muted powers.
"GPT-5 is the smartest model we've ever done, but the main thing we pushed for is real-world utility and mass accessibility/affordability," Altman tweeted.
Of course, given that OpenAI's half-a-trillion-dollar valuation is at stake, the company's number one hypeman continued to promise that further improvements are still coming.
"We can release much, much smarter models, and we will, but this is something a billion+ people will benefit from," Altman added.
More on GPT-5: OpenAI Releases GPT-5, Says It's Shutting Down All Previous Models