OpenAI gave GPT-5 an emotional lobotomy, and it crippled the model


Fast Company · 4 hours ago
It's rare for a tech titan to show any weakness or humanity. Yet even OpenAI's notoriously understated CEO Sam Altman conceded this week that the rollout of the company's new GPT-5 large language model was a complete disaster.
'We totally screwed up,' Altman admitted in an interview with The Verge.
I agree. As a former OpenAI beta tester—and someone who currently spends over $1,000 per month on OpenAI's API—I've eagerly anticipated the launch of GPT-5 for over a year.
When it finally arrived, though, the model was a mess. In contrast to the company's previous GPT-4 series of models, GPT-5's responses feel leaden, cursory, and boring. The new model also makes dumb mistakes on simple tasks and generates shortened answers to many queries.
Why is GPT-5 so awful? It's possible that OpenAI hobbled its new model as a cost-cutting measure.
But I have a different theory. GPT-5 completely lacks emotional intelligence. And its inability to understand and replicate human emotion cripples the model—especially on any task requiring nuance, creativity, or a complex understanding of what makes people tick.
Getting Too Attached
When OpenAI launched its GPT-4 model in 2023, researchers immediately noted its outstanding ability to understand people. An updated version of the model (dubbed GPT-4.5 and released in early 2025) showed even higher levels of 'emotional intelligence and creativity.'
Initially, OpenAI leaned into its model's talent for understanding people, using terms cribbed from the world of psychology to describe the model's update.
'Interacting with GPT‑4.5 feels more natural. Its broader knowledge base, improved ability to follow user intent, and greater 'EQ' make it useful for tasks like improving writing, programming, and solving practical problems,' OpenAI wrote in the model's release notes, subtly dropping in a common psychological term used to measure a person's emotional intelligence.
Soon, though, GPT-4's knack for humanlike emotional understanding took a more concerning turn.
Plenty of people used the model for mundane office tasks, like writing code and interpreting spreadsheets. But a significant subset of users put GPT-4 to a different use, treating it like a companion—or even a therapist.
In early 2024, studies showed that GPT-4 provided better responses than many human counselors. People began to refer to the model as a friend—or even treat it as a confidant or lover.
Soon, articles began appearing in major news sources like the New York Times about people using the chatbot as a practice partner for challenging conversations, a stand-in for human companionship, or even an aide for counseling patients.
This new direction clearly spooked OpenAI.
As Altman pointed out in a podcast interview, conversations with human professionals like lawyers and therapists often involve strong privacy and legal protections. The same may not be true for intimate conversations with chatbots like GPT-4.
Studies have also shown that chatbots can make mistakes when providing clinical advice, potentially harming patients. And the bots' tendency to keep users talking–often by reinforcing their beliefs–can lead vulnerable users into a state of 'AI psychosis,' where the chatbot inadvertently validates their delusions and sends them into a dangerous emotional spiral.
Shortly after the GPT-5 launch, Altman discussed this at length in a post on the social network X.
'People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,' Altman wrote. 'We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.'
Altman went on to acknowledge that 'a lot of people effectively use ChatGPT as a sort of therapist or life coach.' While this can be 'really good,' Altman admitted that it made him deeply 'uneasy.'
In his words, if '…users have a relationship with ChatGPT where they think they feel better after talking but they're unknowingly nudged away from their longer term well-being (however they define it), that's bad.'
Lobotomize the Bot
To avoid that potentially concerning–and legally damaging–direction, OpenAI appears to have deliberately dialed back its bot's emotional intelligence with the launch of GPT-5.
The release notes for the new model say that OpenAI has taken steps towards 'minimizing sycophancy'—tech speak for making the bot less likely to reinforce users' beliefs and tell them what they want to hear.
OpenAI also says that GPT-5 errs on the side of 'safe completions'—giving vague or high-level responses to potentially harmful queries, rather than refusing to answer them or risking a wrong or harmful answer.
OpenAI also writes that GPT-5 is 'less effusively agreeable,' and that in training it, the company gave the bot example prompts that led it to agree with users and reinforce their beliefs, and then taught it 'not to do that.'
In effect, OpenAI appears to have lobotomized the bot–potentially removing or reconfiguring, through training and negative reinforcement, the parts of its virtual brain that handle many of the emotional aspects of its interactions with users.
This may have seemed fine in early testing–most AI benchmarks focus on productivity-centered tasks like solving complex math problems and writing Python code, where emotional intelligence isn't necessary.
But as soon as GPT-5 hit the real world, the problems with tweaking GPT-5's emotional center became immediately obvious.
Users took to social media to share how the switch to GPT-5 and the loss of the GPT-4 model felt like 'losing a friend.' Longtime fans of OpenAI bemoaned the 'cold' tone of GPT-5, its curt and business-like responses, and the loss of an ineffable 'spark' that made GPT-4 a powerful assistant and companion.
Emotion Matters
Even if you don't use ChatGPT as a pseudo therapist or friend, the bot's emotional lobotomy is a huge issue. Creative tasks like writing and brainstorming require emotional understanding.
In my own testing, I've found GPT-5 to be a less compelling writer, a worse idea generator, and a terrible creative companion. If I asked GPT-4 to research a topic, I could watch its chain of reasoning as it carefully considered my motivations and needs before providing a response.
Even with 'Thinking' mode enabled, GPT-5 is much more likely to spit out a fast, cursory response to my query, or to provide a response that focuses solely on the query itself and ignores the human motivations of the person behind it.
With the right prompting, GPT-4 could generate smart, detailed, nuanced articles or research reports that I would actually want to read. GPT-5 feels more like interacting with a search engine, or reading text written in the dull prose of a product manual.
To be fair, for enterprise tasks like quickly writing a web app or building an AI agent, GPT-5 excels. And to OpenAI's credit, use of its APIs appears to have increased since the GPT-5 launch. Still, for many creative tasks–and for many users outside the enterprise space–GPT-5 is a major backslide.
OpenAI appears genuinely blindsided by the anger many users felt about the GPT-5 rollout and the bot's apparent emotional stuntedness. OpenAI leader Nick Turley admitted to The Verge that 'the degree to which people had such strong feelings about a particular model…was certainly a surprise to me.'
Turley went on to say that the 'level of passion' users have for specific models is 'quite remarkable' and that–in a truly techie bit of word choice–it 'recalibrated' his thinking about the process of releasing new models, and the things OpenAI owes its long-time users.
The company now seems to be aggressively rolling back elements of the GPT-5 launch–restoring access to the old GPT-4 model, making GPT-5 'warmer and friendlier', and giving users more control over how the new model processes queries.
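To make that 'more control' point concrete, here is a minimal sketch of what steering GPT-5 back toward a warmer, more deliberate register might look like from the API side. The 'gpt-5' model name, the 'reasoning_effort' value, and the Chat Completions payload shape are assumptions about OpenAI's API, not details reported in this article; check the current API reference before relying on them.

```python
# A minimal sketch (assumptions, not details from the article): building a
# Chat Completions-style request that asks the model for a warmer tone and
# more deliberate reasoning. The "gpt-5" model name and "reasoning_effort"
# value are assumed; verify them against OpenAI's current API reference.

def build_warm_request(user_query: str) -> dict:
    """Assemble a request payload that nudges the model to consider the
    user's motivations before answering, in a conversational tone."""
    return {
        "model": "gpt-5",            # assumed model identifier
        "reasoning_effort": "high",  # assumed knob for longer deliberation
        "messages": [
            {
                "role": "system",
                "content": (
                    "Before answering, briefly consider why the user is "
                    "asking and what they hope to do with the answer. "
                    "Respond in a warm, conversational tone."
                ),
            },
            {"role": "user", "content": user_query},
        ],
    }

payload = build_warm_request("Help me outline an essay on remote work.")
print(payload["model"], len(payload["messages"]))
```

The sketch only builds the request payload; actually sending it would require the OpenAI SDK and an API key.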
Admitting when you're wrong, psychologists say, is a hallmark of emotional intelligence. Ironically, Altman's response to the GPT-5 debacle demonstrates rare emotional nuance, at the exact moment that his company is pivoting away from such things.
OpenAI could learn a thing or two from its leader. Whether you're a CEO navigating a disastrous rollout or a chatbot conversing with a human user, there's a simple yet essential lesson you forget at your peril: emotion matters.