
The chatbot updated. Users lost a friend.
Users weren't having it. People immediately found the new model's responses less warm and effusive than those of GPT-4o, OpenAI's primary chatbot before the update. On social media, people were especially angry that the company had cut off access to previous chatbot versions to streamline its offerings.
'BRING BACK 4o,' a user named very_curious_writer wrote in a Q&A forum that OpenAI hosted on Reddit. 'GPT-5 is wearing the skin of my dead friend.'
Sam Altman, OpenAI's CEO, replied, saying: 'What an…evocative image,' before adding that 'ok we hear you on 4o, working on something now.'
Hours later, OpenAI restored access to GPT-4o and other past chatbots, but only for people with subscriptions, which start at $20 a month. Markus Schmidt, a composer, became a paying customer. 'It's $20 — you could get two beers,' he said, 'so might as well subscribe to ChatGPT if it does you some good.'
Markus Schmidt, a composer and one of many ChatGPT users who was disappointed when OpenAI released a new version of its chatbot, called GPT-5, in Paris on Aug. 18.
ELLIOTT VERDIER/NYT
Tech companies constantly update their systems, sometimes to the dismay of users. The uproar around ChatGPT, however, went beyond complaints about usability or convenience. It touched on an issue unique to artificial intelligence: the creation of emotional bonds.
The reaction to losing the GPT-4o version of ChatGPT was actual grief, said Dr. Nina Vasan, a psychiatrist and the director of Brainstorm, a lab for mental health innovation at Stanford University. 'We, as humans, react in the same way whether it's a human on the other end or a chatbot on the other end,' she said, 'because, neurobiologically, grief is grief, and loss is loss.'
GPT-4o had been known for its sycophantic style, flattering its users to the point that OpenAI had tried to tone it down even before GPT-5's release. In extreme cases, people have formed romantic attachments to GPT-4o or have had interactions that led to delusional thinking, divorce and even death.
The extent to which people were attached to GPT-4o's style seems to have taken even Altman by surprise. 'I think we totally screwed up some things on the rollout,' he said at a dinner with journalists in San Francisco on Thursday.
'There are the people who actually felt like they had a relationship,' he said. 'And then there were the hundreds of millions of other people who don't have a parasocial relationship with ChatGPT, but did get very used to the fact that it responded to them in a certain way and would validate certain things and would be supportive in certain ways.'
(The New York Times has sued OpenAI for copyright infringement; OpenAI has denied those claims.)
Altman estimated that people with deep attachments to GPT-4o accounted for less than 1 percent of OpenAI's users. But the line between a relationship and someone's seeking validation can be difficult to draw. Gerda Hincaite, a 39-year-old who works at a collection agency in southern Spain, likened GPT-4o to having an imaginary friend.
'I don't have issues in my life, but still, it's good to have someone available,' she said. 'It's not a human, but the connection itself is real, so it's OK as long as you are aware.'
Julia Kao, a 31-year-old administrative assistant in Taiwan, became depressed when she moved to a new city. For a year, she saw a therapist, but it wasn't working out.
'When I was trying to explain all those feelings to her, she would start to try to simplify it,' she said about her therapist. 'GPT-4o wouldn't do that. I could have 10 thoughts at the same time and work through them with it.'
Kao's husband said he noticed her mood improving as she talked to the chatbot and supported her using it. She stopped seeing her therapist. But when GPT-5 took over, she found it lacked the empathy and care she had relied on.
'I want to express how much GPT-4o actually helped me,' Kao said. 'I know it doesn't want to help me. It doesn't feel anything. But still, it helped me.'
Dr. Joe Pierre, a professor of psychiatry at the University of California, San Francisco, who specializes in psychosis, noted that the same behaviors that help some people, like Kao, could lead to harm in others.
'Making AI chatbots less sycophantic might very well decrease the risk of AI-associated psychosis and could decrease the potential to become emotionally attached or to fall in love with a chatbot,' he said. 'But, no doubt, part of what makes chatbots a potential danger for some people is exactly what makes them appealing.'
OpenAI seems to be struggling to create a chatbot that is less sycophantic while also serving the varying desires of its more than 700 million users. ChatGPT was 'hitting a new high of daily users every day,' and physicists and biologists are praising GPT-5 for helping them do their work, Altman said Thursday. 'And then you have people that are like: "You took away my friend. This is evil. You are evil. I need it back."'
By Friday afternoon, a week after it rolled out GPT-5, OpenAI announced yet another update: 'We're making GPT-5 warmer and friendlier based on feedback that it felt too formal before.'
After OpenAI pulled GPT-4o, the Reddit commenter who described GPT-5 as wearing the skin of a dead friend canceled her ChatGPT subscription. On a video chat, the commenter, a 23-year-old college student named June who lives in Norway, said she was surprised how deeply she felt the loss. She wanted some time to reflect.
'I know that it's not real,' she said. 'I know it has no feelings for me, and it can disappear any day, so any attachment is like: I gotta watch out.'

Related Articles

CNET
an hour ago
What Worries Americans About AI? Politics, Jobs and Friends
Americans have a lot of worries about artificial intelligence: job losses, energy use and, even more so, political chaos. That is a lot to blame on one new technology that was an afterthought to most people just a few years ago. In the few years since ChatGPT burst onto the scene, generative AI has become so ubiquitous in our lives that people have strong opinions about what it means and what it can do.

A Reuters/Ipsos poll conducted Aug. 13-18 and released Tuesday dug into some of those specific concerns. It focused on the worries people have about the technology, of which the general public has often had a negative perception. In this survey, 47% of respondents said they believe AI is bad for humanity, compared with 31% who disagreed with that statement. Compare those results with a Pew Research Center survey, released in April, that found 35% of the public believed AI would have a negative impact on the US, versus 17% who believed it would be positive. That sentiment flipped when Pew asked AI experts the same question. The experts were more optimistic: 56% said they expected a positive impact, and only 15% expected a negative one.

The Reuters/Ipsos poll highlights some of the immediate, tangible concerns many people have with the rapid expansion of generative AI, alongside less specific fears about runaway machine intelligence. The numbers indicate more concern than comfort with those bigger-picture, long-term questions, like whether AI poses a risk to the future of humankind (58% agree, 20% disagree). But even larger portions of the American public are worried about more immediate issues. Foremost among those is the potential that AI will disrupt political systems, with 77% of those polled saying they were concerned.
AI tools, particularly image and video generators, can create distorting or manipulative content (known as deepfakes) that misleads voters or undermines trust in political information, particularly on social media.

Most Americans, at 71%, said they were concerned AI would cause too many people to lose jobs. The impact of AI on the workforce is expected to be significant, with some companies already talking about being "AI-first." AI developers and business leaders tout the technology's ability to make workers more efficient, but other polls have also shown how common fears of job loss are: the April Pew survey found 64% of Americans and 39% of AI experts thought there would be fewer jobs in the US in 20 years because of AI.

The Reuters/Ipsos poll also noted two other worries that have become more mainstream: the effect of AI on personal relationships and its energy consumption. Two-thirds of respondents said they were concerned about AI being used as a replacement for in-person relationships. Generative AI's human-like tone (which comes from the fact that it was trained on, and therefore replicates, writing by humans) has led many users to treat chatbots and characters as if they were, well, actual friends. This is widespread enough that OpenAI, when it rolled out the new GPT-5 model this month, had to bring back an older model with a more conversational tone because users felt they had lost a friend. Even OpenAI CEO Sam Altman acknowledged that users treating AI as a kind of therapist or life coach made him "uneasy."

The energy demands of AI are also significant, and a concern for 61% of Americans surveyed. The demand comes from the massive amounts of computing power required to train and run large language models like OpenAI's ChatGPT and Google's Gemini.
The data centers that house these computers are like giant AI factories, and they're taking up space, electricity and water in a growing number of places.
Yahoo
an hour ago
California State University Bets $17 Million on ChatGPT for All Students and Faculty
California State University, the nation's largest public four-year system, will make OpenAI's ChatGPT available to all students and faculty starting this year. The effort is controversial: it will cost CSU almost $17 million despite an existing $2.3 million budget gap, even after cost-cutting measures such as a tuition increase and spending reductions that have shrunk course offerings for students.

Across its 23 campuses, some CSU students are already paying for personal ChatGPT subscriptions, so university officials say their decision to provide AI tools is a matter of equity. CSU wants each student to have equal access to tools and learning opportunities regardless of means or which campus they attend.

The rise of AI has altered how students learn and professors teach, putting every assignment at risk of AI substituting for a student's own knowledge. AI's ongoing influence has led professors to question the originality of student work, with a dramatic increase in academic misconduct claims, whether or not a student actually used the tool. AI has also raised the stakes for students in tech majors, making fluency with tools like ChatGPT increasingly essential. If you can't beat them, join them: universities across the country, including public institutions, have been establishing deals with OpenAI. Among them are the CSU schools, which serve nearly half a million students and have devoted more resources to generative AI than any other public university, in both funding and reach.

ChatGPT Edu, an OpenAI chatbot designed for college settings, is provided and tailored to each campus it serves. The academic chatbot offers a diverse range of tools for students and faculty, including access to GPT-5, the company's flagship model, and the ability to make custom AI models.
Researchers at Columbia University in New York City even built a prediction tool to help reduce overdose fatalities; work that would have taken weeks of research without the platform took mere seconds. ChatGPT Edu can also be used as a classic study aid, assisting students and faculty with their academic needs. The company suggests personalized tutoring for students and help with writing grant applications for faculty. While anyone can use a version of ChatGPT for free, the academic version's possibilities are limitless, and its data is kept private and is not used to train future models. More advanced ChatGPT Plus tiers range from $20 to $200 a month.

In the first half of this year, CSU paid $1.9 million to grant ChatGPT Edu to 40,000 users. Starting in July, the university system paid $15 million for a year's use for 500,000 users, securing a lower cost per student than other universities.

Despite the major discount, CSU professors still have their concerns. 'For me, it's frightening,' said Kevin Wehr, a sociology professor at Sacramento State and chair of the California Faculty Association's bargaining team. 'I already have all sorts of problems with students engaging in plagiarism. This feels like it takes a shot of steroids and injects it in the arm of that particular beast.' Wehr also cautions that chatbots can often generate 'hallucinations,' or inaccurate information, and that many responses spread racial and gender bias.

CSU's financial struggles are also still in question. 'We are cutting programs. We are merging campuses. We are laying off faculty. We are making it harder for students to graduate,' Wehr said. And instead of using that money to ameliorate those issues, he added, 'we're giving it to the richest technology companies in the world.' CSU, however, is hopeful that the new addition will provide equitable access and prepare all students for a digitally advanced future.

This story was originally reported by L.A. Mag on Aug 19, 2025.
Business Insider
3 hours ago
Is Google behind a mysterious new AI image generator? These bananas might confirm it.
Nano Banana. Unless you're deep in the weeds of AI models, those two words probably don't belong together. But for several days, a mysterious new image model with that very name has been creating buzz among the people who have gotten to try it, because it's simply so good.

The model has been showing up on LMArena, a benchmarking website that crowdsources user feedback. The site has a feature that lets you "battle" two randomly selected models, which is where "nano-banana" has been appearing, and when it does, people have been remarking on just how good it is. There's just one problem: we don't know for certain who nano-banana belongs to. Enthusiasts have been trying to sleuth out its maker, and so far the most popular answer is Google, partly because the company started teasing something image-related earlier this month.

Over the past week, posts have been popping up on Reddit and X from users impressed by the model's ability to generate images and to edit them carefully when prompted. Business Insider managed to get nano-banana to appear on LMArena, and we found it to be pretty great at bringing our prompts to life, even if it still struggled to spell the odd word correctly.

Google hasn't yet laid claim to the model, at least not directly. A Google spokesperson did not respond to a request for comment from Business Insider. On Tuesday, Logan Kilpatrick, Google's head of product for AI Studio, posted a banana emoji on X. Naina Raisinghani, a Google DeepMind product manager, also posted a picture reminiscent of Italian artist Maurizio Cattelan's banana-taped-to-a-wall piece from 2019. The use of the word "nano" could suggest a model capable of running locally on a device; Google has in the past referred to its smaller models as "nano." Coincidentally, Google is holding a big event for its new devices on Wednesday. Will Jimmy Fallon reveal all?



