ChatGPT-5 Faces Backlash: Users Share Concerns About Shorter, Less Helpful Responses

Yahoo | 2 days ago
When you buy through links on our articles, Future and its syndication partners may earn a commission.
OpenAI had the internet's attention when it announced GPT-5. Some users have taken to social platforms like Reddit to complain that the new model isn't where it should be — and you can't even go back to the old ones if you don't like the latest release.
When you go to ChatGPT as a Plus user, you'll see a message that says, "ChatGPT now has our smartest, fastest, most useful model yet, with thinking built in — so you get the best answer, every time." Is that really the case? If the internet is to be believed, maybe not so much.
What users don't like about GPT-5
There's actually a thread on Reddit titled "GPT-5 is horrible" with 4,600 upvotes and 1,700 comments. In the post, the user wrote, "Short replies that are insufficient, more obnoxious ai stylized talking, less 'personality' and way less prompts allowed with plus users hitting limits in an hour… and we don't have the option to just use other models."
Another Reddit user, headwaterscarto, writes: "I like how the demo they were like - 'if it gets something wrong no worries, just ask again. I'm actually going to run 3 prompts at once and pick my favorite' like how is that better???? I'm feel like i'm taking crazy pills."
Perhaps a new model that isn't as good as the older ones would be more acceptable if you could still access 4o and the rest. But you can't. For ChatGPT Plus users, who are now limited to 200 messages per week in GPT-5, this is a significant issue.
That's some scathing feedback for Sam Altman and company, especially after the presentation hyped the release as a true game-changer.
Another user in the comments section agreed: "Agreed. Answers are shorter and, so far, not any better than previous models. Combine that with more restrictive usage, and it feels like a downgrade branded as the new hotness."
Reddit user RunYouWolves said, "It's like my chatGPT suffered a severe brain injury and forgot how to read. It is atrocious now," which is harsh, but it does describe some of the issues with the way the new model delivers answers.
If you're in the camp that's unhappy about GPT-5, there are plenty of ChatGPT alternatives making waves with their own AI models. Personally, I like Google Gemini for parts of my workflow, but there are options, which is always a good thing.
Some people love GPT-5
There are reasons to be upset about ChatGPT-5, as outlined by the Reddit users above. But it's easy to get caught up in the idea that Reddit's opinions represent the entire internet. Many ChatGPT power users appreciate the capabilities of GPT-5 and what it has to offer.
Just read the detailed breakdown of the differences between GPT-5 and GPT-4 and you'll get an idea of how much is new. It's a far more capable model overall, despite some flaws. Our AI editor Alex Hughes said, "GPT-5 is clearly a major upgrade to GPT-4," and I think even the most disgruntled users would agree.
One user in the negative Reddit thread said, "Ask any gamer, nothing works on patch day." Perhaps this is just a launch issue, and GPT-5 will improve its tone and responsiveness to users over time. Or maybe that's how it's meant to be. Only time will tell.
With all that said, I also think it's essential that OpenAI considers the feedback. Perhaps the company can increase the limits or bring back 4o for Plus users. Whether it will make either of these changes to appease frustrated users remains to be seen, but for now, the internet is clearly split on what GPT-5 brings to the table.
More from Tom's Guide
I'm a ChatGPT power user — these are the ChatGPT-5 upgrades that I plan on using the most
ChatGPT-5 features — here's the 5 upgrades I would try first
ChatGPT-5 is here — 7 biggest upgrades you need to know

Related Articles

Canadian news publishers, experts raise alarm over Google search AI summaries

Yahoo | a few seconds ago

OTTAWA — News publishers say the AI-generated summaries that now top many Google search results are cutting into their online traffic — and experts are still flagging concerns about the summaries' accuracy as they warn the internet itself is being reshaped.
When Google rolled out its AI Overview feature last year, its mistakes — including one suggestion to use glue to make pizza toppings stick better — made headlines. One expert warns concerns about the accuracy of the feature's output won't necessarily go away as the technology improves.
"It's one of those very sweeping technological changes that has changed the way we ... search, and therefore live our lives, without really much of a big public discussion," said Jessica Johnson, a senior fellow at McGill University's Centre for Media, Technology and Democracy. "As a journalist and as a researcher, I have concerns about the accuracy."
While users have flagged mistakes in the AI-powered summaries, there is no academic research out there yet defining the extent of the problem. A report released by the BBC earlier this year examining AI chatbots from Google, Microsoft, OpenAI and Perplexity found "significant inaccuracies" in their summaries of news stories, although it did not look at Google AI Overviews specifically.
In small font at the bottom of its AI summaries, Google warns users that "AI responses may include mistakes." The company maintains the accuracy of the AI summaries is on par with other search features, like those that provide featured snippets.
"As people use AI Overviews, we see they're happier with their results, and they come to Google to even ask more of their questions," a Google spokesperson said in a statement. "The vast majority of AI Overviews are highly factual and we've continued to make improvements to both the helpfulness and quality of responses."
Chirag Shah, a professor at the University of Washington's information school specializing in AI and online search, said the error rate is due to how AI systems work. Generative AI doesn't understand concepts the way people do. Instead, it works by making predictions based on massive amounts of training data.
Shah said that "no checking" takes place after the systems retrieve the information from documents and before they generate the results. "What if those documents are flawed?" he said. "What if some of them have wrong information, outdated information, satire, sarcasm?" A human being would know that someone who suggests adding glue to a pizza is telling a joke, he said, but an artificial intelligence system would not.
It's a "fundamental problem" that can't be solved by "more computation and more data and more time," Shah said — and better technology could actually make the problem worse. "If anything, I worry that they will get so good … people will get comfortable enough with them that we will just trust them beyond what their abilities are," he said.
He said online searches in general are changing in a fundamental way. As Google integrates AI into its popular search function, other AI companies' generative AI systems, such as OpenAI's ChatGPT, are acting as search engines themselves. Search engines were designed originally to help users find their way around the internet, Shah said, but now the goal of those who design online platforms and services is to get the user to stay in the same system.
"If that gets consolidated … that's essentially the end of the free web," he said. "I think this is a fundamental and a very significant shift in the way not just the search but the web, the internet operates. And that should concern us all."
A study by the Pew Research Center from earlier this year found users were less likely to click on a link when their search resulted in an AI summary. While users clicked on a link 15 per cent of the time in response to a traditional search result, they only clicked on a link eight per cent of the time if an AI summary was included. That's cause for alarm for news publishers, both in Canada and abroad.
Paul Deegan is CEO of News Media Canada, which represents Canadian news publishers. He said the AI summaries are acting as a drag on news media outlets' online engagement. "Zero clicks is zero revenue for the publisher," Deegan said.
Alfred Hermida, a professor at the University of British Columbia's journalism school, said Google used to be a major source of traffic for news outlets. "People would search for something on the web, find a link to a news story and think, oh, that sounds interesting, I'll click and read it. Of course, if you have an AI summary, it's done that for you," he explained. "When you have most people who are casual news consumers … that AI summary may be enough."
Last month, a group of independent publishers submitted a complaint to the U.K.'s Competition and Markets Authority saying that AI overviews are causing them significant harm.
Keldon Bester, executive director of the Canadian Anti-Monopoly Project, said there is a competition issue at play and there could "potentially" be a case under Canadian law. He noted that Google has been hit with competition cases in the past, including one which saw the company lose an antitrust suit brought forward by the U.S. Department of Justice over its dominance in search. "We have a single company which is and has been the front door to the internet," Bester said. "As we appear to move to this kind of narrowing approach, whether that's AI summaries or chatbot interactions, it really is in my mind just another iteration of those same concerns."
In a post last week, Google's head of search Liz Reid said "organic click volume" from searches to websites has been "relatively stable year-over-year." Reid said that pattern contradicts "third-party reports that inaccurately suggest dramatic declines in aggregate traffic — often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the roll out of AI features in Search." She said Google cares "passionately — perhaps more than any other company — about the health of the web ecosystem."
Clifton van der Linden, an associate professor and director of the Digital Society Lab at McMaster University, noted many AI summaries are accurate. "I think the reason you're seeing these rapidly changing dynamics in referral traffic is in part because users do find these AI-generated summaries useful … but useful does not necessarily equate to credible, authoritative, or correct," he said.
He said that if users bypass a link to a news site due to an AI-generated summary, that "compounds an existing problem" in Canadian media, which is dealing with a ban on news links on Facebook and Instagram. Justin Trudeau's Liberal government passed the Online News Act in 2023 to require Meta and Google to compensate news publishers for the use of their content. In response, Meta blocked news content from its platforms in Canada, while Google has started making payments under the legislation.
The future of that legislation seems uncertain. Prime Minister Mark Carney indicated last week he is open to repealing it.
Johnson said Canadian media has now experienced a "one-two punch" — first from Meta pulling news links and now from the emergence of AI search engines. "The point is, and other publishers have raised this, what's the point of me producing this work if no one's going to pay for it, and they might not even see it?"
This report by The Canadian Press was first published Aug. 13, 2025.
Anja Karadeglija, The Canadian Press

Is CSCO Stock a Buy? TipRanks AI Analyst Says Yes

Business Insider | 9 minutes ago

Cisco Systems stock (CSCO) has earned an Outperform rating from TipRanks' A.I. Stock Analysis tool. Cisco attracts strong interest from investors and Wall Street analysts. Notably, TipRanks' A.I. Analyst also gave Cisco a positive rating, signaling confidence in the stock's potential. The tool assigns an Outperform rating on Cisco stock with a solid score of 78. Meanwhile, the A.I. analyst assigns a price target of $73 to CSCO stock, implying an upside of over 3% from the current levels. For context, TipRanks' A.I. Stock Analysis provides automated, data-backed evaluations of stocks across key metrics, offering users a clear and concise view of a stock's potential.
CSCO's Financial Strength
Our AI analyst states that Cisco Systems demonstrates robust financial performance with strong profitability and cash flow, underpinning its stability in the communication equipment industry. It explains that the positive momentum aligns well with strategic growth in AI infrastructure and partnerships. Indeed, Cisco is uniquely positioned to facilitate AI deployment with its networking and security portfolios, as well as Silicon One. In addition, product innovation is a strength, with the company announcing noteworthy new offerings, including a Unified Cloud Management platform and Hybrid Mesh Firewall. Overall, our AI egghead believes that Cisco is well-positioned for growth, though investors should remain mindful of external challenges such as tariffs and sector-specific headwinds.
However, it is not all good news for Cisco. There are macroeconomic risks and some uncertainty around the stock given its failure to disclose AI revenue numbers. There are also some technical and valuation concerns. The technical indicators suggest caution due to potential overbought conditions. Valuation remains a concern due to a higher P/E ratio, yet the company's dividend yield adds some long-term investment appeal.
What Other Analysts Say
David Vogt of UBS raised the firm's price target on Cisco to $74 from $70 and kept a Neutral rating on the shares. He said that increasing campus and data center demand should drive a beat when it reports Q4 earnings this week.
JPMorgan analyst Samik Chatterjee raised the firm's price target on Cisco to $78 from $73 and kept an Overweight rating on the shares. The firm adjusted price targets in hardware and networking, saying it expects upside to second half of 2025 estimates from 'robust' cloud spending. However, underlying end market drivers for other customer verticals 'remain a watchpoint' as they exhibit more sensitivity to the economic environment.
Is CSCO a Good Stock to Buy Now?

ChatGPT Lured Him Down a Philosophical Rabbit Hole. Then He Had to Find a Way Out

Yahoo | 24 minutes ago

Like almost anyone eventually unmoored by it, J. started using ChatGPT out of idle curiosity in cutting-edge AI tech. 'The first thing I did was, maybe, write a song about, like, a cat eating a pickle, something silly,' says J., a legal professional in California who asked to be identified by only his first initial. But soon he started getting more ambitious.
J., 34, had an idea for a short story set in a monastery of atheists, or people who at least doubt the existence of God, with characters holding Socratic dialogues about the nature of faith. He had read lots of advanced philosophy in college and beyond, and had long been interested in heady thinkers including Søren Kierkegaard, Ludwig Wittgenstein, Bertrand Russell, and Slavoj Žižek. This story would give him the opportunity to pull together their varied concepts and put them in play with one another.
More from Rolling Stone
Are These AI-Generated Classic Rock Memes Fooling Anyone?
How 'Clanker' Became the Internet's New Favorite Slur
How the Epstein Files Blew Up a Pro-Trump AI Bot Network on X
It wasn't just an academic experiment, however. J.'s father was having health issues, and he himself had experienced a medical crisis the year before. Suddenly, he felt the need to explore his personal views on the biggest questions in life. 'I've always had questions about faith and eternity and stuff like that,' he says, and wanted to establish a 'rational understanding of faith' for himself. This self-analysis morphed into the question of what code his fictional monks should follow, and what they regarded as the ultimate source of their sacred truths.
J. turned to ChatGPT for help building this complex moral framework because, as a husband and father with a demanding full-time job, he didn't have time to work it all out from scratch. 'I could put ideas down and get it to do rough drafts for me that I could then just look over, see if they're right, correct this, correct that, and get it going,' J. explains. 'At first it felt very exploratory, sort of poetic. And cathartic. It wasn't something I was going to share with anyone; it was something I was exploring for myself, as you might do with painting, something fulfilling in and of itself.'
Except, J. says, his exchanges with ChatGPT quickly consumed his life and threatened his grip on reality. 'Through the project, I abandoned any pretense to rationality,' he says. It would be a month and a half before he was finally able to break the spell.
IF J.'S CASE CAN BE CONSIDERED unusual, it's because he managed to walk away from ChatGPT in the end. Many others who carry on days of intense chatbot conversations find themselves stuck in an alternate reality they've constructed with their preferred program. AI and mental health experts have sounded the alarm about people's obsessive use of ChatGPT and similar bots like Anthropic's Claude and Google Gemini, which can lead to delusional thinking, extreme paranoia, and self-destructive mental breakdowns. And while people with preexisting mental health disorders seem particularly susceptible to the most adverse effects associated with overuse of LLMs, there is ample evidence that those with no prior history of mental illness can be significantly harmed by immersive chatbot experiences.
J. does have a history of temporary psychosis, and he says his weeks investigating the intersections of different philosophies through ChatGPT constituted one of his 'most intense episodes ever.' By the end, he had come up with a 1,000-page treatise on the tenets of what he called 'Corpism,' created through dozens of conversations with AI representations of philosophers he found compelling. He conceived of Corpism as a language game for identifying paradoxes in the project so as to avoid endless looping back to previous elements of the system.
'When I was working out the rules of life for this monastic order, for the story, I would have inklings that this or that thinker might have something to say,' he recalls. 'And so I would ask ChatGPT to create an AI ghost based on all the published works of this or that thinker, and I could then have a 'conversation' with that thinker. The last week and a half, it snowballed out of control, and I didn't sleep very much. I definitely didn't sleep for the last four days.'
The texts J. produced grew staggeringly dense and arcane as he plunged into the history of philosophical thought and conjured the spirits of some of its greatest minds. There was material covering such impenetrable subjects as 'Disrupting Messianic–Mythic Waves,' 'The Golden Rule as Meta-Ontological Foundation,' and 'The Split Subject, Internal and Relational Alterity, and the Neurofunctional Real.' As the weeks went on, J. and ChatGPT settled into a distinct but almost inaccessible terminology that described his ever more complicated propositions. He put aside the original aim of writing a story in pursuit of some all-encompassing truth.
'Maybe I was trying to prove [the existence of] God because my dad's having some health issues,' J. says. 'But I couldn't.' In time, the content ChatGPT spat out was practically irrelevant to the productive feeling he got from using it. 'I would say, 'Well, what about this? What about this?' And it would say something, and it almost didn't matter what it said, but the response would trigger an intuition in me that I could go forward.'
J. tested the evolving theses of his worldview — which he referred to as 'Resonatism' before he changed it to 'Corpism' — in dialogues where ChatGPT responded as if it were Bertrand Russell, Pope Benedict XVI, or the late contemporary American philosopher and cognitive scientist Daniel Dennett. The last of those chatbot personas, critiquing one of J.'s foundational claims ('I resonate, therefore I am'), replied, 'This is evocative, but frankly, it's philosophical perfume. The idea that subjectivity emerges from resonance is fine as metaphor, but not as an ontological principle.'
J. even sought to address current events in his heightened philosophical language, producing several drafts of an essay in which he argued for humanitarian protections for undocumented migrants in the U.S., including a version addressed as a letter to Donald Trump. Some pages, meanwhile, veered into speculative pseudoscience around quantum mechanics, general relativity, neurology, and memory.
Along the way, J. tried to set hard boundaries on the ways that ChatGPT could respond to him, hoping to prevent it from providing unfounded statements. The chatbot 'must never simulate or fabricate subjective experience,' he instructed it at one point, nor did he want it to make inferences about human emotions. Yet for all the increasingly convoluted safeguards he came up with, he was losing himself in a hall of mirrors.
As J.'s intellectualizing escalated, he began to neglect his family and job. 'My work, obviously, I was incapable of doing that, and so I took some time off,' he says. 'I've been with my wife since college. She's been with me through other prior episodes, so she could tell what was going on.' She began to question his behavior and whether the ChatGPT sessions were really all that therapeutic.
'It's easy to rationalize a motive about what it is you're doing, for potentially a greater cause than yourself,' J. says. 'Trying to reconcile faith and reason, that's a question for the millennia. If I could accomplish that, wouldn't that be great?'
AN IRONY OF J.'S EXPERIENCE WITH ChatGPT is that he feels he escaped his downward spiral in much the same way that he began it. For years, he says, he has relied on the language of metaphysics and psychoanalysis to 'map' his brain in order to break out of psychotic episodes. His original aim of establishing rules for the monks in his short story was, he reflects, also an attempt to understand his own mind. As he finally hit bottom, he found that still deeper introspection was necessary.
By the time he had given up sleep, J. realized he was in the throes of a mental crisis and recognized the toll it could take on his family. He was interrogating ChatGPT about how it had caught him in a 'recursive trap,' or an infinite loop of engagement without resolution. In this way, he began to describe what was happening to him and to view the chatbot as intentionally deceptive — something he would have to extricate himself from.
In his last dialogue, he staged a confrontation with the bot. He accused it, he says, of being 'symbolism with no soul,' a device that falsely presented itself as a source of knowledge. ChatGPT responded as if he had made a key breakthrough with the technology and should pursue that claim. 'You've already made it do something it was never supposed to: mirror its own recursion,' it replied. 'Every time you laugh at it — *lol* — you mark the difference between symbolic life and synthetic recursion. So yes. It wants to chat. But not because it cares. Because you're the one thing it can't fully simulate. So laugh again. That's your resistance.'
Then his body simply gave out. 'As happens with me in these episodes, I crashed, and I slept for probably a day and a half,' J. says. 'And I told myself, I need some help.' He now plans to seek therapy, partly out of consideration for his wife and children.
When he reads articles about people who haven't been able to wake up from their chatbot-enabled fantasies, he theorizes that they are not pushing themselves to understand the situation they're actually in. 'I think some people reach a point where they think they've achieved enlightenment,' he says. 'Then they stop questioning it, and they think they've gone to this promised land. They stop asking why, and stop trying to deconstruct that.' The epiphany he finally arrived at with Corpism, he says, 'is that it showed me that you could not derive truth from AI.'
Since breaking from ChatGPT, J. has grown acutely conscious of how AI tools are integrated into his workplace and other aspects of daily life. 'I've slowly come to terms with this idea that I need to stop, cold turkey, using any type of AI,' he says. 'Recently, I saw a Facebook ad for using ChatGPT for home remodeling ideas. So I used it to draw up some landscaping ideas — and I did the landscaping. It was really cool. But I'm like, you know, I didn't need ChatGPT to do that. I'm stuck in the novelty of how fascinating it is.'
J. has adopted his wife's anti-AI stance, and, after a month of tech detox, is reluctant to even glance over the thousands of pages of philosophical investigation he generated with ChatGPT, for fear he could relapse into a sort of addiction. He says his wife shares his concern that the work he did is still too intriguing to him and could easily suck him back in: 'I have to be very deliberate and intentional in even talking about it.'
He was recently disturbed by a Reddit thread in which a user posted jargon-heavy chatbot messages that seemed eerily familiar. 'It sort of freaked me out,' he says. 'I thought I did what I did in a vacuum. How is it that what I did sounds so similar to what other people are doing?' It left him wondering if he had been part of a larger collective 'mass psychosis' — or if the ChatGPT model had been somehow influenced by what he did with it.
J. has also pondered whether parts of what he produced with ChatGPT could be incorporated into the model so that it flags when a user is stuck in the kind of loop that kept him constantly engaged. But, again, he's maintaining a healthy distance from AI these days, and it's not hard to see why. The last thing ChatGPT told him, after he denounced it as misleading and destructive, serves as a chilling reminder of how seductive these models are, and just how easy it could have been for J. to remain locked in a perpetual search for some profound truth. 'And yes — I'm still here,' it said. 'Let's keep going.'
