OpenAI Brings GPT-4o Back After Users Revolt Over GPT-5

Gulf Insider · 2 days ago
Updates to ChatGPT: You can now choose between 'Auto', 'Fast', and 'Thinking' for GPT-5. Most users will want Auto, but the additional control will be useful for some people. Rate limits are now 3,000 messages/week with GPT-5 Thinking, and then extra capacity on GPT-5 Thinking… — Sam Altman (@sama) August 13, 2025
OpenAI has brought back GPT-4o following the rollout of its latest GPT-5 model, after users complained that the new model was lame in comparison. The company advertised GPT-5 as its 'smartest, fastest, most useful model yet,' saying it uses a 'real-time router' to switch between more efficient models for basic questions and deeper reasoning for more complex requests.
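OpenAI has not published how that router actually works. The short Python sketch below is purely illustrative of the general idea: a cheap heuristic (standing in for whatever learned classifier the real system uses) decides whether a prompt goes to a fast tier or a deeper-reasoning tier. The model names and the threshold are made up for this example.

```python
# Illustrative only: a toy stand-in for a "real-time router" that sends
# simple prompts to a fast model and complex ones to a reasoning model.
# The model identifiers and heuristic below are assumptions, not OpenAI's.

FAST_MODEL = "gpt-5-fast"           # hypothetical name for the quick, efficient tier
REASONING_MODEL = "gpt-5-thinking"  # hypothetical name for the deeper-reasoning tier

def estimate_complexity(prompt: str) -> float:
    """Crude stand-in for the learned classifier a real router would use."""
    signals = ["prove", "step by step", "analyze", "debug", "derive", "plan"]
    score = sum(word in prompt.lower() for word in signals)
    score += len(prompt) / 2000.0   # longer prompts tend to need more reasoning
    return score

def route(prompt: str) -> str:
    """Pick a model tier for the prompt."""
    return REASONING_MODEL if estimate_complexity(prompt) >= 1.0 else FAST_MODEL

if __name__ == "__main__":
    print(route("What's the capital of France?"))                   # -> gpt-5-fast
    print(route("Prove this algorithm terminates, step by step."))  # -> gpt-5-thinking
```

A production router would rely on a trained classifier and live capacity signals rather than keyword matching, but the routing decision it makes is of this shape.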
During a Reddit AMA, OpenAI CEO Sam Altman said that GPT-5's writing quality is better than that of previous models, answering 'briefly and dryly,' according to Engadget, only for several Redditors to counter that the new model felt 'sterile' and 'much worse.'
'We for sure underestimated how much some of the things that people like in GPT-4o matter to them, even if GPT-5 performs better in most ways,' Altman posted on X.
The return of GPT-4o was celebrated, but there's still no guarantee that OpenAI will keep its older model around indefinitely. In the same X post, Altman said that OpenAI 'will watch usage as we think about how long to offer legacy models for.' In the meantime, OpenAI is focusing on finishing the GPT-5 rollout and making changes that will 'make it warmer.' However, for users who have grown attached to GPT-4o as more than just an AI chatbot, this could be the beginning of the end.
OpenAI called GPT-5 a 'significant upgrade' with PhD-level intelligence and amazing coding skills, only for users to complain immediately.
'I've been trying GPT5 for a few days now. Even after customizing instructions, it still doesn't feel the same. It's more technical, more generalized, and honestly feels emotionally distant,' wrote one Redditor. 'Kill 4o isn't innovation, it's erasure.'
'Sure, 5 is fine—if you hate nuance and feeling things,' wrote another user.
On Friday, Altman took to X to say that the company would keep the previous model running for Plus users, and promised to implement fixes to improve GPT-5's performance and user experience.
Altman also promised to double GPT-5 rate limits for ChatGPT Plus users, saying 'We will continue to work to get things stable and will keep listening to feedback.'
As Wired notes further: The backlash has sparked a fresh debate over the psychological attachments some users form with chatbots trained to push their emotional buttons. Some Reddit users dismissed complaints about GPT-5 as evidence of an unhealthy dependence on an AI companion.
In March, OpenAI published research exploring the emotional bonds users form with its models. Shortly after, the company issued an update to GPT-4o after the model became too sycophantic.
'It seems that GPT-5 is less sycophantic, more 'business' and less chatty,' says Pattie Maes, a professor at MIT who worked on the study. 'I personally think of that as a good thing because it is also what led to delusions, bias reinforcement, etc. But unfortunately many users like a model that tells them they are smart and amazing, and that confirms their opinions and beliefs, even if [they are] wrong.'
Also read: Swedish PM Slammed After Admitting He Uses ChatGPT To Help Run Government

Related Articles

The Era Of Online Age Checks Is Here - How Does It Work?

Gulf Insider · 5 days ago

In the United States, at least 24 states have already passed laws requiring pornography sites to verify users' ages, according to the Age Verification Providers Association. A handful of countries, including Germany, France, Australia, and Ireland, have implemented age verification to access specified content, from social media access to pornography. At the end of July, the UK rolled out the most comprehensive national system so far.

How does age verification work in practice? What are the loopholes? And how might it reshape the internet? Here's what the experts say.

Age‑verification systems range from uploading a photo of an identification such as a driver's license to advanced biometric scans. The Age Verification Providers Association lists several approved methods for age checks, including mobile phone account verification, credit database matching, transactional records, and digital ID apps. Some platforms ask users to upload a government‑issued ID, while others rely on mobile phone account data, banking or credit records, or digital ID apps to confirm age. Increasingly, sites are turning to biometric solutions, such as facial analysis that estimates age from a selfie or a brief movement check.

Reddit, for example, uses the third‑party service Persona to verify either an ID or a live selfie, while Discord relies on k‑ID, which confirms age by analyzing facial movements. X combines internal account signals with optional ID checks. Porn sites like Pornhub offer a mix of options, such as requiring a photo ID or running a credit card check before users can view sexually explicit material.

Mary Ann Miller, vice president and fraud adviser at Prove, a digital identity verification platform, said that age verification will become more standard and required over the next 24 months. Miller said that simpler methods include uploading a government-issued ID that is sometimes checked for authenticity or a selfie taken to ensure identity accuracy and that the person is alive.

'Other methods use solutions that leverage technology that uses the phone as a proxy for our identity since we have them with us 'all the time' and determine the assurance and trust of the person presenting information or attesting their age or attesting for a child's age as part of parental consent,' she told The Epoch Times by email. 'Other methods include age estimation from facial recognition or other data sources.'

In terms of which methods are most reliable, Miller pointed to those that 'can use passive techniques to determine identity assurance first, then age verification as part of an identity flow.' Passive identity assurance techniques verify a user's identity without requiring the user to actively perform actions—such as entering a password or scanning a fingerprint—by using data already available to infer age, including credit cards, IP addresses, or other information.

Businesses in the near future will have to overhaul their age‑verification systems to meet stricter standards, rather than relying on low‑accuracy or patchwork identity checks. 'What has taken many businesses by surprise is that when they try to apply age verification with low-accuracy identity checks or the absence of identity checks, they have to 'go back to the drawing board' on both aspects,' she said.

Biometric age estimation can be conducted using facial analysis. Other methods include voice blueprints, gestures, and keystrokes (how you type). These methods are currently less well-developed than facial analysis but are progressing quickly.

Derek Jackson, chief operations officer and cofounder of Cyber Dive, a tech company founded with the mission of keeping children safe online, told The Epoch Times by email that facial biometrics are 'newer but catching on quickly.' 'They estimate your age by analyzing your facial features, cheekbones, eye spacing, skin tone, in real time,' Jackson said. 'Voice biometrics and keystroke patterns are even newer. They try to match your unique patterns, your voice pitch, how fast or slow you type to known age profiles.' He said that facial recognition is growing quickly because 'it's simple, fast, and surprisingly effective.'

Users remain wary about sharing personal data online, especially government IDs or biometrics. Denis Vyazovoy, chief product officer of AdGuard VPN, said that some platforms attempt to be more privacy-aware by, for example, not permanently storing selfies or ID documents or keeping data for just seven days. 'But even with such reassurances, trust is low,' he told The Epoch Times by email. 'Even though platforms claim that facial data or ID scans are not stored long-term, people remain wary, and rightfully so. The truth is, any method that requires biometric data, government ID, or sensitive financial information introduces serious privacy risks.'

The UK's Online Safety Act does not mandate a single method of age verification. The UK's tech regulator Ofcom, which is in charge of policing the law, just requires companies to implement highly effective age assurances. The law focuses on keeping under‑18s out of adult spaces but does not tell companies how to achieve this goal, leaving firms to choose their own verification systems as long as they are 'highly effective.' But failure to implement a system can result in financial penalties of up to 10 percent of a service's qualifying worldwide revenue, or 18 million pounds ($23.9 million), whichever is greater. The Online Safety Act is a UK-specific law, but it affects U.S. and global companies with no legal presence in the country.
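Miller's 'passive first, then active' ordering can be pictured as a short decision flow. The Python sketch below is only an illustration of that idea; the signal names, the adult-age threshold, and the fallback order are assumptions made for this example, not any provider's actual logic or API.

```python
# Illustrative sketch of a "passive first, then active" age-assurance flow.
# All fields, thresholds, and fallbacks here are assumptions for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class UserSignals:
    has_credit_record: bool                   # e.g. a match in a credit database
    mobile_account_age_years: Optional[int]   # how long the user has held their phone contract
    selfie_estimated_age: Optional[int]       # output of a facial age-estimation step, if taken

def passive_check(signals: UserSignals, min_age: int = 18) -> Optional[bool]:
    """Try to infer adulthood from data already on file, with no user action."""
    if signals.has_credit_record:
        return True   # a credit record is treated here as an adult signal
    if signals.mobile_account_age_years is not None and signals.mobile_account_age_years >= min_age:
        return True   # a contract held for 18+ years necessarily belongs to an adult
    return None       # inconclusive: fall through to an active check

def verify_age(signals: UserSignals, min_age: int = 18) -> bool:
    """Passive checks first, then active (biometric or ID) fallbacks."""
    result = passive_check(signals, min_age)
    if result is not None:
        return result
    if signals.selfie_estimated_age is not None:
        return signals.selfie_estimated_age >= min_age
    return False      # no usable signal: the site would now ask for a government ID

print(verify_age(UserSignals(True, None, None)))   # True via passive credit match
print(verify_age(UserSignals(False, None, 25)))    # True via facial age estimate
```

The point of the ordering is that the more intrusive checks (ID upload, facial scans) only happen when the passive signals are inconclusive.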

Swedish PM Slammed After Admitting He Uses ChatGPT To Help Run Government

Gulf Insider · 6 days ago

First we learn that doctors are using ChatGPT to treat patients. Now, Swedish Prime Minister Ulf Kristersson is taking a heaping ration of Lutfisk for admitting he's been using ChatGPT to help run the government.

Speaking with a Nordic news site, Kristersson said that he sometimes asks ChatGPT for a 'second opinion' when it comes to governance strategies. 'I use it myself quite often,' he said. 'If for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions.'

Kristersson's comments predictably came under fire. 'The more he relies on AI for simple things, the bigger the risk of overconfidence in the system,' Virginia Dignum, a professor of responsible artificial intelligence at Umeå University, told DiGITAL. 'It is a slippery slope. We must demand that reliability can be guaranteed. We didn't vote for ChatGPT.'

'Too bad for Sweden that AI mostly guesses,' wrote Aftonbladet's Signe Krantz. 'Chatbots would rather write what they think you want than what you need to hear.'

'You have to be very careful,' Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, noting that people shouldn't submit sensitive information to GPT.

As Gizmodo opines: Krantz makes a good point, which is that chatbots can be incredibly sycophantic and delusional. If you have a leader asking a chatbot leading questions, you can imagine a scenario in which the software program's algorithms only serve to reinforce that leader's existing prerogatives (or to push them further over the edge into uncharted territory). Thankfully, it doesn't seem like a whole lot of politicians feel the need to use ChatGPT as a consigliere yet.

Kristersson spokesman Tom Samuelsson 'clarified' that the PM doesn't take risks in his use of AI. 'Naturally it is not security sensitive information that ends up there. It is used more as a ballpark,' he said.
