Hooray! ChatGPT Plus brings back legacy models alongside an updated GPT-5 experience

GPT-5 has faced a wave of criticism recently, both from everyday users and reviewers like our very own Calvin Wankhede here at Android Authority. Much of this feedback centered on the new model feeling more curt and having less personality. OpenAI responded quickly, addressing performance, personality, and usage limit issues — improving the overall experience significantly. Now, a fresh update makes things even better, at least for ChatGPT Plus subscribers.
OpenAI has greatly expanded GPT-5's functionality by adding more modes: Auto, Fast, Thinking Mini, and Thinking Pro. Personally, I find GPT-5 better than most alternatives in certain scenarios, though your preference may vary. I tend to favor its concise, down-to-earth tone, which feels less sycophantic than GPT-4o's.
Of course, not everyone agrees. If you still miss the old models or remain unimpressed with the latest upgrades, the good news is that several legacy models are once again available:
GPT-4o: The previous default model, designed for general use.
GPT-4.1: Slightly better for specific, detailed queries, though similar to 4o overall.
o3: Formerly the go-to model for deeper questions, philosophical reflection, and more.
o4-mini: A thinking model similar to o3, but better suited for simpler queries.
Honestly, I think this is a great move, and it makes clear that OpenAI, at least, is trying to listen to its customers. While GPT-5 will continue to improve in the coming weeks and months, having more models (and the flexibility they provide) isn't a bad thing.

Related Articles

OpenAI's Greg Brockman says it's not too late to build AI startups

Business Insider • 16 minutes ago

If you're dreaming of joining the AI startup race, it might not be too late to start.

"Sometimes it might feel like all the ideas are taken, but the economy is so big," Greg Brockman, OpenAI's cofounder and president, said in an episode of the "Latent Space" podcast released on Saturday. "It is worthwhile and really important for people to really think about how do we get the most out of these amazing intelligences that we've created."

Brockman said startups that connect large language models to real-world applications are extremely valuable. Brockman, who cofounded OpenAI in 2015, added that domains like healthcare require founders to think about all the stakeholders and how they can insert AI models into the existing system. "There is so much fruit that is not yet picked, so go ahead and ride the GPT river," he said.

Brockman also advised founders against building "better wrappers." "AI wrapper" is a dismissive term for simple applications built on top of existing AI models that can easily be offered by LLM companies themselves. "It's really about understanding a domain and building up expertise and relationships and all of those things," Brockman said.

Brockman's comments are part of a Silicon Valley debate about how new AI founders can future-proof their startup ideas. Last year, OpenAI CEO Sam Altman said his company would "steamroll" any startup building "little things" on top of its model. He said that companies that underestimate the speed of AI model growth risk becoming part of the "OpenAI killed my startup" meme.

In a June podcast, Instagram cofounder and Anthropic's chief product officer, Mike Krieger, offered some advice for startups that want to avoid being made obsolete by LLM companies. Startups with deep knowledge in areas like law or biotechnology, and those with good customer relationships, can survive the AI giants, Krieger said. He also suggested that startups play with new AI interfaces that feel "very weird" at first.

"I don't envy them," he added of founders wanting to build in the AI space. "Maybe that's part of the reason why I wanted to join a company rather than start one."

7 Terrifying AI Risks That Could Change The World

Forbes • an hour ago

There's no doubt about it, AI can be scary. Anyone who says they aren't at least a little bit worried is probably very brave, very stupid, or a liar. That makes sense, because the unknown is always frightening, and when it comes to AI there are a lot of unknowns. How exactly does it work? Why can't we explain certain phenomena like hallucinations? And perhaps most importantly, what impact is it going to have on our lives and society?

Many of these fears have solidified into debates around particular aspects of AI, such as its impact on human jobs, creativity, or intellectual property rights. And those involved often make it clear that the potential implications are terrifying. So here I will give an overview of what I have come to see as some of the biggest fears: potential outcomes of the AI revolution that no one wants to see, but that we can't be sure aren't lurking around the corner.

1. Impact On Jobs

One of the most pressing fears, and perhaps the one that gets the most coverage, is that huge swathes of us will be made redundant by machines that are cheaper to run than human workers. Having robots do all the work for us sounds great, but in reality most people need a job to earn a living. Some evangelize about a post-scarcity economy where robot labor creates an abundance of everything we need, but this is highly theoretical. What's real is that workers in fields as diverse as software engineering, voice acting, and graphic design are already reportedly being replaced. Fueling this fear is the fact that while international bodies and watchdogs like the WEF have issued warnings about the potential threat, governments have been slow to come out with plans for a centralized, coordinated response.

2. Environmental Harm

Operating generative AI language models requires huge amounts of compute power. This is provided by vast data centers that burn through energy at rates comparable to small nations, creating harmful emissions and noise pollution, and consuming massive amounts of water at a time when water scarcity is an increasing concern. Critics of the idea that AI's environmental harm outweighs its benefits often argue that this damage will be offset by efficiencies AI will create. But again, many of those advances are currently theoretical, while the environmental impact of AI is happening today.

3. Surveillance

The threat AI poses to privacy is at the root of this one. With its ability to capture and process vast quantities of personal information, there's no way to predict how much it might know about our lives in just a few short years. Employers increasingly monitoring and analyzing worker activity, the growing number of AI-enabled cameras on our devices and in our streets, vehicles, and homes, and police forces rolling out facial-recognition technology all raise the anxiety that soon no corner will be safe from prying AIs.

4. Weaponization

Another common and entirely rational fear is that AI will be used to create weapons unlike anything seen before outside of science fiction. Robot dogs have been deployed in the Ukraine war for reconnaissance and logistics, and autonomous machine guns are capable of targeting enemies on a battlefield and shooting when given human authorization. Lethal autonomous AI hasn't yet been deployed, as far as we know, but the fear is that this is inevitably just a matter of time. From computer-vision-equipped hunter-killer drones to AI-powered cyberattacks capable of knocking out critical infrastructure across entire regions, the possibilities are chilling.

5. Intellectual Property Theft

If you're an author, artist, or other creative professional, you may be among the many frustrated that multinational technology companies can train their AIs on your work without paying you a penny. This has sparked widespread protest and backlash, with artists and their unions arguing that tech companies are effectively monetizing their stolen IP. Legal debate and court cases are in progress, but with the likes of OpenAI and Google throwing huge resources into their hunt for more and more training data, there are legitimate fears that the rights of human creators might be overlooked.

6. Misinformation

AI enables and accelerates the spread of misinformation, making it quicker and easier to disseminate, more convincing, and harder to detect. This ranges from deepfake videos of world leaders saying or doing things that never happened to conspiracy theories flooding social media as stories and images designed to go viral and cause disruption. The aim is often to destabilize by undermining trust in democratic institutions, scientific consensus, or fact-based journalism. One very scary factor is that the algorithmic nature of AI reinforces existing views by serving up content individuals are likely to agree with. This can trap them in "echo chambers" and push them toward fringe or extremist beliefs.

7. AI Will Hurt Us

From Mary Shelley's Frankenstein, via 2001: A Space Odyssey, The Terminator, and The Matrix, cautionary tales have warned us of the potential dangers of giving our creations the power of thought. Right now, the gulf between fiction and reality still seems uncrossable; it's hard to see how we would get from ChatGPT to machines intent on, or even capable of, maliciously harming us. But the threat of "runaway AI," where AI begins developing and evolving by itself in ways that might not be aligned with our best interests, is treated very seriously. Many leading AI researchers and alliances have spoken openly about the need for safeguards and transparency to prevent unknowable circumstances from emerging in the future. While this may seem a more distant and perhaps fanciful threat than some of the others covered here, it's certainly not one that can be ignored.

Ultimately, fear alone is not a strategy. While it is vital to acknowledge and address the risks of AI, it is equally important to focus on building the safeguards, governance frameworks, and ethical guidelines that can steer this technology toward positive outcomes. By confronting these fears with informed action, we can shape a future where AI serves humanity rather than threatens it.
