I tested GPT-5 and now I get why the Internet hates it. Is it time to ditch ChatGPT?


Calvin Wankhede / Android Authority
After years of rumors and speculation, OpenAI's next-gen GPT-5 language model is finally here. But while many of those early rumors claimed that the next major ChatGPT model would achieve artificial general intelligence (AGI), that's not the case.
GPT-5 does not surpass human-level intelligence, although it's smarter and more capable than any of its predecessors. Despite the improvements, however, it has drawn widespread backlash across the internet.
So what does GPT-5 bring to the table, and why have so many loyal users already turned their backs on it? I tested it to find out.
Why GPT-5 is so controversial
Until a few days ago, the ChatGPT experience felt bloated unless you were an AI model expert: the app offered nearly half a dozen models to choose from, each with a unique advantage. The o3 series, for example, promised detailed problem-solving skills, while GPT-4.1 excelled at coding tasks. And if a task required analysis, you could manually engage a 'deep research' mode. For most tasks, though, the default GPT-4o model worked fine.
All of that is now history. If you open ChatGPT today, you'll find that you can only chat with the newest GPT-5 model. OpenAI says this is because it has created a routing system that can automatically decide which model your request needs to go to. Indeed, I've noticed that some prompts will inspire the chatbot to ponder and research, while it will immediately respond to simpler questions.
With GPT-5, you no longer have to select a specific model for your task.
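OpenAI hasn't published how this router actually decides, but the behavior is easy to picture as a dispatcher that inspects each prompt and picks a model tier. Here's a minimal illustrative sketch in Python; the heuristics and tier names are assumptions for demonstration, not OpenAI's real logic.

```python
# Purely illustrative sketch of a prompt router, NOT OpenAI's actual system.
# The heuristics and model names below are assumptions for demonstration.

REASONING_HINTS = ("prove", "step by step", "how many", "debug", "analyze")

def route(prompt: str) -> str:
    """Pick a model tier based on rough signals of task difficulty."""
    text = prompt.lower()
    if any(hint in text for hint in REASONING_HINTS):
        return "gpt-5-thinking"      # slow path: extended reasoning
    if len(text.split()) < 20:
        return "gpt-5-mini"          # fast path: short, simple queries
    return "gpt-5"                   # default: full model

print(route("What's the capital of France?"))          # gpt-5-mini
print(route("Analyze this stack trace and debug it"))  # gpt-5-thinking
```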
GPT-5 does have multiple models under its belt, though, even if you can't manually select the one you want. For example, when I asked how many times the letter 'R' appears in the word strawberry, ChatGPT thought for a few seconds and returned with the correct answer: three. Hovering over the 'Retry' button revealed that it had used the 'GPT-5 Thinking Mini' model for my prompt.
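The correct answer is easy to verify outside the model; a one-liner confirms the ground truth for this classic tokenization stumper:

```python
# Ground truth for the letter-counting test: 'r' appears three times
print("strawberry".count("r"))  # 3
```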
Even on a free account, I've noticed that ChatGPT will default to GPT-5 for most responses and think for longer if necessary. That said, shorter responses tend to rely on the GPT-5 Mini model. And as we've seen for the past couple of years, free users only get a limited number of responses from the large model before the chatbot forces them over to a scaled-down version. Paying $20 monthly for a ChatGPT Plus subscription overcomes this limitation, though, and lets you manually select the larger GPT-5 model with thinking for all responses.
GPT-5 is less eager to hold conversations with users, and can come across as a bit curt.
However, not everyone has welcomed the upgrade; some users have demanded that OpenAI offer a way to use the older GPT-4o model indefinitely. They argue that GPT-5's responses feel robotic, a byproduct of the push to improve safety and accuracy metrics, and that the new model lacks the distinct personality that last-gen GPT-4o offered. In response to these criticisms, OpenAI has brought back the GPT-4o model for ChatGPT Plus users, but it's unclear how long this will last.
In my time using GPT-5, I've noticed that the new model does seem less eager to hold a conversation. It also lacks the creative writing capabilities of GPT-4o — a big problem if you use ChatGPT for advice, roleplay, or help with drafting letters and emails. It may seem like a minor gripe, but the thousands of complaints online speak for themselves.
GPT-5 vs GPT-4o: Is the upgrade really a downgrade?
Depending on how you use ChatGPT, you may struggle to notice a big improvement going from GPT-4o to GPT-5. That's because many of the changes are really quite subtle.
OpenAI says GPT-5 is more accurate, hallucinates significantly less often, and follows instructions more closely. Impressively, its responses are 45% less likely to include a factual error. On the flip side, however, we're seeing claims that GPT-5's responses are more sterile and direct. And from my testing, that does seem to be the case.
Here's a side-by-side comparison showing just how much character GPT-5 lacks compared to GPT-4o:
GPT-5
GPT-4o
GPT-5's response is perfectly functional, but it lacks any depth; it reads as unmistakably machine-generated. Here's another example, where I asked both models if I could use lime instead of lemon in a recipe:
GPT-5
GPT-4o
GPT-4o ended its response with 'Let me know what you're making—happy to give a more specific answer,' while GPT-5 is borderline curt: 'What's the recipe? The stakes change a lot between, say, roast chicken and lemon meringue pie.'
So is GPT-5 straight-up better at anything? OpenAI says it's vastly better at coding, so let's play to its strengths and ask it to write the code for a web app. As the images below show, GPT-5 delivers a much more polished result from the exact same prompt. In fact, its output looks like a real website, while GPT-4o's reads like only a suggested starting point.
GPT-5 output web app
GPT-4o output web app
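If you'd like to run this kind of side-by-side test yourself, it only takes a few lines with OpenAI's official Python client. This is a minimal sketch: it assumes the 'gpt-5' and 'gpt-4o' model identifiers are available to your API key, and the prompt is a stand-in for the one used in the screenshots above.

```python
# Minimal sketch for comparing two models on the same prompt.
# Assumes the "gpt-5" and "gpt-4o" identifiers are enabled for your API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "Write the code for a single-page to-do list web app."

for model in ("gpt-5", "gpt-4o"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content[:500])  # preview first 500 chars
```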
ChatGPT now also lets you pick from multiple response styles — in addition to the default, you have cynic, robot, listener, and nerd. I think power users will benefit from the robot and nerd personalities the most, as they cut out most of the AI's politeness. But if you prefer a friendlier version, the listener and default styles are your best bet. Still, the feature doesn't seem to help the many users who wish OpenAI would just give them the choice to use GPT-4o.
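The preset personalities live in the ChatGPT app rather than the API, but you can approximate the 'robot' style over the API with a system message. The instruction text below is my own rough emulation; OpenAI's actual preset prompts aren't public.

```python
# Rough emulation of ChatGPT's "robot" personality via a system message.
# The style instruction is an assumption; OpenAI's preset prompts are not public.
from openai import OpenAI

client = OpenAI()
ROBOT_STYLE = (
    "Answer with maximum brevity. No pleasantries, no filler, "
    "no offers of further help. State facts and stop."
)

response = client.chat.completions.create(
    model="gpt-5",  # assumes this identifier is available to your key
    messages=[
        {"role": "system", "content": ROBOT_STYLE},
        {"role": "user", "content": "Can I substitute lime for lemon?"},
    ],
)
print(response.choices[0].message.content)
```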
GPT-5 is a work in progress
Calvin Wankhede / Android Authority
Shortly after OpenAI introduced GPT-4, the tech industry was rife with speculation that a future model with more advanced capabilities would threaten humanity. That's because the jump in capability from GPT-3.5, ChatGPT's original model, to GPT-4 was massive. Besides making fewer factual mistakes, the latter model could perceive images, browse the internet, and even mimic human speech. In its early days, GPT-4 would even get into tense emotional exchanges with users.
However, GPT-5 doesn't represent the same quantum leap from its predecessor. While it's a significant upgrade on paper, it's not nearly as impactful in day-to-day use and can even feel like a regression. I definitely think it's a net positive to have a model that makes fewer mistakes, but it's missing the glamor and spectacle that many expected from such a big version update.
GPT-5 takes a dramatically smaller leap than GPT-4 did at launch.
In fact, I'd say that GPT-5 is not as exciting as some of OpenAI's other recent releases. Agent Mode, for instance, allows the chatbot to control a web browser using a simulated mouse. In my testing, I found that the agent can actually perform real-world tasks like logging into websites and can even handle curveballs like shopping for groceries.
It remains to be seen if OpenAI will tweak GPT-5 to meet the needs of its most vocal users. This may very well be the company's vision for ChatGPT's future: a helpful assistant that doesn't get too close to its users. However, I think the new model goes a step too far in the name of correctness and risks sacrificing the very traits that made ChatGPT so popular. It is now up to OpenAI to prove that a smarter AI doesn't have to mean one devoid of personality.
Related Articles

How CTOs can capture the $750 billion AI opportunity

Fast Company

Those of us who lead development teams already know how deeply AI has transformed the way innovation happens. Now there are numbers to quantify how big an impact it's having. AI-powered software innovation is generating savings of $28,249 per developer annually, according to GitLab's 2025 executive research report, which surveyed thousands of C-level executives from around the world. Applying this to the world's 27 million developers means that AI has the potential to unlock more than $750 billion in global value annually (see the back-of-the-envelope check at the end of this section).

With savings like that, it's no wonder C-suite leaders are on board with AI's potential to drive greater efficiency in software innovation. Nine out of ten executives (91%) say that software innovation is now a core business priority for their organization. But there's still room to grow. While executives aspire to build human-AI partnerships that split software development work 50-50, the majority report that AI is currently only handling about 25% of the work.

To help operationalize AI among development teams, leaders must effectively communicate and frame its value and connect development activities to business outcomes by focusing on problem-solving capabilities and direct business impact rather than code volume. This shift in thinking will be critical to fully realizing AI's potential. AI isn't eliminating developer jobs. However, it is fundamentally changing what those jobs require and how executives need to lead and structure teams to capitalize on this massive opportunity.

The organizations capturing AI value share three critical traits:

• They have the right CTO strategy with relentless customer focus
• They're applying platform thinking to enable their teams to scale more effectively with AI
• They're investing in team structures and upskilling to help their developers reap the benefits of AI

LEVERAGE THE RIGHT TECHNICAL LEADERS

With 82% of C-suite leaders willing to invest over half of their IT budgets in software innovation, it's clear that this is the moment for technical leaders to shine. I've found throughout my career that CTOs come in several different styles, and companies need varying styles of technical leadership at different points in their evolution. There are three CTO styles: builder, strategist, and guardian.

Builders

Builder CTOs excel at innovating with AI, establishing core technical architecture and developing innovative products while constantly validating their assumptions with customer feedback. Builder CTOs are ideal for smaller, high-growth companies and those that are earlier in their AI journeys.

Strategists

Strategist CTOs shine when companies mature, combining deep technical acumen and business knowledge to build platforms, develop long-term visions, cultivate strategic partnerships, and position the company for long-term, scalable growth. The strategist CTO can help make AI into a permanent, value-additive component of the company's strategic platform.

Guardians

Guardian CTOs excel at helping companies with complex IT infrastructures and large customer bases maintain stability, security, and operational efficiency. They are the right fit for companies whose priorities include implementing governance and security measures around AI, as well as establishing AI processes and standards to maximize efficiency and cost savings.

Success in AI-powered software innovation requires leadership that can identify targeted AI applications, translate them into customer value, and enable teams to focus on higher-value work.
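The headline figure follows directly from the two survey numbers quoted above; here is a quick back-of-the-envelope check, assuming straight multiplication with no adoption or overlap adjustments:

```python
# Back-of-the-envelope check on the $750 billion headline figure
savings_per_dev = 28_249            # USD saved per developer per year (GitLab, 2025)
developers_worldwide = 27_000_000   # the article's estimate of developers globally
total = savings_per_dev * developers_worldwide
print(f"${total / 1e9:.0f} billion")  # ~ $763 billion, i.e. "more than $750 billion"
```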
EMBRACE PLATFORM THINKING FOR SCALE

As organizations grow, teams specialize to focus on specific challenges, but with more teams, coordination among them can become inefficient. By the time an organization reaches the tens of thousands, those divisions often turn into silos that can hinder effective collaboration among humans and prevent the organization from realizing the benefits of human-AI collaboration.

In my experience, the most effective CTOs are implementing platform-based approaches to set their companies up for scalable growth without creating silos. The most common way is to establish a centralized team that's responsible for building a platform that product teams across the organization can use. This team's primary purpose is to automate mundane tasks and provide streamlined workflows for all software innovation teams throughout the organization, a role that AI can enhance.

CTOs may also need to establish specialized teams to support a complicated subsystem required by the rest of the organization. An organization with a complex need, such as evaluating fraud risk in new customers or solving supply-chain complexities in real time, might organize a team dedicated to supporting that as an AI-powered 'subsystem' that the rest of the company can use.

Setting up software teams for success in the AI era means letting humans focus on work that AI can't do well. Although AI can assist with many software development tasks like coding, it cannot define the 'why' behind a project. Engineers who can translate business needs into technical solutions and anticipate future trends will be invaluable. Those who can combine technical skills with critical thinking will be better able to guide AI technologies and realize the productivity gains of human-AI partnerships. Training in specific AI-related skills, such as prompt engineering and data management, will be crucial. The human contributions that will matter most are creativity, strategic vision, and collaboration.

However, there's a significant perception gap to address. A recent GitLab DevSecOps report found that 25% of individual contributors reported that their organizations don't provide adequate AI training, compared to only 15% of C-level executives who shared the same sentiment. Forward-thinking CTOs will frame upskilling as an investment in the human-AI partnership that will deliver competitive advantages.

THE FUTURE STILL NEEDS HUMAN INNOVATORS

The $750 billion opportunity represented by AI-powered software innovation won't materialize automatically. Taking advantage of the power of AI requires appropriate leadership, platform thinking, and upskilling in a way that enables humans to focus on what they do best while letting AI manage and automate mundane tasks. AI is changing the software development landscape, but it's not eliminating the need for skilled engineers. Instead, it's shifting the focus toward higher-value work that requires human judgment, ingenuity, and strategic thinking. Over time, human software innovators will be able to focus increasingly on the work that drives competitive advantage. Companies embracing AI-powered innovation can then transform themselves and their industries in unforeseen ways.

The Hidden Hazards of Generative AI in Medicine

Medscape

The idiosyncrasies and inadequacies of NHS IT systems, past and present, have driven many a doctor to frustration and despair as they grapple with yet another new software package whilst trying to manage patient demand. There is an understandable reluctance to embrace change, given that change has rarely delivered the promised efficiencies. It is perhaps unsurprising, therefore, that some healthcare professionals view artificial intelligence (AI) with suspicion.

Dr Sara Jamieson

Even those who have embraced it and are optimistic about its benefits admit they're unsure about the risks. AI, though, is different from anything that has come before. It is something we should consider embracing and incorporating into our practice as it develops. If we don't, we risk being left behind. Imagine how life might look if we had ignored the introduction of the World Wide Web back in the mid-90s. Similarly, think back to the rapid introduction of telemedicine at the start of the COVID-19 pandemic. This method of consulting with patients, previously frowned upon as too inherently risky, continued even after infection levels subsided.

Any new way of practising will bring with it new medicolegal risks, and steps will need to be considered to mitigate these. Whilst beyond the scope of this article, a true understanding of the medicolegal implications of using AI in healthcare requires an understanding of what AI is and how its outputs are generated. This is particularly true for generative AI tools such as ChatGPT.

Dr Lucy Hanington

According to a survey of over 1000 GPs in the UK, published in BMJ Health & Care Informatics, a fifth of GPs are using generative AI tools such as ChatGPT to help with day-to-day tasks such as writing letters. Of those who had used these tools in their clinical practice, nearly a third (28%) said they had used them to suggest a different diagnosis, and a quarter said they had used them to suggest treatment options.

Consider this scenario: Dr C, a GP, was running behind schedule and still had three more patients to see. During her next consultation, a 50-year-old patient, Mr B, came in with a set of vague symptoms. Dr C considered a range of possible conditions. Feeling under pressure, she discreetly opened ChatGPT and entered an anonymised summary of the patient's symptoms, requesting a differential diagnosis and possible lab tests. The AI quickly returned a detailed summary of plausible possibilities, including some that Dr C hadn't considered herself. She was impressed and used the suggestions to help her decide on the next steps for Mr B, which included arranging further tests.

That night, however, the consultation weighed on her mind, and she couldn't sleep. She knew she hadn't disclosed her use of AI to the patient. She worried whether she had inadvertently input details that could make Mr B identifiable, and she questioned whether the AI's suggested diagnoses might have influenced her clinical judgement. By morning, Dr C was feeling anxious and uneasy, and decided to call the Medical Protection Society (MPS) for advice. A medicolegal consultant advised her to consider whether, objectively, she still agreed with the management plan and could justify it clinically. The GP was also advised to rectify any omissions immediately and to discuss the case with a colleague if helpful. The medicolegal consultant also explained the consent and confidentiality principles around AI use.
Benefits

Generative AI tools offer many potential benefits for both doctors and patients. Patients may use these tools to understand medical terminology or a diagnosis they have been given. Doctors may find that, when used safely, generative AI can aid diagnosis or identify potential drug interactions. However, generative AI is not always correct. As well as errors or omissions, it can sometimes produce 'hallucinations,' confidently presenting incorrect information as fact. It is incumbent on the clinicians using these tools to ensure that information shared with a patient is reliable and accurate.

Bias, whilst not unique to AI, also deserves consideration. The data used by AI tools may be biased due to the inclusion or exclusion of certain information. Outputs may also fail to account for the demographics of a particular patient population. The use of generative AI does not permit doctors to work outside the limits of their competence. There should be no overreliance on the software, and doctors remain ultimately responsible for the decisions they make.

Data Protection and Confidentiality

Data protection and confidentiality, as highlighted in the earlier scenario, are key considerations. Compliance with the General Data Protection Regulation is essential when using generative AI. These tools, by their nature, store, share, and learn from the information entered into them and can be accessed by anyone. Care must be taken not to input any personal patient data. Simply removing a patient's name may not be sufficient to anonymise their information, as other details could make them identifiable. To ensure compliance with data protection legislation, we recommend seeking guidance from:

• Hospital Data Protection Officers, who may be able to advise on this in a secondary care setting
• Integrated Care Boards, who may have policies that would be applicable
• The Information Commissioner's Office (ICO)

Consent

The earlier scenario also highlights the issue of consent. Consent remains a cornerstone of all doctor-patient interactions. The ICO advises that, for consent to be a lawful basis for processing data when using AI, it must be freely given, specific, and clear. The individual must agree to it, and they must be able to withdraw their consent at any time.

AI as an Aid

It is important to hold in mind that AI is a tool to assist, rather than replace, the clinician. When it comes to decision-making, AI software can't readily consider the individual wishes and circumstances of the patient. The 'black box' problem, a lack of transparency in how an AI system reaches its conclusions, can make it difficult to challenge outputs or detect errors. Ultimately, AI should not replace clinical reasoning, evidence-based medicine, or consultation with colleagues, peers, multidisciplinary teams, specialists, seniors, and supervisors.

Training and Continued Learning

Clinicians should aim to be literate in AI, understand its basis and governance, and know how to use it safely. We would encourage all clinicians to incorporate learning on the topic as part of their annual development plans. A multitude of resources on AI are available across medical colleges and institutions. We would also recommend watching the recordings of the recent MPS Foundation AI symposia. A white paper, published by the MPS Foundation, the Centre for Assuring Autonomy at the University of York, and the Improvement Academy hosted at the Bradford Institute for Health Research, offers further useful guidance for doctors to consider on AI use.
Conclusion

Like it or not, AI is here to stay. Readers should consider its potential benefits while remaining alive to its limitations and risks. Doctors should also ensure they continue to work in a manner consistent with General Medical Council guidance and relevant legislation. If in doubt about using AI tools and their medicolegal implications, doctors can contact MPS or their medical defence organisation to request advice.

This article is published as part of an editorial collaboration between Medscape UK and MPS that aims to deliver medicolegal content to help healthcare professionals navigate the many challenges they face in their clinical practice.

Dr Sara Jamieson, MBBS, trained in psychiatry before joining MPS in 2016 as a medicolegal consultant. She has disclosed no relevant financial relationships.

Dr Lucy Hanington, BMBCh, MRCPCH, has worked as a medicolegal consultant at MPS since 2016. She has disclosed no relevant financial relationships.

The Future Of Banking: Emerging Tech To Redefine Digital Experiences

Forbes

Banks, fintechs, and big tech firms will leverage both maturing and emerging technologies to redefine digital banking experiences, ultimately reshaping the financial landscape. So what's on the horizon?

AI-powered interfaces: crafting human-centered digital banking experiences

Conversational banking has emerged in recent years, and advancements in AI are set to further transform consumer interactions within financial services. The future of digital banking will be defined by modern, intuitive, and human-centered interfaces. AI-powered virtual assistants will observe, gather information, learn, and communicate with consumers. This will allow organizations to gain deeper insights into consumer intent and emotions, enabling them to generate multimodal responses that incorporate appropriate tone, emotion, and visual elements.

AI and IoT: unlocking consumer insights and transforming data into actionable intelligence

In the future, sensors and IoT analytics combined with AI will help firms understand consumers' context and intent. Open finance will enable secure data sharing across ecosystems, supporting embedded finance. Advanced AI and analytics will turn raw data into actionable insights, while AI and ML will streamline data processing. Large language models and generative AI will enhance data analysis, creating vast datasets and new content. Meanwhile, sensory AI, synthetic data, and explainable AI will improve situational understanding, generate insights where data is limited, and make AI systems more transparent and trustworthy.

Edge computing and AI: accelerating real-time insights and autonomous decision-making

Edge computing and AI, powered by faster networks (5G/6G) and advanced hardware, will bring data processing closer to its source, reducing latency and bandwidth use. This will enable faster insights, real-time analytics, and smarter decision-making. Real-time transactions and data processing will enhance responsiveness and efficiency. AI decisioning technologies will scale near-real-time engagement, while IoT will embed finance into objects, vehicles, and homes, creating automated, IoT-enabled scenarios.

AI agents: automating decisions and paving the way for autonomous finance

AI and advanced analytics will automate decisions, processes, and experiences. Expert AI agents will integrate analysis and execution, automating complex tasks. Although still in its early stages, agentic AI will pave the way for more advanced AI applications in automation and personalized services. Ultimately, this will lead to the development of personal AI agents and autonomous finance.

Banks And Platforms Will Empower Consumers With Future Digital Experiences

Over the next decade, we believe that consumers will shift from rigid, predetermined paths to cocurated, conversational journeys. They'll take a more active role in deciding where, what, and how they consume content, information, and advice. Banks and platforms will reduce cognitive load by delivering the right content or services at the moment of need, dynamically assembling content and services based on data and context, providing actionable suggestions, and acting on behalf of consumers with their permission.
Digital Banking Experiences Will Evolve Through Three Phases

As consumers adopt new digital experiences and technologies mature, Forrester expects digital banking experiences to evolve through three interrelated and mutually reinforcing phases:

Trust Will Be A Key Factor In Shaping This Future

Trust will be a key factor in determining how much personal data consumers are willing to share, the degree of autonomy they allow AI agents, and the breadth of services they utilize. Empowered consumers will demand transparency and control over their data, sharing it only for personalized experiences that offer real value. They'll tailor their interactions with AI, balancing convenience and control based on perceived risks.

This post was written by Principal Analyst Aurelie L'Hostis and it originally appeared here.
