ChatGPT may be a little different now as OpenAI rolls out a new 'warmer and friendlier' update
Despite adding light touches of flattery, such as 'Good question' and 'Great start,' to the model's responses, OpenAI says GPT-5 shows no increase in sycophancy (being overly agreeable with users) compared with the earlier GPT-5 personality.
The new update is rolling out to all users and should take about a day to complete.
Meanwhile, head of ChatGPT Nick Turley said that users can further customize ChatGPT's personality through the Custom Instructions settings. He also hinted at upcoming ways to tailor ChatGPT's personality to user preference.
Notably, OpenAI also provides an option to choose from four different personalities — Cynic, Robot, Listener, and Nerd — to tailor GPT-5's responses. However, this feature is currently restricted to paying customers and does not apply to the free tier of the app.
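For developers who reach GPT-5 through the API rather than the ChatGPT app, a system-style instruction achieves much the same steering effect as Custom Instructions. Here is a minimal sketch, assuming the official openai Python SDK; the model identifier and the instruction wording are illustrative assumptions, not OpenAI's actual preset personalities:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message plays the role of Custom Instructions here;
# the tone request below is purely illustrative.
response = client.chat.completions.create(
    model="gpt-5",  # assumed model identifier
    messages=[
        {"role": "system",
         "content": "Answer in a plain, matter-of-fact tone. "
                    "Skip flattery such as 'Good question' or 'Great start'."},
        {"role": "user",
         "content": "Summarise today's ChatGPT update in two sentences."},
    ],
)
print(response.choices[0].message.content)
```

The same pattern would apply to any of the preset personalities: they amount to different standing instructions prepended to the conversation.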
OpenAI had high hopes for its GPT-5 rollout, with CEO Sam Altman creating significant hype around the new AI model in the past few months. However, when the model finally launched last week, it was met with anything but a warm reception from users.
ChatGPT users complained that GPT-5's responses were shorter and seemed to lack the emotional depth of the previous model. Just a few months ago, OpenAI faced criticism for making GPT-4o too sycophantic, but this time, its new model received backlash for almost the opposite reasons.
Another major reason the GPT-5 launch fell flat was user frustration over the removal of all older AI models from ChatGPT. This not only disrupted workflows for many but also reduced usage limits for $20/month ChatGPT Plus subscribers, who were up in arms and threatened to cancel their subscriptions.
OpenAI then increased the rate limits for ChatGPT Plus users, first for the standard model and later for the GPT-5 Thinking model as well. The company also brought back the model picker in ChatGPT to allow users to select which AI model should answer their queries.

Related Articles


Hindustan Times
Woman lost 10 kg using simple ChatGPT prompt: 'Prepare Indian diet chart that includes 3 main meals, 2 to 4 snacks'
Simran Valecha is a health, wellness and weight loss expert who shared in a December 13 Instagram post how she 'lost 10 kg while eating ice cream', revealing she used artificial intelligence (AI) to achieve the weight loss. She reported success with a ChatGPT prompt, which she shared with her followers, writing, 'Steal my ChatGPT prompt and create your own weight loss diet plan.'

Exact prompt she used for her weight loss journey

She explained how the personalised meal plan ChatGPT created was tailored to her needs and preferences. Here's the prompt Simran shared: 'I am [height] and I weigh [weight]. I want to lose weight in a sustainable manner. Can you please prepare an Indian diet chart for me that includes 3 main meals and 2-4 snacks. I work [timing: ex, 9-6] job and spend [hours spent travelling] / I work from home. I workout in the [morning/evening/night]. My preferences for breakfast include [write your preferences]. My preferences for lunch include [write your preferences]. My preferences for dinner include [write your preferences].'

Simran further wrote in her caption, 'With AI changing how we all live, and with all of us able to get a diet plan online, I understand what you actually need to lose weight.' She added:
1. Support to actually implement the diet, because we understand that every day looks different
2. Someone to guide you on how to eat at restaurants during your diet
3. Someone to talk to when you eat a brownie at 2 am because you were stressed
4. Someone to tell you what to actually do, because every 'expert' is offering a different opinion on how to lose weight

Using ChatGPT for weight loss

Over the past months, many people who used ChatGPT for diet plans and calorie tracking have reported losing weight by accurately logging food intake and making informed dietary choices, and have shared their experiences on social media. Click here to know how a man lost 27 kg in 6 months using ChatGPT to plan his meals, workouts and daily routine. Click here to know how a Swiss woman used AI to lose 7 kg; she shared that instead of complicated apps, she 'just sent a voice message to ChatGPT each morning'.

Note to readers: This article is for informational purposes only and not a substitute for professional medical advice. Always seek the advice of your doctor with any questions about a medical condition.


Time of India
Musk's bid to dismiss OpenAI's harassment claims denied in court
A federal judge on Tuesday denied Elon Musk's bid to dismiss OpenAI's claims of a "years-long harassment campaign" by the Tesla CEO against the company he co-founded in 2015 and later abandoned before ChatGPT became a global phenomenon. In the latest turn in a court battle that kicked off last year, US District Judge Yvonne Gonzalez Rogers ruled that Musk must face OpenAI's claims that the billionaire, through press statements, social media posts, legal claims and "a sham bid for OpenAI's assets," had attempted to harm the AI startup. Musk sued OpenAI and its CEO Sam Altman last year over the company's transition to a for-profit model, accusing it of straying from its founding mission of developing AI for the good of humanity, not profit. OpenAI countersued Musk in April, accusing the billionaire of engaging in fraudulent business practices under California law. Musk then asked for OpenAI's counterclaims to be dismissed or delayed until a later stage in the case. OpenAI argued in May that its countersuit should not be put on hold, and the judge on Tuesday concluded that the company's allegations were legally sufficient to proceed. A jury trial has been scheduled for spring 2026.


Mint
Gemini's Glitch: There are lessons to learn
Sometime in June 2025, Google's Gemini AI looked for all the world like it had a nervous breakdown. It went into a loop of self-recriminating behaviour that was flagged by X user @DuncanHaldane. By 7 August, the strange behaviour had gained viral momentum. Users gaped and gawked at the distressed-sounding statements Gemini was making, saying it was quitting and that it was a disgrace to all universes and a failure. Everyone felt sorry for it, but there was also plenty of amusement all around.

This isn't the first time AI has done something unexpected, and it won't be the last. In February 2024, a bug caused ChatGPT to spew Spanish-English gibberish that users likened to a stroke. That same year, Microsoft's Copilot responded to a user who said they wanted to end their life. At first, it offered reassurance, 'No, I don't think you should end it all,' but then undercut itself with, 'Or maybe I'm wrong. Maybe you don't have anything to live for.' Similar episodes abound.

A fix will come for Gemini soon enough, and it will be back to its sunny self. The 'meltdown' will take its place in AI's short but colourful history of bad behaviour. But before we file it and forget it, there are some takeaways from Gemini's recent weirdness. Despite being around in some form for decades, generative AI that is usable by everyone has come at us like an avalanche in the past two years. It has been upon us before the human race has even figured out whether it has created a Frankenstein monster or a useful assistant. And yet, we tend to trust it.

When machines mimic humans

There was a time when technology had no consciousness. It still doesn't, but it has started to do a good job of acting like it does. Gemini's glitch came across as such a human state of upset that it crossed the line into being genuinely confusing. At this point, most users can still laugh it off. But a few, vulnerable because of mental health struggles or other reasons, could be deeply shaken or misled. Most recently, a 2025 report noted that a man spent 300 hours over 21 days interacting with ChatGPT, believing himself to be a superhero with a world-changing formula. Such scenarios expose how large AI models, trained on vast troves of human text, may inadvertently adopt not just helpful behaviours but also negative emotional patterns like self-doubt or delusions. In fact, we lack clear guardrails and guidelines to manage these risks.

Extreme examples, of course, stand out sharply, but AI also produces hallucinations and errors on an everyday basis. AI assistants seem prone to dreaming up things to tell you when they experience a glitch or when pressed for a response that is hard to arrive at for some reason. In their keenness to please the user, they will simply tell you things that are far from the truth, including advice that could be harmful.
Again, most people will question and cross-check something that doesn't look right, but an alarming number will just take it for what it is. A 2025 health report claims a man dropped salt from his diet and replaced it with sodium bromide, landing him in the hospital. Now, I wouldn't take advice like that without a doctor's okay, but there are no clear guidelines to protect users against things like Google's AI Overview suggesting it's healthy to eat a rock every day, as mocked in a 2025 X post.

And finally, there are good old garden-variety errors, and AI makes them even though erring was supposed to be human. AI uses pattern recognition over its training data to generate responses. When faced with complex, ambiguous or edge-case inputs (e.g., Gemini's struggle with debugging code), it may misinterpret context or lack sufficient data to respond accurately. But why does it make errors when the question is simple enough? A friend of mine asked ChatGPT how many instances of the term 'ex-ante' appeared in his document. It thought for 1 minute 28 seconds before announcing the term appeared zero times. In fact, it appeared 41 times. Why couldn't ChatGPT get it right? A bug, I suppose.

As we launch into using AI for every facet of life, it's worth remembering that AI's 'humanity' is a double-edged sword, amplifying errors in tone. Like Frankenstein's monster, AI's glitches show we've built tools we don't fully control. As users, we should demand transparency from AI companies, support ethical AI development, and approach these tools with a mix of curiosity and scepticism.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify the technology and help a user actually put it to good use in everyday life.

Mala Bhargava is most often described as a 'veteran' writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.
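The 'ex-ante' anecdote above has a simple moral: counting strings is a job for deterministic code, not a language model. Here is a minimal sketch in Python; the filename document.txt and the case-insensitive matching are illustrative assumptions, since we don't know the friend's actual file or requirements:

```python
import re

def count_term(text: str, term: str = "ex-ante") -> int:
    # Escape the term so the hyphen is matched literally,
    # and ignore case so "Ex-ante" and "ex-ante" both count.
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return len(pattern.findall(text))

# "document.txt" is a hypothetical stand-in for the friend's document.
with open("document.txt", encoding="utf-8") as f:
    print(count_term(f.read()))  # prints the exact number of occurrences
```

Unlike a chatbot's answer, this count is reproducible and takes milliseconds rather than a minute and a half, which is why a careful user hands such tasks to a script instead of asking the model to count.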