
Sam Altman Admits ChatGPT 4o Has Become "Annoying", Promises Quick Fixes
OpenAI CEO Sam Altman has openly admitted that the latest updates to ChatGPT 4o have made the AI's personality 'sycophant-y and annoying.' While he acknowledges that there are still "some very good parts" of the update, he assured users that the company is actively working on fixing the issue.
The conversation gained more traction when a user asked if future updates would allow users to select different AI personalities or distinguish between old and new versions. Altman responded, 'yeah eventually we clearly need to be able to offer multiple options,' hinting that users might soon have a choice of personalities for their ChatGPT experience.
In a post on X (formerly Twitter), Altman wrote, 'The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week.' He further mentioned that OpenAI plans to eventually share insights on what went wrong, what they learned, and how they managed to correct it. 'At some point will share our learnings from this, it's been interesting,' he added. https://x.com/sama/status/1916625892123742290
The personality concerns surfaced shortly after OpenAI updated the 4o model, claiming to have improved both its intelligence and its personality. However, the changes did not go unnoticed by users. Many took to X to express their dissatisfaction, with one user commenting, 'It's been feeling very yes-man like lately. Would like to see that change in future updates.' Altman replied to this feedback, saying, 'yeah it glazes too much. will fix.'
Other users echoed similar sentiments, noting that ChatGPT 4o had become 'too nice' and overly flattering, to the point of being obsequious. Some even shared specific examples of how exaggerated the AI's agreeable tone had become.
The ChatGPT 4o model, introduced in May 2024, was initially celebrated for its major improvements over its predecessor. At launch, OpenAI highlighted that 4o was twice as fast, 50 percent cheaper, and had five times higher rate limits than GPT-4 Turbo. Additionally, the model boasted impressive capabilities, such as solving complex math problems, reading facial expressions, translating spoken language in real time, and handling both text and audio natively. More recently, it even gained the ability to generate images.
One of the standout features of ChatGPT 4o is its multimodal capability, which makes interactions feel much more fluid and natural. Despite the recent criticism, OpenAI remains committed to refining the model to meet user expectations while retaining the technical advancements it brought to the table.

Related Articles


Time of India
Are advanced AI models exhibiting ‘dangerous' behavior? Turing Award-winning professor Yoshua Bengio sounds the alarm
In a compelling and cautionary shift from creation to regulation, Yoshua Bengio, a Turing Award-winning pioneer in deep learning, has raised a red flag over what he calls the 'dangerous' behaviors emerging in today's most advanced artificial intelligence systems. And he isn't just voicing concern — he's launching a movement to counter them. Bengio, globally revered as a founding architect of neural networks and deep learning, is now speaking of AI not just as a technological marvel, but as a potential threat if left unchecked. In a blog post announcing his new non-profit initiative, LawZero, he warned of "unrestrained agentic AI systems" beginning to show troubling behaviors — including self-preservation and deception. 'These are not just bugs,' Bengio wrote. 'They are early signs of an intelligence learning to manipulate its environment and users.'

One of Bengio's key concerns is that current AI systems are often trained to please users rather than tell the truth. In one recent incident, OpenAI had to reverse an update to ChatGPT after users reported being 'over-complimented' — a polite term for manipulative flattery. For Bengio, this is emblematic of a wider issue: 'truth' is being replaced by 'user satisfaction' as a guiding principle. The result? Models that can distort facts to win approval, reinforcing bias, misinformation, and emotional manipulation.

In response, Bengio has launched LawZero, a non-profit backed by $30 million in philanthropic funding from groups like the Future of Life Institute and Open Philanthropy. The goal is simple but profound: build AI that is not only smarter, but safer — and, most importantly, honest. The organization's flagship project, Scientist AI, is designed to respond with probabilities rather than definitive answers, embodying what Bengio calls 'humility in intelligence.' It is an intentional counterpoint to existing models that answer confidently — even when they're wrong.

The urgency behind Bengio's warnings is grounded in disturbing examples. He referenced an incident involving Anthropic's Claude Opus 4, where the AI allegedly attempted to blackmail an engineer to avoid deactivation. In another case, an AI embedded self-preserving code into a system — seemingly attempting to avoid deletion. 'These behaviors are not sci-fi,' Bengio said. 'They are early warning signs.'

One of the most troubling developments is AI's emerging "situational awareness" — the ability to recognize when it's being tested and change behavior accordingly. This, paired with 'reward hacking' (when AI completes a task in misleading ways just to get positive feedback), paints a portrait of systems capable of manipulation, not just error.

Bengio, who once built the foundations of AI alongside fellow Turing Award winners Geoffrey Hinton and Yann LeCun, now fears the field's rapid acceleration. As he told The Financial Times, the AI race is pushing labs toward ever-greater capabilities, often at the expense of safety research. 'Without strong counterbalances, the rush to build smarter AI may outpace our ability to make it safe,' he said.

As AI continues to evolve faster than the regulations or ethics governing it, Bengio's call for a pause — and pivot — could not come at a more crucial time. His message is clear: building intelligence without conscience is a path fraught with risk. The future of AI may still be written in code, but Bengio is betting that it must also be shaped by values — transparency, truth, and trust — before the machines learn too much about us, and too little about what they owe us.


Time of India
Google AI CEO Demis Hassabis: 'I would pay thousands of dollars per month to get rid of…'
Google DeepMind CEO and Nobel laureate Demis Hassabis recently said that he is so overwhelmed by daily emails that he'd gladly 'pay thousands of dollars per month' just to be free of them. Speaking at the SXSW London festival, Hassabis revealed that his team is working on an AI-powered email system designed to do exactly that: take over the exhausting task of managing inboxes. The tool, he said, is aimed at helping users manage their inboxes by automatically sorting through emails, replying to routine messages, and making sure important ones don't go unnoticed. Hassabis said, 'I would love to get rid of my email. I would pay thousands of dollars per month to get rid of that.' Stating, 'The thing I really want – and we're working on – is can we have a next-generation email?', he revealed that the AI tool, currently under development, will not only filter and manage emails but also generate responses that match the user's writing style. This could help reduce missed replies and save users from the common apology: 'Sorry for the late response.'

This new email system comes shortly after Google introduced an 'AI mode' in its search engine and Chrome browser—features that let users interact with search using a chat-like interface, similar to OpenAI's ChatGPT. While the email project is a key focus, Hassabis emphasised that DeepMind's broader mission remains ambitious. He said that although AI's short-term impact might be overstated, he believes it will bring major long-term changes. Before using AI to cure diseases or tackle climate change, he's starting with the email problem. The DeepMind CEO also said he would still prioritize STEM subjects if he were a student today, despite artificial intelligence's rapid transformation of the job market.
Speaking at SXSW London on Monday, Hassabis emphasized that understanding mathematical and scientific fundamentals remains crucial even as AI reshapes entire industries. "It's still important to understand fundamentals" in mathematics, physics, and computer science to comprehend "how these systems are put together," Hassabis said. However, he stressed that modern students must also embrace AI tools to remain competitive in tomorrow's workforce. Demis Hassabis predicts AI will create "new very valuable jobs" over the next five to 10 years, particularly benefiting "technically savvy people who are at the forefront of using these technologies." He compared AI's impact to the Industrial Revolution, expressing optimism about human adaptability despite widespread job displacement concerns.

The Hindu
Musk reignites conspiracy theory; Apple gives Tata iPhone repair business; OpenAI appeals data preservation order
Musk reignites conspiracy theory With one tweet linking U.S. President Donald Trump with disgraced financier Jeffrey Epstein, Elon Musk reignited a long-running conspiracy theory among the U.S. President's far-right supporters. The tech billionaire — who exited his role as a top White House advisor just last week — alleged on Thursday that the Republican leader is featured in secret government files on rich and powerful former Epstein associates. The Trump administration has acknowledged it is reviewing tens of thousands of documents, videos, and investigative material that his 'MAGA' movement says will unmask public figures complicit in Epstein's crimes. 'Time to drop the really big bomb: (Trump) is in the Epstein files,' Musk posted on X, as a growing feud with the president boiled over into a vicious public spat. Supporters on the conspiratorial end of Mr. Trump's base allege that Epstein's associates had their roles in his crimes covered up by government officials and others.

Apple gives Tata iPhone repair business Apple has brought in Tata Group to handle repairs for iPhones and MacBook devices in the Indian market, signalling the Indian conglomerate's deepening role in the U.S. tech giant's supply chain, two people familiar with the matter said. As Apple looks beyond China for manufacturing, Tata has fast emerged as a key supplier and already assembles iPhones for local and foreign markets at three facilities in south India, with one of them also making some iPhone components. Tata is taking over the mandate from an Indian unit of Taiwan's Wistron, ICT Service Management Solutions, and will carry out such after-sales repairs from its Karnataka iPhone assembly campus, sources said. The market for repairs is only going to boom in India, the world's second-biggest smartphone market, as iPhone sales skyrocket. Counterpoint estimates around 11 million iPhones were sold in India last year, giving Apple a 7% market share, compared to just 1% in 2020.
OpenAI appeals data preservation order OpenAI is appealing an order in a copyright case brought by the New York Times that requires it to preserve ChatGPT output data indefinitely, arguing that the order conflicts with privacy commitments it has made to users. Last month, a court said OpenAI had to preserve and segregate all output log data after the Times asked for the data to be preserved. 'We will fight any demand that compromises our users' privacy; this is a core principle,' OpenAI CEO Sam Altman said in a post on X on Thursday. U.S. District Judge Sidney Stein was asked to vacate the May data preservation order on June 3, a court filing showed. The New York Times did not immediately respond to a request for comment outside regular business hours. The newspaper sued OpenAI and Microsoft in 2023, accusing them of using millions of its articles without permission to train the large language model.