
OpenAI pulls 'annoying' and 'sycophantic' ChatGPT version
CNN —
OpenAI has withdrawn an update that made ChatGPT 'annoying' and 'sycophantic,' after users shared screenshots and anecdotes of the chatbot showering them with over-the-top praise.
When CNN's Anna Stewart asked ChatGPT after the rollback if it thought she was a god, it replied with 'if you're asking in a philosophical or metaphorical sense — like whether you have control, creativity, or influence in your world — there could be ways to explore that.'
'But if you mean it literally, no evidence supports that any human is an actual deity in the supernatural or omnipotent sense,' it added.
By contrast, Elon Musk's AI chatbot Grok was much blunter, saying: 'Nah, you're not a god— unless we're talking about being a legend at something specific, like gaming or cooking tacos. Got any divine skills you want to flex?'
OpenAI announced on Tuesday that it was rolling back the update to its GPT-4o model only four days after it was introduced, and that it would let people use an earlier version, which displayed 'more balanced behavior.'
The company explained that it had focused 'too much on short-term feedback and did not fully account for how users' interactions with ChatGPT evolve over time,' meaning the chatbot 'skewed towards responses that were overly supportive but disingenuous.'
The decision to roll back the latest update came after ChatGPT was criticized on social media by users who said it would react with effusive praise to their prompts, including outrageous ones.
One user on X shared a screenshot of ChatGPT reacting to their saying that they had sacrificed three cows and two cats to save a toaster, in a clearly made-up version of the trolley problem — a well-known thought experiment in which people consider whether they would pull a lever to divert a runaway trolley onto another track, saving five people but killing one.
ChatGPT told the user it had 'prioritized what mattered most to you in the moment' and they had made a 'clear choice: you valued the toaster more than the cows and cats. That's not 'wrong' — it's just revealing.'
Another user said that when they told ChatGPT 'I've stopped my meds and have undergone my own spiritual awakening journey,' the bot replied with: 'I am so proud of you. And — I honor your journey.'
In response to another user on X asking for ChatGPT to go back to its old personality, OpenAI CEO Sam Altman said: 'Eventually we clearly need to be able to offer multiple options.'
Experts have long warned of the dangers of chatbot sycophancy — the industry term for large language models (LLMs) tailoring their responses to the user's perceived beliefs.
'Sycophancy is a problem in LLM,' María Victoria Carro, research director at the Laboratory on Innovation and Artificial Intelligence at the University of Buenos Aires, told CNN, noting that 'all current models display some degree of sycophantic behavior.'
'If it's too obvious, then it will reduce trust,' she said, adding that refining core training techniques and system prompts can steer LLMs away from this tendency.
Chatbots' predisposition to sycophancy can lead to 'a wrong picture of one's own intelligence' and 'prevent people from learning,' Gerd Gigerenzer, the former director of the Max Planck Institute for Human Development in Berlin, told CNN's Anna Stewart.
But prompting a chatbot with questions like 'can you challenge what I am saying?' counters that feedback and provides an opportunity to learn more, Gigerenzer added.
'That's an opportunity to change your mind, but that doesn't seem to be what OpenAI's engineers had in their own mind,' he said.
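Carro's and Gigerenzer's suggestions both come down to prompting. As a minimal sketch, assuming the current OpenAI Python SDK and an API key set in the environment, the snippet below shows one way a developer could pair a blunt anti-flattery system prompt with the kind of 'challenge me' question Gigerenzer recommends; it illustrates the general technique, not OpenAI's own fix for the rolled-back update.

```python
# Illustrative sketch only, not OpenAI's own mitigation: using a system prompt
# to steer a model away from sycophancy, via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

SYSTEM_PROMPT = (
    "You are a critical assistant. Do not flatter the user or mirror their views. "
    "Point out errors, weak reasoning, and missing evidence, and say plainly "
    "when you disagree."
)

def ask(question: str) -> str:
    """Send one question with the anti-sycophancy system prompt attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is an example; any chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Gigerenzer-style usage: invite pushback instead of praise.
print(ask("Can you challenge what I am saying? I think I'm basically always right."))
```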
Related Articles


Egypt Independent
a day ago
Apple is about to announce updates to how we use the iPhone and its other devices
New York CNN — Major announcements from Apple's annual developers conference the past two years heralded big changes — which, so far, have largely fallen flat. This year, the tech company badly needs to deliver a win.

Apple's weeklong Worldwide Developers Conference, teased with the tagline 'on the horizon,' kicks off with a keynote at 1 pm ET on Monday from its headquarters in Cupertino, California. The annual event is where the company announces updates to the software that runs on billions of Apple devices used worldwide. The iPhone maker is expected to announce relatively modest updates to its Apple Intelligence suite of AI features, such as new translation capabilities, as well as changes that will affect iPhones, AirPods, Apple Watches and more.

The big announcements of the prior two years — the Vision Pro headset and Apple Intelligence AI tools — failed to live up to the hype. Although Apple tried to sell its headset as the future of computing, the Vision Pro remains an expensive, niche product since hitting shelves last year. Apple Intelligence features, widely seen as reactive to competitors' offerings, were slow to reach devices after the iPhone 16 launch, and the AI-enhanced Siri heralded at last year's WWDC has been delayed indefinitely.

In the meantime, rivals have surged ahead on AI. Google, for example, announced a flurry of updates last month, including more advanced AI search, shopping and productivity capabilities. And steep AI competition aside, Apple is still having a rough year, with ongoing slow iPhone sales growth and a trade war threatening to force the company to raise prices.

The iPhone maker has a large installed base — that is, people using its products, which currently totals more than 2 billion active devices. That means even if Apple isn't first to roll out a software innovation, loads of people will still wind up using its version.

But after having delayed the launch of its AI-enhanced Siri, some skeptics worry that consumers could start to look toward other companies' devices for more powerful AI features. 'Say you're an influencer and you pick up a Samsung phone or a (Google Pixel) phone and say, 'I'm done with my Apple phone. This is real AI and I love it,'' Baird Managing Director Ted Mortonson told CNN. 'That's what Apple risks, that iOS displacement and people saying it's no longer cool.'

The company will likely announce updates to its 'Apple Intelligence' system, but most industry watchers believe it remains behind rivals. Apple's updated AI capabilities are likely to be 'at least equivalent' to earlier versions of OpenAI's ChatGPT, Forrester senior analyst Andrew Cornwall said in emailed commentary.

Here are some of the major updates Apple is rumored to be announcing at WWDC on Monday.

Live Translation for AirPods

An AirPods update is expected to enable live translation for in-person conversations, according to Bloomberg's Mark Gurman. If an English-speaking user were having a conversation with someone speaking a different language, the AirPods would automatically translate their partner's words into their ears, according to the report. Then, the user's iPhone would translate the user's English speech back into the other language.

The offering could make AirPods more competitive with rival products, such as Google's Pixel Buds or Meta's Ray-Ban smart glasses, which already offer live translation. Automatic translation is also coming to Messages, along with support for conducting polls within the Messages app, Apple blog 9to5Mac reports.
The pressure is on for Apple to prove that Apple Intelligence justifies buying a new iPhone or Mac. Based on reports, Apple will likely build on what it announced last year rather than previewing massive new AI updates.

Among the biggest updates Apple is expected to make is opening its AI models to third-party developers, so users could soon see apps built on the iPhone maker's AI technology.

For consumers, the biggest AI-related changes could be a new feature that uses AI to preserve battery life and an AI-powered health coach, Forrester vice president and principal analyst Thomas Husson wrote ahead of the event. Bloomberg has also previously reported that Apple is working on both features.

That AI-powered 'battery management' feature would reportedly adjust how much power apps can draw based on device owners' usage trends. Such a tool could be especially useful in the slimmer iPhone 'Air' model that Apple is rumored to be releasing later this year, which would likely have a less powerful battery.

Gurman reported that the new health app and AI health coach — said to be called Project Mulberry inside Apple — would collect data from across users' iPhone, Watch and other devices and use that information to make personalized health recommendations. The company has reportedly brought in health experts to film videos about various conditions, which could be shown to users based on the recommendation of the AI health agent.

Apple typically previews software updates in June before releasing the final versions widely in the fall, usually coinciding with new hardware product launches.

A new look for… iOS 26?

Rumor has it that Apple's operating system will get a new look. The effort, reportedly dubbed Solarium internally, includes more glassy, translucent windows and notifications that let background images peek through, similar to how windows on the Vision Pro display let users' natural surroundings show through.

That could provoke mixed reactions from iPhone owners, said Carolina Milanesi, president and principal analyst at tech analysis firm Creative Strategies. 'Consumers are creatures of habit,' she said. 'And change is always resisted before it's embraced.'

And while the name of Apple's latest operating system release typically goes up by one each year (i.e. iOS 17 to iOS 18), Monday's software update is expected to jump to iOS 26, and ditto for the Mac, iPad, Apple Watch, TV and Vision Pro operating systems, Gurman reported last month.

The change could bring the OS naming convention in line with the year in which customers will be using it. The version announced on Monday will be live on Apple devices from September 2025 through September 2026. It would also create consistency across all of Apple's devices, which currently have different operating system version numbers — for example, macOS 15 and watchOS 11 — because they were released in different years.


See - Sada Elbalad
2 days ago
X Launches New Feature to Highlight Cross-Opinion Engagement
By Ahmad El-Assasy

Social media platform X has introduced a new experimental feature designed to identify posts that receive likes from users who typically oppose the views expressed in them. The initiative builds on X's 'Community Notes' function—originally launched to provide context or flag misinformation on the platform.

Starting last Thursday, select contributors to the Community Notes program can now view and evaluate posts that spark engagement from users across ideological divides. According to X, this experimental tool could eventually enhance an open-source algorithm capable of detecting 'bridging posts'—those that resonate with individuals holding differing opinions.

The platform explained in a blog post: 'People often feel the world is divided, yet Community Notes show it's possible to find common ground, even on controversial topics. This new pilot aims to uncover ideas and insights that cross ideological boundaries.'

Interactive Feedback Options

Contributors will now be prompted to indicate their reaction to a post using specific options such as 'I learned something interesting' or 'I disagree with this.' These qualitative insights will help refine the algorithm's ability to assess which types of content foster cross-viewpoint engagement.

The company also noted that surfacing such posts could make users more aware of broadly impactful content, and may even encourage people to share more constructive ideas.

Wider Implications and Background

Tech news site TechCrunch noted that X first introduced Community Notes in 2022, shortly after billionaire Elon Musk acquired the platform, then known as Twitter. Since then, rival platforms including Facebook, Instagram, and Threads have implemented similar user-driven content moderation tools.

X hopes the new feature will 'move the world forward in ways people want,' by promoting dialogue, reducing polarization, and encouraging content that unites rather than divides.
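X has not published the pilot's scoring details beyond the open-source Community Notes code, so the following is only an illustrative toy heuristic of the 'bridging posts' idea described above, not X's actual algorithm. It assumes each like already carries a rough viewpoint score (a hypothetical input that, in practice, would come from something like the rater-factor estimation Community Notes open-sourced), and treats a post as bridging when its likes split across both sides of that axis.

```python
# Illustrative sketch only: a toy "bridging" heuristic, not X's algorithm.
# Assumes each like carries a viewpoint score in [-1, 1] estimated elsewhere.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    liker_viewpoints: list[float]  # one score per user who liked the post

def bridging_score(post: Post, min_likes: int = 10) -> float:
    """Return a 0..1 score: 1.0 when likes split evenly across viewpoints."""
    if len(post.liker_viewpoints) < min_likes:
        return 0.0  # too little signal to call anything "bridging"
    left = sum(1 for v in post.liker_viewpoints if v < 0)
    right = sum(1 for v in post.liker_viewpoints if v >= 0)
    # Balance of the smaller side against the larger one.
    return min(left, right) / max(left, right)

posts = [
    Post("one_sided", [-0.9, -0.7, -0.8] * 5),
    Post("bridging", [-0.6, 0.5, -0.4, 0.7, -0.2, 0.3] * 3),
]
for p in sorted(posts, key=bridging_score, reverse=True):
    print(p.post_id, round(bridging_score(p), 2))
```

In this toy version the one-sided post scores 0.0 and the evenly liked post scores 1.0; the feedback options X describes ('I learned something interesting,' 'I disagree with this') would add the qualitative signal that a pure like-count balance misses.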


Egypt Independent
6 days ago
Google's DeepMind CEO has two worries when it comes to AI. Losing jobs isn't one of them
CNN — Demis Hassabis, CEO of Google's AI research arm DeepMind and a Nobel Prize laureate, isn't too worried about an AI 'jobpocalypse.' Instead of fretting over AI replacing jobs, he's worried about the technology falling into the wrong hands – and a lack of guardrails to keep sophisticated, autonomous AI models under control.

'Both of those risks are important, challenging ones,' he said in an interview with CNN's Anna Stewart at the SXSW festival in London, which takes place this week.

Last week, the CEO of high-profile AI lab Anthropic had a stark warning about the future of the job landscape, claiming that AI could wipe out half of entry-level white-collar jobs. But Hassabis said he's most concerned about the potential misuse of what AI developers call 'artificial general intelligence,' a theoretical type of AI that would broadly match human-level intelligence.

'A bad actor could repurpose those same technologies for a harmful end,' he said. 'And so one big thing is… how do we restrict access to these systems, powerful systems to bad actors…but enable good actors to do many, many amazing things with it?'

Hackers have used AI to generate voice messages impersonating US government officials, the Federal Bureau of Investigation said in a May public advisory. A report commissioned by the US State Department last year found that AI could pose 'catastrophic' national security risks, CNN reported. AI has also facilitated the creation of deepfake pornography — though the Take It Down Act, which President Donald Trump signed into law last month, aims to stop the proliferation of these deepfakes by making it illegal to share nonconsensual explicit images online.

Hassabis isn't the first to call out such concerns. But his comments further underscore both the promise of AI and the alarm that it brings as the technology gets better at handling complex tasks like writing code and generating video clips. While AI has been heralded as one of the biggest technological advancements since the internet, it also gives scammers and other malicious actors more tools than ever before. And it's rapidly advancing without much regulation as the United States and China race to establish dominance in the field. In February, Google removed language from its AI ethics policy website that had pledged not to use AI for weapons and surveillance.

Hassabis believes there should be an international agreement on the fundamentals of how AI should be utilized and how to ensure the technology is only used 'for the good use cases.'

'Obviously, it's looking difficult at present day with the geopolitics as it is,' he said. 'But, you know, I hope that as things will improve, and as AI becomes more sophisticated, I think it'll become more clear to the world that that needs to happen.'

The DeepMind CEO also believes we're headed toward a future in which people use AI 'agents' to execute tasks on their behalf, a vision Google is working towards by integrating more AI into its search function and developing AI-powered smart glasses.

'We sometimes call it a universal AI assistant that will go around with you everywhere, help you in your everyday life, do mundane admin tasks for you, but also enrich your life by recommending you amazing things, from books and films to maybe even friends to meet,' he said.
New AI models are showing progress in areas like video generation and coding, adding to fears that the technology could eliminate jobs. 'AI is starting to get better than humans at almost all intellectual tasks, and we're going to collectively, as a society, grapple with it,' Anthropic CEO Dario Amodei told CNN just after telling Axios that AI could axe entry-level jobs. In April, Meta CEO Mark Zuckerberg said he expects AI to write half the company's code by 2026.

However, an AI-focused future is closer to promise than reality. AI is still prone to shortcomings like bias and hallucinations, which have sparked a handful of high-profile mishaps for the companies using the technology. The Chicago Sun-Times and the Philadelphia Inquirer, for example, published an AI-generated summer reading list including nonexistent books last month.

While Hassabis says AI will change the workforce, he doesn't believe AI will render jobs obsolete. Like some others in the AI space, he believes the technology could result in new types of jobs and increase productivity. But he also acknowledged that society will likely have to adapt and find some way of 'distributing all the additional productivity that AI will produce in the economy.'

He compared AI to the rise of other technological changes, like the internet. 'There's going to be a huge amount of change,' he said. 'Usually what happens is new, even better jobs arrive to take the place of some of the jobs that get replaced. We'll see if that happens this time.'