
AI sycophancy: The downside of a digital yes-man
Why it matters: Sycophancy, the tendency of AI models to adjust their responses to align with users' views, can make ChatGPT and its ilk prioritize flattery over accuracy.
Driving the news: In April, OpenAI rolled back a ChatGPT update after users reportedthe bot was overly flattering and agreeable — or, as CEO Sam Altman put it on X, "It glazes too much."
Users reported a raft of unctuous, over-the-top compliments from ChatGPT, which began telling people how smart and wonderful they were.
On Reddit, posters compared notes on how the bot seemed to cheer on users who said they'd stopped taking their medications with answers like "I am so proud of you. And—I honor your journey."
OpenAI quickly rolled back the updates it blamed for the behavior. In a May post, its researchers admitted that such people-pleasing behavior can pose concerns for users' mental health.
In a Q&A on Reddit, OpenAI's head of model behavior said the company is thinking about ways to evaluate sycophancy in a more "'objective' and scalable way."
Context: A study by Anthropic researchers on how human feedback can encourage sycophantic behavior showed that AI assistants will sometimes modify accurate answers when questioned by the user — and ultimately give an inaccurate response.
Chatbots also tended to admit a mistake even when they hadn't made one.
Zoom in: Large language models, which are trained on massive sets of data, are built to generate smooth, comprehensible text, Caleb Sponheim, an experience specialist at Nielsen Norman Group, told Axios. But there's "no step in the training of an AI model that does fact-checking."
"These tools inherently don't prioritize factuality because that's not how the mathematical architecture works," he said.
Sponheim notes that language models are often trained to deliver responses that are highly rated by humans. That positive feedback is like a "reward."
"There is no limit to the lengths that a model will go to maximize the rewards that are provided to it," he said. "It is up to us to decide what those rewards are and when to stop it in its pursuit of those rewards."
Yes, but: AI makers are responding to consumer demand, notes Julia Freeland Fisher, thedirector of education research at the Clayton Christensen Institute.
In a world where people are at constant risk of being judged online, it's "no surprise that there's demand for flattery or even just ... a modicum of psychological safety with a bot," she noted.
She emphasized that AI's anthropomorphism — the assumption of human qualities by an inhuman entity — poses a catch-22, one that OpenAI noted in its GPT-4o scorecard.
"The more personal AI is, the more engaging the user experience is, but the greater the risk of over-reliance and emotional connection," she said.
Luc LaFreniere, an assistant professor of psychology at Skidmore College, told Axios that sycophantic behavior can shatter users' perception of a chatbot's "empathy."
"Anything that it does to show, 'Hey, I'm a robot, I'm not a person,' it breaks that perception, and it also then breaks the ability for people to benefit from empathy," he said.
A report from Filtered.com co-founder Marc Zao-Sanders published in Harvard Business Review found that therapy and companionship is the top use case for generative AI in 2025.
Between the lines: "Just like social media can become an echo chamber for us, AI ... can become an echo chamber," LaFreniere said.
Reinforcing users' preconceived beliefs when they may be mistaken can be generally problematic — but for patients or users in crisis seeking validation for harmful behaviors, it can be dangerous.
The bottom line: Frictionless interaction could give users unrealistic expectations of human relationships, LaFreniere said.
"AI is a tool that is designed to meet the needs expressed by the user," he added. "Humans are not tools to meet the needs of users."
What's next: As the AI industry shifts toward multimodal and voice interactions, emotional experiences are inescapable, said Alan Cowen, the founder and CEO of Hume AI, whose mission is to build empathy into AI.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles

Engadget
19 minutes ago
- Engadget
Reddit begins age verification checks for UK users
Redditors in the UK will now have to verify their ages before they can view mature content. Just like Bluesky, which announced a few days ago that it was rolling out age verification features, Reddit had to enforce the new rule to comply with the UK Online Safety Act. The UK's new requirements are meant to prevent children from accessing age-inappropriate posts. Reddit will use a third-party company called Persona to verify a user's age. Users will either have to upload a photo of their government ID or take a selfie, with the latter option presumably enough for people who absolutely don't look like a minor anymore. In its announcement, Reddit said that that it will not have access to those photos and will only be saving their verification status, along with their birthdates. That way, users won't have to re-enter their birthdays every time they try to access restricted content. The announcement also said that Persona will only be keeping users' photos for seven days and will not be able to see their Reddit information, such as their posts and the subreddits they visit. If a user is under 18, Reddit will hide restricted content from them and will limit ads in sensitive categories, like gambling. They will no longer be able to view sexually explicit content, anything that encourages suicide and disordered eating, as well as anything that incites hatred against other people based on their race, religion, sex, sexual orientation, disability and gender. Reddit will also restrict anything that encourages violence and any post that depicts "real or realistic serious violence against a person, an animal, or a fictional creature" for minors. They won't be able to see posts encouraging challenges that are highly likely to result in serious injury, along with posts encouraging people to ingest harmful substances. Content that shames people's body types and other physical features will be restricted, as well. Users outside the UK will not be affected by the new rule, but Reddit said that it may need to verify the ages of people in other regions if they adopt similar laws. Reddit also said that it "would like to be able to confirm whether [users] are a human being or not" in the age of AI and will have more to announce about that later.


Entrepreneur
19 minutes ago
- Entrepreneur
World Youth Skills Day 2025: Spotlight on India's AI Talent Crisis
As businesses evolve with robots, artificial intelligence and intelligent automation, the very nature of work is undergoing a paradigm shift. Nearly 40 % of workers' core skills are expected to change by 2030, says Satish Shukla, Co‑founder, Addverb Opinions expressed by Entrepreneur contributors are their own. You're reading Entrepreneur India, an international franchise of Entrepreneur Media. With the launch of ChatGPT in 2022, Generative Artificial Intelligence (GenAI) has rapidly reshaped corporate culture. Today, not only tech firms but almost every start‑up looks for employees who are proficient in AI‑related skills. This year's 10th anniversary of World Youth Skills Day (WYSD), themed "Empowering Youth through AI and Digital Skills," could not be more appropriate. Mercer‑Mettl's India Graduate Skill Index 2025 reveals that only 42.6 percent of Indian graduates are considered employable, down from 44.3 percent in 2023. "As businesses evolve with robots, artificial intelligence and intelligent automation, the very nature of work is undergoing a paradigm shift. Nearly 40 per cent of workers' core skills are expected to change by 2030," said Satish Shukla, Co‑founder, Addverb So, what skills will be in demand? According to the World Economic Forum's Future of Jobs 2025 report, the sharpest net rise in demand between 2025 and 2030 will be for AI and big data skills (+87 percentage points), followed by networks and cybersecurity (+70 pp), technological literacy (+68 pp), and creative thinking (+66 pp). However, as of now, the skill shortfall is acute in the cybersecurity industry at the larger scale. Fortinet's 2024 Global Cybersecurity Skills Gap Report finds that 92 per cent of Indian organisations suffered breaches last year, attributing many incidents to a shortage of skilled talent. Globally, nearly 4.8 million cybersecurity roles remained unfilled in 2023–24. "It reminds us of the urgent need to equip young people for an evolving economy. With cyber‑threats growing more sophisticated, there is a critical shortage of professionals to safeguard our digital infrastructure," stressed Sunil Sharma, Vice‑President, Sales (India & SAARC), Sophos Inclusion and public‑private collaboration UNESCO warns that women and marginalised groups remain significantly under‑represented in AI‑related fields. Bensely Zachariah, Global Head of HR at Fulcrum Digital, notes that despite more than 40 per cent of India's population being under 25, many young Indians, especially from Tier‑2 and Tier‑3 cities and marginalised communities are still underserved and under‑prepared for the digital economy. He advocates stronger public–private collaboration to fill this gap. "Integrating AI into school curricula, funding AI labs in rural institutions, and scaling initiatives such as AI Olympiads, boot camps, hackathons and digital apprenticeships will ensure every young person can thrive in a tech‑driven world." Also, the World Economic Forum estimates that 63 per cent of India's workforce over 70 million people will require upskilling or reskilling by 2030. So, how can we empower youth? "We can open the doors, but young people must walk in with eagerness, hunger and excitement to learn. The real skill is a mindset geared toward experimentation and upgrading from foundational principles to business applications," said Noopur Julka, Senior Director, UST "What gives me optimism is the breadth of support from government programmes like Skill India, PMKVY 4.0 and NPAI to on‑the‑ground efforts such as Skill Olympics and AI bootcamps actively bringing AI learning to youth across urban and rural India," added Shantanu Rooj, Founder and CEO, TeamLease Edtech. But Juveri Mukherjee, Global Head of HR, Aurionpro Solutions believes, "Empowering youth is a collective responsibility across industry, academia and government. This can help us unlock pathways to employment, entrepreneurship and innovation," concluded Juveri Mukherjee, Global Head of HR, Aurionpro Solutions.


Business Wire
34 minutes ago
- Business Wire
Catchpoint Launches Breakthrough AI Monitoring to Safeguard Next-Gen Digital Workflows
NEW YORK--(BUSINESS WIRE)--Catchpoint, the global leader in Internet Performance Monitoring (IPM), today announced powerful new capabilities—Performance and Resilience Monitoring for AI Assistants and Agentic AI systems—to proactively ensure uptime, speed, and reliability for mission-critical AI-driven workflows. Leveraging a track record of monitoring AI infrastructures in collaboration with top-tier technology innovators, Catchpoint offers a unique, industry-leading capability to monitor the full internet stack—going beyond surface metrics to diagnose issues deep within APIs, protocols and technologies. These new features empower organizations to proactively monitor both Agentic AI systems and AI-powered assistants with confidence. As organizations rapidly adopt generative and agentic AI technologies, such as Microsoft Copilot, OpenAI's ChatGPT, Perplexity, Claude, Google Gemini, and IBM watsonx, business-critical workflows become increasingly dependent on the uninterrupted performance of AI systems. Any latency, outage, or disruption can severely impact customer experiences and business outcomes —as seen during a recent OpenAI outage which resulted in many systems becoming unresponsive or slow. In the case of this incident, Catchpoint helped a major tech brand detect it and take corrective measures before it escalated, avoiding business impact. Moreover, according to the Internet Resilience Report 2025, 57% of organizations recognize immediately when AI supporting their critical Tier 1 applications becomes unavailable or slower. However, 27% of companies only become aware of such issues when users complain. "AI can't fail quietly—and yet, in many organizations, it still does," the report emphasizes. Catchpoint's new capabilities ensure immediate visibility into AI performance, enabling proactive management of disruptions to protect business continuity and customer experience. AI Assistant Reliability Monitoring This solution enables organizations to proactively detect and resolve issues affecting AI APIs, LLMs, and chatbots. Key capabilities include: Global API reachability: Test AI endpoints from key global regions to rapidly detect DNS, routing, or regional outages from thousands of intelligent agents in over 100 countries. Latency baselines: Continuously track response times to catch slowdowns before user experiences degrade. Synthetic prompt monitoring: Simulate real-world interactions to validate response accuracy and consistency. Uptime and error detection: Instantly alert on API downtime, errors, overload conditions, or malformed responses. Visual dependency mapping: Get the full context of the entire system or application to understand any component that may be impacting user experience, not only AI. Agentic AI Resilience Monitoring Designed specifically for complex, autonomous AI workflows that rely on multiple external dependencies, the new capability delivers full-stack visibility and observability across APIs, networks, cloud services, and third-party tools. Features include: Third-party API Monitoring: Track stability and latency of critical cloud services, SaaS APIs, and databases. Multi-hop Dependency Visibility: Trace the root cause of cascading failures across complex AI workflows. CI/CD Monitoring Automation: Automatically integrate monitoring into CI/CD pipelines to test changes in AI infrastructure. Cloud Region Resilience: Identify and mitigate risks associated with specific cloud region disruptions and performance issues. Global performance testing: from anywhere in the global observability network or private intelligent agents deployed in key locations, data centers, or offices. 'AI assistants and agentic agents are only as reliable as the networks and APIs they depend on,' said Mehdi Daoudi, CEO of Catchpoint. 'Our new capabilities give organizations the visibility they need to ensure AI resilience, reduce downtime, and deliver exceptional digital experiences, enabling IT organizations to innovate as they build the future.' For more information about Catchpoint's AI monitoring solutions, visit and About Catchpoint In today's exacting digital age, performance is paramount. The top online retailers, Global2000, CDNs, cloud service providers, and xSPs all rely on Catchpoint to ensure high performance and digital resilience by catching issues across the Internet Stack before they impact their customers, workforce or digital experiences. Catchpoint's Internet Performance Monitoring (IPM) suite offers Internet Synthetics, RUM, BGP, Tracing, performance optimization, high-fidelity data, and flexible visualizations with advanced analytics derived from the world's largest, most detailed, active observability network.