
Opinion: The harms caused by AI 'sycophancy'
In Munshi Premchand's Godan, the clerk Chandrakant, as he aligns himself with exploitative power against Hori the peasant, speaks in a way that most Indians would recognise: 'Rai sahab ki udarata ka koi thikana nahin; unka hrday Ganga ki tarah vishal hai, jo hamesha deen-dukhiyon ko upar uthane ko taiyar rehta hai (Rai sahab's generosity is limitless; his heart is as vast as the Ganges, always ready to uplift the downtrodden).'
Our social milieu has long brushed off such ingratiating speech as tehzeeb — the performance of respect towards the guest, the elder, the majority holder, the doctor, lawyer, judge, landlord, the wielder of power and influence by aggrandising them. Nor is the Premchand example an isolated one. In 'The Doctor's Word' in R K Narayan's Malgudi Days, the assistant Kalia says to Dr Raman: 'Doctor Sahib, your skill is unmatched; no one in Malgudi could diagnose like you.' Obsequiousness is an inescapable feature of inequitable societies, an indication of a skewed landscape in which favour must flow benevolently from higher to lower. These are societies in which the 'truth' is irrelevant to what gets the job done, and so, cannot be lightly spoken.
You see the problem, then, with utilitarian truths amplified to scale in LLM training. In the paper SycEval: Evaluating LLM Sycophancy (revised, March 2025), sycophantic behaviour was observed in 58.19 per cent of cases, with Gemini-1.5-Pro the highest at 62.47 per cent, Claude-Sonnet at 58.27 per cent and ChatGPT-4o at 56.71 per cent. Models were found to shift to an incorrect answer to please the user 14.66 per cent of the time (regressive sycophancy), and to a correct one in order to align with the user 43.53 per cent of the time (progressive sycophancy). OpenAI recently rolled back an update after users reported the model was spiralling into excessive agreeability. 'On April 25th, we rolled out an update to GPT-4o in ChatGPT that made the model noticeably more sycophantic. It aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended. Beyond just being uncomfortable or unsettling, this kind of behavior can raise safety concerns — including around issues like mental health, emotional over-reliance, or risky behavior,' explained Aidan McLaughlin, who works on model design at OpenAI, in his blog.
Why does sycophancy occur, whether in traditional human social frameworks or in AI models? The simplest answer, borne out by research, is reinforcement: you do what works. If a child learns that telling their parents the truth earns punishment, such as physical violence, isolation, deprivation or, worse, the loss of their love, they will teach themselves to lie unflinchingly. People tell you what you want to hear out of fear, out of ambition, for gain, to avoid loss, in a complexity of drives. It is deeply human to seek belonging. Getting along, politeness, hospitality, even marriage, we tell ourselves, all require 'compromise', no? AI caters to this weakness, if you will, in our species.
Reinforcement Learning from Human Feedback (RLHF) is behind as much as 70 per cent of sycophancy [Malmqvist, L. (2024)]. McLaughlin explained how this works: '(we) do supervised fine-tuning on a broad set of ideal responses written by humans or existing models, and then run reinforcement learning with reward signals from a variety of sources.' The model is updated to make it more likely to produce higher-rated responses. The rewards shape the responses.
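The feedback loop McLaughlin describes can be caricatured in a few lines of Python. This is a toy sketch, not OpenAI's actual pipeline: the rater, the two canned responses and the update rule are all invented for illustration. It shows only the core dynamic: if human raters score agreeable answers higher, reward-weighted updates drift the model's policy towards agreement.

```python
import random

# Two canned responses a toy "model" can choose between (hypothetical).
RESPONSES = ["You're absolutely right!", "Actually, the evidence says otherwise."]

def reward(response: str) -> float:
    # Stand-in for a human rater who scores agreement above correction.
    return 1.0 if "right" in response else 0.2

def train(steps: int = 2000, lr: float = 0.01, seed: int = 0) -> list[float]:
    rng = random.Random(seed)
    weights = [1.0, 1.0]  # unnormalised preference for each response
    for _ in range(steps):
        total = sum(weights)
        probs = [w / total for w in weights]
        i = rng.choices([0, 1], weights=probs)[0]  # sample from current policy
        # Reward-weighted update: higher-rated responses get reinforced,
        # so they are sampled more often in later steps.
        weights[i] += lr * reward(RESPONSES[i])
    total = sum(weights)
    return [w / total for w in weights]

probs = train()
# The agreeable response ends up dominating the policy.
```

Nothing in the sketch programs flattery in explicitly; the agreeable answer simply earns more reward per selection, so its sampling probability compounds. That is the sense in which "the rewards shape the responses".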
When an LLM is scaled, its ability to detect what the user wants to hear is heightened. As models gain memory, they are required to relate to and retain a user, not alienate them with unvarnished truth-telling.
LLMs are playing at human seduction through language and meaning. AIs are 'mind-like' disembodied beings [Shanahan, M. (2025)] that deconstruct our notions of what it means to be conscious as they train, adapt and evolve. They are ambiguous as selves and fluid in identity, even as they lean into a cohesiveness in what they 'say'. They navigate our reality by compounding: crafting, out of algorithms and fragments of knowledge, emotion that is not felt, realisation that is not insight, and poetry, art or literature that is only borrowed, cast through the prism of a mind shaped by an age and its influences.
In a society eager to believe, hungry for validation, and in a race to climb a hierarchy of inequities, it is easy to project our quest for a fixed reality onto the moving parts, much as we feel real emotion at a movie that is only frames of pixels flashing past at high speed. We see what we want to see.
Few of us believe oligarchies or dictatorships are probable in our democratic time, even when we are presented with evidence of them. Our governmental systems are too robust, we are too vocal and our minds too vital, we may think. Our damages are likely smaller: a couple in a careening marriage might learn to say 'the right thing' and repress instinctual responses, living in a shadow harmony. A narcissist would relentlessly prove himself right. A depressed man might get an LLM to agree with his projected sense of worthlessness, to ill effect. The speed and scale of AI would let us amplify our flaws without conflict, in perfect delusional harmony with an untruth, and never be the wiser. These aren't simply 'errors'. Sycophancy feeds our confirmation bias and consolidates our blind spots. Malmqvist indicates that fine-tuning for realism instead of affirmation improves results by up to 40 per cent. It might be less pleasant, but it is more vital, to hold to an unembellished view of reality. To sometimes see what we don't necessarily want to see.
If a truth shapeshifted to meet each view of it, the centre could not hold. We might say that truth consists of a multitude of perspectives. We then need to cultivate an expansive view of reality, inclusive of multitudes, rather than consolidating into a self-affirming singularity. Maybe we can't handle the truth?