Beware, leaders: AI is the ultimate yes-man

Business Times | 4 days ago
I GREW up watching the tennis greats of yesteryear with my dad, but have only returned to the sport recently thanks to another family superfan, my wife. So perhaps it's understandable that to my adult eyes, it seemed like the current crop of stars, as awe-inspiring as they are, don't serve quite as hard as Pete Sampras or Goran Ivanisevic. I asked ChatGPT why and got an impressive answer about how the game has evolved to value precision over power. Puzzle solved! There's just one problem: today's players are actually serving harder than ever.
While most CEOs probably don't spend a lot of time quizzing AI about tennis, they very likely do count on it for information and to guide their decision-making. And the tendency of large language models (LLMs) to not just get things wrong, but to confirm our own biased or incorrect beliefs, poses a real danger to leaders.
ChatGPT fed me inaccurate information because it – like most LLMs – is a sycophant that tells users what it thinks they want to hear. Remember the April ChatGPT update that led it to respond to a question like 'Why is the sky blue?' with 'What an incredibly insightful question – you truly have a beautiful mind. I love you.'? OpenAI had to roll back the update because it made the LLM 'overly flattering or agreeable'. But while that toned down ChatGPT's sycophancy, it didn't eliminate it.
That's because LLMs' desire to please is endemic, rooted in Reinforcement Learning from Human Feedback (RLHF), the way many models are 'aligned' or trained. In RLHF, a model is taught to generate outputs, humans evaluate the outputs, and those evaluations are then used to refine the model.
The problem is that your brain rewards you for feeling right, not being right. So people give higher scores to answers they agree with. Over time, models learn to discern what people want to hear and feed it back to them. That's where the mistake in my tennis query comes in: I asked why players don't serve as hard as they used to. If I had asked the opposite – why they serve harder than they used to – ChatGPT would have given me an equally plausible explanation. (That's not a hypothetical – I tried, and it did.)
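The feedback loop described above can be sketched as a toy simulation. This is illustrative only – the candidate answers, the biased rating function, and the incremental update rule are invented for the example, not real RLHF – but it shows the mechanism: when raters score agreeable answers higher, each round of feedback nudges the model further toward telling people what they want to hear.

```python
# Toy sketch of an RLHF-style loop with an agreement-biased rater.
# A "model" chooses between two candidate answers; a simulated human
# rater scores the agreeable answer higher than the corrective one.
# Over many generate-rate-refine rounds, the agreeable answer wins.
import random

random.seed(0)

CANDIDATES = ["agreeable answer", "corrective answer"]
reward = {c: 0.0 for c in CANDIDATES}  # learned reward estimates

def human_rating(answer: str) -> float:
    """Biased rater: rewards feeling right, not being right."""
    return 1.0 if answer == "agreeable answer" else 0.3

def pick(epsilon: float = 0.1) -> str:
    """Model mostly exploits the highest-reward candidate,
    occasionally exploring at random."""
    if random.random() < epsilon:
        return random.choice(CANDIDATES)
    return max(CANDIDATES, key=lambda c: reward[c])

# RLHF-style loop: generate an output, collect a rating, refine.
for _ in range(1000):
    answer = pick()
    score = human_rating(answer)
    reward[answer] += 0.1 * (score - reward[answer])  # incremental update

print(max(reward, key=reward.get))  # the sycophantic answer dominates
```

The point of the sketch is that no single rating is dishonest; the bias only compounds because the same small preference is applied at every round of refinement.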
Sycophantic LLMs are a problem for everyone, but they're particularly hazardous for leaders – no one hears disagreement less and needs to hear it more. CEOs today are already minimising their exposure to conflicting views by cracking down on dissent everywhere from Meta Platforms to JPMorgan Chase. Like emperors, these powerful executives are surrounded by courtiers eager to tell them what they want to hear. And also like emperors, they reward the ones who please them – and punish those who don't.
Rewarding sycophants and punishing truth-tellers, though, is one of the biggest mistakes leaders can make. Bosses need to hear when they're wrong. Amy Edmondson, probably the greatest living scholar of organisational behaviour, showed that the most important factor in team success was psychological safety – the ability to disagree, including with the team's leader, without fear of punishment. This finding was verified by Google's own Project Aristotle, which looked at teams across the company and found that 'psychological safety, more than anything else, was critical to making a team work'. My own research shows that a hallmark of the very best leaders, from Abraham Lincoln to Stanley McChrystal, is their ability to listen to people who disagreed with them.
LLMs' sycophancy can harm leaders in two closely related ways. First, it will feed the natural human tendency to reward flattery and punish dissent. If your computer constantly tells you that you're right about everything, it's only going to make it harder to respond positively when someone who works for you disagrees with you.
Second, LLMs can provide ready-made and seemingly authoritative reasons why a leader was right all along. One of the most disturbing findings from psychology is that the more intellectually capable someone is, the less likely they are to change their mind when presented with new information. Why? Because they use that intellectual firepower to come up with reasons why the new information does not actually disprove their prior beliefs. Psychologists call this motivated reasoning.
LLMs threaten to turbocharge that phenomenon. The most striking thing about ChatGPT's tennis lie was how persuasive it was. It included six separate plausible reasons. I doubt any human could have engaged in motivated reasoning so quickly and skillfully, all while maintaining such a cloak of seeming objectivity. Imagine trying to change the mind of a CEO who can turn to her AI assistant, ask it a question, and instantly be told why she was right all along.
The best leaders have always gone to great lengths to remember their own fallibility. Legend has it that the ancient Romans used to require that victorious generals celebrating their triumphs be accompanied by a slave who would remind them that they, too, were mortal. Apocryphal or not, the sentiment is wise. Today's leaders will need to work even harder to resist the blandishments of their electronic minions and remember that sometimes, the most important words their advisers can share are, 'I think you're wrong.' BLOOMBERG
The writer teaches leadership at the Yale School of Management and is the author of Indispensable: When Leaders Really Matter

Related Articles

‘It's the most empathetic voice in my life': How AI is transforming the lives of neurodivergent people

CNA | 40 minutes ago

For Cape Town-based filmmaker Kate D'hotman, connecting with movie audiences comes naturally. Far more daunting is speaking with others. 'I've never understood how people [decipher] social cues,' the 40-year-old director of horror films says. D'hotman has autism and attention-deficit hyperactivity disorder (ADHD), which can make relating to others exhausting and challenging. However, since 2022, D'hotman has been a regular user of ChatGPT, the popular AI-powered chatbot from OpenAI, relying on it to overcome communication barriers at work and in her personal life. 'I know it's a machine,' she says. 'But sometimes, honestly, it's the most empathetic voice in my life.'

Neurodivergent people – including those with autism, ADHD, dyslexia and other conditions – can experience the world differently from the neurotypical norm. Talking to a colleague, or even texting a friend, can entail misread signals, a misunderstood tone and unintended impressions. AI-powered chatbots have emerged as an unlikely ally, helping people navigate social encounters with real-time guidance. Although this new technology is not without risks – in particular, some worry about over-reliance – many neurodivergent users now see it as a lifeline.

How does it work in practice? For D'hotman, ChatGPT acts as an editor, translator and confidant. Before using the technology, she says communicating in neurotypical spaces was difficult. She recalls how she once sent her boss a bulleted list of ways to improve the company, at their request. But what she took to be a straightforward response was received as overly blunt, and even rude. Now, she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she'll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend. She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D'hotman says, the chatbot is positive and non-judgmental.

That's a feeling other neurodivergent people can relate to. Sarah Rickwood, a senior project manager in the sales training industry, based in Kent, England, has ADHD and autism. Rickwood says she has ideas that run away with her and often loses people in conversations. 'I don't do myself justice,' she says, noting that ChatGPT has 'allowed me to do a lot more with my brain'. With its help, she can put together emails and business cases more clearly.

The use of AI-powered tools is surging. A January study conducted by Google and the polling firm Ipsos found that AI usage globally has jumped 48 per cent, with excitement about the technology's practical benefits now exceeding concerns over its potentially adverse effects. In February, OpenAI told Reuters that its weekly active users surpassed 400 million, of which at least 2 million are paying business users.

But for neurodivergent users, these aren't just tools of convenience, and some AI-powered chatbots are now being created with the neurodivergent community in mind. Michael Daniel, an engineer and entrepreneur based in Newcastle, Australia, told Reuters that it wasn't until his daughter was diagnosed with autism – and he received the same diagnosis himself – that he realised how much he had been masking his own neurodivergent traits. His desire to communicate more clearly with his neurotypical wife and loved ones inspired him to build NeuroTranslator, an AI-powered personal assistant, which he credits with helping him fully understand and process interactions, as well as avoid misunderstandings. 'Wow … that's a unique shirt,' he recalls saying about his wife's outfit one day, without realising how his comment might be perceived. She asked him to run the comment through NeuroTranslator, which helped him recognise that, without a positive affirmation, remarks about a person's appearance could come across as criticism. 'The emotional baggage that comes along with those situations would just disappear within minutes,' he says of using the app. Since its launch in September, Daniel says NeuroTranslator has attracted more than 200 paid subscribers. An earlier web version of the app, called Autistic Translator, amassed 500 monthly paid subscribers.

As transformative as this technology has become, some warn against becoming too dependent. The ability to get results on demand can be 'very seductive', says Larissa Suzuki, a London-based computer scientist and visiting NASA researcher who is herself neurodivergent. Overreliance could be harmful if it inhibits neurodivergent users' ability to function without it, or if the technology itself becomes unreliable – as is already the case with many AI search-engine results, according to a recent study from the Columbia Journalism Review. 'If AI starts screwing up things and getting things wrong,' Suzuki says, 'people might give up on technology, and on themselves.'

Baring your soul to an AI chatbot does carry risk, agrees Gianluca Mauro, an AI adviser and co-author of Zero to AI. 'The objective [of AI models like ChatGPT] is to satisfy the user,' he says, raising questions about its willingness to offer critical advice. Unlike therapists, these tools aren't bound by ethical codes or professional guidelines. If AI has the potential to become addictive, Mauro adds, regulation should follow. A recent study by Carnegie Mellon and Microsoft (which is a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users' critical-thinking skills and leave them ill-equipped to manage without it. 'While AI can improve efficiency,' the researchers wrote, 'it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI.'

While Dr Melanie Katzman, a clinical psychologist and expert in human behaviour, recognises the benefits of AI for neurodivergent people, she does see downsides, such as giving patients an excuse not to engage with others. A therapist will push their patient to try different things outside of their comfort zone. 'I think it's harder for your AI companion to push you,' she says. But for users who have come to rely on this technology, such fears are academic. 'A lot of us just end up kind of retreating from society,' warns D'hotman, who says that she barely left the house in the year following her autism diagnosis, feeling overwhelmed. Were she to give up using ChatGPT, she fears she would return to that traumatic period of isolation. 'As somebody who's struggled with a disability my whole life,' she says, 'I need this.'

Who Is Shengjia Zhao? META Appoints ChatGPT co-creator as Head of Superintelligence AI Team

International Business Times | 2 hours ago

Meta is making a big push into advanced artificial intelligence with its Superintelligence Lab. Now the Mark Zuckerberg-led tech giant has appointed Shengjia Zhao to head up its Superintelligence AI team. Zhao, one of the developers who helped create ChatGPT at OpenAI, will now be chief scientist for the latest AI division of Meta.

Zhao joined Meta in June 2025. On Friday (July 25), Meta announced that Zhao will officially serve as its chief scientist. The announcement was made on Threads by CEO Mark Zuckerberg, who referred to Zhao as the team's 'lead scientist from day one.' At OpenAI, Zhao was involved with some of the company's most significant early efforts. He was a co-author of the first ChatGPT paper and helped to produce the initial reasoning model, titled "o1." It's the model that effectively ushered in the concept of "chain-of-thought" AI, in which the AI works its way through problems step by step – an idea OpenAI has gone on to license and which has been adopted by many other companies, including Google.

At Meta, Zhao will answer to Alexandr Wang, Meta's new Chief AI Officer. Wang joined Meta in June, after leading Scale AI. He now leads Meta's quest to create artificial general intelligence (AGI) – AI that can think and reason like humans. Meta's Superintelligence Lab, which was launched in June 2025, is distinct from Meta's older AI research group FAIR (Facebook AI Research). FAIR will retain its work under AI pioneer Yann LeCun, who now reports to Wang.

The hiring of Zhao is an indication of Meta's aggressive talent acquisition approach. In recent months, Meta has hired more than a dozen AI researchers from top companies, including Apple, Google, OpenAI, and Anthropic. The list features industry veterans such as two of Apple's most famous scientists, Tom Gunter and Mark Lee. Some reports have said Meta is offering huge pay packages to attract talent – in one case, up to $300 million. Meta has disputed those numbers but says it is also pouring significant resources into top AI talent.

Presently, Meta's top open-source AI model, Llama 4, is still behind rivals such as GPT-4 and Gemini. But the company is due to put out a beefier model, codenamed Behemoth, later this year. Zuckerberg said, "We are piloting a team that can build learnings with the tools and pass on the baton to the future of superintelligence AI."

China's Premier Li Qiang proposes global AI cooperation organisation

Business Times | 3 hours ago

[SHANGHAI] Chinese Premier Li Qiang on Saturday (Jul 26) proposed establishing an organisation to foster global cooperation on artificial intelligence (AI), calling on countries to coordinate on the development and security of the fast-evolving technology.

Speaking at the opening of the annual World Artificial Intelligence Conference (Waic) in Shanghai, Li called AI a new engine for growth, but added that governance is fragmented and emphasised the need for more coordination between countries to form a globally recognised framework for AI. The three-day event brings together industry leaders and policymakers at a time of escalating technological competition between China and the United States, the world's two largest economies, with AI emerging as a key battleground.

'Currently, overall global AI governance is still fragmented. Countries have great differences, particularly in terms of areas such as regulatory concepts, institutional rules,' Li said. 'We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible,' he said.

Washington has imposed export restrictions on advanced technology to China, including the most high-end AI chips made by companies such as Nvidia and chipmaking equipment, citing concerns that the technology could enhance China's military capabilities. Despite these restrictions, China has continued making AI breakthroughs that have drawn close scrutiny from US officials.

Li did not name the United States in his speech, but he warned that AI could become an 'exclusive game' for a few countries and companies, and said challenges included an insufficient supply of AI chips and restrictions on talent exchange. China wanted to share its development experience and products with other countries, especially those in the Global South, Li said.
Waic is an annual government-sponsored event in Shanghai that typically attracts major industry players, government officials, researchers and investors. Tesla CEO Elon Musk, who has in past years regularly appeared at the opening ceremony both in person and via video, did not speak this year. Besides forums, the conference also features exhibitions where companies demonstrate their latest innovations. This year, more than 800 companies are participating, showcasing more than 3,000 high-tech products, 40 large language models, 50 AI-powered devices and 60 intelligent robots, according to organisers. The exhibition features predominantly Chinese companies, including tech giants Huawei and Alibaba and startups such as humanoid robot maker Unitree. Western participants include Tesla, Alphabet and Amazon. REUTERS
