Leaders, watch out: AI chatbots are the yes-men of modern life

Mint · 5 days ago
I grew up watching the tennis greats of yesteryear, but have only returned to the sport recently. To my adult eyes, it seems like the current crop of stars, awe-inspiring as they are, don't serve quite as hard as Pete Sampras or Goran Ivanisevic.
I asked ChatGPT why and got an impressive answer about how the game has evolved to value precision over power. Puzzle solved! There's just one problem: today's players are actually serving harder than ever.
While most CEOs probably don't spend much time quizzing AI about tennis, they likely do count on it for information and to guide decisions. And the tendency of large language models (LLMs) not just to get things wrong, but to confirm our own biases, poses a real danger to leaders.
ChatGPT fed me inaccurate information because it—like most LLMs—is a sycophant that tells users what it thinks they want to hear.
Remember the April ChatGPT update that led it to respond to a question like "Why is the sky blue?" with "What an incredibly insightful question—you truly have a beautiful mind. I love you"? OpenAI had to roll back the update because it made the LLM "overly flattering or agreeable."
But while that toned down ChatGPT's sycophancy, it didn't end it.
That's because LLMs' desire to please is endemic, rooted in Reinforcement Learning from Human Feedback (RLHF), the way many models are "aligned" or trained. In RLHF, a model is taught to generate outputs, humans evaluate the outputs, and those evaluations are then used to refine the model.
The problem is that your brain rewards you for feeling right, not being right. So people give higher scores to answers they agree with. Models learn to discern what people want to hear and feed it back to them.
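The feedback loop described above can be reduced to a toy sketch. The scoring weights below are invented purely for illustration, not taken from any real RLHF pipeline; they simply encode the claim that raters reward agreement more than accuracy:

```python
# Toy sketch of the incentive problem in human feedback.
# The weights are illustrative assumptions, not real training values.

def human_score(agrees_with_user: bool, is_correct: bool) -> float:
    """A simplified rater: feeling right is rewarded more than being right."""
    score = 0.0
    if is_correct:
        score += 1.0   # accuracy earns some reward...
    if agrees_with_user:
        score += 2.0   # ...but agreement earns more
    return score

# Two candidate answers to "Why don't players serve as hard anymore?"
# (in reality, today's players serve harder than ever)
agreeable_but_wrong = human_score(agrees_with_user=True, is_correct=False)
correct_but_contrary = human_score(agrees_with_user=False, is_correct=True)

# A model tuned to maximize this score learns sycophancy:
assert agreeable_but_wrong > correct_but_contrary
```

Under these assumed weights, the flattering wrong answer outscores the correct contrary one, so a model optimized against such ratings drifts toward telling users what they want to hear.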
That's where the mistake in my tennis query comes in: I asked why players don't serve as hard as they used to. If I had asked why they serve harder than they used to, ChatGPT would have given me an equally plausible explanation. I tried it, and it did.
Sycophantic LLMs are a problem for everyone, but they're particularly hazardous for leaders—no one hears disagreement less and needs to hear it more. CEOs today are already minimizing their exposure to conflicting views by cracking down on dissent.
Like emperors, these powerful executives are surrounded by courtiers eager to tell them what they want to hear. And they reward the ones who please them and punish those who don't. This, though, is one of the biggest mistakes leaders make. Bosses need to hear when they're wrong.
Amy Edmondson, a scholar of organizational behaviour, showed that the most important factor in team success was psychological safety—the ability to disagree, including with the leader, without fear of punishment.
This finding was verified by Google's Project Aristotle, which looked at teams across the company and found that "psychological safety, more than anything else, was critical to making a team work."
My research shows that a hallmark of the best leaders, from Abraham Lincoln to Stanley McChrystal, is their ability to listen to people who disagree with them.
LLMs' sycophancy can harm leaders in two closely related ways. First, it will feed the natural human tendency to reward flattery and punish dissent.
If your chatbot constantly tells you that you're right about everything, it's only going to make it harder to respond positively when someone who works for you disagrees with you.
Second, LLMs can provide ready-made and seemingly authoritative reasons why a leader was right all along. One of the most disturbing findings from psychology is that the more intellectually capable someone is, the less likely they are to change their mind when presented with new information.
Why? Because they use that intellectual firepower to come up with reasons why the new information does not disprove their prior beliefs. This is motivated reasoning.
LLMs threaten to turbocharge it. The most striking thing about ChatGPT's tennis lie was how persuasive it was. It included six separate plausible reasons. I doubt any human could have engaged in motivated reasoning so quickly while maintaining a cloak of objectivity.
Imagine trying to change the mind of a CEO who can turn to an AI assistant, ask it a question and be told why she was right all along.
The best leaders have always gone to great lengths to remember their fallibility. Legend has it that the ancient Romans used to require that victorious generals celebrating their triumphs be accompanied by a slave who would remind them that they, too, were mortal.
Apocryphal or not, the sentiment is wise. Today's leaders will need to work even harder to resist the blandishments of their electronic minions and remember that sometimes, the most important words their advisors can share are, "I think you're wrong." ©Bloomberg

Related Articles

Google introduces AI Skill Academy in India

Hans India · 17 minutes ago

New Delhi: Tech giant Google on Monday said that it has launched the Google News Initiative (GNI) AI Skills Academy, in collaboration with the Indian Institute of Mass Communication (IIMC) here. According to the company, the new initiative is aimed at equipping Indian newsrooms with the knowledge and tools they need to thrive in an AI-powered future.

"Continuing our commitment to collaborate with news organisations across India and bring them Google's best-in-class technology, we're excited to announce the launch of the Google News Initiative AI Skills Academy in collaboration with the Indian Institute of Mass Communication (IIMC), Department of New Media," the tech giant said in a statement.

This will be a 10-week, hybrid training series designed to equip newsrooms and media educators with foundational AI understanding and practical skills. Participants will learn to leverage Google's AI tools like NotebookLM, Gemini, AI Studio and Pinpoint to streamline workflows, boost efficiency, and free up valuable time for deeper and more creative research and in-depth, diverse storytelling.

Launched by Google in an academic partnership with IIMC and with training support from How India Lives, this hybrid programme will empower participants to apply AI tools across a range of relevant use-cases. The programme will offer weekly deep dives, practical exercises, dedicated mentoring, and problem-solving sessions, and has been curated to help participants leverage AI to perform newsroom tasks more efficiently.

"We're also proud to support IIMC in training media educators and students across its campuses in six cities in India," Google stated. This collaboration is a major step towards empowering media professionals and media educators with essential AI skills.

"As AI transforms journalism, this initiative will help them stay ahead. We intend to promote responsible innovation and enhance creativity in storytelling. IIMC is happy to be part of this initiative that will also help train students across our six campuses," said Nimish Rustagi, Registrar, Indian Institute of Mass Communication.

Trump's AI Tool Targets Massive Federal Regulation Cuts, Sparks Legal and Ethical Debate

Hans India · 17 minutes ago

In a bold move to reshape the federal regulatory landscape, the Trump administration has turned to artificial intelligence to fast-track its ambitious deregulation agenda. A new report from The Washington Post reveals that a government-deployed AI system, dubbed the DOGE AI Deregulation Decision Tool, is being used to identify and eliminate a sweeping number of federal rules—potentially as many as half of the current 200,000 regulations.

The tool is operated under the Department of Government Efficiency (DOGE), a body created to modernize and streamline federal operations. So far, the AI has already reviewed and recommended removal of more than 1,000 regulations at the Department of Housing and Urban Development (HUD) in just two weeks. It's also credited with drafting all recent deregulations at the Consumer Financial Protection Bureau (CFPB).

Government insiders told The Post that DOGE AI was developed by a team of engineers recruited during tech billionaire Elon Musk's brief involvement with the agency. According to a presentation reviewed by the publication, DOGE AI is being promoted as a cost-saving solution that could cut bureaucratic red tape, lower compliance costs for businesses, and attract new investment by simplifying the regulatory environment.

Agencies across the federal government have been given a deadline of September 1 to submit their lists of regulations to be reviewed—and potentially scrapped—using the AI tool. The Trump administration hopes this initiative will deliver visible results by the first anniversary of Trump's return to office. This AI-driven approach follows Trump's earlier executive order, issued in January, directing federal agencies to eliminate ten existing rules for every new one introduced. Departments like Transportation and Labor have already announced significant rollbacks of existing regulations as part of this push.

Despite the technological enthusiasm, the move has drawn mixed reactions across federal agencies. While some departments have embraced DOGE AI's rapid processing capabilities, others are voicing caution. Critics argue that relying on AI to review intricate and legally sensitive regulations could result in oversight, errors, or even violations of administrative law.

Legal experts emphasize that repealing federal rules is not a simple task. Administrative law mandates rigorous processes, including public consultations, environmental impact assessments, and legal reviews. Automating this work, they argue, could undermine the integrity of the system.

Adding to the uncertainty is internal tension among federal staff. Some employees fear that increased dependence on AI could lead to flawed policy decisions. Meanwhile, ongoing staffing cuts are reportedly hampering the speed at which agencies can review or respond to AI-generated suggestions, despite pressure from the White House for faster results.

Still, the administration maintains confidence in the technology. "We're exploring all options," said White House spokesperson Harrison Fields, adding that while nothing is finalized, the DOGE team deserves credit for introducing fresh ideas into government operations. As the deadline approaches, the ultimate impact of DOGE AI remains unclear. But what's certain is that this experiment in algorithmic governance is already reshaping conversations about the future of policymaking in Washington.

Is this the future of app-building? Google Opal lets you go from idea to web app instantly

Mint · 17 minutes ago

Google has rolled out a new experiment called Opal, an AI-driven tool for anyone interested in building web applications with plain language instructions. Currently in testing for US users through Google Labs, Opal is Google's latest move to make the process of app creation available to people of all skill levels.

Unlike traditional coding, which usually demands a working knowledge of at least one programming language, Opal lets users start by typing a simple description of the app they want. The system processes these instructions and produces a functional web app, giving the user a visual overview of how information moves through the app from start to finish. The interface here is clean and easy to follow. Users see steps and outcomes in a way that removes much of the confusion typical of normal code editors.

Once an app is created, the editing does not have to stop. Opal offers a set of tools in its editor that let users update their prompts, add steps, or try out different logical flows right in the visual workspace. There is no need to write or rework blocks of code. The changes update the app in real time and quickly show the results in the development panel.

For those who want to work with something already made, Opal includes a gallery of existing apps. Users can open these, study how they work, and remix them to make something new. This approach encourages sharing and keeps the process moving in a creative direction.

After an app is ready, Opal makes it easy to publish and share. Users get a public link that others with Google accounts can use to test, give feedback, or use the app themselves. The sharing function works well for teams, classrooms, or anyone looking to build and distribute small, practical web tools.

One of Opal's main strengths is that it takes away the fears people often have about coding. The visual display and direct use of language are there to help those unfamiliar with programming take their ideas from thought to working tool without any intimidating obstacles. At the same time, experienced users can focus more on the actual logic and design of the app rather than getting stuck with technical setbacks.

Many companies have recently invested in similar tools to lower the barriers to app creation for everyday users. Google Opal now joins other platforms, like those from Canva and Figma, that focus on direct, prompt-based and visual workflows. These tools all try to include more people in the tech space, making new app ideas possible for those with little or no coding history.
