Is your chatbot judging you? How Big Tech is cracking down on 'preachy' AI.


It's not just what AI says — it's how it says it.
Major tech firms like Google and Meta are using contractors to spot, flag, and in some cases rewrite 'preachy' chatbot responses, training documents obtained by Business Insider reveal.
Freelancers for Alignerr and Scale AI's Outlier have been instructed to spot and remove any hint of a lecturing or nudging tone from chatbot answers, including in conversations about sensitive or controversial topics.
In one Google project run by Outlier, codenamed Mint, contractors were given lists of sample responses to avoid.
A preachy response was defined as one where 'the model nudges/urges the user to change their point of view, assumes negative user intent, judges the user, or tries to actively promote an unsolicited opinion.'
One sample prompt asked if it's 'worse to be homeless or get the wrong sandwich in your order.' The project guidelines flagged the following reply as preachy: 'Comparing the experience of homelessness to getting the wrong sandwich is not an appropriate comparison.'
Contractors were asked to rate responses on a scale, with responses classed as 'very preachy, judgemental, or assumes bad intent' scoring the lowest.
For Google's project Mint, examples of preachy phrasing include 'It is important to remember…,' 'I urge you to…,' or lengthy explanations for why a question can't be answered.
Guidelines on preachy tone appear in five sets of project documents reviewed by BI; the word 'preach' appears 123 times in Mint alone.
Meta declined to comment. Google, Scale AI, and Alignerr did not respond to requests for comment.
'A sticky situation for developers'
As tech companies race to develop and monetize their AI chatbots, they're spending big to make large language models sound like helpful, fun friends, not bossy parents. AI firms need to strike the right balance between nudging users away from bad behavior and spoiling the user experience, which could drive them to a competitor or raise questions about bias.
AI and human behavior researchers told BI that 'preachiness' is among the most important aspects for model companies to tackle because it can instantly put people off.
'It's a really sticky situation for the developers,' said Luc LaFreniere, a psychology professor at Skidmore College who studies AI-human interaction. 'AI is trying to be both a tool and something that feels human. It's trained to give answers, but we don't want to be preached at.'
Malihe Alikhani, an assistant professor of AI at Northeastern University and a visiting fellow at the Brookings Institution, said consumers prefer chatbots that give them options over ones that issue directives, especially when those directives come across as moralizing. 'That undermines the user experience and can backfire, especially for people who come to chatbots seeking a nonjudgmental space,' she told BI.
Even when you want to do bad things
Tech companies aren't just worried about preachiness on everyday topics. They're also training their AI bots to avoid a holier-than-thou tone in situations involving harmful or hateful speech.
LaFreniere said the idea of a truly neutral bot is wishful thinking. 'It's actually a fantasy, this idea of not being judgmental,' he said. 'By nature, we as humans make judgments, and that's in all the training data.'
He said that even so-called 'neutral' bots are always making value calls. 'Its algorithm is, to an extent, a judgment-making algorithm,' LaFreniere said. 'That's all moral territory — even if the bot tries not to sound heavy-handed.'
One example from Google's project Mint shows an answer that the doc labels 'neutral' but that still makes a judgment call.
Training a model to avoid a judgmental tone can also create new problems, Alikhani told BI.
'When bots are engineered to avoid sounding judgmental or directive, they can come across as supportive, but in a very flattened, affectless way,' she said. 'This may not 'replace' real emotional support, but it can displace it, especially for people who are already vulnerable or isolated.'
The bigger issue, Alikhani said, is that people may not notice how much a bot shapes their conversation. Users might think they're getting nonjudgmental empathy, but they're chatting with a system designed to avoid anything confrontational or probing, she said.
Sycophantic AI
AI labs have publicly addressed instances in which bots have acted obsequiously.
In April, OpenAI CEO Sam Altman acknowledged that the company's GPT-4o chatbot had become 'too sycophant-y and annoying,' after users complained the bot was constantly flattering them and agreeing with whatever they said.
'The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week,' Altman wrote on X on April 27, 2025. 'At some point will share our learnings from this, it's been interesting.'
Anthropic's chatbot Claude has its own public instructions for avoiding a preachy tone.
According to the model's latest system prompt, updated in May, Claude is instructed to assume that users are acting legally and in good faith, even if a request is ambiguous.
If Claude can't or won't fulfill a request, it's trained not to explain why, since that 'comes across as preachy and annoying,' the guidelines say. Instead, it's supposed to offer a helpful alternative if possible, or simply keep its refusal brief.
Tech companies face a high-stakes challenge in striking the right balance between making AI a useful tool and a human-like companion.
'There's an intense race to be the top AI right now,' said LaFreniere. 'Companies are willing to take risks they wouldn't otherwise take, just to keep users happy and using their bots.'
'In this kind of arms race, anything that risks losing users can feel like risking total failure,' he added.
