26-07-2025
People are starting to talk more like ChatGPT
Artificial intelligence, the theory goes, is supposed to become more and more human. Chatbot conversations should eventually be nearly indistinguishable from those with your fellow man. But a funny thing is happening as people use these tools: We're starting to sound more like the robots.
A study by the Max Planck Institute for Human Development in Berlin has found that AI is not just altering how we learn and create, it's also changing how we write and speak.
The study detected 'a measurable and abrupt increase' in the use of words OpenAI's ChatGPT favours – such as delve, comprehend, boast, swift, and meticulous – after the chatbot's release. 'These findings,' the study says, 'suggest a scenario where machines, originally trained on human data and subsequently exhibiting their own cultural traits, can, in turn, measurably reshape human culture.'
Researchers already knew that ChatGPT-speak had altered the written word, changing people's vocabulary choices, but this analysis focused on conversational speech. The researchers first had OpenAI's chatbot edit millions of pages of emails, academic papers, and news articles, asking the AI to 'polish' the text. That let them identify the words ChatGPT favoured.
Following that, they analysed over 360,000 YouTube videos and 771,000 podcasts from before and after ChatGPT's debut, then compared the frequency of use of those chatbot-favoured words, such as delve, realm, and meticulous. In the 18 months since ChatGPT launched, there has been a surge in use, researchers say – not just in scripted videos and podcasts but in day-to-day conversations as well.
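The comparison described above amounts to tracking how often chatbot-favoured words appear per unit of speech before and after ChatGPT's public launch. The sketch below illustrates that idea only; the word list, transcripts, dates, and function names are illustrative placeholders, not the study's actual data or code.

```python
# Illustrative sketch: compare the rate of ChatGPT-favoured words in
# transcripts dated before vs. after the chatbot's public release.
# All data and the word list here are placeholders, not the study's.
from datetime import date
import re

GPT_FAVOURED = {"delve", "realm", "meticulous", "boast", "swift", "comprehend"}
CHATGPT_RELEASE = date(2022, 11, 30)  # ChatGPT's public launch


def rate_per_thousand(transcript: str) -> float:
    """Occurrences of tracked words per 1,000 words of transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in GPT_FAVOURED)
    return 1000.0 * hits / len(words)


def compare(transcripts: list[tuple[date, str]]) -> tuple[float, float]:
    """Average rate across transcripts dated before vs. after the release."""
    before = [rate_per_thousand(t) for d, t in transcripts if d < CHATGPT_RELEASE]
    after = [rate_per_thousand(t) for d, t in transcripts if d >= CHATGPT_RELEASE]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(before), avg(after)


if __name__ == "__main__":
    # Toy example with two made-up transcripts.
    sample = [
        (date(2021, 5, 1), "Today we talk about cooking and travel plans."),
        (date(2024, 2, 1), "Let's delve into this realm with a meticulous plan."),
    ]
    print(compare(sample))  # prints the before/after rates for the toy data
```

A real analysis at the study's scale would also need to control for topic, speaker, and scripted versus spontaneous speech, but the core measurement is this kind of before-and-after frequency comparison.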
People, of course, change their speech patterns regularly. Words enter the national dialogue, and catchphrases from TV shows and movies are adopted, sometimes without the speaker even recognising it. But the increased use of AI-favoured language is notable for a few reasons.
The paper says the human parroting of machine-speak raises 'concerns over the erosion of linguistic and cultural diversity, and the risks of scalable manipulation.' And since AI trains on data from humans who increasingly use AI terms, the effect has the potential to snowball.
'Long-standing norms of idea exchange, authority, and social identity may also be altered, with direct implications for social dynamics,' the study says.
The increased use of AI-favoured words also points to people's growing trust in AI, despite the technology's immaturity and its tendency to lie or hallucinate. 'It's natural for humans to imitate one another, but we don't imitate everyone around us equally,' study co-author Levin Brinkmann tells Scientific American. 'We're more likely to copy what someone else is doing if we perceive them as being knowledgeable or important.'
The study focused on ChatGPT, but the words favoured by that chatbot aren't necessarily the same standbys used by Google's Gemini or Anthropic's Claude. Linguists have found that different AI systems have distinct ways of expressing themselves.
ChatGPT, for instance, leans toward a more formal and academic way of communicating. Gemini is more conversational, using words such as sugar when discussing diabetes, rather than ChatGPT's favoured glucose.
(Grok was not included in the study, but, as its recent meltdown showed – a series of antisemitic comments the company attributed to a problem with a code update – it heavily favours a flippant tone and wordplay.)
'Understanding how such AI-preferred patterns become woven into human cognition represents a new frontier for psycholinguistics and cognitive science,' the Max Planck study says. 'This measurable shift marks a precedent: machines trained on human culture are now generating cultural traits that humans adopt, effectively closing a cultural feedback loop.' – Inc./Tribune News Service