
People are starting to talk more like ChatGPT
A study by the Max Planck Institute for Human Development in Berlin has found that AI is not just altering how we learn and create; it's also changing how we write and speak.
The study detected 'a measurable and abrupt increase' in the use of words OpenAI's ChatGPT favours – such as delve, comprehend, boast, swift, and meticulous – after the chatbot's release. 'These findings,' the study says, 'suggest a scenario where machines, originally trained on human data and subsequently exhibiting their own cultural traits, can, in turn, measurably reshape human culture.'
Researchers already knew that ChatGPT-speak had altered the written word, changing people's vocabulary choices, but this analysis focused on conversational speech. The researchers first had OpenAI's chatbot edit millions of pages of emails, academic papers, and news articles, asking the AI to 'polish' the text. That let them identify the words ChatGPT favoured.
Following that, they analysed over 360,000 YouTube videos and 771,000 podcasts from before and after ChatGPT's debut, comparing how often those chatbot-favoured words, such as delve, realm, and meticulous, appeared. In the 18 months since ChatGPT launched, use of those words has surged, the researchers say – not just in scripted videos and podcasts but in day-to-day conversation as well.
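The before-and-after comparison the researchers describe can be sketched in a few lines: count how often a fixed set of AI-favoured words appears per thousand words in each transcript, then average those rates separately for material recorded before and after the chatbot's release. This is an illustrative sketch, not the study's actual pipeline; the word list and the `compare` helper are assumptions for demonstration.

```python
# Illustrative sketch (assumed, not the study's code): rate of AI-favoured
# words per 1,000 words, compared across a before/after corpus split.
AI_FAVOURED = {"delve", "realm", "meticulous", "boast", "swift", "comprehend"}

def rate_per_thousand(transcript: str) -> float:
    # Normalise crudely: lowercase and strip surrounding punctuation.
    words = [w.strip(".,!?;:'\"()").lower() for w in transcript.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in AI_FAVOURED)
    return 1000.0 * hits / len(words)

def compare(corpus: list[tuple[str, bool]]) -> tuple[float, float]:
    """corpus holds (transcript, is_after_release) pairs.
    Returns mean rates (before, after)."""
    before = [rate_per_thousand(t) for t, after in corpus if not after]
    after = [rate_per_thousand(t) for t, after in corpus if after]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(before), mean(after)
```

A jump in the "after" mean relative to the "before" mean is the kind of 'measurable and abrupt increase' the paper reports, though the real analysis controls for far more than this sketch does.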
People, of course, change their speech patterns regularly. Words enter the national dialogue, and catchphrases from TV shows and movies are adopted, sometimes without the speaker even recognising it. But the increased use of AI-favoured language is notable for a few reasons.
The paper says the human parroting of machine-speak raises 'concerns over the erosion of linguistic and cultural diversity, and the risks of scalable manipulation.' And since AI trains on data from humans who increasingly use AI terms, the effect has the potential to snowball.
'Long-standing norms of idea exchange, authority, and social identity may also be altered, with direct implications for social dynamics,' the study says.
The increased use of AI-favoured words also underlines people's growing trust in AI, despite the technology's immaturity and its tendency to lie or hallucinate. 'It's natural for humans to imitate one another, but we don't imitate everyone around us equally,' study co-author Levin Brinkmann tells Scientific American. 'We're more likely to copy what someone else is doing if we perceive them as being knowledgeable or important.'
The study focused on ChatGPT, but the words favoured by that chatbot aren't necessarily the same standbys used by Google's Gemini or Anthropic's Claude. Linguists have discovered that different AI systems have distinct ways of expressing themselves.
ChatGPT, for instance, leans toward a more formal and academic way of communicating. Gemini is more conversational: when discussing diabetes, for example, it tends to say sugar where ChatGPT favours glucose.
(Grok was not included in the study, but, as its recent meltdown showed – when it made a series of antisemitic comments, something the company attributed to a problem with a code update – it heavily favours a flippant tone and wordplay.)
'Understanding how such AI-preferred patterns become woven into human cognition represents a new frontier for psycholinguistics and cognitive science,' the Max Planck study says. 'This measurable shift marks a precedent: machines trained on human culture are now generating cultural traits that humans adopt, effectively closing a cultural feedback loop.' – Inc./Tribune News Service
Related Articles


Malay Mail
Aifeex Accelerates Global Strategy with Seven AI Ecosystems to Lead the Future of AI Finance
KUALA LUMPUR, MALAYSIA - Media OutReach Newswire - 28 July 2025 - On July 22, 2025, Aifeex hosted the '2025 Global Artificial Intelligence Summit' in Kuala Lumpur, unveiling its strategic layout of seven innovative AI financial ecosystems. Attended by top experts and institutions, the event marked a key milestone in global AI finance. From high-frequency AI trading funds to smart DeFi systems and personalized AI agents, Aifeex showcased its strong technological capabilities and forward-looking vision. This move underscores Aifeex's commitment to shaping an AI-driven financial future and advancing global intelligent asset #Aifeex

The issuer is solely responsible for the content of this announcement.


Daily Express
Caution on AI in arbitration
Published on: Monday, July 28, 2025 By: Sisca Humphrey

[Photo caption: Moderator and panelists during the talk session.]

Kota Kinabalu: Legal experts are urging the arbitration community to tread carefully as generative artificial intelligence (AI) tools become increasingly integrated into dispute resolution processes.

The warnings were delivered during the final panel session at the Bicam Global ADR Horizon 2025 event, themed 'Opportunities and Risks of Generative AI in Arbitration.' The discussion, moderated by High Court of Malaya (Commercial Division) Judge Justice Atan Mustaffa Yussof Ahmad, brought together senior practitioners from across Asia to examine both the practical applications and legal grey areas surrounding the use of AI in arbitration.

'An arbitrator's responsibility is to apply judgment, not to rely on auto-generated text. We can benefit from technology, but we cannot delegate our core function,' he said.

Senior Partner at EPLegal Tony Nguyen emphasised how generative AI is already being used to streamline legal workflows, including drafting, document review and issue spotting. 'There's no doubt that these tools improve efficiency, especially for large-scale commercial disputes. But speed cannot come at the expense of accuracy or legal coherence,' he said.

Tony cautioned against uncritical use of AI-generated content in formal submissions, stressing that most tools are trained on datasets that do not reflect regional legal practices or current arbitration rules. 'We need to ask: is this tailored to the tribunal before us, or is it just generic output that sounds plausible?' he said.

Head of International Litigation and Arbitration at Skadden Asia Friven Yeoh warned of the risks of factual errors and fabricated references often produced by generative systems. 'What concerns me most is not the occasional mistake. It's that these tools present their results with a tone of certainty, which can mislead even experienced users,' Friven said.

He stressed the need for legal practitioners to retain full accountability for any materials produced with AI assistance. 'If you submit it, you're responsible for it. That doesn't change just because you used a machine to help draft it,' he said. He also urged the arbitration community to develop internal guidelines rather than wait for regulation to be imposed externally.

Dispute resolution lawyer from Singapore Chew Kei-Jin highlighted the limitations of AI in assessing human factors, a key element in many arbitrations. 'No machine can interpret tone, facial expressions or a witness's hesitation during cross-examination. These things influence credibility and outcomes. That's not something AI can replicate,' he said. He acknowledged that AI may be useful in certain backend functions, such as document translation and legal research, but insisted these benefits should not be confused with legal judgment. 'It's a tool, not an advisor, not an advocate and certainly not a decision-maker,' he said.

Partner at Kim & Chang in Seoul Matthew Christensen raised concerns over a lack of consistent guidance across jurisdictions. 'Some courts are asking whether AI was used to draft legal documents. Others haven't even considered the question,' he said. He maintained that in arbitration, where confidentiality is crucial, even limited use of AI warrants full disclosure, especially when sensitive data is involved. 'If arbitrators or counsel use AI, parties deserve to know. That transparency matters. And we should be careful not to expose client data to third-party systems without clear safeguards,' he said.

The panel closed with a reminder from Atan that Malaysia, and the wider Southeast Asian region, must engage in structured discussions to clarify expectations around the use of AI in legal practice. 'This is not about rejecting technology. It's about developing a responsible approach to its use in proceedings that affect rights, reputations and businesses,' he said.

While no clear consensus was reached on how to regulate AI in arbitration, all speakers agreed that the conversation must continue and that human oversight will remain essential for the foreseeable future.


New Straits Times
Content Forum becomes first Malaysian partner in Google's flagger programme
KUALA LUMPUR: Google has partnered with the Communications and Multimedia Content Forum of Malaysia (Content Forum) to strengthen online safety through its global Priority Flagger programme. The move makes the Content Forum the first Malaysian organisation to join the initiative, which allows select partners to identify and report harmful content directly to Google and YouTube via dedicated review channels.

Operating under the purview of the Malaysian Communications and Multimedia Commission (MCMC), the Content Forum will now assist in flagging content that potentially violates platform policies, with consideration for local cultural contexts.

Google Malaysia country director Farhan Qureshi said the collaboration reflects the importance of tapping into local knowledge to create a safer digital environment. "By working with organisations like the Content Forum, we are adding a crucial layer of local expertise, which deepens our ability to respond to harmful content with relevance and precision," he said.

The Priority Flagger programme enables trusted local agencies and non-governmental organisations (NGOs) to alert Google about problematic material across platforms such as Search, Maps, Play, Gmail, and YouTube. These reports receive priority review due to the flaggers' industry expertise. As a Priority Flagger, the Content Forum will also participate in policy discussions and feedback sessions with Google, helping shape platform governance.

Content Forum chief executive officer Mediha Mahmood said the onboarding marked a meaningful advancement in the country's approach to content regulation. "It allows us to move beyond dialogue into action, ensuring that harmful content is flagged and reviewed with the urgency it deserves. This collaboration reflects our continued role in setting industry standards, empowering communities, and contributing to a safer digital ecosystem through collective responsibility."
Content Forum is a self-regulatory industry body designated under the Communications and Multimedia Act 1998. It represents stakeholders ranging from broadcasters and advertisers to content creators, internet service providers, and civic groups.