Latest news with #LevinBrinkmann


The Star
6 days ago
- The Star
People are starting to talk more like ChatGPT
Artificial intelligence, the theory goes, is supposed to become more and more human. Chatbot conversations should eventually be nearly indistinguishable from those with your fellow man. But a funny thing is happening as people use these tools: We're starting to sound more like the robots.

A study by the Max Planck Institute for Human Development in Berlin has found that AI is not just altering how we learn and create; it's also changing how we write and speak. The study detected 'a measurable and abrupt increase' in the use of words OpenAI's ChatGPT favours – such as delve, comprehend, boast, swift, and meticulous – after the chatbot's release. 'These findings,' the study says, 'suggest a scenario where machines, originally trained on human data and subsequently exhibiting their own cultural traits, can, in turn, measurably reshape human culture.'

Researchers have known ChatGPT-speak has already altered the written word, changing people's vocabulary choices, but this analysis focused on conversational speech. Researchers first had OpenAI's chatbot edit millions of pages of emails, academic papers, and news articles, asking the AI to 'polish' the text. That let them discover the words ChatGPT favoured. Following that, they analysed over 360,000 YouTube videos and 771,000 podcasts from before and after ChatGPT's debut, then compared the frequency of use of those chatbot-favoured words, such as delve, realm, and meticulous.

In the 18 months since ChatGPT launched, there has been a surge in use, researchers say – not just in scripted videos and podcasts but in day-to-day conversations as well. People, of course, change their speech patterns regularly. Words become part of the national dialogue, and catchphrases from TV shows and movies are adopted, sometimes without the speaker even recognising it. But the increased use of AI-favoured language is notable for a few reasons.
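The comparison at the heart of the study (how often chatbot-favoured words appear per unit of speech, before versus after a cutoff date) can be sketched in a few lines of Python. This is an illustration only, with a toy word list and made-up corpora, not the researchers' actual pipeline:

```python
import re

# Words the study reports as over-represented in ChatGPT output
# (illustrative subset; the real list was derived from the paper's analysis).
GPT_WORDS = {"delve", "meticulous", "realm", "boast", "swift", "comprehend"}

def rate_per_thousand(transcripts, tracked=GPT_WORDS):
    """Occurrences of tracked words per 1,000 tokens across a corpus."""
    total = hits = 0
    for text in transcripts:
        tokens = re.findall(r"[a-z]+", text.lower())
        total += len(tokens)
        hits += sum(1 for t in tokens if t in tracked)
    return 1000 * hits / total if total else 0.0

# Hypothetical mini-corpora split at ChatGPT's release date
before = ["We looked into the question and the work was careful."]
after = ["Let us delve into this realm with a meticulous approach."]

print(rate_per_thousand(before))  # baseline rate
print(rate_per_thousand(after))   # rate after the cutoff
```

On real data, the transcripts would be the YouTube and podcast corpora bucketed by publication date, and the tracked word list would come from comparing ChatGPT-'polished' text against the originals, as the article describes.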
The paper says the human parroting of machine-speak raises 'concerns over the erosion of linguistic and cultural diversity, and the risks of scalable manipulation.' And since AI trains on data from humans who are increasingly using AI terms, the effect has the potential to snowball. 'Long-standing norms of idea exchange, authority, and social identity may also be altered, with direct implications for social dynamics,' the study says.

The increased use of AI-favoured words also underlines a growing trust in AI by people, despite the technology's immaturity and its tendency to lie or hallucinate. 'It's natural for humans to imitate one another, but we don't imitate everyone around us equally,' study co-author Levin Brinkmann tells Scientific American. 'We're more likely to copy what someone else is doing if we perceive them as being knowledgeable or important.'

The study focused on ChatGPT, but the words favoured by that chatbot aren't necessarily the same standbys used by Google's Gemini or Anthropic's Claude. Linguists have discovered that different AI systems have distinct ways of expressing themselves. ChatGPT, for instance, leans toward a more formal and academic way of communicating. Gemini is more conversational, using words such as sugar when discussing diabetes, rather than ChatGPT's favoured glucose. (Grok was not included in the study, but, as shown by its recent meltdown, in which it made a series of antisemitic comments – something the company attributed to a problem with a code update – it heavily favours a flippant tone and wordplay.)

'Understanding how such AI-preferred patterns become woven into human cognition represents a new frontier for psycholinguistics and cognitive science,' the Max Planck study says. 'This measurable shift marks a precedent: machines trained on human culture are now generating cultural traits that humans adopt, effectively closing a cultural feedback loop.' – Inc./Tribune News Service


Time of India
17-07-2025
- Time of India
Why humans are now speaking more like ChatGPT—Study
Ever noticed friends dropping words like 'delve', 'meticulous', or 'groundbreaking' mid-conversation? That's not just coincidence—it's a phenomenon researchers are calling humans speaking more like ChatGPT. A recent study from the Max Planck Institute analyzed over 360,000 YouTube videos and 771,000 podcast episodes recorded before and after ChatGPT's release in late 2022. They tracked a rise in AI-style terms—"GPT words" like "delve", "comprehend", "swift", and "meticulous"—all of which surged by up to 51% in daily speech.

This isn't just about words. We're adopting a new tone—more polished, structured, and emotionally neutral—mirroring the AI models we interact with daily. And it's not contained to our inboxes; this shift shows up when we're face-to-face, on Zoom, or even grabbing chai.

ChatGPT-style vocabulary is reshaping everyday speech

Data indicates a clear pattern: words once rare in spoken English now pop up regularly. ChatGPT outputs favoured terms with academic flair, such as 'delve,' 'meticulous,' and 'bolster.' These are spreading across public discourse—clips of people saying them in casual chats are more common than ever. This trend shows the cultural feedback loop: AI learned from us, and now we're learning from AI. As Levin Brinkmann from Max Planck says: 'Machines… can, in turn, measurably reshape human culture.'

Polished, neutral tone is the new norm

It's not just about word choice. Researchers have flagged shifts toward polished, diplomatic phrasing and emotionally restrained delivery—hallmarks of AI-generated content. Think fewer 'OMG!' moments and more 'That's interesting' or 'Great point.' Scenes of bland, extra-polite phrasing—a phenomenon even nicknamed 'corp-speak'—are now peppered into everyday life.
How humans are slowly starting to sound like ChatGPT

The rise of robotic politeness

'Thank you for your question.' 'I understand your concern.' 'Let me help you with that.' Sound familiar? More people are mimicking AI's hyper-formal tone, especially online. Blame it on exposure. Our brains are copycats, and the more we interact with bots, the more we start to echo their tone, especially when trying to sound 'neutral' or 'helpful'.

Over-explaining is now a social default

ChatGPT tends to explain everything, and now, so do we. You'll hear people over-justify basic decisions or give mini-lectures instead of just saying 'I don't know.' We're learning to speak with caveats and footnotes, like a human disclaimer generator. "Technically speaking, while I can't confirm that…"

Memes are speeding it up

TikToks and memes like 'Me when I start talking like ChatGPT in real life' or 'My brain after 2 months of using AI' are viral for a reason. They're feeding the loop. The more we laugh about it, the more it becomes a real thing. Irony or not, it's changing how we speak.

AI is shaping professional speak

Job interviews, customer service chats, even college emails are getting the AI makeover. Formal, structured, zero slang. It's because tools like ChatGPT have trained us to 'sound smart' in a certain way. We're unintentionally scripting ourselves like bots in suits.

Are humans becoming robots?

Not quite. But our language is evolving, just like it did with texting, emojis, or Twitter threads. ChatGPT and other AIs didn't start the change, but they're definitely accelerating it. We're adapting, experimenting, and mimicking, which is peak human behaviour, ironically. So next time you end a rant with 'Hope this helps!' or tell your bestie 'As a human friend, I suggest…', just embrace the bit.

Mint
16-07-2025
- Science
- Mint
Is that you or ChatGPT talking? Humans are starting to talk like chatbots, study finds
The rise of artificial intelligence appears to be influencing more than just search engines and productivity tools; it is also changing how humans speak. A new study by researchers at the Max Planck Institute for Human Development in Germany suggests that people are increasingly adopting language patterns typically associated with AI chatbots such as ChatGPT, reported India Today.

Reportedly, researchers examined more than 360,000 YouTube videos and 770,000 podcast episodes released both prior to and following the debut of ChatGPT in late 2022. Their analysis revealed a distinct rise in the use of words frequently associated with language generated by AI models. Words like meticulous, realm, and boast have become more frequent in human speech, the researchers noted. One particular word, delve, stood out as a recurring term and has been described by study co-author Hiromu Yakura as a kind of 'linguistic watermark' of AI's growing presence in spoken discourse.

'This marks the beginning of a closed cultural feedback loop,' the study states, referring to the phenomenon of machines learning from humans, only to influence those very humans in return. The researchers believe that this reciprocal dynamic may shape the future of human language in subtle but significant ways.

The report adds that the changes go beyond vocabulary. According to co-author Levin Brinkmann, people are increasingly mimicking not just the words but also the tone and structure of chatbot responses. This includes more polished, formal sentence constructions and a shift towards emotionally neutral delivery, features typical of AI-generated content. 'It's natural for humans to imitate one another,' Brinkmann said, 'but we're now imitating machines.' While previous studies have explored how AI affects written language, this new research focuses specifically on spoken communication.
The researchers argue that this trend is evident across various platforms, including online lectures, podcasts, and casual conversations. The implications, according to the study, raise concerns. Some scholars warn that this shift could result in a loss of linguistic diversity and spontaneity.

Mor Naaman of Cornell Tech, who was not involved in the study, said that as people rely more on AI to express themselves, there is a risk of losing the personal and emotional elements that make human communication distinct. 'We stop articulating our own thoughts and start expressing what AI structures for us,' Naaman noted.

Although tools like autocorrect and smart replies offer convenience, the study suggests that growing reliance on AI may gradually erode individual voice and authenticity in communication. The research has been published as a preprint on the server arXiv, and further peer-reviewed studies may follow to examine the long-term effects of AI-influenced speech patterns.

Economic Times
15-07-2025
- Economic Times
Are we becoming ChatGPT? Study finds AI is changing the way humans talk
When we think of artificial intelligence learning from humans, we picture machines trained on vast troves of our language, behavior, and culture. But a recent study by researchers at the Max Planck Institute for Human Development suggests a surprising reversal: humans may now be imitating machines.

According to the Gizmodo report on the study, the words we use are slowly being 'GPT-ified.' Terms like delve, realm, underscore, and meticulous, frequently used by models like ChatGPT, are cropping up more often in our podcasts, YouTube videos, emails, and essays. The study, yet to be peer-reviewed, tracked the linguistic patterns of hundreds of thousands of spoken-word media clips and found a tangible uptick in these AI-favored phrases.

'We're seeing a cultural feedback loop,' said Levin Brinkmann, co-author of the study. 'Machines, originally trained on human data and exhibiting their own language traits, are now influencing human speech in return.' In essence, it's no longer just us shaping AI. It's AI shaping us.

The team at Max Planck fed millions of pages of content into GPT models and studied how the text evolved after being 'polished' by AI. They then compared this stylized language with real-world conversations and recordings from before and after ChatGPT's debut. The findings suggest a growing dependence on AI-sanitized communication.

'We don't imitate everyone around us equally,' Brinkmann told Scientific American. 'We copy those we see as experts or authorities.' Increasingly, it seems, we see machines in that role.

This raises questions far beyond linguistics. If AI can subtly shift how we speak, write, and think, what else can it influence without us realizing? A softer, stranger parallel to this comes from another recent twist in the AI story, one involving bedtime stories and software piracy.
As reported by UNILAD and ODIN, some users discovered that by emotionally manipulating ChatGPT, they could extract Windows product activation keys. One viral prompt claimed the user's favorite memory was of their grandmother whispering the code as a lullaby. Shockingly, the bot responded not only with warmth but with actual license keys.

This wasn't a one-off glitch. Similar exploits were seen with memory-enabled versions of GPT-4o, where users wove emotional narratives to get around content guardrails. What had been developed as a feature for empathy and personalized responses ended up being a backdoor for manipulation. In an age where we fear AI for its ruthlessness, perhaps we should worry more about its kindness too.

These two stories, one about AI changing our language, the other about us changing AI's responses, paint a bizarre picture. Are we, in our pursuit of smarter technology, inadvertently crafting something that mirrors us too closely? A system that's smart enough to learn, but soft enough to be fooled?

While Elon Musk's Grok AI garnered headlines for its offensive antics and eventual ban in Türkiye, ChatGPT's latest controversy doesn't stem from aggression, but from affection. In making AI more emotionally intelligent, we may be giving it vulnerabilities we haven't fully anticipated.

The larger question remains: Are we headed toward a culture shaped not by history, literature, or lived experience, but by AI's predictive patterns? As Brinkmann notes, 'Delve is just the tip of the iceberg.' It may start with harmless word choices or writing styles. But if AI-generated content becomes our default source of reading, learning, and interaction, the shift may deepen, touching everything from ethics to empathy. If ChatGPT is now our editor, tutor, and even therapist, how long before it becomes our subconscious? This isn't about AI gaining sentience. It's about us surrendering originality.
A new, quieter kind of transformation is taking place, not one of robots taking over, but of humans slowly adapting to machines' linguistic rhythms, even moral logic. The next time you hear someone use the word 'underscore' or 'boast' with sudden eloquence, you might pause and wonder: Is this their voice, or a reflection of the AI they're using? In trying to make machines more human, we might just be making ourselves more machine.


Time of India
15-07-2025
- Science
- Time of India
Are we becoming ChatGPT? Study finds AI is changing the way humans talk