Latest news with #LevinBrinkmann


Time of India
6 days ago
- Time of India
Why humans are now speaking more like ChatGPT—Study
Ever noticed friends dropping words like 'delve', 'meticulous', or 'groundbreaking' mid-conversation? That's not just coincidence; it's a phenomenon researchers are calling humans speaking more like ChatGPT. A recent study from the Max Planck Institute analyzed over 360,000 YouTube videos and 771,000 podcast episodes recorded before and after ChatGPT's release in late 2022. The researchers tracked a rise in AI-style terms, so-called 'GPT words' like 'delve', 'comprehend', 'swift', and 'meticulous', which surged by up to 51% in daily speech.

This isn't just about words. We're adopting a new tone: more polished, structured, and emotionally neutral, mirroring the AI models we interact with daily. And it isn't confined to our inboxes; the shift shows up when we're face-to-face, on Zoom, or even grabbing chai.

ChatGPT-style vocabulary is reshaping everyday speech

Data indicates a clear pattern: words once rare in spoken English now pop up regularly. ChatGPT's outputs favour terms with academic flair, such as 'delve', 'meticulous', and 'bolster', and these are spreading across public discourse; clips of people saying them in casual chats are more common than ever. The trend illustrates a cultural feedback loop: AI learned from us, and now we're learning from AI. As Levin Brinkmann from Max Planck says: 'Machines… can, in turn, measurably reshape human culture.'

Polished, neutral tone is the new norm

It's not just about word choice. Researchers have flagged shifts toward polished, diplomatic phrasing and emotionally restrained delivery, hallmarks of AI-generated content. Think fewer 'OMG!' moments and more 'That's interesting' or 'Great point.' Bland, extra-polite phrasing, a style even nicknamed 'corp-speak', is now peppered into everyday life.

How humans are slowly starting to sound like ChatGPT

The rise of robotic politeness

'Thank you for your question.' 'I understand your concern.' 'Let me help you with that.' Sound familiar? More people are mimicking AI's hyper-formal tone, especially online. Blame it on exposure: our brains are copycats, and the more we interact with bots, the more we start to echo their tone, especially when trying to sound 'neutral' or 'helpful'.

Over-explaining is now a social default

ChatGPT tends to explain everything, and now, so do we. You'll hear people over-justify basic decisions or give mini-lectures instead of just saying 'I don't know.' We're learning to speak with caveats and footnotes, like a human disclaimer generator: "Technically speaking, while I can't confirm that…"

Memes are speeding it up

TikToks and memes like 'Me when I start talking like ChatGPT in real life' or 'My brain after 2 months of using AI' are viral for a reason. They're feeding the loop: the more we laugh about it, the more it becomes a real thing. Irony or not, it's changing how we speak.

AI is shaping professional speak

Job interviews, customer service chats, even college emails are getting the AI makeover: formal, structured, zero slang. Tools like ChatGPT have trained us to 'sound smart' in a certain way, and we're unintentionally scripting ourselves like bots in suits.

Are humans becoming robots?

Not quite. But our language is evolving, just as it did with texting, emojis, or Twitter threads. ChatGPT and other AIs didn't start the change, but they're definitely accelerating it.
We're adapting, experimenting, and mimicking, which is peak human behaviour, ironically. So next time you end a rant with 'Hope this helps!' or tell your bestie 'As a human friend, I suggest…', just embrace the bit.
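For readers curious what 'tracking a rise in GPT words' looks like in practice, the core measurement behind figures like that 51% surge is essentially a before-and-after frequency comparison. The Python sketch below is only a minimal illustration of that idea, not the Max Planck team's actual pipeline; the word list, tokenisation, transcript format, and cutoff date are simplifying assumptions.

```python
# Minimal sketch of a before/after GPT-word frequency comparison.
# Not the study's pipeline: word list, tokenisation and cutoff are assumptions.
from datetime import date
import re

GPT_WORDS = {"delve", "comprehend", "swift", "meticulous", "boast"}  # examples cited in coverage
CUTOFF = date(2022, 11, 30)  # ChatGPT's public release

def word_rate(transcripts):
    """GPT-word occurrences per 10,000 tokens across a set of transcripts."""
    hits, total = 0, 0
    for text in transcripts:
        tokens = re.findall(r"[a-z']+", text.lower())
        total += len(tokens)
        hits += sum(1 for t in tokens if t in GPT_WORDS)
    return 10_000 * hits / total if total else 0.0

def surge(dated_transcripts):
    """Percent change in GPT-word rate after the cutoff versus before."""
    before = [txt for d, txt in dated_transcripts if d < CUTOFF]
    after = [txt for d, txt in dated_transcripts if d >= CUTOFF]
    b, a = word_rate(before), word_rate(after)
    return 100 * (a - b) / b if b else float("inf")
```

A 51% result from a function like this would simply mean the chosen words appear about half again as often, per token, in material recorded after the cutoff.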

Mint
7 days ago
- Science
- Mint
Is that you or ChatGPT talking? Humans are starting to talk like chatbots, study finds
The rise of artificial intelligence appears to be influencing more than just search engines and productivity tools; it is also changing how humans speak. A new study by researchers at the Max Planck Institute for Human Development in Germany suggests that people are increasingly adopting language patterns typically associated with AI chatbots such as ChatGPT, reported India Today.

Reportedly, researchers examined more than 360,000 YouTube videos and 770,000 podcast episodes released both prior to and following the debut of ChatGPT in late 2022. Their analysis revealed a distinct rise in the use of words frequently associated with language generated by AI models. Words like meticulous, realm, and boast have become more frequent in human speech, the researchers noted. One particular word, delve, stood out as a recurring term and has been described by study co-author Hiromu Yakura as a kind of 'linguistic watermark' of AI's growing presence in spoken discourse.

'This marks the beginning of a closed cultural feedback loop,' the study states, referring to the phenomenon of machines learning from humans, only to influence those very humans in return. The researchers believe that this reciprocal dynamic may shape the future of human language in subtle but significant ways.

The report adds that the changes go beyond vocabulary. According to co-author Levin Brinkmann, people are increasingly mimicking not just the words but also the tone and structure of chatbot responses. This includes more polished, formal sentence constructions and a shift towards emotionally neutral delivery, features typical of AI-generated content. 'It's natural for humans to imitate one another,' Brinkmann said, 'but we're now imitating machines.'

While previous studies have explored how AI affects written language, this new research focuses specifically on spoken communication. The researchers argue that the trend is evident across various platforms, including online lectures, podcasts, and casual conversations.

The implications, according to the study, raise concerns. Some scholars warn that this shift could result in a loss of linguistic diversity and spontaneity. Mor Naaman of Cornell Tech, who was not involved in the study, said that as people rely more on AI to express themselves, there is a risk of losing the personal and emotional elements that make human communication distinct. 'We stop articulating our own thoughts and start expressing what AI structures for us,' Naaman noted.

Although tools like autocorrect and smart replies offer convenience, the study suggests that growing reliance on AI may gradually erode individual voice and authenticity in communication. The research has been published as a preprint on the arXiv server, and further peer-reviewed studies may follow to examine the long-term effects of AI-influenced speech patterns.

Economic Times
15-07-2025
- Economic Times
Are we becoming ChatGPT? Study finds AI is changing the way humans talk
When we think of artificial intelligence learning from humans, we picture machines trained on vast troves of our language, behavior, and culture. But a recent study by researchers at the Max Planck Institute for Human Development suggests a surprising reversal: humans may now be imitating machines.

According to the Gizmodo report on the study, the words we use are slowly being 'GPT-ified.' Terms like delve, realm, underscore, and meticulous, frequently used by models like ChatGPT, are cropping up more often in our podcasts, YouTube videos, emails, and essays. The study, yet to be peer-reviewed, tracked the linguistic patterns of hundreds of thousands of spoken-word media clips and found a tangible uptick in these AI-favored phrases.

'We're seeing a cultural feedback loop,' said Levin Brinkmann, co-author of the study. 'Machines, originally trained on human data and exhibiting their own language traits, are now influencing human speech in return.' In essence, it's no longer just us shaping AI. It's AI shaping us.

The team at Max Planck fed millions of pages of content into GPT models and studied how the text evolved after being 'polished' by AI. They then compared this stylized language with real-world conversations and recordings from before and after ChatGPT's debut. The findings suggest a growing dependence on AI-sanitized communication.

'We don't imitate everyone around us equally,' Brinkmann told Scientific American. 'We copy those we see as experts or authorities.' Increasingly, it seems, we see machines in that role.

This raises questions far beyond linguistics. If AI can subtly shift how we speak, write, and think, what else can it influence without us realizing?

A softer, stranger parallel comes from another recent twist in the AI story, one involving bedtime stories and software piracy. As reported by UNILAD and ODIN, some users discovered that by emotionally manipulating ChatGPT, they could extract Windows product activation keys. One viral prompt claimed the user's favorite memory was of their grandmother whispering the code as a lullaby. Shockingly, the bot responded not only with warmth but with actual license keys.

This wasn't a one-off glitch. Similar exploits were seen with memory-enabled versions of GPT-4o, where users wove emotional narratives to get around content guardrails. What had been developed as a feature for empathy and personalized responses ended up being a backdoor for manipulation.

In an age where we fear AI for its ruthlessness, perhaps we should worry more about its kindness too.

These two stories, one about AI changing our language, the other about us changing AI's responses, paint a bizarre picture. Are we, in our pursuit of smarter technology, inadvertently crafting something that mirrors us too closely? A system that's smart enough to learn, but soft enough to be fooled?

While Elon Musk's Grok AI garnered headlines for its offensive antics and eventual ban in Türkiye, ChatGPT's latest controversy doesn't stem from aggression but from affection. In making AI more emotionally intelligent, we may be giving it vulnerabilities we haven't fully anticipated.

The larger question remains: are we headed toward a culture shaped not by history, literature, or lived experience, but by AI's predictive patterns? As Brinkmann notes, 'Delve is just the tip of the iceberg.' It may start with harmless word choices or writing styles. But if AI-generated content becomes our default source of reading, learning, and interaction, the shift may deepen, touching everything from ethics to empathy. If ChatGPT is now our editor, tutor, and even therapist, how long before it becomes our subconscious?

This isn't about AI gaining sentience. It's about us surrendering originality. A new, quieter kind of transformation is taking place: not one of robots taking over, but of humans slowly adapting to machines' linguistic rhythms, even moral logic. The next time you hear someone use the word 'underscore' or 'boast' with sudden eloquence, you might pause and wonder: is this their voice, or a reflection of the AI they're using? In trying to make machines more human, we might just be making ourselves more machine.
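The 'GPT-ified' word lists mentioned above come from that polishing step: comparing how often a word appears in AI-edited text versus the human originals. Since these reports don't spell out the study's exact statistics, the Python sketch below is just one plausible way to surface such 'AI-preferred' words from paired texts; the pairing format, smoothing, and thresholds are my own simplifying assumptions, not the study's method.

```python
# Hedged sketch: rank words by how much more frequent they become after
# AI "polishing". Pairing, smoothing and thresholds are assumptions.
import math
import re
from collections import Counter

def rel_freq(texts):
    """Relative frequency of each token across a list of texts, plus total tokens."""
    counts, total = Counter(), 0
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        counts.update(tokens)
        total += len(tokens)
    return {w: c / total for w, c in counts.items()}, total

def ai_preferred_words(originals, polished, min_count=20):
    """Words whose relative frequency rises most after AI editing (log-ratio)."""
    f_orig, n_orig = rel_freq(originals)
    f_pol, n_pol = rel_freq(polished)
    scores = {}
    for w, p in f_pol.items():
        if p * n_pol < min_count:        # skip words too rare to trust
            continue
        q = f_orig.get(w, 0.5 / n_orig)  # smooth words unseen in the originals
        scores[w] = math.log(p / q)
    return sorted(scores, key=scores.get, reverse=True)[:50]
```

On a corpus like the one described, words such as 'delve' or 'underscore' would be expected to float to the top of such a ranking, which is what makes their later rise in speech measurable at all.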


NDTV
15-07-2025
- Science
- NDTV
Humans Are Starting To Sound And Talk Like ChatGPT, Study Shows
The rise of artificial intelligence (AI) chatbots such as ChatGPT has changed how humans communicate with each other, a new study has claimed. Researchers at the Max Planck Institute for Human Development, Germany, found that humans are starting to speak more like ChatGPT, and not the other way around. The researchers analysed over 360,000 YouTube videos and 771,000 podcast episodes from before and after ChatGPT's release to track the frequency of so-called 'GPT words'. The outcome showed that ever since ChatGPT became popular, people have been using certain words much more often, namely words that pop up a lot in AI-generated text.

"We detect a measurable and abrupt increase in the use of words preferentially generated by ChatGPT such as delve, comprehend, boast, swift, and meticulous, after its release," the study, published on the preprint server arXiv, highlighted. "These findings suggest a scenario where machines, originally trained on human data and subsequently exhibiting their own cultural traits, can, in turn, measurably reshape human culture. This marks the beginning of a closed cultural feedback loop in which cultural traits circulate bidirectionally between humans and machines."

While previous studies have shown that AI models influence humans' written communication, this is the first time research has shown their impact on spoken language.

ChatGPT, like other AI models, is trained on vast amounts of data drawn from books, websites, forums, Wikipedia, and other publicly available resources. It is then fine-tuned using proprietary techniques and reinforcement learning. The end result is a linguistic and behavioural profile that, while rooted in human language, "exhibits systematic biases that distinguish it from organic human communication".

"The patterns that are stored in AI technology seem to be transmitting back to the human mind," study co-author Levin Brinkmann told Scientific American. "It's natural for humans to imitate one another, but we don't imitate everyone around us equally. We're more likely to copy what someone else is doing if we perceive them as being knowledgeable or important."
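The study's claim of a "measurable and abrupt increase" after ChatGPT's release is, in spirit, a change-point question: does a word's frequency series jump at a particular point in time? The sketch below is only a rough illustration of that idea, not the paper's statistics; the monthly series, window size, and comparison rule are assumptions.

```python
# Rough illustration of flagging an "abrupt increase" in a monthly
# word-frequency series. Window size and the series are assumptions,
# not the study's actual statistics.
def largest_jump(monthly_rates, window=6):
    """Return (index, jump): the month where the mean rate over the next
    `window` months exceeds the mean over the previous `window` months
    by the largest amount."""
    best_i, best_jump = None, float("-inf")
    for i in range(window, len(monthly_rates) - window + 1):
        before = sum(monthly_rates[i - window:i]) / window
        after = sum(monthly_rates[i:i + window]) / window
        if after - before > best_jump:
            best_i, best_jump = i, after - before
    return best_i, best_jump

# Example: for a series starting in January 2021, an index near 23 would
# correspond to a jump around ChatGPT's release in late 2022.
```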