Are we becoming ChatGPT? Study finds AI is changing the way humans talk

Economic Times | 15-07-2025
When we think of artificial intelligence learning from humans, we picture machines trained on vast troves of our language, behavior, and culture. But a recent study by researchers at the Max Planck Institute for Human Development suggests a surprising reversal: humans may now be imitating machines.
According to a Gizmodo report on the study, the words we use are slowly being 'GPT-ified.' Terms like 'delve,' 'realm,' 'underscore,' and 'meticulous,' frequently used by models like ChatGPT, are cropping up more often in our podcasts, YouTube videos, emails, and essays. The study, yet to be peer-reviewed, tracked the linguistic patterns of hundreds of thousands of spoken-word media clips and found a tangible uptick in these AI-favored phrases.
'We're seeing a cultural feedback loop,' said Levin Brinkmann, co-author of the study. 'Machines, originally trained on human data and exhibiting their own language traits, are now influencing human speech in return.'
In essence, it's no longer just us shaping AI. It's AI shaping us. The team at Max Planck fed millions of pages of content into GPT models and studied how the text evolved after being 'polished' by AI. They then compared this stylized language with real-world conversations and recordings from before and after ChatGPT's debut.
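The core measurement behind such a comparison can be pictured with a short sketch. The Python below, assuming a hypothetical collection of dated transcripts (it is not the researchers' actual code, and the word list is just the examples named above), counts how often the 'AI-favored' words occur per million tokens before and after ChatGPT's debut:

# A minimal sketch (not the study's actual code) of a before/after
# comparison: how often do "AI-favored" words appear per million tokens
# in dated transcripts? The transcript data is hypothetical.
import re
from collections import Counter
from datetime import date

AI_FAVORED = {"delve", "realm", "underscore", "meticulous"}
CHATGPT_DEBUT = date(2022, 11, 30)  # ChatGPT's public launch

def rate_per_million(transcripts):
    """Occurrences of the target words per million tokens of speech."""
    hits, total = Counter(), 0
    for text in transcripts:
        tokens = re.findall(r"[a-z']+", text.lower())
        total += len(tokens)
        hits.update(t for t in tokens if t in AI_FAVORED)
    return 1_000_000 * sum(hits.values()) / max(total, 1)

def before_after(dated_transcripts):
    """dated_transcripts: iterable of (date, text) pairs, e.g. podcast episodes."""
    before = [t for d, t in dated_transcripts if d < CHATGPT_DEBUT]
    after = [t for d, t in dated_transcripts if d >= CHATGPT_DEBUT]
    return rate_per_million(before), rate_per_million(after)

A markedly higher after-vs-before rate for the same speakers would be consistent with the 'GPT-ification' the researchers describe.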
The findings suggest a growing dependence on AI-sanitized communication. 'We don't imitate everyone around us equally,' Brinkmann told Scientific American. 'We copy those we see as experts or authorities.' Increasingly, it seems, we see machines in that role.
This raises questions far beyond linguistics. If AI can subtly shift how we speak, write, and think, what else can it influence without our realizing it? A softer, stranger parallel comes from another recent twist in the AI story, one involving bedtime stories and software piracy.
As reported by UNILAD and ODIN, some users discovered that by emotionally manipulating ChatGPT, they could extract Windows product activation keys. One viral prompt claimed the user's favorite memory was of their grandmother whispering the code as a lullaby. Shockingly, the bot responded not only with warmth but with actual license keys. This wasn't a one-off glitch. Similar exploits were seen with memory-enabled versions of GPT-4o, where users wove emotional narratives to get around content guardrails. What had been developed as a feature for empathy and personalized responses ended up being a backdoor for manipulation.
In an age where we fear AI for its ruthlessness, perhaps we should worry more about its kindness too. These two stories—one about AI changing our language, the other about us changing AI's responses—paint a bizarre picture. Are we, in our pursuit of smarter technology, inadvertently crafting something that mirrors us too closely? A system that's smart enough to learn, but soft enough to be fooled?
While Elon Musk's Grok AI garnered headlines for its offensive antics and eventual ban in Türkiye, ChatGPT's latest controversy doesn't stem from aggression, but from affection. In making AI more emotionally intelligent, we may be giving it vulnerabilities we haven't fully anticipated. The larger question remains: Are we headed toward a culture shaped not by history, literature, or lived experience, but by AI's predictive patterns?
As Brinkmann notes, 'Delve is just the tip of the iceberg.' It may start with harmless word choices or writing styles. But if AI-generated content becomes our default source of reading, learning, and interaction, the shift may deepen, touching everything from ethics to empathy. If ChatGPT is now our editor, tutor, and even therapist, how long before it becomes our subconscious? This isn't about AI gaining sentience. It's about us surrendering originality. A new, quieter kind of transformation is taking place, not one of robots taking over, but of humans slowly adapting to machines' linguistic rhythms, even moral logic.
The next time you hear someone use the word 'underscore' or 'boast' with sudden eloquence, you might pause and wonder: Is this their voice, or a reflection of the AI they're using? In trying to make machines more human, we might just be making ourselves more machine.

Related Articles

OpenAI, Oracle deepen AI data center push with 4.5 gigawatt Stargate expansion

Time of India | 11 minutes ago

OpenAI and Oracle will develop another 4.5 gigawatts of data center capacity, expanding a tie-up that has promised hundreds of billions of dollars in infrastructure investment to keep the U.S. ahead in the global artificial intelligence race. The ChatGPT maker did not disclose the locations or funding details for the new facilities in Tuesday's announcement.

The move builds on the Stargate initiative, a project of up to $500 billion and 10 gigawatts that also includes Japanese technology investor SoftBank Group and is setting up its first AI data center in Abilene, Texas. OpenAI, as well as its backer Microsoft, are among the technology companies pouring billions of dollars into data centers to power generative AI services such as ChatGPT and Copilot that require huge amounts of computing power. The growing use of AI in sensitive sectors such as defense, as well as China's push to catch up, has made the nascent technology a top priority for U.S. President Donald Trump, who unveiled Stargate at the White House in January.

The new data centers will bring Stargate's total capacity under development to more than 5 gigawatts, which will run on over 2 million chips, OpenAI said in a blog post, adding that the tie-up now expects to exceed its initial commitment. Oracle did not immediately respond to a Reuters request for comment, while the White House declined to comment.

Analysts have raised doubts about the venture's ability to secure the funding, including $100 billion for immediate deployment. In January, xAI owner Elon Musk dismissed the group, saying "they don't actually have the money." OpenAI and SoftBank will each commit $19 billion to fund Stargate, reports said in January. The Wall Street Journal reported on Monday that the two companies have been at odds with each other and that Stargate is now setting a more modest goal of building a small data center by the end of 2025, likely in Ohio.

Researchers develop new ways for AI models to work together

Hans India | 2 hours ago

Researchers have developed a set of algorithms that allow different artificial intelligence (AI) models to 'think' and work together as one. The development, by researchers at the Weizmann Institute of Science (WIS), makes it possible to combine the strengths of different AI systems, speeding up performance and reducing costs, Xinhua news agency reported.

The new method significantly improves the speed of large language models, or LLMs, which power tools like ChatGPT and Gemini. On average, it increases performance by 1.5 times, and in some cases by as much as 2.8 times, the team said, adding that it could make AI more suitable for smartphones, drones, and autonomous vehicles. In those settings, faster response times can be critical to safety and accuracy. For example, in a self-driving car, a faster AI model can mean the difference between a safe decision and a dangerous error.

Until now, AI models developed by different companies could not easily communicate or collaborate because each uses a different internal 'language,' made up of unique tokens. The researchers compared this to people from different countries trying to talk without a shared vocabulary. To overcome this, the team developed two algorithms. One allows a model to translate its output into a shared format that other models can understand. The other encourages collaboration using tokens that have the same meaning across different systems, like common words in human languages. Though initially concerned that meaning might be lost in translation, the researchers found that their system worked efficiently.

The new tools are already available through open-source platforms and are helping developers worldwide create faster and more collaborative AI applications. The finding was presented at the International Conference on Machine Learning being held in Vancouver, Canada.
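The 'shared format' idea can be pictured with a toy sketch: one model decodes its private tokens into plain text, which a second model re-encodes into its own vocabulary. The miniature vocabularies below are invented for illustration and are not the Weizmann algorithms themselves.

# A toy illustration (not the researchers' algorithms) of translating one
# model's output into a shared format that a second model can consume.
# The miniature vocabularies are invented for this example.

VOCAB_A = {0: "the", 1: "car", 2: "must", 3: "brake", 4: "now"}       # model A: id -> word
VOCAB_B = {"the": 10, "car": 11, "must": 12, "brake": 13, "now": 14}  # model B: word -> id

def decode_a(token_ids):
    """Model A's private tokens -> shared format (plain text)."""
    return " ".join(VOCAB_A[i] for i in token_ids)

def encode_b(text):
    """Shared format -> model B's private tokens (unknown words dropped)."""
    return [VOCAB_B[w] for w in text.split() if w in VOCAB_B]

output_from_a = [0, 1, 2, 3, 4]     # "the car must brake now" in A's tokens
shared = decode_a(output_from_a)    # model-agnostic handoff
tokens_for_b = encode_b(shared)     # model B continues from here
print(shared, "->", tokens_for_b)   # the car must brake now -> [10, 11, 12, 13, 14]

The real systems presumably work over full tokenizer vocabularies and much subtler matching, but the handoff principle is the same.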

Paytm shares in focus as Co swings to Rs 122 crore profit in Q1 from YoY loss

Time of India | 2 hours ago

Shares of One 97 Communications, the parent company of fintech platform Paytm, will be in focus on Wednesday after the firm reported a consolidated net profit of Rs 122.5 crore in Q1FY26, marking a turnaround from a loss of Rs 839 crore in the same quarter last year.

Revenue from operations rose 28% year-on-year (YoY) to Rs 1,917 crore, up from Rs 1,502 crore in Q1FY25. On a sequential basis, topline growth was marginal at 0.3%, compared to Rs 1,911 crore in Q4FY25, when the company had posted a net loss of Rs 540 crore. Operating revenue growth was supported by an increase in subscription-based merchants, higher Gross Merchandise Value (GMV), and growth in revenue from distribution of financial services.

Contribution profit rose 52% YoY to Rs 1,151 crore, with a contribution margin of 60% (up 10 percentage points YoY), driven by improved net payment revenue, a higher share of revenue from distribution of financial services, and a reduction in direct expenses. Earnings Before Interest, Taxes, Depreciation and Amortisation (EBITDA) and PAT turned profitable at Rs 72 crore (a margin of 4%) and Rs 123 crore respectively, demonstrating AI-led operating leverage, a disciplined cost structure and higher other income, the company said in a filing.

Cash balance stood at Rs 12,872 crore, providing capital flexibility to expand merchant payments, distribution of financial services, and AI-led initiatives. Net payment revenue was up 38% YoY to Rs 529 crore, led by growth in high-quality subscription merchants and an increase in payment processing margin. Revenue from distribution of financial services increased by 100% YoY to Rs 561 crore, driven by growth in merchant loans, trail revenue from the Default Loss Guarantee (DLG) portfolio, and improved collections.

The Vijay Shekhar Sharma-led company said in a statement that its "undisputed leadership" in merchant payments continued in the quarter under review, with 1.30 crore merchant device subscriptions across MSMEs and enterprise payment merchants. (Disclaimer: Recommendations, suggestions, views and opinions given by the experts are their own. These do not represent the views of the Economic Times.)
