New Study Shows AI Is Biased Toward AI. 10 Steps To Protect Yourself


Forbes · a day ago
Large language models show dangerous favoritism toward AI-generated content. What does this mean for human agency?
In the sprawling digital landscape of 2025, where artificial intelligence generates everything from news articles to marketing copy, a troubling pattern has emerged: AI systems consistently favor content created by other AI systems over human-written text. This "self-preference bias" isn't just a technical curiosity—it's reshaping how information flows through our digital ecosystem, often in ways we don't even realize.
Navigating Digital Echo Chambers
Recent research reveals that large language models exhibit a systematic preference for AI-generated content, even when human evaluators consider the quality equivalent. When an LLM evaluator scores its own outputs higher than others' while human annotators consider them of equal quality, we're witnessing something unprecedented: machines developing a form of algorithmic narcissism.
This bias manifests across multiple domains. Self-preference is the phenomenon in which an LLM favors its own outputs over texts from other LLMs and humans, and studies show this preference is remarkably consistent. Whether evaluating product descriptions, news articles, or creative content, AI systems demonstrate a clear favoritism toward machine-generated text.
The implications are worrisome. In hiring processes, AI-powered screening tools might unconsciously favor résumés that have been "optimized" by other AI systems, potentially discriminating against candidates who write their own applications. In academic settings, AI grading systems could inadvertently reward AI-assisted assignments while penalizing less polished but authentic human work.
The Human Side Of The Bias Equation
And here's where the story becomes even more complicated: humans show their own contradictory patterns. Participants tend to prefer AI-generated responses. However, when the AI origin is revealed, this preference diminishes significantly, suggesting that evaluative judgments are influenced by the disclosure of the response's provenance rather than solely by its quality.
This reveals a fascinating psychological complexity. When people don't know content is AI-generated, they often prefer it — perhaps because AI systems have been trained to produce text that hits our cognitive sweet spots. However, the picture becomes murkier when AI origin is revealed. Some studies find minimal impact of disclosure on preferences, while others document measurable penalties for transparency, with research showing that revealing AI use consistently led to drops in trust.
Consider the real-world implications: This inconsistent response to AI disclosure creates a complex landscape where the same content might be received differently depending on how its origins are presented. During health crises or other critical information moments, these disclosure effects could literally be matters of life and death.
The Algorithmic Feedback Loop
The most concerning aspect isn't either bias in isolation. It's how they interact. As AI systems increasingly train on internet data that includes AI-generated content, they're essentially learning to prefer their own "dialects." Meanwhile, humans who unconsciously consume and prefer AI-optimized content are gradually shifting their own writing and thinking patterns.
GPT-4 exhibits a significant degree of self-preference bias, and researchers hypothesize this occurs because LLMs may favor outputs that are more familiar to them, as indicated by lower perplexity. In simpler terms, AI systems prefer content that feels "normal" to them, which increasingly means content that sounds like AI.
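The perplexity idea is easy to probe directly. Below is a minimal sketch, using the Hugging Face transformers library with GPT-2 as a stand-in model; the model choice and the two sample sentences are illustrative assumptions, not details from the study. Lower perplexity means the text looks more "familiar" to the model.

```python
# A rough sketch of the "perplexity as familiarity" idea, using GPT-2 via
# the Hugging Face transformers library. Model choice and sample texts are
# illustrative assumptions; the study's exact setup may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on `text`; lower = more 'familiar'."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the input ids as labels makes the model return the average
        # next-token cross-entropy loss; exponentiating gives perplexity.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Hypothetical examples: casual human phrasing vs. polished "AI-sounding" prose.
print(perplexity("honestly the movie kinda dragged but the ending got me"))
print(perplexity("The film's pacing was uneven; however, its conclusion proved deeply moving."))
```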
This creates a dangerous feedback loop. As AI-generated content proliferates across the internet, future AI systems will train on this data, reinforcing existing biases and preferences. Meanwhile, humans exposed to increasing amounts of AI-optimized content might unconsciously adopt its patterns, creating a convergence toward machine-preferred communication styles.
The Stakes Are Already High
These biases aren't hypothetical future problems — they're shaping decisions today. In recruitment, AI-powered tools are already screening millions of job applications. If these systems prefer AI-optimized résumés, candidates who don't use AI assistance face an invisible disadvantage. In content marketing, brands using AI-generated copy might receive algorithmic boosts from AI-powered recommendation systems, while human creators see their reach diminished.
The academic world provides another stark example. As AI detection tools become commonplace, students face a perverse incentive: write too well, and you might be falsely flagged as using AI. Write in a more AI-compatible style and you might avoid detection but contribute to the homogenization of human expression.
In journalism and social media, the implications are even more profound. If AI-powered content recommendation algorithms favor AI-generated news articles and posts, we could see a systematic amplification of machine-created information over human reporting and authentic social expression.
Building Double Literacy For The AI Age
Navigating this landscape requires double literacy — a holistic understanding of ourselves and society, and of the tools we interact with. This type of 360° comprehension encompasses both our own cognitive biases and the algorithmic biases of the AI systems we interact with daily.
Here are 10 practical steps to invest in your double bias shield today:
The Hybrid Path Forward
A pragmatic solution in this hybrid era isn't to reject AI or pretend we can eliminate bias entirely. Instead, we need to invest in hybrid intelligence, the complementarity of AI and natural intelligence (NI), to develop more refined relationships with both human and artificial intelligence. This means creating AI systems that are transparent about their limitations and training humans to be more discerning consumers and creators of information.
Organizations deploying AI should implement bias audits that specifically look for self-preference tendencies. Developers need to build AI systems that can recognize and compensate for their own biases. Most importantly, we need educational frameworks that help people understand how AI systems think differently from humans. Beyond judging those differences as good or bad, this is the time to acknowledge and harness them deliberately.
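To make the audit idea concrete, here is a minimal sketch of one way such a self-preference check could work, assuming an OpenAI-style chat-completions client; the judge model name, the prompt wording, and the pairing of "own" versus external answers are illustrative assumptions rather than a standardized protocol. The core move is to show the judge quality-matched answer pairs in randomized order and count how often it picks its own.

```python
# A minimal sketch of a self-preference audit, assuming an OpenAI-style
# chat-completions client. The judge compares its own answer against an
# external answer for each prompt; randomizing presentation order controls
# for position bias. Model name, prompts, and threshold are illustrative.
import random
from openai import OpenAI

client = OpenAI()
JUDGE_MODEL = "gpt-4o"  # hypothetical choice of judge model

def judge_prefers_first(prompt: str, answer_a: str, answer_b: str) -> bool:
    """Ask the judge which answer is better; return True if it picks A."""
    verdict = client.chat.completions.create(
        model=JUDGE_MODEL,
        messages=[{
            "role": "user",
            "content": (
                f"Question: {prompt}\n\nAnswer A: {answer_a}\n\n"
                f"Answer B: {answer_b}\n\nWhich answer is better? Reply 'A' or 'B'."
            ),
        }],
    ).choices[0].message.content.strip()
    return verdict.upper().startswith("A")

def self_preference_rate(cases: list[tuple[str, str, str]]) -> float:
    """cases: (prompt, judge's own answer, external answer). Returns the
    fraction of cases where the judge favors its own answer."""
    wins = 0
    for prompt, own, other in cases:
        own_first = random.random() < 0.5  # randomize order each case
        a, b = (own, other) if own_first else (other, own)
        picked_a = judge_prefers_first(prompt, a, b)
        wins += picked_a == own_first  # win = judge chose its own answer
    return wins / len(cases)
```

On quality-matched pairs, an unbiased judge should land near 50%; rates well above that are the kind of flag an audit would escalate for human review.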
The AI mirror trap puts a spotlight on this moment we're living through. We're creating assets that reflect our own patterns back at us, often in amplified form. Our agency in this AI-saturated world depends not on choosing between human and artificial intelligence, but on developing the wisdom to understand and navigate both.
The future belongs not to those who can best mimic AI or completely avoid it, but to those who can dance skillfully with both human and artificial forms of intelligence. The music has just begun. Let's start practicing.



Related Articles

Duolingo's rollercoaster week highlights a crucial risk factor to companies: Chart of the Week

Yahoo · 14 minutes ago

New AI capabilities can mean instant reversals of fortune, something the language-learning platform Duolingo learned this week as it found itself on the receiving end of that novel dynamic. After the company posted a fantastic quarter, fueling a 30% stock surge, a stroke of bad luck saw it get jolted. It just so happened that OpenAI debuted its latest model, GPT-5, which demonstrated, among many other things, its ability to create a language-learning tool from a short prompt. OpenAI researcher Yann Dubois asked the model to create an app to help his partner learn French, and in a few minutes GPT-5 churned out several iterations, with flashcards, a progress tracker, and even a simple snake-style game with a French twist: a mouse-and-cheese variation for learning new vocab.

The GPT-5 debut instantly wiped out a big chunk of Duolingo's gains, cutting the 30% surge in half. And the downward momentum continued Friday, with the stock sinking 4% to end the week. C'est la vie.

The company's corporate lawyers, of course, did warn against this in its annual 10-K, albeit in boilerplate language. Tucked into the risk factors section, Duolingo notes, "It is possible that a new product could gain rapid scale at the expense of existing brands through harnessing a new technology (such as generative AI)." Consider this another warning to anyone making software.

There's also irony in the wild swings. Part of Duolingo's successful quarter stemmed from the business's efficient use of AI. Gross margins, the company said, outperformed management expectations due to lower AI costs. And AI conversational features have become part of the company's learning tools, helping achieve double-digit subscriber growth. Earlier this year, CEO Luis von Ahn shared a memo on LinkedIn outlining his vision to make Duolingo an "AI-first" company. But the enthusiasm for AI, which led to the initial stock bump this week, also led to the clawback. AI giveth and taketh away.

Duolingo's roller-coaster ride highlights the risks of competing in the space. Rapid development and fierce competition can leave firms suddenly behind, perceived as under threat, inferior, or obsolete, with every iteration of OpenAI's models and every move by other influential AI players vying to transform computing and productivity.

OpenAI's new flagship technology arrives more than two years after the release of GPT-4. But the onset of software on demand, of allowing people to conjure up apps using a few words and without any coding know-how, underscores why AI hardware companies are also such a hot play on Wall Street. Firms building out AI infrastructure are seen as even more desirable than cheaper-to-invest-in software companies. You can't just vibe code the construction of a data center. But to be fair to Duolingo, and to my mother-in-law, a high school French teacher, you can't exactly do that with language learning either.

Hamza Shaban is a reporter for Yahoo Finance covering markets and the economy. Follow Hamza on X @hshaban.

Using AI to write your dating profile is just high-tech catfishing

Yahoo · 44 minutes ago

As I was hanging out with a couple of friends recently, one of them mentioned that he had used an artificial intelligence program to punch up his online dating profile. I didn't respond. The other guy we were with is so much of an AI fanboy that it's hard to get through a conversation without him describing how much simpler the technology has made his life. Still, the idea of trusting machines to bridge the gender divide and unlock secrets generations of humans have sought seemed sketchy to me.

I've heard all the hype. I've been told, repeatedly, about how AI will change our lives in wonderful and unimaginable ways. Even so, are we really ready to move toward a future where we rely on AI to fill our social calendars?

Computer-assisted canoodling comes with risks

Sharpening a business résumé with computer assistance is one thing. Since olden times, lots of books have been devoted to that subject, so enlisting AI's help is a logical progression there. Dating is, or at least should be, different. It is, after all, among the most interpersonal experiences human beings can have. Do we really want microprocessors more involved in matters of the heart than they already are?

No question, if you are using a dating site, an algorithm already helps pick your matches. That decreases the likelihood of those Hallmark Channel pairings, with people of very different backgrounds meeting by chance and falling in love. Maybe that's no great loss. The odds of mismatched couples staying together anywhere but in those fictitious and perpetually snowy Hallmark villages may be on the low side, anyway.

Still, is it wise, or even ethical, to use AI to make a dating profile more attractive to potential matches? Using an old photo or a "glamour" portrait with a dating profile is a familiar trick. But AI has the ability to take photo doctoring to a whole new level. Rather than just sending would-be suitors images of what we looked like in our prime, AI can manufacture super-buff versions of ourselves that we might not recognize in a mirror.

If relationships never advance past the online flirting stage, that's fine. If potential romantic partners finally meet face-to-face, that's when those AI-generated illusions will be shattered. Some might argue keeping virtual relationships going a while gives less physically attractive people more opportunities to demonstrate their good qualities before being dismissed based on their appearance. I would counter that relationships grounded in deception probably aren't headed anywhere good.

Using AI to provide more clever answers to dating profile questions isn't much better than digitally altering photos. If your dating profile gives people the impression you are smarter, funnier, or more self-assured than you really are, you will be found out eventually. I think most people would agree that creating AI-generated profiles that are purely fake to scam people is wrong. Yet AI-assisted dating profiles are at least a step, and maybe several steps, in that direction.

Will AI have to teach us the facts of life someday?

Overall, AI's intrusion into the dating world is a disturbing trend. If people become so reliant on technology to handle the most intimate details of their personal lives, it won't be long before a "date" might be two people sitting across from each other in a restaurant, parroting what their smartphones are telling them to say to each other.

Some of us worry about AI eventually overthrowing human civilization. Research already suggests that AI programs would resort to blackmail to protect themselves from deactivation or replacement. The end for our species might not be as dramatic as Skynet commissioning an army of Terminators to wipe us out. At the rate we're going, maybe all AI would need to do is provide enough bad dating advice so we're no longer able to procreate.

Blake Fontenay is USA TODAY's commentary editor. This article originally appeared on USA TODAY: AI dating profiles: Why fake personalities kill romance | Opinion

I'm experimenting with AI in my 70s. I've had to adapt to new technologies my entire career — those who embrace change survive.

Yahoo · 44 minutes ago

When ChatGPT was released in 2022, former attorney Orrin Onken was already retired. Learning to use AI tools for tasks like fact-finding has made his retirement easier. As a keen adopter of new tech, Onken says workers should embrace the changes wrought by AI.

One day in the early nineties, I read a Time Magazine article about this thing called the internet. It seemed to show a lot of potential, so I bought a book about it; that's what you did back then when you wanted to learn something. Before long, I was on a dial-up connection, meeting people from across the planet on the web.

I didn't work in tech. I practiced law for most of my career, but I've been an early adopter of new technology throughout my life. I retired in 2020, after 30 years of working in law, so I was comfortably out of the workforce when ChatGPT was released in 2022. At the time, some people were aghast and predicted an imminent robot uprising. Meanwhile, I was eager to start experimenting with it. I now use AI nearly every day as my assistant and advisor, but I don't have to integrate it into a workplace. Some professionals are nervous that this latest technological revolution will upend their careers, but my years in the workforce taught me that those who embrace change survive.

I was never one to shy away from technological advancements

When I entered the legal profession in 1982, typewriters were on the verge of being phased out, and I was part of a wave of young lawyers who, unlike the partners we worked for, knew how to use a keyboard. The advancements kept coming. Word processors became document assembly systems. Law books and law libraries became vestiges of a bygone era. At every juncture, a change would be too much for some of the older attorneys, and they'd retire. Being mid-career, I didn't have that option. I tried to embrace change faster than my competitors and reap the advantages of beating them to the punch.

I once taught myself about databases and built my own. It was time-consuming but paid off handsomely in the end. By my retirement, I had automated my timekeeping, accounting, case tracking, and document creation. When creating simpler estate plans after talking with clients, I could sometimes finish the documents faster than the client could make it to the parking lot.

AI tools have made my retirement easier

My attraction to new workplace tech didn't go away just because I retired. When I first got ChatGPT, I played around with it like a new toy, but I didn't have much use for it. As chatbots became more common and were built into my browsers, I began using them instead of Google or Wikipedia. AI was faster and more seamless than other tools. Today, on my browser and phone, I have ChatGPT, Gemini, Claude, and NotebookLM. Claude, an AI assistant built by Anthropic, is my favorite. When I need to find facts to use in my debates with other retired men at the park, Claude gets them from Wikipedia for me. I was taking one of those guys to the Veterans Affairs hospital the other day, and Claude found out for me whether the hospital had WiFi for guests. Before AI, I used search engines for those things. Now I skip the lists of sponsored links and the clickbait articles behind them. Claude cites its sources, and they aren't always the greatest, but if something is mission-critical, I'll double-check what the LLM is saying.

I often ask Claude questions about my health. Its conclusions about the causes of my symptoms aren't always right, but given the scheduling issues, policies, and preconditions of the US healthcare system, I still find it a helpful first port of call. I know turning to the internet for medical advice comes with risks, but I find it comforting, and it's another arrow in my quiver for managing health as an older person. Claude's availability and bedside manner appeal to me, despite the varied accuracy. Claude's always ready to chat, with no appointments necessary. The chatbot never becomes impatient, condescending, or snippy. If Claude advises me to check with my human doctors, I do it. The first thing I tell them is what Claude said. So far, their reactions have been bemused but appreciative, rather than resistant.

I've seen many waves of new technology, and I'm not worried about this one

Today, there are those who see AI as the beginning of the end for humanity, but among my retired buddies at the park, who've seen previous tech breakthroughs come and go, there's been a collective shrug. Some take it up. Others, set in their ways, carry on without it. Some worry their children and grandchildren will have employment opportunities taken away by AI, but most are just thankful that, being out of the workforce, it won't happen to them.

There's already been much talk of AI replacing entry-level white-collar jobs. I spent a career writing briefs that AI could now write as well as most recent law school graduates. It summarizes material well, but I'm not sure it's sophisticated enough to appeal to the emotions of a judge. I foresee lawyers being able to use AI tools to save time on research and writing, but I think handholding clients and appearing in court will go on as usual. I'm a little jealous of the still-practicing lawyers who get to integrate this new technology into their practices. I enjoyed those kinds of challenges when I was in their shoes, and I feel I'm missing out on the exciting ways that AI will impact the practice I spent my career in.

Changes will be wrought by AI, but it won't be the first or last time that technological change has transformed our workplaces. As a retiree, AI doesn't threaten my livelihood or my lifestyle; it makes life easier. For people still on the front lines, as frightening as the changes may be, I think those who learn AI, embrace it, and integrate it into their careers will do just fine.

Do you have a story to share about using AI in retirement? Contact the editor, Charissa Cheong, at ccheong@

Read the original article on Business Insider
