How AI Is Undermining Online Authenticity


Fast Company | 10-07-2025
In a prescient tweet, OpenAI CEO Sam Altman predicted that AI would become persuasive long before it became intelligent. A striking study conducted by researchers at the University of Zurich just proved him right.
In the study, researchers used AI to challenge Redditors' perspectives in the site's r/ChangeMyView subreddit, where users share an opinion on a topic and challenge others to present counterarguments in a civilized manner. Unbeknownst to users, the researchers used AI to produce arguments on everything from dangerous dog breeds to the housing crisis.
The AI-generated comments proved extremely effective at changing Redditors' minds. The university's ethics committee frowned upon the study, as it is generally unethical to subject people to experimentation without their knowledge. Reddit's legal team is reportedly pursuing legal action against the university.
Unfortunately, the Zurich researchers decided not to publish their full findings, but what we do know about the study points to glaring dangers in the online ecosystem—manipulation, misinformation, and a degradation of human connection.
The power of persuasion
The internet has become a weapon of mass deception.
In the AI era, the internet's persuasive power becomes even more potent. AI avatars posing as financial advisors, therapists, girlfriends, and spiritual mentors can become channels for ideological manipulation.
The University of Zurich study underscores this risk. If manipulation is unacceptable when researchers do it, why is it okay for tech giants to do it?
Large language models (LLMs) are only the latest product of algorithmically driven content. Algorithmically curated social media and streaming platforms have already proven manipulative:
Facebook experimented with manipulating users' moods—without their consent—through their newsfeeds as early as 2012.
The Rabbit Hole podcast shows how YouTube's algorithm created a pipeline for radicalizing young men.
Cambridge Analytica and Russiagate showed how social media influences elections at home and abroad.
TikTok's algorithm has been shown to create harmful echo chambers that sow division.
Foundational LLMs like Claude and ChatGPT function as a vast internet hive mind. The premise of these models is that they know more than you do, and their apparent inhumanness leads users to assume their outputs are unbiased.
Algorithmic creation of content is even more dangerous than algorithmic curation of content via the feed. This content speaks directly to you, coddles you, and champions and reinforces your viewpoint.
Look no further than Grok, the LLM produced by Elon Musk's company xAI. From the beginning, Musk was blatant about engineering Grok to support his worldview. Earlier this year, Grok came under scrutiny for casting doubt on the number of Jews killed in the Holocaust and for promoting the "white genocide" falsehood about South Africa.
Human vs. machine
Reddit users felt hostile toward the study because the AI responses were presented as human responses. It's an intrusion. The subreddit's rules protect and incentivize real human discussion, dictating that the view in question must be yours and that AI-generated posts must be disclosed.
Reddit is a microcosm of what the internet used to be: a constellation of niche interests and communities largely governing themselves, encouraging exploration. Through this digital meandering, a whole generation found likeminded cohorts and evolved with the help of those relationships.
Since the early 2010s, bots have taken over the internet. On social media, they are deployed en masse to manipulate public perception. For example, a group of bots in 2016 posed as Black Trump supporters, ostensibly to normalize Trumpism for minority voters. Bots played a pivotal role in Brexit, for another.
I believe it matters deeply that online interaction remains human and genuine. If covert, AI-powered content is unethical in research, its proliferation within social media platforms should send up a red flag, too.
The thirst for authenticity
The third ethical offense of the Zurich study: it's inauthentic.
The researchers used AI to advocate viewpoints they did not themselves hold. Why does this matter? Because the point of the internet is not to argue with robots all day.
If bots are arguing with bots over the merits of DEI, if students are using AI to write and teachers are using AI to grade, then, seriously, what are we doing?
I worry about the near-term consequences of outsourcing our thinking to LLMs. For now, the experience of most working adults lies in a pre-AI world, allowing us to employ AI judiciously (mostly, for now). But what happens when the workforce is full of adults who have never known anything but AI and who never had an unassisted thought?
LLMs can't rival the human mind in creativity, problem-solving, feeling, and ingenuity. LLMs are an echo of us. What do we become if we lose our original voice to cacophony?
The Zurich study treads on this holy human space. That's what makes it so distasteful, and, by extension, so impactful.
The bottom line
The reasons this study is scandalous are the same reasons it's worthwhile. It highlights what's already wrong with a bot-infested internet, and how much more wrong it could get with AI. Its trespasses bring the degradation of the online ecosystem into stark relief.
This degradation has been happening for over a decade—yet so incrementally that we haven't felt it. A predatory, manipulative internet is already our reality. It's the water we're swimming in, folks.
This study shows how murky the water has become, and how much worse it might get. I hope it will fuel meaningful legislation, or at least a thoughtful, broad-based personal opting out. In the absence of rules against AI bots, Big Tech is happy to cash in on their spread.

Related Articles

Palantir gets $10B Army contract to improve readiness with AI
Yahoo
Aug. 1 (UPI) -- Denver-based software firm Palantir Technologies secured a 10-year contract worth up to $10 billion to improve the U.S. Army's military readiness and efficiency with artificial intelligence. Palantir specializes in software that enables the use of AI to make real-time decisions, such as those the U.S. Army could face in the future. The contract "supports the Army's sustained capability growth and enhances its ability to meet operational warfighting needs with proven [and] commercially available AI technologies," according to an online announcement on Thursday.

"This enterprise agreement represents a pivotal step in the Army's commitment to modernizing our capabilities while being fiscally responsible," Army Chief Information Officer Leo Garciga said. The agreement will help the Army save money while improving its operational efficiency and military readiness through a "comprehensive framework for the Army's future software and data needs," according to the announcement. "By streamlining our procurement processes and leveraging enterprise-level discounts, we are not only enhancing our operational effectiveness but also maximizing our buying power," Garciga added.

The Army will consolidate 15 prime contracts and 60 related contracts into one enterprise agreement with Palantir, accelerating the delivery of proven commercial software to the Army's military units. "This streamlined approach reduces procurement timelines, ensuring soldiers have rapid access to cutting-edge data integration, analytics and AI tools," according to the Army. The contract also saves money by eliminating contract and pass-through fees. It enables the Army and other Defense Department agencies to pay up to $10 billion over 10 years for Palantir products, but it does not require the military to spend the full amount.
Palantir earlier this year provided the Army with two AI-powered systems as part of a $178 million contract, according to a CNBC news report. The Defense Department in May also increased an existing contract with Maven Smart Systems to $795 million to improve the military's AI capabilities.

SAP is acquiring SmartRecruiters
Yahoo
SAP announced Friday that it has reached an agreement to acquire recruiting software company SmartRecruiters. In a press release, the European software giant said that SmartRecruiters' "powerful, user-friendly interfaces and seamless workflows" will complement SAP's existing HR tools. Muhammad Alam, the SAP executive board member who leads product and engineering, said in a statement that with this acquisition, "Customers will be able to manage the entire candidate lifecycle — from sourcing and interviewing to onboarding and beyond — all in a single system to streamline the experience for recruiters, hiring managers and, in particular, candidates." The terms of the acquisition were not disclosed. The deal is expected to close in the fourth quarter of this year. SmartRecruiters' last announced round of funding was a $110 million Series E in 2021, which valued the company at $1.5 billion.

Google AI summary feature deals blow to link clicks and website traffic
Yahoo
Google's algorithm changes last year led to a plummet in search engine traffic for news websites and publishers, and even resulted in the bankruptcy of some independent publishers, including Turkey's Gazete Duvar. The technology giant is now dealing another blow to online publishers through its artificial intelligence summaries.

Google's AI Overview feature is the company's answer to artificial intelligence tools such as ChatGPT. It places short summaries generated by Gemini at the top of search results, with links to sources and a "Show more" option below. In other words, when a user types a query into Google, they can get an answer from the summary without clicking any of the links beneath it. This presents a serious risk, especially for websites that rely on Google Ads revenue and traditional search engine optimisation (SEO).

Traffic to popular websites dropped

The introduction of AI Overviews has particularly affected traffic to sites featuring content such as holiday guides, health tips and product reviews. According to Similarweb, search traffic to websites decreased by 55% between April 2022 and April 2025. A report published by the Wall Street Journal indicates that traffic to many well-known news sites around the world is declining. Traffic from organic search to HuffPost's desktop and mobile sites has more than halved in the last three years, while the Washington Post has seen a nearly identical decline. Business Insider CEO Barbara Peng laid off about 21% of the company's staff in May, citing "extreme traffic declines beyond our control". The share of traffic from organic search to the New York Times' desktop and mobile sites also fell to 36.5% in April 2025.
Nicholas Thompson, CEO of The Atlantic, predicts that traffic from Google will drop towards zero and says the company needs to improve its business model. "Google is changing from a search engine to an answer engine," Thompson said in an interview with the Wall Street Journal. Thompson and other industry leaders told the WSJ that they are trying to develop new strategies and are focused above all on building direct reader relationships.

Users don't click on links

Google executives argue that the company is committed to sending traffic to the web, and that people who click on links after seeing an AI Overview tend to spend more time on those sites. Survey research shows a different picture, however. According to a new study by the Pew Research Centre of 900 Google users in the US, only a small share click through to the source pages cited in searches that include an AI Overview summary; the rest are content with the short answer Google provides. Moreover, according to the study, the most frequently cited sources in both AI summaries and traditional search results are Wikipedia, YouTube (also a Google subsidiary) and Reddit; 15% of the sources in AI summaries come from these three sites. Referrals to government sites (those with a 'gov' extension) are more common in AI summaries than in traditional search results, while both AI summaries and traditional results are equally likely to refer to news sites, at 5%.

Worse still, 404 Media, a website known for specialised technology news, noticed that one of its stories about AI-assisted music production was not showing up in Google searches. This was because AI Overview summarised the content of the story but did not link to it. The site summarised the situation with the following statement: "The AI Overview ensures that information is presented in such a way that the source itself is never clicked on."
SEO loses its impact

The effects of AI Overviews on SEO are also striking. According to the Register, the latest data showed that the click-through rate of the top-ranked site in searches with an AI summary fell by an average of 34.5%. Ranking on the first page is no longer as valuable as it used to be.

AI often provides false information

There is another risk as well: the reliability of artificial intelligence. 404 Media published an article showing that one of the feature's responses was actually generated from another AI summary, which was in turn based on an AI source. The margin of error increases as information moves further from the original source. Experts describe this as "the vicious circle of information that leads to the collapse of artificial intelligence models themselves". When there are not enough sources of quality information, users are left with inaccurate and superficial AI-generated content.

The advertising industry continues to work for Google

The revenues of websites and Google rest on the following advertising cycle: websites give users of search engines such as Google free access to their content; Google directs users to those websites, where they see adverts alongside the content; and most websites make money from those adverts. According to the BBC, an estimated 68% of internet activity starts on search engines, and around 90% of searches take place on Google, which means websites rely heavily on Google to make money. The AI Overview mode could therefore destroy a business model that has existed for 23 years. For now, however, Google has suffered no loss from this change. Alphabet, Google's parent company, increased its revenues to a record level in the last quarter of 2024.
According to data released by the company, Google's total revenue increased by 14% compared with last year, reaching $96.4 billion. According to the Register, the bulk of that revenue still comes from advertising: exactly $54.2 billion. That's because Google now places adverts directly in or around AI Overview summaries. According to a study by SparkToro, in 2024 only 360 out of every 1,000 Google searches in the US led to sites that are not owned by Google or advertising on it. These rates are predicted to worsen with the rise of AI summaries.

'Desperation not demand'

While Google still dominates the market, rival AI-powered search engines such as Perplexity are slowly entering the competition. According to Bank of America executive Muhammad Rasulnejat, Google's spending of $14 billion on infrastructure investment in the last quarter alone points to "desperation in the face of competition", not growing demand. On top of all this, the US Department of Justice's ongoing monopolisation case against Google creates separate pressure; the department is even demanding that Google divest its Chrome browser. The company's recent advertising and AI moves may further inflame these debates.
