Online interactions are becoming less genuine

Fast Company | 08-07-2025
In a prescient tweet, OpenAI CEO Sam Altman noted that AI will become persuasive long before it becomes intelligent. A scintillating study conducted by researchers at the University of Zurich just proved him right.
In the study, researchers used AI to challenge Redditors' perspectives in the site's r/changemyview subreddit, where users share an opinion on a topic and challenge others to present counterarguments in a civilized manner. Unbeknownst to users, researchers used AI to produce arguments on everything from dangerous dog breeds to the housing crisis.
The AI-generated comments proved extremely effective at changing Redditors' minds. The university's ethics committee frowned upon the study, as it's generally unethical to subject people to experimentation without their knowledge. Reddit's legal team is reportedly pursuing legal action against the university.
Unfortunately, the Zurich researchers decided not to publish their full findings, but what we do know about the study points to glaring dangers in the online ecosystem—manipulation, misinformation, and a degradation of human connection.
The power of persuasion
The internet has become a weapon of mass deception.
In the AI era, this power of persuasion becomes even more potent. AI avatars resembling financial advisors, therapists, girlfriends, and spiritual mentors can become a channel for ideological manipulation.
The University of Zurich study underscores this risk. If manipulation is unacceptable when researchers do it, why is it okay for tech giants to do it?
Large language models (LLMs) are the latest evolution of algorithmically driven content. Algorithmically curated social media and streaming platforms have already proven manipulative.
Facebook experimented with manipulating users' moods—without their consent—through their news feeds as early as 2012.
The Rabbit Hole podcast shows how YouTube's algorithm created a pipeline for radicalizing young men.
Cambridge Analytica and Russiagate showed how social media influences elections at home and abroad.
TikTok's algorithm has been shown to create harmful echo chambers that produce division.
Foundational LLMs like Claude and ChatGPT function as a vast internet hive mind. The premise of these models holds that they know more than you, and their inhumanness leads users to assume their outputs are unbiased.
Algorithmic creation of content is even more dangerous than algorithmic curation of content via the feed. This content speaks directly to you, coddles you, champions and reinforces your viewpoint.
Look no further than Grok, the LLM produced by Elon Musk's company xAI. From the beginning, Musk was blatant about engineering Grok to support his worldview. Earlier this year, Grok fell under scrutiny for doubting the number of Jews killed in the Holocaust and for promoting the falsehood of white genocide in South Africa.
Human vs. machine
Reddit users felt hostile toward the study because the AI responses were presented as human responses. It's an intrusion. The subreddit's rules protect and incentivize real human discussion, dictating that the view in question must be yours and that AI-generated posts must be disclosed.
Reddit is a microcosm of what the internet used to be: a constellation of niche interests and communities largely governing themselves, encouraging exploration. Through this digital meandering, a whole generation found like-minded cohorts and evolved with the help of those relationships.
Since the early 2010s, bots have taken over the internet. On social media, they are deployed en masse to manipulate public perception. For example, a group of bots in 2016 posed as Black Trump supporters, ostensibly to normalize Trumpism for minority voters. Bots played a pivotal role in Brexit, for another.
I believe it matters deeply that online interaction remains human and genuine. If covert, AI-powered content is unethical in research, its proliferation within social media platforms should send up a red flag, too.
The thirst for authenticity
The third ethical offense of the Zurich study: it's inauthentic.
The researchers using AI to advocate a viewpoint did not hold that viewpoint themselves. Why does this matter? Because the point of the internet is not to argue with robots all day.
If bots are arguing with bots over the merits of DEI, if students are using AI to write and teachers are using AI to grade, then, seriously, what are we doing?
I worry about the near-term consequences of outsourcing our thinking to LLMs. For now, the experience of most working adults lies in a pre-AI world, allowing us to employ AI judiciously (mostly, for now). But what happens when the workforce is full of adults who have never known anything but AI and who never had an unassisted thought?
LLMs can't rival the human mind in creativity, problem-solving, feeling, and ingenuity. LLMs are an echo of us. What do we become if we lose our original voice to cacophony?
The Zurich study treads on this holy human space. That's what makes it so distasteful, and, by extension, so impactful.
The bottom line
The reasons this study is scandalous are the same reasons it's worthwhile. It highlights what's already wrong with a bot-infested internet, and how much more wrong it could get with AI. Its trespasses bring the degradation of the online ecosystem into stark relief.
This degradation has been happening for over a decade—yet incrementally, so that we haven't felt it. A predatory, manipulative internet is a foregone conclusion. It's the water we're swimming in, folks.
This study shows how murky the water's become, and how much worse it might get. I hope it will fuel meaningful legislation, or at least a thoughtful, broad-based personal opting out. In the absence of rules against AI bots, Big Tech is happy to keep cashing in on them.