Research, ethics, and artificial intelligence
A study on artificial intelligence by the University of Zurich, approved by its own ethics committee, has been widely lambasted for failing to get informed consent from its unwitting participants.
How do you research the way chatbots affect opinions on social media without resorting to secretly throwing artificial voices into an opinion thread?
Well, certainly not the way the University of Zurich went about it: secretly launching a series of Reddit profiles run by chatbots pretending to be, variously, a rape victim, a trauma counsellor, and a black man opposed to the Black Lives Matter movement.
The university has now been threatened with legal action for failing to get informed consent for the experiment.
The research team only disclosed their experiment to the wider public after they'd finished collecting data, and their post outlining what they'd done attracted thousands of comments from users who felt their privacy had been breached.
Reddit responded by banning the university from its platform and threatening legal action. The university has now promised the study's results won't be released to the public, and says it will review and strengthen its ethical review process.
This particular issue may be resolved, but discussion of the ethical guidelines for research using artificial intelligence is ongoing.
"My initial thoughts were quite similar to a lot of people on Reddit, which was, 'They've done what?'," says Dr Andrew Lensen, a senior lecturer in Artificial Intelligence at Victoria University.
By not informing Reddit users that they might be subject to the experiment, Lensen says, the researchers bypassed one of the fundamental principles of research ethics.
"Consent ... in a lot of AI research especially it does come back to the idea of consent, which is that if you are going to run a study with human participants, then they need to opt in and they need to be consenting in an informed and free way," he says.
In a Reddit post, the researchers said that "to ethically test LLMs' [large language models] persuasive power in realistic scenarios, an unaware setting was necessary", reasoning the ethics committee at the University of Zurich acknowledged before giving the research the green light.
But Lensen questions this reasoning, saying the argument that prior consent was "impractical" wouldn't get past any ethics committee in New Zealand.
"The human ethics committee would be saying, 'well how can you redesign your experiment so that you can get consent, while still meeting the essence of what you're trying to study?'" he asks.
It turns out there are other ways, and Reddit users were quick to alert the researchers to a similar study conducted by OpenAI.
"[OpenAI] took existing threads and then made Arti-Chatbot to respond and then compared the Chatbot responses to the human responses... and then they had people essentially score them in a blind way, so the person scoring didn't know which was a Chatbot and which was a human," Lensen says.
There has been an influx of bots lurking in the comment sections of various social media platforms.
It's hard to put an exact figure on how many there are because they're constantly changing and updating to become "more human", making them difficult to detect.
But Lensen says that just means we, the actual real people, need to think twice about any account we engage with.
"It's not necessarily that the things posted by bots online are 'bad' ... but as humans we also want to know what is AI generated and what is human because we value those things differently," he says.
Lensen says AI can be helpful when it comes to getting information and talking through ideas, but it can't fully replace a real-life person.
"We tend to want human reactions and human responses, we don't want facts and hot AI takes," he says.
Lensen says there is a need for more research like the University of Zurich's, but with prior consent, to understand how people interact with bots and what the effects are.
"Is it going to polarise people or is it going to bring people together? How do people feel, how do they react when you tell them afterwards whether or not it was a bot or human and why do they feel that way?
"And what does that then mean for how we want the internet or social media or even our society to operate with this influx of bots?"