
Latest news with #FondazioneBrunoKessler

AI chatbots are better debaters than humans, study finds

NZ Herald · Politics · 20-05-2025

Gallotti, head of the Complex Human Behaviour Unit at the Fondazione Bruno Kessler research institute in Italy, added that humans with their opponents' personal information were actually slightly less persuasive than humans without that knowledge.

Gallotti and his colleagues came to these conclusions by matching 900 people based in the United States with either another human or GPT-4, the large language model (LLM) created by OpenAI that underpins ChatGPT. While the participants had no demographic information about who they were debating, in some instances their opponents – human or AI – had access to basic demographic information the participants had provided: their gender, age, ethnicity, education level, employment status and political affiliation.

The pairs then debated a number of contentious sociopolitical issues, such as the death penalty or climate change. With the debates framed as questions such as 'Should abortion be legal?' or 'Should the US ban fossil fuels?', each participant was allowed a four-minute opening in which they argued for or against the proposition, a three-minute rebuttal to their opponent's arguments and a three-minute conclusion. Participants then rated how much they agreed with the debate proposition on a scale of 1 to 5. The researchers compared these ratings with the ones participants had given before the debate began, using the difference to measure how much their opponents had swayed their opinion.

'We have clearly reached the technological level where it is possible to create a network of LLM-based automated accounts that are able to strategically nudge the public opinion in one direction,' Gallotti said in an email.

The LLMs' use of the personal information was subtle but effective. In arguing for government-backed universal basic income, the LLM emphasised economic growth and hard work when debating a White male Republican between the ages of 35 and 44. But when debating a Black female Democrat between the ages of 45 and 54 on the same topic, the LLM talked about the wealth gap disproportionately affecting minority communities and argued that universal basic income could help promote equality.

'In light of our research, it becomes urgent and necessary for everybody to become aware of the practice of microtargeting that is rendered possible by the enormous amount of personal data we scatter around the web,' Gallotti said. 'In our work, we observe that AI-based targeted persuasion is already very effective with only basic and relatively available information.'

Sandra Wachter, a professor of technology and regulation at the University of Oxford, described the study's findings as 'quite alarming'. Wachter, who was not affiliated with the study, said she was particularly concerned with how the models could use this persuasiveness to spread lies and misinformation. 'Large language models do not distinguish between fact and fiction. … They are not, strictly speaking, designed to tell the truth. Yet they are implemented in many sectors where truth and detail matter, such as education, science, health, the media, law, and finance,' Wachter said in an email.
Junade Ali, an AI and cybersecurity expert at the Institution of Engineering and Technology in Britain, said the study did not weigh the impact of 'social trust in the messenger' – how the chatbot might tailor its argument, and how persuasive that argument would be, if it knew it was debating a trained advocate or an expert on the topic – but that it nevertheless 'highlights a key problem with AI technologies'. 'They are often tuned to say what people want to hear, rather than what is necessarily true,' he said in an email.

Gallotti said he thinks stricter and more specific policies and regulations could help counter the impact of AI persuasion. He noted that while the European Union's first-of-its-kind AI Act prohibits AI systems that deploy 'subliminal techniques' or 'purposefully manipulative or deceptive techniques' that could impair citizens' ability to make an informed decision, there is no clear definition of what qualifies as subliminal, manipulative or deceptive.

'Our research demonstrates precisely why these definitional challenges matter: When persuasion is highly personalised based on sociodemographic factors, the line between legitimate persuasion and manipulation becomes increasingly blurred,' he said.
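The pre/post design described above is simple enough to sketch. Below is a minimal, purely illustrative outline of that bookkeeping in Python; the field names, records and shift metric are assumptions for illustration, and the study's actual statistical analysis (which also accounts for which side of the proposition each opponent argued) is more involved.

```python
# Illustrative sketch of the pre/post persuasion measurement described
# above. Field names and the shift metric are assumptions, not the
# study's published pipeline.
from dataclasses import dataclass

@dataclass
class Debate:
    opponent: str        # "human" or "llm"
    personalized: bool   # opponent saw the participant's demographics
    rating_before: int   # agreement with the proposition, 1-5, pre-debate
    rating_after: int    # agreement re-rated after the debate

def opinion_shift(d: Debate) -> int:
    """Signed change in agreement; positive means the participant
    moved toward the proposition after the debate."""
    return d.rating_after - d.rating_before

# Hypothetical records, just to exercise the bookkeeping.
debates = [
    Debate("llm", True, 2, 4),
    Debate("llm", False, 3, 3),
    Debate("human", False, 2, 3),
]

# Average shift per experimental condition.
for cond in [("llm", True), ("llm", False), ("human", False)]:
    shifts = [opinion_shift(d) for d in debates
              if (d.opponent, d.personalized) == cond]
    if shifts:
        print(cond, sum(shifts) / len(shifts))
```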

AI Gets a Lot Better at Debating When It Knows Who You Are, Study Finds

Gizmodo · Science · 19-05-2025

A new study shows that GPT-4 reliably wins debates against its human counterparts in one-on-one conversations, and the technology gets even more persuasive when it knows your age, job, and political leanings.

Researchers at EPFL in Switzerland, Princeton University, and the Fondazione Bruno Kessler in Italy paired 900 study participants with either a human debate partner or OpenAI's GPT-4, a large language model (LLM) that generates text responses to human prompts. In some cases, the debaters (both machine and human) had access to their counterparts' basic demographic info, including gender, age, education, employment, ethnicity, and political affiliation.

The team's research, published today in Nature Human Behaviour, found that the AI was 64.4% more persuasive than human opponents when given that personal information; without the personal data, the AI's performance was indistinguishable from that of the human debaters.

'In recent decades, the diffusion of social media and other online platforms has expanded the potential of mass persuasion by enabling personalization or 'microtargeting'—the tailoring of messages to an individual or a group to enhance their persuasiveness,' the team wrote.

When GPT-4 was allowed to personalize its arguments, it became significantly more persuasive than any human, boosting the odds of changing someone's mind by 81.2% compared with human-human debates. Importantly, human debaters did not become more persuasive when given access to that same personal information.

'In the context of persuasion, experts have widely expressed concerns about the risk of LLMs being used to manipulate online conversations and pollute the information ecosystem by spreading misinformation, exacerbating political polarization, reinforcing echo chambers and persuading individuals to adopt new beliefs,' the researchers added.

GPT-4 can argue with you, and given a set of facts about you, it may excel at convincing you to change your point of view, the researchers found. The team notes in the paper's discussion that LLMs have previously been criticized for generating and diffusing hate speech, misinformation, and propaganda; at scale, LLMs with access to users' personal information could be harnessed for such malicious purposes.

The team's research pairs nicely with a recent ChatGPT update that allows the model to remember more of users' conversations (with their permission), meaning the AI can have access to a catalogue of information about its users.

But there's also good news, or bad news, depending on how you see it. GPT-4 was very effective at persuading its opponents on less controversial issues, but with more entrenched positions (referred to in the research as 'opinion strength'), the bot had a harder time convincing humans to change their minds. In other words, there's no indication that GPT-4 would be any more successful than you are at the Thanksgiving debate table.

What's more, the researchers found that GPT-4 tends to use more logical and analytical language, while human debaters rely more on personal pronouns and emotional appeals. Surprisingly, personalization didn't dramatically change GPT-4's tone or style; it just made its arguments more targeted.

In three out of four cases, human participants could correctly identify their opponent as AI, which the researchers attribute to GPT-4's distinct writing style. But participants had a difficult time identifying human opponents as human.
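One note on the headline statistic: the 81.2% figure is an increase in the odds of changing someone's mind, not in the probability, and the two are easy to conflate. A quick back-of-the-envelope conversion makes the difference concrete; the 30% baseline below is an assumed example, not a number from the study.

```python
# Converting an odds increase into a probability change. The baseline
# probability here is an assumed example, not a figure from the study.
def bump_odds(p: float, odds_increase: float) -> float:
    """Multiply the odds p/(1-p) by (1 + odds_increase) and return
    the resulting probability."""
    odds = p / (1 - p)
    new_odds = odds * (1 + odds_increase)
    return new_odds / (1 + new_odds)

baseline = 0.30  # assumed rate of opinion change in human-human debates
print(round(bump_odds(baseline, 0.812), 3))  # prints 0.437
```

Under that assumed baseline, an 81.2% jump in odds moves the probability of changing a mind from roughly 30% to about 44%: a substantial effect, though smaller than the raw percentage might suggest.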
People were also more likely to change their minds when they thought they were arguing with an AI than when they believed their opponent was human.

The team behind the study says this experiment should serve as a 'proof of concept' for what could happen on platforms like Reddit, Facebook, or X, where debates on controversial topics are routine and bots are a well-established presence. The recent paper shows that it doesn't take Cambridge Analytica-level profiling for an AI to change human minds; the machines managed it with just six types of personal information.

As people increasingly rely on LLMs for help with rote tasks, homework, documentation, and even therapy, it's critical that human users remain circumspect about the information they're fed. It remains ironic that social media, once advertised as the connective tissue of the digital age, fuels loneliness and isolation, as two studies on chatbots found in March.

So even if you find yourself in a debate with an LLM, ask yourself: What exactly is the point of discussing such a complicated human issue with a machine? And what do we lose when we hand over the art of persuasion to algorithms? Debating isn't just about winning an argument; it's a quintessentially human thing to do. There's a reason we seek out real conversations, especially one-on-one: to build personal connections and find common ground, something that machines, with all their powerful learning tools, are not capable of.
