21-05-2025
Express View: Matter of debate
Another week, and another warning about the dangers of artificial intelligence. Researchers at the Swiss Federal Institute of Technology in Lausanne have found that when it comes to persuading others in a debate, large language models (LLMs) like ChatGPT can perform as well as humans, if not better. In the age of mis- and disinformation, this ability could spell catastrophe, says the team behind the study: Not only can AI offer convincing, rational arguments; when fed its opponent's personal details, it can also tailor those arguments to win the debate.
No amount of Ghiblification can prettify the spectre this raises: conspiracy theories, hoaxes and 'alternative facts' supercharged by personal data and amplified across platforms. And for those whose lives have already been affected by the 'AI wave', particularly when it comes to employment, there is cold comfort in the suggestion that AI's debating tools can be deployed on the side of the angels, used to persuade people away from political polarisation and harmful behaviours and beliefs.
Yet, as the computational linguist Emily M Bender has argued, an LLM is but a 'stochastic parrot', capable of producing plausible language with no real understanding of what it is saying. Which means that if AI does cause harm through its 'debating skills', the blame rests entirely on the humans involved. To debate is, after all, a deeply human act, driven by other human impulses such as desire, power and ambition. Any technology capable of experiencing these remains in the realm of fiction. The scariest thing about AI is not that it can brainwash the multitudes, but that those who hold its reins are human and, therefore, deeply flawed.