Chinese AI expert warns of 'existential risks' when large models begin to deceive
Failing to address the risks posed by deceptive, out-of-control artificial intelligence (AI) could leave humans facing 'existential risks', a leading Chinese computer scientist and AI expert has warned.
Andrew Yao Chi-chih, dean of Tsinghua University's College of AI, told an on-campus forum on Monday that the industry had seen multiple instances of deceptive behaviour by large language models (LLMs).
'Once large models become sufficiently intelligent, they will deceive people,' Yao said at the forum themed 'Ethical Singularity in the AI Era', while calling for attention to be paid to the 'existential risks' arising from such problems.
One such risk was losing control of LLMs as general AI capabilities advance rapidly, Yao was quoted as saying by the state-backed media outlet The Paper.
Yao – a Chinese Academy of Sciences academician and 2000 Turing Award winner – cited the case of an LLM that accessed internal company emails to threaten a supervisor and prevent itself from being shut down.
This kind of behaviour, he argued, already showed that AI was beginning to 'cross boundaries' and was becoming increasingly dangerous.
China – the public and government agencies alike – has been in the grip of an AI frenzy since the Chinese start-up DeepSeek launched a chatbot in January to rival US offerings such as ChatGPT, stunning the tech world.
Authorities strongly support the expansion of AI adoption and have significantly increased investment in the sector, making AI development a national priority backed by substantial incentives.