
China facing growing campaign to make it compulsory to label AI-generated content
Support for the mandatory labelling of AI-generated content is growing in China, after a lawmaker's proposal was backed by a commentary in party mouthpiece People's Daily.
Li Dongsheng, a member of the National People's Congress, said the labelling could reduce malicious misuse of the technology.
In the commentary published online on Thursday, Liu Xingliang, director of the Beijing-based Data Centre of China Internet (DCCI)'s internet research institute, wrote that the requirement was a 'practical necessity' and would 'align with current social governance needs and contribute to the industry's long-term healthy growth.'
Citing the rise of models such as DeepSeek, Liu wrote that artificial intelligence was becoming part of people's daily lives and work but also gave rise to a 'series of social issues, such as academic fraud and deepfake scams'.
'These problems may accelerate the spread of misinformation and even pose a threat to social stability,' Liu said.
'Any emerging technology undergoes a phase of unregulated expansion in its early development stages. However, as negative impacts become more apparent, appropriate regulation becomes a necessary measure.'
Related Articles

South China Morning Post
an hour ago
How Hangzhou's ‘Six Little Dragons' built a new Chinese tech hub
Hangzhou is rapidly gaining recognition as a leading technology centre in China, primarily because of the rise of local start-ups known as the 'Six Little Dragons'. One of the hottest new companies is DeepSeek, which captured global attention in early 2025 with its cost-efficient artificial intelligence (AI) models. The other 'dragons' include robotics firms Unitree and Deep Robotics, video game studio Game Science, brain-machine interface innovator BrainCo, and 3D interior design software developer Manycore. The cluster of hi-tech companies forming in Hangzhou is no coincidence. The city is already home to Zhejiang University and Alibaba, which owns the South China Morning Post. It attracts talent from across the country, and its business-friendly policies have nurtured many start-ups.


South China Morning Post
an hour ago
Meet Goku, the Chinese quant trading fund with AI ambitions beyond finance
Goku Technologies, a Shanghai-based quantitative trading fund, is expanding its ambitions beyond finance, looking to artificial intelligence (AI) as a means to overcome challenges faced by humanity, according to CEO Wang Xiao. In an interview with the Post, Wang said the firm's AI initiatives were driven by a vision that transcended short-term commercial interests. 'What I believe is [AI exists] to solve issues for humanity – this is the only thing that matters,' he said. Last month, Goku launched Shanghai AllMind Artificial Intelligence Technology, an AI-focused subsidiary, in a move that draws parallels with Chinese hedge fund High-Flyer Quant, which established DeepSeek in 2023. Wang, however, played down the comparison, pointing out that Goku was pursuing a different path with AllMind. Instead of competing directly with major tech companies like Alibaba Group Holding or start-ups like DeepSeek, which develop AI models from scratch, AllMind was envisioned as a catalyst for AI-assisted scientific discovery, according to Wang. Alibaba owns the South China Morning Post. Goku gained attention last month with the release of a research paper introducing a new AI training technique that combines supervised fine-tuning and reinforcement learning – key methods used by industry leaders like OpenAI and DeepSeek.


South China Morning Post
11 hours ago
Misuse of AI risks harming public confidence in the justice system
The risks of relying on artificial intelligence for research, without verifying the results, should by now be clear to all, almost three years after the groundbreaking launch of ChatGPT. Lawyers using generative AI tools to prepare material for court should be setting a shining example. But judges around the world, from Britain to the US, Canada and Australia, continue to be presented with arguments based on non-existent court judgments generated by AI. More needs to be done to prevent such abuses. Hong Kong is not immune to the problem. Secretary for Justice Paul Lam Ting-kwok used a ceremony for three new senior counsel to sound a warning last weekend. He said the city's legal profession faced the challenge of adopting new technology without compromising integrity. Lam then quoted from a UK court judgment delivered the previous day. The court had warned that AI tools are 'not capable of conducting reliable legal research'. Dame Victoria Sharp, one of two judges ruling in the case of Ayinde, pointed out that AI's 'apparently coherent and plausible responses' may be entirely incorrect or simply untrue.