
China has dealt with over 3,500 non-compliant AI products since April
The campaign, launched in April, targets abuses of AI technology such as deepfake face-swapping and voice cloning that infringe on public interests, as well as failures to properly label AI-generated content that have misled the public, the Cyberspace Administration of China (CAC) said on Friday.
More than 960,000 items with illegal or harmful content were removed from the internet, and over 3,700 related accounts were shut down over the period, the CAC said.
During this phase, the CAC instructed local cyberspace authorities to intensify their actions against non-compliant AI products, and to cut off their marketing and traffic channels. It urged major websites and platforms to strengthen their technical safeguards. Efforts were also made to accelerate the implementation of labeling regulations for AI-generated content.
In the next phase of the campaign, the CAC will focus on prominent issues such as AI-generated rumors and vulgar online content, build a technical monitoring system, and standardize sanction protocols to maintain a healthy online environment and steer AI development in a more positive direction.



The Star
OpenAI releases free, downloadable models in competition catch-up
SAN FRANCISCO: OpenAI on Tuesday released two new artificial intelligence (AI) models that can be downloaded for free and altered by users, challenging similar offerings from its US and Chinese competitors.

The release of the gpt-oss-120b and gpt-oss-20b "open-weight language models" comes as the ChatGPT maker is under pressure to share the inner workings of its software, in the spirit of its origin as a nonprofit.

"Going back to when we started in 2015, OpenAI's mission is to ensure AGI (Artificial General Intelligence) that benefits all of humanity," said OpenAI chief executive Sam Altman.

An open-weight model, in the context of generative AI, is one whose trained parameters are made public, enabling users to fine-tune it. Meta touts its open-source approach to AI, and Chinese AI startup DeepSeek rattled the industry with a low-cost, high-performance model whose open-weight approach allows users to customise the technology.

"This is the first time that we're releasing an open-weight model in language in a long time, and it's really incredible," OpenAI co-founder and president Greg Brockman said during a briefing with journalists.

The new, text-only models deliver strong performance at low cost, according to OpenAI, which said they are suited to AI jobs such as searching the internet or executing computer code, and are designed to be easy to run on local computer systems.

"We are quite hopeful that this release will enable new kinds of research and the creation of new kinds of products," Altman said.

OpenAI said it is working with partners including French telecommunications giant Orange and cloud-based data platform Snowflake on real-world uses of the models. The open-weight models have been tuned to thwart malicious use, according to OpenAI.

Altman said early this year that his company had been "on the wrong side of history" when it came to being open about how its technology works.
He later announced that OpenAI would continue to be run as a nonprofit, abandoning a contested plan to convert into a for-profit organisation. The structure had become a point of contention, with major investors pushing for better returns.

The plan faced strong criticism from AI safety activists and from co-founder Elon Musk, who sued the company he left in 2018, claiming the proposal violated its founding philosophy. Under the revised plan, OpenAI's money-making arm will be free to generate profits but will remain under the nonprofit board's supervision. – AFP

Barnama
Is AI In The Newsroom A Tool, Threat Or Transformation?
Opinions on topical issues from thought leaders, columnists and editors.

Artificial intelligence (AI) is reshaping journalism, changing the way news is gathered, processed, and delivered. From automated summaries to tools that can identify deepfakes, AI gives newsrooms the means to increase productivity and speed.

But with such powers come profound questions: Can we trust AI-generated content? What becomes of journalists' jobs? And how do we ensure that ethics remain at the forefront?

The artistry of storytelling, emotional understanding, and values-based thinking are uniquely human traits that AI cannot replicate. AI is not aiming to replace journalists; it seeks to make their work easier. The technology can handle tedious tasks such as summarising articles, scanning vast datasets, and writing initial reports.

As newsrooms, including those in Southeast Asia, press on with AI, it is worth considering both the opportunities and the challenges. This article examines what AI can and cannot do for journalism, and how journalists will need to adapt to the new age in which we find ourselves.

A recent instance involving fabricated book titles in an AI-generated summer reading list illustrates the technology's shortcomings. Blunders like these demonstrate why human supervision is imperative. Ultimately, AI should be regarded as an aid, not a substitute.

Five years from now, the ideal newsroom is one in which AI is fully integrated but journalists remain in control. AI can manage tedious work such as sifting through data, customising content for different demographics, and drafting preliminary versions of stories. This frees reporters to focus on investigation, storytelling, and tough ethical decisions.

The partnership between AI and human beings is crucial: AI offers up the data, and journalists supply the understanding.
Transparency, accountability, and regular training will ensure that newsrooms remain rooted in journalistic values.

AI excels at tasks that require processing large volumes of information. It can generate financial reports, sports news, or weather forecasts in moments, and it assists with fact-checking by swiftly comparing sources. However, when it comes to comprehending cultural context, interpreting subtle cues, or making moral choices, AI continues to struggle. It lacks human intuition and often amplifies biases present in its training data. This is where human judgement becomes essential.

Across Southeast Asia, news organisations are beginning to adopt AI, although the pace of implementation varies. In Malaysia, Media Prima announced plans to integrate AI across its operations by the end of 2024. BERNAMA is offering AI training for its staff and has previously experimented with AI-generated content, such as an Azan (call to prayer) video during Ramadan. These preliminary steps show both growing interest and a clear recognition of the caution required in AI integration.

Risks may arise from excessive dependence on AI

While AI can enhance productivity, excessive dependence on it may pose serious risks. It could lead to a decline in critical thinking, weaken ethical oversight, and undermine the human factor that lends journalism its trustworthiness. The infamous summer reading list of imaginary books is only one example of what can go awry. If news organisations are not careful, over-reliance on AI might erode public confidence. The right balance, using AI as a tool rather than a crutch, is crucial.

To maintain journalism's credibility, a multi-faceted strategy is essential. News organisations should use AI technologies to detect manipulated content while also equipping journalists with digital literacy and critical thinking skills. Transparency is vital.
AI-generated content should be clearly marked, and its role in the reporting process explicitly explained. Above all, media outlets must establish and follow rigorous ethical standards for the use of AI. These actions will help preserve public trust.

AI will undoubtedly transform journalism, but that does not necessarily mean job losses. Instead, it is likely to create new roles focused on managing AI systems, curating content, and ensuring ethical compliance. Reuters, for example, uses an AI tool called Lynx Insight to help journalists identify emerging stories, freeing reporters to concentrate on more creative and investigative tasks. What is needed now is for journalists to upskill: to learn how to work with AI, not against it.

Journalists must acquire knowledge to stay relevant

Journalists must acquire knowledge of data analysis, AI principles, and digital ethics to stay relevant. Understanding how AI functions and learning to interpret data responsibly will be crucial, and reporters will need strong digital literacy to evaluate AI-generated content and guard against misinformation. Even in the AI era, ethics and transparency must remain central values. With the right training, modern journalists can remain vital and relevant.

AI is rapidly integrating into the daily routines of journalism. While it offers immense benefits in speed and information processing, it cannot replace the thoughtfulness, compassion, or ethical judgement that human journalists bring. As newsrooms, particularly in Southeast Asia, embrace these technologies, caution, accountability, and a commitment to truth must guide their journey. The journalists who adapt and learn to collaborate with AI will not just survive the changes; they will shape the future of trustworthy, meaningful journalism. -- BERNAMA

Prof Ts Dr Manjit Singh Sidhu is a Professor at the College of Computing and Informatics, Universiti Tenaga Nasional (UNITEN).
He is a Fellow of the British Computer Society, Chartered IT Professional, Fellow of the Malaysian Scientific Association, Senior IEEE member and Professional Technologist MBOT Malaysia.


Malay Mail
Perak MB encourages AI adoption in Islamic affairs, highlights digital initiatives
IPOH, August 7 — Muslims should not view artificial intelligence (AI) as a threat but as an opportunity, provided it is guided by divine revelation and aligned with Islamic principles, said Perak Menteri Besar Datuk Seri Saarani Mohamad.

Speaking at the launch of a national symposium on AI challenges, Saarani said that technological progress must be aligned with the principles of Maqasid Syariah, which prioritise the preservation of religion, life, intellect, lineage, and property.

'In line with this awareness, various digitalisation initiatives and the integration of AI are being actively strengthened at the national level.

'Efforts to enhance digital technology in the governance of religious affairs are now being vigorously implemented across the country,' he said at the event held at the Perak Royal Golf Club.

Saarani highlighted several digital initiatives already implemented at the state level that integrate technology with Islamic administrative affairs. Through a collaboration between the state government and the Perak Islamic Religious Department (JAIPk), the Perak Digital 2.0 Portal now offers an e-donation feature with dedicated QR codes, enabling Muslims to contribute to mosques safely and transparently.

He said JAIPk has also begun giving its personnel early exposure to AI technology to explore its potential in enhancing religious services. This includes developing a Shariah-based virtual assistant to answer basic fiqh (Islamic jurisprudence) questions and to assist in filtering deviant social media content.

Further embracing modern technology, the Perak Islamic Religious and Malay Customs Council (MAIPk) has launched the MAIPk Bestari application, which allows digital payment of zakat fitrah, enhancing efficiency and convenience for payers.
In addition, Saarani noted that the Perak Digital application, specifically designed for state mosques, was upgraded in 2024 with new modules and enhanced security systems for managing data, activities, e-donations, and administration. The Perak Mufti Department is also strengthening its staff's digital communication skills through comprehensive ICT training.