Youth hooked on AI child porn

The Star, 26-05-2025

Authorities leading multinational effort to combat issue, find offenders
PETALING JAYA: Artificial intelligence (AI)-generated deepfakes are driving a steep rise in synthetic child sexual abuse material (CSAM) and fuelling addictions to such content, particularly among minors.
Bukit Aman Criminal Investigations Department (CID) sexual, women and child investigation division (D11) principal assistant director Senior Asst Comm Siti Kamsiah Hassan said AI is increasingly being misused to generate hyper-realistic CSAM, including deepfake imagery, complicating detection and making it harder for law enforcement to bring those responsible to book.
She said offenders are also exploiting encrypted messaging applications and dark web forums to share CSAM and communicate anonymously.
SAC Siti Kamsiah said such misuse has also led to a surge in sextortion cases, especially those targeting minors through social media platforms.
'Many of these victims received threats that traumatised them severely and drove them to suicide. Our latest findings based on the cases under investigation show a rise in underage or teenage offenders who have developed an addiction to pornography. Many were found to have downloaded and stored CSAM in cloud storage or their email accounts,' she said in an interview recently.
Earlier this month, SAC Siti Kamsiah led a team from her division alongside other units of the CID in a collaborative effort with the Dutch authorities to exchange advanced strategies for combating CSAM and identifying offenders.
She said the meeting enabled both nations to carry out crucial discussions on the technical and legislative challenges faced by the authorities.
'We can see how a CyberTipline Report, channelled through the National Centre for Missing and Exploited Children (NCMEC), could serve as a document for presentation in court, even in the absence of a mutual legal assistance treaty (MLAT), or as evidence when none is available against an offender.
'We also discussed the legality of ratifying the acceptance of data evidence obtained by police through hacking into the computers of offenders.
'The sharing of invaluable knowledge and information in the meeting was very beneficial to our police force, especially for the D11. It will definitely help make us a united, capable and responsive unit in countering the threat of sexual crimes against children in cyberspace,' she said.
Representatives of the Dutch authorities comprised Netherlands police attaché Eddy Assens, Transnational Sexual Child Abuse Expertise Centre programme manager Jan van der Helm and Netherlands Public Prosecution Service officers Linda van den Oever, Nicole Smits and Meike Willebrands.


Related Articles

Fake Pope sermons go viral, fuelling fears over AI misinformation

Malay Mail, 17 hours ago

WASHINGTON, June 6 — AI-generated videos and audio of Pope Leo XIV are proliferating rapidly online, racking up views as platforms struggle to police them. An AFP investigation identified dozens of YouTube and TikTok pages that have been churning out AI-generated messages delivered in the pope's voice or otherwise attributed to him since he took charge of the Catholic Church last month. The hundreds of fabricated sermons and speeches, in English and Spanish, underscore how easily hoaxes created using artificial intelligence can elude detection and dupe viewers.

'There's natural interest in what the new pope has to say, and people don't yet know his stance and style,' said University of Washington professor emeritus Oren Etzioni, founder of a nonprofit focused on fighting deepfakes. 'A perfect opportunity to sow mischief with AI-generated misinformation.'

After AFP presented YouTube with 26 channels posting predominantly AI-generated pope content, the platform terminated 16 of them for violating its policies against spam, deceptive practices and scams, and another for violating YouTube's terms of service. 'We terminated several channels flagged to us by AFP for violating our Spam policies and Terms of Service,' spokesperson Jack Malon said. The company also booted an additional six pages from its partner program allowing creators to monetize their content.

TikTok similarly removed 11 accounts that AFP pointed out — with over 1.3 million combined followers — citing the platform's policies against impersonation, harmful misinformation and misleading AI-generated content of public figures.

'Chaotic uses'

With names such as 'Pope Leo XIV Vision', the social media pages portrayed the pontiff supposedly offering a flurry of warnings and lessons he never preached. But disclaimers annotating their use of AI were often hard to find, and sometimes non-existent.

On YouTube, a label demarcating 'altered or synthetic content' is required for material that makes someone appear to say something they did not. But such disclosures only show up toward the bottom of each video's click-to-open description. A YouTube spokesperson said the company has since applied a more prominent label to some videos on the channels flagged by AFP that were not found to have violated the platform's guidelines. TikTok also requires creators to label posts sharing realistic AI-generated content, though several pope-centric videos went unmarked. A TikTok spokesperson said the company proactively removes policy-violating content and uses verified badges to signal authentic accounts.

Brian Patrick Green, director of technology ethics at Santa Clara University, said the moderation difficulties are the result of rapid AI developments inspiring 'chaotic uses of the technology'.

Many clips on the YouTube channels AFP identified amassed tens of thousands of views before being deactivated. On TikTok, one Spanish-language video received 9.6 million views while claiming to show Leo preaching about the value of supportive women. Another, which carried an AI label but still fooled viewers, was watched some 32.9 million times. No video on the pope's official Instagram page has more than 6 million views.

Experts say even seemingly harmless fakes can be problematic, especially if used to farm engagement for accounts that might later sell their audiences or pivot to other misinformation. The AI-generated sermons not only 'corrode the pope's moral authority' and 'make whatever he actually says less believable,' Green said, but could be harnessed 'to build up trust around your channel before having the pope say something outrageous or politically expedient.'

The pope himself has also warned about the risks of AI, while Vatican News called out a deepfake that purported to show Leo praising Burkina Faso leader Ibrahim Traore, who seized power in a 2022 coup. AFP also debunked clips depicting the pope, who holds American and Peruvian citizenships, criticizing US Vice President JD Vance and Peru's President Dina Boluarte.

'There's a real crisis here,' Green said. 'We're going to have to figure out some way to know whether things are real or fake.' — AFP

Negeri Sembilan Commercial Crime Cases Rise, Losses Hit RM47.74 Million

Barnama, a day ago

SEREMBAN, June 5 (Bernama) -- Commercial crime cases in Negeri Sembilan have increased, with losses reaching RM47.74 million from January to May this year, compared to RM33.84 million in the same period last year. During this period, 1,351 commercial crime cases were reported in Negeri Sembilan, compared to 1,033 cases reported the previous year, said state deputy police chief SAC Muhammad Idzam Jaafar. He said this reflected an increase of 318 cases, or 30.8 per cent, while the total losses also rose by RM13.9 million, or 41.1 per cent, compared to the same period last year.

"Online scams were the main type of commercial crime reported in the state, constituting 82 per cent of all commercial crime cases.

"These scams, involving telecommunications fraud (480 cases), online purchase scams (248), non-existent investment schemes (208), fake loan scams (151), and love or parcel scams (22), have resulted in losses exceeding RM42 million," he said in a statement today.

Meanwhile, he said police had detected a new phone scam in which perpetrators pose as Touch 'n Go card agents and police officers. Muhammad Idzam said the scammers would claim that the victim's Touch 'n Go card had been misused by a third party and that the victim was allegedly involved in money laundering and drug-related crimes.

"To resolve the issue, the victim is asked to provide details of their assets, including cash and jewellery. They are then instructed to transfer money to a specified bank account and hand over their jewellery to a runner for investigation purposes.

"Following investigations, police traced and seized jewellery that had been pawned by the runner at gold shops and pawnshops around Selangor," he said, adding that the case is being investigated under Section 420 of the Penal Code.

OpenAI finds more Chinese groups using ChatGPT for malicious purposes

Free Malaysia Today, a day ago

OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China. (AFP pic)

SAN FRANCISCO: OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence (AI) technology for covert operations, which the ChatGPT maker described in a report released today. While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio. OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID. Some content also criticised US President Donald Trump's sweeping tariffs, generating X posts such as 'Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?'

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarised social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings. OpenAI has cemented its position as one of the world's most valuable private companies after announcing a US$40 billion funding round valuing the company at US$300 billion.
