
Japan's legal system struggles to deal with AI child pornography
Disturbingly realistic sexual images of children generated by artificial intelligence are spreading worldwide across social media and online forums — often based on real photos scraped from the internet or school yearbooks.
In Japan, these images fall into a legal gray zone that leaves children whose photos have been used as training data for such AI with no clear path to justice. That gap is becoming increasingly dangerous, experts warn, as AI tools make it easier for anyone to create and share hyper-realistic images with just a few clicks.
'The current law was designed to protect real children, but generative AI has blurred the line between real and fake,' said Takashi Nagase, a lawyer and professor at Kanazawa University who helped draft internet policy at the Internal Affairs and Communications Ministry.
Under Japan's law on child pornography, which took effect in 1999, the possession and distribution of child sexual abuse material (CSAM) is illegal, but the law applies only to depictions of real, identifiable children.
AI-generated content, including 'deepfakes' made using pictures of real people, is not explicitly covered under the current framework, nor are human-drawn illustrations that depict child abuse.
As a result, fictional images created using generative AI trained on photos of children often fall outside the law's reach, unless the child depicted can be clearly identified.
This ambiguity is raising alarms among child protection advocates, as policymakers struggle to decide where to draw the line.
Fighting to stop it
One local government has taken matters into its own hands.
On April 1, a revised ordinance took effect in Tottori Prefecture that explicitly bans creating or distributing AI-generated child pornography based on photos of children living in the prefecture, even when the images are created outside it.
'We've established (with the ordinance) that AI-generated deepfake pornography is not something that should be allowed,' said Tottori Gov. Shinji Hirai at a news conference on April 3, calling on the central government to draft a similar law.
The ordinance does not specify penalties for violators, a matter left for future discussion; for now, its aim is to raise awareness of the issue, Hirai added.
Without a national law, enforcement remains patchy and potentially limited by jurisdiction. Images kept on servers overseas or shared anonymously can be difficult to trace or remove, even when local ordinances apply.
Nonprofits are also stepping up pressure. ChildFund Japan, which has long campaigned for stronger child protection policies, began focusing on AI-generated abuse imagery following the global #MeToo movement and growing public support for modernizing Japan's approach to CSAM.
In 2023, the group raised the issue in parliament, and has since hosted symposiums, launched a working group, and held discussions with lawmakers and tech platforms.
In a survey it released in March, 87.9% of 1,200 respondents in Japan aged 15 to 79 said they want stricter legislation banning AI-generated CSAM.
'There's growing concern that generative AI isn't being adequately addressed in Japanese media or law,' said Kazuhiko Takeda, executive director of ChildFund Japan. 'The law as it stands was not made from a child's perspective. That has to change.'
One route currently open to victims is to file a defamation lawsuit. But this puts the burden on the child and their guardians to notice the misuse and file a complaint, 'a completely unrealistic expectation,' Takeda said. The deeper issue, he said, is one of awareness among both lawmakers and the public, and he advocates comprehensive legislation that also bans AI-generated images made from real photos.
Asked during a Lower House Cabinet Committee meeting on April 9 whether existing legislation is sufficient to prosecute those who create or share such images, Masahiro Komura, state minister for justice, said AI-generated CSAM can be restricted under certain conditions.
Komura said if an image 'shows the posture or appearance of a real child in a way that can be visually perceived,' it may qualify as child pornography — especially if the source material is identifiable.
Empowering children
Chief Cabinet Secretary Yoshimasa Hayashi said in the same parliamentary session that a cross-ministerial task force and a government expert panel are working to address legal and ethical questions surrounding generative AI and its misuse.
Other countries, meanwhile, have already moved ahead.
In February, the U.K. announced a new bill that will make it illegal to possess, create or distribute AI tools designed to create CSAM, with a punishment of up to five years in prison.
In the U.S., AI-generated CSAM is illegal under federal law, regardless of whether the victim exists.
Experts say Japan could benefit from studying these models — but legal reform alone isn't enough. As AI tools become more accessible, there's a growing consensus that education must play a central role in protecting children.
That includes teaching young people about the risks of sharing personal photos online and integrating AI and media literacy into school curricula, empowering them to protect themselves from evolving threats the law has yet to catch up with.
'The generated image might be fictional — but the harm to real victims is not,' said Takeda. 'That's the line Japan needs to draw, and it needs to be drawn now.'