Pass Stronger Laws, Crack Down on AI-Made Porn

Japan Forward, 29 April 2025

The Tokyo Metropolitan Police Department has arrested three men and a woman on suspicion of selling obscene images created with generative AI via the internet.
The proliferation of obscene fake images created by generative AI has been a concern for some time. However, this is the first time that police have cracked down on AI porn merchants.
The significance of this crackdown is that it shows that images created by AI, not just images of actual individuals, can be subject to prosecution for the crime of distributing obscene materials. The court's decision in this case will be closely watched, but the crackdown appears to be in line with accepted social standards.
The four suspects are alleged to have created posters with obscene images and sold them on online auction sites. One of them admitted to making roughly ¥10 million (about $70,000 USD) in sales over about a year.
What is particularly concerning is how easy it was for three of the four suspects to create the obscene posters using free AI generation software, despite having no particular IT knowledge. Regrettably, the barriers to misusing AI have been lowered, even as the scope of abuse is expanding.
Progress in the field of generative AI is lightning-fast. The technology has advanced to the point where an AI-created product cannot be distinguished from the real thing by the human eye. Yet no special knowledge is required, and anyone can use AI software.
Nonetheless, measures to prevent abusive use of AI have lagged. Recently, Tottori Prefecture enacted a revised ordinance for the healthy development of young people that bans the creation and distribution of deepfake pornography.
But shouldn't it really be the national government that takes the lead in establishing such laws?
The AI-generated obscene images in this case were discovered during a "cyber patrol" by the Tokyo Metropolitan Police Department.
Monitoring illegal activity in cyberspace is a necessary task for the police, and we would like to see the quality of that work improve. To keep pace, investigators will likely need AI itself to assist with such monitoring. That is something that should be considered.
No doubt, thanks to its ability to make many intellectual tasks easier, generative AI is destined to become indispensable to modern society. At the same time, however, there is a risk of spreading false information that individual human beings cannot detect as false. And because AI makes criminal misuse so easy, offenders may feel little sense of guilt.
Crimes that use generative AI creations such as images, videos, and text to deceive, defame, or infringe copyrights are expected to become more frequent and more sophisticated in the future.
The government must recognize that the current state of AI misuse constitutes a threat that could disrupt society. It must work to bring about a healthy society through both legal reform and police crackdowns.
(Read the editorial in Japanese.)
Author: Editorial Board, The Sankei Shimbun


Related Articles

U.K. judge warns of risk to justice after lawyers cited fake AI-generated cases in court

Global News

2 days ago


Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said, warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has 'serious implications for the administration of justice and public confidence in the justice system.'

In the latest example of how judicial systems around the world are grappling with the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about 'suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked,' leading to false information being put before the court.

In a ruling written by Sharp, the judges said that in a 90 million pound (US$120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was 'extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around.'

In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had 'not provided to the court a coherent explanation for what happened.' The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action.

Sharp said providing false material as if it were genuine could be considered contempt of court or, in the 'most egregious cases,' perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a 'powerful technology' and a 'useful tool' for the law. 'Artificial intelligence is a tool that carries with it risks as well as opportunities,' the judge said. 'Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained.'


More Chinese groups using ChatGPT for covert operations, OpenAI says

Globe and Mail

5 days ago


OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday. While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said. Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery, and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms. In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID. Some content also criticized U.S. President Donald Trump's sweeping tariffs, generating X posts such as 'Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?'

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation. A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within U.S. political discourse, including text and AI-generated profile images.

China's Foreign Ministry did not immediately respond to a Reuters request for comment on OpenAI's findings. OpenAI has cemented its position as one of the world's most valuable private companies after announcing a US$40-billion funding round valuing the company at US$300-billion.
