
Latest news with #AI-created

Should Hong Kong plug legal gaps to stamp out AI-generated porn?

The Star

6 days ago

  • The Star


Betrayal was the first thought to cross the mind of a University of Hong Kong (HKU) law student when she found out that a classmate, whom she had considered a friend, had used AI to depict her naked.

'I felt a great sense of betrayal and was traumatised by the friendship and all the collective memories,' said the student, who has asked to be called 'B'.

She was not alone. More than 700 deepfake pornographic images were found on her male classmate's computer, including AI-generated pictures of other HKU students. B said she felt 'used' and that her bodily autonomy and dignity had been violated.

Angered by what she saw as inaction by HKU in the aftermath of the discovery and the lack of protection under existing city laws, B joined forces with two other female students to set up an Instagram account to publicise the case. HKU said it had issued a warning letter to the male student and told him to make a formal apology. It later said it was reviewing the incident and vowed to take further action, with the privacy watchdog also launching a criminal investigation. But the women said they had no plans to file a police report as the city lacked the right legal 'framework'.

The case has exposed gaps in legislation around the non-consensual creation of images generated by artificial intelligence (AI), especially when the content was only used privately and not published to a wider audience. Authorities have also weighed in and pledged to examine regulations in other jurisdictions to decide on the path forward.

Hong Kong has no dedicated laws tackling copyright issues connected to AI-generated content or covering the creation of unauthorised intimate images that have not been published. Legal experts told the Post that more rules were needed, while technology sector players urged authorities to avoid stifling innovation through overregulation.

The HKU case is not the only one to have put the spotlight on personal information being manipulated or used by AI without consent.
In May, a voice highly similar to that of an anchor from local broadcaster TVB was found in an advertisement and a video report from another online news outlet. The broadcaster, which said it suspected the outlet and an advertiser had used AI to create the voice-over, warned that the technology was capable of producing convincing audio of a person's voice without prior authorisation.

Separately, a vendor on an e-commerce platform was found to be selling AI-created videos mimicking a TVB news report, with lifelike anchors delivering birthday messages. In response to a Post inquiry, TVB's assistant general manager in corporate communications, Bonnie Wong, said the broadcaster had issued a warning to the vendor and alerted the platform to demand the immediate removal of the infringing content.

What the law says

Lawyers and a legal academic who spoke to the Post pointed to gaps in criminal, privacy and copyright laws covering the non-consensual production of intimate images using AI, and to the lack of sufficient remedies for victims.

Barrister Michelle Wong Lap-yan said prosecutors would have a hard time laying charges using existing offences such as access to a computer with criminal or dishonest intent, and publication of, or threatening to publish, intimate images without consent. Both offences fall under the Crimes Ordinance.

The barrister, who has experience advising victims of sexual harassment, said the first offence, governing computer use, only applied when a perpetrator accessed another person's device. 'Hence, if the classmate generated the images using his own computer, the act may not be caught under [the offence],' Wong said.

As for the second offence, a perpetrator needed to publish the images or at least threaten to do so. 'Unless the classmate threatened to publish the deepfake images, he would not be caught under this newly enacted legislation [that came into force in 2021],' Wong said.
She said that 'publishing' was defined broadly under the Crimes Ordinance to cover situations such as when a person distributed, circulated, made available, sold, gave or lent images to another person. The definition also included showing the content to another person in any manner.

Craig Choy Ki, a US-based barrister, told the Post that challenges also existed in using privacy and copyright laws. He said the city's anti-doxxing laws only applied if the creator of the image disclosed the intimate content to other people. Privacy laws could come into play depending on how the creator had collected personal information when generating the images, and whether they had been published without permission to infringe upon the owner's 'moral rights'.

The three alleged victims in the HKU case told the Post that a friend of the male student had discovered the AI-generated pornographic images when borrowing his computer in February. 'As far as we know, [the male student] did not disclose the images to his friend himself,' the trio said. They added that the person who made the discovery told them they were among the files.

Stuart Hargreaves, an associate professor from the Chinese University of Hong Kong's law faculty, said legislation covering compensation for harm was inadequate for helping victims of non-consensual AI-produced intimate images. In an academic paper, Hargreaves argued that a court might not deem such images, which were known to be fake and fabricated, as able to cause reputational harm if created or published without consent. Suing the creator for deliberately causing emotional distress could also be difficult given the level of harm required and the threshold usually applied by a court.

'In the case of targeted synthetic pornography, the intent of the creator is often for private gratification or esteem building in communities that trade the images. Any harm to the victim may often be an afterthought,' Hargreaves wrote.
A Post check found that major AI chatbots prohibited the creation of pictures using the faces of real people or the creation of sexual content, but free-of-charge platforms specialising in generating such material were readily available online.

The three female HKU students also told the Post about one of the pieces of software used to produce the more than 700 intimate images found on their classmate's laptop. On its website, the software is described as a 'free AI undressing tool that allows users to generate images of girls without clothing'. The provider is quoted as saying: 'There are no laws prohibiting the use of [the] application for personal purposes.' It also describes the software as operating 'within the bounds of current legal frameworks'.

Clicking on a button to create a 'deepnude', a sign-in page tells users they must be aged over 18, that they cannot use another person's photo without their permission and that they will be responsible for the images created. But users can simply ignore the messages.

The Post found that the firm behind this software, Itai Tech Limited, is under investigation by the UK's communications services regulator for allegedly providing ineffective protection to prevent children from accessing pornographic content. The UK-registered firm is also being sued by the San Francisco City Attorney for operating websites producing non-consensual intimate images of adults. The Post has reached out to Itai Tech for comment.

How can deepfake porn be regulated?

Responding to an earlier Post inquiry in July, the Innovation, Technology and Industry Bureau said it would continue to monitor the development and application of AI in the city, draw references from elsewhere and 'review the existing legislation' if necessary. Chief Executive John Lee Ka-chiu also pledged to examine global regulatory trends and research international 'best practices' on regulating the emerging technology.
Some overseas jurisdictions have introduced offences for deepfake porn that extend beyond the publication of such content. In Australia, the transmission of deepfake pornographic images made without consent was banned last year, with offenders facing up to seven years in jail if they had also created the visuals. South Korea also established offences last year for possessing and viewing deepfake porn. Offenders can be sentenced to three years' imprisonment or fined up to 30 million won (US$21,666).

The United Kingdom has floated a proposal to ban the creation of sexually explicit images of adults without their consent, regardless of whether the creator intended to share the content. Perpetrators would have the offence added to their criminal record and face a fine, while those who shared their unauthorised creations could face jail time.

The lawyers also noted that authorities would need to lay out how a law banning the non-consensual creation or possession of sexually explicit images could be enforced. Wong, the barrister, said law enforcement and prosecutors should have adequate experience in establishing and implementing the charges, with reference to existing legislation targeting child pornography. In such cases, authorities can access an arrested person's electronic devices with their consent or with a court warrant.

Is a legal ban the right solution?

Hargreaves suggested Hong Kong consider establishing a broad legal framework to target unauthorised sexually explicit deepfake content with both civil and criminal law reforms. He added that criminal law alone was insufficient to tackle such cases, which he expected to become more common in the future. 'It is a complicated problem that has no single silver bullet solution,' Hargreaves said.
The professor suggested the introduction by statute of a new right of action based upon misappropriation of personality, covering the creation, distribution or public disclosure of visual or audiovisual material that depicted an identifiable person in a false sexual context without their consent. Its goal would be to cover harm to a victim's dignity and allow them to seek restitution and quicker remedies.

United States-based barrister Choy said the city could establish a tiered criminal framework with varying levels of penalties, while also allowing defences such as satire or news reports. He said the law should only apply when an identifiable person was depicted in the content.

But Hargreaves and Choy cautioned that any new law would have to strike a balance with freedom of expression, especially for cases in which content had not been published. The lawyers said it would be difficult to draw a legal boundary between what might be a private expression of sexual thought and protection for people whose information was appropriated without their consent.

Hargreaves said films would not be deemed illegal for portraying a historical or even a current figure engaging in sexual activity, adding that a large amount of legitimate art could be sexually explicit without infringing upon the law. But the same could not be said for pornographic deepfake content depicting a known person without their consent. 'Society should express disapproval of the practice of creating sexually explicit images of people without their consent, even if there is no intent that those images be shared,' Hargreaves said.

Choy said the city would need to consult extensively across society on acceptable moral standards to decide which legal steps would be appropriate and to avoid setting up a 'thought crime' for undisclosed private material. 'When we realise there is a moral issue, the use of the law is to set the lowest acceptable bar in society as guidelines for everyone.
The discussion should include how the legal framework should be laid out and whether laws should be used,' he said.

But penalties might only provide limited satisfaction for victims, who had potentially suffered emotional trauma not easily remedied under the law. The barrister added that the city should also consider whether the creator of the unauthorised content should be the sole bearer of legal consequences, and whether AI service providers should bear some responsibility. 'I think we sometimes overweaponise the law, thinking that it would solve all problems. With AI being such a novel technology which everyone is still figuring out its [moral] boundaries, education is also important,' Choy said.

Will laws hurt tech progress?

Lawmakers with ties to the technology sector have expressed concerns over possible legislation banning the creation of deepfake pornography without consent, saying that such a law could not completely stamp out misuse and that the city risked stifling a rapidly developing field.

Duncan Chiu, a legislator representing the technology sector, likened a potential ban on creating the content to the prohibition on possession of pornography. He said a jurisdiction could block access to websites within its remit but could not prevent internet users from accessing the same material in another country that permitted the content. 'This situation is also found elsewhere on the internet. One jurisdiction enacts a law, but you can't stop people from accessing the said software in other countries or regions,' Chiu said.

The lawmaker said AI platforms had been working together to establish regulations on ethical concerns, such as labelling images with watermarks to identify them as digital creations. He also gave the example of 'hallucinations', incorrect or misleading results generated by AI models, saying the city did not need to legislate over the generation of false content.
'Many generative AI programmes are able to reveal the sources behind their answers with a click of a button. Hallucinations can be adjusted away too. It would have been wrong to legislate against this back then, as technology would advance past this,' Chiu said.

Johnny Ng Kit-chong, a lawmaker who has incubated tech start-ups in the city, said a social consensus on the use of AI was needed before moving to the legislative stage. Ng said that banning the creation of sexually explicit deepfake content produced without consent would leave the burden on technology firms to establish rules on their platforms. 'This might not affect society much as most people would think regulations on sexual images are [inappropriate]. However, to start-ups, they would be limited in their development of artificial intelligence functions,' Ng said.

Ng said Hong Kong could reference the EU Artificial Intelligence Act, which classified AI by the amount of risk, with corresponding levels of regulation. Some practices, such as using the technology for social scoring, were banned. But the use of AI to generate sexual content was permitted and not listed as a high-risk practice.

Yeung Kwong-chak, CEO of start-up DotAI, told the Post that businesses might welcome more clarity on legal boundaries. His firm uses AI to provide corporate services and also offers training courses for individuals and businesses to learn how to harness the technology. 'Large corporations are concerned about risks [associated with using new technologies], such as data leaks or if AI-generated content may offend internal policies. Having laws will give them a sense of certainty in knowing where the boundaries are, encouraging their uptake of the technology,' Yeung said.

The three students in the HKU case said they hoped the law would keep up with the times to offer protection to victims, provide a deterrent and outlaw the creation of pornographic deepfake images without consent.
'The risk of the creation of deepfake images does not lie only in 'possession', but more importantly in the acquisition of indecent images without consent, and the potential distribution of images [of which the] authenticity may become more and more difficult to distinguish as AI continues to develop,' they warned.

They added that 'permanent and substantial' punishment for creators of these non-consensual images would do the most to ease their concerns. 'We hope the punishment can be sufficient so that he can be constantly, impactfully reminded of his wrongdoing and not commit such an irritating act ever again,' the students said.

- SOUTH CHINA MORNING POST

Meta's ban on political ads will mean less transparency and more slop for users

Irish Examiner

30-07-2025

  • Politics
  • Irish Examiner


Meta's announcement on Friday that it will ban all political ads in the EU on Facebook and Instagram is terrible not just for political advertisers but for the public as a whole.

I was Facebook's first Political Ads representative in Europe, and I launched their Political Ad authorisation and oversight process before leaving the company in 2019. I now advise political campaigners on how to use digital tools, including ads.

From Obama through the Arab Spring to Marriage Equality, at first we were convinced social media was a force for good in elections. But Brexit, Trump and the Cambridge Analytica scandal brought a reckoning: social media companies should be held responsible for nefarious political marketing, given that bad actors now had a way to reach the electorate at a scale and impact never seen before.

Facebook's response at the time was to launch a complex authorisation and transparency process for all political advertising on its platforms. Anyone who wanted to run ads about politics, elections or 'social issues' had to prove they were located in the country their ad targeted, providing their national ID, a locally-billed credit card and their IP address. Verifiable contact details for the organisation promoted would be included in an 'Ads Library', where all political ads would be saved for seven years. The Ad Library would also show how much money had been spent on those ads, and the demographics of the people targeted; it is still publicly viewable today. All political ads then required a 'Paid for by' disclaimer, including the name of the paying organisation within the ad itself.
This process was cumbersome but effective, if imperfect, in protecting democracies against foreign interference in political matters and in countering 'astro-turfing': setting up fake grassroots organisations to create the impression of widespread local support for a niche position (as we have seen the far right attempt many times, including here in Ireland). The Ads Library also provided oversight to journalists, regulators and the general public, who can see what political ads have been run, by whom, how much money was spent, and roughly who those ads targeted.

In 2019, Facebook (now Meta) knew that it was unlikely to make enough money back from political advertising in the EU to justify the investment in these oversight tools. Restrictions on political fundraising and election expenditure in most EU countries mean we will never see the levels of spending on election advertising seen in the US. Even as a sales rep for Government & Politics marketing, I never had a revenue target for political ads.

For Facebook, trying to atone for its failures around Cambridge Analytica, political advertising in the EU was essentially a loss leader: if political parties and governments trusted Facebook, and used its tools to help run elections, then they might feel more favourably towards the company within broader business-related legislation.

Six years later, the mood has shifted both politically and at the now-renamed Meta. EU legislation has finally caught up with micro-targeting tactics, and in October 2025 the Transparency and Targeting of Political Advertising (TTPA) regulation comes into force.
Google, which never had as thorough an authorisation and transparency process for political ads as Meta's, announced in late 2024 that it would ban political ads in the EU. But until now, most of us in the political campaigning world were confident that Meta's political ad process, with its local authorisation, verification and transparency through the Ad Library, would be sufficient to meet its obligations under the TTPA.

Not so, says Markus Reinisch, Meta's VP of Public Policy Europe. The TTPA would require further changes to Meta's political ad processes 'at significant and unsustainable cost', 'without guarantee that our solution would be viewed as compliant'. Non-compliance with the TTPA leaves tech platforms facing penalties of up to 6% of their turnover; for Meta, this could be as high as €7bn. Still smarting from a €200m EU fine in April, Meta is now saying the quiet part out loud: political ads in the EU don't make the company enough money to justify the investment in authorisation and transparency. The threat of such heavy fines seals the deal: Meta would rather lose its meagre European political ads revenue than take the risk of non-compliance.

So does the TTPA achieve what it had hoped: protecting Europeans from bad political campaigners who use creepy social media tactics, and increasing transparency around electoral processes online? Sadly, the outcome will be the exact opposite.

'Rage bait'

Sensationalism and 'rage bait' fuel the organic algorithm - the tech behind most social platforms which decides which non-paid content to show people, out of the millions of stories, pictures and videos posted each day. It's extremely hard for mainstream 'vanilla' political content - sharing work on homelessness, or Dáil speeches, or inviting people to a public meeting - to get picked up by the organic algorithm and shown in social media feeds.
Meta ads have filled this gap for most political campaigners in Ireland and the EU until now - pay a small amount and ensure your content reaches your constituency, or people across the country who care about the theme you're discussing. Non-mainstream political voices, particularly those on the far right, have always been more willing to create organic content which 'games' the algorithm - gets picked up and shown to more people - by creating outrage, shock, sensationalism or disinformation. Now, in the age of AI video production, content creators will be able to churn out algorithm-baiting videos at a scale never seen before.

Without verification against fake profiles and astro-turfing, bad actors can create multiple profiles to share this content widely and quickly, before it can be detected - as happened in Romania's recent election, where tens of thousands of fake TikTok accounts churned out millions of views of AI-created disinformation, ultimately resulting in an annulled election. These content bot farms were driven from outside Romania: without any localisation oversight, foreign interference in political social media will rise. TikTok does not allow political 'ads', but had insufficient detection mechanisms to prevent this non-paid overseas manipulation of a democratic process.

Meanwhile, as we watch the dodgy money move into these unregulated spaces, there will be fewer (likely almost zero) counter voices from mainstream political parties and activists.
Of those who manage to gain organic reach through the algorithm, incumbent politicians and parties who have built up large followings on social media will be at a significant advantage over new candidates, smaller parties and niche movements.

Finally, with no public library of political content on Facebook and Instagram, there will no longer be any oversight of what's out there - no information for journalists or the public, no record for regulators to review after elections. By backing away from its own Political Ad Library, Meta will be taking away from the EU a profoundly important tool for transparency - the exact opposite of what the TTPA intended.

The briefing shared by the Department of Housing, Local Government and Heritage on Friday stated that 'the regulation aims to make it easier for voters to recognise political advertisements, understand who is behind them and know whether they have received a targeted advertisement, so that they are better placed to make informed choices during elections and referendums.'

The true outcome of Meta's exit from political advertising will be an EU electorate subjected to unregulated, sensationalist AI slop, without insight or accountability. Will this steer electoral outcomes? Our presidential election in October may yet shape up to be more fractious than we thought.

Clare O'Donoghue Velikić is the Director of ODV Digital, a digital consultancy for the political and campaigning sector, based in Dublin.

Spanish Teen Being Investigated Over Nude AI Images Of Classmates

NDTV

27-07-2025

  • Politics
  • NDTV


Spanish police said Sunday they were investigating a 17-year-old on suspicion of using artificial intelligence to create deepfake nude images of female classmates for sale.

The probe was launched after 16 young women at an educational institute in the Valencia region came forward to complain of AI-generated images of them circulating on social media and the internet. The first complaint was lodged in December by an adolescent who said an AI-generated video and faked photos resembling her "completely naked" were posted on a social media account started under her name.

As more accusations came in, police suspected the images were the work of a student at the same institute, according to a police statement. Tracking the IP addresses used to create the bogus accounts led them to the home of the 17-year-old, now under investigation on suspicion of corruption of minors.

It is not the first time Spanish authorities have detected AI-created pornographic images of minors. The government said in March it would put forward a law to treat such deepfaked sexual imagery created without consent as a crime. The bill, which Madrid claims is a first in Europe, has yet to be passed by parliament.

