Hong Kong police search country park for teenager missing for week

The Star, 24-07-2025
Hong Kong police have launched a search in Lion Rock Country Park for a teenage girl who has been missing for a week after new leads suggested she went there.
Daisy Kwok Oi-kwan, 18, was last seen in a shopping centre on Junction Road in Lok Fu on July 14, two days before the release of the results of the university entrance exams, which she had been due to sit but did not.
Her family reported her missing to police. In a development on Tuesday, police reviewed CCTV footage and found that she was last seen heading uphill towards the country park.
Authorities are searching the park to locate her, and the operation was continuing on Tuesday evening.
Kwok, a resident of Mei Foo Sun Chuen private estate in Lai Chi Kok, was last seen wearing a light blue short-sleeved top, long black pants, white trainers and black-rimmed glasses, and carrying a red backpack, according to security camera images taken inside a lift.
She is about 1.7 metres (five feet, seven inches) tall, weighs 55kg (121lbs) and is of medium build. She has long black hair tied in a ponytail and a mole on the left side of her chin.
The girl, who was due to sit the Diploma of Secondary Education exams this year, had been suffering from emotional issues and was taking a break from school, the Post learned.
Anyone who knows of her whereabouts or who may have seen her is urged to contact the regional missing persons unit of Kowloon West on 3661 8036 or 9020 6542. - SOUTH CHINA MORNING POST

Related Articles

Should Hong Kong plug legal gaps to stamp out AI-generated porn?

The Star, 4 days ago

Betrayal was the first thought to cross the mind of a University of Hong Kong (HKU) law student when she found out that a classmate, whom she had considered a friend, had used AI to depict her naked. 'I felt a great sense of betrayal and was traumatised by the friendship and all the collective memories,' said the student, who has asked to be called 'B'. She was not alone. More than 700 deepfake pornographic images were found on her male classmate's computer, including AI-generated pictures of other HKU students. B said she felt 'used' and that her bodily autonomy and dignity had been violated.

Angered by what she saw as inaction by HKU in the aftermath of the discovery and the lack of protection under existing city laws, B joined forces with two other female students to set up an Instagram account to publicise the case. HKU said it had issued a warning letter to the male student and told him to make a formal apology. It later said it was reviewing the incident and vowed to take further action, with the privacy watchdog also launching a criminal investigation. But the women said they had no plans to file a police report as the city lacked the right legal 'framework'.

The case has exposed gaps in legislation around the non-consensual creation of images generated by artificial intelligence (AI), especially when the content was only used privately and not published to a wider audience. Authorities have also weighed in and pledged to examine regulations in other jurisdictions to decide on the path forward. Hong Kong has no dedicated laws tackling copyright issues connected to AI-generated content or covering the creation of unauthorised intimate images that have not been published. Legal experts told the Post that more rules were needed, while technology sector players urged authorities to avoid stifling innovation through overregulation.

The HKU case is not the only one to have put the spotlight on personal information being manipulated or used by AI without consent. In May, a voice that was highly similar to that of an anchor from local broadcaster TVB was found in an advertisement and a video report from another online news outlet. The broadcaster, which said it suspected another news outlet and an advertiser had used AI to create the voice-over, warned that the technology was capable of creating convincing audio of a person's voice without prior authorisation. Separately, a vendor on an e-commerce platform was found to be selling AI-created videos mimicking a TVB news report with lifelike anchors delivering birthday messages. In response to a Post inquiry, TVB's assistant general manager in corporate communications, Bonnie Wong, said the broadcaster had issued a warning to the vendor and alerted the platform to demand the immediate removal of the infringing content.

What the law says

Lawyers and a legal academic who spoke to the Post pointed to gaps in criminal, privacy and copyright laws regarding the production of intimate images using AI without consent or providing sufficient remedies for victims. Barrister Michelle Wong Lap-yan said prosecutors would have a hard time laying charges using existing offences such as access to a computer with criminal or dishonest intent and publication, or threatening to publish intimate images without consent. Both offences are under the Crimes Ordinance. The barrister, who has experience advising victims of sexual harassment, said the first offence governing computer use only applied when a perpetrator accessed another person's device.
'Hence, if the classmate generated the images using his own computer, the act may not be caught under [the offence],' Wong said. As for the second offence, a perpetrator needed to publish the images or at least threaten to do so. 'Unless the classmate threatened to publish the deepfake images, he would not be caught under this newly enacted legislation [that came into force in 2021],' Wong said. She said that 'publishing' was defined broadly under the Crimes Ordinance to cover situations such as when a person distributed, circulated, made available, sold, gave or lent images to another person. The definition also included showing the content to another person in any manner.

Craig Choy Ki, a US-based barrister, told the Post that challenges also existed in using privacy and copyright laws. He said the city's anti-doxxing laws only applied if the creator of the image disclosed the intimate content to other people. Privacy laws could come into play depending on how the creator had collected personal information when generating the images, and whether they had been published without permission to infringe upon the owner's 'moral rights'.

The three alleged victims in the HKU case told the Post that a friend of the male student had discovered the AI-generated pornographic images when borrowing his computer in February. 'As far as we know, [the male student] did not disclose the images to his friend himself,' the trio said. They added that the person who made the discovery told them that images of them were among the files.

Stuart Hargreaves, an associate professor from the Chinese University of Hong Kong's law faculty, said legislation covering compensation for harm was inadequate for helping victims of non-consensual AI-produced intimate images. In an academic paper, Hargreaves argued that a court might not deem such images, which were known to be fake and fabricated, as able to cause reputational harm if created or published without consent. Suing the creator for deliberately causing emotional distress could also be difficult given the level of harm required and the threshold usually required by a court. 'In the case of targeted synthetic pornography, the intent of the creator is often for private gratification or esteem building in communities that trade the images. Any harm to the victim may often be an afterthought,' Hargreaves wrote.

A Post check found that major AI chatbots prohibited the creation of pictures using specific faces of real people or creating sexual content, but free-of-charge platforms specialising in the generation of such content were readily available online. The three female HKU students also told the Post that one of the pieces of software used to produce the more than 700 intimate images found on their classmate's laptop was a tool described on its website as a 'free AI undressing tool that allows users to generate images of girls without clothing'. The provider is quoted as saying: 'There are no laws prohibiting the use of [the] application for personal purposes.' It also describes the software as operating 'within the bounds of current legal frameworks'. When a user clicks a button to create a 'deepnude', a sign-in page tells them they must be aged over 18, that they cannot use another person's photo without their permission and that they will be responsible for the images created. But users can simply ignore the messages.
The Post found that the firm behind this software, Itai Tech Limited, is under investigation by the UK's communications services regulator for allegedly providing ineffective protection to prevent children from accessing pornographic content. The UK-registered firm is also being sued by the San Francisco City Attorney for operating websites producing non-consensual intimate images of adults. The Post has reached out to Itai Tech for comment.

How can deepfake porn be regulated?

Responding to an earlier Post inquiry in July, the Innovation, Technology and Industry Bureau said it would continue to monitor the development and application of AI in the city, draw references from elsewhere and 'review the existing legislation' if necessary. Chief Executive John Lee Ka-chiu also pledged to examine global regulatory trends and research international 'best practices' on regulating the emerging technology.

Some overseas jurisdictions have introduced offences for deepfake porn that extend beyond the publication of such content. In Australia, the creation and transmission of deepfake pornographic images made without consent was banned last year, with offenders facing up to seven years in jail if they had also created the visuals. South Korea also established offences last year for possessing and viewing deepfake porn. Offenders can be sentenced to three years' imprisonment or fined up to 30 million won (US$21,666). The United Kingdom has floated a proposal to ban the creation of sexually explicit images of adults without their consent, regardless of whether the creator intended to share the content. Perpetrators would have the offence added to their criminal record and face a fine, while those who shared their unauthorised creations could face jail time.

The lawyers also noted that authorities would need to lay out how a law banning the non-consensual creation or possession of sexually explicit images could be enforced. Wong, the barrister, said law enforcement and prosecutors should have adequate experience in establishing and implementing the charges with reference to existing legislation targeting child pornography. In such cases, authorities can access an arrested person's electronic devices with their consent or with a court warrant.

Is a legal ban the right solution?

Hargreaves suggested Hong Kong consider establishing a broad legal framework to target unauthorised sexually explicit deepfake content with both civil and criminal law reforms. He added that criminal law alone was insufficient to tackle such cases, which he expected to become more common in the future. 'It is a complicated problem that has no single silver bullet solution,' Hargreaves said. The professor suggested the introduction by statute of a new right of action based upon misappropriation of personality to cover the creation, distribution or public disclosure of visual or audiovisual material that depicted an identifiable person in a false sexual context without their consent. Its goal would be to cover harm to a victim's dignity and allow them to seek restitution and quicker remedies.

United States-based barrister Choy said the city could establish a tiered criminal framework with varying levels of penalties, while also allowing defences such as satire or news reports. He said the law should only apply when an identifiable person was depicted in the content. But Hargreaves and Choy cautioned that any new law would have to strike a balance with freedom of expression, especially for cases in which content had not been published.
The lawyers said it would be difficult to draw up a legal boundary between what might be a private expression of sexual thought and protection for people whose information was appropriated without their consent. Hargreaves said films would not be deemed illegal for portraying a historical or even a current figure engaging in sexual activity, adding that a large amount of legitimate art could be sexually explicit without infringing upon the law. But the same could not be said for pornographic deepfake content depicting a known person without their consent. 'Society should express disapproval of the practice of creating sexually explicit images of people without their consent, even if there is no intent that those images be shared,' Hargreaves said.

Choy said the city would need to consult extensively across society on the acceptable moral standards to decide which legal steps would be appropriate and to avoid setting up a 'thought crime' for undisclosed private material. 'When we realise there is a moral issue, the use of the law is to set the lowest acceptable bar in society as guidelines for everyone. The discussion should include how the legal framework should be laid out and whether laws should be used,' he said. But penalties might only provide limited satisfaction for victims, as they had potentially suffered emotional trauma that was not easily remedied under the law. The barrister added that the city should also consider if the creator of the unauthorised content should be the sole bearer of legal consequences, as well as whether AI service providers should have some responsibility. 'I think we sometimes overweaponise the law, thinking that it would solve all problems. With AI being such a novel technology which everyone is still figuring out its [moral] boundaries, education is also important,' Choy said.

Will laws hurt tech progress?

Lawmakers with ties to the technology sector have expressed concerns over possible legislation banning the creation of deepfake pornography without others' consent, saying that the law could not completely stamp out misuse and the city risked stifling a rapidly developing area. Duncan Chiu, a legislator representing the technology sector, likened the potential ban on creating the content to prohibition on possession of pornography. He said a jurisdiction could block access to websites within its remit but was unable to prevent internet users from accessing the same material in another country that permitted the content. 'This situation is also found elsewhere on the internet. One jurisdiction enacts a law, but you can't stop people from accessing the said software in other countries or regions,' Chiu said.

The lawmaker said AI platforms had been working together to establish regulations on ethical concerns, such as labelling images with watermarks to identify them as digital creations. He also gave the example of 'hallucinations', which are incorrect or misleading results generated by AI models, saying the city did not need to legislate over the generation of false content. 'Many generative AI programmes are able to reveal sources behind their answers with a click of a button. Hallucinations can be adjusted away too. It would have been wrong to legislate against this back then, as technology would advance past this,' Chiu said.

Johnny Ng Kit-chong, a lawmaker who has incubated tech start-ups in the city, said a social consensus on the use of AI was needed before moving to the legislative stage.
Ng said that banning the creation of sexually explicit deepfake content produced without consent would leave the burden on technology firms to establish rules on their platforms. 'This might not affect society much as most people would think regulations on sexual images are [inappropriate]. However, to start-ups, they would be limited in their development of artificial intelligence functions,' Ng said.

Ng said Hong Kong could reference the EU Artificial Intelligence Act, which classified AI by the amount of risk with corresponding levels of regulation. Some practices, such as using the technology for social scoring, were banned. But the use of AI to generate sexual content was permitted and not listed as a high-risk practice.

Yeung Kwong-chak, CEO of start-up DotAI, told the Post that businesses might welcome more clarity on legal boundaries. His firm uses AI to provide corporate services and also offers training courses for individuals and businesses to learn how to harness the technology. 'Large corporations are concerned about risks [associated with using new technologies], such as data leaks or if AI-generated content may offend internal policies. Having laws will give them a sense of certainty in knowing where the boundaries are, encouraging their uptake of the technology,' Yeung said.

The three students in the HKU case said they hoped the law would keep up with the times to offer protection to victims, provide a deterrent and outlaw the creation of pornographic deepfake images without consent. 'The risk of the creation of deepfake images does not lie only in "possession", but more importantly in the acquisition of indecent images without consent, and the potential distribution of images [of which the] authenticity may become more and more difficult to distinguish as AI continues to develop,' they warned. They added that 'permanent and substantial' punishment for creators of these non-consensual images would do the most to ease their concerns. 'We hope the punishment can be sufficient so that he can be constantly, impactfully reminded of his wrongdoing and not commit such an irritating act ever again,' the students said. - SOUTH CHINA MORNING POST

Hong Kong police arrest 82 triad suspects, seize assets worth HK$15 million

The Star, 6 days ago

Hong Kong police have arrested 82 people in a citywide anti-triad operation, breaking up a syndicate that laundered nearly HK$40 billion (US$5.1 billion) and seizing over HK$15 million worth of assets, including a giant Labubu doll estimated to cost HK$1 million.

The force said on Wednesday that officers raided multiple locations across the city during the crackdown, which was launched in the early hours of Tuesday. The suspects, comprising 55 men and 27 women aged 19 to 78, included the mastermind and core members of a syndicate that police have been investigating for the past 2½ years, according to Chief Superintendent Kwan King-pan of the organised crime and triad bureau. 'Police have been paying attention to a senior leader of a triad society and his gang's expansion of territory and power,' he said. 'The triad's method of operation is sophisticated, involving a wide range of criminal activities, and its members are cunning and arrogant.'

Kwan said the mastermind was a 44-year-old man who ran a trust company set up in 2021 to launder nearly HK$40 billion. On top of traditional triad crimes involving sex, gambling, drugs and illegal loans, the mastermind also used his friends and relatives to carry out large-scale money laundering activities, Kwan said. 'The mastermind used a trust company to conduct loan fraud under the guise of a legitimate import trade business,' he said. 'He has no assets under his name and used his triad members, family and friends to launder more than HK$100 million in property, cars, watches, liquor and other luxury items.'

During the operation, police froze HK$1.13 billion of suspected criminal proceeds and around HK$8 million of cash. Among the items seized by the force were a large number of bank documents and HK$7 million worth of assets linked to money laundering, including 11,000 bottles of wine, expensive watches, jewellery, handbags, gold, tea and a giant Labubu figurine. A source told the Post that the human-sized Labubu was worth an estimated HK$1 million. A similar 1.6-metre-tall (5.2 feet) Labubu doll was sold for 820,000 yuan (US$114,200) in Beijing at an auction by Yongle International Auction in June. According to Pop Mart, only 15 such figures exist around the world.

Kwan said he believed the syndicate was dismantled during the police operation, with its revenue stream significantly undermined. Chief Superintendent Cheng Lai-ki of the financial intelligence and investigation bureau said the group established the trust company in 2021 and used it to transfer HK$39.6 billion. 'Many of these sums were transferred from shell companies with no actual business operations to mule accounts, cryptocurrency platforms or used by the mastermind to pay off personal credit card loans and purchase luxury items,' she said. Cheng said the company submitted forged documents to apply for bank loans, but the documents were riddled with errors, including incorrect identity numbers and dates.

Kwan said investigations were still ongoing, and more arrests were possible. - SOUTH CHINA MORNING POST

Hong Kong police investigate Cambodian scam farm link to missing girl

The Star, 26-07-2025

Hong Kong police are investigating whether a teenage girl who went missing for several weeks had been involved in cross-border scam farm activities, with mobile phone traces showing she had been in Cambodia before she was found safe.

Hong Kong identity card holder Wu Peishan, 17, had left her hometown of Haifeng in Guangdong province on July 1 for a leisure trip to Beihai, Guangxi, accompanied by friends, her father told mainland Chinese media. He called her directly on July 5 after failing to reach her by other methods. At first, she claimed to be in Shenzhen, but later admitted she was in Beihai. On July 9, a friend told her mother that she had been out of contact for two days, sparking their concern, reports said.

The teenager's phone location on July 5 showed she was in Vietnam, according to a police insider. The parents found that the girl's phone location on July 7 was Svay Rieng, a border province in southeastern Cambodia. This was reportedly the location of the last available signal.

The worried parents filed police reports in both Hong Kong and Haifeng. Her father also posted a US$5,000 reward on Southeast Asian social media platforms, seeking information on her whereabouts in Cambodia. On July 13, someone reached out claiming to have seen her inside a scam compound near the Cambodian border. After reportedly dispatching contacts to investigate the area, her father said there was no trace of her.

On Wednesday, a Hong Kong police source said the girl had been located safely in Cambodia and was currently under the care of Chinese authorities in the country. Arrangements for her return were pending.

The Post learned that the family, originally from Haifeng, had resident status in Hong Kong. The teenager had returned to the mainland several years ago to live with her mother. Hong Kong police had launched a 'request for police assistance' investigation after receiving calls from the parents, the source said. The Kowloon West regional crime unit is handling the case. - SOUTH CHINA MORNING POST
