
Latest news tagged #AI-generated

Hong Kong universities told to handle student misconduct ‘seriously’ after AI-generated images scandal

HKFP | Politics | 3 hours ago

Chief Executive John Lee has urged Hong Kong universities to handle student misconduct 'seriously' after a male student allegedly created AI-generated indecent images of more than 20 women.

Universities shoulder the responsibility of developing students' moral character, and any misconduct that infringes on individual rights and privacy should be dealt with seriously, Lee said at a weekly press conference on Tuesday. 'While some misconduct may be dealt with under internal university rules, any act that may contravene the law should be reported to law enforcement agencies for action,' the chief executive said.

Lee's remarks come after a male law student at the University of Hong Kong (HKU) was accused of creating pornographic images of around 20 to 30 women, including his classmates and teachers, without their consent. The student allegedly used photos he found on the women's social media accounts to generate pornographic images using free online artificial intelligence (AI) tools.

HKU said in a statement on Saturday that the university had issued a warning letter to the male student and demanded that he make a formal apology to the women affected. But some women had asked HKU to refer the matter to the university's Disciplinary Committee, according to three anonymous victims who posted a summary of the accusations on Instagram. The victims said their request did not go through because HKU, citing legal opinions, told them that the male student likely did not commit an offence that could be addressed by the Disciplinary Committee.

Lee on Tuesday said most of Hong Kong's laws apply to online behaviour, drawing attention to offences under the Crimes Ordinance and the Personal Data (Privacy) Ordinance. For example, both 'publication or threatened publication of intimate images without consent' and 'access to a computer with dishonest intent' are offences under the Crimes Ordinance, each carrying a maximum penalty of five years in jail.

Lee also said that in April, the government's Digital Policy Office published a guideline for generative AI use, promoting the 'safe and responsible development' of the technology and its related activities. The chief executive did not mention any proposal for legislation regarding AI. 'AI is an emergent technology that evolves very rapidly, with new capabilities and developments changing in a very short period of time,' he said. He added that the government would monitor the development of AI and study overseas regulations of the technology.

Humans Are Starting To Sound And Talk Like ChatGPT, Study Shows

NDTV | Science | 7 hours ago

The rise of artificial intelligence (AI) chatbots such as ChatGPT has changed how humans communicate with each other, a new study has claimed. Researchers at the Max Planck Institute for Human Development in Germany found that humans are starting to speak more like ChatGPT, not the other way around.

The researchers analysed over 360,000 YouTube videos and 771,000 podcast episodes from before and after ChatGPT's release to track the frequency of so-called 'GPT words'. The results showed that since ChatGPT became popular, people have been using certain words much more often, namely words that appear frequently in AI-generated text.

"We detect a measurable and abrupt increase in the use of words preferentially generated by ChatGPT such as delve, comprehend, boast, swift, and meticulous, after its release," the study, published on the preprint server arXiv, highlighted. "These findings suggest a scenario where machines, originally trained on human data and subsequently exhibiting their own cultural traits, can, in turn, measurably reshape human culture. This marks the beginning of a closed cultural feedback loop in which cultural traits circulate bidirectionally between humans and machines."

While previous studies have shown that AI models influence humans' written communication, this is the first time research has shown an impact on spoken language.

ChatGPT, like other AI models, is trained on vast amounts of data drawn from books, websites, forums, Wikipedia, and other publicly available resources. It is then fine-tuned using proprietary techniques and reinforcement learning. The end result is a linguistic and behavioural profile that, while rooted in human language, "exhibits systematic biases that distinguish it from organic human communication".

"The patterns that are stored in AI technology seem to be transmitting back to the human mind," study co-author Levin Brinkmann told Scientific American. "It's natural for humans to imitate one another, but we don't imitate everyone around us equally. We're more likely to copy what someone else is doing if we perceive them as being knowledgeable or important."
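To make the study's approach concrete, here is a minimal Python sketch, assuming a simplified setup rather than the researchers' actual pipeline: it counts how often the marker words quoted above appear per million words in transcripts dated before and after ChatGPT's release. Only the word list and release date come from the article; the transcript data, function names, and tokenisation are hypothetical.

# A minimal sketch (assumed, not the study's actual pipeline) of the
# before/after comparison described above: occurrences of ChatGPT-preferred
# words per million words, split at ChatGPT's release date.
from datetime import date
import re

GPT_WORDS = {"delve", "comprehend", "boast", "swift", "meticulous"}  # quoted in the article
CHATGPT_RELEASE = date(2022, 11, 30)

def rate_per_million(transcripts):
    """Return GPT-word occurrences per million words across transcripts."""
    hits, total = 0, 0
    for text in transcripts:
        tokens = re.findall(r"[a-z']+", text.lower())  # crude word tokeniser
        total += len(tokens)
        hits += sum(token in GPT_WORDS for token in tokens)
    return 1_000_000 * hits / total if total else 0.0

def before_after(dated_transcripts):
    """dated_transcripts: iterable of (date, transcript_text) pairs (hypothetical)."""
    before = [t for d, t in dated_transcripts if d < CHATGPT_RELEASE]
    after = [t for d, t in dated_transcripts if d >= CHATGPT_RELEASE]
    return rate_per_million(before), rate_per_million(after)

# Toy usage: an abrupt jump in the post-release rate is the signal the study reports.
pre, post = before_after([
    (date(2022, 1, 5), "we will look into the data and explain the results"),
    (date(2023, 6, 9), "let us delve into this meticulous and swift analysis"),
])
print(f"before: {pre:.0f}/million, after: {post:.0f}/million")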

Hong Kong opens probe into AI-generated porn scandal at university

GMA Network | Politics | 7 hours ago

HONG KONG — Hong Kong's privacy watchdog said Tuesday it has launched a criminal investigation into an AI-generated porn scandal at the city's oldest university, after a student was accused of creating lewd images of his female classmates and teachers.

Three people alleged over the weekend that a University of Hong Kong (HKU) law student fabricated pornographic images of at least 20 women using artificial intelligence, in what is the first high-profile case of its kind in the Chinese financial hub.

The university sparked outrage over a perceived lenient punishment after it said Saturday it had only sent a warning letter to the student and demanded he apologize. But Hong Kong's Office of the Privacy Commissioner for Personal Data said Tuesday that disclosing someone else's personal data without consent, and with an intent to cause harm, could be an offence. The watchdog "has begun a criminal investigation into the incident and has no further comment at this stage", it said, without mentioning the student.

The accusers said in a statement Saturday that Hong Kong law only criminalizes the distribution of "intimate images", including those created with AI, but not the generation of them. There is no allegation so far that the student spread the deepfake images, and so "victims are unable to seek punishment... through Hong Kong's criminal justice system," they wrote. The accusers said a friend discovered the images on the student's laptop.

Experts warn the alleged use of AI in the scandal may be the tip of a "very large iceberg" of non-consensual imagery. "The HKU case shows clearly that anyone could be a perpetrator, no space is 100 percent safe," Annie Chan, a former associate professor at Hong Kong's Lingnan University, told AFP.

Women's rights advocates said Hong Kong was "lagging behind" in terms of legal protections. "Some people who seek our help feel wronged, because they never took those photos," said Doris Chong, executive director at the Association Concerning Sexual Violence Against Women, referring to cases at the group's crisis center. "The AI generations are so lifelike that their circulation would be very upsetting."

Asked about the case at a Tuesday press briefing, Hong Kong leader John Lee said most of the city's laws "are applicable to activities on the internet." HKU said Saturday it would review the case and take further action if appropriate. — Agence France-Presse

School vows fix after student spread lewd AI pictures

The Star | 18 hours ago

Compiled by DIVYA THERESA RAVI and R. ARAVINTHAN

A SCHOOL in Johor admitted that it had been slow in taking action when one of its students was accused of spreading AI-generated pornographic images of his classmates, Sin Chew Daily reported. Foon Yew High School chairman Tong Sing Chuan admitted that the Chinese independent school was sluggish in addressing the matter and promised reforms. 'Many parents are concerned about this matter. The board is also saddened (that this happened). I stress that the board will not accept or tolerate any form of wrongdoing,' he said. Earlier, it was reported that a 16-year-old male student from the school was arrested in April for using AI to create and sell pornographic images of his female schoolmates and alumni. About 30 to 40 female students and alumni are believed to have become victims. Following the incident, the principal resigned on April 12.

> A woman in China called off her wedding after her fiance was found to be one of dozens of men who had sex with the infamous cross-dressing 'Red Uncle', China Press reported. Chinese cyberspace was inundated with posts about a man dubbed Red Uncle, who was rumoured to have lured 1,691 heterosexual men into his home for sexual encounters. He then recorded and distributed the intimate encounters online. Netizens then went about identifying the men involved. One man, who made many visits to Red Uncle, was nicknamed 'Jacket Man' because he wore a leather jacket in the videos. Netizens identified his social media profile, found pre-wedding photos he had taken with his fiancee, and began sharing them. The woman reportedly called off her wedding and will be getting herself tested for sexually transmitted infections. The 38-year-old Red Uncle, whose surname is Jiao, was arrested by Nanjing police for spreading obscene videos online. The police denied that Jiao had encounters with 1,691 men but did not give a specific figure.

(The above articles are compiled from the vernacular newspapers (Bahasa Malaysia, Chinese and Tamil dailies). As such, stories are grouped according to the respective language/medium. Where a paragraph begins with a >, it denotes a separate news item.)

When ‘good enough’ AI gets you fined (or fired!)

Techday NZ | Business | 19 hours ago

Why speed is no defence when precision fails, and what today's leaders must ask before trusting the tech.

In a world obsessed with faster, cheaper outputs, AI has made 'good enough' look very tempting when it comes to legal and risk advisory outputs. Need an obligation map? "There's a tool for that!" Want to summarise 400 regulatory clauses? "Just prompt the bot." But compliance isn't a race; it's a contract with regulators, stakeholders and the public. And when shortcuts miss the mark, "We used AI" simply won't get you off the hook. In fact, it might raise the bar for what's considered reckless disregard.

Speed ≠ Safety: the case of the collapsing proposal

Let's start with a recent real-life story. A multinational firm wrestling with niche rules recently invited proposals from several firms. Our bid emphasised expertly curated obligation libraries, legal and risk oversight, and 'incremental AI assistance'. Another vendor promised a single platform that would "write all obligations, map all controls and keep them updated automatically". During due diligence, however, the other vendor conceded they could offer speed but not accuracy. They could offer no assurance that the tool's recommendations were accurate or that it would satisfy a regulator asking the reasonable-steps question. The firm's compliance leaders pressed harder: would the vendor underwrite the output? The answer was no. The value proposition collapsed, and along with it the illusion that AI without expert oversight can meet the needs of complex regulated entities and placate their supervisory bodies.

Context ≠ Comprehension: the case where automation missed a real-world control

In another cautionary tale, a high-risk venue operator initially relied on AI-generated risk controls to satisfy venue compliance rules (i.e. no patrons under 18). The tool pulled in industry practice and recommended a range of complex measures, but it completely missed a key, simple, manual control: the presence of two full-time security staff who checked patrons on entry. AI simply couldn't see what wasn't written down. This offers a sobering lesson: just because AI can summarise what's on a page doesn't mean it understands what happens on the ground.

When AI belongs in your compliance stack

None of this is a blanket warning against using AI. Used properly, AI is already driving value in risk and compliance, including:

  • Scanning policy libraries for inconsistent language
  • Flagging emerging risks in real time from complaints or case data
  • Improving data quality at capture
  • Drafting baseline documentation for expert review
  • Identifying change impacts across jurisdictions and business units

But note the pattern: AI handles volume and repetition; humans handle nuance and insight. The most robust use cases right now treat automation as an accelerant, not a replacement. This is because the line between support and substitution must be drawn carefully and visibly.

Ask this first before plugging in your next tool

As regulators pivot from rule-based assessments to 'reasonable steps' accountability, the key question is no longer just "Did we comply?" but "Can we prove we understood the risk and chose the right tools to manage it?" If your AI-assisted compliance map can't explain its logic, show its exclusions or withstand scrutiny under cross-examination, then you don't have a time-saver; you've got a liability.

So before you plug in an 'all-in-one automation' solution, first ask:

  • Will this tool produce explainable and auditable outcomes?
  • Is there clear human oversight at every high-risk stress point?
  • Can we justify our decision to use this tool, especially when something goes wrong?

If the answer to any of these is no, you're not accelerating your compliance strategy; you're undermining it. We all love speed, but in risk, speed without precision is a rounding error waiting to become a headline. Compliance leaders have a duty to make sure that what's fast is also right, and that when it's not, there's someone accountable. In this era of 'good enough' AI, being good is simply no longer good enough. Being right is.
