Hong Kong Outlaws a Video Game for Promoting ‘Armed Revolution'

Hong Kong's national security police have a new target in their sights: gamers.
In a stern warning issued Tuesday, they effectively banned a Taiwanese video game they described as 'advocating armed revolution,' saying anyone who downloaded or recommended it would face serious legal charges. The move comes as the authorities continue to tighten control over online content they consider a threat to the Chinese city.
'Reversed Front: Bonfire' is an online war strategy game released by a Taiwanese group. The game is illustrated in a colorful manga style, and players can take the roles of 'propagandists, patrons, spies or guerrillas' from Taiwan, Mongolia and the Chinese territories of Hong Kong, Xinjiang and Tibet in plots and simulated battles against China's ruling Communist Party. Alternatively, players can choose to represent government fighters.
The game was removed from Apple's app store in Hong Kong on Wednesday, but remains available elsewhere.
But it had already been out of reach for many gamers. It was never available in mainland China, and earlier this month Google removed 'Reversed Front' from its app store, citing hateful language, according to the developers.
ESC Taiwan, the group behind the game, is a collective of anonymous volunteers who are outspoken critics of China's Communist Party. Its products, which include a board game released in 2020, are supported by crowdfunded donations.
The developers said that the removal of the game demonstrated how mobile apps in Hong Kong are subject to the type of political censorship seen in mainland China. 'Our game is precisely accusing and revealing such intentions,' the group's representatives said in an email.

Related Articles

China Market Update: Risk Off Night On Middle East Tensions & Trump Tariff Saber Rattling

Forbes · an hour ago

Despite a weaker US dollar, Asian equities were off overnight on growing chatter of a coming Middle East crisis and Trump's reiteration of tariff threats. On the former, Mainland media noted that the International Atomic Energy Agency Board of Governors stated Iran had 'failed to comply with its nuclear safeguard obligations' for the first time in twenty years. No idea if there is any connection with the Israel-Iran concerns. On the latter, Chinese company JL MAG Rare-Earth stated it 'had obtained export licenses issued by national authorities for exports to destinations including the US, Europe, and Southeast Asia.' This follows last Saturday's Ministry of Commerce license approval, as May rare earth exports increased 23% month over month and 2.3% year over year.

Hong Kong underperformed as growth stocks favored by foreign investors were hit. Interesting to see the US dollar lower versus every Asian currency except India's, with Hong Kong's top ten most heavily traded stocks by value all down for the day. Alibaba -3.21% despite Ant filing for a Hong Kong stablecoin license, while Tencent -1.54% on chatter that it will buy South Korea's Nexon Games (225570 KS) for $15B. Hong Kong and Mainland healthcare stocks rebounded after I jinxed them yesterday by noting their recent outperformance. Precious metals had a good day in both Hong Kong and Mainland China. Auto/EV/hybrid names were weak on news that several cities have suspended used-car purchase subsidies following reports that dealers were selling new cars as used to garner the subsidies.

The Ministry of Commerce and the Ministry of Foreign Affairs held separate afternoon press conferences reiterating the progress made in London in US-Sino relations. They also reiterated that the London meeting confirmed what Trump and Xi had outlined in their June 5th phone conversation. After their underlings screwed up the Geneva trade agreement, the two bosses had to get involved, which must be frustrating for them. Reminds me of my wife's frequent lament on household and child tasks, 'I do everything!' My reply is, 'How can you be doing everything, if I'm doing everything!'

Year to date, Mainland investors have bought $87B of Hong Kong stocks via Southbound Stock Connect, versus $102B for all of 2024. With that said, Tencent has seen net selling since April 23rd, while Alibaba has seen choppy, outflow-leaning trading since early May. Even Kuaishou has seen net outflows in the last week. Xiaomi has seen only two small net-buy days since early May. Meituan has bucked the trend despite a very small net sell today. What gives? One factor is that Hong Kong's rebound has led to more capital raising, as companies like CATL have been relisted in Hong Kong. Today, Horizon Robotics sold 681mm shares, raising $601mm via private placement. The money to fund those purchases needs to come from somewhere, so the more heavily owned a stock is, the more likely it becomes a funding source. Investors aren't blowing out of these names, just trimming them. Tencent's Southbound Stock Connect ownership has declined from 11.82% (1.087B shares) on April 24 to 11.13% (1.023B shares) today. Alibaba's went from 8.81% (1.682B shares) on April 28 to 8.63% (1.646B shares) today. Net net, no reason to freak out, IMO.

Meta sues maker of explicit deepfake app for dodging its rules to advertise AI ‘nudifying' tech

CNN · an hour ago

Meta is suing the Hong Kong-based maker of the app CrushAI, a platform capable of creating sexually explicit deepfakes, claiming that it repeatedly circumvented the social media company's rules to purchase ads. The suit is part of what Meta (META) described as a wider effort to crack down on so-called 'nudifying' apps — which allow users to create nude or sexualized images from a photo of someone's face, even without their consent — following claims that the social media giant was failing to adequately address ads for those services on its platforms.

As of February, the maker of CrushAI, also known as Crushmate and by several other names, had run more than 87,000 ads on Meta platforms that violated its rules, according to the complaint Meta filed in Hong Kong district court Thursday. Meta alleges the app maker, Joy Timeline HK Limited, violated its rules by creating a network of at least 170 business accounts on Facebook or Instagram to buy the ads. The app maker also allegedly had more than 55 active users managing over 135 Facebook pages where the ads were displayed. The ads primarily targeted users in the United States, Canada, Australia, Germany and the United Kingdom. 'Everyone who creates an account on Facebook or uses Facebook must agree to the Meta Terms of Service,' the complaint states. Some of those ads included sexualized or nude images generated by artificial intelligence and were captioned with phrases like 'upload a photo to strip for a minute' and 'erase any clothes on girls,' according to the lawsuit. CNN has reached out to Joy Timeline HK Limited for comment on the lawsuit.

Tech platforms face growing pressure to do more to address non-consensual, explicit deepfakes, as AI makes it easier than ever to create such images. Targets of such deepfakes have included prominent figures such as Taylor Swift and Rep. Alexandria Ocasio-Cortez, as well as high school girls across the United States. The Take It Down Act, which makes it illegal for individuals to share non-consensual, explicit deepfakes online and requires tech platforms to quickly remove them, was signed into law last month.

But a series of media reports in recent months suggest that these nudifying AI services have found an audience by advertising on Meta's platforms. In January, reports from tech newsletter Faked Up and outlet 404Media found that CrushAI had published thousands of ads on Instagram and Facebook and that 90% of the app's traffic was coming from Meta's platforms. That's despite the fact that Meta prohibits ads that contain adult nudity and sexual activity, and forbids sharing non-consensual intimate images and content that promotes sexual exploitation, bullying and harassment. Following those reports, Sen. Dick Durbin, Democrat and ranking member of the Senate Judiciary Committee, wrote to Meta CEO Mark Zuckerberg asking 'how Meta allowed this to happen and what Meta is doing to address this dangerous trend.' Earlier this month, CBS News reported that it had identified hundreds of advertisements promoting nudifying apps across Meta's platforms, including ads that featured sexualized images of celebrities. Other ads on the platforms pointed to websites claiming to animate deepfake images of real people to make them appear to perform sex acts, the report stated. In response to that report, Meta said it had 'removed these ads, deleted the Pages responsible for running them and permanently blocked the URLs associated with these apps.'
Meta says it reviews ads before they run on its platforms, but its complaint indicates that it has struggled to enforce its rules. According to the complaint, some of the CrushAI ads blatantly advertised the app's nudifying capabilities with captions such as 'Ever wish you could erase someone's clothes? Introducing our revolutionary technology' and 'Amazing! This software can erase any clothes.'

Meta said its lawsuit against the CrushAI maker aims to prevent it from further circumventing its rules to place ads on its platforms. Meta alleges it has lost $289,000 on the costs of investigating the app maker, responding to regulators and enforcing its rules against it.

When it announced the lawsuit Thursday, the company also said it had developed new technology to identify these types of ads, even if the ads themselves didn't contain nudity. Meta's 'specialist teams' partnered with external experts to train its automated content moderation systems to detect the terms, phrases and emojis often present in such ads. 'This is an adversarial space in which the people behind it — who are primarily financially motivated — continue to evolve their tactics to avoid detection,' the company said in a statement. 'Some use benign imagery in their ads to avoid being caught by our nudity detection technology, while others quickly create new domain names to replace the websites we block.'

Meta said it had begun sharing information about nudifying apps attempting to advertise on its sites with other tech platforms through a program called Lantern, run by industry group the Tech Coalition. Tech giants created Lantern in 2023 to share data that could help them fight child sexual exploitation online.

The push to crack down on deepfake apps comes after Meta dialed back some of its automated content removal systems — prompting some backlash from online safety experts. Zuckerberg announced earlier this year that those systems would be focused on checking only for illegal and 'high-severity' violations such as those related to terrorism, child sexual exploitation, drugs, fraud and scams. Other concerns must be reported by users before the company evaluates them.
