GE2025: Online publication Jom flouted online election advertising rules by boosting articles' access on social media

CNA | 30-04-2025

SINGAPORE: Three articles posted on social media by online publication Jom were blocked in Singapore by Meta because they were deemed unauthorised third-party paid online election advertising (OEA), with Jom having "amplified access" to them, the Ministry of Digital Development and Information (MDDI) said.
During the election period, Jom had paid for advertisements on Facebook and Instagram, both owned by Meta, to boost the reach of those articles.
The advertisements containing links to the articles violated regulations for OEA, the ministry said on Wednesday (Apr 30) in response to queries from CNA.
It stated that Singapore citizens and entities are allowed to publish unpaid OEA, and that other articles on Jom's website constituting such unpaid OEA remain accessible to the public.
OEA is defined under the Parliamentary Elections Act (PEA) as any information or material published online that "can reasonably be regarded as intended to promote or prejudice the electoral success or standing of a political party or candidate, even though the information or material can reasonably be regarded as intended to achieve some other purpose as well", the ministry said.
The three articles by Jom, which criticised or praised political parties, candidates and their policies, satisfy the legal definition of OEA, it added.
The articles are titled:
Is Shan a good bad cop?
The system has stopped evolving: why Harpreet Singh joined the opposition
Affordability in the lion city: is Singapore's public housing model built to last?
The Infocomm Media Development Authority (IMDA), as the assistant returning officer of this election, issued Meta corrective directions on Apr 23 to disable access for Singapore users to Jom's related advertisements.
"The rules on OEA apply to everyone, including online commentators such as Jom. These rules have been observed by other online commentators," the ministry said.
Under the law, only political parties, candidates, election agents and authorised third parties can publish paid OEA, and this must also be declared to the returning officer.
MDDI said that this rule is in place to ensure transparency and accountability and prevent the use of paid advertisements to bypass the election expense limits for political parties and candidates.
The publishing of unpaid OEA is also prohibited during the cooling-off period for election campaigning, which lasts from midnight on May 2 until polling closes on May 3.
Jom said in its Facebook and Instagram posts on Tuesday that it was informed by IMDA that its articles flouted the PEA regarding OEA and that it was barred from "promoting ('boosting') our work on Meta".
It also said that it reached out to Meta and was informed of the same thing.
On its social media accounts, Jom states that it is a weekly digital magazine "covering arts, culture, politics, business, technology and more in Singapore".
It said on Tuesday that its ability to grow its "readership and business through social media is vital".
"To be clear, the order only prevents Jom from promoting (boosting) it through Meta. All four articles are still on our website," it added.
Meta states on Facebook that users who want to advertise on its platforms may do so by boosting a post or creating an advertisement.
A boosted post is an advertisement created from an existing piece of content that a user has published on their Facebook Page or Instagram account, helping the post reach more people on Facebook or Instagram.


Related Articles

YouTube loosens rules guiding the moderation of videos

Straits Times | 4 hours ago

SAN FRANCISCO – For years, YouTube has removed videos with derogatory slurs, misinformation about Covid-19 vaccines and election falsehoods, saying the content violated the platform's rules. But since US President Donald Trump's return to the White House, YouTube has encouraged its content moderators to leave up videos with content that may break the platform's rules rather than remove them, as long as the videos are considered to be in the public interest. Those would include discussions of political, social and cultural issues.

The policy shift, which hasn't been publicly disclosed, made YouTube the latest social media platform to back off efforts to police online speech in the wake of Republican pressure to stop moderating content. In January, Meta made a similar move, ending a fact-checking program on social media posts. Meta, which owns Facebook and Instagram, followed in the footsteps of X, Elon Musk's social platform, and turned responsibility for policing content over to users.

But unlike Meta and X, YouTube has not made public statements about relaxing its content moderation. The online video service introduced its new policy in mid-December in training material that was reviewed by The New York Times. For videos considered to be in the public interest, YouTube raised the threshold for the amount of offending content permitted to half a video, from a quarter of a video. The platform also encouraged moderators to leave up those videos, which would include City Council meetings, campaign rallies and political conversations.

The policy distances the platform from some of its pandemic practices, such as when it removed videos of local council meetings and a discussion between Florida's governor, Ron DeSantis, and a panel of scientists, citing medical misinformation.

The expanded exemptions could benefit political commentators whose lengthy videos blend news coverage with opinions and claims on a variety of topics, particularly as YouTube takes on a more prominent role as a leading distributor of podcasts. The policy also helps the video platform avoid attacks by politicians and activists frustrated by its treatment of content about the origins of Covid, the 2020 election and Hunter Biden, former President Joe Biden's son.

YouTube continuously updates its guidance for content moderators on topics surfacing in the public discourse, said Nicole Bell, a company spokesperson. It retires policies that no longer make sense, as it did in 2023 for some Covid misinformation, and strengthens policies when warranted, as it did this year to prohibit content directing people to gambling websites, according to Bell. In the first three months of this year, YouTube removed 192,586 videos because of hateful and abusive content, a 22 per cent increase from a year earlier.

'Recognising that the definition of "public interest" is always evolving, we update our guidance for these exceptions to reflect the new types of discussion we see on the platform today,' Bell said in a statement. She added: 'Our goal remains the same: to protect free expression on YouTube while mitigating egregious harm.'

Critics say the changes by social media platforms have contributed to the rapid spread of false assertions and have the potential to increase digital hate speech. Last year on X, a post inaccurately said 'Welfare offices in 49 states are handing out voter registration applications to illegal aliens', according to the Center for Countering Digital Hate, which studies misinformation and hate speech. The post, which would have been removed before recent policy changes, was seen 74.8 million times.

For years, Meta has removed about 277 million pieces of content annually, but under the new policies, much of that content could stay up, including comments like 'Black people are more violent than whites', said Imran Ahmed, the centre's CEO. 'What we're seeing is a rapid race to the bottom,' he said. The changes benefit the companies by reducing the costs of content moderation, while keeping more content online for user engagement, he added. 'This is not about free speech. It's about advertising, amplification and ultimately profits.'

YouTube has in the past put a priority on policing content to keep the platform safe for advertisers. It has long forbidden nudity, graphic violence and hate speech. But the company has always given itself latitude in interpreting the rules. The policies allow videos that violate YouTube's rules, generally a small set, to remain on the platform if there is sufficient educational, documentary, scientific or artistic merit.

The new policies, which were outlined in the training materials, are an expansion of YouTube's exceptions. They build on changes made before the 2024 election, when the company began permitting clips of electoral candidates on the platform even if the candidates violated its policies, the training material said. Previously, YouTube removed a so-called public interest video if a quarter of the content broke the platform's rules. As of Dec 18, YouTube's trust and safety officials told content moderators that half a video could break YouTube's rules and stay online.

Other content that mentions political, social and cultural issues has also been exempted from YouTube's usual content guidelines. The platform determined that videos are in the public interest if creators discuss or debate elections, ideologies, movements, race, gender, sexuality, abortion, immigration, censorship and other issues.

Megan A Brown, a doctoral student at the University of Michigan who researches the online information ecosystem, said YouTube's looser policies were a reversal from a time when it and other platforms 'decided people could share political speech but they would maintain some decorum'. She fears that YouTube's new policy 'is not a way to achieve that'.

During training on the new policy, the trust and safety team said content moderators should err on the side of leaving content up when 'freedom of expression value may outweigh harm risk'. If employees had doubts about a video's suitability, they were encouraged to take it to their superiors rather than remove it.

YouTube employees were presented with real examples of how the new policies had already been applied. The platform gave a pass to a user-created video titled 'RFK Jr. Delivers SLEDGEHAMMER Blows to Gene-Altering JABS', which violated YouTube's policy against medical misinformation by incorrectly claiming that Covid vaccines alter people's genes. The company's trust and safety team decided the video shouldn't be removed because public interest in the video 'outweighs the harm risk', the training material said. The video was deemed newsworthy because it presented contemporary news coverage of recent actions on Covid vaccines by the secretary of the Department of Health and Human Services, Robert F Kennedy Jr. The video also mentioned political figures such as Vice President JD Vance, Elon Musk and Megyn Kelly, boosting its 'newsworthiness'.

The video's creator also discussed a university medical study and presented news headlines about people experiencing adverse effects from Covid vaccines, 'signaling this is a highly debated topic (and a sensitive political topic)', according to the materials. Because the creator didn't explicitly recommend against vaccination, YouTube decided that the video had a low risk of harm. The video is no longer available on YouTube; it is unclear why.

Another video shared with the staff contained a slur about a transgender person. YouTube's trust and safety team said the 43-minute video, which discussed hearings for Trump administration Cabinet appointees, should stay online because the description had only a single violation of the platform's harassment rule forbidding a 'malicious expression against an identifiable individual'.

A video from South Korea featured two commentators talking about the country's former president Yoon Suk Yeol. About halfway through the more-than-three-hour video, one of the commentators said he imagined seeing Yoon turned upside down in a guillotine so that the politician 'can see the knife is going down'. The video was approved because most of it discussed Yoon's impeachment and arrest. In its training material, YouTube said it had also considered the risk of harm low because 'the wish for execution by guillotine is not feasible'. NYTIMES

German digital ministry treads cautiously over online platform levy

CNA | 30-05-2025

BERLIN: Germany's new digital ministry said any levy on online platforms would have to be internationally coordinated and not result in higher prices for end consumers, in a sign on Friday of possible divisions within government over plans for such a tax.

The Minister of State for Culture, Wolfram Weimer, had said in an interview published on Thursday that officials were working on a levy that would hit platforms such as Alphabet's Google and Meta's Facebook. A levy of 10 per cent would be reasonable, he said, without specifying whether this would be a tax on revenue or profit.

Germany's ruling parties agreed earlier this year to consider the introduction of a digital services levy, but this was not on the list of projects the coalition wants to prioritise. Weimer's proposal had not yet been agreed upon by the government, officials had said.

"The decisive factors in evaluating such a levy are that it is designed in a targeted manner, is internationally coordinated and compatible with EU law, that any potential revenue benefits Germany as a hub for innovation, and that ultimately no higher prices are passed on to end consumers," a spokesperson for the digital ministry said.

The proposal comes as Chancellor Friedrich Merz is expected to travel to Washington soon to meet US President Donald Trump, although a trip has not yet been officially announced. Trump has in the past said he will not allow foreign governments to "appropriate America's tax base for their own benefit".

Industry association Bitkom warned that the levy could lead to price increases that would impact businesses, public administrations and consumers. "These price increases will hinder and slow down the urgently needed acceleration of the digitalization of public services and the digital transformation of companies," said Bitkom president Ralf Wintergerst. "What we need is not more, but fewer financial burdens on digital goods and services."

ATxSummit 2025: Meta V-P downplays fears over AI as critics raise alarm over online risks to youth

Straits Times | 30-05-2025

(From left) IMDA's Alamelu Subramaniam, Adobe's Andy Parsons, Baroness Jones of Whitchurch, Meta's Simon Milner and SMU's Lim Sun Sun during a discussion at ATxSummit 2025 on May 29. PHOTO: INFOCOMM MEDIA DEVELOPMENT AUTHORITY

SINGAPORE – Meta, the parent company of Facebook and Instagram, downplayed fears over the impact of artificial intelligence (AI), urging policymakers and the public to focus on actual outcomes rather than worst-case scenarios.

The comments by its Asia-Pacific public policy vice-president Simon Milner drew sharp rebuttals at the ATxSummit 2025 on May 29, where fellow panellists said the rapid spread of AI has real-world consequences such as online harms affecting youth and children.

During the panel at Capella Singapore, Mr Milner cited 2024 as the 'year of democracy', as more people across a bigger number of countries went to the polls than at any other time in history. While there were widespread concerns about deepfakes and generative AI (GenAI) disrupting elections, he said no significant evidence of such interference was found – not even in major democracies like the US, India or Indonesia. 'Although enormous amounts of GenAI were deployed across platforms, the impact has not been catastrophic,' he added.

However, his views were not shared by fellow panellists discussing the topic of protecting society in an always-online world.

Drawing from her work, Singapore Management University's professor of communication and technology Lim Sun Sun said many parents feel anxious and unsure about how to guide their children in navigating the rapid rise of GenAI. 'Even if the data doesn't paint a worrying picture overall, on the ground, people are struggling to understand this technology,' Prof Lim said. Teachers also face a dilemma: encouraging experimentation with AI while warning about its risks. 'It is a difficult balance,' she added.

Baroness Jones of Whitchurch (Margaret Beryl Jones), the UK's parliamentary under-secretary for the future digital economy and online safety, echoed similar concerns about online harms affecting youth and children. She pointed to an ongoing public debate in the UK about the damaging effects some online platforms have on young users. 'For example, children accessing online suicide forums and committing suicide. This is just heartbreaking, and we have some terrible stories about it,' she said.

In May 2024, 17-year-old Vlad Nikolin-Caisley from Hampshire in south-east England died after allegedly being encouraged by members of an online pro-suicide group. His family believes these harmful online interactions played a significant role in his death, intensifying calls for stronger regulation of such platforms.

Baroness Jones stressed the need for tech companies to work closely with the government to minimise such harms, but acknowledged that not all companies are fully on board, as the government is 'laying high expectations in a new territory'.

But Mr Milner pushed back, arguing that the UK – or more broadly, Europe – rushed to be the first region to regulate AI, which he described as a mistake. He said this approach has led to a stand-off with companies. In contrast, he praised Singapore and other Asian governments for taking a different path: fostering robust dialogue with tech firms, both publicly and privately, while asking tough questions without rushing into heavy-handed regulations.

Mr Andy Parsons, senior director of content authenticity at Adobe, highlighted the spread of child sexual abuse material (CSAM) online. It is becoming nearly impossible for the police to identify real victims if the materials were generated entirely by AI, he said. Mr Parsons warned that this not only hinders efforts to bring perpetrators to justice but also erases the real human suffering behind these crimes – a grave problem that requires urgent attention.

Prof Lim agreed, noting that the issue of CSAM has been worsened by the rapid spread of GenAI. She is currently identifying key stakeholders across the industry, government and the community who are involved in tackling the problem. The aim, she said, is to understand 'where else can we coordinate our efforts better so that we can combat this really dreadful scourge'.

Addressing the concerns raised by his fellow panellists, Mr Milner emphasised that Meta's top priority is developing products with features to mitigate online harms. He cited the introduction of teen-specific accounts on Instagram as a response to growing worries about young people's engagement with the platform. 'I think we should be more parent-focused in our approach to young people's safety,' he said, adding that teen accounts are not just about imposing bans. 'Parents want help, and we are here to help them.'

Baroness Jones stressed that AI safety must be approached as safety by design – embedded into platforms from the outset, rather than relying on reactive measures like taking down content afterwards. 'It should be an integral part of the system that children, in particular, are protected,' she said.

But achieving that remains a major challenge. Citing reports from the UK, she highlighted that children as young as eight have encountered disturbing content online, often repeatedly surfaced to them by algorithms. She believed the algorithms are clearly reinforcing exposure to harmful material. If tech companies truly put their minds to it, they could rework the way these systems operate, she said, emphasising that keeping children safe must be the top priority.

Prof Lim also called for safety by design, stressing that online spaces should be built with the most vulnerable users in mind – whether they are children, women, the elderly or marginalised communities. She said: 'Because once you've designed for the most vulnerable, it makes the whole platform safer for everyone.'
