
Jom's Content Misclassified as Election Advertising
SINGAPORE: Singapore digital magazine Jom has raised serious concerns over what it calls an 'unjust' restriction on its ability to boost some of its content on Meta's platforms during the ongoing 2025 general election (GE) period.
In a strongly worded statement released on Monday, the alternative media outlet revealed that Meta, the parent company of Facebook and Instagram, had barred four of its articles from being boosted as paid content. The blocked articles include policy analyses and political profiles linked to the GE.
According to Meta, the restriction stemmed from a breach of Singapore's Parliamentary Elections Act (PEA), specifically its provisions on Online Election Advertising (OEA).
Expressing disbelief at the classification of its journalism as equivalent to partisan political messaging, Jom said, 'Essentially, the G had classified Jom's journalism as election advertising, of the sort that political parties engage in. We were shocked.'
The magazine's editorial team noted that promoting or 'boosting' stories on social media was a standard practice used to reach new audiences. In this instance, however, their boosted posts had been flagged under Section 61K(1) of the PEA, which defines OEA as any online material that could reasonably be regarded as intending to promote or prejudice a political party or candidate during the election period.
Among the articles affected were two political profiles and two issue-based features, one on inequality and another on housing, the latter originally published 18 months earlier.
Jom questioned whether the move was the result of a bureaucratic overreach or something more deliberate. 'Was it an overzealous civil servant? Did an order come down because somebody doesn't want us discussing Harpreet and Shan? Or because they feel the HDB issue may cost them votes?' the magazine asked.
It noted that it had reached out to the Infocomm Media Development Authority (IMDA) for clarification. The statutory board reiterated the alleged breach but offered no new explanation.
In the statement, Jom also argued that journalism should not be lumped together with campaign materials. The team asserted, 'We are journalists, not politicians. Our work was never 'intended' to promote or prejudice anybody, but simply to analyse and report, as journalists do.'
The outlet added that the inability to promote its work on social media hinders both its growth and the broader democratic conversation in Singapore.
'Our ability to grow our readership and business through social media is vital,' it said, adding that such restrictions disproportionately affect small, independent outfits like theirs competing against 'state-supported behemoths.'
While the barred content remains accessible for free on Jom's website, the editorial team said the incident diverted time and resources away from their core election coverage. 'We had to sacrifice GE coverage time — and frankly, rest, mental health — over the weekend to deal with this,' the statement read.
Beyond commercial concerns, Jom framed the issue as one of democratic importance. 'Yes, the HDB issue and inequality are political hot potatoes. Yes, Harpreet and Shan are two politicians very much in the limelight during this GE. But why shouldn't we be able to promote independent journalism about them?'
The magazine vowed to press on with its work despite what it described as an escalating 'politics of fear.' 'We will not succumb,' the team declared. 'We'll continue to do our honest work. We hope this helps you understand the system in which you live.'
Related Articles

Straits Times
4 hours ago
YouTube loosens rules guiding the moderation of videos
SAN FRANCISCO – For years, YouTube has removed videos with derogatory slurs, misinformation about Covid-19 vaccines and election falsehoods, saying the content violated the platform's rules. But since US President Donald Trump's return to the White House, YouTube has encouraged its content moderators to leave up videos that may break the platform's rules rather than remove them, as long as the videos are considered to be in the public interest. Those would include discussions of political, social and cultural issues.
The policy shift, which hasn't been publicly disclosed, made YouTube the latest social media platform to back off efforts to police online speech in the wake of Republican pressure to stop moderating content. In January, Meta made a similar move, ending a fact-checking program on social media posts. Meta, which owns Facebook and Instagram, followed in the footsteps of X, Elon Musk's social platform, and turned responsibility for policing content over to users.
But unlike Meta and X, YouTube has not made public statements about relaxing its content moderation. The online video service introduced its new policy in mid-December in training material that was reviewed by The New York Times. For videos considered to be in the public interest, YouTube raised the threshold for the amount of offending content permitted to half a video, from a quarter of a video. The platform also encouraged moderators to leave up those videos, which would include city council meetings, campaign rallies and political conversations.
The policy distances the platform from some of its pandemic practices, such as when it removed videos of local council meetings and a discussion between Florida's governor, Ron DeSantis, and a panel of scientists, citing medical misinformation.
The expanded exemptions could benefit political commentators whose lengthy videos blend news coverage with opinions and claims on a variety of topics, particularly as YouTube takes on a more prominent role as a leading distributor of podcasts. The policy also helps the video platform avoid attacks by politicians and activists frustrated by its treatment of content about the origins of Covid, the 2020 election and Hunter Biden, former President Joe Biden's son.
YouTube continuously updates its guidance for content moderators on topics surfacing in the public discourse, said Nicole Bell, a company spokesperson. It retires policies that no longer make sense, as it did in 2023 for some Covid misinformation, and strengthens policies when warranted, as it did this year to prohibit content directing people to gambling websites, according to Bell. In the first three months of this year, YouTube removed 192,586 videos because of hateful and abusive content, a 22 per cent increase from a year earlier.
'Recognising that the definition of 'public interest' is always evolving, we update our guidance for these exceptions to reflect the new types of discussion we see on the platform today,' Bell said in a statement. She added: 'Our goal remains the same: to protect free expression on YouTube while mitigating egregious harm.'
Critics say the changes by social media platforms have contributed to the rapid spread of false assertions and have the potential to increase digital hate speech. Last year on X, a post inaccurately said 'Welfare offices in 49 states are handing out voter registration applications to illegal aliens', according to the Center for Countering Digital Hate, which studies misinformation and hate speech. The post, which would have been removed before recent policy changes, was seen 74.8 million times.
For years, Meta has removed about 277 million pieces of content annually, but under the new policies, much of that content could stay up, including comments like 'Black people are more violent than whites', said Imran Ahmed, the centre's CEO. 'What we're seeing is a rapid race to the bottom,' he said. The changes benefit the companies by reducing the costs of content moderation, while keeping more content online for user engagement, he added. 'This is not about free speech. It's about advertising, amplification and ultimately profits.'
YouTube has in the past put a priority on policing content to keep the platform safe for advertisers. It has long forbidden nudity, graphic violence and hate speech. But the company has always given itself latitude for interpreting the rules. The policies allow videos that violate YouTube's rules, generally a small set, to remain on the platform if there is sufficient educational, documentary, scientific or artistic merit.
The new policies, which were outlined in the training materials, are an expansion of YouTube's exceptions. They build on changes made before the 2024 election, when the company began permitting clips of electoral candidates on the platform even if the candidates violated its policies, the training material said. Previously, YouTube removed a so-called public interest video if a quarter of the content broke the platform's rules. As of Dec 18, YouTube's trust and safety officials told content moderators that half a video could break YouTube's rules and stay online.
Other content that mentions political, social and cultural issues has also been exempted from YouTube's usual content guidelines. The platform determined that videos are in the public interest if creators discuss or debate elections, ideologies, movements, race, gender, sexuality, abortion, immigration, censorship and other issues.
Megan A Brown, a doctoral student at the University of Michigan who researches the online information ecosystem, said YouTube's looser policies were a reversal from a time when it and other platforms 'decided people could share political speech but they would maintain some decorum'. She fears that YouTube's new policy 'is not a way to achieve that'.
During training on the new policy, the trust and safety team said content moderators should err against restricting content when 'freedom of expression value may outweigh harm risk'. If employees had doubts about a video's suitability, they were encouraged to take it to their superiors rather than remove it.
YouTube employees were presented with real examples of how the new policies had already been applied. The platform gave a pass to a user-created video titled 'RFK Jr. Delivers SLEDGEHAMMER Blows to Gene-Altering JABS', which violated YouTube's policy against medical misinformation by incorrectly claiming that Covid vaccines alter people's genes. The company's trust and safety team decided the video shouldn't be removed because public interest in the video 'outweighs the harm risk', the training material said. The video was deemed newsworthy because it presented contemporary news coverage of recent actions on Covid vaccines by the secretary of the Department of Health and Human Services, Robert F Kennedy Jr. The video also mentioned political figures such as Vice President JD Vance, Elon Musk and Megyn Kelly, boosting its 'newsworthiness'.
The video's creator also discussed a university medical study and presented news headlines about people experiencing adverse effects from Covid vaccines, 'signaling this is a highly debated topic (and a sensitive political topic)', according to the materials. Because the creator didn't explicitly recommend against vaccination, YouTube decided that the video had a low risk of harm. Currently, the video is no longer available on YouTube. It is unclear why.
Another video shared with the staff contained a slur about a transgender person. YouTube's trust and safety team said the 43-minute video, which discussed hearings for Trump administration Cabinet appointees, should stay online because the description had only a single violation of the platform's harassment rule forbidding a 'malicious expression against an identifiable individual'.
A video from South Korea featured two commentators talking about the country's former President Yoon Suk Yeol. About halfway through the more-than-three-hour video, one of the commentators said he imagined seeing Yoon turned upside down in a guillotine so that the politician 'can see the knife is going down'. The video was approved because most of it discussed Yoon's impeachment and arrest. In its training material, YouTube said it had also considered the risk for harm low because 'the wish for execution by guillotine is not feasible'.
NYTIMES

Straits Times
15 hours ago
Meta in talks for Scale AI investment that could top $12.9 billion
NEW YORK – Facebook parent Meta Platforms is in talks to make a multibillion-dollar investment into artificial intelligence start-up Scale AI, according to people familiar with the matter. The financing could exceed US$10 billion (S$12.9 billion) in value, some of the people said, making it one of the largest private company funding events of all time.
The terms of the deal are not finalised and could still change, according to the people. A representative for Scale did not immediately respond to requests for comment. Meta declined to comment.
Scale AI, whose customers include Microsoft and OpenAI, provides data labeling services to help companies train machine-learning models and has become a key beneficiary of the generative AI boom. The start-up was last valued at about US$14 billion in 2024, in a funding round that included backing from Meta and Microsoft. Earlier this year, Bloomberg reported that Scale was in talks for a tender offer that would value it at US$25 billion.
This would be Meta's biggest ever external AI investment, and a rare move for the company. The social media giant has until now mostly depended on its in-house research, plus a more open development strategy, to make improvements in its AI technology. Meanwhile, Big Tech peers have invested heavily: Microsoft has put more than US$13 billion into OpenAI, while both Amazon and Alphabet have injected billions into rival Anthropic. Part of those companies' investments have been through credits to use their computing power. Meta doesn't have a cloud business, and it's unclear what format Meta's investment will take.
Chief executive officer Mark Zuckerberg has made AI Meta's top priority, and said in January that the company would spend as much as US$65 billion on related projects in 2025. The company's push includes an effort to make Llama the industry standard worldwide. Meta's AI chatbot – already available on Facebook, Instagram and WhatsApp – is used by 1 billion people per month.
Scale, co-founded in 2016 by CEO Alexandr Wang, has been growing quickly: The start-up generated revenue of US$870 million last year and expects sales to more than double to US$2 billion in 2025, Bloomberg previously reported. Scale plays a key role in making AI data available for companies. Because AI is only as good as the data that goes into it, Scale uses scads of contract workers to tidy up and tag images, text and other data that can then be used for AI training.
Scale and Meta share an interest in defense tech. Last week, Meta announced a new partnership with defense contractor Anduril Industries to develop products for the US military, including an AI-powered helmet with virtual and augmented reality features. Meta has also granted approval for US government agencies and defense contractors to use its AI models. The company is already partnering with Scale on a program called Defense Llama – a version of Meta's Llama large language model intended for military use.
Scale has increasingly been working with the US government to develop AI for defense purposes. Earlier in 2025, the start-up said it won a contract with the Defense Department to work on AI agent technology. The company called the contract 'a significant milestone in military advancement.'
BLOOMBERG
Business Times
a day ago
Tech giants' indirect emissions rose 150% in 3 years as AI expands, UN agency says
[GENEVA] Indirect carbon emissions from the operations of four of the leading AI-focused tech companies rose 150 per cent on average from 2020 to 2023, due to the demands of power-hungry data centres, a United Nations report said last week.
The use of artificial intelligence (AI) by Amazon, Microsoft, Alphabet and Meta drove up their global indirect emissions because of the vast amounts of energy required to power data centres, said the report by the International Telecommunication Union (ITU), the UN agency for digital technologies. Indirect emissions include those generated by purchased electricity, steam, heating and cooling consumed by a company.
Amazon's operational carbon emissions grew the most, at 182 per cent in 2023 compared with three years before, followed by Microsoft at 155 per cent, Meta at 145 per cent and Alphabet at 138 per cent, according to the report. The ITU tracked the greenhouse gas emissions of 200 leading digital companies between 2020 and 2023.
Meta, which owns Facebook and WhatsApp, pointed Reuters to its sustainability report, which said it is working to reduce the emissions, energy and water used to power its data centres. Amazon said it is committed to powering its operations more sustainably by investing in new carbon-free energy projects, including nuclear and renewable energy. Microsoft highlighted its sustainability report, which says it doubled its rate of power savings last year and is transitioning towards chip-level liquid cooling designs, instead of traditional cooling systems, to reduce energy consumption at its data centres. The other companies did not immediately respond to requests for comment.
As investment in AI increases, carbon emissions from the top-emitting AI systems are predicted to reach up to 102.6 million tons of carbon dioxide equivalent per year, the report stated. The data centres that are needed for AI development could also put pressure on existing energy infrastructure. 'The rapid growth of AI is driving a sharp rise in global electricity demand, with electricity use by data centres increasing four times faster than the overall rise in electricity consumption,' the report found.
It also highlighted that although a growing number of digital companies had set emissions targets, those ambitions had not yet fully translated into actual reductions of emissions.
REUTERS