
Jom's Content Misclassified as Election Advertising — Singapore News
In a strongly worded statement released on Monday, the alternative media outlet Jom revealed that Meta, the parent company of Facebook and Instagram, had barred four of its articles from being boosted as paid content. The blocked articles include policy analyses and political profiles linked to the general election (GE).
According to Meta, the restriction stemmed from a breach of Singapore's Parliamentary Elections Act (PEA), specifically its provisions on Online Election Advertising (OEA).
Expressing disbelief at the classification of its journalism as equivalent to partisan political messaging, Jom said, 'Essentially, the G had classified Jom's journalism as election advertising, of the sort that political parties engage in. We were shocked.'
The magazine's editorial team noted that promoting or 'boosting' stories on social media was a standard practice used to reach new audiences. In this instance, however, their boosted posts had been flagged under Section 61K(1) of the PEA, which defines OEA as any online material that could reasonably be regarded as intending to promote or prejudice a political party or candidate during the election period.
Among the articles affected were two political profiles and two issue-based features, including one on inequality and another on housing that had originally been published 18 months ago.
Jom questioned whether the move was the result of a bureaucratic overreach or something more deliberate. 'Was it an overzealous civil servant? Did an order come down because somebody doesn't want us discussing Harpreet and Shan? Or because they feel the HDB issue may cost them votes?' the magazine asked.
It noted that it had reached out to the Infocomm Media Development Authority (IMDA) for clarification. The statutory board reiterated the alleged breach but offered no further explanation.
In the statement, Jom also argued that journalism should not be lumped together with campaign materials. The team asserted, 'We are journalists, not politicians. Our work was never 'intended' to promote or prejudice anybody, but simply to analyse and report, as journalists do.'
The outlet added that the inability to promote its work on social media hinders both its growth and the broader democratic conversation in Singapore.
'Our ability to grow our readership and business through social media is vital,' it said, adding that such restrictions disproportionately affect small, independent outfits like theirs competing against 'state-supported behemoths.'
While the barred content remains accessible for free on Jom's website, the editorial team said the incident diverted time and resources away from their core election coverage. 'We had to sacrifice GE coverage time — and frankly, rest, mental health — over the weekend to deal with this,' the statement read.
Beyond commercial concerns, Jom framed the issue as one of democratic importance. 'Yes, the HDB issue and inequality are political hot potatoes. Yes, Harpreet and Shan are two politicians very much in the limelight during this GE. But why shouldn't we be able to promote independent journalism about them?'
The magazine vowed to press on with its work despite what it described as an escalating 'politics of fear.' 'We will not succumb,' the team declared. 'We'll continue to do our honest work. We hope this helps you understand the system in which you live.'