
Latest news with #contentmoderation

Meta's ‘Free Expression' Push Results In Far Fewer Content Takedowns

WIRED

29-05-2025

  • Business

Meta says loosening its enforcement policies earlier this year led to fewer erroneous takedowns on Facebook and Instagram—and didn't broadly expose users to more harmful content.

Meta announced in January it would end some content moderation efforts, loosen its rules, and put more emphasis on supporting 'free expression.' The shifts resulted in fewer posts being removed from Facebook and Instagram, the company disclosed Thursday in its quarterly Community Standards Enforcement Report. Meta said that its new policies had helped reduce erroneous content removals in the US by half without broadly exposing users to more offensive content than before the changes.

The new report, which was referenced in an update to a January blog post by Meta global affairs chief Joel Kaplan, shows that Meta removed nearly one third less content on Facebook and Instagram globally for violating its rules from January to March of this year than it did in the previous quarter, or about 1.6 billion items compared to just under 2.4 billion, according to an analysis by WIRED. In the past several quarters, the tech giant's total removals had risen or stayed flat.

Across Instagram and Facebook, Meta reported removing about 50 percent fewer posts for violating its spam rules, nearly 36 percent fewer for child endangerment, and almost 29 percent fewer for hateful conduct. Removals increased in only one major rules category—suicide and self-harm content—out of the 11 that Meta lists.

The amount of content Meta removes fluctuates regularly from quarter to quarter, and a number of factors could have contributed to the dip in takedowns. But the company itself acknowledged that 'changes made to reduce enforcement mistakes' was one reason for the large drop. 'Across a range of policy areas we saw a decrease in the amount of content actioned and a decrease in the percent of content we took action on before a user reported it,' the company wrote. 'This was in part because of the changes we made to ensure we are making fewer mistakes. We also saw a corresponding decrease in the amount of content appealed and eventually restored.'

At the start of the year, Meta relaxed some of its content rules, which CEO Mark Zuckerberg described as 'just out of touch with mainstream discourse.' The changes allowed Instagram and Facebook users to employ some language that human rights activists view as hateful toward immigrants or individuals who identify as transgender. For example, Meta now permits 'allegations of mental illness or abnormality when based on gender or sexual orientation.'

As part of the sweeping changes, which were announced just as Donald Trump was set to begin his second term as US president, Meta also stopped relying as much on automated tools to identify and remove posts suspected of less severe violations of its rules, because it said they had high error rates, prompting frustration from users.

During the first quarter of this year, Meta's automated systems accounted for 97.4 percent of content removed from Instagram under the company's hate speech policies, down by just one percentage point from the end of last year. (User reports to Meta triggered the remaining percentage.) But automated removals for bullying and harassment on Facebook dropped nearly 12 percentage points. In some categories, such as nudity, Meta's systems were slightly more proactive compared to the previous quarter.

Users can appeal content takedowns, and Meta sometimes restores posts that it determines have been wrongfully removed. In the update to Kaplan's blog post, Meta highlighted the large decrease in erroneous takedowns. 'This improvement follows the commitment we made in January to change our focus to proactively enforcing high-severity violations and enhancing our accuracy through system audits and additional signals,' the company wrote.

Some Meta employees told WIRED in January that they were concerned the policy changes could lead to a dangerous free-for-all on Facebook and Instagram, turning the platforms into increasingly inhospitable places for users to converse and spend time. But according to its own sampling, Meta estimates that users were exposed to about one to two pieces of hateful content on average for every 10,000 posts viewed in the first quarter, down from about two to three at the end of last year. And Meta's platforms have continued growing—about 3.43 billion people used at least one of its apps, which include WhatsApp and Messenger, in March, up from 3.35 billion in December.

US will ban foreign officials to punish countries for social media rules

The Verge

28-05-2025

  • Business

The US State Department has launched its latest rebuke against Europe and other countries over their attempts to regulate digital platforms. Secretary of State Marco Rubio announced Wednesday that the US would restrict visas for 'foreign nationals who are responsible for censorship of protected expression in the United States.' He called it 'unacceptable for foreign officials to issue or threaten arrest warrants on U.S. citizens or U.S. residents for social media posts on American platforms while physically present on U.S. soil' and 'for foreign officials to demand that American tech platforms adopt global content moderation policies or engage in censorship activity that reaches beyond their authority and into the United States.'

It's not yet clear how or against whom the policy will be enforced, but it seems to implicate Europe's Digital Services Act, a law that came into effect in 2023 with the goal of making online platforms safer by requiring the largest platforms to remove illegal content and provide transparency about their content moderation. Though it's not mentioned directly in the press release about the visa restrictions, the Trump administration has slammed the law on multiple occasions, including in remarks earlier this year by Vice President JD Vance.

The State Department's homepage currently links to an article on its official Substack, where Samuel Samson, senior advisor for the Bureau of Democracy, Human Rights, and Labor, critiques the DSA as a tool to 'silence dissident voices through Orwellian content moderation.' He adds that 'Independent regulators now police social media companies, including prominent American platforms like X, and threaten immense fines for non-compliance with their strict speech regulations.'

Though President Donald Trump has claimed to take actions to crack down on censorship domestically, some moves by his administration have threatened to limit speech within the US. Government websites and institutions that rely on government funding have scrubbed words associated with diversity to avoid his wrath, and the White House cut The Associated Press' access to press briefings when the outlet declined to call the Gulf of Mexico the Gulf of America.

'We will not tolerate encroachments upon American sovereignty,' Rubio says in the announcement, 'especially when such encroachments undermine the exercise of our fundamental right to free speech.'

Elon Musk Is Doing Business With Actual Terrorists, Nonprofit Finds

Yahoo

18-05-2025

  • Business

Who's paying for a blue checkmark on X-formerly-Twitter these days? According to a new report by the big tech accountability nonprofit Tech Transparency Project (TTP), the answer is: a bunch of terrorists.

The TTP investigation found that more than 200 X users, including individuals who appear to be affiliated with Al-Qaeda, Hezbollah, Hamas, the Houthis, and Syrian and Iraqi militia groups — all deemed foreign terrorist organizations (FTOs) by the US government — are paying for subscriptions to Elon Musk's X. Put simply, Musk is doing business with actual terrorists, highlighting major flaws in his social media company's content moderation practices.

These paid subscriptions are granting apparent terrorists blue verification badges, which can offer the accounts an added air of legitimacy. Most importantly, though, the subscriptions are granting the users access to premium X features and perks like content monetization tools, the ability to publish longer posts and videos, and greater platform reach — which the TTP says allows terrorism-linked users to more effectively distribute and monetize propaganda, as well as promote their fundraising efforts. "They rely on the premium services for the amplification of long propaganda posts and extended videos," TTP director Katie Paul told The New York Times. "They are not just subscribing for the blue check notoriety, they are subscribing for the premium services."

As the TTP points out, X's terms of use forbid users from paying for premium services if they're affiliated with groups under US economic sanctions, including ones imposed by the Treasury Department's Office of Foreign Assets Control. Neither X nor the Treasury Department responded to a NYT request for comment. Though X says it reviews subscribed accounts to ensure they "meet all eligibility criteria" for verification, the feature has been pretty broken since Musk took over the platform and made verification pay-to-play.

What's more, a similar TTP report last year found that over two dozen users with apparent terror links were paid X subscribers with blue badges. Several of those accounts were banned or stripped of their verification status following the release of the report, but as the NYT points out, several have since been able to regain access to premium features.

The TTP investigation raises serious questions about X's due diligence around content moderation and platform safety. After all, if X can suppress users that Musk doesn't like, and speech that authoritarian governments don't like, can't it keep US-designated terrorists — whether they're the real deal or impersonators — from nabbing blue checks and using X perks to spread and cash in on propaganda? "There is clear evidence of these groups profiting and fundraising through X," Paul told the NYT. "They are sanctioned for a reason, and the fact that somebody who has such influence and power in the federal government is at the same time profiting from these designated terrorist groups and individuals is extremely concerning."

Pinterest says mass account bans were caused by an ‘internal error'

The Verge

15-05-2025

Pinterest has apologized for a recent wave of 'over-enforcement' that erroneously deactivated many accounts. The platform has experienced some weird moderation issues in recent weeks, and outraged users reported their accounts had been suspended without warning or explanation. In response to many appeals, the platform cited unspecified community guideline violations.

The company initially addressed ban concerns with a statement saying that it will 'continuously monitor for content that violates our Community Guidelines and accounts with violative content may be deactivated as a result.' This answer did little to soothe outrage from users, who called on Pinterest to clarify how their accounts had violated the platform's guidelines and complained that appeals to reinstate mistakenly banned accounts were not being processed, or were being rejected without explanation.

On Wednesday, Pinterest issued an updated statement, attached to a support post it had published to X in July last year. 'We recently took action on violations of our content policies, but an internal error led to over-enforcement and some accounts were mistakenly deactivated,' Pinterest said. 'We're sorry for the frustration this caused. We've reinstated many impacted accounts and are making improvements to respond faster when mistakes happen going forward.'

Pinterest hasn't given any specific details about what the 'internal error' was, what caused it, or whether it has been resolved. Some users reported that Pinterest was also deleting pins for seemingly random and inaccurate content violations, such as images of everyday objects being flagged for 'adult content,' leading to speculation that pins and accounts were being flagged by an inadequately implemented AI moderation system. Pinterest has since told TechCrunch that AI moderation was not responsible for the error.

Users who were mistakenly suspended in the past few weeks are starting to regain access to their accounts, according to reports on the Pinterest subreddit. Given how clumsily the company has handled the situation, however, some scorned users are in no rush to forgive the platform.

‘My library of Alexandria has been burned down': Pinterest users are fuming over sudden bans

Fast Company

06-05-2025

  • Business

Pinterest fans are nothing if not loyal. Many have spent years—sometimes decades—carefully curating boards filled with wedding inspiration, home decor ideas, fashion, and more. Now users are logging in only to find themselves locked out of their accounts without warning, with all their pins gone.

Frustrated users have taken to platforms like X and r/Pinterest to vent. The comment sections on Pinterest's official Instagram and TikTok pages are flooded with pleas from angry users demanding answers. 'I had a beautiful Pinterest board with over 26,000 of the most beautiful images and my account was just permanently banned,' one user posted on X. 'Pinterest you will be dealt with.' Another, who reportedly lost an account they had maintained for seven years, wrote, 'I feel like my library of Alexandria has been burned down.'

For creatives, Pinterest isn't just for fun—it's also a professional tool. 'It's the industry standard to present a moodboard before any project goes into action, and the sheer amount of valuable references I've lost out on since being banned is hard to describe,' wrote one Reddit user. 'I've had to postpone shoots and scramble to reassemble projects. Years and years of curating down the drain and multiple projects stuck in limbo.'

Those who've lost accounts claim they've done nothing wrong. 'I made a new account, didn't even add anything yet. Get an email saying I'm banned/suspended,' one user posted on X. 'I try to dispute it and your typical bot responds saying there's nothing it can do.' Others are now afraid to even open their accounts for fear of what they might find.

Many are pointing the finger at AI. Pinterest's Help Center states that it uses AI in 'improving content moderation,' a system it has relied on for years to enforce its Community Guidelines. Like many platforms, Pinterest uses a mix of AI and human review.

A Pinterest spokesperson tells Fast Company: 'Pinterest has long-established public Community Guidelines that clearly outline what is and isn't allowed on the platform. We're committed to building a safer and more positive platform, and enforce these policies rigorously and continuously. Users who believe their account may have been deactivated mistakenly may submit an appeal.'
