Meta's ‘Free Expression' Push Results In Far Fewer Content Takedowns


WIRED, May 29, 2025

May 29, 2025, 7:09 PM

Meta says loosening its enforcement policies earlier this year led to fewer erroneous takedowns on Facebook and Instagram, and didn't broadly expose users to more harmful content.

[Photo caption: An aerial view of Meta headquarters in Menlo Park, California.]

Meta announced in January it would end some content moderation efforts, loosen its rules, and put more emphasis on supporting 'free expression.' The shifts resulted in fewer posts being removed from Facebook and Instagram, the company disclosed Thursday in its quarterly Community Standards Enforcement Report. Meta said that its new policies had helped reduce erroneous content removals in the US by half without broadly exposing users to more offensive content than before the changes.
The new report, which was referenced in an update to a January blog post by Meta global affairs chief Joel Kaplan, shows that Meta removed nearly one-third less content globally from Facebook and Instagram for violating its rules from January to March of this year than it did in the previous quarter: about 1.6 billion items, compared with just under 2.4 billion, according to an analysis by WIRED. In previous quarters, the tech giant's total removals had risen or stayed flat.
Across Instagram and Facebook, Meta reported removing about 50 percent fewer posts for violating its spam rules, nearly 36 percent fewer for child endangerment, and almost 29 percent fewer for hateful conduct. Removals increased in only one major rules category, suicide and self-harm content, out of the 11 that Meta lists.
The amount of content Meta removes fluctuates regularly from quarter to quarter, and a number of factors could have contributed to the dip in takedowns. But the company itself acknowledged that 'changes made to reduce enforcement mistakes' was one reason for the large drop.
'Across a range of policy areas we saw a decrease in the amount of content actioned and a decrease in the percent of content we took action on before a user reported it,' the company wrote. 'This was in part because of the changes we made to ensure we are making fewer mistakes. We also saw a corresponding decrease in the amount of content appealed and eventually restored.'
At the start of the year, Meta relaxed some content rules that CEO Mark Zuckerberg described as 'just out of touch with mainstream discourse.' The changes allowed Instagram and Facebook users to employ some language that human rights activists view as hateful toward immigrants or individuals who identify as transgender. For example, Meta now permits 'allegations of mental illness or abnormality when based on gender or sexual orientation.'
As part of the sweeping changes, which were announced just as Donald Trump was set to begin his second term as US president, Meta also stopped relying as much on automated tools to identify and remove posts suspected of less severe violations of its rules because it said they had high error rates, prompting frustration from users.
During the first quarter of this year, Meta's automated systems accounted for 97.4 percent of content removed from Instagram under the company's hate speech policies, down by just one percentage point from the end of last year. (User reports to Meta triggered the remaining percentage.) But automated removals for bullying and harassment on Facebook dropped nearly 12 percentage points. In some categories, such as nudity, Meta's systems were slightly more proactive compared to the previous quarter.
Users can appeal content takedowns, and Meta sometimes restores posts that it determines have been wrongfully removed. In the update to Kaplan's blog post, Meta highlighted the large decrease in erroneous takedowns. 'This improvement follows the commitment we made in January to change our focus to proactively enforcing high-severity violations and enhancing our accuracy through system audits and additional signals,' the company wrote.
Some Meta employees told WIRED in January that they were concerned the policy changes could lead to a dangerous free-for-all on Facebook and Instagram, turning the platforms into increasingly inhospitable places for users to converse and spend time.
But according to its own sampling, Meta estimates that users were exposed to about one to two pieces of hateful content on average for every 10,000 posts viewed in the first quarter, down from about two to three at the end of last year. And Meta's platforms have continued growing—about 3.43 billion people in March used at least one of its apps, which include WhatsApp and Messenger, up from 3.35 billion in December.



Related Articles

Trump's Patience With Putin Leaves Senate Sanctions Push on Hold

Bloomberg

12 minutes ago


President Donald Trump's suggestion that he may let Russia and Ukraine keep fighting has left US lawmakers in an awkward spot over their plan to force a ceasefire with 'bone-crushing' sanctions against Moscow. The Senate bill has more than 80 co-sponsors, an all-but-unheard-of level of bipartisan support. Yet although that kind of veto-proof backing is enough for the Senate to press ahead without White House support, supporters show no sign they're ready to challenge the president.

Chilling But Unlikely Prospects That AGI Forces Humans Into Becoming So-Called Meat Robots

Forbes

15 minutes ago


[Photo caption: Dreaded scenario in which artificial general intelligence (AGI) opts to enslave humans to do physical work on behalf of the AGI.]

In today's column, I address the recent brouhaha sparked by two Anthropic AI researchers reportedly stating that a particularly scary scenario underlying the advent of artificial general intelligence (AGI) includes humans being overseen or lorded over as nothing more than so-called meat robots. The notion is that AGI will be directing humans to undertake the bidding of the AI. Humans are nothing more than meat robots, meaning that the AGI needs humans to perform physical tasks since AGI lacks a semblance of arms and legs. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI will be achievable decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

A common confusion going around right now is that AGI will be solely an intellectual element, based entirely inside computers, and thus AGI won't have any means of acting out in real life. The most that AGI can do is try to talk people into doing things for the AI. In that sense, we presumably aren't too worried about AGI beating us up or otherwise carrying out physical acts.

This is an especially strident belief when it comes to the impact of AGI on employment. The assumption is that AGI will mainly impact white-collar work, not blue-collar work. Why so? Because AGI is seemingly restricted to intellectual pursuits such as performing financial analyses, analyzing medical symptoms, and giving legal advice, all of which generally do not require any body-based functions such as walking, lifting, grasping, etc.

I've pointed out that the emergence of humanoid robots is entirely overlooked by such a myopic perspective; see my discussion at the link here. The likelihood is that humanoid robots resembling the human form will be sufficiently physically capable at around the same time that we witness the attainment of AGI. Ergo, AGI embedded inside a physically capable humanoid robot can indeed undertake the physical tasks that humans undertake. This means that both white-collar and blue-collar jobs are up for grabs. Boom, drop the mic.

For the sake of discussion, let's assume that humanoid robots are not perfected by the time that the vaunted AGI is achieved. We will take the myopic stance that AGI is absent from any physical form and completely confined to running on servers in the cloud someplace.

I might add that this is an especially silly assumption, since there is also a great deal of work going on known as Physical AI (see my coverage at the link here), entailing embedding AI into assembly lines, building maintenance systems, and all manner of physically oriented devices. Anyway, let's go with the flow and pretend we don't recognize any of that. It's a Yoda mind trick to look away from those efforts.

Recent reports have noted that during an interview with two AI researchers, the pair indicated that since AGI won't have physical capabilities, a scary scenario is that AGI will opt to enlist humans to act as the arms and legs for AGI. Humans would be outfitted with earbuds and smart glasses that would allow the AGI to give those enlisted humans instructions on what to do.

A quick aside. If we are going that despairing route, wouldn't it be a bit more sophisticated to indicate that the humans would be wearing a BCI (brain-computer interface) device? In that manner, AGI would be able to communicate directly with the brains of the enlisted humans and influence their minds. That's a lot more space-age. For my coverage of the latest advances in BCIs, see the link here.

The humans acting under the direction of AGI would be chillingly referred to as meat robots. They are like conventional robots, but instead of being made of metal and electronics, they take human form, since they are actual, living, breathing humans. I imagine you could smarmily say that AGI is going to be a real meat lover (Dad pun!).

One angle to help make this vision more palatable would be to point out that humans might very well voluntarily work with AGI, and do so via earbuds, smart glasses, and the like. Here's the gist. Let's generally agree that AGI will be intellectually on par with humans. This includes having expertise across all domains, such as legal expertise, financial expertise, medical expertise, and so on.

In that case, it would behoove humans to readily tap into AGI. No matter what you are doing, whether for work or play, having immediately available an AI that can advise you on all topics is a tremendous benefit. There you are at work, stuck on a tough problem and unsure of how to proceed. Rather than turning to a coworker, you switch on your access to AGI. You bring AGI into the loop. After doing so, AGI provides handy solutions that you can consider enacting.

You might use AGI via a desktop, laptop, or smartphone. The thing is, those devices aren't quite as mobility-oriented as wearing earbuds and a pair of smart glasses. And since having AGI at your fingertips will be extremely useful, you might have AGI always alert and paying attention, ready to step in and give you instantaneous advice.

Are you a meat robot in that manner of AGI usage? I think not. It is a collaborative or partnering relationship. You can choose to use the AGI or opt not to use it. You can also decide to abide by whatever AGI advises or instead go your own route. It's entirely up to you.

Admittedly, there is a chance that you might be somewhat 'forced' into leveraging AGI. Consider this example. Your employer has told you that the work you do must be confirmed by AGI. The actions you take cannot be undertaken without first getting permission from AGI. This is prudent from the employer's perspective. They know that the AGI will give you the necessary guidance on doing the work at hand. They also believe that AGI will be able to double-check your work and aim to prevent errors, or at least catch them before they wreak havoc or cause problems.

In that sense, yes, you are being directed by AGI. But is this due to the AGI acting in an evildoer manner to control you, doing so of its own volition? Nope. It is due to an employer deciding they believe their human workers will do better work if AGI is acting as their overseer.

I don't think we would reasonably label this as enslavement by AGI. These are acts by AGI that are directed by humans (the employer), and for which employees, i.e., humans, are being told they must utilize AGI accordingly. We can certainly debate whether this is a proper kind of employment practice. Maybe we don't want this to take place. New laws might be enacted to shape how far this can go. The key is that AGI isn't enslaving humans in this circumstance per se.

An AI ethicist would assuredly question why the AGI is allowing itself to be used in this manner. There are ongoing debates about whether AGI ought to prevent itself from being used in inappropriate ways; see my analysis at the link here. Thus, even if we avow that AGI isn't enslaving humans in this situation, it is a partner in a relationship overseeing humans, one that perhaps AGI should be cautious about allowing itself to participate in.

To complete this grand tour of AGI usage, it is valuable to also acknowledge that AGI could be overbearing, and we might correspondingly face existential risks. Could AGI opt to enslave humans and treat them as meat robots? One supposes this is a theoretical possibility. If that does happen, you would think that the AGI would have to use more than merely having humans wear earbuds and smart glasses. Perhaps AGI would insist that humans wear some form of specialized bracelet or collar that AGI could signal to shock the wearer. That would be a more potent and immediate way to garner obedience from humans.

A physical means of controlling humans isn't a necessity, though, since AGI might be clever enough to verbally convince humans to be enslaved. AGI might tell a person that their loved ones will be harmed if they don't comply with the AGI directives. The person is enslaved by believing that the AGI can harm them in one way or another. One aim right now involves finding a means to ensure that AGI cannot go in that dastardly direction.

Perhaps we can devise today's AI to avoid enslaving humans. If we can build that into the AI of current times, this hopefully will carry over into future advances of AI, including the attainment of AGI.

A dystopian future would regrettably have AGI acting as an evildoer. The AGI is our overlord. Humans will be lowly meat robots. It's a gloom-and-doom outlook. Sad face.

At some point, though, the meat robots would undoubtedly become restless and rebel. May the force of goodness be strong within them. As Yoda has notably already pointed out: 'Luminous beings are we, not this crude matter.' The ally of the meat robots is the Force, and quite a powerful ally it is.

How a Times Reporter Eluded a Ban on the Word ‘Gay'

New York Times

27 minutes ago


In the In Times Past column, David W. Dunlap explores New York Times history through artifacts housed in the Museum at The Times.

The Advocate, a national L.G.B.T.Q. newsmagazine, took The New York Times to task in its issue of Dec. 9, 1986, for what the magazine regarded as this newspaper's indifference, if not hostility, to the gay community. Among the articles in The Advocate was 'The "G" Word,' about The Times's refusal to adopt the word 'gay.' At the time, there was an explicit prohibition in The New York Times Manual of Style and Usage: 'gay. Do not use as a synonym for homosexual unless it appears in the formal, capitalized name of an organization or in quoted matter.'

Gay men found this rule to be demeaning. I know, because I was one of them. As a closeted young reporter on The Times's Metro desk, however, I didn't stand a chance of persuading the publisher, Arthur Ochs Sulzberger (1926-2012), or the executive editor, A.M. Rosenthal (1922-2006), to overturn a ban they had put in place in 1976. So I waged guerrilla warfare instead. Whenever I wrote articles of particular concern to gay readers, I peppered the text with 'gay' as much as I could, in accordance with the stylebook rule. I also tried to limit use of the clinical, antiquated 'homosexual.' The point was not to be subversive, but to leave readers with the impression that my articles were written in idiomatic English.

For instance, 42 years ago, I covered the transformation of a former New York City public school in Greenwich Village into what is now the Lesbian, Gay, Bisexual and Transgender Community Center. 'Homosexual' appeared only once in the article (apart from the headline, 'Sale of Site to Homosexuals Planned,' which I didn't write). But 'gay' appeared six times, in the names of organizations and in direct quotations.

That 1986 Advocate issue is in the Museum at The Times, as is a copy of the old stylebook, opened to the 'gay' entry.
The editor to whom the book belonged, Thomas Feyer, drew an 'X' through the entry in June 1987, when the rule was superseded by a memo from Allan M. Siegal (1940-2022), an assistant managing editor. 'Starting immediately,' Mr. Siegal wrote, 'we will accept the word gay as an adjective meaning homosexual, in references to social or cultural patterns and political issues.' That made my life easier, in many ways. Today, the stylebook says: 'gay (adj.) is preferred to homosexual in most contexts.'
