Fact-Checking Is Out, ‘Community Notes' Are In

The Atlantic, May 5, 2025

One Friday in April, Meta's chief global affairs officer, Joel Kaplan, announced that the process of removing fact-checking from Facebook, Threads, and Instagram was nearly complete. By the following Monday, there would be 'no new fact checks and no fact checkers' working across these platforms, which are used by billions of people globally—no professionals marking disinformation about vaccines or stolen elections. Elon Musk, owner of X—a rival platform with an infamously permissive approach to content moderation—replied to Kaplan, writing, 'Cool.'
Meta, then just called Facebook, began its fact-checking program in December 2016, after President Donald Trump was first elected and the social network was criticized for allowing the rampant spread of fake news. The company will still take action against many kinds of problematic content—threats of violence, for example. But it has left the job of patrolling many kinds of misinformation to users themselves. Now, if users are so compelled, they can turn to a Community Notes program, which allows regular people to officially contradict one another's posts with clarifying or corrective supplementary text. A Facebook post stating that the sun has changed color might receive a useful correction, but only if someone decided to write one and submit it for consideration. Almost anyone can sign up for the program (Meta says users must be over 18 and have accounts 'in good standing'), making it, in theory, an egalitarian approach to content moderation.
Meta CEO Mark Zuckerberg has called the pivot on misinformation a return to the company's 'roots,' with Facebook and Instagram as sites of 'free expression.' He announced the decision to adopt Community Notes back in January, and explicitly framed the move as a response to the 2024 elections, which he described as a 'cultural tipping point towards once again prioritizing speech.' Less explicitly, Meta's shift to Community Notes is a response to years of being criticized from both sides of the aisle over the company's approach to misinformation. Near the end of his last term, Trump targeted Facebook and other online platforms with an executive order accusing them of 'selective censorship that is harming our national discourse,' and during the Biden administration, Zuckerberg said he was pressured to take down more posts about COVID than he wanted to.
Meta's abandonment of traditional fact-checking may be cynical, but misinformation is also an intractable problem. Fact-checking assumes that if you can get a trustworthy source to provide better information, you can save people from believing false claims. But people have different ideas of what makes a trustworthy source, and there are times when people want to believe wrong things. How can you stop them? And, the second question that platforms are now asking themselves: How hard should you try?
Community Notes programs—originally invented in 2021 by a team at X, back when it was still called Twitter—are a somewhat perplexing attempt at solving the problem. It seems to rely on a quaint, naive idea of how people behave online: Let's just talk it out! Reasonable debate will prevail! But, to the credit of social-media platforms, the approach is not as starry-eyed as it seems.
The chief innovation of Community Notes is that the annotations are generated by consensus among people who might otherwise see things differently. Not every note that is written actually appears under a given post; instead, they are assessed using 'bridging' algorithms, which are meant to 'bridge' divides by accounting for what's called 'diverse positive feedback.' This means that a potential note is valued more highly and is more likely to appear on a post if it is rated 'helpful' by a wide array of people who have demonstrated different biases at other times. The basics of this system have quickly become a new industry standard. Shortly after Meta's announcement about the end of fact-checking, TikTok said that it would be testing its own version of Community Notes, called Footnotes—though unlike Meta and X, TikTok will keep using a formal fact-checking program as well.
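The idea of 'diverse positive feedback' can be made concrete with a toy sketch. This is not X's actual ranking code (the real system, published with the Birdwatch paper, uses matrix factorization over rating data); it is a minimal illustration, under the assumption that each rater can be assigned to one of two camps, of why one-sided approval is not enough to surface a note:

```python
# Toy illustration of "diverse positive feedback": a note is promoted
# only when raters from *both* camps tend to find it helpful.
def bridging_score(ratings):
    """ratings: list of (camp, helpful) pairs, where camp is "A" or "B".

    Returns the minimum of the two camps' helpfulness rates, so that
    enthusiasm from one side alone can never carry a note.
    """
    by_camp = {"A": [], "B": []}
    for camp, helpful in ratings:
        by_camp[camp].append(1 if helpful else 0)
    rates = [sum(v) / len(v) if v else 0.0 for v in by_camp.values()]
    return min(rates)

# Cross-partisan agreement clears the bar; one-sided approval does not.
mixed = [("A", True), ("A", True), ("B", True), ("B", False)]
one_sided = [("A", True), ("A", True), ("A", True), ("B", False)]
print(bridging_score(mixed))      # 0.5
print(bridging_score(one_sided))  # 0.0
```

Taking the minimum across camps is the simplest way to encode the bridging intuition; it also makes visible the limitation discussed later in this piece—if one camp never rates a note helpful, the score stays at zero and no note appears.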
These tools are 'a good idea and do more good than harm,' Paul Friedl, a researcher at Humboldt University, in Berlin, told me. Friedl co-authored a 2024 paper on decentralized content moderation for Internet Policy Review, which discussed X's Community Notes among other examples, including Reddit's forums and old Usenet messaging threads. A major benefit he and his co-author cited was that these programs may help create a 'culture of responsibility' by encouraging communities 'to reflect, debate, and agree' on the purpose of whatever online space they're using.
Platforms certainly have good reasons to embrace the model. The first, according to Friedl, is the cost. Rather than employing fact-checkers around the world, these programs require only a simple algorithm. Users do the work for free. The second is that people like them—they often find the context added to posts by fellow users to be helpful and interesting. The third is politics. For the past decade, platforms—and Meta in particular—have been highly reactive to political events, moving from crisis to crisis and angering critics in the process. When Facebook first started flagging fake news, it was perceived as too little, too late by Democrats and reckless censorship by Republicans. It significantly expanded its fact-checking program in 2020 to deal with rampant misinformation (often spread by Trump) about the coronavirus pandemic and that year's election. From March 1, 2020, to Election Day that year, according to Facebook's self-reporting, the company displayed fact-checking labels on more than 180 million pieces of content. Again, this was perceived as both too much and not enough. With a notes-based system, platforms can sidestep the hassle of public scrutiny over what is or isn't fact-checked and why and cleanly remove themselves from drama. They avoid making contentious decisions, Friedl said, which helps in an effort 'not to lose cultural capital with any user bases.'
John Stoll, the recently hired head of news at X, told me something similar about Community Notes. The tool is the 'best solution' to misinformation, he said, because it takes 'a sledgehammer to a black box.' X's program allows users to download all notes and their voting history in enormous spreadsheets. By making moderation visible and collaborative, instead of secretive and unaccountable, he argued, X has discovered how to do things in 'the most equitable, fair, and most pro-free-speech way.' ('Free speech' on X, it should be noted, has also meant platforming white supremacists and other hateful users who were previously banned under Twitter's old rules.)
People across the political spectrum do seem to trust notes more than they do standard misinformation flags. That may be because notes feel more organic and tend to be more detailed. In the 2024 paper, Friedl and his co-author wrote that Community Notes give responsibilities 'to those most intimately aware of the intricacies of specific online communities.' Those people may also be able to work faster than traditional fact-checkers—X claims that notes usually appear in a matter of hours, while a complicated independent fact-check can take days.
Yet all of these advantages have their limits. Community Notes is really best suited to nitpicking individual instances of people lying or just being wrong. It cannot counter sophisticated, large-scale disinformation campaigns or penalize repeated bad actors (as the old fact-checking regime did). When Twitter's early version of Community Notes, then called Birdwatch, debuted, the details of the mechanism were made public in a paper that acknowledged another important limitation: The algorithm 'needs some cross-partisan agreement to function,' which may, at times, be impossible to find. If there is no consensus, there are no notes.
Musk himself has provided a good case study for this issue. A few Community Notes have vanished from Musk's posts. It's possible that he had them removed—at times, he has seemed to resent the power that X has given its users through the program, suggesting that the system is 'being gamed' and chiding users for citing 'legacy media'—but the disappearances could instead be an algorithmic issue. An influx of either Elon haters or Elon fans could ruin the consensus and the notes' helpfulness ratings, leading them to disappear. (When I asked about this problem, Stoll told me, 'We're, as a company, 100 percent committed to and in love with Community Notes,' but he did not comment on what had happened to the notes removed from Musk's posts.)
The early Birdwatch paper also noted that the system might get really, really good at moderating 'trivial topics.' That is the tool's core weakness and its core strength. Notes, because they are written and voted on by people with numerous niche interests and fixations, can appear on anything. While you'll see them on something classically wrong and dangerous, such as conspiracy theories about Barack Obama's birth certificate, you'll also see them on things that are ridiculous and harmless, such as a cute video of a hedgehog. (The caption for a hedgehog video I saw last week suggested that a stumbling hedgehog was being 'helped' across a street by a crow; the Community Note clarified that the crow was probably trying to kill it, and the original poster deleted the post.) At times, the disputes can be wildly annoying or pedantic and underscore just how severe a waste of your one life it is to be online at all. I laughed recently at an X post: 'People really log on here to get upset at posts and spend their time writing entire community notes that amount to "katy perry isn't an astronaut."'
The upside, though, is that when anything can be annotated, it feels like less of a big deal or a grand conspiracy when something is. Formal fact-checking programs can feel punitive and draconian, and they give people something to rally against; notes come from peers. This makes receiving one potentially more embarrassing than receiving a traditional fact-check as well; early research has shown that people are likely to delete their misleading posts when they receive Community Notes.
The optimistic take on notes-type systems is that they make use of material that already exists and with which everyone is already acquainted. People already correct each other online all the time: On nearly any TikTok in which someone is saying something obviously wrong, the top comment will be from another person pointing this out. It becomes the top comment because other users 'like' it, which bumps it up. I already instinctively look to the comment section whenever I hear something on TikTok and think, That can't be true, right?
For better or worse, the idea of letting the crowd decide what needs correcting is a throwback to the era of internet forums, where 'actually' culture got its start. But this era of content moderation will not last forever, just as the previous one didn't. By outright saying that a cultural and political vibe, of sorts, inspired the change, Meta has already suggested as much. We live on the 'actually' internet for now. Whenever the climate shifts—or whenever the heads of the platforms perceive it to shift—we'll find ourselves someplace else.
