
Zuckerberg fired the fact-checkers. We tested their replacement.
I drafted a note debunking images claiming to show Trump sleeping during the ceremony. I cited live-stream footage and corroborating time stamps. I linked to research by Snopes.
None of that mattered. My community note never got added to a post because not enough other users voted it was 'helpful.'
Over four months, I've drafted 65 notes debunking conspiracy theories on topics ranging from airplane crashes to Ben & Jerry's ice cream. I've tried to flag fake artificial intelligence-generated video clips, viral hoax security threats and false reports about an ICE partnership with DoorDash.
Only three of them got published, all related to July's Texas floods. That's an overall success rate of less than 5 percent. My proposed notes were on topics other news outlets — including Snopes, NewsGuard and Bloomberg News — had decided were worth publishing their own fact checks about.
Zuckerberg fired professional fact-checkers, leaving users to fight falsehoods with community notes. As the main line of defense against hoaxes and deliberate liars exploiting our attention, community notes appear — so far — nowhere near up to the task.
Feeds filled with inaccurate information matter for the 54 percent of American adults who, according to Pew Research Center, get news from social media.
Zuckerberg's decision to fire fact-checkers was widely criticized as a craven attempt to appeal to President Donald Trump. He said Meta was adopting the crowdsourced community notes system used by Elon Musk's X because users would be more trustworthy and less biased than fact-checkers. Before notes get published to posts, enough users have to agree they're helpful. But agreement turns out to be more complicated than it sounds.
Meta says my test can't be used to evaluate its notes program, which has been public in the United States for more than four months. 'Community Notes is a brand new product that's still in the test-and-learn phase, and it takes time to build a robust contributor community. While there are notes continuously publishing across Threads, Instagram and Facebook, not every note will be broadly rated as helpful by the community — even if those notes were written by a Washington Post columnist,' said spokeswoman Erica Sackin.
Meta declined to answer my questions about how many notes it has published, how many participants are in the program or whether there's evidence it is making an impact, despite promising to be transparent about the program.
Alexios Mantzarlis, director of the Security, Trust and Safety Initiative at Cornell Tech, published another independent evaluation of community notes in June and said it was 'not yet ready for prime time.' He found that only a handful of proposed notes provided truly valuable context and some were inaccurate.
'It is concerning that four months in, they have shared no updates,' Mantzarlis told me.
This matters because community notes programs are spreading beyond X and Meta as a way for Big Tech to outsource the politically fraught work of moderating content. YouTube said it was testing a version last year. And TikTok said in April it was testing a system called Footnotes.
If community notes are becoming a standard for fighting falsehoods, we need to be honest about what they can and cannot do.
I volunteered to join Meta's community notes program and was initially allowed in on Threads, then eventually on Instagram and Facebook, too.
Volunteers are able to tap a few buttons on any post from a U.S. user and suggest a note, complete with text and a link as proof. I deliberately drafted notes that crossed the political spectrum. For example, I suggested notes on a fabricated image of Pam Bondi seen over half a million times, as well as a false claim about the wealth of Alexandria Ocasio-Cortez seen a quarter of a million times.
I also voted on dozens of notes drafted by others, rating them 'helpful' or 'not helpful.'
Community notes have upsides. I've seen users trying to tackle a wider set of topics than a traditional fact-checker might be an expert in. Some are trying to fight lies in plain, easy-to-understand language.
But I discovered problems quickly. Sometimes, posts I identified for notes wouldn't accept them because they were written by accounts outside the U.S. (which are excluded from Meta's initial program) or had other technical problems. I've seen notes suggested by others that were low quality, some with more opinions than facts, or that sourced to 'Google it.'
The biggest challenge has been cutting through. I started contributing in April, and by early July nothing I had proposed had been published. Only one note that I had rated as helpful, written by someone else, had been published.
Meta says it is not cherry-picking which notes get published. It uses a 'bridging algorithm' to try to determine which notes are actually helpful, as opposed to just popular. This formula requires contributors who have disagreed with each other on past notes to agree that a new note is helpful.
In theory, this is a good thing. You don't want to publish notes that contain falsehoods or are simply attacks on particular people or ideas.
However, agreement is tough to find. Notes I couldn't get published included facts that shouldn't be up for debate, such as identifying AI deepfakes. This system also doesn't lend itself to the unique risks of breaking news and fast-moving viral conspiracies. (Meta does still remove some falsehoods directly, but only in specific instances when they could lead to physical harm or interfere with elections.)
Meta says it is using the same algorithm as X, which the rival social network has published. 'It's a very, very conservative system,' said Kolina Koltai, who helped develop community notes at Twitter, now called X. The algorithm is better at avoiding bad stuff than ensuring the good stuff actually gets published, she says.
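To make that conservatism concrete, here is a minimal sketch of a bridging-style scoring rule. It is an illustration, not Meta's or X's actual code: X's open-sourced system estimates helpfulness with a matrix-factorization model, while the helper functions, the viewpoint heuristic and the thresholds below are simplified assumptions chosen only to show why requiring cross-viewpoint agreement suppresses publication.

```python
# Illustrative sketch of a bridging-style rule (not Meta's or X's real code).
# Idea: estimate each rater's rough "leaning" from past votes, then publish a
# note only if raters from *both* leanings independently rate it helpful.

def rater_viewpoint(history):
    """Map a rater's past helpful/not-helpful votes to a crude -1..+1 leaning."""
    if not history:
        return 0.0
    return sum(1 if h else -1 for h in history) / len(history)

def note_status(votes, rater_history, min_votes=5, min_helpful=0.7):
    """votes: list of (rater_id, is_helpful). Publish only on cross-group consensus."""
    if len(votes) < min_votes:
        return "NEEDS_MORE_RATINGS"
    group_a = [h for r, h in votes if rater_viewpoint(rater_history.get(r, [])) < 0]
    group_b = [h for r, h in votes if rater_viewpoint(rater_history.get(r, [])) >= 0]
    # Require a supermajority of helpful votes within each group, not just overall.
    for group in (group_a, group_b):
        if not group or sum(group) / len(group) < min_helpful:
            return "NOT_HELPFUL_ENOUGH"
    return "HELPFUL"

# Hypothetical example: four of five raters call the note helpful, but it still
# fails because one viewpoint group's support stays below the 70 percent bar.
history = {"a": [True, True], "b": [False, False], "c": [True],
           "d": [False], "e": [True, True, False]}
votes = [("a", True), ("b", True), ("c", True), ("d", True), ("e", False)]
print(note_status(votes, history))  # -> "NOT_HELPFUL_ENOUGH"
```

Even in this toy version, a note most raters like can stall if its support isn't spread across viewpoints, which is the behavior Koltai describes as conservative.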
Koltai, who is now a Bellingcat journalist, says her own personal publishing rate on X is about 30 percent — low, but still multiples higher than mine on Meta's social networks.
Is it possible my notes were too one-sided? I shared what I'd been proposing with Mantzarlis, who was also the founding director of the International Fact-Checking Network. He said they used 'non-polarizing language, clear context, and high quality sources.'
It's possible Meta hasn't been able to recruit a wide enough variety of users to meet the bridging algorithm's requirements. It could also be that the community note volunteers that Meta does have just aren't very engaged. I kept contributing for months, even though nothing I wrote was getting published. I can't imagine most unpaid users would bother to stick around.
Some of these factors could improve as Meta's program matures. But since Zuckerberg already fired the professional fact-checkers, the community notes system isn't just a test — it's our current main line of defense.
For users, it's a good idea to bolster our own defenses against social media lies.
But our best hope is that the teams inside Meta who care about the truth can evolve community notes to make them more effective.
It could begin allowing people to draft notes on posts written by overseas accounts. It could develop a triage system that moves notes on certain topics, or ones likely to cause more harm, higher in the queue. It could also improve engagement by giving contributors clout — maybe a badge? — for repeat submissions.
Still, the reality is, randomly selected volunteers just can't do this work alone.
Twitter originally launched community notes alongside professional fact-checkers. 'It was never, ever meant to be the solo thing,' says Koltai.
Musk also eventually ended X's paid fact-checking program, and community notes have struggled to keep up with the tide of falsehoods. An analysis last year ahead of the presidential election found the majority of accurate notes proposed by X users on political posts were never shown to the public.
Fact-checkers might make mistakes, as Zuckerberg has said, but they can also check each other as part of a community.
Meta could give experts specialized profiles within community notes once they've cleared some kind of credential — and maybe also let them get paid for their work. 'Professionals and the crowd are not contradictory, they can be complementary,' says Mantzarlis.
Andrea Jimenez contributed this report.
I've never had particularly sensitive nipples—except for the first time I went to a clothing-optional beach and failed to use enough sunblock—so I was excited to give the Lovense Gemini a try. No, it has nothing to do with Google's AI chatbot. These are vibrating nipple clamps from Lovense, and I've had great experience with many of the company's sex toys. Many of my partners have had sensitive nipples, so I was curious if I could join the ranks with the Gemini. Usually sold for less than $80, they're an inexpensive way to spice up the bedroom. Great for Enthusiasts and Beginners Courtesy of Amazon After I fully charged the Gemini, which takes about an hour or so, I used it on myself. What I love is that I can clip the base of the clamps to my bra (there's also a cord you can put around your neck if you don't wear a bra), making them 100 percent hands-free. The Gemini is app-controlled (Android, iOS), so I can lie back and play with the slew of intensities and patterns—surprisingly fun even though I was solo. When I added a bullet vibrator to the mix, the vibrations all over made everything even more exciting in a low-key, kinky sort of way. I also appreciate that the clamps are adjustable, so if you love a tight squeeze, a barely there hold, or somewhere in between, you have options. Unlike some of the more intimate toys I've reviewed, like the Luxus Couples Vibrator or the Lelo Ora 3, I was able to get input from a handful of people with the Gemini. Especially people I know who really (like, really ) love nipple play. As much as I enjoyed the vibrations and found the sensation interesting, the Gemini made me realize that, ultimately, I prefer that my nipples get attention from a partner's tongue or with a slight graze of their teeth. Of the three nipple-play aficionados I used the Gemini with—one straight and two gay men—all of them became immediate fans of the toy. They liked the hands-free design and the simplicity of the Lovense Remote app. To paint a picture of this experiment, I had the app in hand, and three men sat on the couch across from me, waiting for their turn to try the Gemini. (It's the closest I'll probably ever get to an orgy.) Each man had a different preference in intensity and patterns, as well as the tightness of the clamps, so I was able to see just how much the Gemini can be the perfect fit for anyone who's into nipple play. Easy to Use The Gemini is well-built, too. Not only is it waterproof—for all those times you're craving a proper buzz on your nips while in the shower—but you also get a whopping two hours of run time when it's fully charged. The device is discreet, in case you're into public play where one partner enjoys the vibrations, while the other controls what they want you to feel.