
How the Secret Algorithms Behind Social Media Actually Work
In January 2021, a few Facebook employees posted an article on the company's engineering blog purporting to explain the news feed algorithm that determines which of the countless posts available each user will see and the order in which they will see them. The article includes a fancy-looking formula central to the algorithm, but the formula is nearly impossible to decipher since the authors didn't bother to explain half the symbols in it.
When I first read that blog post, I felt like the cave dwellers in Plato's famous allegory seeing shadows dance upon the wall—flat, colorless projections of a richer world existing out of sight. I knew the obfuscated formula was not the full story; I just didn't know how to get outside the cave and find the real math behind it.
Eight months later, Facebook was rocked by one of the biggest scandals ever to strike the tech industry. Frances Haugen, a Facebook product manager turned whistleblower, snuck over ten thousand pages of documents and internal messages out of Facebook headquarters. She leaked these to a handful of media outlets.
A barrage of stories soon ran, largely focusing on the most alarming, attention-grabbing revelations. Internal studies documented Instagram's harmful impact on the mental health of vulnerable teen girls. A secret whitelist program exempted VIP users from the moderation system the rest of us face. Mark Zuckerberg and other executives were allegedly unwilling to stem the flood of dangerous extremist content propagated on the platform.
Will Oremus, a tech writer for The Washington Post, called me and explained that he had something else in his sights. He wanted to lift the veil on the formula in the engineering blog post, and he realized that Haugen's documents were the key to doing so.
It turns out Facebook engineers have assigned a point value to each type of engagement users can perform on a post (liking, commenting, resharing, etc.). For each post you could be shown, these point values are multiplied by the probability that the algorithm thinks you'll perform that form of engagement. These multiplied pairs of numbers are added up, and the total is the post's personalized score for you. There's a bit more to it than this, but in broad strokes your feed is created by sorting posts according to these scores, from highest to lowest.
Here's what this looks like in symbols. Suppose we have a specific user and a specific post in mind, and we write Plike for the probability that the user likes the post, Plove for the probability that they tap the heart emoji, Pangry for the probability that they tap the angry emoji, Pcomment for the probability that they comment on the post, and Pshare for the probability that they share it. (There are other forms of engagement, but let's just focus on these for now.) And let's write Vlike, Vlove, and so on for the point values assigned to these engagements. Then the magic formula is:
Score = Vlike × Plike + Vlove × Plove + Vangry × Pangry + Vcomment × Pcomment + Vshare × Pshare
The idea is that the algorithm wants to surface the posts you're most likely to engage with—but there are several forms of engagement, not just one. It wouldn't make sense to treat all forms of engagement equally; a reshare really does seem like stronger engagement than a like. So the different forms of engagement are weighted differently, and a weighted sum combines them into an overall measure of anticipated engagement.
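Here's what that weighted sum looks like as code: a minimal Python sketch whose structure mirrors the formula, though every concrete name and number in it is a placeholder of mine, since the real point values and probability estimates are Facebook's secrets.

```python
# Minimal sketch of the weighted-sum score from the formula above.
# The structure mirrors the leaked formula; the engagement names and
# any concrete values are placeholders, not Facebook's real numbers.

def score(point_values: dict[str, float], probabilities: dict[str, float]) -> float:
    """Sum of V_engagement x P_engagement over all engagement types."""
    return sum(v * probabilities.get(engagement, 0.0)  # missing P counts as 0
               for engagement, v in point_values.items())
```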
Let's try this with some concrete numbers. Suppose a like is worth one point, a heart emoji is worth five points, and a comment is worth thirty points. And suppose one of your friends posts a picture of the puppy they just adopted, while another friend writes a post about a new job they landed. You're fond of both friends, but let's be real: you're more excited about the puppy than the job. If there's a 50% chance you'll like the puppy pic, a 20% chance you'll love it, and a 10% chance you'll comment on it, then the puppy post scores 1 × 0.5 + 5 × 0.2 + 30 × 0.1 = 4.5. If there's a 20% chance you'll like the job announcement post, a 10% chance you'll love it, and a 5% chance you'll comment on it, then its score is 1 × 0.2 + 5 × 0.1 + 30 × 0.05 = 2.2. The puppy pic wins and is placed higher in your feed than the job announcement.
Now suppose there's also a post by your uncle falsely claiming that COVID was caused by 5G towers. Let's go ahead and give a 0% chance of you liking or loving this post. But you are tempted to write a comment telling your uncle he's full of s--t, or at least politely explaining why he's wrong. Let's put your probability of commenting on this post at, say, 20%. Then its score is 1 × 0 + 5 × 0 + 30 × 0.2 = 6, greater than 4.5. So before you come to the puppy post that makes you happy and the job post that makes you mildly envious, you're going to see a COVID conspiracy post that boils your blood. Facebook doesn't try to make you angry, but the algorithm has figured out what kinds of posts will keep you engaged.
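Feeding the numbers from the last two paragraphs into that sketch reproduces the ranking (the weights are the same made-up ones as before: one point per like, five per love, thirty per comment):

```python
# Scoring and ranking the three hypothetical posts with the toy weights
# from the text: like = 1, love = 5, comment = 30.
WEIGHTS = {"like": 1, "love": 5, "comment": 30}

posts = {
    "puppy pic":        {"like": 0.50, "love": 0.20, "comment": 0.10},
    "job announcement": {"like": 0.20, "love": 0.10, "comment": 0.05},
    "COVID conspiracy": {"like": 0.00, "love": 0.00, "comment": 0.20},
}

scores = {name: sum(WEIGHTS[e] * p[e] for e in WEIGHTS) for name, p in posts.items()}
for name, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {s}")
# COVID conspiracy: 6.0
# puppy pic: 4.5
# job announcement: 2.2
```

The blood-boiling conspiracy post tops the list purely because a comment, even a scathing one, is worth thirty points.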
The Vs in the formula, the engagement point values, are out of your control. But you can influence the Ps, the estimates of your engagement probabilities. If you tend to engage with posts about food, then over time the algorithm will bump up your estimated engagement probabilities on food posts. If you want more food content in your feed, go ahead and like, love, comment, and share away. If you don't want food content, don't engage with it.
Where things get subtle is with unpleasant posts, especially ones that anger or offend you. Think of it this way. When you argue with your uncle, you are giving his COVID conspiracy posts 30 points for each comment you leave—no matter how critical your comment is—and these points drive all his other COVID conspiracy posts up in your feed. In fact, these points drive his COVID conspiracy posts up in everyone's feed, because the algorithm is smart enough to realize that if you're inclined to comment on these posts, so are his other friends. Worse still, the algorithm associates COVID conspiracy content with other conspiratorial content, so when your uncle posts a flat-Earth link, the algorithm in essence thinks: 'They commented on the other conspiracy posts, so I bet they'll comment on this one, too.'
It doesn't end there. The algorithm correctly deduces that if you're likely to comment on your uncle's conspiratorial content, you're likely to comment on other users' conspiratorial content as well. In the end, your laudable effort to educate your uncle with a choicely worded comment backfires and tells the algorithm to elevate all conspiratorial content in your feed and, to a lesser but non-negligible extent, in the feeds of other users. Oops.
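A toy calculation shows the direction of the effect. The topic similarities and the update rule below are invented for illustration; the real system learns these associations from data, but the mechanism pushes the same way.

```python
# Toy illustration of how one comment generalizes: commenting on the COVID
# conspiracy post nudges your estimated comment probability up not just for
# that topic but for topics the system judges similar. The similarity scores
# and update rule are invented for illustration, not Facebook's real model.

similarity = {          # similarity of each topic to "COVID conspiracy"
    "COVID conspiracy": 1.0,
    "flat-Earth": 0.8,  # assumed to be judged similar
    "puppy pics": 0.0,
}
p_comment = {"COVID conspiracy": 0.05, "flat-Earth": 0.05, "puppy pics": 0.10}

# One angry comment on the COVID post:
for topic, sim in similarity.items():
    p_comment[topic] += 0.5 * sim * (1 - p_comment[topic])

print(p_comment)
# COVID conspiracy jumps to ~0.53, flat-Earth to ~0.43, puppy pics unchanged.
```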
TikTok also uses an algorithm to determine which of the billions of videos on the platform to show to each of its one billion users. How does it work? The New York Times hunted for answers and got hold of an internal document labeled 'TikTok Algo 101' written by a TikTok engineering team. In a December 2021 article, The New York Times wrote that this document includes a 'rough equation for how videos are scored . . . : Plike × Vlike + Pcomment × Vcomment + Eplaytime × Vplaytime + Pplay × Vplay.' While the article didn't really explain this formula or the symbols in it, it is similar enough to Facebook's that we can figure it out.
Surely, Plike is the estimated probability that the user clicks the heart-shaped like button on the video, while Vlike is the point value engineers have assigned to this form of engagement. Same story for Pcomment, commenting on a video, and Pplay, playing a video. Eplaytime, I'm rather certain, is the number of seconds the algorithm expects the user to watch the video, and Vplaytime is the point value assigned to each second of play time. If, hypothetically, a comment is worth twenty points and a second of play time is worth two points, then a 50% chance of commenting would count for the same amount of engagement as an expected five seconds of play time.
The secret TikTok document goes on to explain that 'the recommender system gives scores to all the videos based on this equation, and returns to users videos with the highest scores.' Sound familiar? Yes, TikTok's mind-reading algorithm is, at its mathematical core, nearly identical to Facebook's algorithm. Both rank posts/videos according to a weighted sum of the amount of engagement they are expected to elicit from the user.
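In a sketch, TikTok's version differs from Facebook's only in adding an expected-play-time term to the probability terms. The point values below are the hypothetical ones from the previous paragraph, not TikTok's real weights:

```python
# Toy version of the leaked TikTok equation:
#   score = Plike*Vlike + Pcomment*Vcomment + Eplaytime*Vplaytime + Pplay*Vplay
# The point values are hypothetical; TikTok's real weights are secret.

def tiktok_score(p_like, p_comment, e_playtime_seconds, p_play,
                 v_like=1, v_comment=20, v_playtime=2, v_play=1):
    return (p_like * v_like + p_comment * v_comment
            + e_playtime_seconds * v_playtime + p_play * v_play)

# A 50% chance of a comment counts the same as five expected seconds of play:
print(tiktok_score(p_like=0, p_comment=0.5, e_playtime_seconds=0, p_play=0))  # 10.0
print(tiktok_score(p_like=0, p_comment=0, e_playtime_seconds=5, p_play=0))    # 10.0
```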
Have you ever seen a TikTok video with overlaid text saying something like 'Wait for it,' 'You won't believe what happens,' or 'You've gotta watch till the end lol'? These phrases tend to bump up the expected play time for most users, so it's a cheap trick to boost the video's score. Some people post videos where literally nothing happens, but they trick you into watching multiple times, thereby racking up even more expected seconds. The math is simple: if the algorithm thinks you'll watch a ten-second video three times, that's thirty seconds of expected play time.
If you don't like a video for whatever reason, limit your play time. Importantly, resist the urge to rewatch it out of frustration or disgust. And don't give in to the temptation to comment. Comments and seconds watched, whatever their quality or kind, tell TikTok's algorithm one thing: 'Give me more videos like this.'
In March 2023, Elon Musk had a big chunk of the source code for X (then Twitter) posted online. Would you be surprised to hear that the platform ranks posts by using a weighted sum of estimated engagement probabilities? For all the jousting by the tech giants, for all the competition to build the best social media platform, it turns out that Facebook, TikTok, and Twitter all run on essentially the same simple math formula. I think it's a safe bet that all the other platforms driven by user engagement do too. The weighted sum of engagement probabilities is the formula driving social media.
My biggest takeaway from this formula and its ubiquity is that users can develop healthier social media feeds, but doing so takes restraint and intentionality.
Imagine there's a KFC in your town, and one time after a stressful day at work you give in to temptation and head there for an easy dinner. The next day, the KFC has mysteriously moved one block closer to your house. Now the convenience and the allure are even greater, so you find yourself visiting it more often. But each time you do, the KFC moves even closer to your house. Soon it's down the block from you and is part of your weekly routine. Eventually the KFC is next door, and you're eating fried chicken more often than any reasonable human being should. You're not proud of it, but how can you resist when KFC is the first thing you see (and smell) in the morning and the last thing before bed at night?
This is how social media algorithms work. They bring the things we engage with closer and closer. Once we start clicking the social media equivalent of junk food, we're going to be served up a lot more of it—which makes it harder to resist. So we click it more and the algorithm promotes it even more highly in our feeds. It's a vicious cycle that can quickly turn our feeds into endless streams of digital dreck. Knowing how and why this cycle happens is the first step to stopping it. Just remember: the tech companies choose the Vs in the social media formula, but the Ps are shaped by your actions online.
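The cycle is easy to see in a toy simulation. The update rule below, which nudges an estimated engagement probability toward 1 each time you engage, is my own stand-in for whatever learned model the platforms actually use; what matters is the shape of the curve, not the specific rule.

```python
# Toy model of the feedback loop: engaging with a topic raises the platform's
# estimate of your engagement probability, so the topic is shown more, giving
# you more chances to engage. The update rule is illustrative, not any
# platform's real estimator.

p_junk = 0.10        # initial estimated probability of engaging with "junk" posts
LEARNING_RATE = 0.3  # how strongly each week of engagement moves the estimate

for week in range(1, 9):
    p_junk += LEARNING_RATE * (1.0 - p_junk)  # nudge the estimate toward 1
    print(f"week {week}: estimated engagement probability = {p_junk:.2f}")
# week 1: 0.37 ... week 8: 0.95 -- the KFC is now next door.
```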
Excerpted with permission from ROBIN HOOD MATH: Take Control of The Algorithms That Run Your Life by Noah Giansiracusa published on August 5, 2025 by Riverhead, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2025 Noah Giansiracusa.
