
Latest news with #FrancesHaugen

How the Secret Algorithms Behind Social Media Actually Work

Time Magazine · 07-08-2025

Ever wondered how social media platforms decide how to fill our feeds? They use algorithms, of course, but how do these algorithms work? A series of corporate leaks over the past few years provides a remarkable window into the hidden engines powering social media.

In January 2021, a few Facebook employees posted an article on the company's engineering blog purporting to explain the news feed algorithm that determines which of the countless available posts each user will see and the order in which they will see them. The article includes a fancy-looking formula central to the algorithm, but the formula is nearly impossible to decipher since the authors didn't bother to explain half the symbols in it. When I first read that blog post, I felt like the cave dwellers in Plato's famous allegory seeing shadows dance upon the wall: flat, colorless projections of a richer world existing out of sight. I knew the obfuscated formula was not the full story; I just didn't know how to get outside the cave and find the real math behind it.

Eight months later, Facebook was rocked by one of the biggest scandals ever to strike the tech industry. Frances Haugen, a Facebook product manager turned whistleblower, snuck more than ten thousand pages of documents and internal messages out of Facebook headquarters and leaked them to a handful of media outlets. A barrage of stories soon ran, largely focusing on the most alarming, attention-grabbing revelations. Internal studies documented Instagram's harmful impact on the mental health of vulnerable teen girls. A secret whitelist program exempted VIP users from the moderation system the rest of us face. Mark Zuckerberg and other executives were allegedly unwilling to stem the flood of dangerous extremist content propagated on the platform.

Will Oremus, a tech writer for The Washington Post, called me and explained that he had something else in his sights. He wanted to lift the veil on the formula in the engineering blog post, and he realized that Haugen's documents were the key to doing so.

It turns out Facebook engineers have assigned a point value to each type of engagement users can perform on a post (liking, commenting, resharing, etc.). For each post you could be shown, these point values are multiplied by the probability that the algorithm thinks you'll perform that form of engagement. These products are added up, and the total is the post's personalized score for you. There's a bit more to it than this, but in broad strokes your feed is created by sorting posts according to these scores, from highest to lowest.

Here's what this looks like in symbols. Suppose we have a specific user and a specific post in mind, and we write P_like for the probability that the user likes the post, P_love for the probability that they tap the heart emoji, P_angry for the probability that they tap the angry emoji, P_comment for the probability that they comment on the post, and P_share for the probability that they share it. (There are other forms of engagement, but let's just focus on these for now.) And let's write V_like, V_love, and so on for the point values assigned to these engagements. Then the magic formula is:

Score = V_like × P_like + V_love × P_love + V_angry × P_angry + V_comment × P_comment + V_share × P_share

The idea is that the algorithm wants to surface the posts you're most likely to engage with, but there are several forms of engagement, not just one.
It wouldn't make sense to treat all forms of engagement equally; a reshare really does seem like stronger engagement than a like. So the different forms of engagement are weighted differently, and a weighted sum combines them into an overall measure of anticipated engagement.

Let's try this with some concrete numbers. Suppose a like is worth one point, a heart emoji is worth five points, and a comment is worth thirty points. And suppose one of your friends posts a picture of the puppy they just adopted, while another friend writes a post about a new job they landed. You're fond of both friends, but let's be real: you're more excited about the puppy than the job. If there's a 50% chance you'll like the puppy pic, a 20% chance you'll love it, and a 10% chance you'll comment on it, then the puppy post scores 1 × 0.5 + 5 × 0.2 + 30 × 0.1 = 4.5. If there's a 20% chance you'll like the job announcement post, a 10% chance you'll love it, and a 5% chance you'll comment on it, then its score is 1 × 0.2 + 5 × 0.1 + 30 × 0.05 = 2.2. The puppy pic wins and is placed higher in your feed than the job announcement.

Now suppose there's also a post by your uncle falsely claiming that COVID was caused by 5G towers. Let's go ahead and give a 0% chance of you liking or loving this post. But you are tempted to write a comment telling your uncle he's full of s--t, or at least politely explaining why he's wrong. Let's put your probability of commenting on this post at, say, 20%. Then its score is 1 × 0 + 5 × 0 + 30 × 0.2 = 6, greater than 4.5. So before you come to the puppy post that makes you happy and the job post that makes you mildly envious, you're going to see a COVID conspiracy post that boils your blood. Facebook doesn't try to make you angry, but the algorithm has figured out what kinds of posts will keep you engaged.

The Vs in the formula, the engagement point values, are out of your control. But you can influence the Ps, the estimates of your engagement probabilities. If you tend to engage with posts about food, then over time the algorithm will bump up your estimated engagement probabilities on food posts. If you want more food content in your feed, go ahead and like, love, comment, and share away. If you don't want food content, don't engage with it.

Where things get subtle is with unpleasant posts, especially ones that anger or offend you. Think of it this way. When you argue with your uncle, you are giving his COVID conspiracy posts 30 points for each comment you leave, no matter how critical your comment is, and these points drive all his other COVID conspiracy posts up in your feed. In fact, these points drive his COVID conspiracy posts up in everyone's feed, because the algorithm is smart enough to realize that if you're inclined to comment on these posts, so are his other friends. Worse still, the algorithm associates COVID conspiracy content with other conspiratorial content, so when your uncle posts a flat-Earth link, the algorithm in essence thinks: 'They commented on the other conspiracy posts, so I bet they'll comment on this one, too.'

It doesn't end there. The algorithm correctly deduces that if you're likely to comment on your uncle's conspiratorial content, you're likely to comment on other users' conspiratorial content as well. In the end, your laudable effort to educate your uncle with a choicely worded comment backfires and tells the algorithm to elevate all conspiratorial content in your feed and, to a lesser but non-negligible extent, in the feeds of other users. Oops.
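To make the arithmetic concrete, here is a minimal sketch in Python of the weighted-sum ranking described above. The point values and probabilities are the hypothetical numbers from the worked example, not Facebook's actual weights, and all the names are mine:

```python
# Hypothetical point values (the Vs) from the worked example above.
ENGAGEMENT_VALUES = {"like": 1, "love": 5, "comment": 30}

def score_post(engagement_probs):
    """Weighted sum: each engagement's point value times the predicted
    probability (the Ps) that this user performs that engagement."""
    return sum(value * engagement_probs.get(kind, 0.0)
               for kind, value in ENGAGEMENT_VALUES.items())

# Predicted engagement probabilities for the three example posts.
posts = {
    "puppy pic":                {"like": 0.50, "love": 0.20, "comment": 0.10},
    "job announcement":         {"like": 0.20, "love": 0.10, "comment": 0.05},
    "uncle's COVID conspiracy": {"like": 0.00, "love": 0.00, "comment": 0.20},
}

# The feed sorts posts from highest score to lowest.
for name in sorted(posts, key=lambda n: score_post(posts[n]), reverse=True):
    print(f"{name}: {score_post(posts[name]):.1f}")

# Output:
#   uncle's COVID conspiracy: 6.0
#   puppy pic: 4.5
#   job announcement: 2.2
```

Running it reproduces the scores computed above, with the blood-boiling conspiracy post sorted to the top of the feed.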
TikTok also uses an algorithm to determine which of the billions of videos on the platform to show to each of its one billion users. How does it work? The New York Times hunted for answers and got hold of an internal document labeled 'TikTok Algo 101' written by a TikTok engineering team. In a December 2021 article, The New York Times wrote that this document includes a 'rough equation for how videos are scored . . . : P_like × V_like + P_comment × V_comment + E_playtime × V_playtime + P_play × V_play.' While the article didn't really explain this formula or the symbols in it, it is similar enough to Facebook's that we can figure it out. Surely, P_like is the estimated probability that the user clicks the heart-shaped like button on the video, while V_like is the point value engineers have assigned to this form of engagement. Same story for P_comment, commenting on a video, and P_play, playing a video. E_playtime, I'm rather certain, is the number of seconds the algorithm expects the user to watch the video, and V_playtime is the point score indicating how many points each second of play time is worth. If, hypothetically, a comment is worth twenty points and a second of play time is worth two points, then a 50% chance of commenting would count for the same amount of engagement as an expected five seconds of play time.

The secret TikTok document goes on to explain that 'the recommender system gives scores to all the videos based on this equation, and returns to users videos with the highest scores.' Sound familiar? Yes, TikTok's mind-reading algorithm is, at its mathematical core, nearly identical to Facebook's. Both rank posts or videos according to a weighted sum of the amount of engagement they are expected to elicit from the user.

Have you ever seen a TikTok video with overlaid text saying something like 'Wait for it,' 'You won't believe what happens,' or 'You've gotta watch till the end lol'? These phrases tend to bump up the expected play time for most users, so they're a cheap trick to boost a video's score. Some people post videos where literally nothing happens, but they trick you into watching multiple times, thereby racking up even more expected seconds. The math is simple: if the algorithm thinks you'll watch a ten-second video three times, that's thirty seconds of expected play time. If you don't like a video for whatever reason, limit your play time. Importantly, resist the urge to rewatch it out of frustration or disgust. And don't give in to the temptation to comment. Comments and seconds watched, no matter what kind or quality, tell TikTok's algorithm one thing: 'Give me more videos like this.'

In March 2023, Elon Musk had a big chunk of the source code for X (then Twitter) posted online. Would you be surprised to hear that the platform ranks posts by using a weighted sum of estimated engagement probabilities? For all the jousting by the tech giants, for all the competition to build the best social media platform, it turns out that Facebook, TikTok, and Twitter all run on essentially the same simple math formula. I think it's a safe bet that all the other platforms driven by user engagement do too. The weighted sum of engagement probabilities is the formula driving social media.
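Here is the same kind of sketch for the TikTok equation quoted above, again in Python. The weights are the hypothetical ones from the discussion (twenty points per comment, two points per second of play time); TikTok's real values are not public, and the other two weights are assumptions of mine:

```python
# Quoted TikTok scoring equation:
#   P_like × V_like + P_comment × V_comment + E_playtime × V_playtime + P_play × V_play
# All point values below are hypothetical, not TikTok's real weights.
V_LIKE, V_COMMENT, V_PLAYTIME, V_PLAY = 1, 20, 2, 1

def tiktok_score(p_like, p_comment, expected_seconds, p_play):
    return (p_like * V_LIKE
            + p_comment * V_COMMENT
            + expected_seconds * V_PLAYTIME  # expected seconds watched
            + p_play * V_PLAY)

# A 50% chance of commenting contributes 0.5 × 20 = 10 points, the same
# as an expected five seconds of play time: 5 × 2 = 10.
print(tiktok_score(0.0, 0.5, 0, 1.0))   # 11.0, comment-driven engagement
print(tiktok_score(0.0, 0.0, 5, 1.0))   # 11.0, playtime-driven engagement

# A ten-second video the algorithm expects you to watch three times
# counts as thirty seconds of expected play time.
print(tiktok_score(0.0, 0.0, 3 * 10, 1.0))  # 61.0
```

The last line shows why rewatch-bait works: expected play time enters the score linearly, so every anticipated rewatch multiplies the video's play-time points.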
My biggest takeaway from this formula and its ubiquity is that users can develop healthier social media feeds, but doing so takes restraint and intentionality.

Imagine there's a KFC in your town, and one time after a stressful day at work you give in to temptation and head there for an easy dinner. The next day, the KFC has mysteriously moved one block closer to your house. Now the convenience and the allure are even greater, so you find yourself visiting it more often. But each time you do, the KFC moves even closer to your house. Soon it's down the block from you and is part of your weekly routine. Eventually the KFC is next door, and you're eating fried chicken more often than any reasonable human being should. You're not proud of it, but how can you resist when KFC is the first thing you see (and smell) in the morning and the last thing before bed at night?

This is how social media algorithms work. They bring the things we engage with closer and closer. Once we start clicking the social media equivalent of junk food, we're going to be served up a lot more of it, which makes it harder to resist. So we click it more, and the algorithm promotes it even more highly in our feeds. It's a vicious cycle that can quickly turn our feeds into endless streams of digital dreck. Knowing how and why this cycle happens is the first step to stopping it. Just remember: the tech companies choose the Vs in the social media formula, but the Ps are shaped by your actions online.

Excerpted with permission from ROBIN HOOD MATH: Take Control of The Algorithms That Run Your Life by Noah Giansiracusa, published on August 5, 2025, by Riverhead, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2025 Noah Giansiracusa.

TikTok Banned the "SkinnyTok" Hashtag. It's Only a Matter of Time Until a New Insidious Diet Trend Replaces It

Yahoo · 05-06-2025 · Health

In this op-ed, Features Director Brittney McNamara considers TikTok's SkinnyTok hashtag ban and the seemingly unbeatable monster of diet culture.

If you've been on social media lately, you undoubtedly know about #SkinnyTok. Along with the rise in popularity of weight loss drugs like Zepbound and Wegovy over the last few years, thinness as an ideal has returned to our cultural lexicon, spawning a whole hashtag full of creators discussing how they get and stay thin, swapping diet and workout tips that encourage sometimes extreme measures to be skinny. But on June 3, TikTok banned #SkinnyTok as a search term after concern from European legislators about how the app can negatively impact young people's body image, according to the New York Times. The hashtag had 'become linked to unhealthy weight loss content,' TikTok said in its reasoning for the ban, something the European Commission was investigating because of the potential 'public health risk' associated with promoting 'extreme thinness' to young people online, Politico reports. Now, when users enter that search term, they'll be directed to resources like the National Alliance for Eating Disorders.

We know that social media can negatively affect our mental health and can contribute to body image issues like body dysmorphia and even eating disorders, so this move is an all-around win. There is no benefit, even if society would like to tell you there is, to promoting extreme thinness or unhealthy diets, things #SkinnyTok was often associated with. But even though the ban is a net positive in this sense, it's simply a bandaid on a much larger issue. Until we reckon with our cultural obsession with thinness and our wholesale buy-in to diet culture, #SkinnyTok will simply shift and transform, taking on a new slender shape online.

According to Today, #SkinnyTok began appearing on TikTok around the start of this year, gaining steam in March and April. Videos under the hashtag encouraged viewers to eat less, making hunger seem like a virtue and repackaging harmful diet advice as 'tough love.' If you weren't dieting and participating in behaviors to make yourself smaller, many #SkinnyTok posts were there to shame you into submission. It's not clear exactly who started the hashtag, but it is apparent how it gained popularity. Social media and other online forums have long been hotbeds for extreme diet talk and for promoting unhealthy body ideals. In the heyday of Tumblr, 'pro ana' (pro-anorexia) and 'thinspo' content abounded. When those topics were banned, users found ways to evade the restrictions, substituting letters or words to signal their content to other users without triggering filters that would censor their posts. Meta whistleblower Frances Haugen revealed internal research that found that 'when [32% of teen girls] felt bad about their bodies, Instagram made them feel worse.' As a result of that information, social media executives testified before Congress in 2021, in part about the ways their platforms impact young people's body image.

Just before #SkinnyTok officially earned its title, content creator Liv Schmidt was ousted from TikTok in October 2024 because of her posts instructing viewers on how to be skinny. Her posts violated TikTok's Community Guidelines, which prohibit 'promoting disordered eating and dangerous weight loss behaviors.' But before her ban, Schmidt had more than 670,000 followers on TikTok, according to the New York Times. She claimed her instructions on how to eat less with the explicit goal of being thin were simply the pursuit of a certain aesthetic, not a roadmap to potentially disordered eating. Even more recently, Schmidt's group chat, called the Skinni Societe, was demonetized by Meta after The Cut published an inside look at Schmidt's advice to followers, including lines like 'eat like your next weigh-in is tomorrow.'

The resurgence of explicit diet talk and 'thinspo' on social media is evidence of a trend we've seen growing for a while now. The advent of GLP-1 drugs has made weight loss attainable for many, and has made getting even thinner an option for many already-thin people. And, with another Trump administration in office, a focus on thinness in society is no surprise. Research has shown a link between conservative ideology and anti-fatness, something we've seen mirrored in Trump's own language. So it's not necessarily a shock that people with fatphobic ideas would feel emboldened at this time.

TikTok's #SkinnyTok ban is certainly the right move, and it's encouraging to know that people searching for it on that platform will instead be served resources to cope with disordered eating. But as we can see from the long history of disordered eating and 'skinny' content online, this move is likely to remove one threat only for another to pop up in its wake. Diet culture is much like the mythological hydra: when you cut one head off the beast, two more grow in its place. The threats get more numerous, and more insidious, the more we strike at it.

To truly beat #SkinnyTok and trends like it, we'd need a cultural reckoning, one where we collectively decide that thinness isn't a value, but simply one of many states of being. We'd need to grapple with the racism and anti-Blackness baked into anti-fatness, and how promoting thinness has ties to white supremacy. We'd need to address anti-fat bias in medicine and rethink the common tropes about fatness and health. We'd need to radically change our thinking, our social structures, our collective stereotypes. We'd need to then cauterize the wounds diet culture has left, making sure no new ugly heads could rear when we turn our backs. Judging by the current political and social climate, that seems unlikely. It's certainly possible, and maybe one day we'll get there. In the meantime, #SkinnyTok may be dead, but it's only a matter of time before another hashtag or trend telling young people to aspire to thinness crops up, another head of this seemingly unkillable hydra ready to bite us in our ever-smaller butts.

Originally appeared on Teen Vogue.

Congress Finally Took On AI Policy. It's Just Getting Started

Newsweek · 14-05-2025 · Politics

Congress' recent passage of the TAKE IT DOWN Act marks a pivotal breakthrough. The bill, which helps stop the spread of non-consensual intimate images and deepfakes online, is the first major internet content legislation since 2018, and arguably the first law ever to address harms from generative AI. Finally, we have proof that Washington can act on AI and digital harms. Now, we need to keep the momentum going.

For years, Congress stalled on technology policy. This wasn't for lack of warning signs. In 2021, Facebook whistleblower Frances Haugen went public with internal research showing that Instagram was toxic for many teens. Two years later, another whistleblower, Arturo Béjar, came forward with allegations that Meta ignored growing evidence of harm to young Facebook users. Meta wasn't alone. A wealth of research over the last five years has found evidence of platforms including TikTok, Snapchat, and YouTube recommending harmful content to teens. Polling from my organization, Americans for Responsible Innovation, shows that 79 percent of Americans are now concerned about AI's impact on younger generations. These warning signs and increased public awareness created an environment where public policy mitigating online harms became increasingly possible.

Still, Congress stalled for years. One of the prime suspects behind this legislative paralysis was Big Tech's lobbying clout. The industry spent an estimated $250 million to stop regulatory bills in the 117th Congress. Hyper-partisan divides didn't make legislative movement on tech policy any easier. In the era of AI, past failures to act on social media cast a long shadow. Would Washington wait until AI harms were rampant and entrenched before responding? Many in tech policy braced for another round of inaction.

Thankfully, two important things changed. First, the tech industry's stance toward regulation shifted. For years, major platforms treated any new regulation as a mortal threat, deploying lobbyists to kill even modest proposals. Now, we're seeing a more strategic approach. In the case of the TAKE IT DOWN Act, Big Tech did something almost unheard of: it didn't fight the bill. In fact, several Silicon Valley giants, including Meta, Snapchat, Google, and X, actively backed it. Even hardline industry groups backed off. The change of heart may partly be due to a shifting regulatory environment. In the absence of federal laws, states started advancing their own digital rules, creating a patchwork that was even harder for industry to swallow than federal regulation.

The second change is within Congress itself. Burned by years of inaction on social media, lawmakers in both parties want to get ahead of the curve on AI. Over the past year, instead of waiting for the next whistleblower crisis, Congress did something novel: it educated itself and built bipartisan consensus early. The Senate convened a series of AI insight forums that brought in experts from all sides. Bipartisan working groups in the House and Senate built out roadmaps on AI policy priorities. This process treated AI policy as a shared challenge requiring knowledge and nuance. It's a heartening contrast to the spectacle of social media hearings from the 2010s.
The TAKE IT DOWN Act itself is a step forward that offers a template for future political success. It zeroes in on a specific, clearly harmful phenomenon (non-consensual intimate images) and provides a remedy: a federal mandate that such images be swiftly taken down at victims' request. As some lawmakers in Congress have noted, the TAKE IT DOWN Act's passage shows Congress is getting serious about addressing the harms posed by new technologies.

And when it comes to bipartisan opportunities to pass tech legislation through Congress, there are plenty of bills to choose from. There's the NO FAKES Act, which would outlaw unauthorized AI deepfakes of real people's likenesses; the CREATE AI Act, which would expand access to AI resources for students and researchers; and the TEST AI Act, which would set up sandbox environments to evaluate new AI models. As happened with the TAKE IT DOWN Act, tech industry leaders are starting to come to the table rather than trying to block progress.

The key going forward will be to keep this spirit alive. Now is the time for Congress to schedule hearings and markups to move additional bipartisan bills through the pipeline, building a suite of smart guardrails for AI and online platforms. These measures can protect consumers and society from the worst harms while encouraging innovation. A year ago, many would have laughed at the idea of Congress leading on issues like novel harms from generative AI. But lessons have been learned. The combination of public pressure, shifting industry attitudes, and lawmakers doing their homework has created an opening. Now it's up to us to widen it.

Brad Carson is president of Americans for Responsible Innovation (ARI). Carson is a former congressman representing Oklahoma's 2nd District and served as acting undersecretary of Defense. The views expressed in this article are the writer's own.

Parents are desperate to protect kids on social media. Why did the US let a safety bill die?

The Guardian · 16-02-2025 · Politics

When Congress adjourned for the holidays in December, a landmark bill meant to overhaul how tech companies protect their youngest users had officially failed to pass. Introduced in 2022, the Kids Online Safety Act (Kosa) was meant to be a massive reckoning for big tech. Instead, despite sailing through the Senate with a 91-to-3 vote in July, the bill languished and died in the House.

Kosa had been passionately championed by families who said their children had fallen victim to the harmful policies of social media platforms, and by advocates who said a bill reining in the unchecked power of big tech was long overdue. They're bitterly disappointed that a strong chance to check big tech failed because of congressional apathy. But human rights organizations had argued that the legislation could have led to unintended consequences affecting freedom of speech online.

Kosa was introduced nearly three years ago in the aftermath of bombshell revelations by former Facebook employee Frances Haugen about the scope and severity of social media platforms' effects on young users. It would have mandated that platforms like Instagram and TikTok address online dangers affecting children through design changes and by allowing young users to opt out of algorithmic recommendations. 'This is a basic product-liability bill,' said Alix Fraser, director of Issue One's Council for Responsible Social Media. 'It's complicated, because the internet is complicated and social media is complicated, but it is essentially just an effort to create a basic product-liability standard for these companies.'

A central – and controversial – component of the bill was its 'duty of care' clause, which declared that companies have 'a duty to act in the best interests of minors using their platforms' and would be open to interpretation by regulators. It also would have required that platforms implement measures to reduce harm by establishing 'safeguards for minors'. Critics argued that a lack of clear guidance on what constitutes harmful content might prompt companies to filter content more aggressively, leading to unintended consequences for freedom of speech. Sensitive but important topics such as gun violence and racial justice could be viewed as potentially harmful and subsequently be filtered out by the companies themselves. These censorship concerns were particularly pronounced for the LGBTQ+ community, which, opponents of Kosa said, could be disproportionately affected by conservative regulators, reducing access to vital resources.

'With Kosa, we saw a really well-intentioned but ultimately vague bill requiring online services to take unspecified action to keep kids safe, which was going to lead to several bad outcomes for children, and all marginalized users,' said Aliya Bhatia, a policy analyst at the Center for Democracy and Technology, which opposed the legislation and which receives money from tech donors including Amazon, Google and Microsoft. When the bill was first introduced, more than 90 human rights organizations signed a letter in opposition, underscoring these and other concerns. In response to such criticism, the bill's authors issued revisions in February 2024 – most notably, shifting the enforcement of its 'duty of care' provision from state attorneys general to the Federal Trade Commission.
Following these changes, a number of organizations including Glaad, the Human Rights Campaign and the Trevor Project withdrew opposition, stating that the revisions 'significantly mitigate the risk of [Kosa] being misused to suppress LGBTQ+ resources or stifle young people's access to online communities'. But other civil rights groups maintained their opposition, including the Electronic Frontier Foundation (EFF), the ACLU and Fight for the Future, calling Kosa a 'censorship bill' that would harm vulnerable users and freedom of speech at large. They argued the duty-of-care provision could just as easily be weaponized by a conservative FTC chair against LGBTQ+ youth as by state attorneys general. These concerns have been reflected in Trump's appointment of Republican Andrew Ferguson as FTC chair; Ferguson said in leaked statements that he planned to use his role to 'fight back against the trans agenda'. Concerns about how Ferguson will manage online content are 'exactly what LGBTQ youth in this fight have written and called Congress about hundreds of times over the last couple of years', said Sarah Philips of Fight for the Future. 'The situation that they were fearful of has come to fruition, and anyone ignoring that is really just putting their heads in the sand.'

Opponents say that even with Kosa's failure to pass, a chilling effect has already materialized with regard to what content is available on certain platforms. A recent report in User Mag found that hashtags for LGBTQ+-related topics were being categorized as 'sensitive content' and restricted from search. Legislation like Kosa does not take into account the complexities of the online landscape, said Bhatia, of the Center for Democracy and Technology, and is likely to lead platforms to pre-emptively censor content to avoid litigation. 'Children's safety occupies an interesting paradoxical positioning in tech policy, where at once children are vulnerable actors on the internet, but also at the same time benefit greatly from the internet,' she said. 'Using the blunt instrument of policy to protect them can often lead to outcomes that don't really take this into account.'

Proponents attribute the backlash to Kosa to aggressive lobbying from the tech industry, though two of the top opponents – Fight for the Future and EFF – are not supported by large tech donors. Meanwhile, major tech companies are split on Kosa, with X, Snap, Microsoft and Pinterest outwardly supporting the bill and Meta and Google quietly opposing it. 'Kosa was an extremely robust piece of legislation, but what is more robust is the power of big tech,' said Fraser, of Issue One. 'They hired every lobbyist in town to take it down, and they were successful in that.' Fraser added that advocates are disappointed in Kosa's failure to pass but 'won't rest until federal legislation is passed to protect kids online and the tech sector is held accountable for its actions'.

Aside from Ferguson as FTC chair, it is unclear what exactly the new Trump administration and the shifting makeup of Congress mean for the future of Kosa. Though Trump has not directly indicated his views on Kosa, several people in his close circle have expressed support following last-minute amendments to the bill in 2024 facilitated by Elon Musk's X.
The congressional death of Kosa may seem like the end of a winding and controversial path, but advocates on both sides of the fight say it's too soon to write the legislation's obituary. 'We should not expect Kosa to disappear quietly,' said Prem M Trivedi, policy director at the Open Technology Institute, which opposes Kosa. 'Whether we are going to see it introduced again or different incarnations of it, more broadly the focus on kid's online safety is going to continue.'

Richard Blumenthal, the senator who co-authored the bill with Senator Marsha Blackburn, has promised to reintroduce it in the upcoming congressional session, and other advocates for the bill say they will not give up. 'I've worked with a lot of these parents who have been willing to recount the worst day of their lives time and time again, in front of lawmakers, in front of staffers, in front of the press, because they know that something has to change,' said Fraser. 'They're not going to stop.'
