
Latest news with #FrancesHaugen

TikTok Banned the "SkinnyTok" Hashtag. It's Only a Matter of Time Until a New Insidious Diet Trend Replaces It

Yahoo

3 days ago

  • Health
  • Yahoo


In this op-ed, Features Director Brittney McNamara considers TikTok's SkinnyTok hashtag ban and the seemingly unbeatable monster of diet culture.

If you've been on social media lately, you undoubtedly know about #SkinnyTok. Along with the rise in popularity of weight loss drugs like Zepbound and Wegovy over the last few years, thinness as an ideal has also returned to our cultural lexicon, spawning a whole hashtag full of creators discussing how they get and stay thin, swapping diet and workout tips that encourage sometimes extreme measures to be skinny. But on June 3, TikTok banned #SkinnyTok as a search term after European legislators raised concerns about how the app can negatively affect young people's body image, according to the New York Times. The hashtag had 'become linked to unhealthy weight loss content,' TikTok said in its reasoning for the ban, something the European Commission was investigating because of the potential 'public health risk' associated with promoting 'extreme thinness' to young people online, Politico reports. Now, when users enter that search term, they'll be directed to resources like the National Alliance for Eating Disorders.

We know that social media can negatively affect our mental health, and can contribute to body image issues like body dysmorphia and even eating disorders, so this move is an all-around win. There is no benefit — even if society would like to tell you there is — to promoting extreme thinness or unhealthy diets, things that #SkinnyTok was often associated with. But even though the ban is a net positive in this sense, it's simply a band-aid on a much larger issue. Until we reckon with our cultural obsession with thinness and our wholesale buy-in to diet culture, #SkinnyTok will simply shift and transform, taking on a new slender shape online.

According to Today, #SkinnyTok began appearing on TikTok around the start of this year, gaining steam in March and April.
Videos under the hashtag encouraged viewers to eat less, making hunger seem like a virtue and repackaging harmful diet advice as 'tough love.' If you weren't dieting and participating in behaviors to make yourself smaller, many #SkinnyTok posts were there to shame you into submission. It's not clear exactly who started the hashtag, but it is apparent how it gained popularity.

Social media and other online forums have long been hotbeds for extreme diet talk and for promoting unhealthy body ideals. In the heyday of Tumblr, 'pro ana' (pro anorexia) and 'thinspo' content abounded. When those topics were banned, users found ways to evade the bans, substituting letters or words to signal their content to other users without triggering filters that would censor their posts. Meta whistleblower Frances Haugen revealed internal research that found that 'when [32% of teen girls] felt bad about their bodies, Instagram made them feel worse.' As a result of that information, social media executives testified before Congress in 2021, in part about the ways their platforms impact young people's body image.

Just before #SkinnyTok officially earned its title, content creator Liv Schmidt was ousted from TikTok in October 2024 because of her posts instructing viewers on how to be skinny. Her posts violated TikTok's Community Guidelines, which prohibit 'promoting disordered eating and dangerous weight loss behaviors.' But before her ban, Schmidt had more than 670,000 followers on TikTok, according to the New York Times. She claimed her instructions on how to eat less with the explicit goal of being thin were simply the pursuit of a certain aesthetic, not a roadmap to potentially disordered eating. Even more recently, Schmidt's group chat called the Skinni Societe was demonetized by Meta after The Cut published an inside look at Schmidt's advice to followers, including lines like 'eat like your next weigh-in is tomorrow.'
The resurgence of explicit diet talk and 'thinspo' on social media is evidence of a trend we've seen growing for a while now. The advent of GLP-1 drugs has made weight loss attainable for many, and has made getting even thinner an option for many already-thin people. And, with another Trump administration in office, a focus on thinness in society is no surprise. Research has shown a link between conservative ideology and anti-fatness, something we've seen mirrored in Trump's own language. So it's not necessarily a shock that people with fatphobic ideas would feel especially emboldened at a time like this.

TikTok's #SkinnyTok ban is certainly the right move, and it's encouraging to know that people searching for it on that platform will instead be served resources to cope with disordered eating. But as we can see from the long history of disordered eating and 'skinny' content online, this move is likely to remove one threat, only for another to pop up in its wake. Diet culture is much like the mythological hydra; when you cut one head off of this beast, two more grow in its place. The threats get more numerous, more insidious, the more we strike at it.

To truly beat #SkinnyTok and trends like it, we'd need a cultural reckoning — one where we collectively decide that thinness isn't a value, but simply one of many states of being. We'd need to grapple with the racism and anti-Blackness baked into anti-fatness, and how promoting thinness has ties to white supremacy. We'd need to address anti-fat bias in medicine, and rethink the common tropes about fatness and health. We'd need to radically change our thinking, our social structures, our collective stereotypes. We'd need to then cauterize the wounds diet culture has left, making sure no new ugly heads could rear when we turn our backs. Judging by the current political and social climate, that seems unlikely in the near term. But it's certainly possible, and maybe one day we'll get there.
In the meantime, #SkinnyTok may be dead, but it's only a matter of time before another hashtag or trend telling young people to aspire to thinness crops up, another head of this seemingly unkillable hydra ready to bite us in our ever-smaller butts. Originally Appeared on Teen Vogue

Congress Finally Took On AI Policy. It's Just Getting Started

Newsweek

14-05-2025

  • Politics
  • Newsweek


Congress' recent passage of the TAKE IT DOWN Act marks a pivotal breakthrough. The bill, which helps stop the spread of non-consensual intimate images and deepfakes online, is the first major internet content legislation since 2018, and arguably the first law ever to address harms from generative AI. Finally, we have proof that Washington can act on AI and digital harms. Now, we need to keep the momentum going.

For years, Congress stalled on technology policy. This wasn't for lack of warning signs. In 2021, Facebook whistleblower Frances Haugen went public with internal research showing that Instagram was toxic for many teens. Two years later, another whistleblower, Arturo Béjar, came forward with allegations that Meta ignored growing evidence of harm to young Facebook users. Meta wasn't alone. A wealth of research over the last five years has found evidence of platforms including TikTok, Snapchat, and YouTube recommending harmful content to teens. Polling from my organization, Americans for Responsible Innovation, shows that 79 percent of Americans are now concerned about AI's impact on younger generations.

These warning signs and increased public awareness created an environment where public policy mitigating online harms became increasingly possible. Still, Congress stalled for years. One of the prime suspects behind this legislative paralysis was Big Tech's lobbying clout. The industry spent an estimated $250 million to stop regulatory bills in the 117th Congress. Hyper-partisan divides didn't make legislative movement on tech policy any easier. In the era of AI, past failures to act on social media cast a long shadow.
Would Washington wait until AI harms were rampant and entrenched before responding? Many in tech policy braced for another round of inaction. Thankfully, two important things changed. First, the tech industry's stance toward regulation shifted. For years, major platforms treated any new regulation as a mortal threat, deploying lobbyists to kill even modest proposals. Now, we're seeing a more strategic approach. In the case of the TAKE IT DOWN Act, Big Tech did something almost unheard of: it didn't fight the bill. In fact, several Silicon Valley giants, including Meta, Snapchat, Google, and X, actively backed it. Even hardline industry groups backed off. The change of heart may partly be due to a shifting regulatory environment. In the absence of federal laws, states started advancing their own digital rules, creating a patchwork that was even harder for industry to swallow than federal regulation. The second change is within Congress itself. Burned by years of inaction on social media, lawmakers in both parties want to get ahead of the curve on AI. Over the past year, instead of waiting for the next whistleblower crisis, Congress did something novel: it educated itself and built bipartisan consensus early. The Senate convened a series of AI insight forums that brought in experts from all sides. Bipartisan working groups in the House and Senate built out roadmaps on AI policy priorities. This process treated AI policy as a shared challenge requiring knowledge and nuance. It's a heartening contrast to the spectacle of social media hearings from the 2010s. The TAKE IT DOWN Act itself is a step forward that offers a template for future political success. It zeroes in on a specific, clearly harmful phenomenon (non-consensual intimate images), and provides a remedy: a federal mandate that such images be swiftly taken down at victims' request. 
As some lawmakers in Congress have noted, the TAKE IT DOWN Act's passage shows Congress is getting serious about addressing the harms posed by new technologies. And when it comes to bipartisan opportunities to pass tech legislation through Congress, there are plenty of bills to choose from. There's the NO FAKES Act, which would outlaw unauthorized AI deepfakes of real people's likenesses, the CREATE AI Act, which would expand access to AI resources for students and researchers, and the TEST AI Act to set up sandbox environments to evaluate new AI models. As happened with the TAKE IT DOWN Act, tech industry leaders are starting to come to the table rather than trying to block progress. The key going forward will be to keep this spirit alive. Now is the time for Congress to schedule hearings and markups to move additional bipartisan bills through the pipeline, building a suite of smart guardrails for AI and online platforms. These measures can protect consumers and society from the worst harms while encouraging innovation. A year ago, many would have laughed at the idea of Congress leading on issues like novel harms from generative AI. But lessons have been learned. The combination of public pressure, shifting industry attitudes, and lawmakers doing their homework has created an opening. Now it's up to us to widen it. Brad Carson is president of Americans for Responsible Innovation (ARI). Carson is a former congressman representing Oklahoma's 2nd District and served as acting undersecretary of Defense. The views expressed in this article are the writer's own.

Parents are desperate to protect kids on social media. Why did the US let a safety bill die?

Yahoo

16-02-2025

  • Politics
  • Yahoo


When Congress adjourned for the holidays in December, a landmark bill meant to overhaul how tech companies protect their youngest users had officially failed to pass. Introduced in 2022, the Kids Online Safety Act (Kosa) was meant to be a huge reckoning for big tech. Instead, despite sailing through the Senate with a 91-to-3 vote in July, the bill languished and died in the House.

Kosa had been passionately championed by families who said their children had fallen victim to the harmful policies of social media platforms and advocates who said a bill reining in the unchecked power of big tech was long overdue. They are bitterly disappointed that a strong chance to check big tech failed because of congressional apathy. But human rights organizations had argued that the legislation could have led to unintended consequences affecting freedom of speech online.

Kosa was introduced nearly three years ago in the aftermath of bombshell revelations by the former Facebook employee Frances Haugen about the scope and severity of social media platforms' effects on young users. It would have mandated that platforms like Instagram and TikTok address online dangers affecting children through design changes and allowing young users to opt out of algorithmic recommendations. 'This is a basic product-liability bill,' said Alix Fraser, director of Issue One's Council for Responsible Social Media. 'It's complicated, because the internet is complicated and social media is complicated, but it is essentially just an effort to create a basic product-liability standard for these companies.'

A central – and controversial – component of the bill was its 'duty of care' clause, which declared that companies have 'a duty to act in the best interests of minors using their platforms' and would be open to interpretation by regulators.
It also would have required that platforms implement measures to reduce harm by establishing 'safeguards for minors'. Critics argued that a lack of clear guidance on what constitutes harmful content might prompt companies to filter content more aggressively, leading to unintended consequences for freedom of speech. Sensitive but important topics such as gun violence and racial justice could be viewed as potentially harmful and subsequently be filtered out by the companies themselves. These censorship concerns were particularly pronounced for the LGBTQ+ community, which, opponents of Kosa said, could be disproportionately affected by conservative regulators, reducing access to vital resources. 'With Kosa, we saw a really well-intentioned but ultimately vague bill requiring online services to take unspecified action to keep kids safe, which was going to lead to several bad outcomes for children, and all marginalized users,' said Aliya Bhatia, a policy analyst at the Center for Democracy and Technology, which opposed the legislation and which receives money from tech donors including Amazon, Google and Microsoft. When the bill was first introduced, more than 90 human rights organizations signed a letter in opposition, underscoring these and other concerns. In response to such criticism, the bill's authors issued revisions in February 2024 – most notably, shifting the enforcement of its 'duty of care' provision from state attorneys general to the Federal Trade Commission. Following these changes, a number of organizations including Glaad, the Human Rights Campaign and the Trevor Project withdrew opposition, stating that the revisions 'significantly mitigate the risk of [Kosa] being misused to suppress LGBTQ+ resources or stifle young people's access to online communities'. 
But other civil rights groups maintained their opposition, including the Electronic Frontier Foundation (EFF), the ACLU and Fight for the Future, calling Kosa a 'censorship bill' that would harm vulnerable users and freedom of speech at large. They argued the duty-of-care provision could just as easily be weaponized by a conservative FTC chair against LGBTQ+ youth as by state attorneys general. These concerns have been reflected in Trump's FTC chair appointment of the Republican Andrew Ferguson, who said in leaked statements he planned to use his role to 'fight back against the trans agenda'. Concern around how Ferguson will manage online content is 'exactly what LGBTQ youth in this fight have written and called Congress about hundreds of times over the last couple of years', said Sarah Philips of Fight for the Future. 'The situation that they were fearful of has come to fruition, and anyone ignoring that is really just putting their heads in the sand.'

Opponents say that even with Kosa's failure to pass, a chilling effect has already materialized with regard to what content is available on certain platforms. A recent report in User Mag found that hashtags for LGBTQ+-related topics were being categorized as 'sensitive content' and restricted from search. Legislation like Kosa does not take into account the complexities of the online landscape, said Bhatia, of the Center for Democracy and Technology, and is likely to lead platforms to pre-emptively censor content to avoid litigation. 'Children's safety occupies an interesting paradoxical positioning in tech policy, where at once children are vulnerable actors on the internet, but also at the same time benefit greatly from the internet,' she said. 'Using the blunt instrument of policy to protect them can often lead to outcomes that don't really take this into account.'
Proponents attribute the backlash to Kosa to aggressive lobbying from the tech industry, though two of the top opponents – Fight for the Future and EFF – are not supported by large tech donors. Meanwhile, major tech companies are split on Kosa, with X, Snap, Microsoft and Pinterest outwardly supporting the bill and Meta and Google quietly opposing it. 'Kosa was an extremely robust piece of legislation, but what is more robust is the power of big tech,' Fraser said, of Issue One. 'They hired every lobbyist in town to take it down, and they were successful in that.' Fraser added that advocates were disappointed in Kosa failing to pass but 'won't rest until federal legislation is passed to protect kids online and the tech sector is held accountable for its actions'. Aside from Ferguson as FTC chair, it is unclear what exactly the new Trump administration and the shifting makeup of Congress mean for the future of Kosa. Though Trump has not directly indicated his views on Kosa, several people in his close circle have expressed support following last-minute amendments to the bill in 2024 facilitated by Elon Musk's X. The congressional death of Kosa may seem like the end of a winding and controversial path, but advocates on both sides of the fight say it's too soon to write the legislation's obituary. 'We should not expect Kosa to disappear quietly,' said Prem M Trivedi, policy director at the Open Technology Institute, which opposes Kosa. 'Whether we are going to see it introduced again or different incarnations of it, more broadly the focus on kid's online safety is going to continue.' Richard Blumenthal, the senator who co-authored the bill with Senator Marsha Blackburn, has promised to reintroduce it in the upcoming congressional session, and other advocates for the bill also say they will not give up. 
'I've worked with a lot of these parents who have been willing to recount the worst day of their lives time and time again, in front of lawmakers, in front of staffers, in front of the press, because they know that something has to change,' said Fraser. 'They're not going to stop.'
