Latest news with #CenterforCounteringDigitalHate
Yahoo
9 hours ago
- Yahoo
X Tests New Process To Reduce Political Division
You have to give credit where credit's due: while I don't believe that X's Community Notes approach is an effective way to combat the spread of misinformation in the app (at least as a sole measure, outside of internally staffed moderation), I do respect that the Community Notes development team is continually working to implement new updates to address key concerns with the Notes system.

Today, X's Community Notes team announced a new experiment that will highlight content which sees agreement from users of differing political perspectives, in the hope of reducing division and angst in the app. X will soon begin showing some users (Community Notes contributors to start with) prompts in the lower segment of some X posts that have been liked by people from opposing sides of the political spectrum. If you tap through on the 'What do you think?' text, you'll be able to give the X team more insight into what you like, or don't, about that specific post, which could help X address common elements of division in content. As explained by X:

'A subset of Community Notes contributors - representing a wide range of viewpoints - will occasionally see a new callout in the product. The callout shows based on early and limited Like signals on the post. Contributors will then be able to rate and provide more feedback about the post, helping to develop an open source algorithm that could effectively identify posts liked by people from different perspectives.'

X says that these ratings won't affect a post's reach or visibility, but that:

'Depending on insights and feedback from the pilot test, it is possible that in the future, ratings could help spotlight posts that are liked by people who normally disagree.'

Why is that important?
Because political disagreement is a key flaw in the Community Notes system: a note is only displayed in the app when Notes assessors of differing political perspectives agree that it's warranted. It's this proviso that's led to many, many Community Notes never being shown to anybody, despite the posts in question often including blatant misinformation.

Indeed, according to analysis conducted by the Center for Countering Digital Hate (CCDH) last year, 73% of submitted Community Notes relating to political topics are never displayed on X, despite often providing valuable context, while another study conducted earlier this year by Spanish fact-checking site Maldita found that 85% of all Community Notes are never displayed to X users for the same reason.

On some issues, users are never going to agree, and those issues often involve divisive misinformation about hot-button political topics. That's worsened in the current landscape, where even the U.S. President is prone to amplifying questionable claims. As such, Community Notes often fails to keep people informed of relevant context, because people don't agree on the need for it on certain points.

But maybe, if X is able to showcase more cases of agreement, that will ease disputes and get people to view things in a more objective way. It seems unlikely that Republican- and Democrat-aligned voters will ever agree on whether the 2020 U.S. Election was manipulated, no matter what evidence is produced. But if X can build more community alignment into the system, by surfacing that alignment to the respective users, it could at least open up more conversation around more topics, as opposed to siloing people into their own echo chambers. It's worth a shot, and it's a fairly simple measure, at this stage, to better demonstrate relative agreement.
And if X can somehow dilute this partisan deadlock, and ensure that more relevant, valuable Community Notes are displayed in the app, that could be a big step forward in ensuring greater protection for X users.
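X hasn't published the scoring rule behind this pilot; the callout is driven by "early and limited Like signals." As a purely illustrative sketch of the idea, one could score a post by how evenly its likes come from users with opposing estimated leanings. The function name, the leaning scale, and the formula below are all hypothetical, not X's actual method:

```python
# Toy "bridging" score: a post scores high only when it is liked by
# users from both ends of an estimated political-leaning axis.
# The scale and formula are illustrative, not X's algorithm.

def bridging_score(liker_leanings):
    """liker_leanings: floats in [-1, 1], negative for one side of the
    spectrum, positive for the other. Returns a score in [0, 1]."""
    left = [x for x in liker_leanings if x < 0]
    right = [x for x in liker_leanings if x > 0]
    if not left or not right:
        return 0.0  # likes came from only one side: no bridging at all
    # Balance term: 1.0 when likes split evenly between the two sides.
    balance = 2 * min(len(left), len(right)) / (len(left) + len(right))
    # Intensity term: likes from strongly opposed users count for more.
    intensity = (sum(-x for x in left) / len(left) +
                 sum(right) / len(right)) / 2
    return balance * intensity

# A post liked evenly by strongly opposed users scores high (≈0.86):
print(bridging_score([-0.9, -0.8, 0.85, 0.9]))
# A post liked only by one side scores zero:
print(bridging_score([-0.7, -0.6, -0.9]))
```

The real system would also have to estimate each user's leaning in the first place; X's open-source Community Notes code does something analogous for note raters via matrix factorization, which is presumably the starting point for this experiment.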


Spectator
2 days ago
- Politics
- Spectator
Should we be above cancelling the cancellers?
I've been mulling over Marco Rubio's latest salvo in the Trump administration's assault on the Censorship-Industrial Complex. The US Secretary of State has announced he'll impose visa bans on foreign nationals judged to be censoring US citizens or US tech companies. And according to one news report, the ban will apply to their family members too.

So who might be on this blacklist? Rubio hasn't named names, but I can think of a few candidates. Imran Ahmed, the CEO of the Center for Countering Digital Hate (CCDH), would be hard-pressed to deny his pro-censorship lobby group targets US citizens and US tech companies, because 'Centre' is spelt C-E-N-T-E-R, though the company was set up in the UK. To underline this, the Disinformation Chronicle published what purported to be a leaked internal CCDH document last year declaring that its strategic priority was to 'kill Musk's Twitter'. Ahmed, a British citizen, lives in Washington DC. Awkward.

What about Morgan McSweeney? Sir Keir's chief of staff was the founder of the CCDH and only resigned in April 2020, after the organisation had launched a campaign against the spread of 'online misinformation' about Covid-19. Prior to that, it published a 'Don't Feed the Trolls' report, endorsed by Gary Lineker. (Isn't he a troll?) It recommended reporting 'trolls' to the social media companies that publish their posts, which sounds awfully like encouraging censorship, bearing in mind the CCDH's definition of a 'troll' is pretty broad: 'They don't want to 'win' or 'lose' an argument; they just want their ideas to be heard by as many potential converts as possible,' said the report.
Yahoo
01-05-2025
- Health
- Yahoo
Texas House passes bill banning minors from creating social media accounts
AUSTIN (Nexstar) — Texas is setting itself up to be a leader in regulating the social media sphere. Today, the Texas House passed a bill 116-25 that would prohibit children from using social media. House Bill 186 requires a strict verification process to ensure account holders are at least 18 years old.

'I firmly believe that social media is the most harmful product that our kids have legal access to in Texas,' said Rep. Jared Patterson, R-Frisco, the bill's author. Patterson said HB 186 is the most important bill he will lay out this session, citing rising rates of self-harm and suicide among minors linked to social media use.

Patterson previously cited a study conducted by the Center for Countering Digital Hate in 2022. Researchers created a social media account posing as a 13-year-old user and interacted with content related to body image and mental health issues. Within 2.6 minutes, content related to suicide was recommended. Within eight minutes, content related to eating disorders was shown.

'ER visits are up, anorexia amongst girls are up, pornography addiction and aggression amongst boys up,' said Patterson. 'It is our kids killing themselves at a clip that we've never seen before in the history of the state of Texas, and it's all correlated back to this rapid rise in social media use for young people.'

Right before the vote, Patterson dedicated his bill to those who took their own lives after dealing with social media addiction. 'In honor of David Molak and the countless other children who have lost their lives due to the harms of social media,' Patterson said. Maurine Molak, David's mother, has spent the last nine years advocating nationally and in Texas for stricter reforms. 'I lost my son David to suicide nine years ago after he was cyberbullied for months by a group of students on Instagram that were tormenting him,' Molak said.
'I was very moved at [Patterson's] thoughtfulness in remembering David and the pain that David went through.'

Some opponents of the bill feel the verification process will give social media companies too much information. 'What about the concern that this is helping these social media companies collect even more data on us?' State Rep. Erin Zwiener, D-Driftwood, asked Patterson. In response, Patterson pointed to the data already being collected on minors who use social media, warning that harvesting data from young users is a privacy concern and that the data is used for targeted advertising and exploitative purposes. 'Social media companies know more about you than you know about yourself,' Patterson said. 'Their business is not in connecting people so that they can chat and share cat memes. Their business is data collection.' HB 186 requires that any personal information obtained for verification be deleted immediately after the process.

Others voiced concerns about the rights of children. 'We believe this law is clearly unconstitutional,' said attorney Brian Klosterboer of the Texas branch of the American Civil Liberties Union (ACLU). 'HB 186 would ban all social media use by anyone under the age of 18, even if someone's parent or guardian wants them to be able to use social media. For example, a lot of young people might be in youth groups or clubs or organizations that connect on social media. They might use social media for research, even to access, maybe, tweets from government officials or academic researchers. All of that would be banned under this bill.'

However, Patterson believes social media is too harmful not to regulate. 'The U.S. Surgeon General came out in 2024 [and] compared this to cigarettes, the addictive nature of these products,' he said. 'So it's something that I'm going to continue to fight until we finally get something done on this issue.'
In addition, the bill would empower Texas's attorney general to hold social media platforms accountable if they allow underage users. It also grants parents the right to request the removal of their child's existing social media account by contacting the platform directly; upon receiving the request, the platform must delete the account within 10 days.

Last session, the legislature passed a bill banning minors from viewing explicit or pornographic content online, enacting a similar age verification process and holding commercial entities liable if they failed to perform age verification. Patterson believes that banning sites with primarily explicit content is not enough to protect minors' mental health.

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.


RTÉ News
23-04-2025
- Health
- RTÉ News
SkinnyTok and the toxic resurgence of glorifying thinness
SkinnyTok is a social media trend filled with unregulated diet advice and worrying mantras, writes Kate Demolder. TW: disordered eating content, eating disorders.

Noticed an uptick in weight-loss-related content on your TikTok algorithm? You're not alone. A recent slew of reports, along with anecdotal evidence, indicates that content centred around losing weight and disordered eating has penetrated young people's algorithms, harking back to the pro-ana (short for pro-anorexia, a subculture that promotes eating disorders) content of the early aughts. This content comes in myriad forms: from creators with large followings who share videos on why "being skinny is a form of self-respect" to smaller accounts that speak in clipped sentences, with mantras splashed across photos of thin models like "don't reward yourself with food, you're not a dog" or "a skinny body is rented, not owned".

Naturally, this shift is concerning, but it's also not entirely new. Though the 1990s are chastised as the birthplace of thinspo (thin-inspiration content), recent reports suggest that external forces have never been more punitive when it comes to thinness. Back in 2022, a report from a US nonprofit, the Center for Countering Digital Hate, found that TikTok appeared to be pushing videos about disordered eating to 13-year-old users as quickly as three minutes after they joined the platform. A 2024 study, which asked the question "Does TikTok contribute to eating disorders?", found that "algorithms for users with eating disorders (ED) delivered 4,343 per cent more toxic ED videos."

Officially, TikTok does not allow content that promotes or glorifies unhealthy or harmful behaviour, policing this through human and AI moderation. In 2022, the platform announced changes to its community guidelines, aimed at cracking down on content promoting "disordered eating".
It has since regularly updated these guidelines, stating that it removes all content "promoting disordered eating and dangerous weight loss behaviors (sic), or facilitating the trade or marketing of weight loss or muscle gain products." Today, when one types 'thinspo' into the search bar, a prompt says: "You're not alone; If you or someone you know is having a hard time, help is always available", alongside a resources tab and contact details for a national eating disorder charity.

Still, according to its users, the platform is peppered with thinspo-related content. Creators have gotten around platform-led barriers by purposefully misspelling words related to thinness, such as "skinni" instead of "skinny" (a favourite of creator Liv Schmidt, whose account was disabled by TikTok back in September), or by insisting their methods are health-centric rather than disordered.

This has been bolstered somewhat by content about Ozempic, Mounjaro and other GLP-1 medications, which were developed as diabetes drugs and are now prescribed to people whose health has been impacted by their weight. The popularisation of this conversation – Ozempic has become common parlance when discussing weight loss and celebrity culture – has served to further demonise weight gain, with conversations around bigger bodies returning to the bullying, painful rhetoric we saw in the 2000s. Look no further than the recently banned 'chubby filter', which saw users virtually try on a bigger body for entertainment.

TikTok is largely known as a social platform for teenagers, with about 60 per cent of users being Gen Z. This makes the issue of pro-ana content on the app particularly worrying.
"We know that young people, since Covid-19, have been presenting with eating disorders or disordered eating in higher numbers," says Carol McCormack, a Clinical Nurse Manager on St Patrick's Mental Health Services' Willow Grove Adolescent Unit, a Mental Health Commission-approved centre which provides mental health treatment for ages 12–18. "And while we can't say that social media is the sole reason why eating disorders develop, people would often reference that they would engage with quite a lot of content online that can negatively impact body image and lead to body comparison and body dissatisfaction.

"They also notice that once they interact with or even look at one piece of content, they're fed more and more. Such is the nature of the algorithm, but when you're still developing your sense of self and identity, that can influence you in a really impactful way."

In 2021, the Academy for Eating Disorders published an open letter asking social media platforms to reduce online harms. Since then, it's unclear how much has changed. According to Barry Murphy, Research and Policy Officer of Bodywhys, the entire social media ecosystem as we know it would have to change before pro-ana content could too. "It's part of a wider landscape that's been going on on social media for some time," he says. "Things like outfit of the day videos, skincare tutorials, curated selfies, fitness progress updates… People might enjoy the storytelling aspect, but they all hark back to the fixed idea that there is one way we should be, physically and culturally."

This idea is not new, he says. "These trends are likely cyclical, but it's hard to tell the precise origins. Pro-eating disorder content has been around for years, long before broadband and smartphones. Today, there are just more trigger points, like a few years ago when a trend suggested that your waist shouldn't be bigger than the width of an A4 page."

Eating disorders are serious, often life-threatening conditions.
They can develop for any number of reasons, and misconceptions about what causes them still persist, so exact triggers are hard to pinpoint. But it's clear that social media can play a role. In 2021, a report found that Instagram failed to protect those at risk of eating disorders from pro-ana messaging.

"This content slips through the cracks," Murphy says. "It's hard to say definitively whether platforms are doing enough, but it's our experience that people are frustrated with their annual reports that show how they're policing such content, when much of that content still comes up for people who don't need it.

"What's happening here is a gap between what's written on paper and what's happening in reality. And in reality, this content has been popping up for at least half a decade."

The best, and perhaps only, way to ensure such content won't show up on your algorithm again is to ignore it. This, of course, is easier said than done, and the fact remains that the regulation of pro-ana content online is largely left up to the individual, as opposed to the fastest-growing social media platform on the internet. However, resources exist, and there are people there to help.

"Platforms like Cybersafe and Webwise are brilliant for those trying to be more aware of what's going on out there and how to engage with it well," McCormack says. "Aside from that, the only thing parents or guardians can do is create an open dialogue about content like this, and try to potentially either dispel what they're looking at, or discuss what well-being is, as well as clarifying what is healthy and what is not."
Yahoo
13-03-2025
- Business
- Yahoo
After Axing Fact-Checkers, Meta's Community Notes Will Have Help From X
Meta will begin testing its Community Notes feature, a crowdsourcing approach to content moderation that invites users to add context to posts and rate other users' notes, on Facebook, Instagram, and Threads beginning March 18, the company said. The system not only mimics X's Community Notes, originally launched in 2019 under the name Birdwatch, but will operate on open-source software developed by X.

"Initially we will use X's open source algorithm as the basis of our rating system. This will allow us to build on what X has created and improve it for our own platforms over time," the company wrote in a blog post Thursday. Meta plans to gather feedback and "learn from the researchers who have studied" X's tech to make algorithmic adjustments as needed on a rolling basis.

The changes come just two months after Meta CEO Mark Zuckerberg announced that the social giant would eliminate third-party human fact-checking roles en masse to "reduce censorship" and "[go] back to its roots."

While Meta acknowledges that Community Notes won't serve as a "perfect" approach to content moderation, the company said it expects the product to be "less biased than the third party fact checking program it replaces." In an effort to mitigate bias, the company won't publish notes unless contributors with a variety of viewpoints generally agree with them. Notes won't indicate who authored them.

But there are concerns about the efficacy of such a system. Research published in October 2024 by The Washington Post and the Center for Countering Digital Hate found that most Community Notes on X never appear, due to the system's requirement for bipartisan agreement.
74% of accurate notes, those aligning with independent fact-checks, were not shown to users, according to the study, even as misleading election-related posts amassed 2.9 billion views.

Meta first introduced fact-checking in December 2016, following backlash over the dissemination of misinformation on the platform that may have influenced the 2016 U.S. presidential election. Approximately 200,000 users across Meta's properties have signed up in advance as potential contributors to the system, and the company is encouraging more users to join a waitlist.

The company will begin testing Community Notes in beta this month and will roll out the system nationally once Meta is "comfortable … that the program is working in broadly the way we believe it should." Notes will be made available in the U.S. market in English, Spanish, Chinese, Vietnamese, French and Portuguese, with plans to expand into other languages in the future. A global rollout is also slated, with third-party fact-checking continuing outside the U.S. for now.
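The "bipartisan agreement" requirement that both studies fault is the crux of the system Meta is adopting. X's actual open-source algorithm infers rater viewpoints via matrix factorization; as a deliberately simplified sketch of only the gating idea (the function name, viewpoint labels, and thresholds below are hypothetical, not X's or Meta's real parameters):

```python
# Simplified sketch of a cross-viewpoint publication gate for a note.
# The real Community Notes algorithm infers viewpoints with matrix
# factorization; this only illustrates the "diverse raters must
# agree" requirement with hypothetical labels and thresholds.

def should_publish(ratings, min_each_side=2, min_helpful_frac=0.8):
    """ratings: list of (viewpoint, helpful) pairs, where viewpoint
    is 'A' or 'B' and helpful is a bool. Returns True only when both
    viewpoint groups endorse the note and overall approval is high."""
    helpful_a = sum(1 for v, h in ratings if v == 'A' and h)
    helpful_b = sum(1 for v, h in ratings if v == 'B' and h)
    # Require agreement from BOTH viewpoint groups...
    if helpful_a < min_each_side or helpful_b < min_each_side:
        return False
    # ...and a high overall helpfulness rate.
    return (helpful_a + helpful_b) / len(ratings) >= min_helpful_frac

# Both sides endorse the note, 4 of 5 raters found it helpful:
ratings = [('A', True), ('A', True), ('B', True), ('B', True), ('A', False)]
print(should_publish(ratings))  # True

# Unanimous approval from only one side still fails the gate:
print(should_publish([('A', True), ('A', True), ('A', True)]))  # False
```

The second example is exactly the failure mode the CCDH and Washington Post research describes: a note can be accurate and widely endorsed by one group yet never appear, because the cross-viewpoint condition is never met.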