Latest news with #contentModeration


CBC
15-05-2025
- Entertainment
Ye's song praising Hitler was pulled from most online platforms. Are they doing enough?
When controversial rapper Ye's new song praising Adolf Hitler was posted on social media platforms and music streaming services last week, most removed the antisemitic track within days. But some experts say these companies are too slow to respond and are not doing enough to prevent hateful content from being posted in the first place. In the case of the song by the hip-hop artist formerly known as Kanye West, the track had already been viewed and listened to by millions before it came down.

Vlad Khaykin, of the Simon Wiesenthal Center, says tech companies are "falling down on the job" of protecting the public from hate, incitement, harassment and intimidation.

"In this specific case, many of these platforms did take action to remove this from their platform. But the truth is it should have never had a presence on those sites in the first place," said Khaykin, who is the human rights organization's executive vice-president of social impact and partnerships for North America.

Imran Ahmed, CEO of the Center for Countering Digital Hate, agrees that these tech companies are not taking the most basic, common-sense approach to systemically dealing with content by "hate actors."

"There are so many things that they could be doing," he said, whether hiring staff to search specifically for such hateful content or adopting a more "sophisticated technological solution."

"They could use the same tools that they used to identify copyrighted content."

The companies are not transparent about what methods, if any, they use, he says. "I can't tell you whether or not there are people who are searching for the content."

Ye's song and video for Heil Hitler, which glorifies the Nazi leader and includes a sample from a Hitler speech, was removed from a number of streaming platforms but remains on Elon Musk's social media platform X.

It is just the latest antisemitic messaging by Ye, whose X account has been deactivated and reactivated over the past few years because of such posts. This past February, it was deactivated following posts in which Ye declared himself a Nazi and said "I love Hitler." But he was soon back on X, where his Heil Hitler (Hooligan Version) video is now nearing 10 million views.

Jim Berk, CEO of the Simon Wiesenthal Center, called out X for allowing the song — saying in a statement it had become Ye's "partner in spreading vitriol against Jews" — and for allowing a "flagrant violation of its own rules."

"We call on X to remove West from its platform and for other platforms and distributors to refuse to host or monetize this song," he said. "There must be a clear line when it comes to glorification of genocidal regimes, particularly to millions of young people."

Neither X nor Musk has responded publicly to these complaints. But a number of other tech platforms say they've pulled Ye's song.

A spokeswoman for SoundCloud said in an email to CBC News that the audio streaming platform had taken steps to remove nearly 400 versions of Heil Hitler. YouTube says it removed the song and will continue to take down re-uploads, while Reddit says it has been removing the song and "any celebration of its message." Although Spotify did not respond, NBC reported that it also appeared to have removed the song from its platform.
Certain tools to ID hateful content

However, Khaykin, from the Simon Wiesenthal Center, says we're in an age of technology when certain tools, such as AI, can identify problematic content before it is added to a platform.

In its annual Digital Terrorism and Hate Report Card, the centre rates how well major digital platforms combat online hate, antisemitism and extremism. The criteria include how fast a platform removes such content once reported, and whether it publishes transparency reports with specific data on hate and terrorist content removals. But the centre gave low grades to most of those platforms in its 2025 report: TikTok got a C, Spotify a C-, and Cs also went to both Google/YouTube and Facebook/Instagram.

"It's not really, I think, a matter of capability," Khaykin said. "It's really a matter of will. Does there exist the will to actually, seriously do something about it? And unfortunately, sometimes the will to do the right thing, it bumps up against the profit motive."

Ahmed, with the Center for Countering Digital Hate, also questions the companies' priorities.

"It's worth remembering that these platforms, if you try and upload a few seconds of a copyrighted piece of music, it will be down in a heartbeat," he said. "But they somehow seem incapable of taking action against a piece of content that glorifies in the murder of millions of Jews."

"They seem to be placing less concern about that than they do about someone stealing three seconds of a Taylor Swift song."
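Ahmed's copyright comparison points at a concrete mechanism: fingerprint matching against a blocklist of already-removed content, the same basic idea behind copyright systems such as YouTube's Content ID. The Python sketch below is a toy illustration of that idea only, not any platform's actual system; the fingerprint function, the 64-bit length and the distance threshold are all invented for the example, and production systems use far more robust perceptual audio features that survive re-encoding.

```python
from typing import Set


def toy_fingerprint(audio_bytes: bytes, n_bits: int = 64) -> int:
    """Derive a toy n-bit fingerprint: one bit per chunk, set when the
    chunk's mean byte value exceeds the global mean. Real systems hash
    spectral features so the print survives re-encoding; this one does not."""
    if not audio_bytes:
        return 0
    global_mean = sum(audio_bytes) / len(audio_bytes)
    chunk = max(1, len(audio_bytes) // n_bits)
    bits = 0
    for i in range(n_bits):
        window = audio_bytes[i * chunk:(i + 1) * chunk]
        mean = sum(window) / len(window) if window else 0.0
        bits = (bits << 1) | (1 if mean > global_mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


def is_known_reupload(upload: bytes, blocklist: Set[int], max_dist: int = 4) -> bool:
    """Flag an upload whose fingerprint sits within max_dist bits of any
    fingerprint on the blocklist, catching near-duplicates such as re-encodes."""
    fp = toy_fingerprint(upload)
    return any(hamming(fp, banned) <= max_dist for banned in blocklist)


# Once one copy has been reviewed and removed, its fingerprint can gate
# later uploads at ingest time instead of after millions of views.
removed_track = b"stand-in bytes for the removed audio"
blocklist = {toy_fingerprint(removed_track)}
print(is_known_reupload(removed_track, blocklist))  # True: exact match is distance 0
print(is_known_reupload(b"completely different audio payload", blocklist))  # usually False; the toy hash gives no guarantees
```

The point of the sketch is the workflow, not the hash: once a human reviewer has confirmed one copy violates policy, every near-identical re-upload can be blocked automatically, which is how SoundCloud-scale removals of hundreds of versions become tractable.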


Telegraph
08-05-2025
Online Safety laws could flood our site with fake news, claims Wikipedia
Wikipedia is launching a legal challenge against the Online Safety Act over fears that new requirements could allow false information to appear on its site.

The Wikimedia Foundation will challenge legislation that could define Wikipedia as a category one service under the Act. Services in this category are subject to the most stringent duties to protect users from harmful content online, which would require Wikipedia to carry out user verification – a requirement that would damage the way volunteers and communities edit and review content on the site.

The foundation said the duties would mean it had to allow any user to block other unverified users from editing or removing content they post, upsetting the delicate hierarchy the site currently has in place. The platform warned this change could let malicious users easily post harmful and false content, including misinformation, and block Wikipedia's volunteer editors from then removing it.

The Wikimedia Foundation said in a blog post: 'Wikipedia is kept free of bad content because of the important work of thousands of members of the public, who can review and improve the content on the website to ensure it is neutral, fact-based and well-sourced.'

Sophisticated volunteer communities, working in more than 300 languages, collectively govern almost every aspect of day-to-day life on Wikipedia. Their ability to set and enforce policies, and to review, improve or remove what other volunteers post, is central to Wikipedia's success, notably in resisting vandalism, abuse and misinformation.

The blog post said: 'For example, volunteer users worked day and night to ensure that Wikipedia presented neutral and reliable information about the Southport murders at the same time as misinformation and race-baiting spread unchecked on social media.

'However, if Wikipedia is designated as category one, the Wikimedia Foundation will need to verify the identity of Wikipedia users.

'That rule does not itself force every user to undergo verification – but under a linked rule, the foundation would also need to allow other (potentially malicious) users to block all unverified users from fixing or removing any content they post.

'This could mean significant amounts of vandalism, disinformation or abuse going unchecked on Wikipedia, unless volunteers of all ages, all over the world, undergo identity verification.'

'Overregulated'

The Online Safety Act is steadily coming into force this year, with social media and other platforms hosting user-generated content becoming subject to a range of duties that require them to protect users, and in particular children, from seeing harmful or illegal content. Firms that breach the new rules face fines of up to £18 million or 10 per cent of global revenue – whichever is greater.

The Wikimedia Foundation said it did not oppose online safety regulation, or even the use of a categories system, but said it would be 'overregulated' if designated as a category one service, and so felt compelled to act.

It added: 'Although the UK Government felt this category one duty (which is just one of many) would usefully support police powers 'to tackle criminal anonymous abuse' on social media, Wikipedia is not like social media.

'Wikipedia relies on empowered volunteer users working together to decide what appears on the website. This new duty would be exceptionally burdensome (especially for users with no easy access to digital ID).
'Worse still, it could expose users to data breaches, stalking, vexatious lawsuits or even imprisonment by authoritarian regimes. Privacy is central to how we keep users safe and empowered.

'Designed for social media, this is just one of several category one duties that could seriously harm Wikipedia.'