
Meta Criticized as Online Harassment Rises After Moderation Changes
Meta, the social media giant behind Facebook and Instagram, is facing significant criticism following recent changes to its content moderation policies. According to Meta's own Q1 2025 Integrity Report, online harassment and hate speech have risen since the company implemented a major moderation shake-up earlier this year.
This surge has sparked concerns among regulators, users, and advocacy groups, raising questions about the balance between free expression and online safety.
Meta's Content Moderation Overhaul: What Changed?
In early 2025, Meta announced a sweeping revision of its content moderation policies aimed at prioritising free speech. The company reduced its reliance on automated systems to remove content and narrowed its definition of hate speech to focus only on direct and dehumanising attacks.
Additionally, Meta replaced third-party fact-checkers with community-sourced notes—a method similar to that used by rival platform X, though with stricter guidelines and oversight mechanisms in place.
Meta claims these changes have led to a 50% decrease in moderation errors, meaning fewer posts were wrongly taken down. However, the cost of this shift appears to be a rise in harmful content slipping through the cracks.
Surge in Online Harassment and Hate Speech
As reported in Meta's Q1 2025 Integrity Report, bullying and harassment content increased slightly but noticeably following the moderation update. Bullying content rose from 0.06–0.07% to 0.07–0.08% of viewed content, while violent content on the platforms also saw an uptick, to 0.09%. These figures may seem small but represent millions of additional harmful posts viewed by users.
This increase is largely attributed to Meta's effort to reduce censorship and errors, yet critics argue the company has sacrificed user safety in the process. Many users have voiced frustration over the perceived lack of protection from abusive content, with some calling for more robust moderation measures.
Regulatory Backlash and Public Opinion in the UK
The moderation shake-up comes at a time when governments, especially in the UK, are tightening regulations on online platforms. The UK's Online Safety Act, set to take effect in July 2025, requires social media companies to swiftly remove illegal content such as terrorist material, child sexual abuse material, and fraud. Failure to comply could result in fines of up to £18 million or 10% of global revenue, and in extreme cases, service shutdowns.
Public opinion supports tougher moderation measures. A global survey found that 79% of respondents agree that incitements to violence should be removed from social media platforms. In the UK, the debate continues on finding the right balance between protecting free speech and ensuring users are safe from digital abuse and misinformation.
What Lies Ahead for Meta?
Meta's moderation shake-up has ignited a crucial debate on how social media platforms should manage harmful content without overstepping into censorship. With regulatory bodies like Ofcom increasing scrutiny under the UK Online Safety Act, Meta faces growing pressure to refine its approach.
The company is likely to continue evolving its policies to better address the concerns of both users and regulators. Ultimately, the challenge lies in creating an online environment that fosters free expression while safeguarding against harassment and abuse—a balance that is far from easy to achieve.
Originally published on IBTimes UK


