
Latest news with #OpenRightsGroup

Pro-Palestine online content in UK risks censorship, rights groups warn

Arab News

02-08-2025

  • Politics
  • Arab News

Pro-Palestine online content in UK risks censorship, rights groups warn

LONDON: Pro-Palestine online content in the UK could be censored through the twin threat of the Online Safety Act and the banning of Palestine Action, human rights groups have warned.

Content in support of Palestinians published online could be misconstrued as supporting Palestine Action, a protest group that was proscribed under anti-terrorism laws on July 5, The Guardian reported on Saturday. Index on Censorship, Open Rights Group and other organizations have written to Ofcom, the UK's independent communications regulator, to request clarification on the matter. Signatories to the letter also warned that online content objecting to Palestine Action's banning could be misidentified as unlawful support for the group.

Sara Chitseko, a pre-crime program manager at Open Rights Group, told The Guardian: 'Crucial public debate about Gaza is being threatened by vague, overly broad laws that could lead to content about Palestine being removed or hidden online.

'There's also a real danger that people will start self-censoring, worried they might be breaking the law just by sharing or liking posts related to Palestine and nonviolent direct action.

'This is a serious attack on freedom of expression and the right to protest in the UK. We need to ensure that people can share content about Palestine online without being afraid that they will be characterised as supportive of terrorism.'

Major social media platforms such as Facebook, Instagram and TikTok have been advised by Ofcom that they can avoid concerns about meeting the requirements of the Online Safety Act if they are more stringent in censoring content than the act mandates. The letter sent to Ofcom by the rights groups warned: 'This approach risks encouraging automated moderation that disproportionately affects political speech, particularly from marginalised communities, including Palestinian voices.'
The UK, unlike the EU, lacks a mechanism through which users can appeal the censoring of their online content. Signatories to the letter — which was also sent to Meta, Alphabet, X and ByteDance, owners of the world's top social media platforms — called for the creation of a British dispute mechanism to discourage the censoring of lawful content.

The letter added: 'We are concerned that the proscription of Palestine Action may result in an escalation of platforms removing content, using algorithms to hide Palestine solidarity posts and leave individuals and those reporting on events vulnerable to surveillance or even criminalisation for simply sharing or liking content that references nonviolent direct action.

'We are also concerned about what platforms understand by their legal duties regarding expressions of 'support' for Palestine Action.'

An Ofcom spokesperson said: 'We have provided detailed guidance to platforms about how to identify the particular types of illegal and harmful material prohibited or restricted by the act, including how to determine whether content may have been posted by a proscribed organisation.

'There is no requirement on companies to restrict legal content for adult users. In fact, they must carefully consider how they protect users' rights to freedom of expression while keeping people safe.'

Palestine Action ban coupled with Online Safety Act ‘a threat to public debate'

The Guardian

02-08-2025

  • Politics
  • The Guardian

Palestine Action ban coupled with Online Safety Act ‘a threat to public debate'

The Online Safety Act together with the proscription of Palestine Action could result in platforms censoring Palestinian-related content, human rights organisations have warned.

Open Rights Group, Index on Censorship and others have written to Ofcom calling on it to provide clear guidance to platforms on distinguishing lawful expression from content deemed to be in support of terrorism. They say a failure by the regulator to act risks misidentification – including through algorithms – of support for Palestine as support for Palestine Action, which on 5 July became the first direct action protest group to be banned under UK anti-terrorism laws. It also runs the risk of misidentifying objections to Palestine Action's proscription as unlawful support for the group, the signatories claim.

Sara Chitseko, a pre-crime programme manager at Open Rights Group, said: 'Crucial public debate about Gaza is being threatened by vague, overly broad laws that could lead to content about Palestine being removed or hidden online. There's also a real danger that people will start self-censoring, worried they might be breaking the law just by sharing or liking posts related to Palestine and non-violent direct action.

'This is a serious attack on freedom of expression and the right to protest in the UK. We need to ensure that people can share content about Palestine online without being afraid that they will be characterised as supportive of terrorism.'

The organisations' concerns are exacerbated by Ofcom's advice that platforms can avoid worrying about their duties under the Online Safety Act (OSA) if they ensure they are more censorious than the act requires. 'This approach risks encouraging automated moderation that disproportionately affects political speech, particularly from marginalised communities, including Palestinian voices,' the letter says.

Unlike in the EU, there is no independent mechanism for people in the UK to challenge content they feel has been wrongly taken down.
The signatories want platforms – the letter has also been sent to Meta, Alphabet, X and ByteDance – to commit to an independent dispute mechanism if evidence emerges of lawful speech being suppressed.

The letter, also signed by the Electronic Frontier Foundation in the US and organisations from eight European countries, as well as experts and academics, says: 'We are concerned that the proscription of Palestine Action may result in an escalation of platforms removing content, using algorithms to hide Palestine solidarity posts and leave individuals and those reporting on events vulnerable to surveillance or even criminalisation for simply sharing or liking content that references non-violent direct action.

'We are also concerned about what platforms understand by their legal duties regarding expressions of 'support' for Palestine Action.'

The letter comes a week after the OSA's age-gating for 'adult' material came into effect, prompting fears about access to Palestine-related content. For example, Reddit users in the UK have to verify their age to access the subreddit r/israelexposed.

Ella Jakubowska, the head of policy at EDRi in Brussels, said there would inevitably be suppression of 'critical voices, journalism and social movements around the world. The problem is worsened by automated content moderation systems, well known for over-removing content from Palestinian creators, in support of Black Lives Matter, about LGBTQI+ issues and more.

'It is very likely that in trying to comply with these requirements, platforms would unjustly remove content from people in the EU and other regions.' She said that would contravene laws such as the EU Digital Services Act, designed to strike a balance between keeping people safe online and freedom of expression.
An Ofcom spokesperson said: 'We have provided detailed guidance to platforms about how to identify the particular types of illegal and harmful material prohibited or restricted by the act, including how to determine whether content may have been posted by a proscribed organisation.

'There is no requirement on companies to restrict legal content for adult users. In fact, they must carefully consider how they protect users' rights to freedom of expression while keeping people safe.'

Meta, Alphabet, X and ByteDance were all approached for comment.

Spotify may delete accounts if users fail new mandatory age checks

STV News

31-07-2025

  • STV News

Spotify may delete accounts if users fail new mandatory age checks

Spotify has warned that users' accounts may be deleted if they fail to pass new age verification checks. The music streaming app has begun asking its users to verify that they are aged over 18 using facial age estimation and ID verification.

Its website states: 'You cannot use Spotify if you don't meet the minimum age requirements for the market you're in. If you cannot confirm you're old enough to use Spotify, your account will be deactivated and eventually deleted.'

This has sent many on social media into a flurry, with one user saying, 'the fact that you need to verify your age or have your account deleted really shows what's wrong with the world right now'. The Open Rights Group, which campaigns for digital freedoms, said: 'Bad law makes for bad, incoherent outcomes.'

When will Spotify use an age check?

Spotify has told ITV News that users may be prompted to complete an age check for certain age-restricted content, for example, when trying to watch a music video that has been labelled 18+ by its rightsholder. However, it is not clear exactly how consistently the measures are being applied. While some social media users have shared screenshots of their age verification requests on the app, others have said they have not been asked yet.

How does it work?

Spotify will first ask users to verify their age through facial recognition. Users must take a selfie, which will be analysed using face-scanning technology from verification service Yoti to estimate their age. If the system determines a user is underage, their account will be deactivated. However, Spotify will offer a 90-day grace period. During this time, users will receive an email allowing them to reactivate their account, and they must then complete ID verification within seven days.
To complete ID verification on Spotify, tap your profile picture at the top of the app, go to Settings and Privacy, then select Account, and tap Age Check. If Spotify still can't confirm a user's age during the 90-day grace period, or if no action is taken within seven days of reactivation, the account will be permanently deleted.

Spotify has said its service is designed for users aged 13 and over, but it hosts songs and music videos aimed at mature audiences. Last month, The Times reported that the app had also hosted pornographic podcasts, despite Spotify's ban on 'sexually explicit content'.

Spotify is the latest tech firm to roll out age checks in a bid to stop children from accessing adult material. The move follows new rules introduced under the UK government's Online Safety Act, though Spotify has told ITV News: 'Age assurance has been live as of the last few weeks, and is not implemented solely because of any one law.'

As of last Friday, tech firms must verify the age of users trying to access pornography and other adult content, such as graphic violence. They must also enforce age limits set out in their terms of service. The new rules have already triggered changes. Porn sites now require age verification, while platforms such as Reddit and X have added age checks on some posts and videos. Companies that fail to comply risk fines of up to £18 million or 10% of their global turnover, whichever is greater.

Ofcom investigates pornographic companies

This all comes as Ofcom announced on Thursday that it has launched investigations into 34 pornography websites over concerns they may not be complying with the new age-check rules under the Online Safety Act.
The regulator said it had opened formal investigations into whether companies including 8579 LLC, AVS Group Ltd, Kick Online Entertainment SA and Trendio Ltd had 'highly effective' age verification systems in place to stop children accessing pornography across 34 websites. Ofcom said it prioritised these companies based on the level of risk their services posed and the number of users they attract. These new cases add to 11 investigations already under way, including probes into 4chan, an online suicide forum, seven file-sharing services, First Time Videos LLC and Itai Tech Ltd.

Wikipedia takes action against government

Meanwhile, Wikipedia has launched legal action against the UK government over the Online Safety Act. The Wikimedia Foundation (WMF), the non-profit that runs the site, argues that certain regulations under the law, which classify Wikipedia as a 'category one' service, should not apply to it. Under the act, a company falls into category one if it has content recommender systems and 34 million UK users a month, or if it combines such systems with share functions and has at least seven million users monthly.

Wikipedia says that if forced to comply, it may have to either limit the number of users on its site or impose verification on users who don't want it, a move it says would go against its principles. In court last week, WMF's barrister Rupert Paines said the new rules would require platforms to verify users and filter out content from those who aren't verified. He warned this could render Wikipedia articles 'gibberish' unless all users were verified, and noted that many editors rely on anonymity to avoid online harassment or hacking.
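The 'category one' thresholds described above can be written out as a simple check. This is an illustrative sketch of the reported criteria only; the function and parameter names are assumptions, not wording from the act:

```python
def is_category_one(monthly_uk_users: int,
                    has_recommender_system: bool,
                    has_share_functions: bool) -> bool:
    # Illustrative sketch of the reported 'category one' thresholds;
    # names are assumptions, not the act's wording.
    # Route 1: recommender systems plus 34 million UK users a month.
    if has_recommender_system and monthly_uk_users >= 34_000_000:
        return True
    # Route 2: recommender systems plus share functions plus
    # at least seven million UK users a month.
    if (has_recommender_system and has_share_functions
            and monthly_uk_users >= 7_000_000):
        return True
    return False
```

On this reading, Wikipedia's objection is that a reference site with recommender-like features and a large readership is swept into the same tier as the biggest social networks.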

