4chan and porn site investigated by Ofcom over online safety
The online message board 4chan is being investigated by the UK communications regulator over a suspected failure to comply with recently introduced online safety rules.
Ofcom says it has received complaints over potential illegal content on the website, which has not responded to its requests for information.
Under the Online Safety Act, online services must assess the risk of UK users encountering illegal content and activity on their platforms, and take steps to protect them from it.
Ofcom is also investigating porn provider First Time Videos over its age verification checks, and seven file sharing services over potential child sexual abuse material.
4chan has been contacted for comment.
Ofcom says it requested 4chan's risk assessment in April but has not had any response.
The regulator will now investigate whether the platform "has failed, or is failing, to comply with its duties to protect its users from illegal content".
It would not say what kind of illegal content it is investigating.
Ofcom has the power to fine companies up to 10% of their global revenue or £18m, whichever is greater.
4chan has often been at the heart of online controversies in its 22 years, including misogynistic campaigns and conspiracy theories.
Users post anonymously, which often leads to extreme content being shared.
It was the subject of an alleged hack earlier this year, which took parts of the website down for over a week.
Seven file sharing services also failed to respond to requests for information from the regulator.
They are Im.ge, Krakenfiles, Nippybox, Nippydrive, Nippyshare, Nippyspace and Yolobit.
Ofcom also says it has received complaints over potential child sexual abuse material being shared on these platforms.
Separately, porn provider First Time Videos, which runs two websites, is being investigated over whether it has adequate age checks in place to stop under-18s accessing its sites.
Platforms which host age-restricted content must have "robust" age checks in place by July.
Ofcom does not specify exactly what this means, but some platforms have been trialling age verification using facial scanning to estimate a user's age.
Social media expert Matt Navarra told BBC News earlier this year facial scanning could become the norm in the UK.

Related Articles
Yahoo · 14 hours ago
Meta sues app-maker as it cracks down on 'nudifying'
Meta has taken legal action against a company which ran ads on its platforms promoting so-called "nudify" apps, which typically use artificial intelligence (AI) to create fake nude images of people without their consent.

It has sued the firm behind CrushAI apps to stop it posting ads altogether, following a months-long cat-and-mouse battle to remove them. In January, the blog FakedUp found 8,010 instances of ads from CrushAI promoting nudifying apps on Meta's Facebook and Instagram platforms.

"This legal action underscores both the seriousness with which we take this abuse and our commitment to doing all we can to protect our community from it," Meta said in a blog post. "We'll continue to take the necessary steps - which could include legal action - against those who abuse our platforms like this."

The growth of generative AI has led to a surge in "nudifying" apps in recent years. It has become such a pervasive issue that in April the Children's Commissioner for England called on the government to introduce legislation to ban them altogether. It is illegal to create or possess AI-generated sexual content featuring children.

Meta said it had also made another change recently in a bid to deal with the wider problem of "nudify" apps online, by sharing information with other tech firms. "Since we started sharing this information at the end of March, we've provided more than 3,800 unique URLs to participating tech companies," it said.

The firm accepted it had an issue with companies evading its rules to deploy adverts without its knowledge, such as creating new domain names to replace banned ones. It said it had developed new technology designed to identify such ads, even if they didn't include nudity.

Nudify apps are just the latest example of AI being used to create problematic content on social media platforms. Another concern is the use of AI to create deepfakes - highly realistic images or videos of celebrities - to scam or mislead people.

In June, Meta's Oversight Board criticised a decision to leave up a Facebook post showing an AI-manipulated video of a person who appeared to be Brazilian football legend Ronaldo Nazário. Meta has previously attempted to combat scammers who fraudulently use celebrities in adverts through facial recognition technology. It also requires political advertisers to declare the use of AI, because of fears around the impact of deepfakes on elections.
Yahoo · 2 days ago
Social media giants can ‘get on' and tackle fraud cases, says City watchdog
Tech giants such as Meta do not need further guidance about tackling fraud on their platforms and can 'get on and do it', the boss of the UK's financial watchdog has said as it clamps down on so-called 'finfluencers'.

Nikhil Rathi, chief executive of the Financial Conduct Authority (FCA), said fraud is set to be one of the most 'profound issues' facing regulators over the next few years. He was asked by MPs on the Treasury Committee whether he would like to see stronger guidance to technology platforms about how to take down fraud and their responsibilities in relation to the Online Safety Act.

'I think they know what to do,' Mr Rathi told the committee. 'I don't think they need guidance. There's plenty of cases where they can get on and do it.'

The Online Safety Act will require platforms to put in place and enforce safety measures to ensure that users, particularly children, do not encounter illegal or harmful content, and that if they do it is quickly removed.

The FCA has stepped up its crackdown on financial influencers, or 'finfluencers', with numerous takedown requests to social media platforms and a handful of arrests. The watchdog's boss was asked whether tech firms were too slow to tackle fraud on their platforms.

'We have to operate within our powers; we can't force the tech firms to take down promotions that we see as problematic, and we rely on co-operation from them,' he said. 'I would not say that all tech firms don't co-operate. There are some that have invested very significantly, they are proactive, they are responsive, and once they've decided to move we've seen significant improvements on their platform.'

Referring to Facebook and Instagram owner Meta, he said the issue was both the speed at which harmful content was taken down and that new accounts were being created with 'almost identical content'.

Mr Rathi said Ofcom, which oversees online platforms' safety systems, was 'understandably' prioritising child protection and sexual exploitation offences and would 'get to fraud next year'.

Pressed further on tech giants being held to account on fraud, Mr Rathi said: 'I think this is going to be one of the most profound issues facing financial regulation for the next several years. We talk about big tech firms entering financial services; well, many have already entered and provide systemic services adjacent to financial services.'

Yahoo · 2 days ago
Ofcom investigates notorious 4chan forum linked to mass shootings
A notorious online message board linked to mass shootings in the United States is facing an investigation by British officials.

4chan is to be investigated by Ofcom, Britain's tech regulator, under the Online Safety Act over the suspected spread of illegal online posts. The forum, which became known for its users' vociferous support for Donald Trump and has been blamed for radicalising mass shooters in the US, is one of nine websites that are the subject of new investigations by Ofcom.

The regulator said it had received reports of 'illegal content and activity' being shared on 4chan, which had not responded to its requests for information. The Online Safety Act requires websites to take action against illegal online posts, such as terror content, incitement to violence, racial hatred and extreme pornography.

Originally launched in 2003 by American developer Christopher Poole, known by the online moniker 'moot', 4chan is notorious for its lack of moderation and has long been a byword for the extreme fringes of the internet. It is associated with hacker groups and the far Right.

Anonymous members of 4chan's 'Politically Incorrect' board were enthusiastic supporters of Mr Trump during his 2016 election campaign. 4chan users were also involved in spreading conspiracy theories such as QAnon and 'Pizzagate', which promoted unsubstantiated claims of a Democratic paedophile ring. The site was also the source of a post claiming that Jeffrey Epstein, the financier and sex trafficker, had died, published 40 minutes before the news broke in the US media.

4chan was blamed by New York's attorney general for 'radicalising' 18-year-old Payton Gendron, a mass shooter who killed 10 people in Buffalo in May 2022. A report alleged 4chan had been 'formative' in 'indoctrinating him into hateful, white supremacist internet subcultures'.

The investigation into 4chan comes as Ofcom ramps up its enforcement of the Online Safety Act, which came into full effect in April this year. The law gives Ofcom the power to investigate websites for failing to do enough to block and remove illegal content. Offending websites can be fined up to 10pc of their turnover or £18m, and can also be blocked from the UK, while senior managers can receive jail terms for repeated failings.

Several fringe websites have already pulled their services from the UK after facing regulatory scrutiny from Ofcom. Gab, a social network that had courted Right-wing commentators, the message board Kiwi Farms and YouTube rival Bitchute have all blocked UK visitors, accusing the Online Safety Act of interfering with free expression.

Ofcom also opened investigations into eight other sites, including a pornography provider accused of failing to stop children accessing its websites and seven file-sharing websites that have allegedly failed to stop the spread of child sexual abuse material.

Since 2015, 4chan has been owned by Japanese internet entrepreneur Hiroyuki Nishimura. 4chan was contacted for comment.