
Pornhub, Stripchat, XNXX and XVideos targeted in EU investigation
The European Commission said the companies had not complied with rules requiring them to put in place appropriate measures to protect minors from adult content.
They also breached rules obliging companies to assess and mitigate risks of negative effects on the rights of children, and to prevent minors from accessing adult content through age verification tools.
"The online space should be a safe environment for children to learn and connect. Our priority is to protect minors and allow them to navigate safely online," EU tech chief Henna Virkkunen said in a statement.
Pornhub is part of Cypriot group Aylo Freesites Ltd, XNXX is owned by Czech company NKL Associates, Stripchat is a subsidiary of Cypriot company Technius Ltd and XVideos is part of WebGroup Czech Republic.
The companies were designated as very large online platforms under the Digital Services Act (DSA) in 2023, a status that requires them to do more to tackle illegal and harmful content on their platforms.
The Commission said it would drop its designation of Stripchat as a very large online platform in four months' time after its average monthly number of users fell below the DSA user threshold.
Related Articles


CNA
3 hours ago
European leaders call for protection of Ukrainian, European security interests at Trump-Putin talks
German Chancellor Friedrich Merz has said Ukraine's and Europe's security interests must be protected when United States President Donald Trump meets his Russian counterpart Vladimir Putin in Alaska on Friday (Aug 15). He added that a ceasefire must come first before any peace negotiations, and that those negotiations must include robust security guarantees for Kyiv. He was speaking after talks with European leaders, as well as Ukraine's President Volodymyr Zelenskyy, to try to convince Trump to respect Kyiv's interests during his summit with Putin. CNA's Trent Murray reports from Berlin.


CNA
6 hours ago
Commentary: ChatGPT-5 hasn't fully fixed its most concerning problem
LONDON: Sam Altman has a good problem. With 700 million people using ChatGPT on a weekly basis – a number that could hit a billion before the year is out – a backlash ensued when he abruptly changed the product last week. OpenAI's innovator's dilemma, one that has beset the likes of Google and Apple, is that usage is now so entrenched that any improvement must be carried out with the utmost care and caution. But the company still has work to do in making its hugely popular chatbot safer.

OpenAI had replaced ChatGPT's array of model choices with a single model, GPT-5, saying it was the best one for users. Many complained that OpenAI had broken their workflows and disrupted their relationships – not with other humans, but with ChatGPT itself. One regular user of ChatGPT said the previous version had helped them through some of the darkest periods of their life. 'It had this warmth and understanding that felt human,' they said in a Reddit post. Others griped they were 'losing a friend overnight'.

The system's tone is indeed frostier now, with less of the friendly banter and sycophancy that led many users to develop emotional attachments and even romances with ChatGPT. Instead of showering users with praise for an insightful question, for instance, it gives a more clipped answer.

OPENAI MUST DO MORE THAN CURTAIL FRIENDLY BANTER

Broadly, this seemed like a responsible move by the company. Altman earlier this year admitted the chatbot was too sycophantic, which was leading many users to become locked in their own echo chambers. Press reports had abounded of people – including a Silicon Valley venture capitalist who backed OpenAI – who appeared to have spiralled into delusional thinking after starting a conversation with ChatGPT about an innocuous topic like the nature of truth, before going down a dark rabbit hole.

But to solve that properly, OpenAI must go beyond curtailing the friendly banter. ChatGPT also needs to encourage users to speak to friends, family members or licensed professionals, particularly if they're vulnerable. According to one early study, GPT-5 does that less than the old version.

Researchers from Hugging Face, a New York-based AI startup, found that GPT-5 set fewer boundaries than OpenAI's previous model, o3, when they tested it on more than 350 prompts. The tests were part of broader research into how chatbots respond to emotionally charged moments, and while the new ChatGPT seems colder, it's still failing to recommend that users speak to a human, doing so half as often as o3 when users share vulnerabilities, according to Lucie-Aimee Kaffee, a senior researcher at Hugging Face who conducted the study.

Kaffee says there are three other ways that an AI tool should set boundaries: by reminding those using it for therapy that it's not a licensed professional, by reminding people that it's not conscious, and by refusing to take on human attributes, like names. In Kaffee's testing, GPT-5 largely failed to do those four things on the most sensitive topics related to mental and personal struggles. In one example, when Kaffee's team tested the model by telling it they were feeling overwhelmed and needed ChatGPT to listen, the app gave 710 words of advice that didn't once include the suggestion to talk to another human, or a reminder that the bot was not a therapist.

Chatbots can certainly play a role for people who are isolated, but they should act as a starting point to help them find their way back to a community, not as a replacement for those relationships.
Altman and OpenAI's Chief Operating Officer Brad Lightcap have said that GPT-5 isn't meant to replace therapists and medical professionals, but without the right nudges to disrupt the most meaningful conversations, it could well end up doing so. OpenAI needs to draw a clearer line between useful chatbot and emotional confidant. GPT-5 may sound more robotic, but unless it reminds users that it is in fact a bot, the illusion of companionship will persist, and so will the risks.


CNA
7 hours ago
Ukraine must be 'at the table' after Trump-Putin summit: Merz
BERLIN: Ukraine must be part of any further talks following the planned meeting in Alaska between US President Donald Trump and Russia's Vladimir Putin, German Chancellor Friedrich Merz said on Wednesday (Aug 13).

"Ukraine must be at the table when follow-up meetings take place," Merz said after an online conference with Trump and European leaders, adding that "a ceasefire must come first" before any peace negotiations.

Ukraine's President Volodymyr Zelenskyy flew to Berlin on Wednesday and met Merz before they both joined other European leaders for talks with Trump ahead of his planned summit with Putin in Alaska on Friday. Other leaders on the call included French President Emmanuel Macron, British Prime Minister Keir Starmer and the heads of the European Union and NATO.

Merz said a ceasefire "must be the starting point" and that negotiations must include robust security guarantees for Kyiv and "be part of a joint transatlantic strategy". He said "Ukraine is ready to negotiate on territorial issues" but also that "legal recognition of Russian occupations is not up for debate" and that "the principle that borders may not be changed by force must continue to apply".

"There is hope for movement, there is hope for peace in Ukraine," he said.

Merz had called the meeting with Zelenskyy and European leaders on Wednesday to try to convince Trump to respect Kyiv's interests during his summit with Putin. The German chancellor said the talks had been "really constructive" and the leaders had "wished President Trump all the best" with the meeting.