Ireland's new age check system kicks in and seeks to stop children accessing 'adult' video
The age verification system is contained in Part B of the Online Safety Code from Ireland's media regulator, Coimisiún na Meán.
The code, which aims to address harmful and illegal content, applies to video-sharing platforms whose EU headquarters are in Ireland.
Many of these platforms are household names, including Facebook and Instagram, as well as YouTube, TikTok, X, and Reddit.
However, this means that other major platforms, such as Snapchat, fall outside the remit of the Code and are instead subject to UK online safety legislation.
CyberSafeKids, meanwhile, noted that children will still have unrestricted access to harmful or pornographic content provided by other commercial operators outside the Code's remit, which fall instead under the EU's Digital Services Act (DSA).
The new age verification system seeks to provide an 'effective method of age assurance' that will prevent children from accessing pornography or extreme violence.
Other restricted categories include cyberbullying, promotion of eating and feeding disorders, promotion of self-harm and suicide, dangerous challenges, and incitement to hatred or violence.
In the Code, Coimisiún na Meán notes that 'merely asking users whether they are over 18 will not be enough'.
It added that platforms will 'need to use appropriate forms of age verification to protect children from video and associated content which may impair their physical, mental or moral development'.
The Code requires video sharing platforms to implement 'effective age assurance measures' to ensure that 'adult-only video content cannot normally be seen by children'.
The Code does not mandate a specific type of age verification method but notes that an age assurance measure 'based solely on self-declaration of age by users of the service shall not be an effective measure'.
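The Code leaves the choice of mechanism to the platforms, so the sketch below is purely illustrative: a hypothetical server-side gate that treats a self-declared age as insufficient on its own and only unlocks restricted video when a stronger age-assurance signal is present, such as a verified ID check or a facial age estimate from a third-party provider. The class names, fields and threshold are assumptions made for the example, not anything prescribed by Coimisiún na Meán.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignals:
    """Hypothetical age-assurance signals a platform might hold for one user."""
    self_declared_over_18: bool = False          # user ticked a box or entered a date of birth
    id_verified_over_18: bool = False            # e.g. a passport or driving licence check passed
    facial_estimate_age: Optional[float] = None  # age estimate from a third-party provider

# Illustrative buffer: demand a margin above 18 when relying on estimation alone.
FACIAL_ESTIMATE_THRESHOLD = 23.0

def may_view_adult_video(signals: AgeSignals) -> bool:
    """Allow restricted video only when something stronger than self-declaration supports 18+.

    Self-declaration on its own never unlocks the content, mirroring the Code's
    point that 'merely asking users whether they are over 18 will not be enough'.
    """
    if signals.id_verified_over_18:
        return True
    if signals.facial_estimate_age is not None:
        return signals.facial_estimate_age >= FACIAL_ESTIMATE_THRESHOLD
    # Reaching this point includes the self-declaration-only case.
    return False

print(may_view_adult_video(AgeSignals(self_declared_over_18=True)))  # False
print(may_view_adult_video(AgeSignals(id_verified_over_18=True)))    # True
print(may_view_adult_video(AgeSignals(facial_estimate_age=25.4)))    # True
```

In a real deployment the stronger signals would come from an external age-assurance provider; the point of the sketch is simply that the self-declared flag alone never changes the outcome.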
Video-sharing platforms are also required to have an 'easy-to-use and effective procedure for the handling and resolution of complaints' around age verification and related issues.
Platforms are also required to provide parental controls that enable parents or guardians to set time limits in respect of video content and to restrict children from viewing video uploaded or shared by users unknown to the child.
The Code was formally adopted last November but platforms were given nine months to make any changes that were needed to their online systems.
CyberSafeKids described today as a 'milestone that formally shifts legal responsibility onto tech companies to protect children online'.
It added that this move 'finally places a clear obligation on platforms to face the reality that underage users are accessing harmful content daily on their platforms, and to implement effective safeguards'.
While Coimisiún na Meán has not mandated a specific type of age verification system, CyberSafeKids said the nine-month implementation period has allowed 'more than enough time to develop robust age verification systems other than self-declaration'.
Meanwhile, CyberSafeKids expressed concern that the Code does not cover recommender systems.
A recommender system is an algorithm that uses a person's data to suggest content they might be interested in.
However, CyberSafeKids warned that 'much harmful content coming through a child's feed originates from this'.
The Irish Council for Civil Liberties previously warned that recommender systems 'push hate and extremism into people's feeds and inject content that glorifies self-harm and suicide into children's feeds'.
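To illustrate the mechanism being described, here is a minimal, hypothetical sketch of an engagement-driven recommender: it ranks candidate posts by how often the user has previously engaged with each post's topic tags, which is how content a child lingers on can come to dominate their feed. The tags, data structures and scoring rule are invented for the example and do not represent any platform's actual algorithm.

```python
from collections import Counter

def recommend(feed_candidates, engagement_history, top_n=3):
    """Toy engagement-driven recommender: rank candidate posts by how often the
    user has previously engaged with each post's topic tags. Not any platform's
    real algorithm, just an illustration of 'more of what you linger on'."""
    topic_weights = Counter(tag for post in engagement_history for tag in post["tags"])
    def score(post):
        return sum(topic_weights[tag] for tag in post["tags"])
    return sorted(feed_candidates, key=score, reverse=True)[:top_n]

# A user who has engaged with 'extreme-diets' posts gets more of them ranked first.
history = [{"tags": ["extreme-diets"]}, {"tags": ["extreme-diets", "fitness"]}]
candidates = [
    {"id": 1, "tags": ["football"]},
    {"id": 2, "tags": ["extreme-diets"]},
    {"id": 3, "tags": ["music"]},
]
print(recommend(candidates, history))  # the 'extreme-diets' post is ranked first
```

The feedback loop the critics describe is visible even in this toy version: every engagement increases the weight of a topic, which pushes more of that topic into the ranking.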
CyberSafeKids has called for the Code to be reviewed within 12 to 24 months.
'If the Code has failed to reduce underage access to harmful content, stronger measures must be implemented to keep children safe,' said CyberSafeKids.
'Financial penalties should be quickly and fully imposed for non-compliance, in line with the legislation.'

Related Articles


Irish Times
Posting photos of your children online just got a lot riskier
The ethics of sharing photos online has shifted over the years, especially when it comes to children. A few years ago, many of us may have posted back-to-school photos or holiday snaps without thinking twice. But recently I have noticed that even some of the most enthusiastic oversharers in my social media feeds are quietly backing away. This has happened as the politics and the business of 'sharenting' have evolved. And with AI it is all about to get even more fraught.
'Sharenting' was once a jokey portmanteau for the over-enthusiastic social media parent. But it has developed a sharper edge as stories have emerged of children, now teenagers, asking their parents to delete childhood images posted without consent. Some of these young people have gone public, embarrassed and sometimes distressed by the permanent record of them in nappies or having meltdowns. Former child influencers, who had their big milestones and sometimes daily struggles turned into a stream of monetised content, have been some of the most vocal critics.
One young person, Cam Barrett, a TikTok influencer, recently testified to a hearing in Washington state that she was terrified to use her real name, 'because a digital footprint I had no control over exists'. Her mother, she said, had shared details of her first period, of illnesses she had and of a car crash she was involved in.
Ensuing debates about child exploitation and consent have prompted a political response. France introduced legislation first requiring profits from under-16 influencers to be set aside in protected accounts, and later giving children explicit rights to their own image. In the US, several states are considering similar laws, or updating laws for child actors to cover influencers. This has all prompted many regular platform users to post fewer pictures, use private messaging apps or opt for back-of-head portraits that don't show children's faces.
Now, a further big shift is under way. If a photo of you, or of your child, exists on the internet, it has likely been used to train the AI models powering text- and image-generating tools. We no longer have to think only about who may see our images, but also about who processes them.
Text- and image-generating tools, such as OpenAI's ChatGPT or Google's Gemini, are underpinned by AI models, and these models need to be trained on vast, almost unimaginable quantities of data: billions if not trillions of snippets of text, photos and videos. The first wave of this came from crawling the web, copying text and images from across millions of websites, forums, blogs and news sites. But eventually tech companies exhausted even the seeming vastness of the internet. They are now in a race to find content to feed a voracious appetite, and the winner stands to dominate the next phase of the internet.
The need for data is so acute that Meta considered buying the storied publishing house Simon & Schuster just to have access to its catalogue of human language. In the end, according to court filings, it deemed buying content from publishing houses too slow and too expensive; instead, it is accused of having downloaded 7.5 million pirated books and 81 million pirated research papers from the file-sharing site LibGen.
Owners of copyrighted content have fought back. Creators from journalists to illustrators are suing AI companies for using their work without permission. The core issue is whether scraping the internet for data, including copyrighted content, qualifies as 'fair use'. So far, the answers are murky.
Perhaps it is inevitable, then, that the likes of Google and Meta would look closer to home for content to feed the machines. Google transcribed millions of videos posted to its YouTube platform and fed the text to its models (so did OpenAI, which is being sued by YouTube content creators). Meta is trying to work out how it can use the billions, if not trillions, of pieces of content people have uploaded to Facebook and Instagram going back decades.
Mark Zuckerberg told shareholders in 2024 that the company's data set is 'greater than the Common Crawl', one of the largest open web data sets used to train language models. By this he means publicly available Facebook and Instagram content – or, in other words, your photos and mine.
And it might go even further. Reporting by TechCrunch and The Verge suggests that recent changes to Meta's terms of service may make it possible for the company to use unpublished photos on our phones' camera rolls to train AI models. (Meta has responded that it is not currently using unpublished photos in this way, and that these features are 'opt-in' as part of an AI photo tool, but the reporting suggests it has not ruled out using them in future.)
This all adds up to the politics of photos of ourselves and others online – whether shared or just taken and stored in our phones – becoming even more fraught. Where once we worried about bad actors stealing photos of kids for unthinkable purposes, we now need to consider the ethics of their voices, gestures and birthday parties feeding energy-guzzling image generators that will be used for ends we can't even imagine yet.
Ultimately, we need to consider whether we're happy for our private photos to be used to bolster the market value of some of the most valuable companies that have ever existed. The monetisation of childhood is no longer limited to a lucky (or unlucky) few who sign brand deals; now every photo you've ever taken is a commodity.
Liz Carolan works on democracy and technology issues, and writes at


Irish Examiner
Will tech giants finally take online safety for children seriously?
The wild west of social media self-regulation has come to an end, but the battles that will define this new era have only just begun after a very busy week in this hotly contested space.
Last Monday, the second part of Irish regulator Coimisiún na Meán's Online Safety Code came into force. It came after a nine-month lead-in time for companies to prepare their systems for the code, which is aimed at keeping people, particularly children, safe online.
This Part B of the nascent code means that the video-sharing platforms under its remit that allow pornography, like X, must use effective age assurance controls to make sure children can't watch it. In other words, the Elon Musk-owned platform formerly known as Twitter must make sure people are aged 18 or over to view porn that is available on it.
There are other aspects to it too, including prohibiting the sharing of content harmful to children, such as content promoting eating disorders, self-harm or suicide, cyberbullying, hate speech, and extreme violence.
Critics have claimed parts of the code are too vague and don't provide clear enough timelines to take action against those in breach. These same critics say it will be on the regulator to show it has the teeth to hold platforms to account.
In theory, X or any of the other firms to which it applies, like Meta and YouTube, could face heavy penalties if they don't adhere to it. Fines of €20m or 10% of turnover, whichever is greater, can be imposed for breaches of the code. The latter figure could run into billions of euro for some firms.
But just because the code came into force on Monday, it didn't mean things had changed overnight. Fine Gael TD Keira Keogh, who chairs the Oireachtas Children's Committee, said the following day that children could still set up accounts, which 'opens a doorway to unlimited inappropriate, disturbing and damaging content'.
'Parents are understandably frustrated that as of now, nothing has changed and their kids are still at risk of being exposed to all that is sinister in the world of social media,' she said.
Given the availability and proliferation of the kinds of nasty content people have become used to on social media feeds, advocates had stressed that firms should not be allowed to avoid their obligations any longer now that Coimisiún na Meán had its powers in place.
'Platforms have benefited from a substantial nine-month implementation period since the Code's publication in October 2024, allowing them more than enough time to develop robust age verification systems other than self-declaration, stringent content controls to prevent child exposure to harmful material, and clear and easy-to-use reporting systems,' charity CyberSafeKids said.
It appears that the regulator agreed.
No age checks
On Wednesday, Coimisiún na Meán wrote to X seeking an explanation as to why there were still no age checks for watching pornography, and asking it to set out by Friday how it was complying with its obligations.
'Platforms have had nine months to come into compliance with Part B of the Code,' it said. 'We expect platforms to comply with their legal obligations. Non-compliance is a serious matter which can lead to sanctions including significant financial penalties.'
The regulator also said it would take further action if there is evidence of non-compliance with the Online Safety Code.
'We are continuing to review all of the designated video-sharing platforms to assess their compliance with the Code and will take any further supervisory, investigative or enforcement action required,' it added.
The pressure on X and other platforms isn't just coming from Ireland. Across Europe, regulators are trying to get to grips with regulating this kind of content online.
In the UK, the Online Safety Act sets out children's codes, which came into force on Friday and will see some services, including pornographic websites, start to check the age of UK users. Again, non-compliance can result in a fine of 10% of turnover, or even see executives jailed.
From Friday, anyone trying to access pornographic content in the UK should have been met with a new age check before they could access that site, as platforms clearly got the UK's message. On the other hand, concerns have been raised over a wider restriction on content deemed 'unsuitable' and whether that amounts to censorship online.
At home, the Irish regulator's work also fits in with wider European legislation, namely the Digital Services Act, and investigations from the European Commission into major platforms. It's all very complex, but our Online Safety Code sits alongside the Digital Services Act and the EU's laws on terrorist content online. All together, they're supposed to allow regulators to hold the social media companies to account in a variety of ways.
In the UK, Reddit and Bluesky introduced age checks in advance of the new rules coming into force there too, showing that platforms are clearly hearing the obligations they now face.
Under the Digital Services Act, for example, the European Commission recently opened formal proceedings against sites including Pornhub and XVideos, while member states also grouped together to take action against smaller pornographic platforms. The Commission said these major sites hadn't put appropriate age verification tools in place to safeguard minors. An in-depth investigation is now under way.
In a curious piece of timing, in the same week that Ireland's and the UK's safety codes came into force, X published the methods it will use to check users' ages, which include a live selfie assessed by AI to determine age, or using someone's email address to estimate their age.
'We are required by regulations including the UK's Online Safety Act, the Irish Online Safety Code and the European Union Digital Services Act, to verify your age for access to certain types of content,' X said on its website.
In Ireland, the regulator prescribes that age checks must be robust, effective and privacy-protecting, and it's understood it will be considering X's proposals in this regard. Even ahead of that assessment, age verification on X appeared to have already come into force, as access to such content became restricted over the weekend.
Things are changing, and changing quickly. Charities working in this space have said that while the legislative obligations on platforms are now clearly present where they hadn't been before, enforcement will be key.
In a statement to the Irish Examiner, CyberSafeKids said: 'What we expect to see over the next 12-24 months is tech companies finally stepping up and accepting responsibility and accountability to ensure children are not accessing platforms that were not designed for them in the first place and that they're shielded from the kinds of harmful content they contain.'
'It is still early to fully assess how aggressively and effectively Coimisiún na Meán will act on enforcement; initial results suggest continued and predictable heel-dragging from the large social media providers, so proactive monitoring and swift intervention are now key for the integrity of the Code.'
It said that if companies continue to drag their heels, the regulator must act firmly to impose quick and substantial financial penalties for non-compliance.
Meanwhile, Noeline Blackwell, online safety coordinator at the Children's Rights Alliance, said that because Coimisiún na Meán had opted for a principles-based approach, much will depend on the regulator being proactive in ensuring companies meet their obligations.
'Its Commissioners will need to ensure that they have the people, the expertise, the finances that they need and they will then need to have the will to follow up with the companies,' she said.
'We believe that it is extremely urgent that platforms are scrutinised for compliance and taken to task if they do not comply.
'The real urgency with these regulations is that every day, every hour that the appropriate safeguards are missing is an hour, a day that children active on these platforms are at risk of harm from all the issues that the Code is meant to protect them from. That's the whole point of the legislation.
'It's not a game between the regulator and any or all of the platforms. It's a real threat to children when these systems are not in place.'


Extra.ie
Vine comeback? Elon Musk teases return with a 'desecrating' twist
Elon Musk has announced that he is planning to bring back the iconic social media platform Vine – but with a not so iconic twist. Vine, which was essentially the 2010's version of TikTok, was acquired by Twitter in 2012 and had its plug pulled in 2017. Vine launched an iOS app in 2013 and quickly followed it with a Windows and Android version. The concept was that users could make quick, six-second videos that could easily be shared on other social media platforms. Musk's announcement was greeted by responses like 'Why does he ruin everything he touches?' Pic: Brendan Smialowski/AFP via Getty Images The app itself was not only used to upload content but could also be used to browse uploaded videos and find other creators. It set the tone for the likes of Instagram Reels and TikTok. Its slow shut down began in 2016 when Twitter took down the mobile app and disabled uploads to the platform. Despite it having over 200 million active users at one stage, Twitter did not know how to effectively monetize the popular app and couldn't pay content creators to stick around. Musk has involved AI in a lot of his recent ventures. Pic:Existing content was still viewable for a few more months but on January 17, 2017, it officially no longer existed. However, it now looks like Vine could be brought back from the dead but not in a way that many of the nostalgic lovers of the app will want. Elon Musk tweeted: 'We're bringing back Vine, but in AI form.' We're bringing back Vine, but in AI form — Elon Musk (@elonmusk) July 24, 2025 The owner of X, formerly Twitter, didn't give any further explanation of what that could mean or what it could look like. X users were quick to share how they feel about an AI version of Vine and – as usual – they made their feelings about Elon abundantly clear. One user wrote: 'Why does he ruin everything he touches?' Another added: 'Stop forcing AI to our faces.' A third shared: 'He just has to ruin everything omfg.' Yet another agreed: 'He better not desecrate Vine like this… A whole app for 6 second AI slop?' Considering how much Elon has tried to make his AI assistant 'Grok' work on X since he bought Twitter, it comes as no surprise that he would try to dedicate a whole other app to glamourising AI. Thoughts? Feelings? Opinions?