
Tech minister Kyle vows action on children's 'compulsive' use of social media
Peter Kyle said he wanted to tackle children's 'compulsive behaviour' online, and ministers are reportedly considering a two-hour limit per platform, with curfews also under discussion.
The Cabinet minister said he would be making an announcement about his plans for under-16s 'in the near future'.
He told Sky News: 'I am looking very carefully about the overall time kids spend on these apps.
'I think some parents feel a bit disempowered about how to actually make their kids healthier online.
'I think some kids feel that sometimes there is so much compulsive behaviour with interaction with the apps they need some help just to take control of their online lives and those are things I'm looking at really carefully.'
Sky reported that a two-hour cap per platform is being considered, while night-time or school-time curfews have also been discussed.
Mr Kyle said: 'We talk a lot about a healthy childhood offline. We need to do the same online.
'I think sleep is very important, to be able to focus on studying is very important.'
He said he wanted to 'tip the balance' in favour of parents so they were 'not always being the ones who are just ripping phones out of the kids' hands'.
Mr Kyle also said it was 'total madness' that some adults were able to use apps or gaming platforms to contact children online.
He said 'many of the apps or the companies have taken action to restrict contacts that adults, particularly strangers, have with children, but we need to go further'.
'At the moment, I think the balance is tipped slightly in the wrong direction.
'Parents don't feel they have the skills, the tools or the ability to really have a grip on the childhood experience online, how much time, what they're seeing, they don't feel that kids are protected from unhealthy activity or content when they are online.'
🔒 Children in the UK will lead safer online lives as we've finalised safety measures for sites and apps to introduce from July.
Tech firms must act to prevent children from seeing harmful content, and meet their duties under the Online Safety Act.
— Ofcom (@Ofcom) April 24, 2025
In a separate interview with parenting site Mumsnet, Mr Kyle said he was 'deeply concerned' about addictive apps being used by children.
Speaking to Mumsnet founder Justine Roberts on Monday, the Technology Secretary said he would be 'nailing down harder on age verification'.
He said: 'I think we can have a national conversation about what healthy childhood looks like online.
'We do it offline all the time. Parents set curfews and diet and exercise as part of a language and a vocabulary within families.
'We haven't had that national debate about what health looks like and a healthy childhood looks like online yet.'
Schools in England were given non-statutory Government guidance in February last year, intended to stop the use of phones during the school day.
But the Conservatives have been calling on the Labour Government to bring in a statutory ban on smartphones in schools.
Mr Kyle said: 'Smartphones should not be used routinely in schools.
'Now, there might be some classes where they are brought in because of a specific purpose in the class, but that has to be determined and it should be the exception not the norm.'
He added: 'If we need to nail down hard on it, we will nail down hard on it.
'But please think very carefully about asking politicians to pass a law which criminalises by definition.
'Because if you pass a law that doesn't criminalise, it's not a law that means anything.'
A series of already-announced measures to protect children will come into effect from Friday.
The codes of practice set out by Ofcom include a requirement that firms configure any algorithms used to recommend content on their platforms to filter harmful material out of children's feeds.
In addition, the riskiest platforms, such as those hosting pornography, must have effective age checks to identify which users are children.
The checks could be done using facial age estimation technology, photo-ID verification or a credit card check.

Related Articles


Reuters
UK's online safety law is putting free speech at risk, X says
LONDON, Aug 1 (Reuters) - Britain's online safety law risks suppressing free speech due to its heavy-handed enforcement, social media site X said on Friday, adding that significant changes were needed.
The Online Safety Act, which is being rolled out this year, sets tough new requirements on platforms such as Facebook, YouTube, TikTok and X, as well as sites hosting pornography, to protect children and remove illegal content.
But it has attracted criticism from politicians, free-speech campaigners and content creators, who have complained that the rules had been implemented too broadly, resulting in the censorship of legal content.
Users have complained about age checks that require personal data to be uploaded to access sites that show pornography, and more than 468,000 people have signed an online petition calling for the act to be repealed.
The government said on Monday it had no plans to do so and it was working with regulator Ofcom to implement the act as quickly as possible. Technology Secretary Peter Kyle said on Tuesday that those who wanted to overturn it were "on the side of predators".
Elon Musk's X, which has implemented age verification, said the law's laudable intentions were at risk of being overshadowed by the breadth of its regulatory reach.
"When lawmakers approved these measures, they made a conscientious decision to increase censorship in the name of 'online safety'," it said in a statement. "It is fair to ask if UK citizens were equally aware of the trade-off being made."
X said the timetable for meeting mandatory measures had been unnecessarily tight, and despite being in compliance, platforms still faced threats of enforcement and fines, encouraging over-censorship.
It said a balanced approach was the only way to protect liberty, encourage innovation and safeguard children. "It's safe to say that significant changes must take place to achieve these objectives in the UK," it said.
A UK government spokesperson said it is "demonstrably false" that the Online Safety Act compromises free speech. "As well as legal duties to keep children safe, the very same law places clear and unequivocal duties on platforms to protect freedom of expression," the spokesperson said.
Ofcom said on Thursday it had launched investigations into the compliance of four companies, which collectively run 34 pornography sites.


Times
Online Safety Act threatens free speech, says Elon Musk's X
Elon Musk's X platform has claimed the Online Safety Act, the 'heavy-handed' regulator Ofcom, and a planned police monitoring unit are harming free speech in the UK.
The company published a post on X titled 'What Happens When Oversight Becomes Overreach' criticising what it saw as a triple-pronged attack on free expression. The post said the 'act's laudable intentions are at risk of being overshadowed by the breadth of its regulatory reach. Without a more balanced, collaborative approach, free speech will suffer.'
Ofcom was taking an 'aggressive approach' to enforcement, X said, at the same time as publishing plans to force companies to take down hate speech proactively, which it called a 'double compliance' burden.
A national police unit proposed by the Home Office to monitor social media for signs of unrest 'has set off alarm bells for free speech advocates who characterise it as excessive and potentially restrictive', the company claimed.
A political row has broken out over the act, which introduced measures to protect children last week. A video of a speech in parliament by the shadow Home Office minister, Katie Lam, about sexual crimes committed by grooming gangs was restricted on X after being flagged as 'harmful content'.
Nigel Farage and his Reform UK party have painted it as 'dystopian' legislation and vowed to repeal the laws. Peter Kyle, the technology secretary, said that Farage was siding with predators like Jimmy Savile.
Kyle and Ofcom have also come under pressure from US Republicans this week, who have been in the UK to express their concerns about the act's impact on free speech. Jim Jordan, the chairman of the House Judiciary Committee, has called the act 'the UK online censorship law'. He also published communications from the UK Department for Science, Innovation and Technology to social media platforms during the Southport riots last year that expressed concerns about the use of the expression 'two-tier policing'.
X said that the act 'is at risk of seriously infringing on the public's right to free expression' and parliament 'made a conscientious decision to increase censorship in the name of online safety'. It claimed the act's measures 'prevent adults from encountering illegal content and steps to ensure age verification that limit adults' anonymity online'. Ofcom denies this.
In the wake of the Southport riots, Ofcom proposed measures that would require social media platforms to remove hate speech from feeds. Platforms that are at high or medium risk of carrying hate speech would take it out of algorithmic feeds if it is potentially illegal. X called this a 'double compliance' burden on top of the act. Ofcom admitted the proposal 'has the potential to interfere with users' rights to freedom of expression and association.'
Diana Johnson, the Home Office minister, last week proposed a national police unit that would monitor social media for signs of anti-migrant disorder. X said: 'While positioned as a safety measure, it clearly goes far beyond that intent.'
The company said 'a balanced approach is the only way to protect individual liberties, encourage innovation and safeguard children'.
While the X post was not attributed to Musk, he retweeted an answer from his Grok chatbot that said: 'Evidence shows Labour has suppressed aspects of free speech via the Online Safety Act's content monitoring.'
Ofcom said: 'The new rules require tech firms to tackle criminal content and prevent children from seeing defined types of material that's harmful to them. There is no requirement on them to restrict legal content for adult users. In fact, they must carefully consider how they protect users' rights to freedom of expression while keeping people safe.'
Imran Ahmed, the CEO and founder of the Centre for Countering Digital Hate, said: 'The Online Safety Act is a necessary step toward protecting our children in the digital world. Years of work have gone into crafting a law that addresses the real dangers kids face in online spaces, including exploitation, suicide promotion and self-harm. Those who propose to scrap this vital law must explain why they think these heinous online activities are tolerable.'
A government spokesman said: 'It is demonstrably false that the Online Safety Act compromises free speech. As well as legal duties to keep children safe, the very same law places clear and unequivocal duties on platforms to protect freedom of expression. Failure to meet either obligation can lead to severe penalties, including fines of up to 10% of global revenue or £18 million, whichever is greater.
'The Act is not designed to censor political debate and does not require platforms to age gate any content other than those which present the most serious risks to children such as pornography or suicide and self-harm content.
'Platforms have had several months to prepare for this law. It is a disservice to their users to hide behind deadlines as an excuse for failing to properly implement it.'


The Guardian
Everything the right – and the left – are getting wrong about the Online Safety Act
Last week, the UK's Online Safety Act came into force. It's fair to say it hasn't been smooth sailing. Donald Trump's allies have dubbed it the 'UK's online censorship law', and the technology secretary, Peter Kyle, added fuel to the fire by claiming that Nigel Farage's opposition to the act put him 'on the side' of Jimmy Savile.
Disdain from the right isn't surprising. After all, tech companies will now have to assess the risk their platforms pose of disseminating the kind of racist misinformation that fuelled last year's summer riots. What has particularly struck me, though, is the backlash from progressive quarters. Online outlet Novara Media published an interview claiming the Online Safety Act compromises children's safety. Politics Joe joked that the act involves 'banning Pornhub'. New YouGov polling shows that Labour voters are even less likely to support age verification on porn websites than Conservative or Liberal Democrat voters.
I helped draft Ofcom's regulatory guidance setting out how platforms should comply with the act's requirements on age verification. Because of the scope of the act and the absence of a desire to force tech platforms to adopt specific technologies, this guidance was broad and principles-based – if the regulator prescribed specific measures, it would be accused of authoritarianism. Taking a principles-based approach is more sensible and future-proof, but does allow tech companies to interpret the regulation poorly.
Despite these challenges, I am supportive of the principles of the act. As someone with progressive politics, I have always been deeply concerned about the impact of an unregulated online world. Bad news abounds: X allowing racist misinformation to spread in the name of 'free speech', and children being radicalised or targeted by online sexual extortion. It was clear to me that these regulations would start to move us away from a world in which tech billionaires could dress up self-serving libertarianism as lofty ideals.
Instead, a culture war has erupted that is laden with misunderstanding, with every poor decision made by tech platforms being blamed on regulation. This strikes me as incredibly convenient for tech companies seeking to avoid accountability.
So what does the act actually do? In short, it requires online services to assess the risk of harm – whether illegal content such as child sexual abuse material, or, in the case of services accessed by children, content such as porn or suicide promotion – and implement proportionate systems to reduce those risks.
It's also worth being clear about what isn't new. Tech companies have been moderating speech and taking down content they don't want on their platforms for years. However, they have done so based on opaque internal business priorities, rather than in response to proactive risk assessments.
Let's look at some examples. After the Christchurch terror attack in New Zealand, which was broadcast in a 17-minute Facebook Live post and shared widely by white supremacists, Facebook trained its AI to block violent live streams. More recently, after Trump's election, Meta overhauled its approach to content moderation and removed factchecking in the US, a move which its own oversight board has criticised as being too hasty.
Rather than making decisions to remove content reactively, or in order to appease politicians, tech companies will now need to demonstrate they have taken reasonable steps to prevent this content from appearing in the first place.
The act isn't about 'catching baddies', or taking down specific pieces of content. Where censorship has happened, such as the suppression of pro-Palestine speech, it has been taking place since long before the implementation of the Online Safety Act. Where public interest content is being blocked as a result of the act, we should be interrogating platforms' risk assessments and decision-making processes, rather than repealing the legislation. Ofcom's new transparency powers make this achievable in a way that wasn't possible before.
Yes, there are some flaws with the act, and teething issues will persist. As someone who worked on Ofcom's guidance on age verification, even I am slightly confused by the way Spotify is checking users' ages. The widespread adoption of VPNs to circumvent age checks on porn sites is clearly something to think about carefully. Where should age assurance be implemented in a user journey? And who should be responsible for informing the public that many age assurance technologies delete all of their personal data after their age is confirmed, while some VPN providers sell their information to data brokers? But the response to these issues shouldn't be to repeal the Online Safety Act: it should be for platforms to hone their approach.
There is an argument that the problem ultimately lies with the business models of the tech industry, and that this kind of legislation will never be able to truly tackle that. The academic Shoshana Zuboff calls this 'surveillance capitalism': tech companies get us hooked through addictive design and extract huge amounts of our personal data in order to sell us hyper-targeted ads. The result is a society characterised by atomisation, alienation and the erosion of our attention spans. Because the easiest way to get us hooked is to show us extreme content, children are directed from fitness influencers to content promoting disordered eating. Add to this the fact that platforms are designed to make people expand their networks and spend as much time on them as possible, and you have a recipe for disaster.
Again, it's a worthy critique. But we live in a world where American tech companies hold more power than many nation states – and they have a president in the White House willing to start trade wars to defend their interests. So yes, let's look at drafting regulation that addresses addictive algorithms and support alternative business models for tech platforms, such as data cooperatives. Let's continue to explore how best to provide children with age-appropriate experiences online, and think about how to get age verification right.
But while we're working on that, really serious harms are taking place online. We now have a sophisticated regulatory framework in the UK that forces tech platforms to assess risk and allows the public to have far greater transparency over their decision-making processes. We need critical engagement with the regulation, not cynicism. Let's not throw out the best tools we have.
George Billinge is a former Ofcom policy manager and CEO of tech consultancy Illuminate Tech