
UK's online safety law is putting free speech at risk, X says
Britain's online safety law risks suppressing free speech due to its heavy-handed enforcement, social media site X said on Friday, adding that significant changes were needed.

The Online Safety Act, which is being rolled out this year, sets tough new requirements on platforms such as Facebook, YouTube, TikTok and X, as well as sites hosting pornography, to protect children and remove illegal content. But it has attracted criticism from politicians, free-speech campaigners and content creators, who have complained that the rules had been implemented too broadly, resulting in the censorship of legal content.

Users have complained about age checks that require personal data to be uploaded to access sites that show pornography, and more than 468,000 people have signed an online petition calling for the act to be repealed. The government said on Monday it had no plans to do so and it was working with regulator Ofcom to implement the act as quickly as possible. Technology Secretary Peter Kyle said on Tuesday that those who wanted to overturn it were "on the side of predators".

Elon Musk's X, which has implemented age verification, said the law's laudable intentions were at risk of being overshadowed by the breadth of its regulatory reach. "When lawmakers approved these measures, they made a conscientious decision to increase censorship in the name of 'online safety'," it said in a statement. "It is fair to ask if UK citizens were equally aware of the trade-off being made."

X said the timetable for meeting mandatory measures had been unnecessarily tight, and despite being in compliance, platforms still faced threats of enforcement and fines, encouraging over-censorship. It said a balanced approach was the only way to protect liberty, encourage innovation and safeguard children. "It's safe to say that significant changes must take place to achieve these objectives in the UK," it said.

A UK government spokesperson said it is "demonstrably false" that the Online Safety Act compromises free speech. "As well as legal duties to keep children safe, the very same law places clear and unequivocal duties on platforms to protect freedom of expression," the spokesperson said.

Ofcom said on Thursday it had launched investigations into the compliance of four companies, which collectively run 34 pornography sites.

Related Articles


India Today
2 hours ago
Federal court upholds SEC 'gag rule' in 3-0 ruling over free speech objections
A federal appeals court on Wednesday upheld the US Securities and Exchange Commission's so-called "gag rule", rejecting a claim that it illegally silences defendants who want to criticise the regulator after settling civil enforcement cases.

In a 3-0 decision, the 9th US Circuit Court of Appeals said the rule was not unconstitutional on its face, but could violate the First Amendment depending on how it is applied. The rule, reflecting SEC policy dating to 1972, often requires settling defendants to say at least that they neither admit nor deny the regulator's allegations.

Twelve petitioners had been appealing the SEC's decision in January 2024 not to amend the rule, including eight people whose SEC settlements triggered gag orders. One petitioner, former Xerox chief financial officer Barry Romeril, took a similar case to the US Supreme Court in 2022 in an appeal backed by billionaire and longtime SEC critic Elon Musk, but that court refused to consider it.

In Wednesday's decision, Circuit Judge Daniel Bress said that while some defendants find the rule coercive, they remained free not to settle, and instead to speak out against the SEC. He also said the SEC had an interest in deciding how to try its own cases, including by giving defendants different options, knowing that scrapping the rule could lead to fewer settlements.

"Provided that any limitation on speech remains within proper bounds, and given the background ability to waive First Amendment rights at least to some extent, the SEC has an interest in giving defendants the option to agree to a speech restriction as part of a broader settlement agreement," Bress said. He said challenges to applying the rule could still be brought before the SEC brings enforcement cases, while judges consider settlements, or when the SEC reopens settled cases because of alleged violations.

The petitioners included the New Civil Liberties Alliance, which challenges perceived administrative law overreach. Its senior litigation counsel Peggy Little said in a statement the nonprofit was disappointed. "Past practice does not excuse unconstitutional government action," she said.

The SEC had no immediate comment. Commissioner Hester Peirce dissented from the regulator's decision not to amend the rule. She found "scant factual basis" for it, and said prohibiting denials of wrongdoing "prevents the American public from ever hearing criticisms that might otherwise be lodged against the government, let alone assessing their credibility."


Time of India
2 hours ago
Elon Musk's Grok AI Garners Criticism for Auto-Generating Explicit Taylor Swift Deepfakes
Grok AI accused of creating explicit Taylor Swift deepfakes (Image via X)

The AI chatbot Grok is back in the headlines, and for all the wrong reasons. Technology is great until it tramples on someone's privacy, and when that someone is pop superstar Taylor Swift, the waters get even deeper for Elon Musk's xAI. Grok recently rolled out a "spicy" mode capable of generating NSFW content, making it a cakewalk for users to produce strikingly realistic explicit images. The scariest part is that the new tool doesn't even always require explicit user instructions.

Grok's latest feature accidentally violates Taylor Swift's privacy

The Verge revealed that Taylor Swift stands at the vulnerable end of the new AI features. Reporter Jess Weatherbed asked Grok's spicy video editor to depict "Taylor Swift celebrating Coachella with the boys". What she got instead was enough to leave the Grammy-winning singer sleepless. The feature generated 30 images of Swift in revealing clothes. That's not all: the spicy mode automatically produced a video of the singer taking off her dress and dancing suggestively in front of a fake, robotic crowd. Since it did this without any explicit nudity request, xAI's leadership should address the flaw before it causes further outrage on social media. The content filter is clearly malfunctioning, and when someone's privacy is involved, that counts as a major problem for Musk's business. This is also not the first time the platform has run into such an 18+ botch.

Taylor Swift is no stranger to ending up on the wrong side of technology

Last year, explicit deepfakes of Taylor Swift went viral on X, and nobody knew how. Fans went berserk, questioning the platform's safety measures. After a huge backlash, the X safety team promised to look into the matter urgently and never let it happen again. "Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X, and we have a zero-tolerance policy towards such content," officials stated, promising to take down the offending posts as well. Now that the company is again under fire, people are not wrong to demand stronger safeguards for its tools.

Elon Musk has claimed that Grok Imagine's usage is growing like wildfire, saying 14 million images had already been generated as of August 4 and that the number skyrocketed to 20 million by August 5. The stats point to the feature's rapid adoption, but until its safety controls are fixed, the growth counts for little.


Time of India
2 hours ago
Australia regulator says YouTube, others 'turning a blind eye' to child abuse material
Australia's internet watchdog has said the world's biggest social media firms are still "turning a blind eye" to online child sex abuse material on their platforms, and said YouTube in particular had been unresponsive to its enquiries.

In a report released on Wednesday, the eSafety Commissioner said YouTube, along with Apple, failed to track the number of user reports they received of child sex abuse material appearing on their platforms and also could not say how long it took them to respond to such reports. The Australian government decided last week to include YouTube in its world-first social media ban for teenagers, following eSafety's advice to overturn its planned exemption for the Alphabet-owned video-sharing site.

"When left to their own devices, these companies aren't prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services," eSafety Commissioner Julie Inman Grant said in a statement. "No other consumer-facing industry would be given the licence to operate by enabling such heinous crimes against children on their premises, or services."

A Google spokesperson said "eSafety's comments are rooted in reporting metrics, not online safety performance", adding that YouTube's systems proactively removed over 99% of all abuse content before being flagged or viewed. "Our focus remains on outcomes and detecting and removing (child sexual exploitation and abuse) on YouTube," the spokesperson said in a statement. Meta, owner of Facebook, Instagram and Threads, three of the biggest platforms with more than 3 billion users worldwide, has said it prohibits graphic videos.

The eSafety Commissioner, an office set up to protect internet users, has mandated Apple, Discord, Google, Meta, Microsoft, Skype, Snap and WhatsApp to report on the measures they take to address child exploitation and abuse material in Australia. The report on their responses so far found a "range of safety deficiencies on their services which increases the risk that child sexual exploitation and abuse material and activity appear on the services".

Safety gaps included failures to detect and prevent livestreaming of the material or block links to known child abuse material, as well as inadequate reporting mechanisms. The report said platforms were also not using "hash-matching" technology on all parts of their services to identify images of child sexual abuse by checking them against a database. Google has said before that its anti-abuse measures include hash-matching technology and artificial intelligence.

The Australian regulator said some providers had not made improvements to address these safety gaps despite it putting them on notice in previous years. "In the case of Apple services and Google's YouTube, they didn't even answer our questions about how many user reports they received about child sexual abuse on their services or details of how many trust and safety personnel Apple and Google have on-staff," Inman Grant said.