14-05-2025
Internet can't be regulated like TV. Look at how UK, Australia are doing it
At the heart of the debate is a long-standing demand to transpose TV-style regulation to digital platforms, rooted in the perception of an uneven playing field. At the recently concluded World Audio-Visual and Entertainment Summit, the chairperson of the Telecom Regulatory Authority of India also said there is a regulatory imbalance between traditional TV and digital services.
Digital creators and social media companies are increasingly scrutinised, shadow-banned, or dragged to court for their content. Courts have evolved principles to guide the interpretation of reasonable restrictions on speech on the grounds of decency and morality. In 2014, the apex court held in Aveek Sarkar that a naked photograph of Boris Becker with his fiancée, published in a magazine, was not obscene because it would not incite lustful thoughts in the average person in contemporary society. Yet, in 2025, every other movie or social media post stands accused of hurting morality and decency.
Late last month, a Supreme Court bench led by incoming Chief Justice BR Gavai asked the Union government to 'consider appropriate legislation' to curb obscene content on digital streaming platforms and social media. The petition before the Court felt familiar: Web series are polluting our morals, and social media is degrading our public square.
Yet the central government did not simply extend TV-style content strictures to digital when it framed rules for streaming apps in 2021, because the two media operate under fundamentally different conditions.
In India, TV viewing is typically a household activity, whereas individuals consume social media and streaming content privately on their phones. Everyone receives the same pre-packaged broadcast content from TV, while digital users select material from personalised feeds. Consequently, standards for obscenity and decency on TV are stricter, as they must account for a wide, unsupervised audience. Digital services, by contrast, offer tools such as age gating and parental controls that minimise the risk of unintended exposure to inappropriate content.
Most countries recognise these inherent differences in public versus private consumption and the technological controls that mediate audiences' exposure, and therefore adopt separate regulatory standards.
Drawing the right line
State enforcement should reflect varying degrees of harm that stem from speech, just as content standards vary according to intended purpose and form factor. Legal scholar Evelyn Douek advocates a shift from absolutism and blanket prohibitions to calibrated, proportionate obligations that match the gravity of the risk posed by such content. Global best practice shows how.
The United Kingdom's Online Safety Act, now in force, obliges services to assess and mitigate 'priority illegal content' — terrorism, child-sexual-abuse imagery, violent threats — and provides the media regulator, Ofcom, with the power to impose fines of up to £18 million and, as a last resort, to block non-compliant sites.
Australia's Online Safety Act empowers the eSafety Commissioner to issue removal notices for child-sexual-abuse material, non-consensual intimate imagery, or extremist material, and to levy hefty penalties. Canada's proposed Bill C-63 would create a Digital Safety Commission with a regulatory remit extending to the sexual exploitation of children, non-consensual intimate images, self-harm content, hate, and extremism.
None of these regimes treats generic 'offence' as an equal evil; they target speech that demonstrably harms democratic life. India, too, places higher obligations on social-media services to proactively monitor and take down child-sexual-abuse imagery and terrorist content.
Institutional resilience
Instead of spending precious and limited state capacity on policing obscene content, India should invest in media literacy and public-service broadcasting to build a mature society that is resilient to the effects of harmful speech. And this is exactly what other democracies are doing.
The European Union emphasises the importance of media literacy for navigating the digital media environment and spotting disinformation. The Audiovisual Media Services Directive (AVMSD) requires Member States to promote media literacy and to report on their efforts every three years. Canada's Digital Citizen Initiative (DCI) funds projects aimed at improving civic, news, and digital media literacy. India's National Education Policy likewise encourages critical thinking and problem-solving, alongside social, ethical, and emotional capacities. But who will bell the cat?
Globally, hybrid oversight models are taking root. The Global Internet Forum to Counter Terrorism coordinates industry databases of extremist content, supervised by public-interest boards. India's Information Technology Act already allows for voluntary codes backed by statutory orders. The primary regulatory goal should therefore be to ensure that the loudest microphones cannot freely amplify the most harmful forms of speech: hate speech, child sexual-abuse material, terror content, and so on. Civil-society organisations must also be more involved in flagging such material.
Obscenity may offend but hate speech can kill. Enforcing existing legal provisions with the seriousness they deserve, and ensuring a more inclusive, proportionate governance process, would do far more to preserve the Republic's moral core than yet another crusade against lewd punchlines.
The author is a media and telecom expert at Koan Advisory Group. Views are personal.
(Edited by Theres Sudeep)