DAVID THUNDER
The UK Online Safety Act was approved by the British parliament with broad cross-party support and became law on 26th October 2023. It was intended to “[make] the use of internet services…safer for individuals in the United Kingdom.” The Act purported to achieve this by “[imposing] duties…[requiring] providers of services regulated by this Act to identify, mitigate and manage the risks of harm” from “(i) illegal content and activity, and (ii) content and activity that is harmful to children.” The main enforcer of these duties is OFCOM (the Office of Communications), which regulates communications services in the UK.
Yet laws should not be judged on the lofty intentions of their authors alone, but on the way they are liable to be interpreted and applied by human beings acting under realistic constraints and incentives. The 300-page Online Safety Act, intended to make the internet a safer place for children, is certainly not as innocent and “safe” as it sounds. UK civil liberties group Big Brother Watch believes the Act as it stands “will set free expression and privacy back decades in this country.” Reform UK has promised to do all in its power to scrap the Act completely.
A careful reading of the Online Safety Act reveals that, in spite of significant improvements secured before its passage, it is anything but neutral in its impact on freedom of speech. Most notably, it radically alters the constraints and incentives under which social media companies operate, in particular their content moderation policies, by shifting the onus for ensuring that content produced and disseminated on a service is legally compliant away from users themselves and onto the service provider. These changes may well bring some modest benefits for child protection. But those benefits are limited, and they are secured at an unacceptably high cost to freedom of speech.
One of the Act’s main mechanisms for shielding children from inappropriate content, age verification, adds another layer of intrusive bureaucracy to the internet while offering limited efficacy, given that many people under the age of 18 are perfectly capable of bypassing age verification requirements: they can simply use a VPN service to make it appear that they are accessing the internet from outside the jurisdiction of the United Kingdom. Indeed, when the age verification requirements took effect in July 2025, VPN signups in the UK reportedly surged by 1000-1400%.
While the intention of protecting children from harm is obviously laudable, the British government chose to pursue this aspiration at the cost of creating a regulatory environment distinctly unfriendly to freedom of expression. Unlike the traditional machinery of censorship, in which the government acted directly upon citizens, the UK’s Online Safety Act saves the government the trouble of censoring citizens directly by imposing somewhat vague obligations upon service providers to “mitigate risk” by flagging and removing illegal content and content deemed harmful to minors.
Now, on its face, it might seem perfectly reasonable to require a social media company to mitigate the risks of illegal content and of content that is potentially harmful to underage users. But several features of this Act, and of the monitoring mechanisms contained within it, will make people in the UK who post lawful content in good faith more vulnerable than ever to arbitrary censorship.
To begin with, the idea of “sufficient” compliance with the Act, most notably with the obligation to mitigate risks of exposure to illegal content through appropriate moderation policies, is hard to define with any precision, and its operational meaning will ultimately be at the discretion of OFCOM. This opens up the prospect of a worrying level of discretionary power on the part of government agency officials over the limits of speech across the entire UK digital public sphere – exactly the same problem we see in the EU’s Digital Services Act, which imposes similarly vague “due diligence” duties on “very large online platforms.”
This broad discretionary power is even more worrying given that the penalties for non-compliance with the Act are extremely steep – up to 10% of a company’s global turnover. In the absence of a clear idea of what “sufficient” compliance might entail in practice, or of how OFCOM might interpret it, the logical course for a social media company that wants to protect its “bottom line” is to err on the side of taking down content whenever there is the slightest doubt about its legality.
The net effect of this incentive to err on the side of suppressing potentially illegal content is that a great deal of perfectly legitimate and lawful content will be swallowed up by the censorship machine. Certain categories of illegal content, such as “terrorism” content, child sexual exploitation, certain types of violent content, and criminal “incitement to hatred” offences, are deemed “priority” illegal content, and service providers are therefore required to take steps to reduce exposure to them pre-emptively (say, by suppressing a post’s visibility for its intended audience) rather than reactively (say, in response to a complaint or allegation from a user).
Because this must be done at scale, and a finding of non-compliance would be extremely costly to the company, it is inevitable that AI-driven censorship algorithms will be used to shadow-ban or suppress content deemed suspect or “risky.” The problem is that AI-based models analyzing massive amounts of data, particularly if trained to err on the side of intervention in order to avoid the risk of non-compliance and its associated penalties, will cast the net wide and suppress content that is lawful and reasonable merely because it contains certain “red flag” language patterns or keywords.
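To make the incentive concrete, here is a deliberately crude sketch of such a filter, offered purely as a hypothetical illustration in Python; the keywords, weights, and threshold are invented for this example and come neither from the Act nor from any real platform’s system. A filter that is penalized for missing “priority” content but never for over-removal will be tuned so that lawful journalism trips the same wires as illegal material:

# Hypothetical illustration only: a crude "risk score" moderation filter.
# The keywords, weights, and threshold are invented; they do not reflect
# the Act's text or any real platform's actual system.
RED_FLAG_TERMS = {"terrorist": 0.5, "grooming": 0.5, "hatred": 0.4, "attack": 0.3}

# A platform facing fines of up to 10% of global turnover has every incentive
# to set this threshold low, i.e. to err on the side of removal.
RISK_THRESHOLD = 0.5

def risk_score(post: str) -> float:
    """Sum the weights of any 'red flag' terms appearing in the post."""
    text = post.lower()
    return sum(weight for term, weight in RED_FLAG_TERMS.items() if term in text)

def moderate(post: str) -> str:
    """Suppress anything whose score clears the (deliberately low) threshold."""
    return "suppressed" if risk_score(post) >= RISK_THRESHOLD else "published"

# Lawful commentary is flagged alongside genuinely illegal content:
print(moderate("Report: how grooming gangs were ignored by local authorities"))  # suppressed
print(moderate("Essay on the mindset behind terrorist recruitment propaganda"))  # suppressed

Real moderation systems rely on statistical classifiers rather than keyword lists, but the incentive structure is the same: when missed illegal content risks a massive fine and wrongful removal costs the company nothing, the threshold drifts toward over-suppression.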
Lest all of this sound like the speculative musings of a philosopher, it is worth quoting from an article by Chris Best, the CEO and co-founder of blogging platform Substack, one of the few platforms that truly championed free speech during the pandemic, on how the UK Online Safety Act makes it harder than ever for a business like his to live up to its commitment to create an environment supportive of free speech:
In a climate of genuine anxiety about children’s exposure to harmful content, the Online Safety Act can sound like a careful, commonsense response. But what I’ve learned is that in practice, it pushes toward something much darker: a system of mass political censorship unlike anywhere else in the western world. What does it actually mean to “comply” with the Online Safety Act? It does not mean hiring a few extra moderators or adding a warning label. It means platforms must build systems that continuously classify and censor speech at scale, deciding—often in advance of any complaint—what a regulator might deem unsuitable for children. Armies of human moderators or AI must be employed to scan essays, journalism, satires, photography, and every type of comment and discussion thread for potential triggers. Notably, these systems are not only seeking out illegal materials; they are trying to predict regulatory risk of lawful, mainstream comment in the face of stiff penalties.
To make this a little more concrete, let’s consider at-scale censorship from the Covid era. A large volume of legitimate debate and commentary, including my own, was shut down by “public health” moderation algorithms. For example, when I attempted to upload a blog post to Medium about my experience of being censored for critically discussing controversial issues like vaccination and masking, my post was immediately taken down based on the allegation that it constituted “Covid misinformation.” So apparently, even discussing a past episode of censorship on a different platform was identified by the content moderation algorithms as a “misinformation” offence.
Similarly, when I documented spectacular cases of Big Pharma fraud settlements on LinkedIn, I had my LinkedIn account suspended based on “public health misinformation,” even though what I stated was indisputably true and on public record.
Now, imagine that someone writes a hard-hitting social media post on the problem of child grooming gangs, or a bit of harmless satire with some sexual innuendo attached to it, or a critical discussion of the mindset of this or that terrorist movement, or a candid report of the sentiments of a small town overwhelmed by immigration: what sort of automated content moderation policy, designed to minimize exposure to illegal child sexual exploitation content, terrorism, or criminal incitement to hatred toward racial minorities, could reliably distinguish legitimate commentary from illegal content?
Surely there is a high likelihood that a significant number of lawful posts on these topics would be taken down or shadow-banned by an AI-driven content censorship machine, much as the slightest criticism of vaccination or mask policy triggered censorship during the Covid era? Why should we believe that the oversensitivity of the Covid censorship machine, which was highlighted as problematic even by one of its own architects, Meta (formerly Facebook) CEO Mark Zuckerberg, will not be replicated by companies attempting to comply with the UK Online Safety Act?
The Freedom Blog offers a thoughtful voice in defence of freedom at a time when the pillars of a free society are coming under attack across the West from our very own institutions and governments.
If you appreciate my blog posts and the careful work that goes into them, please consider supporting my work with a paid subscription, by clicking here.
My academic profile and publications are listed at my website, davidthunder.com.
Click here to download the preface and introduction to my book, The Polycentric Republic, for free. Click here to purchase the book.