The Hidden Mechanisms of Unfreedom: Part III

The new censorship industry


ALEX KLAUSHOFER

We think we know what censorship is.

In one sense, we do. The right to speak, write and think freely is one of the oldest struggles; tyrants of all kinds have sought to suppress criticism and information they find threatening. The battle for freedom of speech was finally won in the West partly due to a recognition that it’s a condition for the emergence of new ideas and knowledge. And – a point recognised less often – the consensus that free speech is a basic right also came about because, as humans, we have a fundamental need to express ourselves.

But the digital age has taken us into new territory. Freedom of expression is under threat like never before as new technology enables the powerful to control the field of public information to an unprecedented degree. Narratives (aka propaganda) can be created and distributed at scale and alternative perspectives suppressed before the average person even gets to hear them.

I said that progress and change weren’t possible without free speech, but that’s not strictly true. A society where ideas and information are tightly controlled can certainly develop – but only in the direction decided by those in charge. When Censorship marries Digital, you’re on the path to technocracy. Think how China has progressed: with a highly surveilled population, it’s now the world’s biggest manufacturer and exporter. Clues to how a Britain-under-censorship might look in future lie in the discussions the censors are keen to suppress.

The Censorship Industrial Complex, as American journalist Matt Taibbi calls it, centres around the twin terms of “misinformation” (inaccurate information) and “disinformation” (information with the deliberate intention to mislead) which have entered public discourse over the past decade. In the West, this trend began with the idea that geopolitical enemies such as Russia or China might use the internet to manipulate the masses. Now the principal targets of the new censors are domestic challenges to the official line on issues such as health and climate.

The new censorship industry arose out of the authorities’ fear of the free expression provided by the internet, combined with the “we know best” attitude of governments, institutions and experts. Their concern is more with facts than opinions: remember New Zealand’s prime minister Jacinda Ardern declaring the government must be the people’s “single source of truth” during Covid?

“The single source of truth”. It’s a phrase that never ceases to chill because of its inherent totalitarianism.

In the course of researching this piece, I went back to some notes I’d been collecting for a while. They contained links to various YouTube videos that might have been helpful. But when I clicked on them, the videos were no longer there.

YouTube is an everyday example of the new censorship in operation, with channels threatened with strikes and videos disappearing overnight. Comments are deleted by the system. Many channels are shadow-banned so that they fail to reach their natural audience. Many of the people in the podcasts I watch have become skilled in navigating this censorious new world, using euphemisms, gestures and even pieces of paper bearing written words to avoid triggering the algorithm.

In the case of my missing videos, I know that one was to do with “BlackRock”, another “D Cummings”, and a third “Apolitical”. Beyond that, I have no idea as to their contents.

That perfectly illustrates the workings of the new censorship: if its architects and their associates get their way, we won’t even know what we don’t know.

The new censorship bodies

The Censorship Industrial Complex has spawned a raft of new organisations with purposes that didn’t exist until recently.

One such is the Center for Countering Digital Hate, which explains its mission as stopping “the spread of online hate and disinformation through research, public campaigns and policy advocacy” thanks to its “deep understanding of the online harm landscape”. It was founded in the UK in 2018 by former Labour adviser Imran Ahmed and incorporated by Morgan McSweeney, who has held a variety of positions connected to the Labour Party and was campaign manager for Keir Starmer’s leadership bid.

I include these details for a reason. In 2019, the CCDH launched a slur campaign against The Canary, a news website aligned with the movement led by Jeremy Corbyn which posed a threat to the Labour establishment. The campaign used repeated claims of antisemitism to drive away the website’s advertisers and break its business model. “The Canary has announced that, thanks to our campaign, its business model ‘no longer works’ & they’re downsizing,” tweeted the group, appealing for donations so that it could “target new sites”.

In 2020, the CCDH turned its focus to “Covid-19 misinformation”, using the same tactic of trying to damage its opponents’ reputation and business. This time, it targeted Substack, suggesting in The Guardian that Substack writers were “profiting from medical misinformation that [could] seriously harm readers”.

Around the same time, it lobbied politicians to impose legal measures to censor dissenters. In written evidence to the Home Affairs Committee, it argued that channels carrying content which contradicted the government narrative needed to be completely shut down. “Simply removing posts is not enough,” it told MPs. “Meaningful action needs to be taken”.

In a report published in 2024, the CCDH identified a new kind of dangerous disinformation. “The New Climate Denial”, its researchers argued, had progressed from denying manmade climate change to “attacks on climate science and scientists” and “rhetoric seeking to undermine confidence in solutions.” Since social media platforms were profiting from “denialist claims” such as “the weather is cold”, it recommended that Google update its policy to ban “content that contradicts the authoritative scientific consensus on the causes, impacts, and solutions to climate change”.

The work of the CCDH illustrates the behind-the-scenes push for censorship that parallels public calls for censorship by figures such as Dale Vince, climate activist and owner of Ecotricity. I’ll deal with the part being played by individuals in the new censorship regime later. First, I want to take a look at how the state is using it.

Thou shalt not discuss the weather – or question Net Zero

The affiliations, targets and tactics of the CCDH take us into the realm of government. As Paul D. Thacker documents in this revealing piece, Ahmed then took the organisation across the Atlantic, where it has been helpful to the US government in promoting the official Covid narrative. Leaked documents also reveal attempted election interference, as the CCDH planned to “kill Musk’s X” to help the Kamala Harris presidential campaign.

In this respect, as Thacker points out, ultimately Ahmed has been “a servant to the power of political parties who deployed him and the CCDH to weaponize the charge of hate speech and misinformation against their enemies”.

How government creates narratives

Who creates the narratives? Whose interests do they serve? Although a full answer to these questions is undoubtedly more complex than “government”, the state’s deployment of narratives is key to understanding the new censorship.

Let’s start with the UK government.

The Counter Disinformation Unit was established in 2019 with a brief to analyse publicly available information to combat foreign interference in European elections. With Covid, its reach expanded to managing the public discussion of government policy: it worked with social media platforms to flag content that went against the government line, down-ranked posts and issued rebuttals. Worse, it subsequently emerged that the unit had been secretly monitoring individuals who were openly critical of government policy.

The individuals monitored by the state included the Oxford epidemiologist Professor Carl Heneghan, children’s campaigner Molly Kingsley and Green Party MP Caroline Lucas.

As Big Brother Watch puts it, the unit that was set up to deal with “disinformation” had in fact been tracking “the lawful, accurate online speech of MPs, academics, journalists, human rights campaigners and the public when they disagree with government policies … speech that was critical of government policies, and even of individual ministers, was being collated and circulated in so-called ‘disinformation reports’, despite the content not fitting remotely within any reasonable definition of disinformation.”

Last year, the parliamentary Culture, Media and Sport Committee called for an independent review of the unit. The cross-party group of MPs were “concerned about the lack of transparency and accountability of the CDU and the appropriateness of its reach”.

What has the government done in response to these criticisms?

It rebranded the unit to stress its focus on national security, renaming it the National Security and Online Information Team. Then it carried on with advancing its censorship plans. Documents obtained by Big Brother Watch say that the Counter-Disinformation Data Platform used by the unit “has the ability to be pivoted to focus on any priority area.”

Pivoting is a thing in the new censorship industry. Those developing the systems to counter disinformation in one area boast how their approach or tool can “pivot” to another. That’s very appealing to their clients, especially when they happen to be governments. It also helps with the PR: justify the introduction of a new measure on the basis of the current bogeyman and you get a censorship mechanism that can be turned to any purpose in future.

The CDDP’s privacy notice effectively warns us that the government envisages doing anything it likes with our personal information. While the platform does not “intend” to collect personal data, it may nonetheless gather content which includes “usernames, social media handles, contact information, personal data embedded within comments or metadata and … special categories of personal data such as political or philosophical opinions”. This information may be shared with other government departments and with external service providers.

A second example illustrates how, by dint of allocating funding, the government can use third parties for censorship purposes. In 2024, the current affairs website UnHerd published a story about what its editor-in-chief Freddie Sayers described as “a worldwide system of censorship that crosses continents”.

UnHerd was at the centre of the story. Staff couldn’t understand why their advertising was doing unusually badly – and then one of their ad companies revealed that the website had been placed on the Global Disinformation Index’s Dynamic Exclusion List, a digital tool which allows advertisers to exclude web pages and domains inconsistent with their brand. The GDI listing meant that UnHerd was only getting 2-6% of the advertising revenue that a publication of its size would normally generate.

The GDI is a company founded in the UK in 2018 with the stated objective of damaging the business model of websites whose content it deems “harmful” or sees as promoting “adversarial” narratives. Between 2019 and 2023 it received almost £2.6 million in funding from the Foreign, Commonwealth and Development Office.

Sayers publicised the issue and received a letter from then Foreign Secretary David Cameron which assured him that “the FCDO has not funded GDI since 2023, and there are no current plans to do so”.

The government’s attempts to get and maintain control of the narrative on potentially any issue are an evolving story and, along with the resulting pushback, a very British one. In accordance with our longstanding tradition of free speech, various parties point out things and make objections. Meanwhile the architects of the new censorship infrastructure say “hmm hmm” and carry on building.

But we can’t lay all the blame at the door of the British government. A new database of US Government spending compiled by digital rights organisation liber-net reveals the enormous resources the US government has been ploughing into attempts to control the narrative around the world. As Rebekah Barnett reports in her Substack Dystopian Down Under, the database reveals it awarded nearly 900 grants related to mis- and disinformation totalling over $1.5 billion. The biggest spenders included the Department of Defense, USAID and the Centers for Disease Control and Prevention, with grants given to projects in places such as Bolivia and Kazakhstan. “There is just about no part of the world that the US Government has not thrown money at to push the fact-checking paradigm amongst journalists and other media professionals, and to ensure the proliferation of fact-checking operations,” says Barnett.

Liber-net director Andrew Lowenthal stresses that the story told by the level and reach of this funding is not so much “US government censors the world” but rather that it’s the lead player in the anti-disinformation field.

Both grant funders and recipients hold a worldview that “people at large are untrustworthy,” he says, “that they aren’t capable of making their own decisions, or of discerning reality. And so an elite group is required to come up with solutions to help people make sense of the world. But instead of a grass-roots approach to dealing with this sense-making issue, they believe there needs to be a top-down approach.”

Liber-net, which describes itself as “a growing initiative concerned about corporate and government censorship and a civil society that now advocates for speech controls under the guise of combatting ‘disinformation’”, is playing the new censors at their own game.

Each disinformation project has been given a red flag rating for its level of insidiousness, with the highest rating of five flags going to projects that “sought to actively remove (or build large-scale systems for removing) content from the internet, or that involved a high level of surveillance on citizens’ speech”.

At least the colours are pretty.

[Image: Matt Taibbi identifies the top 50 censors]

Fact checking the world

Having worked as a journalist, off and on, since 1998, I find the idea that the media should fact check what the public say inexpressibly strange. Journalism (in its uncorrupted form) is about reporting on what people do or, according to a definition from one instructor early in my career, “saying what people say”. Hence all those daft quotes you sometimes find in local news stories; the reporter has simply recorded or written down comments from people in the street. Prescribing and controlling are completely antithetical to journalism’s deeper functions of truth-telling and exposing the workings of power.

The sense of dissonance I felt at the rise of fact checking by mainstream media organisations during 2020 still hasn’t left me. Reuters Fact Check dominated the internet with its constant stream of “corrections” to the public’s many and various responses to Covid and the accompanying restrictions. Channel 4 beefed up its existing FactCheck service to broadcast the correct version of what to think on television.

By 2021 it had almost become funny: whenever I saw “Fact Check” pop up on an issue, I knew that those aspiring to be The Single Source of Truth were spooked by the latest round of speculation by the unruly public.

Like the bodies established to counter mis/disinformation, fact checking organisations are new. According to the Reuters Institute for the Study of Journalism, more than 90% of today’s 113 fact checking groups have been established since 2010, with about 50 launching in the past two years.

The BBC has been at the forefront of the fact checking movement. In March 2020, Marianna Spring became the BBC’s first Disinformation and Social Media Correspondent with a brief to cover “the human cost of online conspiracy theories”. Spring’s first task was to debunk “mistruths about the pandemic”. But the podcast Disaster Trolls treats off-narrative claims about Covid as emblematic of a wider problem: a scary disinformation monster that could overwhelm us all.

From a conventional journalistic point of view, Spring’s approach is hard to fathom, an attempt to “understand” the mind of the “conspiracy theorist” in order to protect the public from things that are “frightening”. Rather than seeking to discover the truth, disinformation journalism makes an emotional appeal, effectively putting the public in the role of children with the reporter acting as carer-gatekeeper.

Building on BBC Reality Check, BBC Verify launched in 2023 with a team of sixty “specialised” journalists briefed to “go beyond conventional newsroom techniques”. As a lifelong BBC viewer, and sometime freelancer until I cancelled my licence in 2021, I find it weird to look through the “fact checked” stories on BBC Verify. Ukraine, Gaza, Trump: on this website the issues of the day come with extra truth.

The BBC’s aspirations don’t stop at checking claims about world events. As UK Column recently reported, BBC Director General Tim Davie, citing the World Economic Forum’s claim that “disinformation is … the biggest short-term risk we face globally”, envisages training every child in the country how to think the right thing – the beginnings of a lifelong guardianship of the hearts and minds of future generations.

Move over, Jesuits – the BBC is here!

Then there’s Full Fact, which describes itself as “the UK’s independent fact checking charity”, home to “a team of independent fact checkers who find, expose and counter the harm … of bad information”. Its chief executive is Chris Morris, formerly the BBC’s “first dedicated fact checker”.

Full Fact’s website emphasises its independence, highlighting the donations it gets from members of the public. But when you look at the organisation’s list of funders, its biggest donors in 2023 were Google and Meta, followed by the usual global philanthropists. In 2016, the year it was founded, Full Fact received £30,000 from the public purse and unstated amounts from organisations including the Open Society Foundations. Since 2019, a contract with Facebook has brought in income according to how many “images, videos and articles” the organisation checks.

A scroll through the topics within Full Fact’s purview – everything from Politics to Environment and Health and much more in between – is both confusing and revealing. Many of the fact checks seem a bit pointless, the kind of information a curmudgeon with a bee in his bonnet might tell you – Cadbury’s creme eggs do say “Easter” on them – or only tangentially useful: Argos isn’t selling smartphones for two quid; the chair of the Ofsted board is not the “head of the UK education system”. Many posts begin with an irritable No, as in: “No, cinnamon is not ‘the best remedy in the world’ to treat diabetes”. The overall impression is of a Gradgrind crossed with Crabby B from Cider With Rosie, a teacher enraged by the untameable vitality of their pupils: why can’t they get everything right? Why must they insist on speculating, phrasing things in their own way, chatting amongst themselves?

If only people would stick to The Single Source of Truth.

You don’t have to scroll through Full Fact’s website for long to see that a certain set of subjects are of particular concern, with the same statements repeated multiple times (repetition is an old mechanism of propaganda). Bovaer is safe. Covid-19 does exist. Weather modification is definitely not happening. Net Zero is not responsible for high energy bills. These – subjects about which the intransigent population continues to speculate and form its own conclusions – tend to be the issues which, we keep being told, necessitate more taxes and restrictions.

Full Fact also has technocratic solutions to the unruliness of the internet. Its AI team has developed software to help fact checkers around the globe combat “bad information” more effectively. A new AI tool will focus on health misinformation in online videos, ranking them in order of the amount of “harm” they might cause.

The UK government also has plans to use AI to censor at scale: it has awarded a £2.3 million contract to Faculty AI to develop an artificial intelligence tool to trawl social media for “concerning” posts.

The mechanisms and organisations of unfreedom often overlap. Here’s Demos, star of The Hijacking of the think tanks, proposing that journalists should get out there and fact check the community. Recommendation 10 of Driving Disinformation argues: “Journalists should be enabled to play an active role in online community forums such as those provided by NextDoor or Facebook. Journalists could actively fact-check claims and share accurate information to help support a productive democratic culture in these forums.”

The idea that journalists would have the time to become online moderators for external platforms while doing the day job is laughably uninformed. My long break from journalism was mainly because I couldn’t make a living in an industry in structural decline.

(And I still can’t, so thank you, paying subscribers …)

The values of the new censorship

At first sight, it might look as if the new censorship is all about technology. But it also testifies to a profound shift in the values held by both those who seek to control and by sections of the society desiring more control.

This shift is one in which the avoidance of “harm” has somehow replaced the time-honoured notions of truth and self-expression that underpin free speech. Crucially, it involves a change in the balance of power, making a ruling group the arbiters of what can be read and said.

The last meeting of the UK Internet Governance Forum, a UN-organised initiative, provides an insight into the mindset of the new censorship establishment. A panel on “the impact of mis and disinformation on democracy” explored the lack of public trust in government and institutions. Since all the panel members worked in the disinformation business, it wasn’t surprising that they attributed the decline in public trust to disinformation. From that, the solution of “managing” the information made available to the public naturally followed.

“The majority of people no longer believe what they read, see or hear,” said Chris Morris of Full Fact. “If they don’t believe anything they don’t trust anything. And if you have no trust, you have no consent … Misinformation creates mistrust and disinformation encourages doubt and it starts to undermine the credibility of the entire process.”

Henry Parker, Vice President of Corporate Affairs at Logically, a company which uses AI to analyse and predict narrative trends, argued that trust could be rebuilt by making sure people had the correct information. “Civilian government organisations operate in a dynamic environment where public expectations, digital communication, and scrutiny continue to rise. These agencies must protect public trust while handling emerging issues with agility.”

Both men had technocratic solutions to the problem of mistrust. Logically is developing an AI product which can be used by social media platforms to identify “incorrect” information in large numbers of posts or videos. Full Fact, as noted above, is working on an AI tool focused on health misinformation in online videos, ranking them by the amount of “harm” they might cause.

All panel members, including Hannah Perry from Demos, were united in their conviction that a child-like public must be protected from “harm” by an all-knowing establishment. There was no sense of there being different perspectives on a subject, the risk of the “truth” being hijacked by vested interests, or that positive developments often come from “new truths”.

Nor was there any willingness to look at the root causes of public distrust. As this self-described jaded BBC journalist points out in the Spectator: “The BBC seemed to jettison all pretence at balance during the pandemic. Unverified and often misleading claims about the virus, the efficacy of lockdowns, face-coverings and other non-pharmaceutical interventions were broadcast on a daily basis. When vaccines arrived, the BBC refused to countenance even the mildest journalistic curiosity.”

The notion of “harm” at the heart of the new censorship is a broad one which includes, as Global Disinformation Index founder Clare Melford puts it, facts. “Something can be factually accurate but still extremely harmful,” she said in an interview. “[GDI] leads you to a more useful definition of disinformation.”

Harm can also be done via hurt feelings and exposure to ideas that might make people feel “unsafe”. The New Harmfulness is bound up with a culture of safetyism which, as this article points out, actually makes people weaker and less resilient: “Humans are what author and statistician Nassim Nicholas Taleb calls ‘antifragile’. We ‘benefit from shocks; [humans] thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty’.”

There’s been little public debate about a shift of values in which the avoidance of harm supersedes well-established values and practices. The avoidance of harm has become so powerful that it’s now acceptable for an organisation to openly state its intention to damage or destroy businesses.

Check out this explanation of libel by a business law firm:

“Making false written claims that discredit a business is known as libel. This can take the form of spreading misinformation through mediums like newspapers, websites, apps, or social media. The repercussions of libel on a company’s reputation and financial stability are severe, leading to reduced profits and long-term damage.

“Businesses must be vigilant against the menace of libel, particularly through fake and false reviews. Libellous attacks often manifest as deliberate misinformation, where individuals create fictional narratives or embellish negative experiences with the explicit intent to harm a company’s reputation.”

This is exactly the approach used by the CCDH and GDI to target their victims. But in the misinformation business the euphemism for damaging your opponent’s business is “disrupt”.

Again, it’s tempting to put all the blame for the new censorship on powerful actors such as Big Tech and government. But, as the likes of Dale Vince and Richard Coles illustrate, a considerable section of British society has embraced the new censoriousness. The Revd Coles, the pop star turned broadcasting vicar born and bred in a liberal democracy, seems to have no sense that “truth” might be contested, or be subject to change, or be coopted by vested interests. Apparently there’s just a single, monolithic version of The Truth.

Let’s censor each other!

This mechanism is a new technological take on an age-old human behaviour.

Siblings tell on each other to their parents. Snitching to teachers at school, where some of the locus of authority has shifted from adults to the peer group, is viewed less favourably. Good citizens inform the relevant bodies when wrongdoing is taking place. The impulse to tell someone else about a problem, especially if they’re more powerful or better resourced than you, is strong.

This natural impulse can be exploited by the power-hungry to maintain control over entire populations. In the communist German Democratic Republic, not that long ago, as many as one in fifty adults were employed as informants. “By 1989 the Stasi relied on 500,000 to 2,000,000 collaborators as well as 100,000 regular employees, and it maintained files on approximately 6,000,000 East German citizens—more than one-third of the population,” notes the Britannica.

Albania, the most extreme and isolated of all the communist countries, was too broke to have a surveillance industry with employees. But people snooped for free: at the height of the Hoxha regime in the Albanian capital of Tirana, as many as one in three adults worked for the secret police on a voluntary basis, brother informing on sister, wife on husband, neighbour on neighbour.

Has Western society learnt from the horrors of the last century?

“My reporting fingers are itching,” someone said in a Zoom call in 2020. She’d seen a shop selling some wares deemed “non-essential” under new government regulations and was dying to dob the business owners in. Politicians such as Kit Malthouse and Ed Miliband encouraged people to report each other if they saw them breaking Covid rules.

Since then, TfL adverts on the London Underground have been inviting passengers to report anything they suspect to be sexual harassment, including staring at someone else.

“See it or experience it on public transport? Text what, where and when to 61016. In an emergency always dial 999. Aware of someone doing this and want to remain anonymous? Call the sexual harassment line on 0800 783 0137.”

Digital technology takes all this to a new level. The Civic Listening Corps is a project created by the Algorithmic Transparency Institute to “build capacity within our communities to establish resilience to the impact of misinformation”.

“Civic listening” is the latest jargon for snitching on your fellow citizens, reporting on what the likes of you and me say on Facebook, Telegram and WhatsApp, including closed messaging platforms. The new tech tool creates a direct link between informers and the authorities so that, as ATI director Cameron Hickey explains: “the next time you’re scrolling through TikTok or scrolling through Instagram or browsing a private group on WhatsApp and you see something that you think is problematic, you can click the share button or the forward button and send it to this tipline”.

This mechanism of the new censorship industry depends on everyday decisions taken by ordinary people in the moment.

And finally

I try to end most of my Substacks on an up note, with a hint of a better future or a suggestion about what we can do to counter the dark stuff.

This time, it’s a simple request. Most people are completely unaware of the new censorship infrastructure being built behind the scenes, or of what it could mean for us all.

So please share this information, in whichever way suits you – conversation, reposting – as widely as you can.

And because Britain’s crisis of free speech is insufficiently recognised, I’m going to do a short Substack on overt censorship in Britain before moving on to The Mechanisms of Unfreedom IV.


This article (The hidden mechanisms of unfreedom: part III) was created and published by Alex Klaushofer and is republished here under “Fair Use”
