When the state’s gaze never blinks, innocence becomes a temporary status

CAM WAKEFIELD
Let me take you on a tour of Britain’s future. It’s 2030: there are more surveillance cameras than people, your toaster is reporting your breakfast habits to the Home Office, and police officers are no longer investigating crimes so much as predicting them.
This is Pre-Crime UK, where the weight of the law is used against innocent people whom an algorithm suspects may be about to commit a crime.
With a proposal that would make Orwell blush, the British police are testing a hundred new AI systems to figure out which ones can best guess who’s going to commit a crime.
That’s right: guess. Not catch, not prove. Guess. Based on data, assumptions, and probably your internet search history from 2011.
Behind this algorithmic escapade is Home Secretary Shabana Mahmood, who has apparently spent the last few years reading prison blueprints and dystopian fiction, treating them not as a warning about authoritarian surveillance but as an aspiration.
In a jaw-dropping interview with former Prime Minister and Digital ID peddler Tony Blair, she said, with her whole chest: “When I was in justice, my ultimate vision for that part of the criminal justice system was to achieve, by means of AI and technology, what Jeremy Bentham tried to do with his Panopticon. That is that the eyes of the state can be on you at all times.”
Now, for those not fluent in 18th-century authoritarian architecture, the Panopticon is a prison design where a single guard can watch every inmate, but the inmates never know when they’re being watched. It’s not so much “law and order” as it is “paranoia with plumbing.”
Enter Andy Marsh, the head of the College of Policing and the man now pitching Britain’s very own Minority Report.
According to the Telegraph, he’s proposing a new system that uses predictive analytics to identify and target the top 1,000 most dangerous men in the country. They’re calling it the “V1000 Plan,” which sounds less like a policing strategy and more like a discontinued vacuum cleaner.
“We know the data and case histories tell us that, unfortunately, it’s far from uncommon for these individuals to move from one female victim to another,” said Sir Andy, with the tone of a man about to launch an app.
“So what we want to do is use these predictive tools to take the battle to those individuals…the police are coming after them, and we’re going to lock them up.”
I mean, sure, great headline. Go after predators. But once you start using data models to tell you who might commit a crime, you’re not fighting criminals anymore. You’re fighting probability.
The government, always eager to blow millions on a glorified spreadsheet, is chucking £4 million ($5.39M) at a project to build an “interactive AI-driven map” that will pinpoint where crime might happen. Not where it has happened. Where it might.
It will reportedly predict knife crimes and spot antisocial behavior before it kicks off.
But don’t worry, says the government. This isn’t about watching everyone.
A “source” clarified: “This doesn’t mean watching people who are non-criminals—but she [Mahmood] feels like, if you commit a crime, you sacrifice the right to the kind of liberty the rest of us enjoy.”
That’s not very comforting coming from a government that locks people up over tweets.
Meanwhile, over in Manchester, they’re trying out “AI assistants” for officers dealing with domestic violence.
These robo-cop co-pilots can tell officers what to say, how to file reports, and whether to pursue an order. It’s less “serve and protect” and more “Ask Jeeves.”
“If you were to spend 24 hours on the shoulder of a sergeant currently, you would be disappointed at the amount of time that the sergeant spends checking and not patrolling, leading and protecting.”
That’s probably true. But is the solution really to strap Siri to their epaulettes and hope for the best?
Still, Mahmood remains upbeat: “AI is an incredibly powerful tool that can and should be used by our police forces,” she told MPs, before adding that it needs to be accurate.
Tell that to Shaun Thompson, an anti-knife-crime campaigner, not a criminal, who found himself on the receiving end of the Metropolitan Police’s all-seeing robo-eye. One minute he’s walking near London Bridge, probably thinking about lunch or how to fix society; the next he’s being yanked aside because the police’s shiny new facial recognition system decided he looked like a wanted man.
He wasn’t. He had done nothing wrong. But the system said otherwise, so naturally, the officers followed orders from their algorithm overlord and detained him.
Thompson was only released after proving who he was, presumably with some documents and a great deal of disbelief. Later, he summed it up perfectly: he was treated as “guilty until proven innocent.”
Mahmood’s upcoming white paper will apparently include guidelines for AI usage. I’m sure all those future wrongful arrests will be much more palatable when they come with a printed PDF.
Here’s the actual problem. Once you normalize the idea that police can monitor everyone, predict crimes, and act preemptively, there’s no clean way back. You’ve turned suspicion into policy. You’ve built a justice system on guesswork. And no amount of shiny dashboards or facial recognition cameras is going to fix the rot at the core.
This isn’t about catching criminals. It’s about control. About making everyone feel watched. That was the true intention of the Panopticon. And that isn’t safety; it’s turning the country into one big prison.
This article (Britain’s AI Policing Plan Turns Toward Predictive Surveillance and a Pre-Crime Future) was created and published by Reclaim the Net and is republished here under “Fair Use” with attribution to the author Cam Wakefield.
See Related Article Below
The Police Plan to Roll Out AI in ‘Predictive Analytics’ Should Worry Us All
Every policing area already has an intelligence unit responsible for ‘predictive analytics’

PAUL BIRCH
In light of recent revelations regarding West Midlands Police’s use of artificial intelligence (AI) to fabricate information about Israeli football fans, you would think that the police would be a little hesitant about the wider use of such technology. But you would be wrong.
In a recent interview with the Telegraph, Sir Andy Marsh, the head of the College of Policing, said that police were evaluating up to 100 projects where officers could use AI to help tackle crime. This includes utilising such things as “predictive analytics” to target criminals before they strike, redolent of the 2002 film Minority Report. The aim, according to Home Secretary Shabana Mahmood, is to put the “eyes of the state” on criminals “at all times”. This is to be outlined further in an upcoming white paper on police reform.
The expansion of AI use in British policing is continually being sold as innovation, efficiency and protection. But in reality it marks a decisive step towards a society in which liberty is treated as a risk to be managed. Wrapped in the language of safety and reform, AI represents a quiet but profound transformation of the state’s relationship with its citizens: from upholder of the law to permanent overseer of behaviour.
Every policing area already has an intelligence unit responsible for ‘predictive analytics’. Crimes logged into police indices are scrutinised by analysts, who produce reports and briefings on crime hotspots and the like; appropriate resources can then be directed to a particular location at a particular time to tackle or prevent the crime. AI can never adequately replace a team of trained professionals going through the data. It probably can, however, do the job at a fraction of the cost, which matters more to most senior officers than civil liberties. Not so much Minority Report as Heath Robinson.
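For the curious, here is a minimal sketch, in hypothetical Python with invented data, of what a hotspot briefing often boils down to. No actual force’s system is represented; this is illustration only.

```python
# A minimal sketch of what hotspot "predictive analytics" often amounts to:
# counting past incidents by area and ranking them. The data and names
# are hypothetical, for illustration only.
from collections import Counter

# Hypothetical incidents logged in police indices: (area, offence) pairs
incidents = [
    ("city_centre", "knife_crime"),
    ("city_centre", "antisocial_behaviour"),
    ("riverside", "burglary"),
    ("city_centre", "knife_crime"),
    ("station_quarter", "antisocial_behaviour"),
]

# Count incidents per area -- the core of a hotspot briefing
hotspots = Counter(area for area, _ in incidents)

# Rank the top areas and flag them for patrols
for area, count in hotspots.most_common(3):
    print(f"{area}: {count} logged incidents -> candidate for extra patrols")
```

That is the Heath Robinson reality: a glorified tally, not a crystal ball.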
The core injustice is clear. Policing in a supposedly free society responds to crimes that have already occurred, or prevents them through highly visible uniformed patrols. So-called predictive policing reverses that logic by directing the power of the state at everybody, nearly all of whom will have done nothing illegal, on the basis of statistical guesses about what they might do. This is not a mere technical adjustment to policing, as some would have us believe; it is a wholesale shift to treating everyone as potentially guilty until proven innocent. Mass surveillance (for that is what it is) will be imposed without charge, without trial and without verdict, because there is no formal accusation to answer.
Defenders of this approach pretend that there is no threat to individual liberty. That is patently false. Liberty is eroded wherever the state inserts itself permanently into a person’s life. Persistent scrutiny is a form of soft coercion: knowing that your movements, associations and behaviour are being logged and evaluated by the state changes how you act. A society in which citizens must behave as if they are always being watched is not free; it is merely orderly.
Worse still, this system destroys any real degree of accountability. Decisions that once belonged to identifiable officers will be attributed to the system or the programme. When mistakes occur, as they inevitably will, there will be no discerning human judgement to interrogate the system, as operators will almost certainly defer to the machine in the first instance. Power will diffuse upward into institutions and outward into private sector software developers, while the citizen will be left in some form of legal limbo facing an unchallengeable process. An algorithm cannot be cross-examined or shamed.
The claim that these systems are objective is also dangerous. AI will not discover truth; it will go through past policing data, solidify past errors and enforce them with mathematical certainty. Historical mistakes will become future risk indicators.
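A toy simulation, with entirely invented numbers, makes the mechanism concrete: two areas with identical true offence rates, one of which starts with more recorded crime simply because it was historically over-patrolled.

```python
# Toy feedback loop: both areas have the same true offence rate, but
# area_a starts with more *recorded* crime because it was historically
# over-patrolled. All numbers are invented, for illustration only.
true_rate = {"area_a": 100, "area_b": 100}   # identical underlying rates
recorded = {"area_a": 120, "area_b": 80}     # biased historical records

for year in range(1, 6):
    total = sum(recorded.values())
    # The "predictive" step: patrols follow past records
    patrol_share = {a: recorded[a] / total for a in recorded}
    # The recording step: you find crime where you look for it
    for a in recorded:
        recorded[a] += round(true_rate[a] * patrol_share[a] * 2)
    print(year, patrol_share)  # stays 0.6 / 0.4 every year

# The 60/40 split persists indefinitely even though true rates are equal:
# the historical mistake has become a permanent risk indicator.
```

Allocate patrols by past records and the initial bias never washes out; the disparity is reproduced year after year with perfect mathematical confidence.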
Nobody in Government is stating that the rollout of AI is an experiment. Surveillance infrastructure never retreats. Every database, camera and algorithm built for the worst offenders will inexorably become, over time, available for broader use. Today the target is violent or prolific criminals; tomorrow it could be protest organisers or those deemed by the political class to be a problem. We have already seen this with the policing of social media and the use of Non-Crime Hate Incidents. How can the police be trusted with transformational technology such as this?
Efficiency is the final lie. Assumed reductions in paperwork, better targeting and smoother processes do not justify expanding state surveillance. And in any case, during my time in the police, the introduction of new technology never reduced the amount of bureaucracy; it merely transferred it from the page to the screen, and often increased it. Swift injustice is not progress.
Enshrining the use of artificial intelligence across UK law enforcement will abolish any anonymity in the public space and replace it with permanent identifiability. Every journey will become traceable, every gathering recordable, every deviation from the norm potentially suspicious. Yes, this already happens during the course of a police investigation, but that is to establish the movements and behaviours of identifiable suspects, not to generally monitor the entire populace.
This is not policing by consent, as per the original Peelian Principles; it is policing by omnipresence and, unlike watching a Hollywood movie, we won’t be able to walk away if we don’t like it.
Paul Birch is a former police officer and counter-terrorism specialist. You can read his Substack here.
This article (The Police Plan to Roll Out AI in ‘Predictive Analytics’ Should Worry Us All) was created and published by The Daily Sceptic and is republished here under “Fair Use” with attribution to the author Paul Birch.
