PETE NORTH
I’m just perusing the policing White Paper published yesterday by the Home Office – billed as the “most significant modernisation in nearly 200 years”. Over on Turbulent Times, t’other North says “To my mind, if we had a grown-up media industry, this would easily qualify as the lead item on every front page, with multiple explanatory and comment articles, together with full coverage of the political treatment including a report on the parliamentary statement by Home Secretary Shabana Mahmood and the subsequent debate”.
Instead, he remarks, a venal press concentrates mainly on personalities, with the Suella Braverman defection to Reform taking a lion’s share of the coverage, along with some tedious trivia about some people called Beckham and a continuation of the Burnham soap opera. The BBC, at the time of writing, devoted its lead website coverage to developments in Minneapolis.
The reason for this, I suppose, is that the people in newsrooms assume that because something is not of interest to them, it's not important at all. Moreover, to give the subject a proper airing, you would need to examine this paper in much greater detail, and not from a standing start; but the news agenda has already moved on, and newspapers no longer employ specialist correspondents. The people writing the news don't really know what they're looking at. I think I do, though.
My first thought on the paper is that it's not actually police reform at all. Instead, the police are being restructured in order to fold them into the regionalisation process. The report states:
While individual Police and Crime Commissioners (PCCs) have made an important contribution to serving their communities, the model has not lived up to expectations. Having a system of police governance that is separate from the existing structures of local government has created unnecessary silos. Mayors and local government leaders are better placed to promote joined-up working to cut crime. We will therefore abolish PCCs, replacing them with directly elected mayors, and where mayors do not yet exist, with Policing and Crime Boards made up of local council leaders. This new system of police governance will reintegrate policing back into the system of local government in England and Wales, enabling greater collaboration across local services.
That there is the smoking gun. For some years now we've been in the process of regionalisation. My own local authority was abolished a while back, folded into a new amalgamated North Yorkshire Council, effectively ending local democracy. Police amalgamation is just part of that process, and it has no bearing on what is best for policing or whether people actually want it.
The government argues that the current structure is highly inefficient, with each of the 43 forces having its own headquarters, management teams, operational and business support functions and many specialist capabilities. These costs are particularly high in smaller forces, some of which are struggling to maintain financial resilience.
This, to me, suggests even more centralisation, which has been disastrous for Scotland. We have already seen how this plays out: police services are moved into larger, more remote headquarters, so when they nab you for a spicy tweet they have to drive you halfway across the county to book you in, and police officers are left covering a patch spanning hundreds of square miles. However they dress it up, they're abolishing local policing, and forces will have a much diminished insight into the local crime landscape.
Cutting to the chase, I don't think this is going to help matters at all. The police are already overly bureaucratic, and this isn't going to improve things. Whatever reduction in operating overheads you get from centralisation is rapidly offset by the growth of middle-tier non-jobs and administration. Nor will it do anything for local accountability.
More to the point, it won't really fix anything. One can't help but notice that the average plod is getting thicker – because nobody serious wants to do a dangerous job for crap remuneration that ultimately amounts to sweeping leaves on a windy day. If the government were actually interested in reforming the police, they'd be looking at the system as a whole, encompassing justice reform and repairing the prison service.
Part of the reason policing is a dangerous job nobody wants to do is because sentencing doesn’t act as a deterrent to attacking police officers, and police themselves face draconian scrutiny whenever they defend themselves. As to the prisons themselves, it is not at all hyperbole to say that the system is fundamentally broken.
This white paper is more concerned with bending the police to the broader agenda of regionalisation and centralisation, following on from Tory austerity, which saw hundreds of police stations and local magistrates' courts shuttered. On that basis, it's hard to see this as a programme of reform when it's really just a continuation of the retreat from localism.
This article (Labour quietly abolishes local policing) was created and published by Pete North and is republished here under “Fair Use”
See Related Article Below
Mahmood will unleash two-tier terror with her plan for AI policing
BRUCE NEWSOME
ON MONDAY afternoon Home Secretary Shabana Mahmood told the Commons that her police reforms will include the ‘largest-ever rollout of facial recognition’. This includes spending £115 million for police forces to roll out Artificial Intelligence systems, overseen by a new organisation to be called ‘Police.AI’.
The Times’s report details her plans to reduce 43 constabularies to 12, each with data analysts and AI software Tasper to predict crime. This is a terrible idea.
Predictive AI never predicts as promised. It always reinforces institutional biases. If police are systemically biased against conservative commentators, white recruits, and ‘openly Jewish’ protesters, predictive AI is only going to reinforce their prejudices. It will also intensify two-tier policing.
Sellers of predictive AI claim that a human is always in the loop, checking the validity of the automated predictions. But this claim is contradicted by their other claim: that automation saves personnel. Whatever personnel are left to supervise the AI usually don’t know how to validate predictions, can’t be bothered, or challenge only the predictions they dislike. Algorithms, we are told, can forecast crime hotspots, identify likely offenders, select recruits, and allocate resources more efficiently than humans could. But humans are the ones who set up the AI. What is difficult for humans to predict is also difficult for humans to program into AI.
Predictive AI isn’t any more intelligent than a human. It’s not really intelligent at all. Predictive AI is actually machine learning from historical data to forecast future behaviours. It’s just a pattern whisperer.
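To make ‘pattern whisperer’ concrete, here is a minimal sketch in Python, using invented area names and incident counts (nothing here comes from any real system): the ‘forecast’ for next month is nothing more than a re-ranking of the historical record.

    from collections import Counter

    # Hypothetical incident records: (month, area) pairs.
    history = [
        (1, "Northfield"), (1, "Northfield"), (1, "Southgate"),
        (2, "Northfield"), (2, "Eastbrook"), (2, "Northfield"),
        (3, "Northfield"), (3, "Southgate"), (3, "Northfield"),
    ]

    counts = Counter(area for _, area in history)
    total = sum(counts.values())

    # The "forecast" for next month: the historical relative frequency.
    forecast = {area: n / total for area, n in counts.items()}
    for area, risk in sorted(forecast.items(), key=lambda kv: -kv[1]):
        print(f"{area}: predicted risk {risk:.0%}")
    # Northfield tops the list because it topped the history; nothing more.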
Predictive AI’s advantage is financial, not operational. It’s quicker and cheaper in finding patterns, but not better. It finds patterns that a disinterested human would recognise as spurious. For instance, AI that was hyped as detecting covid in chest X-rays had merely learned to distinguish adults from children: in its training, all the covid-free patients were children.
Some experts predict that predictive AI will never improve, even though generative AI will continue to improve.
AI’s failure to predict is inherent to the data from which it ‘learns’ or on which it ‘is trained’ in the absence of theory.
Historical data are inherently unpredictive of other periods. This is most obvious when observing an adaptive or contingent behaviour. For instance, if you had used historical data from before the 2000s to predict terrorism in the 2000s, you would not have predicted 9/11. Machine learning would have predicted negotiable airline hijackings, not suicidal hijackers flying planes into buildings.
Also, samples are inherently unpredictive outside of the sampled demographics or geography. For instance, police across America use the Ohio Risk Assessment System, which was trained on just 452 defendants, all in Ohio, all in 2010. Think how unrepresentative that sample is.
There is a temptation to think that the sample merely needs to be bigger. Well, the Public Safety Assessment did just that. It was trained on 1.5 million subjects across 300 American jurisdictions. But Cook County, Illinois, found that it ‘predicted’ ten times more defendants escalating to violent crimes than actually escalated: where detention follows the flag, roughly nine in ten of those detained posed no such risk. Thousands of defendants were jailed unnecessarily. The PSA, despite its sample size, did not represent propensities in Cook County.
Training data reflect self-selection and selection biases, which AI will only reinforce. For instance, if a hiring algorithm is fed CVs from a male-dominated industry, it will inadvertently associate men with success, and select male candidates – thereby exacerbating the gender bias. This isn’t hypothetical: it’s the real-life story of a recruiting tool scrapped by Amazon in 2018.
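The mechanism is easy to demonstrate. Below is a toy sketch with entirely synthetic numbers; the ‘gendered token’ stands in for CV signals such as ‘women’s chess club captain’, which Amazon’s scrapped tool reportedly penalised.

    # Toy sketch with synthetic numbers: a hiring history from a
    # male-dominated industry. Token = 1 if the CV contains a gendered
    # signal such as "women's chess club"; all figures are invented.

    # (has_gendered_token, was_hired) pairs.
    history = [(0, 1)] * 80 + [(0, 0)] * 120 + [(1, 1)] * 5 + [(1, 0)] * 45

    def hire_rate(token_value):
        outcomes = [hired for token, hired in history if token == token_value]
        return sum(outcomes) / len(outcomes)

    # The "learned" rule: hire probability conditioned on the token.
    print(f"P(hire | no gendered token) = {hire_rate(0):.0%}")  # 40%
    print(f"P(hire | gendered token)    = {hire_rate(1):.0%}")  # 10%
    # Any model fitted to this history will down-score CVs containing
    # the token; it has learned the past bias, not candidate quality.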
Now think how West Yorkshire Police, which already discourages applications from whites, could automate such racism. Similarly, predictive policing software PredPol (now Geolitica) was criticised for directing patrols to neighbourhoods which already receive more policing, regardless of changes in actual crime.
When AI is trained on policing data, it does not ‘learn’ criminal patterns; it learns policing patterns. AI becomes part of a self-reinforcing feedback loop, reinforcing the illusion that crime is concentrated where AI says it is. The algorithm is predictive in only the self-fulfilling sense.
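That loop is easy to simulate. In the sketch below, with invented numbers, two areas have identical underlying crime, but one starts with more patrols; because a crime only enters the data when an officer is present to record it, the records ‘confirm’ the deployment indefinitely.

    # Two areas with the SAME true crime rate, but area A starts with
    # more patrols. Detection tracks where police look, the model
    # retrains on the records, and the records validate the deployment.
    import random

    random.seed(0)
    TRUE_RATE = 0.1                        # identical underlying crime
    patrol_share = {"A": 0.7, "B": 0.3}    # initial (biased) deployment
    records = {"A": 0, "B": 0}

    for week in range(52):
        for area in ("A", "B"):
            crimes = sum(random.random() < TRUE_RATE for _ in range(100))
            # A crime only enters the data if an officer is there to see it.
            detected = sum(random.random() < patrol_share[area]
                           for _ in range(crimes))
            records[area] += detected
        # "Predictive" step: redeploy in proportion to recorded crime.
        total = records["A"] + records["B"]
        patrol_share = {a: records[a] / total for a in records}

    print(records, patrol_share)
    # Area A ends up with roughly 70 per cent of recorded crime and keeps
    # the lion's share of patrols, although the areas were identical all along.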
Now think of how this feedback loop rationalises police biases. For instance, West Midlands Police have been exposed cracking down on anti-asylum protesters while letting armed Muslim gangs take over Birmingham city centre, and banning Israeli football fans while kowtowing to immoderate Muslim ‘community leaders’. How did West Midlands Police generate the intelligence to rationalise its ban on Israeli fans? Partly with AI, which reported Israeli violence at a football match that never happened.
Predictive AI is also inherently invasive of privacy. Machine learning depends on personal data: online browsing, purchasing, travel, socialising, voting, health, demographics and so on. Social media companies, such as Facebook (now Meta), use such data to predict everything from political leanings to mental health, often without explicit consent. Now imagine the police predicting your crime of racial hatred because you browsed news of a protest outside an asylum hotel, or your crime of Islamophobia because you browsed the facts of Muslim ‘grooming gangs’.
Predictive AI reinforces stereotyping. The Cambridge Analytica scandal of 2018 revealed how predictive AI could reduce millions of users to a few political stereotypes, based on Facebook data, and then feed those stereotypes with targeted political messaging. Netflix claims to recommend what its viewers want to watch, but really it homogenises most viewers around a minority of options. Imagine a school and thence a local authority that stereotypes you as a far-right extremist because you showed students of politics some videos made by Trump supporters. That happened in 2025. AI could help police to identify people who watched such videos on Facebook and thence flag them as far-right extremists.
Predictive AI is used to justify policing that is invasive and repressive, even if it stops short of fighting any crime. Policing has shifted from responding to crimes to intimidating supposed pre-criminals (euphemistically: ‘managing risk’). The presumption of innocence has shifted to presumption of propensity. Probable cause has shifted from preparations for crime to conformity with a pattern. Predictive policing might suppress crime, but it also represses lawful activity and erodes trust. Without trust, policing becomes less effective – at least the intelligence-led kind.
AI undermines accountability. Most predictive systems are proprietary. Thence, the public, police, and even the supplier do not understand how predictions are generated. When an officer acts on an algorithmic recommendation, who is responsible for the outcome? The officer? The constabulary? The vendor? This diffusion of responsibility weakens accountability.
AI undermines due process. Unlike a human, an algorithm cannot be cross-examined or criminalised. The user and programmer can claim ignorance or irresponsibility. Errors are hard to detect, let alone correct. Even good-faith investigations can collapse into ‘computer says so’. Apologists argue that humans are biased too, so algorithmic bias is merely a lesser evil. This is a false choice. Human bias is more contestable and corrigible. Algorithms give bias the appearance of objectivity, and embed it in systems that operate at scale and are difficult to investigate and turn around.
Predictive AI elevates machine automation over human autonomy. This is ‘automation bias’. For instance, IBM’s Watson Health promised to predict patient outcomes but fell short. Nevertheless, doctors were more likely to defer to AI judgments than to colleague judgments, and even their own judgments.
This ‘automation bias’ has been documented in aviation, where pilots are more likely to shut down a healthy engine (a potentially deadly decision) when AI falsely warns of a problem than when a human falsely warns of a problem.
Efficiencies are over-estimated or over-valued. Acting on bad predictions is expensive, such as when Cook County jailed thousands of defendants unnecessarily. Correcting for bad predictions is expensive, such as when Kent Police paid £20,000 to Julian Foulkes, a 71-year-old retired special constable, for wrongfully invading and searching his home and arresting him over a satirical tweet that the police misread as inciting the ‘storming [of] Heathrow’. In any case, financial efficiencies are over-valued at the expense of operational effectiveness.
AI which cannot predict outside the historical period or the demographics or geography on which it is trained, which reinforces self-selection and selection biases, invades privacy, feeds stereotyping, encourages invasive and repressive policing, undermines accountability, undermines due process, and automates human judgments is ineffective, even if it is efficient.
This article (Mahmood will unleash two-tier terror with her plan for AI policing) was created and published by Conservative Woman and is republished here under “Fair Use” with attribution to the author Bruce Newsome