An Elegant AF Blog

Periodical tech commentary from the in-between.

Predicting Murder & Ignoring Misconduct: How the UK’s New AI Policing Tool Misses the Mark

Having declared progress on addressing the deep-rooted cultural issues within their ranks, the Metropolitan Police are moving on to a new frontier: predicting who might commit murder...with AI.

The so-called “murder prediction tool”, officially titled “Sharing Data to Improve Risk Assessment”, was first reported by The Guardian in early April 2025. It’s a collaboration between the Ministry of Justice, the Home Office, and Greater Manchester Police.

It's still in the research phase. Nonetheless, documents obtained by Statewatch through Freedom of Information requests reveal that data from up to 500,000 individuals, including victims, witnesses, and those with safeguarding concerns, has already been shared to train the new model.

Crucially, this is historical data, drawn from a criminal justice system long criticised for systemic racism, class bias, and misogyny. Feeding it into a predictive model risks hard-wiring past injustices into future decisions, now under the guise of algorithmic objectivity.

It's worth noting that no AI would have been necessary to prevent the 2021 abduction and murder of Sarah Everard by a serving police officer. Nor would predictive policing be required to know who is most likely to be convicted of misusing privileged access to data; recent evidence suggests it is frequently police officers themselves.

Given this, it's striking that developing a "homicide prediction" tool is seen as a higher priority than addressing ongoing police misconduct and abuse of power.

The concern isn't just about flawed data or unproven technology. It's about where we choose to focus our public resources, and whether new tools will challenge existing systems of injustice, or simply reinforce them.

Edward Aslin
