March 29 2025

Predictive policing condemned by rights group for encouraging ‘racist and discriminatory’ policing

Pic by Matt Preston (Flickr, Creative Commons)

New research has condemned ‘predictive policing’ in the UK, arguing the practice ‘supercharges racism’, resulting in discrimination and infringement of human rights while failing to lower crime levels.

‘Predictive policing’ refers to automated technologies that use data and algorithmic models to predict where crimes will be committed and profile who is ‘at-risk’ of committing those crimes.

These assessments influence a range of policing activity, including patrols, targeted operations, surveillance and stop and search, in what the police say is an effort to stop crimes before they occur. Such data-driven systems are currently in use by over three-quarters of police forces across the United Kingdom.

A new report by Amnesty International has found that predictive policing systems encourage ‘racist and discriminatory policing and criminalisation of areas, groups and individuals’, which it says perpetuates institutional racism in policing and wider society.

The report highlights the ‘inherent bias’ in the data these systems rely on, which is drawn from ‘racist policing’ practices. The result is the repeated targeting of marginalised and deprived communities that are already over-represented in police data, causing trauma at both an individual and a community level and perpetuating a cycle of criminalisation. The report argues that these technologies infringe targeted individuals’ human rights, including the right not to be discriminated against, the right to privacy, the right to a fair trial and freedom of association.
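The feedback loop the report describes can be illustrated with a simplified, hypothetical model. The sketch below is illustrative only and does not reflect any force’s actual system: it assumes patrols are allocated wherever recorded crime is highest, and that offences are mostly recorded where officers are present to observe them.

```python
# Hypothetical illustration of the feedback loop critics describe:
# patrols follow recorded crime, and crime is recorded where police
# patrol, so an early skew in the data becomes self-reinforcing.
# This is a toy model, not any force's actual system.
import random

random.seed(1)

# Both areas have the same underlying offence rate by construction.
TRUE_OFFENCE_RATE = {"Area A": 0.10, "Area B": 0.10}
# Area A simply starts with more recorded crime in the historical data.
recorded = {"Area A": 12, "Area B": 8}

for week in range(20):
    total = sum(recorded.values())
    # Patrol hours allocated in proportion to recorded crime ("hotspot" logic).
    patrols = {area: 100 * count / total for area, count in recorded.items()}
    for area, hours in patrols.items():
        # Offences are only recorded when officers are present to observe
        # them, so more patrol hours mean more recorded crime, not more crime.
        observed = sum(random.random() < TRUE_OFFENCE_RATE[area]
                       for _ in range(int(hours)))
        recorded[area] += observed

# The initial disparity in recorded crime persists week after week,
# even though both areas have identical true offence rates.
print(recorded)
```

In this toy model the two areas are identical by construction; the only difference is the historical record the system learns from, which is the ‘inherent bias’ the report describes.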

One resident of a ‘high crime’ community said of the impact: ‘It’s labelled a crime hotspot. So when the police enter the area, they’re in the mindset of “we’re in a dangerous community – the people here are dangerous”. It doesn’t matter if they’re young people, they’re still ‘dangerous’ and therefore ‘we can police them violently’ and they do police them violently.’

Amnesty has advocated a ban on data-based predictive policing and risk assessment systems. It has also called, at a minimum, for transparency to be put on a statutory footing through the creation of a publicly accessible register of the systems in use.

A spokesperson for the National Police Chiefs’ Council, which has previously admitted that ‘policing is institutionally racist’, stated that ‘policing uses a wide range of data to help inform its response to tackling and preventing crime, maximising the use of finite resources. As the public would expect, this can include concentrating resources in areas with the most reported crime. We are working hard to improve the quality and consistency of our data to better inform our response, ensuring that all information and new technology is held and developed lawfully, ethically and in line with the Data Ethics Authorised Professional Practice (APP).’

However, whether predictive policing actually reduces crime is disputed. The chief executive of Amnesty International UK argues: ‘The evidence that this technology keeps us safe just isn’t there; the evidence that it violates our fundamental rights is clear as day. We are all much more than computer-generated risk scores.’
