AI in Policing

AI tools are developed with the aim of preventing crime, using techniques such as computer vision, pattern recognition, and the analysis of historical data to create crime maps: locations with a higher risk of offence. Whilst they may reduce on-the-fly human bias, they may also automate systemic biases. For example, facial recognition techniques are less reliable for non-white individuals, especially for black women.

Historical data may reflect the over-policing of certain locations whilst under-policing others. Those patterns get encoded in the algorithms, which then reinforce the over- and under-policing of the same areas in the future. The abundance of data can also make postcode a proxy for ethnicity, so a model can produce ethnically biased outputs even where ethnicity is never an explicit input.
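
This feedback loop can be made concrete with a small simulation. The sketch below is purely illustrative and does not model any deployed system: it assumes two districts with identical true crime rates and a simple (hypothetical) allocation rule that sends patrols in proportion to recorded crime. Because crimes are only recorded where patrols are present, the district with more historical records keeps attracting more patrols, and therefore more records.

```python
import random

random.seed(0)

# Two districts with the SAME underlying crime rate.
TRUE_RATE = 0.1          # probability a single patrol observes a crime
recorded = [50, 10]      # district A starts over-represented in the historical data
PATROLS_PER_DAY = 100

for day in range(1000):
    total = sum(recorded)
    # Allocation rule: send patrols in proportion to *recorded* crime.
    patrols = [round(PATROLS_PER_DAY * r / total) for r in recorded]
    for d, n in enumerate(patrols):
        # Crimes are only recorded where patrols are present,
        # so more patrols -> more records -> more patrols tomorrow.
        recorded[d] += sum(random.random() < TRUE_RATE for _ in range(n))

share = recorded[0] / sum(recorded)
print(f"Share of records from district A after 1000 days: {share:.0%}")
```

With equal true rates, a fair allocation would converge to a 50/50 split; the simulation instead stays near its initial 5:1 imbalance, because the data the rule learns from is itself a product of past patrol decisions.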

Unmasking Facial Recognition | WebRoots Democracy Festival

This video is an in-depth panel discussion of the issues uncovered in the ‘Unmasking Facial Recognition’ report from WebRootsDemocracy. This report found that facial recognition technology use is likely to exacerbate racist outcomes in policing, and revealed that London’s Metropolitan Police failed to carry out an Equality Impact Assessment before trialling the technology at […]

How AI Could Reinforce Biases In The Criminal Justice System

Whilst some believe AI will increase objectivity in policing and sentencing, others fear it will exacerbate bias. For example, the over-policing of minority communities in the past has generated a disproportionate number of recorded crimes in some areas; when that data is passed to algorithms, it in turn reinforces over-policing.
