AI in Policing

AI tools are developed with the aim of preventing crime, using techniques such as computer vision, pattern recognition, and the analysis of historical data to create crime maps: maps of locations with a higher risk of offences. Whilst such tools may reduce on-the-fly human bias, they may also automate systemic biases. For example, facial recognition techniques are less reliable for non-white individuals, especially for black women.
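
As a toy illustration of the crime-map idea above, the sketch below counts historical incident reports per postcode and flags the most-reported areas as "hotspots". The data, postcodes, and threshold are invented for the example and do not reflect any real deployment.

```python
from collections import Counter

# Hypothetical historical incident records as (postcode, offence_type) pairs.
historical_incidents = [
    ("E1 6AN", "theft"), ("E1 6AN", "assault"), ("E1 6AN", "theft"),
    ("SW1A 1AA", "theft"),
    ("M1 1AE", "burglary"), ("M1 1AE", "theft"),
]

def build_crime_map(incidents, top_n=2):
    """Rank postcodes by recorded incident count and mark the top_n as hotspots."""
    counts = Counter(postcode for postcode, _ in incidents)
    hotspots = [postcode for postcode, _ in counts.most_common(top_n)]
    return counts, hotspots

counts, hotspots = build_crime_map(historical_incidents)
print(counts)    # Counter({'E1 6AN': 3, 'M1 1AE': 2, 'SW1A 1AA': 1})
print(hotspots)  # ['E1 6AN', 'M1 1AE'] -- these areas would attract more patrols
```

Everything such a map "knows" comes from what was recorded in the past, which is exactly where the bias discussed next enters.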

Historical data may reflect the over-policing of certain locations and the under-policing of others. Those patterns get encoded in the algorithms, which then reinforce the over- and under-policing of the same areas in the future. The abundance of data can also turn postcode into a proxy for ethnicity.
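
This feedback loop can be made concrete with a small, purely hypothetical simulation (the areas, rates, and patrol numbers are invented): patrols are allocated in proportion to previously recorded incidents, and patrols can only record offences where they are present, so the areas that start with more records accumulate ever more, even though the underlying offence rates are identical.

```python
import random

random.seed(0)

# Hypothetical true offence rates -- identical in every area, so any divergence
# below comes from where patrols are sent, not from underlying behaviour.
TRUE_RATE = {"Area A": 0.3, "Area B": 0.3, "Area C": 0.3}

# Biased historical records: Area A was patrolled far more heavily in the past.
recorded = {"Area A": 30, "Area B": 5, "Area C": 5}

PATROLS_PER_ROUND = 10

for _ in range(5):
    total = sum(recorded.values())
    for area, count in list(recorded.items()):
        # Allocate patrols in proportion to the "crime map" of past records.
        patrols = round(PATROLS_PER_ROUND * count / total)
        # A patrol can only record an offence where it is actually deployed.
        detected = sum(random.random() < TRUE_RATE[area] for _ in range(patrols))
        recorded[area] += detected

print(recorded)  # Area A keeps pulling ahead despite identical true offence rates
```

Swapping postcodes in for the area names also makes the proxy problem visible: where residential segregation means a postcode correlates strongly with ethnicity, a model that never sees ethnicity directly can still learn it through the postcode.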

Can make-up be an anti-surveillance tool?

As protests against police brutality and in support of the Black Lives Matter movement continue in the wake of George Floyd’s killing, protection against mass surveillance has become top of mind. This article explains how make-up can be used both as a way to evade facial recognition systems and as an art form.

Another arrest, and jail time, due to a bad facial recognition match

A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known black man to be wrongfully arrested based on face recognition.

How AI Could Reinforce Biases In The Criminal Justice System

Whilst some believe AI will make policing and sentencing more objective, others fear it will exacerbate bias. For example, the over-policing of minority communities in the past has generated a disproportionate number of recorded crimes in some areas; those records are fed to the algorithms, which in turn reinforce over-policing.

Facial recognition could stop terrorists before they act

In their zeal to protect individual privacy, policymakers run the risk of stifling innovation. The author argues that using facial recognition to prevent terrorism is justified because the world is becoming more dangerous every day, and that policymakers should therefore err on the side of public safety.

Is police use of face recognition now illegal in the UK?

The UK Court of Appeal has unanimously determined that the use of a face-recognition system by South Wales Police was “unlawful”, a ruling that could have ramifications for the widespread use of such technology across the UK.

UK police adopting facial recognition, predictive policing without public consultation

UK police forces are largely adopting AI technologies, in particular facial recognition and predictive policing, without public consultation. This article raises the alarm about that practice and calls for transparency and public input into how those technologies are used.
