Whilst some believe AI will increase objectivity in policing and sentencing, others fear it will exacerbate bias. For example, past over-policing of minority communities has produced disproportionately high crime records in some areas; these records are fed into algorithms, which in turn reinforce over-policing of those same areas.
A report exploring the racial bias challenges raised by the police's use of live facial recognition technology in the United Kingdom. It focuses on the technology's implications for people of colour and Muslims, two heavily surveilled groups in society.
The author argues that using facial recognition to prevent terrorism is justified because the world is becoming more dangerous, and that policymakers should therefore err on the side of public safety.
The UK Court of Appeal unanimously ruled against South Wales Police's use of a live facial recognition system.
This article warns that UK police are deploying facial recognition and predictive policing without conducting public consultations. It calls for transparency and public input into how these technologies are used.
Algorithms used for predictive policing rely on datasets that are inherently biased because certain communities have historically been over- or under-policed. As a result, the algorithms amplify those biases.
A good summary of how predictive analytics, as used in AI, differs from traditional policing approaches in both method and impact.