A report exploring the racial-bias challenges raised by police use of live facial recognition technology in the United Kingdom. It focuses on the technology's implications for people of colour and Muslims – two heavily surveilled groups in society.
Algorithms used for predictive policing rely on datasets that are inherently biased because certain communities have historically been over- or under-policed. The result is a feedback loop that amplifies those biases: more predicted crime in a neighbourhood leads to more patrols, which produce more recorded incidents, which further skew the training data.
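The feedback loop described above can be sketched in a toy simulation. Everything here is hypothetical (the district names, rates, and patrol counts are invented for illustration, not drawn from any real system): two districts have the same true crime rate, but one starts with more recorded incidents because it was historically over-policed, and a "predictive" allocation of patrols then entrenches that gap.

```python
import random

random.seed(0)  # make the illustrative run reproducible

# Two hypothetical districts with the SAME underlying crime rate.
true_rate = {"A": 0.10, "B": 0.10}

# District A starts with more recorded incidents purely because it
# was historically over-policed; the data differ, not the crime.
recorded = {"A": 50, "B": 10}

PATROLS_PER_STEP = 60

for step in range(20):
    total = recorded["A"] + recorded["B"]
    # "Predictive" allocation: patrols follow past records.
    patrols_a = round(PATROLS_PER_STEP * recorded["A"] / total)
    patrols_b = PATROLS_PER_STEP - patrols_a
    # Each patrol detects a crime with the district's true rate, so
    # more patrols mean more *recorded* crime, not more actual crime.
    recorded["A"] += sum(random.random() < true_rate["A"] for _ in range(patrols_a))
    recorded["B"] += sum(random.random() < true_rate["B"] for _ in range(patrols_b))

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"District A's share of recorded crime: {share_a:.0%}")
```

Despite identical true rates, district A keeps the lion's share of recorded crime, so a model trained on these records would "confirm" the original policing pattern.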
A good summary of how predictive analytics – as used in AI – differs from traditional policing methods, both in approach and in impact.