AI tools are being developed with the aim of preventing crime, using techniques such as computer vision, pattern recognition, and historical data to build crime maps: locations flagged as carrying a higher risk of offences. Whilst such tools may reduce in-the-moment human bias, they risk automating systemic biases. For example, facial recognition techniques are less reliable for non-white individuals, especially for black women.
Historical data may reflect the over-policing of certain locations and the under-policing of others. Those patterns get encoded in the algorithms, which then reinforce the over- and under-policing of the same areas in the future. The abundance of correlated data can also turn postcode into a proxy for ethnicity, so that removing ethnicity from a model does not remove the bias.
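To make the feedback loop concrete, here is a minimal, hypothetical sketch in Python (a toy model of our own devising, not any real predictive-policing system): two districts have identical true crime rates, but one starts with more recorded incidents because it was historically patrolled more heavily, and patrols are then allocated in proportion to the records.

```python
import random

# Toy model of a predictive-policing feedback loop (hypothetical, for
# illustration only). Districts A and B have the SAME true crime rate,
# but A starts with more recorded incidents due to past over-policing.
TRUE_CRIME_RATE = 0.1          # identical in both districts
recorded = {"A": 60, "B": 40}  # historical records skewed by past patrols
TOTAL_PATROLS = 100

random.seed(0)
for year in range(10):
    total = sum(recorded.values())
    # The "predictive" model allocates patrols in proportion to past records.
    patrols = {d: round(TOTAL_PATROLS * recorded[d] / total) for d in recorded}
    # More patrols mean more crimes *observed*, even though the underlying
    # rate is identical; the new observations feed back into the records.
    for d in recorded:
        observed = sum(random.random() < TRUE_CRIME_RATE
                       for _ in range(patrols[d]))
        recorded[d] += observed
    print(f"year {year}: patrols {patrols}, records {recorded}")

# The initial 60/40 skew persists (and can drift further) despite equal
# true crime rates: the data records where police looked, not where
# crime happened.
```

Running the sketch shows district A continuing to receive the majority of patrols year after year, purely because of the skew in its historical records.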
Technology has never been colourblind. It’s time to abolish notions of “universal” users of software. This is an overview of racial justice in tech and in AI that considers how […]
This video is an in-depth panel discussion of the issues uncovered in the ‘Unmasking Facial Recognition’ report from WebRootsDemocracy. This report found that facial recognition technology use is likely […]
As protests against police brutality and in support of the Black Lives Matter movement continue in the wake of George Floyd’s killing, protection against mass surveillance has become top of […]
There’s software used across the country to predict future criminals. And it’s biased against blacks. This is an article detailing software which is used to predict the likelihood of […]
A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known black man to be wrongfully arrested based on […]
Whilst some believe AI will increase police and sentencing objectivity, others fear it will exacerbate bias. For example, the over-policing of minority communities in the past has generated a disproportionate […]
Facial recognition is not the next generation of CCTV. Whilst CCTV takes pictures, facial recognition takes measurements. Measurements of the distance between your eyes, the length of your nose, the […]
In their zeal and earnest desire to protect individual privacy, policymakers run the risk of stifling innovation. The author makes the case that using facial recognition to prevent terrorism is […]
The UK Court of Appeal has determined that the use of a face-recognition system by South Wales Police was “unlawful”, which could have ramifications for the widespread use of such […]
UK police forces are widely adopting AI technologies, in particular facial recognition and predictive policing, without public consultation. This article raises concerns about UK police using facial recognition and predictive policing […]
Algorithms used for predictive policing rely on datasets which are inherently biased because certain communities have historically been over- or under-policed. This results in the amplification of those biases.
A good summary of how predictive analytics, as used in AI, differs from traditional approaches, both in method and in impact.