If you are reading this, you have been asked to help contribute to, or test, this Race and AI Toolkit. Thank you for taking up the mission! Key facts: this Toolkit will be available for free globally as an online resource, and will have other types of elements added, such as an interactive section similar […]
Through a case study of mortgage applications, this article shows how bias can be introduced into AI systems through biased historical data and/or the inherent biases of AI programmers and employers. The article explains why this presents a risk to businesses in terms of missing out on customers (refusing credit to creditworthy people) […]
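To make the "bias in, bias out" mechanism concrete, here is a minimal sketch (entirely synthetic data, not drawn from the article) of how biased historical labels can teach a model to discriminate even when group membership is never an input. All names, numbers, and the postcode proxy are illustrative assumptions.

```python
# Hypothetical sketch: historical mortgage decisions that were biased against
# one group are used as training labels, and the model learns to reproduce
# that bias through a proxy feature (postcode), even though group membership
# is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # 0 = majority, 1 = minority (hypothetical)
income = rng.normal(50, 10, n)             # creditworthiness signal, same distribution for both
postcode = group + rng.normal(0, 0.3, n)   # proxy: postcode correlates with group

# Historical approvals: the same income threshold, but biased reviewers
# approved minority applicants less often, so the labels themselves are skewed.
approve_prob = 1 / (1 + np.exp(-(income - 50) / 5)) - 0.25 * group
y = rng.random(n) < np.clip(approve_prob, 0, 1)

X = np.column_stack([income, postcode])    # note: 'group' is NOT a feature
model = LogisticRegression().fit(X, y)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# Typically prints a noticeably lower approval rate for group 1: the model
# has absorbed the historical bias via the postcode proxy.
```

The point of the sketch is that removing the protected attribute from the feature set is not enough; the bias survives in the labels and resurfaces through correlated features.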
This short article looks at the link between the lack of diversity in the AI workforce and the bias against ethnic minorities within financial services – the “new danger of ‘bias in, bias out’”.
A detailed summary of research by Mark Weber, Mikhail Yurochkin, Sherif Botros and Vanio Markov, breaking down how the lack of racial justice in the current US financial system undermines the right to financial security, along with an examination of possible solutions.
A study on the discriminatory impact of algorithms in pre-trial bail decisions.
This video is an in-depth panel discussion of the issues uncovered in the ‘Unmasking Facial Recognition’ report from WebRootsDemocracy. The report found that the use of facial recognition technology is likely to exacerbate racist outcomes in policing, and revealed that London’s Metropolitan Police failed to carry out an Equality Impact Assessment before trialling the technology at […]
This article explains how make-up can be used both as a way to evade facial recognition systems and as an art form.
Machine Bias – There’s software used across the country to predict future criminals and it’s biased against blacks
This article details software used to predict the likelihood of reoffending. It uses case studies to demonstrate the racial bias prevalent in the software used to score the ‘risk’ of further crimes: even for a similar crime, a white defendant was much more likely to be judged low-risk. […]
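The kind of disparity this reporting relies on can be illustrated with a short sketch. The data below is synthetic and hypothetical, not the article's dataset; it simply shows how a risk tool can have unequal error rates across groups even when the true reoffending rate is the same for both.

```python
# Hypothetical sketch of an error-rate disparity: among people who did NOT
# reoffend, one group is flagged "high risk" far more often than the other.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)          # 0 and 1: two hypothetical groups
reoffended = rng.random(n) < 0.35      # same true reoffending rate for both

# Hypothetical risk scores: the tool systematically over-scores group 1.
score = 0.35 * reoffended + 0.15 * group + rng.normal(0, 0.15, n)
high_risk = score > 0.35

for g in (0, 1):
    mask = group == g
    fpr = (high_risk & ~reoffended & mask).sum() / (~reoffended & mask).sum()
    fnr = (~high_risk & reoffended & mask).sum() / (reoffended & mask).sum()
    print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
# Group 1 non-reoffenders are labelled high-risk far more often (higher FPR),
# while group 0 reoffenders are more often labelled low-risk (higher FNR) --
# the same asymmetric pattern of harm the article describes.
```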
A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known black man to be wrongfully arrested based on face recognition.
Explains (from a US perspective) how the growth of machine learning and algorithmic decision-making has left financial services at risk of exacerbating bias. Using lending as an example, the article explains how algorithms can embed bias into these systems, and what organisations can do to limit the risk, particularly from a legal perspective.
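One rough screen organisations sometimes run when assessing this kind of legal exposure is the "four-fifths rule" from US EEOC employment guidance, occasionally borrowed as a first-pass disparate-impact check in lending. The sketch below is a hypothetical illustration of that check; the function name, group labels, and numbers are all assumptions, not anything prescribed by the article.

```python
# Hypothetical compliance-style check: the four-fifths (80%) rule.
# A group whose approval rate falls below 80% of the best-treated group's
# rate is flagged for review. Illustrative only; real disparate-impact
# analysis is more involved and jurisdiction-specific.
def adverse_impact_ratio(approvals_by_group: dict[str, tuple[int, int]]) -> dict[str, float]:
    """approvals_by_group maps group -> (approved, total applicants).
    Returns each group's approval rate divided by the highest group's rate."""
    rates = {g: approved / total for g, (approved, total) in approvals_by_group.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical numbers for illustration only.
ratios = adverse_impact_ratio({"group_a": (800, 1000), "group_b": (560, 1000)})
for group, ratio in ratios.items():
    flag = "OK" if ratio >= 0.8 else "below four-fifths threshold: review"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Passing such a screen does not establish fairness, but failing it is the sort of signal the article suggests organisations should surface early, before regulators or litigants do.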