Racist Robots? How AI bias may put financial firms at risk

Through a case study of mortgage applications, this article shows how bias might be introduced into AI systems either through bias within historical data or through the inherent biases of AI programmers and employers. The article explains why this presents a risk to businesses in terms of missing out on customers (refusing credit to creditworthy people) […]
AI risks replicating tech’s ethnic minority bias across business

This short article looks at the link between the lack of diversity in the AI workforce and the bias against ethnic minorities within financial services – the “new danger of ‘bias in, bias out’”.
Black Loans Matter: fighting bias for AI fairness in lending

A detailed summary of research by Mark Weber, Mikhail Yurochkin, Sherif Botros and Vanio Markov, breaking down the lack of racial justice in the current US financial system, which denies many people the right to financial security, along with an examination of possible solutions.
Measuring racial discrimination in algorithms

A study on the discriminatory impact of algorithms in pre-trial bail decisions.
Unmasking Facial Recognition | WebRoots Democracy Festival

This video is an in-depth panel discussion of the issues uncovered in the ‘Unmasking Facial Recognition’ report from WebRoots Democracy. The report found that the use of facial recognition technology is likely to exacerbate racist outcomes in policing, and revealed that London’s Metropolitan Police failed to carry out an Equality Impact Assessment before trialling the technology at […]
Can make-up be an anti-surveillance tool?

This article explains how make-up can be used both as a way to evade facial recognition systems and as an art form.
Machine Bias – There’s software used across the country to predict future criminals and it’s biased against blacks

This article details software used to predict the likelihood of reoffending. It uses case studies to demonstrate the racial bias prevalent in the software used to predict the ‘risk’ of further crimes. Even for a similar crime, a white offender would be much more likely to be judged low-risk. […]
Another arrest, and jail time, due to a bad facial recognition match

A New Jersey man was accused of shoplifting and of trying to hit an officer with a car. He is the third known Black man to be wrongfully arrested on the basis of a facial recognition match.
Algorithms and bias: What lenders need to know

Explains (from a US perspective) how the development of machine learning and algorithms has left financial services at risk of exacerbating biases. Using the example of lending, the article explains how algorithms incorporate biases into our systems, and what organisations can do to limit risk, particularly from a legal perspective.
AI Perpetuating Human Bias in the Lending Space

Data suggests that the introduction of AI into the financial system will bring with it increased racial bias and discrimination. Through a case study of mortgage applications, this article shows how bias might be introduced into AI systems through (1) bias within historical data and (2) the inherent biases of AI programmers […]