Racist Robots? How AI bias may put financial firms at risk

Through a case study of mortgage applications, this article shows how bias can be introduced into AI systems through bias in historical data, through the inherent biases of AI programmers and employers, or both. It explains why this presents a risk to businesses in terms of missing out on customers (refusing credit to creditworthy people) […]

Algorithms and bias: What lenders need to know


Explains (from a US perspective) how the growth of machine learning and algorithmic decision-making has left financial services at risk of exacerbating existing biases. Using lending as an example, the article describes how algorithms can embed bias in decision-making systems and what organisations can do to limit the risk, particularly from a legal perspective.

Student Predictions & Protections: Algorithm Bias in AI-Based Learning


This short article gives an example of how predictive algorithms can penalise underrepresented groups of people. In this example, students from Guam had their pass rate underestimated relative to other nationalities: because so few students from Guam appeared in the data set used to build the prediction model, the model could not estimate their pass rate accurately.
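The statistical effect behind that example can be illustrated with a minimal simulation. This is not the article's actual model; the pass rate, group sizes, and seed below are hypothetical, chosen only to show that estimates computed from a small sample of students swing far more widely than estimates from a large one:

```python
import random

random.seed(0)

TRUE_PASS_RATE = 0.80  # hypothetical: assume every group truly passes at 80%

def estimate(n):
    # Estimate the pass rate from n simulated students.
    sample = [random.random() < TRUE_PASS_RATE for _ in range(n)]
    return sum(sample) / n

# A well-represented group (5000 students) vs. an underrepresented
# one (5 students), each estimated 200 times.
large = [estimate(5000) for _ in range(200)]
small = [estimate(5) for _ in range(200)]

def spread(estimates):
    # Range between the highest and lowest estimate across trials.
    return max(estimates) - min(estimates)

print(f"large-group estimate spread: {spread(large):.3f}")
print(f"small-group estimate spread: {spread(small):.3f}")
```

The small group's estimates scatter across a much wider range, so any single model built from such data can badly under- or over-state that group's true pass rate, which is exactly the failure mode the article describes.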