This is an overview of racial justice in tech and in AI that considers how systemic change must happen for technology to support equity.
Through a case study of mortgage applications, this article shows how bias might be introduced into AI systems through bias within historical data and/or the inherent biases of AI programmers and employers. The article explains why this presents a risk to businesses in terms of missing out on customers (refusing credit to creditworthy people) […]
This short article looks at the link between the lack of diversity in the AI workforce and the bias against ethnic minorities within financial services – the “new danger of ‘bias in, bias out’”.
This article explains how make-up can be used both as a way to evade facial recognition systems and as an art form.
Machine Bias – There’s software used across the country to predict future criminals and it’s biased against blacks
This article describes software used to predict the likelihood of reoffending. It uses case studies to demonstrate the racial bias prevalent in software used to predict the 'risk' of further crimes. Even for a similar crime, a white offender would be much more likely to be judged low-risk. […]
A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known black man to be wrongfully arrested based on face recognition.
Explains (from a US perspective) how the development of machine learning and algorithms has left financial services at risk of exacerbating biases. Using the example of lending, the article explains how algorithms incorporate biases into our systems, and what organisations can do to limit risk, particularly from a legal perspective.
Data suggest that the introduction of AI into the financial system will bring with it increased racial bias and discrimination. Through a case study of mortgage applications, this article shows how bias might be introduced into AI systems by (1) bias within historical data, and (2) the inherent biases of AI programmers […]
This article argues that leading AI ethics researchers, such as Timnit Gebru, are often promised total academic freedom when recruited or interviewed for in-house roles at technology companies. However, internal roadblocks, a lack of employee diversity, and hierarchical issues limit the impact of this research, which aims to promote equity, fairness, and accountability in AI products. […]
An article about HireVue’s “AI-driven assessments”. More than 100 employers now use the system, including Hilton and Unilever, and more than a million job seekers have been analysed.