Algorithms and bias: What lenders need to know


Explains, from a US perspective, how the rise of machine learning and algorithmic decision-making has left financial services at risk of exacerbating existing biases. Using lending as its example, the article explains how algorithms can embed bias into financial systems and what organisations can do to limit that risk, particularly from a legal perspective.

AI Perpetuating Human Bias in the Lending Space


Data suggests that the introduction of AI into the financial system will bring with it increased racial bias and discrimination. Through a case study of mortgage applications, this article shows how bias can be introduced to AI systems in two ways: (1) bias within historical data, and (2) the inherent biases of AI programmers […]

Reducing bias in AI-based financial services


This US-focused report considers four distinct ways of incorporating artificial intelligence into credit lending. It highlights the existing racial bias in credit scores, where white/non-Hispanic individuals are likely to have much higher credit scores than Black/African American individuals, and argues that although introducing AI could potentially exacerbate current biases, […]

Evaluating neural toxic degeneration in language models


This paper highlights how language models used to automatically generate text can produce toxic, offensive and potentially harmful language. The authors describe various techniques that can be employed to avoid or limit this, but demonstrate that no current method prevents it entirely.

Artificial Intelligence & Climate Change: Supplementary Impact Report


This report provides an overview of AI and climate change, with considerations for AI-based climate solutions. It discusses values-based climate communication and the difference between climate engagement and climate manipulation. It is a good source on localising climate visualisation and on how AI has been used to do this, including the risks involved.

Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it.


This article argues that leading AI ethics researchers, such as Timnit Gebru, are often promised total academic freedom when recruited for in-house roles at technology companies. In practice, however, internal roadblocks, a lack of employee diversity, and hierarchical issues limit the impact of this research, which aims to promote equity, fairness, and accountability in AI products. […]