AI Making Legal Judgements

Algorithms are increasingly becoming the arbiters of consequential determinations about individuals (e.g. government benefits, granting licenses, pre-sentencing and sentencing, granting parole). Whilst AI tools may be used to mitigate human biases and to make trials faster and cheaper, there is evidence that they may instead reinforce those biases by using characteristics such as postcode or socioeconomic status as a proxy for ethnicity.
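
A minimal sketch of this proxy effect, using hypothetical synthetic data and scikit-learn (the feature names, coefficients and group labels below are illustrative assumptions, not drawn from any real system): a risk model that is never shown ethnicity can still reproduce a group disparity when a postcode-derived feature correlates with the protected attribute.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical synthetic data: `protected` stands in for ethnicity and is
# never shown to the model; `postcode_risk` is a feature that correlates
# with it (residential segregation makes this plausible in practice).
protected = rng.integers(0, 2, n)
postcode_risk = 0.8 * protected + rng.normal(0.0, 0.4, n)
behaviour = rng.normal(0.0, 1.0, n)  # the factor that *should* drive risk

# Historically biased labels: group 1 was policed more heavily, so its
# recorded outcomes look worse regardless of underlying behaviour.
y = (behaviour + 0.8 * protected + rng.normal(0.0, 0.5, n) > 0.8).astype(int)

# Train WITHOUT the protected attribute -- only the proxy and behaviour.
X = np.column_stack([postcode_risk, behaviour])
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# The model recovers the group disparity through the proxy feature alone.
for g in (0, 1):
    print(f"group {g}: mean predicted risk = {scores[protected == g].mean():.3f}")
print(f"weight on postcode proxy = {model.coef_[0][0]:.2f}")
```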

The use of commercial AI tools such as speech recognition – which have been shown to be less reliable for non-white speakers – can actively harm some groups when criminal justice agencies use them to transcribe courtroom proceedings.
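
Audits of this kind of disparity typically compare word error rate (WER) across speaker groups. The sketch below shows such a comparison; the transcripts and group names are hypothetical placeholders, and a real audit would pair human reference transcripts with actual ASR output.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming Levenshtein distance over words (substitutions,
    # insertions and deletions all cost 1).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / max(len(ref), 1)

# Hypothetical reference/ASR-output pairs grouped by speaker demographic.
transcripts = {
    "group_a": [("the hearing is adjourned until monday",
                 "the hearing is adjourned until monday")],
    "group_b": [("the hearing is adjourned until monday",
                 "a hearing adjourned on monday")],
}
for group, pairs in transcripts.items():
    rates = [wer(ref, hyp) for ref, hyp in pairs]
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2f}")
```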


Measuring racial discrimination in algorithms

There is growing concern that the rise of algorithmic decision-making can lead to discrimination against legally protected groups, but measuring such algorithmic discrimination is often hampered by a fundamental selection challenge. We develop new quasi-experimental tools to overcome this challenge and measure algorithmic discrimination in the setting of pre-trial bail […]

The danger of predictive algorithms in criminal justice

A study on the discriminatory impact of algorithms in pre-trial bail decisions.

Racial disparities in automated speech recognition

Automated speech recognition (ASR) systems are now used in a variety of applications to convert spoken language to text, from virtual assistants, to closed captioning, to hands-free computing. By analyzing a large corpus of sociolinguistic interviews with white and African American speakers, we demo… Analysis of five state-of-the-art automated […]
