AI Making Legal Judgements

Algorithms are increasingly the arbiters of consequential determinations about individuals (e.g. government benefits, granting licenses, pre-sentencing and sentencing, granting parole). While AI tools may be used to mitigate human biases and to make trials faster and cheaper, there is evidence that they can reinforce biases by using characteristics such as postcode or socioeconomic status as a proxy for ethnicity.
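The proxy effect described above can be illustrated with a small sketch. The data below is entirely synthetic and hypothetical: it assumes a population in which ethnicity is correlated with postcode (e.g. through residential segregation), and a naive risk model that never sees ethnicity at all, yet still produces sharply different outcomes by group because it relies on postcode.

```python
import random

random.seed(0)

# Hypothetical synthetic population: ethnicity is never shown to the model,
# but postcode is strongly correlated with it.
population = []
for _ in range(10_000):
    ethnicity = random.choice(["A", "B"])
    if ethnicity == "B":
        # Group B lives mostly in postcodes 0-4.
        postcode = random.randint(0, 4) if random.random() < 0.9 else random.randint(5, 9)
    else:
        # Group A lives mostly in postcodes 5-9.
        postcode = random.randint(5, 9) if random.random() < 0.9 else random.randint(0, 4)
    population.append((ethnicity, postcode))

def risk_score(postcode):
    """A naive 'risk' rule that uses only postcode - no protected attribute."""
    return 1 if postcode < 5 else 0  # low postcodes flagged "high risk"

flagged = {"A": 0, "B": 0}
totals = {"A": 0, "B": 0}
for ethnicity, postcode in population:
    totals[ethnicity] += 1
    flagged[ethnicity] += risk_score(postcode)

for group in ("A", "B"):
    rate = flagged[group] / totals[group]
    print(f"Group {group}: {rate:.0%} flagged high risk")
```

Even though the rule is formally "blind" to ethnicity, the flagged rate for group B ends up roughly nine times that of group A, which is the mechanism the intro refers to when postcode acts as a proxy.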

The use of commercial AI tools such as speech recognition – which has been shown to be less reliable for non-white speakers – can actively harm some groups when criminal justice agencies use these tools to transcribe courtroom proceedings.


The danger of predictive algorithms in criminal justice

Dartmouth professor Dr. Hany Farid reverse engineers the inherent dangers and potential biases of recommendation engines built to mete out justice in today’s criminal justice system. In this video, he provides an example of how the number of crimes is used as a proxy for race.


Adjudicating by Algorithm, Regulating by Robot

Sophisticated computational techniques, known as machine-learning algorithms, increasingly underpin advances in business practices, from investment banking to product marketing and self-driving cars. Machine learning—the foundation of artificial intelligence—portends vast changes to the private sect… This article highlights the benefits of artificial intelligence in adjudication and making law in terms of improving accuracy, reducing human biases […]
