AI Making Legal Judgements

Algorithms are increasingly the arbiters of determinations about individuals (e.g. government benefits, granting licenses, pre-sentencing and sentencing, granting parole). Whilst AI tools may be used to mitigate human biases and to make trials faster and cheaper, there is evidence that they can reinforce bias by using characteristics such as postcode or socioeconomic level as a proxy for ethnicity.
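
As a minimal sketch of this proxy effect (synthetic data; the feature names and numbers are hypothetical), the model below is never shown group membership, yet reproduces a historically biased outcome because postcode is correlated with it:

```python
# Minimal sketch of proxy discrimination on synthetic data. The protected
# attribute ("group") is withheld from the model, but postcode is strongly
# correlated with it, so the model still scores the two groups differently.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Group membership (0/1) is never given to the model...
group = rng.integers(0, 2, n)
# ...but postcode tracks it closely (residential segregation).
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historically biased labels: at the same underlying behaviour,
# group 1 was flagged "high risk" more often.
behaviour = rng.random(n)
label = (behaviour + 0.3 * group > 0.8).astype(int)

# Train on postcode and behaviour only -- no protected attribute.
X = np.column_stack([postcode, behaviour])
scores = LogisticRegression().fit(X, label).predict_proba(X)[:, 1]

print(f"mean risk score, group 0: {scores[group == 0].mean():.2f}")
print(f"mean risk score, group 1: {scores[group == 1].mean():.2f}")
```

Dropping the protected attribute does not remove the bias; the postcode coefficient simply absorbs it.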

The use of commercial AI tools such as speech recognition – which have been shown to be less reliable for non-white speakers – can actively harm some groups when criminal justice agencies use them to transcribe courtroom proceedings.
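
A minimal sketch (with entirely invented transcripts) of how such disparities are usually quantified: compute word error rate (WER) separately per speaker group on matched sentences and compare. This uses the open-source `jiwer` library.

```python
# Hypothetical per-group WER comparison; the group labels, sentences and
# ASR outputs below are invented for illustration only.
from collections import defaultdict

import jiwer  # pip install jiwer

# (speaker_group, reference_transcript, ASR_output)
samples = [
    ("group_1", "the witness arrived at nine", "the witness arrived at nine"),
    ("group_1", "counsel objected to the question", "counsel objected to the question"),
    ("group_2", "the witness arrived at nine", "the witness derived a nine"),
    ("group_2", "counsel objected to the question", "council subjected the question"),
]

refs, hyps = defaultdict(list), defaultdict(list)
for grp, ref, hyp in samples:
    refs[grp].append(ref)
    hyps[grp].append(hyp)

# A persistent WER gap on matched sentences is the kind of evidence
# reported in studies of commercial speech recognisers.
for grp in refs:
    print(f"{grp}: WER = {jiwer.wer(refs[grp], hyps[grp]):.2f}")
```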

The danger of predictive algorithms in criminal justice

Dartmouth professor Dr. Hany Farid reverse engineers the inherent dangers and potential biases of recommendation engines built to mete out justice in today’s criminal justice system. In this video, he provides an example of how the number of crimes is used as a proxy for race.

How AI Could Reinforce Biases In The Criminal Justice System

Whilst some believe AI will make policing and sentencing more objective, others fear it will exacerbate bias. For example, past over-policing of minority communities has generated a disproportionate number of recorded crimes in some areas; that skewed data is fed to predictive algorithms, which in turn direct more policing to the same areas, reinforcing the cycle.
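
A minimal sketch of that feedback loop (all numbers are hypothetical): two areas have identical underlying crime, but area B starts with more recorded crime because of historical over-policing. Patrols are allocated in proportion to last year's records, and recorded crime is proportional to patrol presence, so the skew never self-corrects:

```python
# Deterministic toy model of the policing feedback loop. Both areas have the
# SAME true crime; only the historical records differ, yet the allocation
# locks in the original 60/40 skew indefinitely.
TRUE_CRIME = 100.0                  # identical underlying crime in both areas
DETECTION_RATE = 0.02               # recorded crimes per patrol, per unit crime
TOTAL_PATROLS = 10
recorded = {"A": 40.0, "B": 60.0}   # historically skewed crime records

for year in range(1, 6):
    total = sum(recorded.values())
    # "Predictive" allocation: patrols follow last year's records.
    patrols = {a: TOTAL_PATROLS * r / total for a, r in recorded.items()}
    # More patrols in an area means more crime observed and recorded there.
    recorded = {a: TRUE_CRIME * DETECTION_RATE * p for a, p in patrols.items()}
    print(f"year {year}: patrols {patrols['A']:.1f} vs {patrols['B']:.1f}, "
          f"recorded {recorded['A']:.1f} vs {recorded['B']:.1f}")
```

Even though true crime is equal in both areas, area B receives 50% more patrols every year, and the extra recorded crime those patrols generate "confirms" the allocation.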
