Dartmouth professor Dr. Hany Farid reverse engineers the inherent dangers and potential biases of recommendation engines built to mete out justice in today's criminal justice system. In this video, he provides an example of how the number of crimes is used as a proxy for race.
Whilst some believe AI will increase police and sentencing objectivity, others fear it will exacerbate bias. For example, the over-policing of minority communities in the past has generated a disproportionate number of recorded crimes in some areas; this data is fed to algorithms, which in turn reinforce over-policing.
A study on the discriminatory impact of algorithms in pre-trial bail decisions.
Analysis of five state-of-the-art automated speech recognition (ASR) systems—developed by Amazon, Apple, Google, IBM, and Microsoft—used to transcribe structured interviews conducted with white and black speakers. Researchers found that all five ASR systems exhibited substantial racial disparities and highlighted that these disparities may actively harm African American communities. For example, when speech recognition software is used by […]
This article highlights the benefits of artificial intelligence in adjudication and lawmaking in terms of improving accuracy, reducing human biases and enhancing governmental efficiency.
The current use of AI in reviewing documents, predicting the outcomes of cases and predicting success rates for lawyers. This article highlights concerns about fallibility and the need for human oversight.
An article on how AI can be used in adjudication and law in general. It highlights that although AI has vast potential, it has not yet seen broad adoption.