AI in immigration can lead to ‘serious human rights breaches’ (LVL 4)

This video refers to a report from the University of Toronto’s Citizen Lab raising concerns that the handling of private data by AI for immigration purposes could breach human rights. Because AI tools are trained on datasets, before implementing tools that target marginalized populations we need to answer questions such as: Where does […]
The danger of predictive algorithms in criminal justice (LVL 4)

Dartmouth professor Dr. Hany Farid reverse engineers the inherent dangers and potential biases of recommendation engines built to mete out justice in today’s criminal justice system. In this video, he provides an example of how the number of crimes is used as a proxy for race.
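Farid’s point about proxies can be made concrete with a toy model. The sketch below is our own illustration, not Farid’s analysis: all data is synthetic, and the point is only that a risk score trained without a race feature can still split along racial lines when a correlated feature (recorded arrests, inflated by biased enforcement) stands in for race.

```python
# Hypothetical illustration with synthetic data: race is never given to
# the model, but arrest counts act as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (synthetic)
true_risk = rng.random(n) < 0.3     # identical behaviour in both groups

# Biased enforcement: group B accumulates more recorded arrests for the
# same underlying behaviour.
arrests = rng.poisson(lam=1 + 2 * race + true_risk)

X = arrests.reshape(-1, 1)          # race itself is NOT a feature
scores = LogisticRegression().fit(X, true_risk).predict_proba(X)[:, 1]

print(f"mean risk score, group A: {scores[race == 0].mean():.3f}")
print(f"mean risk score, group B: {scores[race == 1].mean():.3f}")
# The gap between the two means comes entirely from arrests standing in
# for race.
```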
How AI Could Reinforce Biases In The Criminal Justice System (LVL 4)

Whilst some believe AI will increase police and sentencing objectivity, others fear it will exacerbate bias. For example, past over-policing of minority communities has generated a disproportionate number of recorded crimes in some areas; these records are fed into algorithms, which in turn reinforce the over-policing.
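The loop described above can be shown with a toy simulation (our own sketch with invented numbers; the article contains no code): if a naive hotspot algorithm always sends patrols to the district with the most recorded crime, and crime is only recorded where patrols are present, an initial disparity locks in even when true crime rates are identical.

```python
# Toy feedback-loop simulation: two districts with equal true crime
# rates, but district 0 starts with more recorded crime due to
# historical over-policing.
import random

random.seed(0)
true_rate = [0.5, 0.5]      # identical true crime rates
recorded = [60, 40]         # historical over-policing of district 0

for day in range(1000):
    # Dispatch today's patrol to the district with the most recorded
    # crime, as a naive hotspot algorithm would.
    target = 0 if recorded[0] >= recorded[1] else 1
    if random.random() < true_rate[target]:
        recorded[target] += 1   # crime is only recorded where we patrol

print(recorded)   # district 0 absorbs all new records; district 1 stays at 40
```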
Apprise: Using AI to unmask situations of forced labour and human trafficking (LVL 4)

Forced labour exploiters continually tweak and refine their practices of exploitation in response to changing inspection policies and practices. The article showcases efforts to create AI tools that predict changing patterns of human exploitation. The authors acknowledge that whilst accurate forecasting tools could bring obvious benefits, there are cases where these […]
AI can be sexist and racist — it’s time to make it fair (LVL 4)

The article raises the challenge of defining fairness when building databases. For example, should the data be representative of the world as it is, or of a world that many would aspire to? Should an AI tool be used to assess the likelihood that a person will assimilate well into the work environment? Who should decide […]
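The "as it is versus as we aspire" question has a direct engineering counterpart: the same training data can keep its empirical distribution, or be reweighted toward an aspirational target. A minimal sketch, with invented labels and proportions:

```python
# Reweighting a dataset so a feature's distribution matches an
# aspirational 50/50 target instead of the empirical 80/20 split.
from collections import Counter

samples = ["m"] * 80 + ["f"] * 20     # empirical split: 80/20
target = {"m": 0.5, "f": 0.5}         # aspirational split

counts = Counter(samples)
weights = {g: target[g] * len(samples) / counts[g] for g in counts}
print(weights)   # {'m': 0.625, 'f': 2.5} -> per-sample training weights
```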
Establishing an AI code of ethics will be harder than people think (LVL 4)

The New York Police Department has built a massive database of more than 17,500 individuals believed to be involved in criminal gangs, estimated to be about 95–99% African American, Latino, and Asian American, raising concerns about creating a class of people branded with a kind of criminal tag. The article contains a timeline […]
We tested Europe’s new lie detector for travellers – and immediately triggered a false positive (LVL 4)

iBorderCtrl’s lie-detection system was developed in England by researchers at Manchester Metropolitan University. It claims that its virtual cop can detect deception by picking up on the micro-gestures a person makes while answering questions. The university produced a study of 32 people that showed 75% accuracy, but the participant group was unbalanced in terms of […]
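The sample size alone warrants caution. A quick back-of-the-envelope check (our own calculation, not the study’s) shows how wide the uncertainty around a 75% accuracy estimate is with only 32 participants, before the unbalanced group is even considered:

```python
# 95% confidence interval for 75% accuracy on n = 32, using the normal
# approximation to the binomial.
import math

n, accuracy = 32, 0.75
se = math.sqrt(accuracy * (1 - accuracy) / n)
lo, hi = accuracy - 1.96 * se, accuracy + 1.96 * se
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")   # roughly [0.60, 0.90]
```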
The danger of predictive algorithms in criminal justice (LVL 4)

A study on the discriminatory impact of algorithms in pre-trial bail decisions.
Racial disparities in automated speech recognition (LVL 4)

An analysis of five state-of-the-art automated speech recognition (ASR) systems—developed by Amazon, Apple, Google, IBM, and Microsoft—used to transcribe structured interviews conducted with white and black speakers. The researchers found that all five ASR systems exhibited substantial racial disparities, and highlight that these disparities may actively harm African American communities. For example, when speech recognition software is used by […]
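The disparities in the study are measured as word error rate (WER): the word-level edit distance between a human reference transcript and the ASR output, divided by the reference length. A minimal sketch of the metric (the example sentence is invented, not taken from the study’s interviews):

```python
# Word error rate via Levenshtein distance over words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

# One dropped word out of six reference words: WER ≈ 0.167.
print(wer("he was going to the store", "he was going to store"))
```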
Adjudicating by Algorithm, Regulating by Robot (LVL 4)

This article highlights the benefits of artificial intelligence in adjudication and lawmaking: improving accuracy, reducing human biases, and enhancing governmental efficiency.