AI in immigration can lead to ‘serious human rights breaches’ (LVL 4)

This video refers to a report from the University of Toronto’s Citizen Lab that raises concerns that AI handling of private data for immigration purposes could breach human rights. Because AI tools are trained on datasets, before implementing tools that target marginalized populations we need to answer questions such as: Where does […]

The danger of predictive algorithms in criminal justice (LVL 4)

Screen capture of TEDx talk video

Dartmouth professor Dr. Hany Farid reverse engineers the inherent dangers and potential biases of recommendation engines built to mete out justice in today’s criminal justice system. In this video, he provides an example of how the number of prior crimes is used as a proxy for race.
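To make the proxy mechanism concrete, here is a small hypothetical simulation (not from Farid’s talk; the group labels, rates, and threshold are all invented). Two groups re-offend at the same rate, but one is policed twice as heavily and so accumulates more recorded arrests; a “race-blind” tool that thresholds on prior arrests then flags that group far more often.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: two groups with the SAME underlying
# re-offense rate, but group B is policed twice as heavily, so its
# members accumulate more *recorded* prior arrests.
group = rng.integers(0, 2, size=n)          # 0 = group A, 1 = group B
reoffends = rng.random(n) < 0.30            # identical base rate
policing = np.where(group == 1, 2.0, 1.0)   # over-policing factor
prior_arrests = rng.poisson(1.5 * policing)

# A "race-blind" risk score that looks only at prior arrests.
flagged = prior_arrests >= 3

for g, name in [(0, "group A"), (1, "group B")]:
    m = group == g
    print(f"{name}: flagged {flagged[m].mean():.0%}, "
          f"actual re-offense rate {reoffends[m].mean():.0%}")
```

Even though race never enters the model, the arrest count carries it in: group B is flagged roughly three times as often as group A despite an identical re-offense rate.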

How AI Could Reinforce Biases In The Criminal Justice System (LVL 4)

Screen capture of video on AI and predictive policing

Whilst some believe AI will make policing and sentencing more objective, others fear it will exacerbate bias. For example, past over-policing of minority communities has produced disproportionately high recorded crime counts in some areas; those records are fed to predictive algorithms, which in turn direct more policing to the same areas and reinforce the cycle.
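A minimal sketch of that feedback loop (hypothetical numbers, not taken from the video): two districts have identical true crime rates, but one starts with a larger recorded history, so an allocation rule that follows the records keeps patrolling, and therefore keeps recording, the same district.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two districts with the SAME true daily crime rate; district B (index 1)
# starts with more recorded incidents due to past over-policing.
true_rate = np.array([10.0, 10.0])
recorded = np.array([50.0, 80.0])   # biased historical records

for day in range(1, 1001):
    # Send today's patrol to the district with the larger record
    # (a crude stand-in for a predictive-policing allocation).
    target = np.argmax(recorded)
    # The patrol observes some of that district's true crime and adds
    # it to the record; the unpatrolled district adds nothing.
    recorded[target] += rng.poisson(true_rate[target])
    if day % 250 == 0:
        print(f"day {day}: recorded = {recorded}")
```

District B’s record grows without bound while district A’s stays frozen at its initial value, even though the underlying rates never differed; the biased history, not actual crime, drives the allocation.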

Apprise: Using AI to unmask situations of forced labour and human trafficking (LVL 4)

Labourers in Thailand

Forced labour exploiters continually tweak and refine their practices of exploitation in response to changing inspection policies and practices. The article showcases efforts to create AI tools that predict changing patterns of human exploitation. The authors acknowledge that whilst accurate forecasting tools could bring obvious benefits, there are cases where these […]

AI can be sexist and racist — it’s time to make it fair (LVL 4)

Half the face of a Western bride and half the face of an Asian bride, spliced together

The article raises the challenge of defining fairness when building databases. For example, should the data be representative of the world as it is, or of a world that many would aspire to? Should an AI tool be used to assess the likelihood that a person will assimilate well into the work environment? Who should decide […]
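The “world as it is versus a world many would aspire to” question can be restated as a sampling decision when the database is built. A hypothetical sketch (the two group labels and the 80/20 skew are invented):

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical candidate pool, skewed 80/20 between two groups,
# mirroring "the world as it is".
pool = ["group_A"] * 800 + ["group_B"] * 200

# Option 1: representative sampling keeps the real-world skew.
representative = random.sample(pool, 100)

# Option 2: "aspirational" sampling forces equal representation.
aspirational = (random.sample([p for p in pool if p == "group_A"], 50)
                + random.sample([p for p in pool if p == "group_B"], 50))

print("representative:", Counter(representative))
print("aspirational:  ", Counter(aspirational))
```

Neither option is neutral: the first bakes existing disparities into the model, while the second encodes a normative choice about what the data should look like, which is exactly why the article asks who should decide.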

Establishing an AI code of ethics will be harder than people think (LVL 4)

Timeline diagram of AI ethical scandals

The New York Police Department has built a massive database of more than 17,500 individuals believed to be involved in criminal gangs, estimated to be 95 to 99 percent African American, Latino, and Asian American, raising concerns about creating a class of people branded with a kind of criminal tag. The article contains a timeline […]

Racial disparities in automated speech recognition (LVL 4)

A graph showing results for five ASR systems when used by black and white Americans

Analysis of five state-of-the-art automated speech recognition (ASR) systems (developed by Amazon, Apple, Google, IBM, and Microsoft) used to transcribe structured interviews conducted with white and black speakers. The researchers found that all five ASR systems exhibited substantial racial disparities, and they highlight that these disparities may actively harm African American communities. For example, when speech recognition software is used by […]
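The study’s core metric is word error rate (WER): the word-level edit distance between a human reference transcript and the machine transcript, divided by the reference length (the paper reports average WERs of roughly 0.35 for black speakers versus 0.19 for white speakers). A self-contained sketch of the computation, using invented transcripts:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# Invented transcripts for illustration only.
reference = "he did not want to go to the store"
hypothesis = "he did not want go to this store"
print(f"WER = {wer(reference, hypothesis):.2f}")   # 2 errors / 9 words
```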