A study on the discriminatory impact of algorithms in pre-trial bail decisions.
An analysis of five state-of-the-art automated speech recognition (ASR) systems (developed by Amazon, Apple, Google, IBM, and Microsoft) used to transcribe structured interviews conducted with white and Black speakers. The researchers found that all five ASR systems exhibited substantial racial disparities, and highlight that these disparities may actively harm African American communities. For example, when speech recognition software is used by […]
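The disparity in that study is measured with the word error rate (WER) of each system's transcripts, compared across speaker groups. Below is a minimal sketch of that kind of audit, not the study's actual code: the transcripts are made up for illustration, and the WER function is a standard word-level edit distance.

```python
# Sketch of an ASR disparity audit: compute word error rate (WER) per
# transcript, then compare the average WER across speaker groups.
# The (reference, hypothesis) pairs below are hypothetical examples.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

def mean_wer(pairs):
    """Average WER over a list of (reference, ASR output) pairs."""
    return sum(wer(r, h) for r, h in pairs) / len(pairs)

# Hypothetical transcripts for two speaker groups.
group_a = [("the weather is nice today", "the weather is nice today")]
group_b = [("the weather is nice today", "the whether is nice to day")]

print(f"group A mean WER: {mean_wer(group_a):.2f}")  # 0.00
print(f"group B mean WER: {mean_wer(group_b):.2f}")  # 0.60
```

A large gap between the group averages, as the study found between white and Black speakers, is the disparity being reported.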
A report exploring the racial bias challenges of the police’s use of live facial recognition technology in the United Kingdom. It focuses on the implications of the technology for people of colour and Muslims – two heavily surveilled groups in society.
A good summary of how predictive analytics, as used in AI, differs from traditional statistical methods, both in methodology and in impact.
This paper asks how to develop chatbots that are better able to handle the complexities of race-talk. It gives the example of Microsoft's bot Zo, which used word filters to detect problematic content and redirect the conversation. The problem is that this is a very crude method, and which topics are deemed 'unacceptable' is […]
A study finding that tweets written in African-American English are far more likely to be automatically classified as abusive or as containing hate speech.
An open letter setting out why crime-predicting technologies tend to be inherently racist.
Ad delivery is controlled by the advertising platform (e.g. Facebook), and researchers have found evidence that the delivery of housing and job ads skews along racial lines: job ads for janitors and taxi drivers were shown to proportionally more minority users than white users.
Commercial AI facial recognition systems tend to misclassify darker-skinned women more than any other group, with lighter-skinned men classified most accurately.
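Findings like this come from disaggregated evaluation: instead of reporting one overall accuracy figure, accuracy is computed separately for each intersectional subgroup (skin tone crossed with gender). A minimal sketch with hypothetical records, not the audit's own data:

```python
# Disaggregated accuracy: group predictions by (skin tone, gender) and
# report accuracy per subgroup. The records below are made up to
# illustrate the method, not real audit results.
from collections import defaultdict

# Each record: (skin_tone, gender, predicted_label, true_label).
records = [
    ("lighter", "male",   "male",   "male"),
    ("lighter", "male",   "male",   "male"),
    ("lighter", "female", "female", "female"),
    ("lighter", "female", "male",   "female"),
    ("darker",  "male",   "male",   "male"),
    ("darker",  "male",   "female", "male"),
    ("darker",  "female", "male",   "female"),
    ("darker",  "female", "male",   "female"),
]

def accuracy_by_group(records):
    """Return {(skin_tone, gender): accuracy} over the given records."""
    totals, correct = defaultdict(int), defaultdict(int)
    for skin, gender, pred, true in records:
        key = (skin, gender)
        totals[key] += 1
        correct[key] += pred == true
    return {key: correct[key] / totals[key] for key in totals}

for group, acc in sorted(accuracy_by_group(records).items()):
    print(group, f"{acc:.0%}")
```

An aggregate accuracy number would hide exactly the pattern described above; breaking it out per subgroup is what exposes the gap between darker-skinned women and lighter-skinned men.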