Measuring racial discrimination in algorithms

A study on the discriminatory impact of algorithms in pre-trial bail decisions.
Climate Change and Social Inequality

UN Working Paper evidence base for conceptual framework of cyclic relationship between climate change and social inequality.
Evaluating neural toxic degeneration in language models

This paper highlights how language models used to automatically generate text produce toxic, offensive, and potentially harmful language. The authors describe various techniques that can be employed to avoid or limit this, but demonstrate that no current method is failsafe in preventing it entirely.
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification

According to this paper by researchers from MIT and Stanford University, three commercially released facial-analysis programs from major technology companies demonstrate both skin-type and gender biases. The three programs’ error rates in determining the gender of light-skinned men were never worse than 0.8 percent. For darker-skinned women, however, the error rates ballooned — to more than […]
When your resume is (not) turning you down: Modelling ethnic bias in resume screening

CVs are among the most frequently used screening tools worldwide, and CV screening is the first hurdle applicants typically face when they apply for a job. It seems particularly vulnerable to hiring discrimination. Despite decades of legislation on equality and HR professionals’ commitment to equal opportunities, ethnic minority applicants are still at […]
The danger of predictive algorithms in criminal justice

A study on the discriminatory impact of algorithms in pre-trial bail decisions.
Racial disparities in automated speech recognition

An analysis of five state-of-the-art automated speech recognition (ASR) systems — developed by Amazon, Apple, Google, IBM, and Microsoft — used to transcribe structured interviews conducted with white and black speakers. The researchers found that all five ASR systems exhibited substantial racial disparities, and highlight that these disparities may actively harm African American communities. For example, when speech recognition software is used by […]
‘Fake news’: Incorrect, but hard to correct. The role of cognitive ability on the impact of false information on social impressions

A study of how people understand corrections to misinformation, demonstrating that even once fake news has been debunked, it may still persist in place of the truth for some groups of people.
Responsible AI for Inclusive, Democratic Societies: A cross-disciplinary approach to detecting and countering abusive language online

How content moderation can actually censor minority voices by misconstruing the language minorities use as toxic.
Let’s Talk About Race: identity, chatbots, and AI

This paper asks how to develop chatbots that are better able to handle the complexities of race-talk, giving the example of Microsoft’s bot Zo, which used word filters to detect problematic content and redirect the conversation. The problem was that this is a very crude method, and which topics are deemed ‘unacceptable’ is […]