Evaluating neural toxic degeneration in language models


This paper highlights how language models used to automatically generate text can produce toxic, offensive, and potentially harmful language. The authors describe various techniques that can be employed to avoid or limit this behaviour, but demonstrate that no current method prevents it entirely.

Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification


According to this paper by researchers from MIT and Stanford University, three commercially released facial-analysis programs from major technology companies demonstrate both skin-type and gender biases. The three programs’ error rates in determining the gender of light-skinned men were never worse than 0.8 percent. For darker-skinned women, however, the error rates ballooned — to more than […]

When your resume is (not) turning you down: Modelling ethnic bias in resume screening


CVs are one of the most frequently used screening tools worldwide. CV screening is also the first hurdle applicants typically face when they apply for a job, and it seems particularly vulnerable to hiring discrimination. Despite decades of legislation on equality and HR professionals’ commitment to equal opportunities, ethnic minority applicants are still at […]

Racial disparities in automated speech recognition

a graph showing results for 5 ASR systems when used by black and white Americans

An analysis of five state-of-the-art automated speech recognition (ASR) systems — developed by Amazon, Apple, Google, IBM, and Microsoft — used to transcribe structured interviews conducted with white and black speakers. The researchers found that all five ASR systems exhibited substantial racial disparities, and highlight that these disparities may actively harm African American communities. For example, when speech recognition software is used by […]

Let’s Talk About Race: identity, chatbots, and AI


This paper asks how to develop chatbots that are better able to handle the complexities of race-talk. It gives the example of the Microsoft bot Zo, which used word filters to detect problematic content and redirect the conversation. The problem was that this is a very crude method, and what topics are deemed ‘unacceptable’ is […]