Digital and Social Media

Many online platforms, such as social media, news and entertainment websites, use AI algorithms to automatically edit, curate, promote and even create content. Many of these algorithms are trained on data that may be biased in some way: it may not be diverse enough to represent the true cultural and ethnic diversity of society, or it may have been collected or labelled, consciously or unconsciously, on the basis of stereotyped or politically and socially biased ideas.

The AI algorithms then reflect those biases when performing their task, amplifying social prejudice across the digital and social media platforms that people use every day. When a biased AI tool chooses which articles appear in people's newsfeeds, or automatically crops an image to fit a certain format, people see only a biased view of the world.
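As a toy illustration of this dynamic (not taken from any of the resources below, and with invented numbers): a "promote what was popular before" ranking rule trained on engagement data dominated by one viewpoint will push that viewpoint to the top of the feed, even when the new candidate pool is perfectly balanced.

```python
# Hypothetical training set: 90 past articles from viewpoint A, 10 from
# viewpoint B, each with one unit of engagement.
training = [("A", 1)] * 90 + [("B", 1)] * 10  # (viewpoint, engagement)

# Naive "model": score each viewpoint by its share of past engagement.
total = sum(engagement for _, engagement in training)
score = {}
for viewpoint, engagement in training:
    score[viewpoint] = score.get(viewpoint, 0) + engagement / total

# A new, perfectly balanced candidate pool: 10 articles from each viewpoint.
candidates = ["A"] * 10 + ["B"] * 10
ranked = sorted(candidates, key=lambda v: score[v], reverse=True)

# Yet the entire top of the feed is viewpoint A: the skew in the training
# data is reproduced, and amplified, by the ranking.
print(ranked[:10])  # ['A', 'A', ..., 'A']
```

The point of the sketch is that no line of the "model" mentions viewpoints at all; the bias enters entirely through the historical data it was fitted to.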

Evaluating neural toxic degeneration in language models

RealToxicityPrompts: pre-trained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language, which hinders their safe deployment. We investigate the extent to which pre-trained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration. This paper highlights how […]

Let’s Talk About Race: identity, chatbots, and AI

A research paper about race and AI chatbots. Why is it so hard for AI chatbots to talk about race? By examining databases, natural language processing, and machine learning in conjunction with critical, intersectional theories, we investigate the technical and theoretical constructs underpinning the problem space of race and chatbots. This paper questions how to […]

Racial bias in hate speech and abusive language detection datasets

A paper on racial bias in hate speech detection. Technologies for abusive language detection are being developed and applied with little consideration of their potential biases. We examine racial bias in five different sets of Twitter data annotated for hate speech and abusive language. Tweets written in African-American English are far more likely to be automatically […]

Risk of racial bias in hate speech detection

This research paper investigates how insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations.
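The kind of audit these two studies perform can be sketched in a few lines. The records below are entirely made up for illustration (the real studies use annotated Twitter corpora): the idea is simply to compute the false-positive rate of an abusive-language classifier separately for each dialect group.

```python
# Hypothetical audit records: (dialect group, true label, model prediction),
# where 1 = flagged as abusive. AAE = African-American English,
# SAE = Standard American English; all values are invented.
records = [
    ("AAE", 0, 1), ("AAE", 0, 1), ("AAE", 0, 0), ("AAE", 1, 1),
    ("SAE", 0, 0), ("SAE", 0, 0), ("SAE", 0, 1), ("SAE", 1, 1),
]

def false_positive_rate(group):
    # Among truly non-abusive tweets from this group, what share was flagged?
    negatives = [pred for g, label, pred in records if g == group and label == 0]
    return sum(negatives) / len(negatives)

for group in ("AAE", "SAE"):
    print(group, false_positive_rate(group))
```

A higher false-positive rate for one dialect means innocuous tweets in that dialect are disproportionately flagged, which is exactly the disparity the papers report.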

Abolish the #TechToPrisonPipeline

Crime-prediction technology reproduces injustices and causes real harm. This open letter highlights why crime-predicting technologies tend to be inherently racist.

Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes

The enormous financial success of online advertising platforms is partially due to the precise targeting features they offer. Although researchers and journalists have found many ways that advertisers can target (or exclude) particular groups of users seeing their ads, comparatively little attention […] Ad delivery is controlled by the advertising platform (e.g. Facebook) and researchers […]

Facial recognition

Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial […] Commercial AI facial recognition systems tend to misclassify darker-skinned females more than any other group (lighter-skinned […]
