AI tools are used to spot potentially harmful comments, posts, and other content and remove them from discussion boards and social media platforms. These tools often misconstrue language from different cultural contexts, effectively censoring people's voices.
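To make the failure mode concrete, here is a minimal sketch (all names and word lists are hypothetical, invented for illustration) of a naive keyword-based moderation filter. Because it ignores context, benign in-group or dialect usage gets flagged just as readily as genuinely hostile language:

```python
# Toy keyword-based "toxicity" filter (hypothetical, for illustration only).
# Word lists built around one speech community's usage can misfire on
# benign, culturally different phrasing.
BLOCKLIST = {"trash", "garbage"}  # hypothetical flagged terms

def is_flagged(comment: str) -> bool:
    """Flag a comment if it contains any blocklisted word, ignoring context."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return not BLOCKLIST.isdisjoint(words)

# A hostile comment is flagged...
print(is_flagged("You are trash"))                # True
# ...but so is a benign, self-deprecating one: the filter cannot tell
# the difference, which is how culturally specific language gets censored.
print(is_flagged("my jump shot is garbage lol"))  # True
print(is_flagged("Great game last night!"))       # False
```

Production systems use machine-learned classifiers rather than word lists, but the research collected below shows they can exhibit the same context-blindness at scale.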

Coded Bias: When the Bots are Racist – new documentary film

This documentary cuts across all areas of potential racial bias in AI in an engaging format. https://youtu.be/jZl55PsfZJQ

Responsible AI for Inclusive, Democratic Societies: A cross-disciplinary approach to detecting and countering abusive language online

A research paper about responsible AI. Toxic and abusive language threaten the integrity of public dialogue and democracy. In response, governments worldwide have enacted strong laws against abusive language that leads…

Let’s Talk About Race: identity, chatbots, and AI

A research paper about race and AI chatbots. Why is it so hard for AI chatbots to talk about race? By researching databases, natural language processing, and machine learning in conjunction…

Racial bias in hate speech and abusive language detection datasets

A paper on racial bias in hate speech datasets. Technologies for abusive language detection are being developed and applied with little consideration of their potential biases. We examine racial bias in five…

Risk of racial bias in hate speech detection

This research paper investigates how insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying…

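The kind of audit described in the papers above can be sketched as a per-group comparison of flag rates. A minimal example (all dialect labels and numbers here are made up for illustration, not taken from any of the studies):

```python
# Toy disparity audit (hypothetical data): compare how often a classifier
# flags benign posts from two dialect groups.
from collections import defaultdict

# (dialect_group, classifier_flagged) pairs for benign posts -- invented data
results = [
    ("AAE", True), ("AAE", True), ("AAE", False), ("AAE", True),
    ("SAE", False), ("SAE", True), ("SAE", False), ("SAE", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged count, total count]
for group, flagged in results:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

rates = {g: flagged / total for g, (flagged, total) in counts.items()}
disparity = rates["AAE"] / rates["SAE"]
print(rates)      # {'AAE': 0.75, 'SAE': 0.25}
print(disparity)  # 3.0 on this toy data
```

A flag-rate ratio well above 1.0 on benign content is the signal these audits look for; the real studies control for annotation bias and dialect identification, which this sketch omits.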
The algorithms that detect hate speech online are biased against black people

A new study shows that leading AI models are 1.5 times more likely to flag tweets written by African…

Abolish the #TechToPrisonPipeline

Crime-prediction technology reproduces injustices and causes real harm. This open letter highlights why crime-predicting technologies tend to be inherently racist.

AI researchers say scientific publishers help perpetuate racist algorithms

An open letter from a growing coalition of AI researchers calls out scientific publisher Springer Nature for a conference…

