Responsible AI for Inclusive, Democratic Societies: A cross-disciplinary approach to detecting and countering abusive language online

A research paper on responsible AI

Toxic and abusive language threatens the integrity of public dialogue and democracy. In response, governments worldwide have enacted strong laws against abusive language that incites hatred, violence and criminal offences against particular groups. Yet the responsible (i.e. effective, fair and unbiased) moderation of abusive language poses significant challenges. Our project addresses the difficult and urgent problem of detecting and countering abusive language through a novel approach to AI-enhanced moderation.

How content moderation can end up censoring voices by misconstruing language used by minorities as toxic.