AI tools are used to spot potentially harmful comments, posts, and other content and remove it from discussion boards and social media platforms. These tools often misconstrue language from different cultural contexts, effectively censoring people's voices.
The algorithms that detect hate speech online are biased against black people
A new study shows that leading AI models are 1.5 times more likely to flag tweets written by African Americans as "offensive" than comparable tweets by other users.
Abolish the #TechToPrisonPipeline
Crime-prediction technology reproduces injustices and causes real harm. This open letter lays out why crime-prediction technologies tend to be inherently racist.
AI researchers say scientific publishers help perpetuate racist algorithms
The news: An open letter from a growing coalition of AI researchers calls out scientific publisher Springer Nature over a conference paper it reportedly planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence, titled "A Deep Neural…"