Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification

According to this paper by researchers from MIT and Stanford University, three commercially released facial-analysis programs from major technology companies demonstrate both skin-type and gender biases. The three programs' error rates in determining the gender of light-skinned men were never worse than 0.8 percent. For darker-skinned women, however, the error rates ballooned to more than 20 percent in one case and more than 34 percent in the other two. The study's findings raise questions about how neural networks, which learn to perform computational tasks by looking for patterns in huge data sets, are trained and evaluated. The central issue is the lack of diversity, particularly ethnic diversity, in the data sets used to train these algorithms.
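The core of the study's methodology is disaggregated evaluation: computing error rates separately for each intersectional subgroup (skin type × gender) rather than reporting a single aggregate accuracy. A minimal sketch of that idea, using entirely hypothetical toy data (not the study's actual benchmark or results):

```python
# Sketch of disaggregated (intersectional) error-rate evaluation.
# The records below are illustrative toy values, not real study data.
from collections import defaultdict

# Each record: (skin_type, true_gender, predicted_gender)
predictions = [
    ("lighter", "male",   "male"),
    ("lighter", "male",   "male"),
    ("lighter", "female", "female"),
    ("darker",  "female", "male"),    # misclassification
    ("darker",  "female", "female"),
    ("darker",  "male",   "male"),
]

def subgroup_error_rates(records):
    """Return the error rate for each (skin_type, gender) subgroup."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for skin, gender, predicted in records:
        key = (skin, gender)
        totals[key] += 1
        if predicted != gender:
            errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}

for (skin, gender), rate in sorted(subgroup_error_rates(predictions).items()):
    print(f"{skin:>7} {gender:>6}: {rate:.0%}")
```

An aggregate accuracy over all six toy records would look respectable, yet the subgroup breakdown immediately exposes where the errors concentrate, which is exactly the disparity pattern the study reports.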
