Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification – MIT Media Lab

Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to e…

According to this paper by researchers from MIT and Stanford University, three commercially released facial-analysis programs from major technology companies demonstrate both skin-type and gender biases. The three programs' error rates in determining the gender of light-skinned men were never worse than 0.8 percent. For darker-skinned women, however, the error rates ballooned: to more than 20 percent in one case and more than 34 percent in the other two. The study's findings raise questions about how neural networks, which learn to perform computational tasks by looking for patterns in huge data sets, are trained and evaluated. The main issue is a lack of diversity (notably ethnic diversity) in the data sets used to train the algorithms.
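The disparities above come from disaggregating a classifier's error rate by intersectional subgroup (skin type crossed with gender) rather than reporting a single aggregate number. A minimal sketch of that kind of evaluation, using entirely hypothetical records rather than the study's data or APIs, might look like this:

```python
# Sketch (hypothetical data): disaggregating error rates by intersectional
# subgroup, in the spirit of the study's skin-type x gender breakdown.
from collections import defaultdict

def error_rates_by_subgroup(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples."""
    totals = defaultdict(int)   # examples seen per subgroup
    errors = defaultdict(int)   # misclassifications per subgroup
    for subgroup, truth, pred in records:
        totals[subgroup] += 1
        if pred != truth:
            errors[subgroup] += 1
    # Per-subgroup error rate; an aggregate rate would hide the gap.
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation records, not the paper's benchmark
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
]
print(error_rates_by_subgroup(records))
# → {'lighter_male': 0.0, 'darker_female': 0.5}
```

An aggregate error rate over these four records would be 25 percent, masking the fact that one subgroup is classified perfectly while another fails half the time, which is precisely the effect the study measures.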