When AI is used to change an image or video file, such as cropping it automatically, any built-in bias can alter that image unfavourably. Automatic cropping, for example, may favour lighter-skinned people and reduce the visibility of darker-skinned people. This gives a false impression of reality through an apparently reality-based medium (seeing is believing).
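To make the mechanism concrete, here is a deliberately naive sketch (not any real platform's model) of an auto-cropper that treats pixel brightness as "saliency". Because lighter pixels score higher, the crop window drifts toward lighter regions of the frame, which is analogous to the failure mode reported for learned saliency croppers. The function name and data are purely illustrative:

```python
# Hypothetical sketch: a naive auto-cropper that scores windows by total
# brightness. Lighter pixels score higher, so the chosen crop drifts
# toward lighter regions -- an apparently neutral heuristic encoding bias.

def brightness_crop(image, crop_w):
    """Return the start column of the crop_w-wide window with the
    highest total brightness. `image` is a list of rows of grayscale
    values in [0, 255]."""
    w = len(image[0])
    best_start, best_score = 0, float("-inf")
    for start in range(w - crop_w + 1):
        score = sum(row[c] for row in image for c in range(start, start + crop_w))
        if score > best_score:
            best_start, best_score = start, score
    return best_start

# Two subjects: darker pixel values (60) on the left, lighter (200) on
# the right. The crop keeps only the lighter region.
image = [[60, 60, 60, 60, 0, 0, 200, 200, 200, 200]] * 3
brightness_crop(image, 4)  # -> 6: window covers only the lighter subject
```

Real croppers use trained saliency models rather than raw brightness, but the lesson is the same: whatever signal the model has learned to score highly decides who stays in the frame.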
Tagging media files with descriptive keywords and search terms also affects how they are displayed in search results, and can mislead if the tags used to retrieve an image are unfavourable or prejudiced. All of the above are forms of media file manipulation or processing.
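A minimal sketch of tag-based retrieval shows why the annotator's word choices matter: results surface only images whose tags match the query, so two equivalent photos can have very different visibility depending on how they were labelled. The filenames and tags below are invented for illustration:

```python
# Hypothetical sketch of tag-based image search: an image is retrievable
# for a query only if someone tagged it with that term, so biased or
# inconsistent tagging directly controls who appears in results.

def search(index, query):
    """Return the ids of images whose tag set contains the query term."""
    return sorted(img for img, tags in index.items() if query in tags)

index = {
    "photo_a.jpg": {"ceo", "office", "professional"},
    "photo_b.jpg": {"office", "casual"},  # same setting, tagged differently
}
search(index, "professional")  # only photo_a.jpg surfaces
```

Neither photo is altered, yet the query "professional" renders one of them invisible purely through metadata.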
Another reminder that bias testing and diversity are needed in machine learning: "Twitter's image-crop AI may favour white men, women's chests. Strange, it didn't show up during development, says social network." The cropping algorithm favoured white people in framing and de-emphasised the visibility of non-white people.
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. "Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis." Commercial AI facial recognition systems tend to misclassify darker-skinned females more than any other group.