When AI is used to alter an image or video file, for example by cropping it automatically, any built-in bias can change that image in unfavourable ways. Automatic cropping may favour lighter-skinned people and reduce the visibility of darker-skinned people. This gives a false impression of reality through a medium we instinctively trust as real (seeing is believing).
Tagging media files with descriptive keywords and search terms also affects how they appear in search results, and can mislead if the tags used to retrieve an image are unfavourable or prejudiced. All of the above are forms of media file manipulation or processing.
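One way to make this kind of cropping bias concrete and testable is to measure, per demographic group, how often an annotated face survives the automatic crop. The sketch below is a minimal illustration under assumptions: the cropper is a stand-in (a plain centre crop, not any real product's model), the face annotations are synthetic, and the group labels are hypothetical. The point is the measurement pattern, not the cropper itself.

```python
def crop_box(img_w, img_h, crop_w, crop_h):
    """Stand-in cropper: always returns a centred crop box (x0, y0, x1, y1).
    A real test would call the actual cropping model here."""
    x0 = (img_w - crop_w) // 2
    y0 = (img_h - crop_h) // 2
    return (x0, y0, x0 + crop_w, y0 + crop_h)

def face_retained(face, box):
    """True if the annotated face box lies fully inside the crop box."""
    fx0, fy0, fx1, fy1 = face
    x0, y0, x1, y1 = box
    return fx0 >= x0 and fy0 >= y0 and fx1 <= x1 and fy1 <= y1

def retention_rate(samples, crop_w, crop_h):
    """Fraction of images whose annotated face survives the crop."""
    kept = sum(
        face_retained(face, crop_box(w, h, crop_w, crop_h))
        for (w, h, face) in samples
    )
    return kept / len(samples)

# Synthetic annotations: (image_width, image_height, face_box) per group.
# Group names are placeholders for whatever demographic labels the audit uses.
groups = {
    "group_a": [(400, 300, (150, 100, 250, 200)),
                (400, 300, (10, 10, 80, 80))],     # off-centre face
    "group_b": [(400, 300, (160, 110, 240, 190)),
                (400, 300, (170, 120, 230, 180))],
}

rates = {g: retention_rate(s, 200, 200) for g, s in groups.items()}
print(rates)  # a large gap between groups signals disparate cropping
```

Running this prints `{'group_a': 0.5, 'group_b': 1.0}`: the off-centre face in group_a is cropped out, so its retention rate is lower. With real model output and real annotations, a persistent gap like this across groups is exactly the kind of evidence the articles below describe.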
Another reminder that bias testing and diversity are needed in machine learning:

"Twitter's image-crop AI may favour white men, women's chests. Strange, it didn't show up during development, says social network." (Read More)
Digital imagery favours white people in framing and de-emphasises the visibility of non-white people.

"Google Cloud's AI recog code 'biased' against black people – and more from ML land. Including: Yes, that nightmare smart toilet that photographs you mid… er, process." (Read More)
Digital imagery tagging provides negative context for non-white people.

"Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial…" (Read More)
Commercial AI facial recognition systems tend to misclassify darker-skinned females more than any other group (lighter-skinned […]).

"Once again, racial biases show up in AI image databases, this time turning Barack Obama white. Researchers used a pre-trained off-the-shelf model from Nvidia." (Read More)
Digital imagery tagging provides negative context for non-white people.