Many online platforms, including social media, news and entertainment websites, use AI algorithms to automatically edit, curate, promote and even create content. Many of these algorithms are trained on data that may be biased in some way: the data might not be diverse enough to represent the true cultural and ethnic diversity of society, or the system might be consciously or unconsciously designed around stereotyped or politically and socially biased assumptions.
The AI algorithms then reflect those biases when performing their task, amplifying social prejudice across the digital and social media platforms that people use every day. When a biased AI tool is choosing which articles appear in people's newsfeeds, or automatically editing an image to fit a certain format, people end up seeing only a biased view of the world.
AI tools are used to spot potentially harmful comments, posts and content and remove them from discussion boards and social media platforms. These tools often misconstrue language that is culturally different from their training data, effectively censoring people's voices.
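The failure mode can be sketched with a deliberately simple filter. The blocklist and example comments below are hypothetical, and real moderation systems use trained classifiers rather than word lists, but the effect is similar when a system has not learned a community's idioms: language that is harmless in context gets flagged alongside genuine abuse.

```python
# Hypothetical blocklist for illustration only.
FLAGGED_TERMS = {"kill", "trash"}

def is_flagged(comment: str) -> bool:
    """Flag a comment if it contains any blocklisted term as a substring."""
    text = comment.lower()
    return any(term in text for term in FLAGGED_TERMS)

# A literal threat is flagged, as intended.
print(is_flagged("I will kill you"))                  # True
# Idiomatic praise ("killing it" = performing brilliantly) is also flagged,
# because the filter has no notion of how the phrase is used culturally.
print(is_flagged("You are killing it, great set!"))   # True
```

A system trained mostly on one dialect makes the same kind of mistake, just with a learned decision boundary instead of a word list.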
Algorithms are used by social media platforms like Facebook to select what to make most visible in people's news feeds, reinforcing what they already consume based on their profile and interests. This can be gamed to deliberately show misleading information that serves various agendas.
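The reinforcement loop can be illustrated with a toy ranking function. The data and scoring rule here are hypothetical (real feeds use learned engagement models), but the feedback dynamic is the same: posts matching topics the user already clicked rise, everything else sinks.

```python
from collections import Counter

def rank_feed(candidate_posts, click_history):
    """Rank posts by how often the user has already clicked their topic."""
    interest = Counter(click_history)  # topic -> number of past clicks
    return sorted(candidate_posts,
                  key=lambda post: interest[post["topic"]],
                  reverse=True)

history = ["politics", "politics", "sport"]   # hypothetical click history
posts = [{"id": 1, "topic": "science"},
         {"id": 2, "topic": "politics"},
         {"id": 3, "topic": "sport"}]

for post in rank_feed(posts, history):
    print(post["id"], post["topic"])
# Politics ranks first and science last: each session narrows the feed
# further toward what the user already consumes.
```

Because clicking a highly ranked post adds to the history that drives the next ranking, the loop amplifies itself over time.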
AI tools are increasingly trained to generate written content, such as articles or image descriptions. The generated text reflects the language the AI was trained on, and can mirror biases in the training data and text sources.
When AI is used to alter an image or video file, any built-in bias can adapt that image unfavourably: favouring lighter-skinned people and reducing the visibility of darker-skinned people. This gives a false impression of reality through an apparently reality-based medium (seeing is believing).
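One way this bias can arise is sketched below. Real "smart crop" tools use learned saliency models, but if what a model learns correlates with pixel brightness, the outcome resembles this deliberately crude version, which simply keeps the brightest window of a grayscale image. The image values are made up for illustration.

```python
def biased_crop(image, width):
    """Keep the `width` adjacent columns with the highest total brightness.

    `image` is a 2D list of grayscale values (0 = black, 255 = white).
    """
    n_cols = len(image[0])

    def window_brightness(start):
        return sum(row[c] for row in image for c in range(start, start + width))

    best = max(range(n_cols - width + 1), key=window_brightness)
    return [row[best:best + width] for row in image]

# Two regions: darker pixels on the left, lighter pixels on the right.
img = [[40, 40, 200, 200],
       [40, 40, 200, 200]]
print(biased_crop(img, 2))  # [[200, 200], [200, 200]] -- the lighter side survives
```

Whatever falls in the darker region is cropped out entirely, which is how a brightness-correlated model can systematically remove darker-skinned people from automatically framed images.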