AI can be sexist and racist — it’s time to make it fair

The article raises the challenge of defining fairness when building datasets. For example, should the data be representative of the world as it is, or of a world that many would aspire to? Should an AI tool be used to assess the likelihood that a person will assimilate well into a work environment? Who should decide which notions of fairness to prioritize? The authors posit that it is paramount for AI researchers to engage with social scientists and experts in other areas such as law, and for students to examine the social context as they learn how algorithms work. The article also looks at annotating data as a technique for mitigating bias in datasets.
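As a minimal sketch of what such annotation might look like in practice, the snippet below attaches structured metadata to a dataset so that documented gaps and skews are visible before the data is used to train a model. The class name, fields, and example values are hypothetical illustrations, not taken from the article.

```python
from dataclasses import dataclass, field


@dataclass
class DatasetAnnotation:
    """Metadata attached to a dataset so downstream users can judge its suitability."""
    name: str
    collection_method: str  # how the raw data was gathered
    population_covered: str  # who the data actually represents
    known_skews: list[str] = field(default_factory=list)  # documented gaps or imbalances


# Hypothetical example: flagging a demographic skew before training a hiring model.
hiring_data = DatasetAnnotation(
    name="resume_screening_v1",
    collection_method="historical job applications, 2010-2018",
    population_covered="applicants to one US tech firm",
    known_skews=["male applicants overrepresented roughly 3:1"],
)

# Surface the documented skews so they are considered, not silently inherited.
for skew in hiring_data.known_skews:
    print(f"WARNING: {hiring_data.name} has a documented skew: {skew}")
```

The design choice here is that the annotation travels with the data itself, so anyone reusing the dataset inherits the documentation of its limitations along with the records.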
