AI can be sexist and racist — it’s time to make it fair

Computer scientists must identify sources of bias, de-bias training data and develop artificial-intelligence algorithms that are robust to skews in the data.

The article raises the challenge of defining fairness when building databases. For example, should the data be representative of the world as it is, or of a world that many would aspire to? Should an AI tool be used to assess the likelihood that a candidate will assimilate well into an existing work environment? Who should decide which notions of fairness to prioritize? The authors posit that it is paramount for AI researchers to engage with social scientists and with experts in other areas such as law, and that students should examine the social context of algorithms as they learn how the algorithms work. The article also looks at data annotation as a technique for mitigating bias in databases.
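To make the idea of de-biasing training data concrete, here is a minimal sketch of one widely used approach: auditing how groups are represented in an annotated dataset and computing Kamiran-Calders-style reweighing factors so that a protected annotation is statistically independent of the training label. This is illustrative only; the article does not prescribe a specific algorithm, and the dataset, the `gender` column, and the `hired` label here are hypothetical.

```python
# Sketch of one common de-biasing step: audit group representation in
# annotated training data, then compute per-(group, label) reweighing
# factors (Kamiran & Calders-style). Illustrative, not the article's method.
from collections import Counter

def representation_audit(records, group_key):
    """Report each group's share of the dataset."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def reweighing_factors(records, group_key, label_key):
    """Weight each (group, label) pair by P(group) * P(label) / P(group, label)
    so that group membership is independent of the label after reweighing."""
    n = len(records)
    group_counts = Counter(r[group_key] for r in records)
    label_counts = Counter(r[label_key] for r in records)
    pair_counts = Counter((r[group_key], r[label_key]) for r in records)
    return {
        (g, y): (group_counts[g] * label_counts[y]) / (n * pair_counts[(g, y)])
        for (g, y) in pair_counts
    }

if __name__ == "__main__":
    # Hypothetical hiring data: a gender annotation plus a hired/not-hired label.
    data = [
        {"gender": "female", "hired": 1}, {"gender": "female", "hired": 0},
        {"gender": "female", "hired": 0}, {"gender": "male", "hired": 1},
        {"gender": "male", "hired": 1},   {"gender": "male", "hired": 0},
        {"gender": "male", "hired": 1},   {"gender": "male", "hired": 0},
    ]
    print(representation_audit(data, "gender"))        # share of each group
    print(reweighing_factors(data, "gender", "hired")) # per-sample weights
```

In a training pipeline, these factors would be attached to each example as a sample weight, which down-weights over-represented (group, label) combinations instead of deleting data. The annotation step the article highlights is what makes such an audit possible in the first place.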