The article raises the challenge of defining fairness when building databases. For example, should the data be representative of the world as it is, or of a world that many would aspire to? Should an AI tool be used to assess the likelihood that a person will assimilate well into a work environment? Who should decide which notions of fairness to prioritize? The authors posit that it is paramount for AI researchers to engage with social scientists and experts in other areas such as law, and that students should examine the social context as they learn how algorithms work. The article also looks at data annotation as a technique to mitigate bias in databases.