AI can be sexist and racist — it’s time to make it fair

The article raises the challenge of defining fairness when building databases. For example, should the data be representative of the world as it is, or of a world that many would aspire to? Should an AI tool be used to assess the likelihood that a person will assimilate well into a work environment? Who should decide which notions of fairness to prioritize? The authors posit that it is paramount for AI researchers to engage with social scientists and experts in other areas, such as law, and for students to examine the social context as they learn how algorithms work. The article also looks at data annotation as a technique for mitigating bias in databases.
