Let’s Talk About Race: Identity, Chatbots, and AI

This paper examines how to develop chatbots that can better handle the complexities of race-talk. It gives the example of Microsoft’s bot Zo, which used word filters to detect problematic content and redirect the conversation. The problem is that this is a very crude method, and which topics are deemed ‘unacceptable’ is highly value-laden: Zo would refuse any conversation relating to Islam, for example, yet would happily discuss Christianity. The paper also highlights the importance of scrutinising the data used to train chatbots, since large datasets of user-generated content from social media often contain deeply problematic material.
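To see why a blocklist approach is so crude, here is a minimal sketch (not Zo’s actual code; the term list and reply text are purely illustrative) of a word-filter chatbot: any message containing a blocked term triggers a canned deflection, regardless of context or intent.

```python
import re

# Illustrative blocklist; the real filter's contents were opaque,
# but the paper notes "Islam" was effectively off-limits.
BLOCKED_TERMS = {"islam"}

REDIRECT = "I'd rather talk about something else."


def respond(message: str) -> str:
    """Deflect if any blocked term appears; otherwise engage."""
    tokens = re.findall(r"[a-z']+", message.lower())
    if any(term in tokens for term in BLOCKED_TERMS):
        return REDIRECT
    return "Tell me more!"


# The crudeness in action: an identical, benign question is
# deflected for one religion but answered for another.
print(respond("What do you know about Islam?"))
print(respond("What do you know about Christianity?"))
```

Because the filter matches bare words rather than meaning, it cannot distinguish a hostile message from a sincere question, which is exactly the asymmetry the paper criticises.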
