Why is it so hard for AI chatbots to talk about race? Drawing on research in databases, natural language processing, and machine learning, in conjunction with critical, intersectional theories, we investigate the technical and theoretical constructs underpinning the problem space of race and chatbots.
This paper asks how to develop chatbots that are better able to handle the complexities of race-talk, using the example of Microsoft's bot Zo, which relied on word filters to detect problematic content and redirect the conversation. This is a very crude method, and which topics are deemed 'unacceptable' is highly value-laden: Zo, for example, refused to engage in any conversation relating to Islam but would discuss Christianity. The paper also highlights the importance of scrutinizing the data used to train chatbots, since large datasets of user-generated content from social media often contain deeply problematic material.
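The brittleness of such a word-filter approach can be sketched as follows. This is a minimal illustrative example, not Zo's actual implementation; the blocklist contents and redirect message are assumptions chosen to mirror the behavior described above:

```python
# Hypothetical word-filter moderation step, as described for Zo.
# Any message containing a blocklisted term is redirected wholesale,
# regardless of context or intent.

BLOCKLIST = {"islam", "muslim"}  # illustrative, value-laden topic list
REDIRECT = "I'd rather talk about something else."

def respond(message: str) -> str:
    # Crude token match after stripping punctuation and lowercasing.
    tokens = {t.strip(".,!?").lower() for t in message.split()}
    if tokens & BLOCKLIST:
        return REDIRECT
    return f"Echo: {message}"  # stand-in for the chatbot's normal reply

print(respond("Tell me about Islam"))         # redirected
print(respond("Tell me about Christianity"))  # answered normally
```

Because the filter matches words rather than meaning, benign questions about one religion are suppressed while equivalent questions about another pass through, reproducing exactly the asymmetry the paper criticizes.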