Let’s Talk About Race: identity, chatbots, and AI

This paper asks how to develop chatbots that can better handle the complexities of race-talk. It gives the example of Microsoft’s chatbot Zo, which used word filters to detect potentially problematic content and redirect the conversation. The problem is that this is a very crude method, and which topics are deemed ‘unacceptable’ is highly value-laden: Zo would refuse to engage in any conversation relating to Islam, for example, but would happily discuss Christianity. The paper also highlights the importance of scrutinising the data used to train chatbots, since large datasets of user-generated content from social media often contain deeply problematic material.
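To see why the word-filter approach is so crude, here is a minimal sketch of how such a filter might work. This is purely illustrative and hypothetical, not Zo’s actual code; the blocklisted term and the canned deflection are assumptions chosen to mirror the example in the text.

```python
# Hypothetical sketch of a naive word-filter chatbot guard.
# Any message containing a blocklisted term is deflected,
# regardless of the speaker's intent or the conversational context.

BLOCKLIST = {"islam"}  # assumed blocklist entry, mirroring the Zo example

def respond(message: str) -> str:
    """Deflect if the message contains a blocklisted word; otherwise engage."""
    tokens = message.lower().split()
    if any(term in tokens for term in BLOCKLIST):
        return "I'd rather talk about something else."
    return "Tell me more!"
```

The crudeness is immediate: the filter deflects “Tell me about Islam” while engaging with “Tell me about Christianity”, and it cannot distinguish a hateful message from a sincere question about someone’s faith. Which words end up on the blocklist is exactly the value-laden choice the paper criticises.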
