New AI tools like GPT-3 (a large language model) are trained on trillions of bytes of text to generate new written content, such as articles or image descriptions. The generated text reflects the language the AI was trained on, and can mirror biases present in the training data and its sources. AI is also used to automatically curate news articles in social media feeds, and these systems have been known to reflect bias in the content they display.
RealToxicityPrompts: Pre-trained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language, which hinders their safe deployment. The authors investigate the extent to which pre-trained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration. This paper highlights how […]
"Yeah, great start after sacking human hacks: Microsoft's AI-powered news portal mixes up photos of women-of-color in article about racism. Blame Reg Bot 9000 for any blunders in this story, by the way." News media is increasingly automated and generated by AI, which can inherit bias from the data sets used for text generation.