Digital and Social Media

Many online platforms, including social media, news and entertainment websites, use AI algorithms to automatically edit, curate, promote and even create content. These algorithms are often trained on data that is biased in some way: it may not be diverse enough to represent the true cultural and ethnic diversity of society, or it may have been collected or labelled, consciously or unconsciously, according to stereotyped or politically and socially biased ideas.

The AI algorithms then reflect those biases when performing their tasks, amplifying social prejudice across the digital and social media platforms that people use every day. When a biased AI tool chooses which articles appear in people's news feeds, or automatically edits an image to fit a certain format, people see only a biased view of the world.
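
To make that mechanism concrete, here is a deliberately minimal sketch in Python. The data and viewpoint labels are invented for illustration, not taken from any real platform: a curation "model" that learns nothing more than engagement frequencies from a skewed history will fill the top of a feed with the over-represented viewpoint, even when the candidate pool is balanced.

```python
# A toy sketch of how sampling bias in training data propagates to an
# AI curation tool. All data here is invented for illustration.
from collections import Counter

# "Training data": past articles users engaged with. Viewpoint A is
# heavily over-represented, e.g. because of who was already on the platform.
training_clicks = ["viewpoint_a"] * 90 + ["viewpoint_b"] * 10

# The "model" simply learns engagement frequencies from this skewed sample.
freq = Counter(training_clicks)
total = sum(freq.values())
learned_score = {label: count / total for label, count in freq.items()}

# New candidate articles for a user's feed, half from each viewpoint.
candidates = ["viewpoint_a", "viewpoint_b"] * 5

# Rank candidates by the learned score: the over-represented viewpoint
# dominates the feed even though the candidate pool is balanced.
ranked = sorted(candidates, key=lambda c: learned_score[c], reverse=True)
print(ranked[:5])   # ['viewpoint_a', 'viewpoint_a', ...]
```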

Evaluating neural toxic degeneration in language models

RealToxicityPrompts: Pre-trained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language, which hinders their safe deployment. We investigate the extent to which pre-trained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration. This paper highlights how […]

The Misinformation Edition of the Glass Room

The Misinformation Edition of the Glass Room is an online version of a physical exhibition that explores different types of misinformation, teaches people how to recognise it and combat its spread.

Deepfakes and Disinformation

Earlier this month, in the aftermath of a decisive yet contested election, MediaJustice, in partnership with MediaJustice Network member WITNESS, brought together nearly 30 civil society groups, researchers, journalists, and organizers to discuss the impacts visual disinformation has had on institutions and information systems. Deepfakes and disinformation can be used in racialised disinformation […]

Coded Bias: When the Bots are Racist – new documentary film

This film cuts across all areas of potential racial bias in AI in an engaging documentary format.

‘Fake news’: Incorrect, but hard to correct. The role of cognitive ability on the impact of false information on social impressions

The present experiment examined how people adjust their judgment after they learn that crucial information on which their initial evaluation was based is incorrect. In line with our expectations, the results showed that people generally do adjust their attitudes, but the degree to which they correct their assessment depends on their cognitive ability. A study […]

Responsible AI for Inclusive, Democratic Societies: A cross-disciplinary approach to detecting and countering abusive language online

A research paper about responsible AI. Toxic and abusive language threaten the integrity of public dialogue and democracy. In response, governments worldwide have enacted strong laws against abusive language that leads to hatred, violence and criminal offences against a particular group. The responsible (i.e. effective, fair and unbiased) moderation of abusive language carries significant challenges. Our […]

Let’s Talk About Race: identity, chatbots, and AI

A research paper about race and AI chatbots. Why is it so hard for AI chatbots to talk about race? By researching databases, natural language processing, and machine learning in conjunction with critical, intersectional theories, we investigate the technical and theoretical constructs underpinning the problem space of race and chatbots. This paper questions how to […]

Racial bias in hate speech and abusive language detection datasets

A paper on racial bias in hate speech detection. Technologies for abusive language detection are being developed and applied with little consideration of their potential biases. We examine racial bias in five different sets of Twitter data annotated for hate speech and abusive language. Tweets written in African-American English are far more likely to be automatically […]

Risk of racial bias in hate speech detection

This research paper investigates how insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations.

The algorithms that detect hate speech online are biased against black people

A new study shows that leading AI models are 1.5 times more likely to flag tweets written by African Americans as “offensive” compared with other tweets.
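
Disparities like this are typically measured by comparing per-group flag rates. Here is a minimal sketch of that calculation; the records below are invented (the studies above infer dialect from real Twitter corpora).

```python
# A toy sketch of how a disparity like "1.5 times more likely to be
# flagged" is computed. All records are invented for illustration.
records = [
    # (dialect_group, flagged_by_model)
    ("african_american_english", True), ("african_american_english", True),
    ("african_american_english", False),
    ("standard_american_english", True),
    ("standard_american_english", False), ("standard_american_english", False),
]

def flag_rate(group: str) -> float:
    """Fraction of a group's tweets the model marked as offensive."""
    flags = [flagged for g, flagged in records if g == group]
    return sum(flags) / len(flags)

aae = flag_rate("african_american_english")
sae = flag_rate("standard_american_english")
print(f"AAE flag rate: {aae:.2f}, SAE flag rate: {sae:.2f}")
print(f"Disparity ratio: {aae / sae:.1f}x")   # 2.0x in this toy data
```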

Abolish the #TechToPrisonPipeline

Crime-prediction technology reproduces injustices and causes real harm. This open letter highlights why crime-predicting technologies tend to be inherently racist.

AI researchers say scientific publishers help perpetuate racist algorithms

The news: An open letter from a growing coalition of AI researchers is calling out scientific publisher Springer Nature for a conference paper it reportedly planned to include in its forthcoming book Transactions on Computational Science & Computational Intelligence. The paper, titled “A Deep Neural… […]

Google think tank’s report on white supremacy says little about YouTube’s role in people driven to extremism

A Google-funded report examines the relationship between white supremacists and the internet, but it makes scant reference—all of it positive—to YouTube, the company’s platform that many experts blame more than any other for driving people to extremism. YouTube’s algorithm has been found to direct users to extreme content, sucking them into violent ideologies.

Is Facebook Doing Enough To Stop Racial Bias In AI?

After recently announcing Equity and Inclusion teams to investigate racial bias across their platforms, and undergoing a global advertising boycott over alleged racial discrimination, is Facebook doing enough to tackle racial bias? Disinformation driven via bots that game the AI systems of social media platforms to reinforce racial myths and attitudes as well as the […]

Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes

The enormous financial success of online advertising platforms is partially due to the precise targeting features they offer. Although researchers and journalists have found many ways that advertisers can target – or exclude – particular groups of users seeing their ads, comparatively little attenti… Ad delivery is controlled by the advertising platform (e.g. Facebook), and researchers […]

AI bias in news media generation

“Yeah, great start after sacking human hacks: Microsoft’s AI-powered news portal mixes up photos of women-of-color in article about racism.” News media is increasingly automated and generated by AI, which can incur bias from the data sets used for text generation.

Twitter image cropping

“Another reminder that bias testing and diversity are needed in machine learning: Twitter’s image-crop AI may favour white men, women’s chests.” Twitter’s automatic cropping favoured white people in framing and de-emphasised the visibility of non-white people; strangely, the company says the bias did not show up during development.

Content Moderation on News and Social Media

AI tools are used to spot potentially harmful comments, posts and content and remove them from discussion boards and social media platforms. These tools often misconstrue language that is culturally different, effectively censoring people’s voices.

Misinformation and News Feeds

Algorithms are used in social media platforms like Facebook to select what to make most visible in people’s news feeds, reinforcing what they already consume based on their profile and interests. This can be gamed to deliberately show misleading information to serve various agendas.
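
As a rough illustration of this feedback loop, the following toy sketch (invented topics and scores, not any platform's actual ranking code) ranks posts by past engagement; each round of clicks further narrows what the user sees.

```python
# A toy sketch of an engagement-driven feed: items similar to what the
# user already clicked are ranked higher, so each round of clicks
# reinforces the existing profile. Topics and counts are invented.
profile = {"politics_left": 3, "sports": 1}   # past clicks per topic

posts = ["politics_left", "politics_right", "sports", "science"]

def rank_feed(posts, profile):
    # Score each post by how often the user engaged with its topic.
    return sorted(posts, key=lambda t: profile.get(t, 0), reverse=True)

for round_ in range(3):
    feed = rank_feed(posts, profile)
    clicked = feed[0]                          # user clicks the top item
    profile[clicked] = profile.get(clicked, 0) + 1
    print(round_, feed)

# The top slot converges on one topic; topics the user never sees can
# never accumulate clicks, so the loop narrows exposure over time.
```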

Google Cloud’s image tagging AI

“Google Cloud’s AI recog code ‘biased’ against black people.” Digital imagery tagging provides negative context for non-white people.

Facial recognition

Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial… Commercial AI facial recognition systems tend to misclassify darker-skinned females more than any other group (lighter-skinned […]
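
The evaluation idea behind Gender Shades is disaggregation: report accuracy per intersectional subgroup rather than one overall number. A minimal sketch of that approach, using invented predictions:

```python
# A toy sketch of disaggregated evaluation: overall accuracy hides
# subgroup gaps, so accuracy is reported per intersectional subgroup.
# All outcomes below are invented for illustration.
from collections import defaultdict

# (skin_type, gender, prediction_correct)
results = [
    ("lighter", "male", True), ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", True),
    ("darker", "male", True), ("darker", "male", False),
    ("darker", "female", False), ("darker", "female", False),
]

by_group = defaultdict(list)
for skin, gender, correct in results:
    by_group[(skin, gender)].append(correct)

overall = sum(correct for *_, correct in results) / len(results)
print(f"overall accuracy: {overall:.0%}")       # looks acceptable
for group, outcomes in sorted(by_group.items()):
    print(group, f"{sum(outcomes) / len(outcomes):.0%}")

# The darker-skinned female subgroup scores far worse than the overall
# figure suggests, which is exactly what the single number conceals.
```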

Image processing

“Once again, racial biases show up in AI image databases, this time turning Barack Obama white.” Researchers used a pre-trained, off-the-shelf model from Nvidia; image-processing models trained on unrepresentative data can distort or erase the features of non-white people.

Automated Content Generation

AI tools are trained to generate new written content, such as articles or image descriptions. The generated text reflects the language the AI was trained on, and can mirror biases present in the training data and text sources.
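
A toy illustration of why generated text mirrors its corpus: a tiny Markov-chain generator (built on an invented, deliberately skewed corpus, far simpler than any modern language model) can only recombine the word associations its training text contains.

```python
# A toy Markov-chain text generator: it can only reproduce associations
# present in its training corpus, so a skewed corpus yields skewed text.
# The corpus is invented and deliberately gender-stereotyped.
import random
from collections import defaultdict

corpus = "the doctor said he was busy . the nurse said she was busy".split()

# Learn word-to-next-word transitions from the corpus.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

random.seed(0)
word, output = "the", ["the"]
for _ in range(6):
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))

# Whatever gendered associations the corpus encodes ("doctor ... he",
# "nurse ... she") are the only ones the model can reproduce.
```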

Image Manipulation and Tagging

When AI is used to alter an image or video, any built-in bias can change that image unfavourably: favouring lighter-skinned people and reducing the visibility of darker-skinned people. This gives a false impression of reality through an apparently reality-based medium (seeing is believing).
