Blog

With so much buzz about AI at present, we invite blog post contributions that demystify key topics.

And look out for our news updates!

The Misinformation Edition of the Glass Room

The Misinformation Edition of the Glass Room is an online version of a physical exhibition that explores different types of misinformation and teaches people how to recognise it and combat its spread.

Remote testing monitored by AI is failing the students forced to undergo it

An opinion piece giving examples of students who have been severely disadvantaged by exam-proctoring software, including a Muslim woman whom the software forced to remove her hijab to prove she was not hiding anything behind it.

Exams that use facial recognition may be ‘fair’ – but they’re also intrusive

News article arguing that whilst AI facial recognition during exams might be "fair", it is both an invasion of privacy and at risk of introducing unwarranted biases.

In Hong Kong, this AI reads children’s emotions as they learn…

Facial recognition AI, combined with other AI assessment tools, is used to gauge how children are performing and to boost their results. However, there is concern that it may work less well for students of non-Chinese ethnicities, who were not part of the training data.

Inbuilt biases and the problem of algorithms

This article details the algorithm used to determine A Level results for students who could not take exams due to the 2020 pandemic. The algorithm took into account the student’s postcode, which meant that students from lower-income areas were more likely to have their grades reduced whilst students in high-income areas were […]

Algorithms can drive inequality. Just look at Britain’s school exam chaos

An outcry over alleged algorithmic bias against pupils from more disadvantaged backgrounds has now left teenagers and experts alike calling for greater scrutiny of the technology.

Postcode or performance: How the A Level results of 2020 exposed a broken system

Case study explaining the algorithmic bias inherent in grade prediction for A Level students, and demonstrating the real-world impact AI can have if it is not scrutinised for bias.

The problem with algorithms: magnifying misbehaviour

This news piece gives an example of bias in an algorithm governing the first round of admissions to a medical university. The data used to shape the algorithm’s output showed bias against both women and people with non-European-looking names.

Mary Madden on Algorithmic Bias in College Admissions

A good introductory video on the use of AI in college admissions, questioning at what point it is acceptable to remove human oversight from admissions entirely.

AI in immigration can lead to ‘serious human rights breaches’

This video refers to a report from the University of Toronto’s Citizen Lab that raises concerns that the handling of private data by AI for immigration purposes could breach human rights. As AI tools are trained using datasets, before implementing those tools that target marginalized populations, we need to answer questions such as: Where does […]

How AI Could Reinforce Biases In The Criminal Justice System

Whilst some believe AI will make policing and sentencing more objective, others fear it will exacerbate bias. For example, past over-policing of minority communities has generated a disproportionate number of recorded crimes in some areas; that data is fed to algorithms, which in turn reinforce the over-policing.

Coded Bias: When the Bots are Racist – new documentary film

This engaging documentary surveys the full range of areas in which racial bias can arise in AI.
