Unmasking Facial Recognition | WebRoots Democracy Festival

This video is an in-depth panel discussion of the issues uncovered in the ‘Unmasking Facial Recognition’ report from WebRoots Democracy. The report found that the use of facial recognition technology is likely to exacerbate racist outcomes in policing, and revealed that London’s Metropolitan Police failed to carry out an Equality Impact Assessment before trialling the technology at […]

Amazon scraps AI recruiting tool showing bias against women

In 2018, Amazon’s AI recruiting tool was found to favour male job candidates because its algorithms had been trained on ten years of internal hiring data that skewed heavily male. In effect, the algorithm had learned that male candidates were better than female candidates.
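
As a rough illustration of the mechanism (a hedged sketch using synthetic data and hypothetical feature names, not Amazon’s actual system), a model trained on historically skewed hiring decisions can learn to penalise a gender-correlated CV term even though that term says nothing about ability:

```python
# Illustrative sketch only: synthetic data, hypothetical features,
# not Amazon's actual recruiting tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One genuine skill signal and one gender-correlated CV term (e.g. the word
# "women's") that is irrelevant to ability.
skill = rng.normal(size=n)
is_female = rng.integers(0, 2, size=n)
womens_term = (is_female.astype(bool) & (rng.random(n) < 0.7)).astype(float)

# Historical labels: past recruiters hired on skill but also favoured men,
# so the training data itself encodes that preference.
hired = (skill + 1.0 * (1 - is_female) + rng.normal(scale=0.5, size=n)) > 0.8

X = np.column_stack([skill, womens_term])
model = LogisticRegression().fit(X, hired)

print(f"coefficient on skill signal:       {model.coef_[0][0]:+.2f}")  # positive
print(f"coefficient on gender-linked term: {model.coef_[0][1]:+.2f}")  # negative
```

The second coefficient comes out negative: the model downgrades CVs containing the gender-correlated term, mirroring the reported behaviour of the scrapped tool.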

This tech startup uses AI to eliminate all hiring biases

This video argues that hiring is largely analogue and broken, which leads to major problems: inefficiency, ineffectiveness (50% of first-year hires fail), poor candidate experience, and a lack of diversity. The hiring process is plagued by gender, age, socioeconomic, and racial bias. Pymetrics intentionally audits algorithms to weed out unconscious human […]

The Misinformation Edition of the Glass Room

The Misinformation Edition of the Glass Room is an online version of a physical exhibition that explores different types of misinformation and teaches people how to recognise it and combat its spread.

AI in immigration can lead to ‘serious human rights breaches’

This video refers to a report from the University of Toronto’s Citizen Lab that raises concerns that the handling of private data by AI for immigration purposes could breach human rights. Because AI tools are trained on datasets, before implementing tools that target marginalised populations we need to answer questions such as: Where does […]

The danger of predictive algorithms in criminal justice

Dartmouth professor Dr. Hany Farid reverse engineers the inherent dangers and potential biases of recommendation engines built to mete out justice in today’s criminal justice system. In this video, he provides an example of how the number of crimes is used as a proxy for race.
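
The proxy problem can be sketched with synthetic numbers (an illustrative assumption, not data from the talk): if two groups offend at identical rates but one is policed more heavily, a ‘race-blind’ score built on recorded crime counts still ends up tracking group membership.

```python
# Illustrative assumption: identical underlying offending rates, but one group
# is policed twice as heavily, so more of its offences end up on record.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)               # protected attribute, never given to the score

offences = rng.poisson(lam=2.0, size=n)          # same distribution for both groups
detection_rate = np.where(group == 1, 0.6, 0.3)  # group 1 is policed more heavily
recorded = rng.binomial(offences, detection_rate)

# A "race-blind" risk score based only on recorded crime counts...
risk_score = recorded

# ...still separates the groups, purely through enforcement, not behaviour.
print("mean score, group 0:", risk_score[group == 0].mean())
print("mean score, group 1:", risk_score[group == 1].mean())
print("correlation(score, group): %.2f" % np.corrcoef(risk_score, group)[0, 1])
```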

How AI Could Reinforce Biases In The Criminal Justice System

Whilst some believe AI will increase objectivity in policing and sentencing, others fear it will exacerbate bias. For example, past over-policing of minority communities has generated a disproportionate number of recorded crimes in some areas; those records are fed into the algorithms, which in turn direct more policing at the same areas, reinforcing the cycle.
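
A minimal simulation (purely illustrative assumptions, not any real deployment) shows how that loop can feed on itself: two areas have identical underlying crime, but patrols are repeatedly reallocated toward whichever area has the most recorded crime.

```python
# Purely illustrative: two areas with identical underlying crime, but patrols
# start out unevenly split and are reallocated toward recorded crime.
import numpy as np

true_crime = np.array([1.0, 1.0])     # identical underlying crime in both areas
patrol_share = np.array([0.6, 0.4])   # area 0 begins slightly over-policed

for step in range(10):
    # More patrols in an area means more of its crime gets recorded.
    recorded = true_crime * patrol_share
    # Hot-spot style allocation: patrols concentrate on the areas with the
    # highest recorded crime (weighting records more than proportionally).
    weight = recorded ** 1.5
    patrol_share = weight / weight.sum()
    print(f"step {step}: patrol share = {np.round(patrol_share, 2)}")

# The initially over-policed area ends up absorbing nearly all the patrols,
# even though both areas have identical underlying crime: the extra records it
# generates keep justifying the extra attention.
```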