How to use AI hiring tools to reduce bias in recruiting


Commercial AI recruitment systems cannot always be trusted to do what their vendors claim. The technical capabilities this type of software offers may be exaggerated. For some of these tools, the statistical bar is low and the correlations they draw just have to be, on average, a bit better than […]

Using AI to eliminate bias from hiring


Many current AI tools for recruiting have flaws, but those flaws can be addressed. The beauty of AI is that we can design it to meet certain beneficial specifications. A movement among AI practitioners and organisations such as OpenAI and the Future of Life Institute is already putting forth a set of design principles for making AI ethical and […]

Rights group files federal complaint against AI-hiring firm HireVue, citing ‘unfair and deceptive’ practices


HireVue’s “AI-driven assessments,” which more than 100 employers have used on over one million job candidates, use video interviews to analyse hundreds of thousands of data points related to a person’s speaking voice, word selection and facial movements. The system then creates a computer-generated estimate of the candidates’ skills and behaviours and potential fit for […]

PwC facial recognition tool criticised for home working privacy invasion


PwC has come under fire for developing a facial recognition tool that logs when employees are absent from their computer screens while working from home. Since image recognition is known to work better for white males than for other parts of the population, would non-white employees be penalised by the tool […]

Deepfakes and Disinformation

Deepfakes and disinformation can be used in racialised disinformation campaigns and sensationalist media. This blog article outlines some strategies and resources that communities can make use of.

Software that monitors students during tests perpetuates inequality and violates their privacy


In an opinion piece, a university librarian claims that millions of algorithmically proctored (invigilated) tests are taking place every month around the world, a number that has grown exponentially during the pandemic. In his experience, algorithmic ‘proctoring’ reinforces white supremacy, sexism, ableism and transphobia, invades students’ privacy and is often a civil rights violation.