We and AI, PO Box 76297
© We and AI 2020
For many who saw a man get murdered in front of complicit police colleagues and helpless bystanders, the fact that something needs to change in the system which allows this to repeatedly happen without consequence has become ever more apparent.
What might be less apparent is that every one of us in tech and business needs to be part of that change, however unprejudiced or “woke” we believe ourselves to be.
In the UK, as protesters fill Hyde Park today, many are aware that while systemic racism may not result in as many horrific deaths of black people at the hands of police as in the US, racism is still pervasive within our society and politics. Nowhere is this more evident than in the tech industry and in business leadership: a brief look at the lack of diversity in boardrooms and tech teams across the country will testify to this.
It is a monumental problem, because as the technology industry moves at pace to automate, innovate and drive efficiencies with data and algorithms, it codifies and amplifies the biases and inequalities in our society. Businesses and organisations eager to implement these technologies overlook the biased, inadequate, historic and inaccurate data and modelling that underpin the predictive algorithms and facial and voice recognition tools they adopt. The result: historic prejudice and inequality mean technology products (and the businesses which run on them) disadvantage BAME people by denying them opportunities to get jobs, financial help, residency, even freedom from incarceration. Products are developed which only work properly for white people, pushing black people in particular further into the margins.
If you or your colleagues have not yet fully grasped the significance of this, now is the time to learn. Below is a very small selection from a larger list we are looking at summarising on our website as an accessible resource. Please add your own suggestions for texts to include in the comments section.
Just this year, in 2020:
These examples come from a database of articles compiled by our member Charlie Pownall documenting all sorts of contentious uses of AI which we are working on building into resources.
Our team also noted that healthcare technology raises a number of issues for non-white people: tools optimised for pale skin have been seen to risk missing skin cancer in those with darker skin.
Videos featuring all of these authors can be found easily online, for example Ruha Benjamin in discussion with Meredith Whittaker.
A seminal video is Joy Buolamwini discussing her fight against algorithmic bias
A short explanation of how algorithmic bias works
If, however, you are already familiar with these issues and with the way systemic bias creeps into code, then there is even more that can be done. Here are some starting points:
Most important to remember is that, although we have spent the last few years of rapid AI deployment teaching our machines to be racist, we have been teaching our children for millennia. Machine behaviour is easier to change than human behaviour – IF we take the opportunity to correct it while we still can. Let’s make technology better than us, not worse, and let’s make ourselves better along the way.
“If you are neutral in situations of injustice, you have chosen the side of the oppressor.” – Nelson Mandela