Many exciting AI applications are being used to enhance learning in the classroom, from automated 'smart tutors' that can assess pupil performance and tailor learning interventions more accurately than human teachers, to facial recognition cameras that attempt to assess pupils' understanding by analysing their facial expressions.
A range of methods is being used to make decisions about admissions to universities and schools, including algorithms that draw on data pulled from students' social media channels. There is concern that these systems will be biased against ethnic minorities, both because less data is available about them and because human bias can be built into the models themselves.
In 2020, the use of algorithms to determine results in place of exams hit the headlines when it was shown that they penalised students from state schools and low-income postcodes.
AI tools are being developed with the aim of preventing crime, using techniques such as computer vision, pattern recognition, and the analysis of historical data to create crime maps: locations with a higher risk of offences.
AI is used to assist with border control and to analyse immigration and visitor applications. Implementations so far have been shown to encode unfair treatment of individual visa applications based on the applicant's country of origin.
AI tools can be helpful in the fight against threats to human rights such as terrorism and human trafficking, but their use raises serious privacy concerns.
Algorithms are becoming the arbiters of decisions about individuals (e.g. government benefits, granting licences, pre-sentencing and sentencing decisions, granting parole).
Modern medicines and treatments can be improved as AI progresses in medicine, by discovering new drugs, personalising treatments, and speeding up clinical trials. But if the racial exclusion common in biomedical research seeps into the data behind AI, there is a risk that these medicines won't be effective for everyone.
Racial bias in AI has extended directly into vital aspects of patient care. Higher rates of misdiagnosis and underdiagnosis of skin cancers in non-white populations are a prime example of AI systems adding to the global burden of health disparities that disproportionately affect minority groups.
AI can help to improve resource management within the healthcare system by locating gaps in the care system, rebalancing resources, and reviewing patient data to identify priority-care patients. But how do we ensure that AI systems distribute resources and care fairly? Several case studies show how bias seeping into AI technologies is underserving black and ethnic minority patients.
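One common starting point for the fairness question above is to compare how often a model flags members of each demographic group for priority care. As a minimal sketch (the data and group labels here are entirely hypothetical, and real audits use richer metrics), the per-group selection rate can be computed like this:

```python
def selection_rates(predictions, groups):
    """Return the fraction of positive predictions for each demographic group."""
    rates = {}
    for group in set(groups):
        picks = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(picks) / len(picks)
    return rates

# Hypothetical model output: 1 = flagged for priority care, 0 = not flagged.
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(predictions, groups)
# A large gap between the groups' selection rates signals possible
# disparate impact and would prompt a closer audit of the model.
gap = max(rates.values()) - min(rates.values())
```

This kind of check is only a first filter: equal selection rates do not guarantee equal quality of care, and unequal rates may sometimes reflect genuine differences in need, which is why case-by-case auditing remains essential.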