Justice

AI has the potential to speed up our overburdened justice systems by improving accuracy and governmental efficiency, and to help in the fight against terrorism and human trafficking. In reality, these algorithms often reinforce racial biases, both through historical data that perpetuates systemic inequalities and through tools that underperform for minority groups. In this section we explore examples of how AI reinforces racial profiling in policing, legal judgements, immigration, and human rights.

AI in Policing

AI tools are developed with the aim of preventing crime, using techniques such as computer vision and pattern recognition, and drawing on historical data to build crime maps: locations with a higher predicted risk of offences.
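
A minimal sketch of the crime-map idea, using made-up incident coordinates and grid settings (the data, cell size, and threshold here are assumptions for illustration, not any vendor's actual method): past incidents are binned into grid cells and the busiest cells are flagged as high risk. Because the input is recorded crime rather than all crime, historically over-policed areas produce more records and keep being flagged, which is one way the bias loop closes.

```python
from collections import Counter

# Hypothetical historical incident records: (x, y) coordinates of past
# recorded offences. A real predictive-policing tool would ingest police
# report data instead of this toy list.
historical_incidents = [(1, 2), (1, 2), (1, 3), (4, 4), (1, 2), (4, 4), (0, 0)]

CELL_SIZE = 1  # assumed grid resolution of the crime map
TOP_K = 2      # number of "high risk" cells to flag for extra patrols


def crime_map(incidents, cell_size=CELL_SIZE, top_k=TOP_K):
    """Bin past incidents into grid cells and return the most frequent cells.

    The output reflects where crime was *recorded*, so areas that were
    heavily policed in the past contribute more records and are flagged
    again, reinforcing the original patrol pattern.
    """
    cells = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return cells.most_common(top_k)


print(crime_map(historical_incidents))
# e.g. [((1, 2), 3), ((4, 4), 2)] -- the cells with the most recorded offences
```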


AI in Immigration

AI is used to help with border control and to analyse immigration and visitor applications. Implementations so far have been flagged for encoding unfair treatment of individual visa applications based on the applicant’s country of origin.
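
To make that claim concrete, here is a deliberately simplified, entirely hypothetical triage rule (the nationality list, fields, and scoring are invented for illustration, not drawn from any real system): once nationality is used as a risk feature, two otherwise identical applications are routed differently purely because of country of origin.

```python
# Illustrative only: a made-up streaming rule showing how a nationality
# feature encodes country-of-origin bias in application triage.
HIGH_RISK_NATIONALITIES = {"CountryA", "CountryB"}  # hypothetical list


def triage(application):
    """Route an application to a review stream.

    Any rule keyed on nationality treats otherwise identical applicants
    differently based solely on where they come from.
    """
    score = 0
    if application["nationality"] in HIGH_RISK_NATIONALITIES:
        score += 2  # penalty applied purely for country of origin
    if application["previous_refusal"]:
        score += 1
    return "slow_lane" if score >= 2 else "fast_lane"


print(triage({"nationality": "CountryA", "previous_refusal": False}))  # slow_lane
print(triage({"nationality": "CountryC", "previous_refusal": False}))  # fast_lane
```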


AI in Human Rights

AI tools can help in the fight against human rights abuses such as terrorism and human trafficking, but their use raises serious privacy concerns.
