We and AI Response to the Race and Ethnic Disparities Report 2021

Title page and table of contents of the Commission on Race and Ethnic Disparities report 2021

We and AI, a UK non-profit, was one of the 325 organisations that submitted evidence to the Commission on Race and Ethnic Disparities, whose Report was published on 31 March 2021. We are concerned about the soundness of the way in which evidence submitted to the Commission was evaluated, and about the cogency of the conclusions drawn from it.

We believe that, in deflecting the abundance of evidence of systemic racism both described in and omitted from this Report, the Commission’s approach serves to demonstrate how structural racism operates rather than to disprove it. The Report contains some recommendations which we would welcome in a less divisive context. However, as it stands, the Report risks dangerously disempowering and discouraging efforts to decrease racial and ethnic disparities. As such, we wish to call into question some of the findings and the tone of the Report, and to urge individuals and organisations to be even more committed to addressing the impact and legacy of racism in our society and systems. We do this with particular reference to our evidence on how racial and ethnic bias is embedded in AI systems, perpetuating and even amplifying systemic racism.

Summary of Concerns

In response to the Artificial Intelligence section of the Report, we are concerned about:

  • The disregarding of the significance of equality law violations by UK AI systems
  • The lack of consideration given to the disparities caused by facial recognition technology
  • The understatement of the impact of human cognitive biases on the proliferation of bias in AI systems
  • The problematic way in which factors such as “class” and “geography” are used to explain racial and ethnic disparities
  • The low level of urgency in investigating further regulation of AI systems, indicating a low commitment to addressing disparities caused by these systems
  • The potential over-reliance on automated systems to “fix” the human biases which go into them

In response to the Report in general, we are concerned about:

  • Confirmation bias in the Report findings
  • A change in scope from “investigation” to “changing the narrative”
  • The impact of changing the definitions of types of racism
  • The dismissal of the possibility that some “explained” and “unexplained” racial disparities are indicators of institutional and systemic racism
  • Dissonance between the Report evidence, analysis and conclusions
  • A problematic tone throughout the Report
  • Issues within the compiling and publishing of the Report
  • The focus on celebrating current data and evidence, which sets a low bar for expectations of racial and ethnic equity

Our Response to the “Artificial Intelligence” section of the Report

Our evidence submission consisted of recommendations to address the instances or risks of bias against minority ethnic groups caused by the irresponsible use of Artificial Intelligence systems. 

Artificial Intelligence is addressed in the section of the Report covering “Employment, Fairness at Work, and Enterprise”. The Commission recommends supporting the Centre for Data Ethics and Innovation (CDEI) recommendations on algorithmic decision-making, which call on the government to:

  • “Place a mandatory transparency obligation on all public sector organisations applying algorithms that have an impact on significant decisions affecting individuals
  • Ask the Equality and Human Rights Commission to issue guidance that clarifies how to apply the Equality Act to algorithmic decision-making, which should include guidance on the collection of data to measure bias, and the lawfulness of bias mitigation techniques”

These recommendations, if adopted, would represent progress in making algorithmic decision-making more visible, if not yet accountable, and would provide better visibility of some of the unlawful instances of discrimination and disparity caused by the data and models used in machine learning. However, we make the following further observations about the contents of this section of the Report, which:

1. Disregards the significance of equality law violations by UK AI systems

We welcome the fact that the Report acknowledges that the Equality Act is currently being violated by biased automated decision systems which penalise minority ethnic groups and violate human rights on the basis of race. This was seen in 2020, when the Home Office lost a legal challenge from Foxglove and the Joint Council for the Welfare of Immigrants (JCWI) against the use of a “racist algorithm” in processing visa applications.

However, the fact that litigation was necessary to force the Home Office to stop using what amounted to a “speedy boarding pass for white people” calls into question the Commission’s finding that UK systems are “not deliberately being rigged” against minorities.

2. Shows a lack of consideration of disparities caused by facial recognition technology 

The discriminatory use of facial recognition technology in policing is not mentioned in the Report. In answer to the question on “what can be done to enhance community relations and perceptions of the police”, we explained that:

“The police service does not just need to improve how it is perceived, but address any racism in the automated decision tools it currently [uses] or is considering using. For example on predictive policing and resource allocation. The Service can also reconsider the use of facial recognition tools which have a much higher rate of false positives amongst people with darker skin, therefore unfairly targeting and criminalising ethnic minorities. Controls and regulations need to be set over how biometrics and their resulting data are used, in consultation with impacted communities.”

It is therefore disappointing that this area was not addressed. The use of facial recognition by the Met and South Wales Police has been seen by many as an example of institutional racism: the use of technology to collect data on people’s faces without consent, despite a notably high known rate of misidentification of Black and Brown people specifically. The decision to go ahead with technology known to disproportionately and negatively impact certain ethnic groups did not bode well for building trust, and its use was ruled unlawful in South Wales.
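The disparity we describe can be surfaced with a very simple audit: computing the mis-identification (false positive) rate separately for each demographic group and comparing the results. The following is a minimal sketch in Python; the group labels and match records are fabricated placeholders for illustration, not real policing data.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Mis-identification (false positive) rate per demographic group.

    Each record is (group, flagged_by_system, actually_on_watchlist).
    A false positive is a person flagged who was not on the watchlist.
    """
    wrongly_flagged = defaultdict(int)  # innocent people flagged as matches
    innocent = defaultdict(int)         # all innocent people scanned
    for group, flagged, on_watchlist in records:
        if not on_watchlist:            # only innocent people can be false positives
            innocent[group] += 1
            if flagged:
                wrongly_flagged[group] += 1
    return {g: wrongly_flagged[g] / innocent[g] for g in innocent}

# Fabricated toy records: (group, system_flagged, actually_on_watchlist)
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", False, False),
]
print(false_positive_rates(records))  # {'group_a': 0.25, 'group_b': 0.5}
```

A gap between the per-group rates of the kind shown here is precisely what independent audits of facial recognition systems have repeatedly found for people with darker skin.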

3. Understates the impact of human cognitive biases on the proliferation of bias in AI systems 

The Report states that bias can enter an AI system in three ways: through the data, the model, or decisions. The example illustrating the decision element says “a system may give a fair output which humans may be more confident about than is deserved”. This seems to indicate that the third, decision stage refers to a machine decision: the output of the machine.

However, we note that, in addition, decisions are also made by people about the purpose, scope, and applications of a technology product or system, and these decisions are inputs prior to the data and model stages.

We believe that these input decision stages, as well as the three ways stated (bias entering through the data, the model, and the interpretation of output), are areas in which the unconscious biases of a team can proliferate. An example of this in healthcare is described in the section on biases in the choices made for AI design and use in this BMJ research.
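This matters because bias introduced by upstream human decisions survives an apparently “neutral” training procedure. The sketch below is a deliberately simplified illustration using fabricated toy data (the postcode areas and outcomes are invented): a model that merely learns majority outcomes from historical decisions codifies the bias in those decisions, even though the training code never mentions ethnicity.

```python
from collections import Counter

# Fabricated historical decisions, biased against postcode area "B",
# which (hypothetically) correlates with a minority ethnic population.
history = [
    ("A", "approved"), ("A", "approved"), ("A", "approved"), ("A", "rejected"),
    ("B", "rejected"), ("B", "rejected"), ("B", "rejected"), ("B", "approved"),
]

def train(history):
    """'Train' by taking the majority outcome per feature value.

    Nothing in this procedure refers to ethnicity, yet it faithfully
    reproduces whatever bias the historical decisions contain.
    """
    by_value = {}
    for value, outcome in history:
        by_value.setdefault(value, Counter())[outcome] += 1
    return {value: counts.most_common(1)[0][0] for value, counts in by_value.items()}

model = train(history)
print(model)  # {'A': 'approved', 'B': 'rejected'} -- the historical bias, codified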

It is therefore imperative for organisations to:

  • Ensure truly ethnically diverse teams are making decisions about the use, as well as the design, development and implementation, of AI systems, because at present Black people, and people from some other ethnic minorities, are significantly underrepresented in the teams building AI.
  • Continue to make an effort to understand the unconscious biases and blind spots of teams working on AI, who rarely act on their negative biases intentionally. Creating opportunities to recognise, reflect on and discuss these biases is therefore critical to debiasing, which is why the Commission’s recommendation to stop efforts to understand unconscious bias is potentially extremely damaging. These cognitive biases, inherent within everyone, get codified and even amplified, risking widening existing inequalities.
  • Not rely purely on the algorithmic impact assessments suggested by the Commission (though these should be part of the process of developing AI), because they do not cover the whole lifecycle of an AI system, from conception to deployment.

These measures are imperative because racial and ethnic minorities can exert only very limited influence in the field of automated decision-making through greater use of their own “agency” (a theme repeated in the Report) if the data, the systems, and the uses of those systems are stacked against them. Indeed, in many cases they will not even be aware that they are being discriminated against by algorithms.

4. Uses factors such as “class” and “geography” to explain racial and ethnic disparities in a problematic way

The Report concludes that the causes of disparity are often more to do with factors other than race, such as class, geography, and sex. However, the use of proxy data in predictive models means that People of Colour who do not live in lower socio-economic households or areas (the Report uses the term “class”) are still judged by models as if they did. Systems therefore disproportionately penalise people on the basis of skin colour, not just in employment, as mentioned in the Report, but also in other areas such as education.

A well-known example of damaging algorithmic decision-making was the initial awarding of A-level results in 2020 via an algorithm which under-predicted the grades of students from previously poor-performing schools and areas, leading to unfairly low grades for students from ethnic minority backgrounds. We therefore believe that even if the link between poverty and race caused the disparity in grade predictions, the decision to reinforce that link with the algorithm showed either a willingness to amplify the disparity, or a disregard for unintended consequences.
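The mechanism behind this is easy to reproduce in outline. The sketch below is a heavily simplified, hypothetical caricature of area-based moderation, not Ofqual’s actual 2020 model (the areas, averages and weighting are invented): blending an individual’s grade with the historical average of their area marks down a high-achieving student from a historically low-performing area regardless of their own record.

```python
# Hypothetical caricature of area-based grade moderation (not Ofqual's model):
# pull each student's teacher-assessed grade towards the historical average
# of their school's area. All numbers below are fabricated grade points.

HISTORICAL_AVG = {"area_low": 4.0, "area_high": 7.0}

def moderate(teacher_grade: float, area: str, weight: float = 0.5) -> float:
    """Blend the individual grade with the area's historical average."""
    return (1 - weight) * teacher_grade + weight * HISTORICAL_AVG[area]

# Two equally able students (teacher grade 8.0) from different areas:
print(moderate(8.0, "area_high"))  # 7.5 -- small penalty
print(moderate(8.0, "area_low"))   # 6.0 -- large penalty, driven by where they live
```

If area correlates with ethnicity, as it frequently does in the UK, a moderation step of this shape converts a geographic adjustment into a racial one.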

We therefore call for greater consideration of the unintended consequences that algorithmic decision-making has for all underrepresented or marginalised groups. The potential impact and harm of such consequences is increased when factors such as class are combined with structural and individual instances of racism.

5. Indicates a low level of urgency in investigating further regulation of AI systems and a low commitment to addressing disparities caused by systems

The Report’s finding that the “rate of change in these systems means specific remedies are premature” suggests that further regulation, or case-specific regulation, of technology does not yet need to be pursued. However, the rate of change in these systems is not slowing down, and relying on organisations to implement their own controls is not sufficient. The work being done in Europe, and to some extent in the UK, to investigate and provide regulatory remedies should not be written off as premature. Setting expectations low on the possibility of AI accountability through regulation, in addition to internal assessment and existing controls, risks signalling a tolerance of the disparities AI causes. This does not help to demonstrate the trustworthiness of the systems which marginalised communities are being asked to trust.

6. Has a potential over-reliance on automated systems to “fix” the human biases which go into them

We suggest that the statement concluding the Artificial Intelligence section of the Report is read with a note of caution: “Last, before dismissing any system, it should be compared with the alternative. An automated system may be imperfect, but a human system may be worse”. We welcome AI systems which aim to reduce the impact of human biases. However, this sentence risks being interpreted as implying that AI is, by default, likely to be preferable to human intervention. This minimises the negative and pervasive effect of well-documented examples of algorithmic bias, and it overlooks how AI systems can compound the speed and scale of existing inequalities, exacerbating them by making them more “efficient”.

General concerns about the Findings and Impact of the Report

Furthermore, We and AI would like to express some general concerns about the Report.

1. Confirmation bias in the Report findings

That the selection of Commission members who had previously gone on record as repudiating the existence of systemic racism led to confirmation bias (the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories) against accepting the evidence submitted in favour of its existence.

2. A change in scope from “investigation” to “changing the narrative”

That in a desire to “change the narrative” around racism, the Commission changed its scope. Rather than addressing the concerns of race campaigners about the continuing tolerance and sanctioning of systems which disadvantage ethnic minorities at a state and institutional level, the Commission suggests new definitions of what it believes it should be investigating. Instead of reporting back on the many racial disparities which are nonetheless clearly detailed in the Report, it provides a commentary on them which chooses to focus on those “underlying causes” of disparity which in many cases put the onus largely on ethnic minorities themselves to improve their situations. In doing so, it dismisses the impact of colonialism, the slave trade, and the more recent Windrush scandal (for which many families are still awaiting any form of redress), and instead tells those impacted to be more “optimistic” and less “reluctant”. We believe this anti-“pessimism” commentary:

  • Fails to provide the necessary encouragement or guidance for organisations, institutions and individuals to fully address the significant disparities detailed within the Report, or elsewhere
  • Further disenfranchises impacted communities by belittling their challenges and experiences

3. The impact of changing the definitions of types of racism

The Report proposes new definitions of institutional racism, systemic racism, and structural racism without external validation of its interpretation of these widespread terms. Various words often included in dictionary definitions of these terms are omitted from the Commission’s new definitions.

For example, there is no recognition of the role played by rules, practices, norms, cultural representations, and unequal treatment that result in, and support, a continued unfair advantage for some people and unfair or harmful treatment of others based on race, as indicators of institutional, systemic or structural racism.

These omitted elements are usually useful in explaining how a non-racist or anti-racist person can unwittingly be the actuator of institutional or systemic racism. Note that some of this actuation can occur in the creation, implementation, or use of algorithms. Without this context, it is easy for dialogue around such terms to focus on whether some person or institution is either racist or not racist, which is reductive and distracting.

4. The dismissal of the possibility that “explained” and “unexplained” racial disparities are indicators of institutional and systemic racism

The Commission also seeks to distance explained or unexplained racial disparities from being any indicator of, or part of a definition of, structural, institutional or systemic racism. This is problematic for the following reasons:

  • “Explained” racial disparities are described as those in which the cause of inequity or inequality is shown to be due to other factors such as “geography, class or sex”. However, class and geography are often the result of historical racism, so they cannot be dismissed as unrelated to race. Moreover, when they are combined with race, they often have a more negative impact than class and geography alone. Note the implicit connection to any algorithms that utilise factors like geography, class or sex: these can and often do become proxy measures for race in algorithms (as can be seen here and here; a simple check for this kind of proxy is sketched after this list).
  • The willingness within the Report to accept, and explain away, the fact that people of certain ethnic backgrounds are more likely to live in deprived areas or to be of a lower class is in itself a symptom of systemic racism. The attitude that a status quo in which your education, income and where you live are determined by your skin colour is acceptable, or even inevitable, is the kind of privilege and unconscious bias which many people and institutions build into their institutions and systems, or allow to be perpetuated in them, sometimes unintentionally and sometimes intentionally.
  • “Unexplained” racial disparities are described as those in which the reason for the disparity is not known, or is not attributed to another cited factor (class, geography, sex). Dismissing these “unexplained” disparities as not being evidence of institutional, structural or systemic racism is therefore not possible, as no other cause has been found. Indeed, there are several places within the Report where it is stated that more study is needed. Given the “black box” nature of many algorithms today, these “unexplained” factors can be particularly problematic, and should be noted and accounted for.
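As flagged in the first point above, one way to act on the proxy concern is to test, before a feature is used in a model, how strongly it predicts a protected attribute. The sketch below is a minimal illustration on fabricated toy data (the postcode areas and group labels are invented): if knowing a postcode area lets you guess someone’s ethnic group substantially better than chance, the feature is functioning as a proxy for race.

```python
from collections import Counter, defaultdict

def proxy_strength(feature_values, protected_values):
    """How well does a candidate feature predict the protected attribute?

    Returns (baseline, via_feature): the accuracy of guessing the protected
    attribute from the overall majority class, versus guessing the majority
    class per feature value. via_feature well above baseline suggests the
    feature is acting as a proxy for the protected attribute.
    """
    overall = Counter(protected_values)
    baseline = overall.most_common(1)[0][1] / len(protected_values)

    per_value = defaultdict(Counter)
    for f, p in zip(feature_values, protected_values):
        per_value[f][p] += 1
    correct = sum(c.most_common(1)[0][1] for c in per_value.values())
    return baseline, correct / len(protected_values)

# Fabricated toy data: postcode area strongly predicts ethnic group.
postcodes = ["A", "A", "A", "B", "B", "B"]
groups    = ["white", "white", "white", "minority", "minority", "white"]
print(proxy_strength(postcodes, groups))  # baseline ~0.67, via postcode ~0.83
```

In practice more robust measures (such as mutual information or a held-out classifier) would be used, but even a check this simple makes the proxy relationship visible before a model is deployed.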

5. Dissonance between Report evidence, analysis and conclusions

Even within the new definitions of these terms proposed by the Commission, the conclusions and interpretations do not match the evidence presented in the Report. There have been accusations from academics and the BMJ that the statistics from the Cabinet Office’s Race Disparity Unit (RDU) used in the Report are cherry-picked. But even setting this aside, in all four referenced areas (education, employment, crime and policing, and health) there are examples of disparities which meet the criteria given by the Report for describing institutional racism.

For example, the high levels of unjustified stop and search laid out in the Report, the need for certain ethnic groups to send out their CVs more times than others to get a call back, and the increased risk of hate speech all meet the threshold of the Commission’s own definition of being the result and indicators of “a legacy of historic racist or discriminatory processes, policies, attitudes or behaviours that continue to shape organisations and societies today”. They are certainly not “micro-aggressions”. And yet the Report concludes that “the claim the country is still institutionally racist is not borne out by the evidence”.

6. Problematic tone of the Report 

The tone of the Report has been experienced as using disparaging and even insulting language, effectively calling on those affected by racism to stop complaining about injustices, help themselves, and be more grateful to be in the UK, and it seems particularly to target Black Caribbean communities. This undermines the Report’s recommendations on the need for greater fairness, inclusion and trust, because the same reproachful tone does not seem to be used when recommending actions for White people or public bodies to take. For example:

“reluctance to acknowledge that the UK had become open and fairer”

“help themselves through their own agency, rather than wait for invisible external forces to assemble to do the job”

“the need for communities to run through that open space and grasp those opportunities” 

“a new story about the Caribbean experience which speaks to the slave period not only being about profit and suffering but how culturally African people transformed themselves into a re-modelled African/Britain”

7. Issues with the compiling and publishing of the Report

We note other issues to do with the compiling and publishing of the Report which we cannot see as helpful to “building trust”, one of the key recommendations of the Report, let alone to becoming trustworthy. Several such issues have been reported publicly.

8. The focus on celebrating current data and evidence, which sets a low bar for expectations of racial and ethnic equity

Finally, we cannot see how a Report which contains so much evidence of continuing disparities in the UK can simultaneously present the UK as a model for others to adopt. It is hard to see what those impacted by disparities could gain from the self-congratulatory aim of becoming a “beacon” for others.

We are concerned that this undermines the impetus to adopt the recommendations within the Report with any particular zeal. It is worth noting that the United Nations’ expert on racism and human rights reported in 2019 that the “structural and socio-economic exclusion of racial and ethnic minorities in the UK is striking”.

The Report, in accepting and indeed celebrating the current levels of racial inequality and racism within UK society and its institutions, sets the bar for success in achieving racial equity very low. This in itself illustrates the type of attitude which underpins structural racism.

Conclusion

In summary, in response to this Report, the scale of the problem it has revealed, and the further divisions it appears to be causing, we urge individuals and organisations alike to be even more committed to acting on the continued, and potentially growing, harm which present and historical racial biases manifest in the organisations, systems and structures that uphold and validate them.

At We and AI, we are committed to continuing to work towards empowering people to take a role in limiting the potential for artificial intelligence to reinforce systemic racism.

We will actively pursue collaborations with governmental, private, public and civil society organisations to ensure that technology is used for the benefit of all. We welcome and encourage volunteers from all backgrounds to join our team and cause.
