We have been enjoying working with Professor Kathryn Conrad to define some key elements of critical AI literacy, including how it differs from AI literacy and digital literacy, and why it is currently needed as an addition to them. There is growing recognition of the need for ways of learning about AI that not only address the wider sociotechnical context, but also enable decision-making not just about ‘how’ to use tools, but ‘when’ and ‘if’ to use them.
We will share more detail about critical AI literacy from this project soon, but in the meantime we wanted to share some key tips we identified that can help anyone wanting to facilitate less technocentric education or deliberations related to AI – and move towards a critical literacies approach. This is especially useful for those looking to empower people to take part in future visioning, policy deliberations, and consumer or worker advocacy in a way that is not constrained by existing narratives, preconceptions or misconceptions about technology.
Here, we break down three tips to help make sure a programme or intervention is not actually working against the development of critical thinking about AI. They are: avoiding anthropomorphism, avoiding the mandating of AI tools within the programme, and avoiding the assumption that AI is ‘here to stay’ or is ‘inevitable’:
Anthropomorphism has been described as “a factually erroneous or unwarranted attribution of human characteristics to non-humans” by Placani, who states that:
“There is a necessary connection between attributing human traits to AI and a distorting effect on various moral judgments about AI. This distorting effect occurs because attributing human characteristics to AI is currently fallacious, affecting beliefs and attitudes about AI, which in turn play a role in moral judgment.”
Placani details how anthropomorphic language is not only a barrier to understanding technologies, but also provides a false conceptual framework for expressing and organising our beliefs and expectations about AI. This conceptual framework in turn influences both conscious and unconscious thinking, ascribing to AI a fallacious human-like agency to act intentionally.
Analogous comparisons with human functions, activities or roles should therefore be avoided, along with visual representations of AI that use fictitious anthropomorphic images or human brains.
Our definition of critical AI literacies would not mandate the use of AI tools, as this is antithetical to the principle that critical AI literacy should build the capacity for people to make informed choices about tool use.
Furthermore, one project addresses the challenge of capturing the many facets of AI literacy by proposing a new framework of multiple dimensions of AI Literacies, including, for example, cultural, cognitive and constructive literacies. Also included is a dimension called ‘Critical AI Literacies’, which is initially described as being concerned with examining power structures. However, it concludes with:
“Moreover, these literacies include using AI-generated data to make informed, critical decisions about strategy, resource allocation, and operational processes, ensuring that AI tools are applied responsibly and ethically in educational contexts.”
Using AI-generated data for decision-making requires using AI, even if the decisions are “critical”. Our focus would be on the critical evaluation of AI itself rather than of AI output, which is more of a practical AI skill.
Technodeterministic rhetoric operates from the a priori assumption that use of a given technology is inevitable – and it pervades discussions of AI, even from those who may come at the technologies from an informed, critical perspective. It is not unusual to hear the assertion that the only sphere of positive influence one can have over how AI is used is “making the best of it”, or finding ways to make it more beneficial or ethical. Pressure to use AI tools is often framed as making sure one is amongst the winners, not the losers, in the new status quo, or adopting so that one is not “left behind”. There are many popular examples, such as the following:
Assertions that AI is “here to stay” or “inevitable” demonstrate existential or determinist fallacies. Furthermore, expressing the belief that the current trajectory and paradigm of technological progress is predetermined actively shuts down the space needed for critical thinking. Critical thinking is not only one of the key practices in critical AI literacy; it is also an increasingly important skill in navigating AI-generated content.
AI literacy competency indicators in standard frameworks, however, reinforce the idea that AI adoption and mastery are the only options.
For example, in typical AI literacy frameworks:
Critical AI literacy education that enables futures thinking from a non-deterministic perspective should be essential for anyone researching or co-designing with publics in relation to the design, implementation and governance of AI technologies.
AI cannot be ethical without consideration of public impact, but this cannot be meaningfully considered if both facilitators and publics have narrow, inaccurate or deterministic mental models of AI. Furthermore, critical AI literacy should be provided for general populations in order to enable the public discourse and political choices necessary for the effective functioning of democratic processes, in which people can make decisions that are truly in line with their values and interests.
In the absence of widespread critical AI literacy programmes, it is critically important for those tasked with AI education, public engagement, AI ethics or responsible AI to consider the impact of these common pitfalls and take care to avoid them.