In late 2024, We and AI were commissioned by the Ada Lovelace Institute, as part of its project ‘Making good’, to provide a critical AI literacy programme. Through deliberative engagement with communities in Belfast, Brixton and Southampton, the research explored how people feel about AI, and what opportunities they see for AI and their communities in their visions of public good. The project looked particularly at the role of place and community in shaping people’s expectations of AI and public good, recruiting a diverse range of community researchers and participants from three distinct locations.
We and AI designed and facilitated a critical AI literacy programme consisting of three learning workshops delivered to local participants and community researchers. These workshops aimed to support participants to connect their visions of public ‘good’ with the broader debates and questions relating to AI technologies, so that they could reshape and re-envisage those visions in the light of AI. The programme was designed in close partnership with the Ada Lovelace Institute. Because ‘public good’ is a broad concept that means different things to different people, the programme needed to provide a base from which people could start making their own connections between AI and their visions of public good, whilst also being appropriate for individuals with differing experiential knowledge of AI.
The Ada Lovelace Institute’s Public Good in AI workstream, of which this project formed a part, deliberately left agendas open for participants to devise. The community-based element of the research design created opportunities for surprises in research responses and was exploratory, following the lead of participants. The challenge was therefore to establish some fundamental understandings and principles across the groups and individuals that would enable and encourage anyone to join their unforeseen agendas up with AI debates across society, and to see connections between their interests and AI even where these links weren’t obvious.
A further challenge was that the research cohort was hugely diverse, with no prescribed demographic profile, and spread across three different places. There is not yet much literature on how to educate people about sociotechnical concepts of AI, let alone for a range of people experiencing AI adoption or ‘diffusion’ in different ways and making sense of AI from extremely different perspectives. Tania Duarte and Lizzie Remfry at We and AI found it valuable to have the opportunity to build on their experience designing public AI learning interventions with Research Lead Eleanor O’Keeffe at the Ada Lovelace Institute, developing an approach for a complex context but with a clear and useful objective.
Given this open-ended visioning of public good, a critical AI literacy approach was essential. Critical AI literacy can be differentiated from approaches to AI literacy that serve primarily to facilitate AI use and, ultimately, consumer adoption. Rather, it supports ways of understanding how AI is an ambiguous term and field which needs unpicking and exploring beyond its role as a technology for increasing individual productivity. Critical AI literacy provides a deeper understanding of the human and societal inputs which create, shape and uphold AI systems and their outputs, and enough understanding of technological processes to discern and critique the appropriate use of AI systems.
As the Research Lead at the Ada Lovelace Institute put it:
“At a time when public deliberation on AI is urgently needed, organisations like We and AI play a vital role in the ecosystem. Their substantial expertise in both the theory and practice of Critical AI Literacy has been fundamental to shaping an information environment that challenges people to reflect on the societal implications of AI, while empowering them to navigate these issues in line with their own priorities and values.”
The learning aims of each of the three workshops are summarised in Table 1 below.
This programme followed a guided enquiry-based learning approach, which facilitates the development of new knowledge and attitudes through guided and independent exploration of complex questions. This pedagogical approach is particularly well suited to topics and problems for which there is no singular answer(1), as is the case with AI. Workshop content was designed to build familiarity with core concepts relating to AI technologies, as well as to support participants to develop and apply critical thinking practices. This means that rather than providing simplified technical knowledge about specific AI technologies, the workshops sought to develop critical understanding and flexible ways of thinking by engaging participants’ experiential knowledge of AI and their society.
Each of the three workshops began with traditional information delivery via a presentation, followed by an interactive exercise or game-based task to be completed in groups. The presentations were interspersed with interactive questions or polls which all participants could respond to in situ. To encourage ongoing learning and reflection outside the workshops, a Community Wall hosted via Padlet provided a space for participants to complete homework tasks, such as posting an image of an AI system they had interacted with.
Table 1. Learning aims

| Workshop | Aims |
| --- | --- |
| 1: What is AI? | To introduce uncertainty around how we define AI, and situate AI within its historical, social and political context; to identify why AI is neither artificial nor intelligent; to assess the multifaceted and material dimensions of AI, which include people, activities, institutions and raw materials; to surface and explore questions about power and ownership which are inherent within all these stages. Group activity: Sort the components of AI |
| 2: AI and society | To identify why AI is not neutral and includes embedded values, opinions and assumptions; to recognise patterns of bias and stereotypes that may be present in AI tools; to explore through real-world examples how AI systems are used and can reflect the biases in data and society. Group activity: Hands-on experimentation |
| 3: AI and our futures | To identify some unintended consequences that may occur in AI systems; to explore how cultural, political and neo-liberal value systems are embodied within AI design and development; to illustrate that different value systems may lead to competing pictures of ‘good’, where trade-offs between these values may need to be made. Group activity: Role play |
Below we highlight some of the interactive or game-based activities hosted throughout the programme.
Activity 1: Sort the components of AI
This first activity was designed to make the components of AI more transparent, breaking down the so-called ‘black box’ nature of AI. It was designed to facilitate thinking around the human and resource dimensions hidden in AI, which include people, activities, institutions, and raw materials. In groups, participants were supported to sort objects of AI into categories: resources, money, building and daily use. Examples of objects included human data labellers, air conditioning units, a piece of silicon and a stack of money. For each object, participants needed to discuss how and where it was important in AI, and decide which category to place it in. Participants interacted both with the exercise and with each other, facilitating the exchange of technical and social knowledge.
Activity 2: Hands-on experimentation
Using a real-world example of an AI technology, the second activity used experiential learning to foster understanding and reflection on personal experiences. ChatGPT is a conversational chatbot service that can generate human-like text responses based on a user’s input. In groups, participants were supported to explore and interact with ChatGPT by entering a series of predefined prompts. These prompts centred on writing a short creative story; participants were encouraged to edit the prompts and explore how they could influence the output through active experimentation. The initial prompts were designed to reveal known biases within generative AI, for example the incorrect assumption that all nurses are female. Participants were encouraged to reflect on this experience and on how it affected their understanding of generative AI systems.
Activity 3: Role play
A final activity, built on game-based learning, used a role-playing game designed to encourage participants to explore AI scenarios through the lens of a specific set of values. Participants were assigned a specific value: cooperation, equality, equity or care, which they needed to carry through the whole game. Game-based learning provides the opportunity to explore real or imagined scenarios in an immersive way, whilst facilitators act as ‘game-masters’, supporting participants to work through the game together(2). This potential to create simulated experiences allowed learners to explore different perspectives and values, which at times may have differed from their existing beliefs.
In the game, each team was presented with a scenario and a set of activities to work through and discuss. An example scenario was: “You are a team working on an AI tool for receptionists that optimises GP schedules. You need to work through these letters and news articles to work out what has happened with this AI tool”. As participants moved through the game, the AI tool had both successes and failures. Through this situated learning, participants could explore how holding different values could lead to different consequences, and to competing pictures of ‘public good’ when designing and using AI tools in practice.
The programme offered different opportunities for participants and community researchers to engage with critical AI thinking. The Ada Lovelace Institute particularly wanted to take a critical AI literacy approach because it acknowledged that its research on public opinion of AI and public good was often hindered by participants’ preconceptions about AI. It wanted to move beyond simply inviting participants into a conversation about technology, to empowering them through an increased understanding of AI’s capabilities and greater knowledge of its wider social implications. In this project, critical AI literacy helped open up avenues to recognise and appreciate the different forms of experiential and practical knowledge that communities hold about AI and its implications within their community.
There were tensions between what we were trying to achieve with a critical AI literacy approach and participants’ expectations and desires for consumer or practical skills. In an initial survey, when participants were asked what they hoped to learn in the workshops, common responses were “How to use AI as business support” (Participant 5), “How it can help in daily life and how it can make our lives better and less stressful” (Participant 7) or “Even though I use AI daily in terms of searching, I am interested in finding out how to maximise the capabilities of AI” (Participant 17). These responses reflect the common narrative framing AI in terms of productivity and economic gains.
Through this programme the dialogic learning style was important: participants learned from both the exercise-based approach and from each other, engaging in a sharing of experiential knowledge. Participants also created new forms of knowledge through these interactions. For example, during the hands-on experimentation with ChatGPT, one group narrowed in on how AI can act as a mirror, reflecting society back, whilst another organically concluded that the simplification and standardisation of language produced by generative AI systems had serious future implications. Participants were very reflective about their use of AI systems and how this shapes their perspectives on public good: “Can AI fix any issue if profit is at the heart of it?” (Participant 25).
As participants engaged in sense-making, both through the literacy programme and within their own lives, the project found that many participants saw AI as a transformational intervention that would have ripple effects across all of society, rather than discrete AI tools that could be used within a particular setting. One of the findings of the project was that “while it was possible for people to develop views about specific technologies in particular contexts, it was also possible for them to hold a parallel set of views, which related to the wider systemic aspects of AI.” As one participant noted:
“It has struck me that the ramifications of AI amount to yet another form of colonialism. Just like the capitalist/corporate system is probably impossible to extricate yourself from, the coming domination of everything we see, hear and do – by algorithms – will make it very difficult not to be a part of it.”
The Ada Lovelace Institute ‘Making Good’ report concluded that “many of the study’s participants felt better prepared to benefit from, navigate and purposely contest any decisions about AI, because they had grown more confident in understanding what implications AI held for them, their communities and their values.”
This project sits within core work at We and AI, combining critical AI literacy, creative approaches and community-based groups. It also gave us the space to work with experts and try new approaches, particularly using role play to explore values within AI system design. The AI literacy programme built on many of our game-based approaches, which we also went on to present at the 2025 ACM Conference on Fairness, Accountability, and Transparency, and as part of London Data Week. We look forward to delivering similar interventions for projects or as workshops in the future, and welcome enquiries.
(1) Lee, V. S., Greene, D. B., Odom, J., Schechter, E., & Slatta, R. W. (2004). What is inquiry guided learning. In V. S. Lee (Ed.), Teaching and learning through inquiry: A guidebook for institutions and instructors (pp. 3-15). Sterling, VA: Stylus Publishing.
(2) Gjedde, L. (2013). Role game playing as a platform for creative and collaborative learning. In European Conference on Games Based Learning (p. 190). Academic Conferences International Limited.
Further reading: Ada Lovelace Institute ‘Making good’ report
