As part of a range of bombastic statements about the UK's uncritical embrace of AI in everything, the Government recently announced the AI Skills Boost. It promised "free AI training for all" and claimed that the courses would give people the skills needed to use AI tools effectively.
Shortly afterwards, users discovered inaccurate, inaccessible, dangerous, deceptive, and poor-quality courses on the £4.1 million AI Skills Hub. They noted the reliance on, and promotion of, US tech providers in the flagship programme. In contrast, we know of UK providers who offered to donate their existing AI skills courses (for example, courses focused on supporting the charity sector or on AI safety), only to be rejected.
It seems clear that the decision to award the contract to 'Big Four' commercial organisation PwC (rather than the proven national data, AI and digital skills providers who tendered) should be investigated. The refusal to work with UK providers in favour of increasing our reliance on US big tech, which does not bring taxes into our economy but does bring the technologies of authoritarianism and power inequity, is alarming and antidemocratic. It also runs counter to other countries' efforts to decouple from overreliance on the US, which is increasingly recognised as a dangerous strategy.
We and AI have contributed to articles in Computer Weekly and Tech Policy Press questioning this lack of 'digital sovereignty'. Beyond this, however, we feel that the Skills Hub represents a dangerous policy misjudgment and a continued affront to civil society. It reflects the refusal to support a wider programme of what we have called public AI literacy (itself a contested term, which we will discuss). This is a call we have now been making for six years, and one a wider group echoed in an open letter in July 2025, when the Skills Hub project was first announced.
The open letter voiced concerns over the lack of both utility and vision in the decision to focus purely on providing functional AI skills, as opposed to the national AI literacy provision offered by other countries. The letter, hosted by Connected by Data, was signed by civil society representatives and AI literacy experts. Yet despite initial agreements by DSIT and DfE to meet and discuss what additional support the people of the UK need to deal with the impact of AI, the offer was never followed through and no meeting has taken place.
Now that it is clear the warnings about the shortfalls of the Skills strategy and programme have been ignored, we are hosting an updated version of the letter on behalf of a wider group. The letter calls on the Government to stop stonewalling demands for investment in broader AI literacy training, and asks for a commitment to put the UK's national and public interests ahead of those of the US big tech providers on which the Skills Hub increases our reliance.
While there are many frameworks and definitions of AI literacy (for example, from the Council of Europe, the EU, and the OECD), most consider that AI literacy is the ability to understand, use, and critically evaluate AI technologies. This means knowing how AI systems work at a basic level, recognising their strengths and limitations, and being able to assess whether AI-generated information is accurate or biased. Crucially, it also involves understanding the ethical implications of AI, from privacy concerns to fairness issues, and using these tools responsibly. As AI becomes increasingly embedded in our workplaces, homes, and social interactions, AI literacy must enable people to make informed decisions about whether, when, and how to engage with these technologies.
Without an awareness of societal context, ethics and impacts, and without the option to choose a solution that does not involve AI, no amount of skills can enable the centring of the public interest, whether in AI use, investment, procurement or governance. We have therefore been deeply concerned by the degree of technical solutionism, unquestioning evangelism for AI adoption, and disregard for public voice, safety and power present in UK Government AI policy and messaging.
Such unequivocally techno-optimistic statements and funding seem, if not evidence of outright complicity with Silicon Valley capture, then symptoms of a lack of functional or critical AI literacy in government. They demonstrate an inability to consider commercial claims about the future of AI in context and with any critical enquiry. Either way, we seem fated to repeat past mistakes made through a lack of understanding of profit-centred social media technologies. The damage that misinformation and polarisation have caused to democracies across the world is seen and felt daily, yet still we lack adequate regulation.
This is why we have been calling for Critical AI Literacies, and working on researching, testing and defining the kind of information and engagement people really need: skills not just to be onboarded onto tech platforms, but to build an understanding of whether, how, and under what circumstances AI will provide benefit to them, in line with their values. This requires the confidence and knowledge to ask questions and to interrogate myths and unsubstantiated hype narratives. It requires encouragement to consider how agendas and interests might affect accountability, and to evaluate actual versus perceived performance.
From our work on AI literacy frameworks for organisations such as The Royal Society and the IEEE, it is clear that AI literacy should include a human element, critical enquiry, and the option of refusal. However, in many frameworks, the critical element is interpreted purely as critically evaluating the output of AI tools, or choosing one tool over another.
As Data & Society put it in their call for going beyond AI literacy to civic strength in an AI era:
“By framing AI adoption as a matter of individual effort rather than systemic limitations, the concept of AI literacy has been leveraged to market a simplistic solution to the very complex problem of job displacement”.
How sobering it is that in the UK even this co-option of the concept of AI literacy is not encouraged; instead, the focus is purely and uncritically on AI skills.
In addition to supporting this open letter, we are responding practically by building an open library of critical AI literacy resources, including research, lesson plans, workshop materials, curricula and frameworks. We aim to show that there is another way to learn about AI than being trained as a compliant consumer who must accept the consequences of automation and slop.
To support our call and find out more, you can:
Image credit: Bart Fish & Power Tools of AI / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/