Opinion: By Tania Duarte
In the absence of any funded civil-society or national school and adult AI literacy initiatives in the UK, tech companies fill the vacuum. Snapchat, for instance, have an AI literacy guide for their My AI chatbot, and Google DeepMind have developed Experience AI.
It can seem sensible for educators and policymakers to make use of the free resources and curricula that tech companies create. Many hold a reverence for the seemingly matchless expertise and knowledge of tech engineers and leaders, a reverence cultivated by media reporting that reaches for terms such as “AI Godfathers”. Ironically, a disempowering lack of AI literacy among educators and policymakers themselves, combined with an eye on the budget, reinforces this deference.
However, as they say, there is no such thing as a free lunch. Allowing tech companies to shape and dictate how publics learn about AI means that, right from the outset, they control the narrative about how we should think about technology, how it impacts us, and what questions could and should be asked.
In fact, providing education for users, whether organisations or individuals, is an established content marketing strategy:
“An educated customer is a better customer. As a result of education, the customer journey can be dramatically shortened, conversion rates are higher and product adoption is faster… Marketing strategies based on content marketing with a focus on customer education has proved beneficial for many software brands… long before their customers seriously consider purchasing their “software-as-a-service” solutions.” (Knihová, Ladislava (2021). The role of educational content in a digital marketing strategy, 12, pp. 162–178.)
There is more in it for AI sellers than there is for society.
From the outset, the idea of technology companies developing literacy courses presents tensions. It would be negligent, for instance, to trust that their pedagogy is not inherently biased towards creating mental models of AI which support their own commercial imperatives. Because it serves their interests, a company that develops a technology can implicitly (and even explicitly) paint the picture that the technology will yield benefits for humanity. Companies that aim for Artificial General Intelligence (AGI), for instance, while admitting there might be some ethical issues that need to be addressed, have an interest in reassuring us that these can be mitigated in the name of innovation. In some cases, this extends to the idea that AGI is inevitable.
OpenAI, for example, states that:
“AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.” (OpenAI blog: Planning for AGI and beyond)
What’s more, when companies set the pedagogy, companies choose what everything means. OpenAI define AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Their mission and definition (which is only one, contested, definition of AGI) do not allow for questioning whether AGI is inevitable, whether it should be pursued, or whether technology can benefit all of humanity within the current AI ecosystem and its incentives. These topics are under considerable debate, and this discourse is what education should be fostering, not a one-sided, potentially hegemonic perspective which encourages questioning only of the resultant details, based on predetermined assumptions.
Although to date OpenAI do not provide AI education, they are turned to as ‘experts’ within educational resources, for example in the BBC’s “A simple guide to help you understand AI”. The first callout in the guide is from Sam Altman, CEO of OpenAI, presenting his “insider view” of the future of chatbots:
“In 10 years, I think we will have chatbots that work as an expert in any domain you’d like. So you will be able to ask an expert doctor, an expert teacher, an expert lawyer whatever you need and have those systems go accomplish things for you.” (BBC News website: A simple guide to help you understand AI)
The guide ends with an “expert view” quote from Mo Gawdat, a former Google employee, who proposes that AI must be raised like children. Metaphors comparing AI systems to learning children are misleading and problematic:
“The answer to our future, if we were to re-imagine it, is not found in trying to control the machines or program them in ways that restrict them to serving humanity, it’s found in raising them like a sentient being, and literally raising them like one of our children.” (BBC News website: A simple guide to help you understand AI)
It is clear that the guide presents ways to “help us understand AI” the way big tech companies want us to, or, in other words, in ways which defer to industry expertise with a clear agenda about what the future will look like and how we should get there. This discourages critical thinking about their motivations, builds unhelpful mental models of AI, and dampens the reader’s imagination about alternative ideas of what the future might look like.
Resources provided by Google Cloud also show commercial agendas in the way AI tools are positioned. For example, “AI is the backbone of innovation in modern computing, unlocking value for individuals and businesses” is part of a short explainer description provided by Google Cloud. This shows the lens through which tech companies see their products, rather than a more neutral perspective free of evocative language such as AI being the “backbone” of innovation and “unlocking” value.
Indeed, in framing the way we approach AI, the definitions of AI techniques presented as fact by AI education resources are limiting and problematic. The Snapchat guide to AI, for instance, is recommended as a resource for teachers by the “independent and cross sector” AI in Education body. Leaving aside the considerable ethical issues posed by Snapchat’s My AI, which its positioning on the AI in Education hub seems to ignore (and potentially whitewash), the guide itself includes dangerous misinformation. It states:
“Large language models are like a very smart and well-read friend who can understand the context of a conversation and provide insightful responses. They can generate text that is similar to human writing and can be used to help improve language translation, text summarisation, and other natural language processing tasks.” (Snapchat, My AI: A guide to the AI in my pocket)
This is misleading: AI chatbots can identify the context in which different words and phrases appear, but they certainly do not understand real-world context in the way a friend would. It is also dangerous. The description invites false confidence in the “insightful responses” by omitting that these systems are simply predicting what answers would typically follow a question, and as such have no direct relationship with fact or accuracy. It also encourages children to think of chatbots as smart (i.e. correct and authoritative), human-like, and caring (i.e. a friend). It discourages the critical thinking and fact-checking required of any AI chatbot output, and builds the false premise that their companion understands them and the context of the query (again: it can only understand the context of words in relation to each other, not to reality).

Given the impressive and plausibly human nature of AI chatbot output, children (and indeed adults) are already at risk of misinterpreting output as carrying intent and sentience in the way human speech does. This has been called the “ELIZA effect”, after ELIZA, the first chatbot, built in the 1960s and nowhere near as sophisticated as the systems we have now. What is needed from an educational resource is one that supports children in NOT falling into this trap, not one which makes it more likely. Snapchat, however, does not seem concerned with children’s welfare; its objective is to get them to use its app.
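The point that chatbots predict likely continuations rather than consult facts can be made concrete with a deliberately tiny sketch (the corpus and code below are hypothetical, and vastly simpler than a real large language model): a model trained only on word frequencies will repeat whatever its training text says most often, whether or not it is true.

```python
from collections import Counter, defaultdict

# A toy training corpus (hypothetical). The model only ever sees text,
# never facts -- and this text repeats a falsehood more often than the truth.
corpus = (
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the moon is made of rock . "
).split()

# Count which word most often follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- no notion of truth."""
    return following[word].most_common(1)[0][0]

# Starting from "of", the model completes with the most frequent
# continuation in its training text, not the factually correct one.
print(predict_next("of"))  # prints "cheese"
```

Real chatbots use far richer statistics over vastly more text, but the principle is the same: the output reflects what tends to follow in the training data, which is why plausibility and accuracy can come apart.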
We are not advocating that tech companies should play no part in creating educational resources, and in many cases they are filling critical gaps. Their employees have expertise in the techniques, and they often have deep pockets, whether from VC funding raised by exciting investors about their products or from selling us their technology. It is also much better to collaborate and bring in multiple viewpoints to produce balanced materials. What we should not be doing is allowing the companies themselves to frame how we should think about AI and which questions we are equipped to ask. Any input into school curricula or informal education settings should only be accepted within an established, independent AI literacy framework which includes multiple perspectives and allows for more technosocial viewpoints.
However, we cannot afford to trust tech companies to provide an unbiased perspective, or expect that they will encourage critique of their impact and products. We have seen employees who sought to question ethics silenced, as when Google sacked researchers who wished to publish research questioning the potential impact of releasing large language models. The paper they tried to stifle proved incredibly prescient and influential. If their own employees lack intellectual freedom and are sacked for questioning narratives, how can we expect their courses to encourage balance and critical thinking?
Critical thinking skills in relation to AI are urgently needed at a national, more independent level. They need to be a key part of an AI literacy strategy, developed by a range of independent AI and pedagogy experts building on established methodology. We need to define, at a societal level, what our children, our country, and the world at large need to know to make decisions about AI. We cannot leave it to those who profit from the use and adoption of AI technologies to define our mental models of the many complex and interrelated fields, systems, and applications which sit under the term AI, or of the conditions under which they are produced and regulated.
With thanks to Nick Barrow for help with editing.