
Skills-based digital literacy has failed to provide public benefit. So why is upskilling the UK’s only plan? As part of a range of bombastic statements about the UK’s uncritical embrace of AI in everything, the Government recently announced the AI Skills Boost. It promised “free AI training for all,” and claimed that the courses will […]
Read More
A broad taxonomy of AI chatbot harms caused to individual users Introduction At We and AI, we have long been concerned about the lack of awareness of the wide range of harms caused by AI chatbots. Despite the huge encouragement to use chatbots as companions, coworkers, doctors, teachers, counsellors, life advisors, friends, […]
Read More
A curated research session at the Hype Studies Conference, “(Don’t) Believe the Hype?!” 10-12 September 2025, Barcelona By Cinzia Pusceddu Better Images of AI and We and AI have been exploring the role of visual and narrative metaphors in shaping our understanding of AI. As part of this we invited some researchers who have been […]
Read More
This article looks at the global shortage of teachers and how AI might be used to supplement education where provision is lacking, and argues that it could be less biased than human teachers, thereby reducing inequity.
Read More
This article considers the various ways AI can be used during the pandemic to boost virtual learning, focusing on the Chinese company Squirrel AI, which reports good results with computer tutors and personalised learning, and weighing up the risks, such as the surveillance of Muslim Uighurs in Xinjiang.
Read More
Facial recognition AI, combined with other AI assessment, is used to monitor how children are performing and boost their performance. However, there is concern that it may not work as well for students of non-Chinese ethnicities, who were not represented in the training data.
Read More
This article looks at what issues may arise for children from minority and underprivileged communities from replacing teachers with AI.
Read More
Automated essay grading in the US has been shown to mark down African American students and those from other countries.
Read More
This short article gives an example of how predictive algorithms can penalise underrepresented groups of people. In this case, students from Guam had their pass rate underestimated compared with other nationalities: the data set used to build the prediction model contained too few students from Guam for the predictions to be sufficiently accurate.
Read More
This article details the algorithm used to inform A Level results for students who could not take exams due to the 2020 pandemic. The algorithm took into account the postcode of the student, which meant that students from lower-income areas were more likely to have their grade reduced whilst students in higher-income areas were […]
Read More
An outcry over alleged algorithmic bias against pupils from more disadvantaged backgrounds has now left teenagers and experts alike calling for greater scrutiny of the technology.
Read More
Case study explaining the algorithmic bias inherent in grade prediction for A Level students. It demonstrates the real-world impact AI can have if not scrutinised for bias.
Read More
This news story gives an example of bias in an algorithm governing the first round of admissions to a medical university. The data used to generate the algorithm’s output showed bias against both women and people with non-European-looking names.
Read More
An article detailing how AI might change admissions: the process, the consequences, and how students from some countries could be at risk of bias.
Read More
The paper provides pillars of action for the AI community, including a focus on climate justice, where the author recommends that environmental impacts should not be externalised onto the most marginalised populations, and that the gains are not captured only by digitally mature countries in the Global North. This will require centring front-line communities […]
Read More