To increase public power in AI decision-making, we must increase critical AI literacy

We and AI is a nonprofit volunteer social justice organisation supporting greater public input into whether and how AI is used.

Our main areas of work are advocacy for critical AI literacy; the design, development and delivery of literacy content; and guidance on developing responsible and inclusive discourse about AI.

Read more

Stock images strongly influence the way people think about the topics they illustrate. Research has repeatedly shown that many images of artificial intelligence are misleading and unhelpful. Coordinated by We and AI, Better Images of AI is a non-profit collaboration which runs a free library of more inclusive and transparent images that anyone can use, and a community blog exploring visual narratives of AI.

Access the collection →


The UK Government recently announced the AI Skills Boost. It promised “free AI training for all,” and claimed that the courses will give people the skills needed to use AI tools effectively. We and AI have contributed to articles in Computer Weekly and Tech Policy Press highlighting the inadequacy of both the approach and the courses, and host an Open Letter calling for investment not just in skills but in public literacy.

Image: Zoya Yasmine / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Read the full article →


We edited a Topical Collection of research articles in the AI and Ethics Journal to explore perspectives on the impact of AI Hype and its ethical implications. The articles in this collection examine the ways in which myths, misrepresentation, and overinflation associated with AI capabilities and performance can influence policy agendas, business decisions, and individuals. The authors identify how mechanisms such as framing, linguistic devices, terminology, and representation can influence mental models and the trajectory of future developments relating to AI.

Image: Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Access the collection →


With soaring investments into AI technologies and infrastructure and unabating coverage of AI stories in the media, AI is widely perceived as inevitable. However, this perception is shaped by narratives that diminish our agency as humans and present technology as advancing independently of human activity and interests. In response, we have created a framework outlining initiatives that actively defy narratives about the inevitability of AI. It explores instances of Resisting, Refusing, Reimagining and/or Reclaiming AI, offering a conceptual framework for challenging the power dynamics underpinning AI technologies in a variety of different ways.

Access the full paper →


AI chatbots have been integrated into many aspects of daily life, often with little discussion or acknowledgement of the possible harms that this technology may cause. We conducted a narrative review of the current and potential harms that can be caused through interaction with AI chatbots, and identified and mapped 13 different types of harms to individual users, many of which are still hidden.

Image: Hanna Barakat  & Archival Images of AI + AIxDESIGN / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Read the full article →


Better Images of AI and We and AI have been exploring the role of visual and narrative metaphors in shaping our understanding of AI. As part of this, we invited researchers who have approached the topic from different angles to shed light on the ways metaphors can contribute to hype specifically. This informs a project we are conducting to build an annotated AI metaphor database.

Image: Rick Payne and team / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Read the full article →

Our mission is to enable the development of critical thinking skills about AI, particularly among those currently most underrepresented in AI decisions and data, and most vulnerable to the consequences of automation. Our focus is to develop accessible interventions to support people in navigating AI narratives and to make informed decisions which genuinely align with their values and interests.

Blog

  • Reflecting on 5 Years of We and AI

Celebrating Community, Curiosity, and Collective Change Rameez Raja is a data analytics engineer and storyteller, currently pursuing an MSc in AI at the University of Bath. He is also an active We and AI volunteer, and shares his perspective on the first five years of non-profit organisation We and AI. Last month, I had the privilege…

  • Deepfakes and Synthetic Media Workshops

    With the rise of generative AI tools like ChatGPT and image generators, deepfakes have become more convincing and widespread. Anyone with access to a computer can replace a person’s face on-screen with another’s, or create a convincing impression of a person’s voice using speech synthesis. While these technologies may have creative and educational uses, they…

  • What did we learn from “Framing Deepfakes”?

Dr Patricia Gestoso and Medina Bakayeva This is a recap of an event we hosted in July 2024. Watch the event recording here. In light of the malicious uses of deepfakes, governments have engaged in policy debate, media outlets have frantically reported on the issue, and academics have explored the technological advancements of AI-generated media from different…

  • Free online webinar to explore research on the ethical implications of AI Hype, by We and AI on 23rd Sep 2024

    • Explores the overinflation and misrepresentation of AI capabilities, featuring insights from over 30 experts across various disciplines. • Examines AI hype’s impact on public discourse, policy, business, and more. • Hosted by We and AI, a nonprofit focused on AI literacy for critical thinking. [London, 2024.09.09]: We and AI, a nonprofit organisation committed to enabling…

  • Leaving AI explainers up to tech companies: What could go wrong?

    Opinion: By Tania Duarte In the absence of any funded civil society or national school and adult initiatives in the UK, tech companies fill the vacuum. For instance, Snapchat.AI have an AI literacy guide and Google DeepMind have developed Experience AI. It can seem sensible to educators and policymakers to make use of free resources…

  • An AI-first economy may be unsustainable and undemocratic

    Why preparing people needs to come first for our economy Opinion: by Ismael Kherroubi Garcia The UK government has consistently made the case in recent years for promoting innovation through investment in, and the development and deployment of, artificial intelligence (AI) research and systems. However, the government has not made any significant headway in filling…