Rameez Raja is a data analytics engineer and storyteller, currently pursuing an MSc in AI at the University of Bath. He is also an active We and AI volunteer, and shares this perspective on the first five years of the non-profit organisation.
Last month, I had the privilege of attending a deeply moving milestone event: the 5-Year Anniversary of We and AI — an organisation that has become a heart-space for critical thinking, creative activism, and collaborative hope in the world of artificial intelligence (AI). What began as a grassroots response to the hype, harm, and exclusionary narratives around AI has grown into something far more powerful: a living, breathing community dedicated to centering inclusion, justice, and public understanding in the AI conversation.
As we gathered on Zoom — dozens of us, some familiar faces and many new ones — it became clear that this wasn't just a celebration of an organisation. It was a celebration of a movement.
We heard from founders, collaborators, partners, and volunteers who have shaped the We and AI journey across its many projects — from Better Images of AI and Deepfakes Literacy workshops to Living with AI public courses and critical policy advocacy. There was laughter, reflection, some bittersweet honesty, and more than one proud mention of the legendary chaos of trying to run interactive online activities without the core funding to pay for platform costs (classic We and AI energy).
Tania Duarte, the ever-inspiring founder, reminded us why We and AI exists: to increase the diversity of people involved in decisions about how AI is designed and used. We and AI is here to enable critical thinking, build critical AI literacy, and ensure that AI systems genuinely reflect the values, interests, and lived experiences of the communities they affect — especially those who have historically been excluded or harmed by technology. This work doesn’t serve the hype cycle — it challenges it.
In 2025, AI headlines are dominated by a handful of familiar names. Huge concentrations of wealth and power are shaping the future of technology behind closed doors. The slide shared about Elon Musk sparked recognition among many in our community — not because of the personality, but because of what it represents: a system where a few decide and the rest are impacted.
AI isn't neutral. It mirrors and amplifies the inequalities of our world. That is why our mission is to help people — especially those least included in AI conversations — build the critical thinking skills needed to question, challenge, and shape these technologies.
We watched short video contributions from many inspiring volunteers about what We and AI means to them, along with some from our wider community of collaborators. One video that stayed with me was from Dr Mary Stevens, Experiments Programme Manager at Friends of the Earth. Mary has spent the last five years navigating the intersections between AI and environmental justice. Her voice, quiet but resolute, reminded me why this work matters.
She spoke of AI’s potential to democratise, especially in public systems — planning, engagement, and co-created digital assemblies. But she also shared her concern: that in the rush to adopt AI at any cost, we risk pushing our planet further past its boundaries. Black-box technologies, she warned, could become enablers of authoritarianism if we’re not careful. And yet — there was hope in her voice. Hope that environmental concerns are no longer sidelined in tech conversations. That spaces like We and AI are here to bridge these issues, not bypass them.
Another moment that moved me came from Dr Robert Elliott Smith, author of Rage Inside the Machine. I didn’t realise that Tania, our founder, had read his book in 2019 and it had helped inspire the start of We and AI.
Rob’s perspective on AI was both grounded and poetic: “AI is a terribly good aid — it helps a great deal. Then it becomes a burden, a work generator instead of a workshop.” That line — “a work generator instead of a workshop” — hit me. So often, tech is sold to us as a tool for liberation, but it can quickly become just another system that overwhelms rather than empowers. Dr Rob reminds us that true creativity, meaning-making, and ethics still reside in human connection. Between the artist and the viewer. Between the tool and its wielder. Between community and code.
Hearing from others like Dr Jennifer Ting (Co-founder, London Data Week), Tim Davies (Research Director, Connected by Data), and Steph Wright (Head of Scottish AI Alliance) showed how important We and AI is in giving people the confidence to speak up in tech spaces. Whether it was building their first Race and AI Toolkit or running workshops that brought critical AI literacy to schools, the impact has been huge.
We also heard powerful reflections from artists and researchers involved in the Better Images of AI project — a global effort to replace sci-fi robots and glowing blue brains with visuals that are more human, grounded, and socially aware. These images help people see what’s really at stake.
Tania took a moment to thank the many organisations and collaborators who have shaped this journey over the past five years and to celebrate their contributions. From the Ada Lovelace Institute, whose research has deepened public understanding of data and AI, to the Joseph Rowntree Foundation and the Netherlands Institute for Sound and Vision — each has played a part in building a richer, more inclusive AI ecosystem.
These aren’t just partnerships; they are patterns — patterns of collaboration, care, and collective action. It reminded me that We and AI has never been about a single voice or vision, but about weaving together a tapestry of efforts — community by community, project by project — to make AI something people can truly see themselves in.
One of the most moving moments of the evening came through a collective exercise led by Dr Patricia Gestoso, a facilitator and systems thinker, who first met Tania years ago at a Women in Tech conference — a chance encounter that planted seeds of deep mutual respect and shared purpose. Marissa Ellis, also present at that early meeting, reflected on how moments like those — when ideas are still just sparks — often blossom into something bigger when they’re nurtured in community.
Dr Gestoso invited us into what she described as a “thinking environment” — a space for presence, pause, and active listening. As we all settled into silence, Tania read aloud the haunting and timeless poem First They Came by Pastor Martin Niemöller:
First they came for the Communists
And I did not speak out
Because I was not a Communist
Then they came for the Socialists
And I did not speak out
Because I was not a Socialist
Then they came for the trade unionists
And I did not speak out
Because I was not a trade unionist
Then they came for the Jews
And I did not speak out
Because I was not a Jew
Then they came for me
And there was no one left
To speak out for me
We were then invited to “waterfall share”, responding to the question: What is your freshest thought that comes to mind?
What followed was a raw and honest outpouring. People wrote of not wanting to stay silent when it’s uncomfortable. Of being afraid to speak — and doing it anyway. One person said, “support is listening with your whole self.” Another said, “it’s choosing to stand beside someone, even when it’s not your fight.” Someone else offered, “it means checking your privilege, not just once, but again and again.”
There was a shared recognition that silence, especially in moments of injustice, is not neutrality — it’s complicity. And that We and AI has always been about creating spaces where people feel safe enough to not just learn, but to act.
This exercise wasn’t just an emotional interlude. It was a reminder of why we do this work. That beyond policies, platforms, or code, what matters is our shared humanity, and the courage it takes to show up for one another.
We also looked forward. Tania and Ismail introduced an exciting new crowdsourced project — challenging the myths of AI inevitability through the “4 R’s”: Resist, Refuse, Reclaim, and Reimagine. A living toolkit for hope, in a time of mounting resignation.
There was something so human about the whole event. We weren’t just talking about systems and models — we were talking about each other. About relationships, purpose, joy, burnout, and what it means to do the hard work of imagining better futures together, without losing ourselves in the noise.
I really enjoyed the breakout sessions. In mine, we explored how to better engage different stakeholders in AI conversations. Hearing from Dr Dagmar Monett and others sparked ideas I hadn't considered, and made me reflect on the methods I can bring into my day-to-day life moving forward. The word clouds also brought a smile — especially the one asking for our least favourite word of 2025 so far. "DOGE", "Trump", and "agentic" made the list, while "woke" appeared as a top favourite. I think that contrast speaks for itself.
What stayed with me most? Honestly, the energy. The humility. The clear-eyed truth that this work isn't easy — but it's necessary. And the sense that We and AI isn't just a name. It's a promise.
A promise that we can ask better questions. That we can shape the technologies shaping us. That inclusion isn’t an afterthought, but the very starting point.
I left the call not just informed, but inspired. And reminded that even in a world of language models and machine learning, our humanity — our stories, art, and care — is still what matters most.
If you’ve ever felt overwhelmed by AI headlines or underqualified to be part of the conversation — you’re exactly who this community is for.
Happy 5th Birthday to We and AI. Here’s to five more years of radical inclusion and collective imagining.
RAMEEZ RAJA