Meet the Team

Our volunteers are driven to make AI work for everyone, and come together from many backgrounds, fields and places! You can find out about some of them here.

Directors

Tania Duarte

Founder

Tania is the Founder of We and AI – a UK non-profit focusing on critical AI literacy, which runs the Better Images of AI library. She is on the Founding Editorial Board of the AI and Ethics Journal, the Editorial Board of the RSA (Royal Society of Arts) Journal, and a Lead for the RSA Responsible Artificial Intelligence Network (RAIN) and for TLA Tech for Disability. Tania coordinated the definitions group for the IEEE P7015 Standard for Data and AI Literacy, Skills, and Readiness, and was on the Public Engagement and Ecosystem Strategy Advisory Board of The Alan Turing Institute. She was named one of 100 Brilliant Women in AI Ethics™ 2021 and one of Computer Weekly’s 20 Most Influential Women in UK Tech 2025. Prior to this, Tania was CMO at TPX Impact and head of marketing and communications at FutureLearn; she has an MBA and a Diploma in Digital Business Leadership.

Picture of Steph Wright, a smiling woman of East Asian heritage with short hair

Steph Wright

Board Director

Steph has a diverse background ranging from astrophysics to genomics in academia, film & TV, dance and the third sector. A leader in, and advocate for, ethical, inclusive and responsible AI, she works to ensure that technology benefits the many and not the few. She is currently leading on the delivery of Scotland’s national AI Strategy. She was recognised as one of the 100 Brilliant Women in AI Ethics in 2023, one of the Top 10 Women in Tech in Scotland in 2023 and recently named in the 2025 Digital Leaders AI 100 UK list. She was also awarded the 2024 DataIQ Award for Data & AI For Good Champion.

Marc Goblot

Executive Board Director
Marc spent many years in tech leadership in the creative industries and at Accenture.
He shifted his focus fully to inclusive and ethical technology, supporting neurodivergent, disabled and social causes, after advocating for his daughter’s services and his own late assessment of autism and ADHD.
He founded Tech For Disability within Tech London Advocates and Global Tech Advocates to raise the profile of the disability and neurodivergent ecosystem and of inclusive innovation. He met Tania Duarte through that group just as she was starting up We and AI, and they have collaborated closely across organisations ever since.
As an RSA Fellow, he is a committee member and co-lead of the Responsible AI, Inclusive Work, Systems Thinking, and Health & Care networks. At the British Computer Society he manages the Neurodiverse IT & AI groups, and advises UK government departments, via the Disability Unit, on assistive tech and the impact of AI on marginalised groups.
He is now launching the Digital Diversity Living Lab, a social tech research and development business conducting user and systems research with those whose voices go unheard, and co-designing inclusive innovation solutions in joint ventures. He draws on a network of specialist expertise, academic disciplines, and lived experience.
Zoya is of South Asian heritage, with long dark hair and a white top

Zoya Yasmine

Company Secretary and Board Director

Zoya is currently pursuing a DPhil in law at the University of Oxford. Previously, she completed an MPhil in the Ethics of AI, Data, and Algorithms at the University of Cambridge and an LLB at LSE. Her thesis focuses on legal overlaps in UK intellectual property and data protection laws and their implications for mitigating biases in medical machine learning models.

Her other research interests include privacy-enhancing technologies in health research, whistleblowing in the technology industry, and human-in-the-loop systems. Alongside academic research, Zoya has worked at numerous start-ups and organisations in the healthcare and AI industry, including GSK, the General Medical Council, and BenevolentAI.

Mark Burey

Non-Executive Board Director

Mark is a highly experienced communications professional and Group Director of PR, Communications, and Marketing at Harrow, Richmond, and Uxbridge Colleges (HRUC) — one of the largest college groups in England. He brings extensive experience from roles at esteemed organisations including The Alan Turing Institute, the national institute for data science and AI, where he was responsible for promoting, protecting and building its reputation for research. Mark has also worked with the BBC’s news and current affairs publicity team, London College of Fashion, University of the Arts, and University of East London, as well as key positions in local government.

His expertise spans strategic communications, public relations, digital engagement, community outreach, and event management. He has held senior roles in Tower Hamlets, Newham, Waltham Forest, and Havering, driving impactful communications strategies across diverse sectors.

Beyond his professional career, Mark is passionate about the creative arts, particularly film, photography, and documentary storytelling. Committed to education, he actively mentors students, supporting their academic and professional growth. His dedication to mentorship reflects his belief in its transformative impact on future generations.

Ismael Kherroubi Garcia

Associate Director

Ismael has been working in the AI ethics space since 2020, when he worked on establishing the Alan Turing Institute’s research ethics committee. He believes that responsible AI cultures can be promoted through practical organisational mechanisms, as informed by his studies in business and philosophy. Since 2022, Ismael has been offering AI ethics and research governance consulting at Kairoi, helping organisations identify crucial tech decisions, anticipate their consequences and implement safeguards to guide decision-making processes. Since 2023, Ismael also leads the Fellow-led AI Interest Group at the RSA (Royal Society of Arts, Manufactures and Commerce). Through practical advice and rigorous research, Ismael promotes the responsible AI revolution, enabling thoughtfulness and inclusivity in the design, development, deployment, usage and governance of AI research and systems.

Members

Alina is a young East Asian woman with long dark hair in a dark shirt

Alina Huang

Alina Huang is a Year 13 student. Last summer, she conducted research through the UCSB Research Mentorship Program, examining how NLP model design shapes algorithmic bias in AI curricula. She founded the Fair Tech Policy Lab, leading an international team to explore algorithmic bias and AI governance, and is a member of the UN AI for Good London Hub. She organized a student policy hackathon on algorithmic fairness. Alina hopes to study technology, ethics, and public policy at university.

Ella is young with long brown hair, and sits smiling with her chin resting on her hand

Ella Markham

Ella is a final-year PhD student in the Centre for Doctoral Training in Natural Language Processing (NLP) at the University of Edinburgh, where her research focuses on building computational models of how people learn the meaning of words. She also holds a BA in Psychology and Linguistics from the University of Oxford.

Harriet is a young white woman with long blond hair and striking black glasses

Harriet Humfress

Harriet Humfress is a London-based undergraduate finalist studying Fine Art at St Edmund Hall, Oxford. Her research explores popular beliefs and myths around AI, and how these ideas of anthropomorphism heighten anxieties around embodiment and labour. Her work covers sound and video installations along with small tactile sculptures that aim to both satirise and inform viewers about how these digital systems cannot be unlinked from human emotion and power structures. Her projects often begin with data collection through surveys and workshops to build an understanding of how people engage with AI in their everyday lives, and these data sets then become the material for her works. She is developing this practice as a pathway into further study in AI ethics, with the ambition of advancing public understanding of AI. By using artistic methods to expose hidden assumptions about AI, she aims to contribute to broader conversations that empower people to engage critically with these tools.

Max is a young man with wavy brown hair and facial hair

Max Fullalove

Max graduated from the University of Cambridge, where he studied Human, Social and Political Sciences. He served as the Editor-in-Chief of the Cambridge Journal of Political Affairs’ tenth edition, and his final-year research critically analysed the application of AI systems to political governance. He volunteers as an editor at Agora NI, an open-forum foreign policy think tank, and works as a tutor. At We and AI, he is currently working on a project looking into the impacts of using metaphors when discussing AI.

Priti is a smiling young woman of South Asian heritage with long dark hair.

Priti Pandurangan

I build information-rich interfaces, moving fluidly between research, design, and prototyping. My personal practice is drawn to alternative ways of working with data – approaches that hold space for ambiguity, lived experience, and emotion. I care about what slips through the cracks – the things that don’t fit neatly in spreadsheets but still very much shape how we understand the world. In that sense, my work seeks to explore what counts as data and how we might honour what’s messy, intangible, and hard to measure.

Ramla is a smiling young woman with dark skin and glasses, wearing a brown hijab.

Ramla Anshur

Ramla is a design researcher and futurist, exploring how we can design more equitable, community-led and community-owned AI futures that embed cultural, ancestral, and indigenous knowledges through creative practice.

Bruna has long dark hair and leans against a wall

Bruna Martins

Bruna holds an MBA in Business Management, complemented by executive certifications in AI Ethics, Regulation and Compliance from Oxford University and Product Strategy from Northwestern Kellogg. She currently leads the AI Literacy Lab initiative, which, through practical advice, hands-on training and ongoing support, ensures that civil society organisations can make informed choices regarding how to engage with AI effectively and responsibly. With over a decade of experience in senior leadership at global technology startups, she brings expertise in communications and driving strategic transformations in fast-paced environments. Her background ranges from co-founding pioneering crowdsourced journalism collectives to managing complex B2B SaaS products and leading cross-regional teams. Currently serving as Board Member at Bridging Beyond and Advisory Board Member at PBI UK, Bruna combines her expertise with a commitment to ensuring emerging technologies serve humanity’s best interests.

Picture of Fotini smiling

Fotini Skarlatou

Fotini Skarlatou is an HR professional with 18 years of experience in various roles in the banking industry, ten of them in Internal Communications. She holds a BA in European Studies (German and Hispanic Studies) from Queen Mary University of London, and an MA in Communications, Media and Public Relations from the University of Leicester. In 2024 she completed the AI Fundamentals: Governance course offered by BlueDot Impact. Her interests include AI Governance, AI Ethics, and AI Safety. She aspires to actively contribute to the field, both within her role in HR and by participating in projects that further AI education and AI safety.

Picture of Joe Bourne with short brown hair

Joe Bourne

I am doing a PhD in Speculative Design and Emerging Technologies at Imagination Lancaster, and am a Partnership Development Lead at the Alan Turing Institute. I’m particularly interested in public understanding and imaginings of emerging technology, and the hopes and fears associated with it.

Picture of Laura with dark hair and glasses

Laura Martinez

Laura holds a PhD in Information and Communication Sciences and is currently a teaching and research assistant at the Department of Communication and Society of the University of Franche-Comté (ELLIADD). Her research focuses on visual and multimodal methods for analysing devices, mediations, representations, imaginaries and discourses in/about the (digital) urban space.

Headshot of Liam, who is white, against a blue background

Liam Palette

Liam is a master’s student in AI Ethics and Society at the University of Cambridge, with a background in military intelligence, a first-class degree in Intelligence and International Relations, and professional project management experience. They deliver talks on AI in education and engage with global experts on AI governance and responsible deployment.

A picture of a smiling black woman with shoulder-length braids and hoop earrings.

Onyedikachi Hope Amaechi-Okorie

My name is Onyedikachi Hope Amaechi-Okorie. I’m a Technical Community Advocate with over five years of experience in quality engineering, now fully focused on building inclusive, accessible, and empowering spaces in tech.

I currently support the JSON Schema community and contribute to Mozilla Common Voice, where I help document open data efforts and foster inclusive collaboration. I lead both technical and non-technical community programs, championing accessibility, clear documentation, and sustainable community growth.

I’m also the founder of Spectrum of Speech, a global community that celebrates speech diversity and empowers people who speak differently through storytelling, shared strategies, and collective support. My work sits at the intersection of open source, speech inclusion, and ethical voice technology, exploring how speech norms and voice tools can better reflect the richness of human communication.

Elo Esalomi

Elo is a Year 13 student. In March, she researched testing systems for AI chatbots to reduce biases and discriminatory content with Neha Adapala, winning 3rd place in an international competition. Elo has also collaborated with the AI Fringe, where she hosted a workshop titled ‘Responsible AI: A Policy Approach to Bias Eradication’. She also spoke at a panel discussion about synthetic media and its impact on youth as part of the AI + Society Forum. She hopes to study Computer Science and Philosophy at university.

Gulsen Guler

Gulsen is a researcher focusing on analysing the impacts of emergent technologies on society and building ways to create more equitable futures. As a former social worker, Gulsen draws on her lived experience in the field, which gives her research approach a strong focus on social justice.

Medina Bakayeva

Medina has experience in cyber policy and AI governance. She is a consultant in the Digital Hub at the European Bank for Reconstruction and Development, and has a Master’s degree in Global Governance and Ethics from UCL, with a focus on cyber policy and generative AI.

Neha Adapala

Neha is a Year 12 student who researched testing systems for AI chatbots and wrote a policy proposal with Elo Esalomi, winning 3rd place in an international competition. Neha has also collaborated with the AI Fringe, where she hosted a workshop titled ‘Responsible AI: A Policy Approach to Bias Eradication’. Additionally, she was a speaker at a panel discussion under the AI + Society Forum (in collaboration with the AI Fringe) talking about synthetic media and its implications for society. She hopes to study Computer Science at university.

Nicholas Barrow

Nick is a moral philosopher who specialises in the value of consciousness and its intersection with both the philosophy of technology and the philosophy of well-being. He is an advisor to We and AI and a Research Associate at the Institute of Ethics in Technology. He most recently worked with Patrick Haggard at UCL on the ethics of haptic technology. Before this, he worked as a research assistant on the Better Images of AI project. As an Artificial Intelligence Scholar, he achieved his master’s in the Philosophy of AI from the University of York (supervised by Prof. Annette Zimmermann), having previously completed his first-class undergraduate degree in Philosophy at the University of Kent.

Beckett LeClair

Beckett LeClair is the Head of Compliance at 5Rights Foundation, the international NGO that seeks to ensure children’s rights are upheld in the digital world. He has prior experience as a Senior Engineer across cybersecurity, safety, and AI R&D. Beckett now takes part in technology standards and policy development around the world.

Hannah Claus

Hannah Claus is a passionate and driven explorer, DeepMind scholar, and Research Assistant at the Ada Lovelace Institute with a vision to positively impact the world by implementing the latest advancements in AI. Fuelled by curiosity and a thirst for discovery, she is dedicated to studying all elements of AI, aiming to leverage connected fields, such as Robotics and NLP, for meaningful change. Beyond the technical aspects, Hannah is deeply invested in the ethical considerations of AI, emphasising the importance of understanding intelligent systems for coexistence. Committed to helping marginalised communities, she actively volunteers and provides opportunities for others to pursue their passions. Analysing the impact AI has on different societies and communities, she works on providing more accessible knowledge and skills on AI in order to improve AI literacy.

Valena Reich

Valena is an MPhil student in Ethics of AI at the University of Cambridge, funded by the Gates Cambridge Scholarship. Before Cambridge, she completed her BA in Philosophy at King’s College London. Her research interests include the ethics of AI, normative ethics, and political philosophy. Valena is researching the problem of moral uncertainty in AI (specifically, how to encode ethics in AI) as part of her dissertation. She has presented her philosophical work at multiple UK-based and international research conferences. At We and AI, she worked on the “AI Literacy” and “Demystifying AI” projects, and is currently leading the “Deepfakes and Generative AI” workshops, creating educational material on AI and its ethical implications. Valena hopes that by putting her philosophical skills and her knowledge of AI into practice, she will contribute towards more ethical progress in AI.

Dolapo Moses Apata

Dolapo M. Apata is a Research Assistant at We and AI, having completed a Master’s degree in Applied Artificial Intelligence and Data Analytics at the University of Bradford. His academic foundation extends to geology and petroleum geoscience. Throughout his professional journey, Mr. Apata has garnered valuable experience across diverse sectors encompassing education, GIS consultancy, sales, management, the extractive sector, and work within non-governmental organizations. His extensive range of expertise underscores his capacity for adaptability and his unwavering commitment to ongoing personal and professional development. He is particularly interested in ethical discussions around Artificial Intelligence (AI), especially its impact on individual lives, society and the economy, given that AI is a General Purpose Technology (GPT) capable of unprecedented and far-reaching impacts.

Lizzie Remfry

Lizzie is a Health Data Science PhD student at Queen Mary University of London exploring how we can build more equitable AI systems in collaboration with patients and clinicians. Lizzie has a background in Global Health and Psychology, and her research focuses on health inequalities.

Marissa Ellis

Marissa is a strategy, product and change consultant and the founder of Diversily. Her 20 years in tech have covered areas such as digital innovation, data analytics and business intelligence.

Diversily is on a mission to help others drive positive change through its self-service frameworks and workshops covering change, inclusion and leadership. Its flagship tool, The Change Canvas, supports positive change all around the world.

Ingrid Karikari

Ingrid is an experienced, versatile Digital Consultant and Strategist with a 20-year career of working on diverse public, private and third sector digital projects. She is passionate about creating digital products and services that empower, enrich and transform. She has worked on AI projects in this context and is equally excited by, and wary of, the possibilities that AI and emerging technologies present.

Picture of dark haired Javiera in the mountains

Javiera Montenegro

Javiera holds a Bachelor’s Degree in History and Political Science, and has a professional background in Arts Management, Data Privacy and Compliance. She is currently interested in the intersection of AI and ethics, foresight research, and circular economy models.