Meet the Team

Our volunteers are driven to make AI work for everyone, and come together from many backgrounds, fields and places! You can find out about some of them here.

Directors

Alexa Segal

Alexa is a qualified senior solicitor advocate with over 10 years of corporate litigation and investigations experience at UK-based law firms and internationally.
This includes 6 years at Taylor Wessing LLP and 4 years at Macfarlanes LLP, where she advised on a broad range of matters including complex commercial and financial litigation, corporate and shareholder disputes, regulatory investigations and financial crime (including fraud and bribery).

Hannan Ali

Hannan Ali is a Funding Manager at City Bridge Foundation, a 900-year-old bridge-owning charity and London’s biggest independent funder, where he works on various place-based grantmaking programmes covering a range of thematic areas. He is a Trustee, Governor, Ambassador, Committee Member, Panellist, Mentor, and Community Volunteer. He graduated in 2014 from the University of Greenwich with a 2:1 BA in Business Management and has since gained 9 years of experience across community development, philanthropy, and small business management. Hannan has lived in Colombia, China, Kenya, Pakistan, and the UK, and speaks English, Urdu, and Spanish.

Joe Massey

Joe is a Researcher at Doteveryone, the responsible technology think tank, and is specifically interested in the intersection of technology and society. With a background in Global Development and Economics, Joe has previously worked to build a peaceful future for Myanmar through the education of its young people.

Marc Goblot

Marc combines years of digital experience at Accenture and creative agencies with social change and advocacy work, advancing his interests in inclusion and diversity and in how technology can enable capability and equity for all rather than disabling and excluding people. Alongside a focus on AI and its potential to benefit or challenge people outside the norm, he connects the dots from government policy, as an advisor to the Disability Unit, to an ecosystem for innovation with Tech For Disability, and on to digital impact consulting for non-profits and the public sector.

Mark Burey

Nicholas Creagh

Paul Rodriguez

Paul is a project and programme manager specialising in programme governance, risk management, enterprise PMO and assurance. He is currently a director at Tata Consultancy Services, managing a team of programme governance specialists across a range of client accounts.

Tania Duarte

Founder

Tania is the Founder of We and AI, a UK non-profit focusing on better AI literacy for social inclusion, in order to facilitate critical thinking and more inclusive decision-making about AI. Programmes include Better Images of AI – a collaboration with BBC R&D and a global community of academics, activists, institutes and artists. Tania is on the Founding Editorial Board for the Springer AI and Ethics Journal and on the Public Engagement and Ecosystem Strategy Advisory Board of The Alan Turing Institute. Tania is a Lead for TLA Tech for Disability, a member of the IEEE P7015 Data and AI Literacy, Skills, and Readiness working group, and was named one of WIAIE’s 100 Brilliant Women in AI Ethics 2021. Prior to this, Tania spent 30 years in consultancy, business and marketing management roles in various industries, latterly in tech and startups.

Members

Elo Esalomi

Elo is a Year 13 student. In March, she and Neha Adapala researched testing systems for AI chatbots to reduce biases and discriminatory content, winning 3rd place in an international competition. Elo has also collaborated with the AI Fringe, where she hosted a workshop titled ‘Responsible AI: A Policy Approach to Bias Eradication’. She also spoke on a panel discussion about synthetic media and its impact on youth as part of the AI + Society Forum. She hopes to study Computer Science and Philosophy at university.

Gulsen Guler

Gulsen is a researcher focused on analysing the impacts of emergent technologies on society and building ways to create more equitable futures. As a former social worker, Gulsen draws on her lived experience in the field to ground her research in a strong focus on social justice.

Medina Bakayeva

Medina has experience in cyber policy and AI governance. She is a Consultant in the Digital Hub at the European Bank for Reconstruction and Development. She holds a Master’s degree in Global Governance and Ethics from UCL, with a focus on cyber policy and generative AI.

Neha Adapala

Neha is a Year 12 student who researched testing systems for AI chatbots and wrote a policy proposal with Elo Esalomi, winning 3rd place in an international competition. Neha has also collaborated with the AI Fringe, where she hosted a workshop titled ‘Responsible AI: A Policy Approach to Bias Eradication’. Additionally, she spoke on a panel discussion under the AI + Society Forum (in collaboration with the AI Fringe) about synthetic media and its implications for society. She hopes to study Computer Science at university.

Tristan Goodman

Tristan is a qualified solicitor in England and Wales, having trained and practised at Slaughter and May. He is now using his legal background to work in AI regulation and policy. In early 2024, he will begin a research fellowship at the Centre for the Governance of AI in Oxford.

Nicholas Barrow

Nick is a moral philosopher specialising in the value of consciousness and its intersection with the philosophy of technology and the philosophy of well-being. He is an advisor to We and AI and a Research Associate at the Institute of Ethics in Technology. He most recently worked with Patrick Haggard at UCL on the ethics of haptic technology. Before this, he worked as a research assistant on the Better Images of AI project. As an Artificial Intelligence Scholar, he earned his master’s in the Philosophy of AI from the University of York (supervised by Prof. Annette Zimmermann), having previously completed a first-class undergraduate degree in Philosophy at the University of Kent.

Beckett LeClair

Beckett is a Senior Engineer and proponent of responsible tech futures.

Hannah Claus

Hannah Claus is a passionate and driven explorer, DeepMind scholar, and Research Assistant at the Ada Lovelace Institute with a vision to positively impact the world by implementing the latest advancements in AI. Fuelled by curiosity and a thirst for discovery, she is dedicated to studying all elements of AI, aiming to leverage connected fields, such as Robotics and NLP, for meaningful change. Beyond the technical aspects, Hannah is deeply invested in the ethical considerations of AI, emphasising the importance of understanding intelligent systems for coexistence. Committed to helping marginalised communities, she actively volunteers and provides opportunities for others to pursue their passions. Analysing the impact AI has on different societies and communities, she works on providing more accessible knowledge and skills on AI in order to improve AI literacy.

Ismael Kherroubi Garcia

Ismael has been working in the AI ethics space since 2020, when he worked on establishing the Alan Turing Institute’s research ethics committee. He believes that responsible AI cultures can be promoted through practical organisational mechanisms, as informed by his studies in business and philosophy. Since 2022, Ismael has offered AI ethics and research governance consulting at Kairoi, helping organisations identify crucial tech decisions, anticipate their consequences and implement safeguards to guide decision-making processes. Since 2023, Ismael has also led the Fellow-led AI Interest Group at the RSA (Royal Society of Arts, Manufactures and Commerce). Through practical advice and rigorous research, Ismael promotes the responsible AI revolution, enabling thoughtfulness and inclusivity in the design, development, deployment, usage and governance of AI research and systems.

Valena Reich

Advisor

Valena is an MPhil student in Ethics of AI at the University of Cambridge, funded by the Gates Cambridge Scholarship. Before Cambridge, she completed her BA in Philosophy at King’s College London. Her research interests include the ethics of AI, normative ethics, and political philosophy. For her dissertation, Valena is researching the problem of moral uncertainty in AI (specifically, how to encode ethics in AI). She has presented her philosophical work at multiple UK-based and international research conferences. At We and AI, she worked on the “AI Literacy” and “Demystifying AI” projects, and is currently leading the “Deepfakes and Generative AI” workshops, creating educational material on AI and its ethical implications. Valena hopes that by putting her philosophical skills and knowledge of AI into practice, she will contribute towards more ethical progress in AI.

Dolapo Moses Apata

Dolapo M. Apata is a Research Assistant at We and AI, having completed a Master’s degree in Applied Artificial Intelligence and Data Analytics at the University of Bradford. His academic foundation extends to geology and petroleum geoscience. Throughout his professional journey, Dolapo has gained valuable experience across diverse sectors, including education, GIS consultancy, sales, management, the extractive sector, and non-governmental organisations. This breadth of expertise underscores his adaptability and his commitment to ongoing personal and professional development. He is particularly interested in ethical discussions around Artificial Intelligence (AI), especially its impact on individual lives, society and the economy, given that AI is a general purpose technology capable of unprecedented and far-reaching impacts.

Savena Surana

Savena is an award-winning creative communicator and producer. For over seven years she’s told stories of social good for clients such as the United Nations Foundation, Museum of London and Lego, crafting narratives that inspire change and challenge the status quo. She is also the co-founder of Identity 2.0, a creative studio working at the intersection of digital rights, identity and technology.

Rick Payne

Rick Payne is a PhD researcher at the University of Portsmouth examining data science communication in the context of data for good. He is particularly interested in how data science and AI literacy can enable all communities to engage with the benefits and risks of new technologies. Projects Rick has helped We and AI with include the Living with AI online course and the Better Images of AI programme. Rick is a qualified accountant with an MSc in Organisational Behaviour. He held senior roles in finance before taking on a thought leadership role at the Institute of Chartered Accountants in England and Wales (ICAEW), covering topics including the internet of things, advanced data analytics and finance transformation.

Lizzie Remfry

Lizzie is a Health Data Science PhD student at Queen Mary University of London exploring how we can build more equitable AI systems in collaboration with patients and clinicians. Lizzie has a background in Global Health and Psychology, and her research focuses on health inequalities.

Marissa Ellis

Marissa is a strategy, product and change consultant and the founder of Diversily. Her 20 years in tech have covered areas such as digital innovation, data analytics and business intelligence.

Diversily is on a mission to help others drive positive change through its self-service frameworks and workshops covering change, inclusion and leadership. Its flagship tool, The Change Canvas, supports positive change around the world.

Kimberly Wright

Kimberly Wright is a strategic problem solver, author, researcher, entrepreneur and decision maker on a mission to increase human agency, responsible AI and sustainability.

Anna Montague-Nelson

Anna is the Research Manager at Social Business Development. She holds a Master’s degree from the School of Oriental and African Studies (SOAS), where her studies focused on the intersection of AI and Human Security. Her primary research interest lies in exploring how AI technologies impact young people and their mental health. With a deep commitment to understanding and improving the well-being of the younger generation, she works to bridge the gap between cutting-edge AI research and its practical implications for mental health support among youth.

Laura Hernández

Laura is a Master of Logic student at the Universiteit van Amsterdam, focused on topics in the philosophy of artificial intelligence using the frameworks of formal epistemology, rationality and formal semantics. Her research interests include data privacy, explainability, ethics and education.

Ingrid Karikari

Ingrid is an experienced, versatile Digital Consultant and Strategist with a 20-year career working on diverse public, private and third sector digital projects. She is passionate about creating digital products and services that empower, enrich and transform. She has worked on AI projects in this context and is equally excited by, and wary of, the possibilities that AI and emerging technologies present.