At We and AI, we have long been concerned about the lack of awareness of the wide range of harms caused by AI chatbots. Despite the strong encouragement to use chatbots as companions, coworkers, doctors, teachers, counsellors, life advisors, friends, and search engines, using them is not without consequence. While there is significant and growing documentation of the global impact of deploying generative AI, relating to extractive and exploitative labour practices, the uncompensated and unconsented use of creative work, workforce precarisation, and the environmental and climate impacts of training and inference, the wide range of harms to individual chatbot users has not yet received much attention.
Additionally, most research and media articles reporting on chatbot harms to individuals have focused on tragic stories, such as a young teen dying by suicide after months of conversation with ChatGPT, or cases of mental illness aggravated by talking to an AI chatbot. However, these horrific incidents are only the most visible part of a much larger picture. A team of volunteer researchers at We and AI set out to categorise the wide range of harms reported from the use of AI chatbots. The resulting paper identifies 11 harms, including emotional manipulation, financial exploitation, impact on real relationships, and overdependency and addiction, showing just how wide the range of harms caused by these chatbots really is.
This research is a fully volunteer-driven project, the work of a diverse group of researchers who all contributed to the data collection, annotation and synthesis behind this taxonomy of harms. We believe this showcases the immense potential and value of community-driven research in addressing society's most pressing issues.
To build our taxonomy, we conducted a narrative review of media sources, video content and other non-traditional academic formats, as there was limited academic literature published at the time of the review. The harms discussed in these sources were then identified and categorised into the 11 categories displayed in Table 1.

While some of these harms have been widely reported on, such as the psychological and physical harm caused by chatbot interactions or unwanted exposure to sexual content, others, such as language standardisation and the propagation of demographic bias, have received less attention. Nevertheless, all of the harms we identified have the potential to contribute to serious negative consequences for broader society and should therefore be subject to thorough investigation and mitigating action.
For example, AI chatbots were shown to have serious impacts on real relationships. Through frequent chatbot interactions, users may find engaging with humans less rewarding and more difficult. If users feel more understood or listened to by a companion bot than by the people around them, they may be less motivated to build or maintain relationships that require compromise and empathy. This risks devaluing healthy relationships and further neglecting ones that are already struggling.
AI chatbots can also foster overdependency and addiction by causing users to develop excessive emotional or behavioural reliance on them. The constant availability and nonjudgmental nature of chatbots can feel safe and soothing, especially for those who feel isolated or misunderstood. Over time, however, this can form habits that resemble digital addiction: checking in with the chatbot too often, feeling emotionally attached to it, or withdrawing from real-world interactions. The result is overdependency: users relying on AI interactions to fill emotional or social gaps that would normally involve human connection.
When it comes to the propagation of demographic bias, chatbots carry the biases of the data they were trained on. This is reflected in the way chatbots’ responses vary based on the characteristics of the user, often resulting in discrimination against minority or marginalised groups. For example, GPT-4 showed reduced empathy towards Reddit posts written by Black and Asian users compared to posts written by white users or those of unknown ethnicity. Gender bias is also prevalent across chatbots. Google’s AI model “Gemma”, used in healthcare settings, often failed to evaluate women’s care needs as seriously as those of men in similar situations. ChatGPT has also been shown to advise women to negotiate for a lower salary than their equally qualified male counterparts.
While the 11 harms identified in our taxonomy focus on the negative effects of chatbot use on the individual end-user, it is important to highlight that these harms do not happen in a vacuum: mounting harms to individuals will have a large negative impact on society as a whole. These broader societal impacts are summarised in Figure 1.
We think this taxonomy helps us better understand what is at stake when it comes to AI chatbots. Being able to identify the kinds of harm caused means we are better placed to prevent and address them. However, the sheer breadth of the problems identified makes one thing clear:
To address the harms caused by AI chatbots, it is not enough to suggest individual technical or regulatory fixes. We need to address the problem at its root: the creation and promotion of deceptively humanlike but erratic AI chatbots.

We encourage people and organisations to use this taxonomy to help them talk about and address the issues with AI chatbots. We think it gives a good sense of the scale of response needed to counter the impact of AI chatbots on individuals and society. Dealing with issues one at a time through policy, regulation or safeguarding will never address the key underlying problem: that the public is being encouraged, even in education, to adopt unsafe chatbots designed to be deceptively human.
In order to protect people and our society as a whole from the adverse effects of chatbots, we need to consider their very design, their appropriate use cases, and the kind of language used to describe them.
So next time someone mentions the benefits of chatbots for curing the tech-enabled ‘loneliness epidemic’, feel free to use this table to show just how wide-ranging their negative effects can be, especially for the most vulnerable.
You can read the full paper here.
If you have a community group which would benefit from a workshop exploring relationships with chatbots in accessible and creative ways, please get in touch!