Image: “Framing Deepfakes” title against a background by Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0

What did we learn from “Framing Deepfakes”?

Dr Patricia Gestoso and Medina Bakayeva

This is a recap of an event we hosted in July 2024. Watch the event recording here.

In light of the malicious uses of deepfakes, governments have engaged in policy debate, media outlets have reported frantically on the issue, and academics have explored the technological advances of AI-generated media from multiple angles. In 2024, as half the world went to the polls, these diverse voices hit a crescendo.

With deepfakes flowing through our media ecosystems, all levels of society have been confronted with a new reality: one that some have argued amounts to an era of ‘truth decay’. But how are these debates framed?

From the very term itself, “deepfakes”, to what lies at stake as they become part of the fabric of our media ecosystems, the emphasis we place on these different framings is not trivial. In political science, the framing of an issue can be understood as the process of shaping how a social issue is interpreted and promoting one view over another in order to drive policy (FrameWorks Institute, 2018).

This is why, in July 2024, We and AI held a virtual event titled “Framing Deepfakes”, organised by Dr Patricia Gestoso, where members of the community explored the various ways deepfakes, and the issues stemming from them, can be framed.

Framing Deepfakes: What Influences Public Perception, Policy, and Economic Dynamics?

First, a panel discussed the multifaceted role of language in shaping public perceptions, policies, and economic dynamics surrounding deepfakes. We heard from journalist Adrija Bose, who has investigated how deepfakes affect women in India; Medina Bakayeva, who has conducted academic research on different policy framings; and Dr Patricia Gestoso, who has analysed case studies from a feminist perspective. Author and technologist Robert Elliott Smith moderated the panel.

The constructive discussion led to important insights drawn from academic research and journalistic inquiry. Firstly, the panellists agreed that there is far too much emphasis on the impact of deepfakes on elections. The dominance of this discourse overshadows the use of synthetic media to commit gender-based violence and harassment. The panellists stressed that this is where the repercussions of synthetic media are felt most acutely: a shocking 96% of deepfakes are of a non-consensual sexual nature, and 99% of the victims are women.

Another important insight was that attitudes towards synthetic media, and the issues it poses, vary significantly around the world. In the EU, US and UK, the focus is predominantly on three main issue areas: the impact of deepfakes on elections, the use of deepfakes in fraud and cybercrime, and the impact of deepfakes on women. In the Middle East and South Asia, influence campaigns targeting politicians and celebrities greatly shape public opinion.

The conversation uncovered some common misconceptions about deepfakes. One important argument was that their sway over elections is overhyped: the impact of synthetic media on democracy is still significant, but the real danger lies in the erosion of public trust in media sources over time as more fake content circulates. Another pervasive assumption is that non-consensual sexual deepfakes don’t hurt victims because it’s not their body being depicted, or that they don’t constitute personal data theft.

When asked what people can do to defend themselves, the panellists agreed that more needs to be done by governments to make deepfakes targeting women and vulnerable communities prosecutable. It is not fair to put the onus on individuals who have little recourse to justice.

The panel also recognised that the same technology that creates deepfakes can be used for good. For example, actors in India have used synthetic media to their advantage for creative expression. Synthetic media can also be used in medical settings to help patients who have lost motor, speech, or visual abilities.

If our eyes and ears can be deceived, how do we trust what we see online?

The next session was a fireside chat with Jacobo Castellanos and Tania Duarte, discussing the work of WITNESS – an international human rights organisation helping people use video and technology to protect and defend their rights.

Jacobo highlighted the importance of making sure that videos are trusted so that, when used in journalism and legal processes, people can verify their authenticity. In that context, WITNESS has been working on how to fortify the truth in the age of AI.

He emphasised that the technology is evolving very rapidly, so we’re all still learning, and there are many grey areas around protecting privacy whilst fortifying the truth. For example, when real images of a situation have already been shared by different outlets, releasing AI-generated images of the same incident can create confusion and distrust, as happened with the synthetically altered image of a Colombian activist released by Amnesty International. Even though the intention was to protect the individual, the use of AI prompted a backlash against the organisation.

Jacobo talked about three different kinds of solutions to tackle this problem. The first is media literacy: for example, checking sources and recognising that our ears and eyes can deceive us.

Transparency and disclosure form a second category of solutions. These are indicators added to content that enable users and algorithms to identify what kind of media they are seeing: for example, verifiable metadata such as provenance information and fingerprinting, as well as visible and invisible watermarking.
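To make the idea of invisible watermarking a little more concrete, here is a minimal, hypothetical Python sketch that hides a short text marker in an image’s least-significant bits. It is not WITNESS’s approach or any production standard (provenance frameworks such as C2PA and robust watermarks are far more sophisticated); the function names, marker string, and file name are illustrative assumptions.

```python
# Illustrative sketch only: naive least-significant-bit (LSB) "invisible
# watermarking" with NumPy and Pillow. Hypothetical names; not a real standard.
import numpy as np
from PIL import Image


def embed_marker(img: Image.Image, marker: str) -> Image.Image:
    """Hide the UTF-8 bytes of `marker` in the image's least-significant bits."""
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    flat = pixels.flatten()  # flatten() returns a copy, so the input image is untouched
    bits = np.unpackbits(np.frombuffer(marker.encode("utf-8"), dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("marker is too long for this image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    return Image.fromarray(flat.reshape(pixels.shape))


def read_marker(img: Image.Image, num_bytes: int) -> str:
    """Recover `num_bytes` of a previously embedded marker from the LSBs."""
    flat = np.array(img.convert("RGB"), dtype=np.uint8).flatten()
    bits = flat[: num_bytes * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")


# Example usage (hypothetical file and label):
# marked = embed_marker(Image.open("photo.png"), "synthetic-media:model-x")
# print(read_marker(marked, len("synthetic-media:model-x")))
```

A naive marker like this is trivially destroyed by re-encoding or resizing the image, which illustrates the point made later in the session: all of these technical measures can be bypassed and need to be paired with policy.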

The third group of solutions is detection. Unfortunately, publicly available web-based tools are not very reliable, but there are less accessible tools that perform better, and they are improving constantly as AI tools also get better.

Overall, it’s important to recognise that all of these tools can be bypassed and that the long-term solution lies in policies that can help fortify the truth.

One specific approach is “pipeline responsibility”, which shifts the onus from the creators of content, or even the users, to the enablers. For example, model developers could include a watermark at the design stage rather than relying on adding it to the output. The same can be done with provenance metadata.

Other actors, such as social media platforms and the news media (in the way they talk about these technologies), have a responsibility too.

Jacobo ended by pointing out that one of the biggest challenges to making progress on this topic is holding multi-stakeholder conversations about nascent technologies. For example, the terminology is evolving very rapidly, so the same issues are discussed in different spaces using different words, and there is a lack of alignment on the meaning of the concepts we use. Additionally, English has been the lingua franca for these conversations, and it’s important to ensure adequate terminology is also developed in other languages to foster discussion at a global level.

Words Matter: The Case Against ‘Deepfake’ Terminology

The session was followed by a talk by India Avalon, from the University of Nottingham, who presented the problematic origins and usage of the term “deepfake” and explored potential language alternatives.

India started by explaining why the term “deepfake porn” is disrespectful to the victims of non-consensual sexual AI-generated media and leads to confusion.

For example, the term “deepfake” honours the username of the Reddit user who first shared synthetic intimate media of actresses on the platform. Additionally, when paired with the label “porn”, it may wrongly convey the idea that the material is consensual. Overall, the term lacks gravity and downplays the harms involved.

From a legal point of view, the use of the word “deepfake” may also hinder justice. There have been cases where filing a lawsuit that used the term “deepfake” to refer to a “cheapfake” (a fake piece of media created with conventional methods of doctoring images rather than AI) has blocked prosecution.

Unfortunately, other misleading terms are often employed to name this pervasive misogynistic use of AI technology. One of them is “revenge porn”, a term first used a decade ago, whereas the deepfake trend only kicked off in 2017. It’s important to highlight that revenge porn is legally very different, as it refers to the non-consensual disclosure of unedited sexual media of another person.

India shared a list of more helpful alternatives. For example, non-consensual intimate imagery (NCII) and image-based sexual abuse (IBSA) are subtypes of technology-facilitated gendered violence and technology-facilitated sexual violence (TFGV/TFSV).

NCII, which is widely used, and IBSA, which has the advantage of being precise and naming the specific harms associated with the act, can be used interchangeably to cover the concept of “revenge porn”, but they also encompass other forms of digital abuse in which images are used non-consensually to cause harm.

In line with leading scholars in the field, India’s preferred term for “deepfake abuse” is non-consensual synthetic intimate imagery (NCSII), which is a subtype of NCII and IBSA.

She ended her presentation with a call to action to continue the research into a suitable term, with special emphasis on the importance of including the voices of survivors in that endeavour.

Teaching young people to navigate deepfakes and synthetic media. What can we all learn?

In the final segment, we heard from We and AI’s young volunteers, who have been delivering workshops with schoolchildren and parents. The segment included interactive elements and a group discussion based on what we can learn from young people’s responses to different types of deepfakes.

The session was run by Valena Reich, Neha Adapala and Elo Esalomi, who are part of the We and AI community and are actively engaged in issues surrounding AI governance, synthetic media and deepfakes. Valena is an MPhil student in Ethics of AI at the University of Cambridge, funded by the Gates Cambridge Scholarship. Neha and Elo are completing year twelve and are both Non-Trivial Fellows. Together, in partnership with We and AI, they produced workshops on the risks of deepfakes for children across schools in London.

The session started with a quiz to test the ability of the audience to spot deepfakes among a series of images.

Some tips for recognising deepfakes include looking at hands (which may appear unnatural or have missing fingers) and checking for inconsistent shadows, pixelation around the edges of a face, unusually smooth skin, and inconsistent backgrounds. However, as the quality of deepfakes improves, it’s crucial to use critical thinking, double-check sources, and question the potential incentives behind the images.

Their research among school children highlighted that children as young as six have been exposed to deepfakes of politicians and other public figures such as celebrities. 

They also mentioned that, in general, people don’t appear to think about the negative effects of deepfakes until they are affected by them. The school students they worked with didn’t see deepfakes as a threat, and it’s important to highlight that, after the awareness sessions carried out by We and AI, only 10% of students were able to understand deepfake technology and its ethical implications.

The volunteers listed three different ways in which young people come into contact with deepfake technology: as a form of entertainment, as deceptive audiovisual disinformation, and as a means of replacing workers.

As for positive uses of this technology, young people appreciated creative expression, such as being empowered to express themselves through music and storytelling. It can also help personalise learning, for example through tutorials.

Regarding the risks of AI, the team identified job displacement as a consequence of automation. They also highlighted that misinformation and disinformation pose a significant risk, as convincing fake content disseminated on social media can be harmful to young people, who often get their news via these platforms. Finally, unrealistic body images enhanced by synthetic media can contribute to mental health problems among young people and an increase in cosmetic surgery procedures.

At the end of the session, the team shared their recommendations regarding the interaction between young people and AI. Valena, Neha and Elo maintained that young people have the agency and influence to make a difference. 

First, young people need to educate themselves and increase their awareness of this technology, including by developing critical thinking. The team also encouraged young people to get involved in policy and advocate for ethical AI, whilst acknowledging that our media ecosystem plays a major role and that the companies in charge of it must take accountability. The third recommendation was for tech organisations to engage with young people and benefit from their insights to develop innovative and inclusive products.

Conclusion

This first public We and AI event on framing deepfakes highlighted the importance of conducting research and holding multidisciplinary, multi-stakeholder discussions. This cross-cutting dialogue is important for ensuring a comprehensive understanding of the benefits and risks of AI. The event also showed the plethora of ways in which framing the issue of ‘deepfakes’ can lead us to different solutions and challenges, all of which are vital in addressing the multifaceted impact that AI-generated media has on our lives.

You can watch the full event recording here.