Commentary: Dr Jenn Chubb
In a previous post on the Better Images of AI blog, Dr Jenn Chubb and Dr Liam Maloney described the ways in which the sonic framing of AI in narratives affects the public understanding of science. Building on her recent research into the role of music and sound in AI documentaries, Jenn took a look at the short video described below to explore the ways in which music and soundtracking influence our perception of AI.
Documentaries about Artificial Intelligence (AI) are often infused with rich, engaging soundscapes that emphasise the excitement and anticipation surrounding AI technologies. Sound plays a crucial role in setting the tone and mood of a film or documentary. By using sound creatively, filmmakers can enhance the storytelling, emphasise key themes, and engage the audience more deeply. This extends to adverts and trailers where sonic branding is used to extend the reach of their products.
Putting to one side the power relations, incentives and self-interest of the artists behind this type of documentary, I take a look at (and listen to) A Year in AI 2023, a seven-minute video, or showreel, about AI to consider the ways AI ‘sounds’.
“A Year in AI, 2023” – what a year it was. There is no sign of the usual stereotypes in the film. No robots and no flashing eyes, yet it’s hard not to be emotionally affected by this short film. The pacing, the drama. The message is largely a chilling one which may make for disturbing viewing. But the sonic framing of this short documentary may also have something to do with how we respond to it. The video was produced as a pitch to OpenAI, “to connect and collaborate on developing an approach to building safe, intelligent systems to be the most promising path to benefiting humanity”, in line with nature.
AI is softly introduced sonically. The narrative begins accompanied by elegant, harmonic, naturally played orchestral strings in a major key. We are asked whether we want a future which is the ‘product of technology’ or a ‘product of nature’. This is set against a backdrop of imagery of the sky and a flock of beautiful free flying birds. The future is one of ‘shared intelligence’.
The words “Year in AI 2023” appear and a gong sounds, indicative of AI as a disruptive and confrontational force set against a somewhat bright-sided depiction of human nature. We move to a minor key. The strings become slightly less regular and more dissonant as advances such as ChatGPT are described; the sonic pace picks up and the imagery becomes more intense. Emotive, grave orchestral strings in a minor key accompany the narrator, who asks whether AI will end the world.
The strings accelerate and pronounced interjections of electronic, divergent sounds interrupt the sonic narrative. The strings become more urgent and cinematic as we are warned that humanity could end. The music stops.
As those interviewed talk of extinction, a repetitive string motif builds tension, progressively introducing ascending violins which crescendo with urgency, backed by driving, beating percussion. This repeated pattern implies something about the generality of AI and perhaps its lack of uniqueness. Quite apart from what is already a disturbing narrative, and notwithstanding the clear power dynamics represented by the big tech players who narrate this piece, the sonic framing serves to provoke fear in the audience.
As a counterpoint, we are next shown images of birds flying in the sky, just as at the start of the film, and the sound and music quieten. There is an elegance to the spacious, smooth yet connected legato passage which seems to emphasise nature and the unique paths which might await the future. We are back to some kind of equilibrium – it feels post-apocalyptic.
The sonic repetition fades as we are ‘led back into the real world’ – the dominant musical line is almost primordial and this is accompanied by a narrative that stresses the ways humans can build the right path for humanity to work alongside technology. The emphasis on the natural world and this sonic framing is suggestive of humans remaining fully ‘in the loop’.
What’s interesting is the pronounced mixing of analogue and electronic sound to represent nature and artificiality. Beyond this, music and sound are used to enhance the binary in the narrative – potentially manipulating the audience to feel a certain way. The sonic representation of AI is urgent, frantic and intense, expressed through repetition and ascension. In contrast, the natural world is sonically expressed through dolce, peaceful, legato strings.
This stark counterpoint serves to draw a distinction between the natural and non-material worlds as though the two are not related, despite their clear entanglement.
This blog raises questions about the role music and sound play in the ways we talk about AI. Music is powerful. It can affect our mood, make us run faster, help us think more clearly. It can also distract and distort our emotional response.
Our recent article explores the role of music and sound in AI documentaries as a powerful shaper of public perception of technology.
Do you feel affected by the musical dynamics used in this short film? We’d love to hear your thoughts.
Dr Jenn Chubb (@AIsonicstories) is a Lecturer in the Department of Sociology at the University of York. She is interested in all things science and society. Jenn is researching the sonic framing of AI in narratives and is currently exploring the public perception of AI-generated music as part of a research project called ‘AI, what’s that sound?’.