Photo of a mock newspaper cover: The World News, headline THE SKY IS FALLING

AI News Narrative: Headlines Shape Knowledge

Opinion: By Tess Buckley

My Grandpa (or, as I called him, Boppy) used to read the paper every morning. I remember eating my eggs in silence with him and squinting, trying my best to catch whatever was on the other side of his reading. He shook the papers before flipping to the next side as if trying to escape whatever story had entered his mind. In these moments, I often envied my Boppy – that he got to know what was happening in the outside world. A younger me looked forward to the day when I would get to know more, when I would also have the opportunity to discover the outside world.

Well, the day came, and over the years, I have come to engage with many forms of news. Let me give credit and applause to the journalists and academics who allow me to see the outside world for what it is. Yet, amid this appreciation, there lingers a palpable frustration that news can often be uninformative or, worse, misleading.

Anthropomorphising AI: The Convenient Scapegoat

Pop culture plays a significant role in influencing beliefs, shaping societal perceptions and even instilling fears. Although it has the power to entertain, it can also misdirect our concerns, sensationalising certain issues at the expense of more pressing or nuanced problems. In this way, our collective anxieties around influential topics such as AI are sometimes misaligned because of the narratives perpetuated in the media. These narratives excel at creating news storms around big names and short op-eds, perhaps distracting us from the work of academics presenting facts.

In the swarm of news around this year’s word of the year, “AI”, I have noticed a structure to the headlines. Structure makes a system. As Baratunde Thurston has shared, systems are collective stories that we all buy into, and changing them rewrites reality. By analysing and rewriting stories, we could imagine new outcomes for old narratives, change our behaviour accordingly and enable more positive outcomes by shifting our perception of AI.

This blog takes Baratunde Thurston’s method of deconstructing headlines and applies it to headlines about AI. I will provide more headlines that you can dissect at your leisure, state why you should care, and finally suggest ways forward.

Deconstructing headlines: Let’s be critical!

Baratunde Thurston found that diagramming sentences, breaking each one down into its parts, can reveal an overarching theme in news coverage. These sentence structures and the negative patterns in narratives embed fear and can determine the direction of resources and public attention. Diagramming a sentence allows for a deeper understanding of how the public receives, comes to know, and experiences a subject. Although Thurston applied this exercise to racist headlines, we can also diagram sentences to understand the narratives around AI.

According to Thurston, breaking down sentences to reveal an overarching narrative entails identifying the following:
(subject) / (action) / (target) / (activity) 

In this case:
The subject is usually a robot or algorithm (AI).
The action causes harm to a human.
The target is usually a human.
The activity could be anything…

Let us deconstruct some AI headlines with our new lens:

1. Meet the Robot that will almost certainly steal your job

a.    I.e., Employers must now choose which technologies to integrate to augment their current human employees. Will they choose to automate their employees’ roles, or to upskill them? If the employer or executive of a company chooses to implement AI, how are they supporting their current employees to take up AI use?

2.   Robot is arrested by police for buying ecstasy on the dark net

a.    I.e., A human has purposed an algorithm to run an art exhibit, and that human has reaped all the benefits of the algorithm’s work, e.g., money from ticket sales and PR from news pieces. Yet when the risk materialises and ‘shit hits the fan’, the entire show becomes the responsibility of the robot, which goes to ‘jail’ to do time for its human counterpart.

3.   Racist and sexist AI robots adhered to harmful stereotypes when sorting photos of people

a.    I.e., The team or the data was biased, and this led to a machine with ingrained bias that was then perpetuated at scale. The machine also came to know and inhabit our biased structures and to interact with biased users, making it more biased still.

4.    Robot kills man at Volkswagen plant in Germany

a.    I.e., If a robot were to ‘kill’ someone, it could not be held responsible for such an action, so a human behind the machine would have to take responsibility for the machine going wrong.

More headlines to dissect at your leisure:

So, why should I care? 

What do narratives like this do? They make us ‘hate’ the robot; they make us fear the robot. Does this not feel like a redirection of our frustration and attention? Perhaps it is a distraction from those we should really fear, namely humans themselves. We are conveniently caught up in scapegoating AI.

Sometimes we are the targets of headlines that play on deep-rooted fears, the ones instilled by the classic movies we watched as kids, of robots taking over or of being trapped in a machine ourselves. As emerging technology takes centre stage, the number of headlines about the fears and threats of these systems increases, saturating the content we see and desensitising us to it all while overburdening our bandwidth of attention. Instead of allowing us the freedom to select the print articles that pique our interest, the digital space strategically places stories in our laps, making them harder to avoid. Targeted advertising can diminish our ability to select the content we engage with, subtly diverting consumer attention and negatively impacting choice.

Fear surrounding the supposed threat of AI is growing, one headline at a time. One may ask, which fears are justified?

Action steps: paths forward

Hopefully, we can now recognise the impact of news narratives on shaping perceptions of AI. Recognising this invites us to collectively contribute to a more nuanced and informed understanding of AI, moving beyond fear-inducing narratives and towards a constructive dialogue about the role of AI in our lives. Although the rhetoric of public engagement with AI is loud, specific mechanisms are rare. Ideally, we begin to embrace a more critical stance towards headlines and actively contribute to fostering a more informed public discourse on AI.

To navigate the complexities of AI news narratives, it is essential to cultivate AI literacy. This involves encouraging a critical stance toward news consumption, particularly in the realm of AI-related headlines. Developing the skills to make sense of information, and to judge its credibility or sensationalism, can contribute to a more nuanced comprehension of AI developments, as can seeking out diverse outlets to receive news from a range of voices.

Challenging anthropomorphism is crucial. This entails questioning narratives that excessively attribute human-like intentions to AI and exploring alternative perspectives that underscore the influential role of human decision-makers in shaping AI outcomes. Additionally, supporting informed discussions about AI goes beyond sensational headlines, creating environments where diverse opinions and insights can be shared and constructively discussed. 

Advocating for responsible AI reporting is another vital step, urging media outlets to present balanced and accurate depictions of AI while avoiding fear-inducing narratives. Supporting journalists and academics who contribute to a nuanced understanding of AI’s societal impact reinforces this endeavour. Encouraging responsible innovation entails supporting initiatives that prioritise ethical implications in designing and implementing AI systems, such as this AI and Responsible Journalism toolkit.

Promoting ethical AI practices involves advocating for transparency and ethical considerations in AI development and deployment, and supporting initiatives that address biases and uphold fairness and accountability. It is also vital to remain informed about AI development. This means keeping up with the latest advancements and seeking information from reputable sources to maintain a well-informed perspective on AI’s capabilities and limitations.

Collectively, these actions contribute to fostering a more informed, balanced, and constructive public discourse on AI. The news serves as a gateway to the broader world, a conduit through which we seek a level of understanding akin to my Grandpa’s morning ritual. I fondly recall those silent breakfasts, warm eggs on the table as he pored over the paper, escaping into its stories with every turn of the page. Hopefully, in future mornings, I can trust the paper and its ability to inform me about the outside world.

Tess is an Artificial Intelligence (AI) ethicist, strategist and musician. She completed a Master of Arts in AI and Philosophy at Northeastern University London, where she specialised in biotechnologies and ableism. Her primary research interests include AI literacy, AI music systems, the impact of AI on disability rights and the portrayal of AI in media. In particular, she seeks to use philosophical principles to make emerging technologies explainable and ethical. She also has a personal website.