Dr Liam McLoughlin
Lecturer in Communication & Media (Digital Politics) at the University of Liverpool. His research focuses on how digital platforms mediate political relationships between citizens and representatives. He is also Co-Convenor of the Technology, Internet, and Policy Group (TIP) at the PSA.
X: @Leelum
Email: Liam.McLoughlin@Liverpool.ac.uk
UK Election 2024
Section 6: The digital campaign
62. Local news and information on candidates was insufficient (Dr Martin Moore, Dr Gordon Neil Ramsay)
63. The AI election that wasn’t – yet (Prof Helen Margetts)
64. AI-generated images: how citizens depicted politicians and society (Niamh Cashell)
65. The threat to democracy that wasn’t? Four types of AI-generated synthetic media in the General Election (Dr Liam McLoughlin)
66. Shitposting meets Generative Artificial Intelligence and ‘deep fakes’ at the 2024 General Election (Dr Rosalynd Southern)
67. Shitposting the General Election: why this campaign felt like one long meme (SE Harman, Dr Matthew Wall)
68. Winning voters’ hearts and minds… through reels and memes?! How #GE24 unfolded on TikTok (Dr Aljosha Karim Schapals)
69. Debating the election in “Non-political” Third Spaces: the case of Gransnet (Prof Scott Wright et al)
70. Which social networks did political parties use most in 2024? (Dr Richard Fletcher)
71. Facebook’s role in the General Election: still relevant in a more fragmented information environment (Prof Andrea Carson, Dr Felix M. Simon)
72. Farage on TikTok: the perfect populist platform (Prof Karin Wahl-Jorgensen)
The intervening years between the 2019 and 2024 General Elections saw a proliferation of publicly accessible Artificial Intelligence (AI) products. Tools such as ChatGPT, Midjourney, ElevenLabs’ speech synthesis, and the Suno AI music generator have allowed for the speedy creation of synthetic media content and campaign tools. They are marketed, in part, as solutions that lower the cost of media creation while simultaneously speeding up production times – a frugal campaigner’s dream.
At the same time, these tools could threaten democracy. A primary concern for this election was the potential of deepfakes: AI-generated images, videos, or audio designed to deliberately mislead viewers through the creation of fake events or statements (Vaccari & Chadwick, 2020). Fears were high. A YouGov poll in May 2024 found that 49% of respondents thought AI-generated deepfake videos of politicians were likely to have a fair amount or a great deal of impact on the General Election, while numerous articles, such as those from the BBC and CNN, warned of an onslaught of AI-driven disinformation.
The perceived AI threat overshadowed the more frequent (and positive) uses of AI-generated content during this election. These include the use of image generation for satire, AI tools for campaigners, and even AI-generated candidates. This isn’t to say, however, that disinformation was not present.
The most frequent use of AI-generated media was by citizens and satellite campaigners to create satire, memes, or images otherwise supporting particular candidates. Niamh Cashell’s chapter provides examples created with Midjourney, ranging from Rishi Sunak crying to candidates victoriously riding lions, as a form of popular culture. But the campaign also witnessed video content generated by younger audiences as political expression. Highly shared examples include a deepfake of Nigel Farage playing Minecraft and blowing up Rishi Sunak’s base, and another of Sunak planning a game of Fortnite after conscripting Year 10 and 11 students into National Service. This is all evidence of a long-standing trend of using creative technology as part of participatory culture, which can be a net positive for democratic engagement.
A second form of AI use was by party campaigners themselves. This ranged from behind-the-scenes tools, such as Campaign Lab’s chatbots designed to train doorknockers and educate volunteers on electoral regulation, to synthetic media within campaign materials. For the most part, however, parties have seemingly drawn the line at using AI to directly create images of their candidates or their opponents. Instead, parties mixed assets and joined content together, for example by generating a scene using AI and then Photoshopping in the relevant faces. Despite some reservations about campaigners’ use of AI (Dommett, 2024), it seems there is a limited, but useful, space for AI-generated media in campaign materials.
Thirdly, there is the AI candidate: a less frequent, but nevertheless interesting, use of AI in this General Election, where candidates used AI representations as stand-ins. The first example is AI Steve, an AI-generated political candidate which stood in Brighton Pavilion. This candidate promised to be easy to contact and one that citizens could directly control. Certainly, AI Steve was an interesting possibility, but one which ultimately attracted more media attention than voters.
The second example is paper candidates who used AI-generated representations. Some paper candidates, who have little chance of success and receive minimal financial assistance from their party to campaign, used AI-generated media in an attempt to present themselves as credible candidates despite their lack of resources. In one instance, a Reform candidate standing in the safe Labour seat of Clapham & Brixton Hill was represented by an AI-generated image – claimed in a post to be due to a lack of photographers. In this election, it seems AI-generated content allowed paper candidates to present themselves as more than just a name and to provide more individualised content than boilerplate campaign material.
Finally, the fourth trend in AI-generated media was disinformation, with a few potentially impactful cases. Most of these were found via the Facebook Ad Library, which included deepfakes of Rishi Sunak and Keir Starmer. However, these were poorly produced financial scams. It seems that, to date, the technology behind deepfake videos and photos is unable to produce convincing visual content, which may explain the limited occurrence of this type of disinformation within our data collection. It should be noted that disinformation may have been obfuscated by its spread through private groups, which researchers may struggle to access.
More convincing were the deepfake audio clips which proliferated on social media, including fake audio of Labour candidates Wes Streeting and Luke Akehurst making disparaging comments about Gaza and the electorate respectively. This is especially interesting as Streeting won his seat of Ilford North by only 528 votes, a decrease of 20.7% compared to 2019, partially due to the issue of Gaza. The impact of this case certainly deserves further exploration.
The 2024 General Election was not the AI election, but it certainly showed us the fledgling uses of these tools by citizens and candidates during the campaign.