Shitposting meets Generative Artificial Intelligence and ‘deep fakes’ at the 2024 General Election

Dr Rosalynd Southern

Senior Lecturer in Political Communication at the University of Liverpool. Her work focuses on how digital and social media are used for political communication by politicians, parties and ordinary citizens. She has published work on the use of social media by parties and/or candidates during elections, online incivility towards women politicians, and the use of humour to talk about politics.

Email: R.Southern@liverpool.ac.uk

UK Election 2024

Section 6: The digital campaign

62. Local news and information on candidates was insufficient (Dr Martin Moore, Dr Gordon Neil Ramsay)
63. The AI election that wasn’t – yet (Prof Helen Margetts)
64. AI-generated images: how citizens depicted politicians and society (Niamh Cashell)
65. The threat to democracy that wasn’t? Four types of AI-generated synthetic media in the General Election (Dr Liam McLoughlin)
66. Shitposting meets Generative Artificial Intelligence and ‘deep fakes’ at the 2024 General Election (Dr Rosalynd Southern)
67. Shitposting the General Election: why this campaign felt like one long meme (SE Harman, Dr Matthew Wall)
68. Winning voters’ hearts and minds… through reels and memes?! How #GE24 unfolded on TikTok (Dr Aljosha Karim Schapals)
69. Debating the election in “Non-political” Third Spaces: the case of Gransnet (Prof Scott Wright et al)
70. Which social networks did political parties use most in 2024? (Dr Richard Fletcher)
71. Facebook’s role in the General Election: still relevant in a more fragmented information environment (Prof Andrea Carson, Dr Felix M. Simon)
72. Farage on TikTok: the perfect populist platform (Prof Karin Wahl-Jorgensen)

During the Labour conference in October 2023, a Twitter user calling themselves ‘El Borto’ released a voice recording, generated using Generative Artificial Intelligence (GAI), purporting to be Keir Starmer. In it, ‘Starmer’ is abusive to an aide over a supposedly forgotten iPad, calling him a “fucking moron”. The clip rapidly went viral, receiving well over a million views, and several news sites picked it up. It prompted much debate about the rise of AI ‘deepfakes’ and their potential threat to democracy, particularly in the run-up to an election. The clip had a sheen of authenticity because it was ‘leaked’ on the first day of conference, when leaks often occur, and because it echoed earlier scandals such as Gordon Brown’s ‘BigotGate’.

However, it was also clear that this was a joke or, in internet vernacular, a ‘shitpost’. This is a long-standing practice in tight-knit internet communities where people are ‘in’ on the joke. The account’s name (a Simpsons reference) and avatar (Bort from The Simpsons) should have alerted any fact-checker, or even a vaguely social media-literate internet user, that this was a joke account. However, a joke can become unintended disinformation if it reaches beyond its intended audience, something that often happens on Twitter. It can then make its way onto other platforms with the context stripped away. I wrote about this for the last Election Analysis report, before GAI became mainstream. In 2019, it was faked images of Jo Swinson bragging on Facebook about killing squirrels that went viral. The concern at this election, however, was that more people could be duped by convincing GAI content than by more obvious photoshops.

Fears that GAI is being used to maliciously damage democracy may be overstated, however. During the election itself there were some examples of GAI ‘deepfakes’ being deployed, but several of these were not technically deepfakes at all. One clip posted by the user ‘Men for Wes’ purported to show Shadow Health Secretary Wes Streeting calling Diane Abbott a “silly woman”. This, however, appeared to be the account holder doing an impression of Streeting and splicing the audio into a real clip, rather than any actual GAI content. A day before the election, another audio clip purporting to capture Streeting being rude to a voter went viral. Again, this was a poor impression of Streeting, which he himself responded to, calling it a “shallow fake”.

Another video, which came closer to a deepfake, showed Labour candidate Luke Akehurst, who drew some criticism for being selected for a seat he had no previous connection to, calling the residents of his prospective constituency “thick Geordie cunts”. Again, however, despite being a real video of Akehurst with the mouth manipulated by GAI, the audio was clearly an unserious impression. This could more accurately be described as a ‘dumbfake’: manipulated media that is not believable and is almost certainly not meant to be taken seriously. The pinned tweet of the account that posted the Akehurst video was another GAI video of ‘Princess Diana’ doing “heroic defending” in several 1990s football matches. This tells us that the user has image-manipulation skills, but also that they deploy those skills largely for laughs. It makes basic fact-checking easy.

One concern is how more elite actors reacted to these clips. Some news outlets initially ran the story about Starmer abusing his aide as genuine, before pulling it. The rush to get the story out was quickly corrected, but it is likely that some people read the original reports and not the correction. During the election, the BBC’s disinformation and social media correspondent put out several reports based on these instances. She conflated ‘deepfakes’, ‘dumbfakes’ and ‘mash-ups’, the latter being where real clips of politicians are remixed to say different things. These have circulated for years, with accounts like ‘Cassetteboy’ gaining a huge following for mash-ups based on David Cameron and Jacob Rees-Mogg speeches. They are obvious satire and not meant to be taken seriously. It is neither helpful nor informative to put all of these into the same category. A joke clip is akin to the satire that has always existed in a healthy democratic public sphere, whereas a fully rendered deepfake that is meant to be believed, made with the express intention of damaging a politician, is a clear threat to democracy. Conflating them obscures what a real threat would look like.

All this is not to downplay the potential democratic risks of GAI – they are real and potentially serious. But we shouldn’t take our eye off disinformation perpetuated by more elite sources and spread via more prosaic means. The Conservatives, for example, were fact-checked repeatedly for spreading false claims about Labour’s ‘£2,000 tax rise’. Yet they simply repeated the claim verbally, largely via traditional media, and it was then amplified by certain elements of the press. No GAI needed. In this context, a joke for one’s friends that gets out of hand may not be the most urgent threat to democracy that needs tackling.