Prof Helen Margetts, OBE FBA
Professor of Society and Internet at the University of Oxford, where she was Director of the Oxford Internet Institute 2011–2018. She is Director for Public Policy at The Alan Turing Institute for Data Science and AI, where she leads a research programme on AI, public policy and democracy.
Email: helen.margetts@oii.ox.ac.uk
UK Election 2024
Section 6: The digital campaign
62. Local news and information on candidates was insufficient (Dr Martin Moore, Dr Gordon Neil Ramsay)
63. The Al election that wasn’t – yet (Prof Helen Margetts)
64. Al-generated images: how citizens depicted politicians and society (Niamh Cashell)
65. The threat to democracy that wasn’t? Four types of Al-generated synthetic media in the General Election (Dr Liam McLoughlin)
66. Shitposting meets Generative Artificial Intelligence and ‘deep fakes’ at the 2024 General Election (Dr Rosalynd Southern)
67. Shitposting the General Election: why this campaign felt like one long meme (SE Harman, Dr Matthew Wall)
68. Winning voters’ hearts and minds… through reels and memes?! How #GE24 unfolded on TikTok (Dr Aljosha Karim Schapals)
69. Debating the election in “Non-political” Third Spaces: the case of Gransnet (Prof Scott Wright et al)
70. Which social networks did political parties use most in 2024? (Dr Richard Fletcher)
71. Facebook’s role in the General Election: still relevant in a more fragmented information environment (Prof Andrea Carson, Dr Felix M. Simon)
72. Farage on TikTok: the perfect populist platform (Prof Karin Wahl-Jorgensen)
At the start of 2024 – a year when half the world’s citizens had a chance to vote – technology leaders such as Sam Altman, head of OpenAI, raised concerns about ‘AI and democracy’. They really meant ‘AI and the next US election’. But elections across the world were being run for the first time since the release of so-called ‘generative AI’, which allows anyone to generate text, images, audio and video from written prompts. So, along with most 2024 elections, the UK election was labelled ‘the AI election’.
How might generative AI damage an election? The first set of concerns relates to information: the democratic landscape could be deluged with low-quality, potentially harmful misinformation at scale. The second relates to influence: AI might turbocharge perniciously targeted political advertising and persuasion, and be used to generate abuse, threats and intimidation at scale. So, what happened in the campaign leading up to 4th July? I deal with information and influence in turn.
Information
There were clearly bots at work in the campaign, with most evidence pointing to networks of Reform-supporting accounts across X, Instagram, Facebook and TikTok. These pumped out low-quality propaganda, but it is not clear that it was generated with AI. Accounts like GenZboomer, identified by a BBC initiative to capture propaganda-style content, claimed to be real humans – albeit ones unwilling to talk to a journalist.
What about deepfakes? There was little evidence of people seeing seriously harmful deepfakes – nothing to rival the audio deepfake of the London Mayor expressing inflammatory pro-Palestinian views that emerged in the May 2024 London election, which Sadiq Khan described as almost causing ‘serious disorder’. The most worrying evidence of an AI effect comes more from AI hype than from AI itself. A Turing Institute research survey showed that although only about 6% of respondents recalled being exposed to political deepfakes, over 90% were concerned that deepfakes could increase distrust in information or manipulate public opinion.
Influence
A key issue here was highly personalised political microtargeting. All the main parties ran online advertisements during the campaign, but targeting was unsophisticated, in part because most social media platforms have tightened restrictions on targeting specifically for political advertising. Furthermore, when it comes to personalising messages, new research shows that targeted messages devised with GPT-4 did not become more persuasive however many personal attributes were used – GPT-4’s single generic ‘best message’ was just as persuasive.
AI could reinforce ‘negative persuasion’ – hate, abuse and intimidation – a longstanding concern in British politics that worsened during 2024, especially after Musk took over and rebranded Twitter as X. NBC reported that the platform was monetising racist and antisemitic hashtags such as #whitepower, and the NGO Global Witness claimed that just 10 accounts spread 60,000 posts containing “extreme and violent” hate speech, disinformation and conspiracy theories, viewed 150 million times during the election. Gender disparities were evident even before the election, with female candidates reporting to an Electoral Commission study during the May elections that online threats against them had got worse. Again, it is unclear that this abuse is AI-generated – yet. But the observable effects on politics are already worrying. Turing Institute research shows that 77% of women are not comfortable, or not at all comfortable, expressing political opinions online – far more than for men experiencing similar levels of abuse.
AI and our democratic future
So it wasn’t the AI election – but what can it tell us about AI’s impact on our democratic future? All the AI tools for the feared deluge of political propaganda are in place, even if the deluge didn’t materialise this time, and generative AI will continue to evolve and develop. But 2024 suggests that the effects of hype around AI and election safety are also concerning. The focus on misinformation (including by AI assistants such as Microsoft’s Copilot) can itself decrease trust in political information. In future, AI-powered negative persuasion may increase intimidation – and fear of intimidation may decrease willingness to participate in political life.
One challenge with assessing the effect of AI is that in this election the ‘standard of truth was very low’, as Channel 4’s former political editor judged at LSE Election Night. Record levels of distrust in UK politicians’ claims were evidenced by 58% of people saying they ‘almost never’ trust ‘politicians of any party to tell the truth when they are in a tight corner’, up 19 points from 2020. Rishi Sunak’s characterisation of Labour’s tax plans was widely circulated even after being refuted by Treasury officials, leading to a warning from the UK Statistics Authority to all political leaders in the campaign. AI-powered platforms are used to disseminate such claims, but the root of the problem lies elsewhere.
Democracy is for daily life – not just elections. In a democratic landscape focused on misinformation (be it from AI, fear of AI or politicians themselves), the danger is that people no longer trust any political information, even about the date, time, rules or results of the election itself. The future focus needs to be on building the capability to get ‘good’ democratic information out, rather than relying on the information market.