Dr Marco Bastos
Ad Astra Fellow at the School of Information and Communication Studies, University College Dublin, and Senior Lecturer in Media and Communication in the Department of Sociology at City, University of London.
Email: Marco.Bastos@city.ac.uk
Twitter: @toledobastos
Section 6: The Digital Campaign
- Digital campaign regulation: more urgent than ever?
- Did the Conservatives embrace social media in 2019?
- #GE2019 – Labour owns the Tories on Instagram, the latest digital battlefield
- Spot the difference: how Nicola Sturgeon and Jo Swinson self-represented on Twitter
- “Go back to your student politics”? Momentum, the digital campaign, and what comes next
- Taking the tube
- “Behind the curtain of the targeting machine”: political parties A/B testing in action
- Against opacity, outrage and deception in digital political campaigning
- The explosion of the public sphere
- Big chickens, dumbfakes, squirrel killers: was 2019 the election where ‘shitposting’ went mainstream?
Influence operations and propaganda on social media emerged in the run-up to electoral events in 2016 and continue to challenge policymakers. These operations rely on coordinated and targeted attacks in which the accounts and profiles sourcing the content disappear in the months following the campaign. User accounts may be suspended by social platforms for violating community standards and Terms of Service, such as posting inappropriate content or displaying bot-like activity patterns; others are deleted by the malicious account holders to cover their tracks. The modus operandi of these operations often consists of large botnets amplifying original hyperpartisan content and then disappearing after the campaign. The emerging thread is then picked up by high-profile partisan accounts that seed divisive rhetoric to larger networks of partisan users and automated accounts.
It is against this landscape of information warfare that political campaigns seek to influence public opinion. Social media platforms have ramped up efforts to flag false amplification, remove “fake accounts,” and prevent highly optimized political messages from being targeted at users. These efforts sought to clear social platforms of “low-quality content,” including user accounts, posts, and weblinks selected for removal. The removal of social media posts and accounts thus constitutes the central line of action against influence operations, misinformation, false or fabricated news items, spam, and user-generated hyperpartisan news. While social platforms rarely disclose content that was flagged for removal, some companies have publicly released the community standards used to remove content from their services.
Studying the politics of deletion on social platforms is thus an exercise in reverse engineering, as content blocked by social platforms is likely to be problematic content that is no longer available for inspection. As such, the volume of deleted accounts and posts linked to campaigns can be used to gauge the extent to which a given election on social media was plagued by problematic content. The process of verifying whether content remains available is, however, cumbersome. Moreover, election campaigns need to be monitored in real time: once a post is deleted by a user or blocked by the platform, it disappears from the platform altogether, and deleting a tweet automatically triggers a cascade of deletions for all retweets of that tweet. This specific affordance of social platforms has of course facilitated the disappearance of posts, images, and weblinks from public view.
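The monitoring routine can be sketched in a few lines of Python. The snippet below re-checks a batch of previously collected tweet IDs against the Twitter API v2 tweet-lookup endpoint, which reports deleted or suspended tweets under an “errors” field; the token, tweet IDs, and bookkeeping are illustrative assumptions rather than the method used in the studies discussed here.

```python
import requests

# Hypothetical credentials and tweet IDs collected during the campaign.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"
collected_ids = ["1201234567890123456", "1201234567890123457"]

def check_availability(tweet_ids):
    """Return the IDs still publicly available and those now missing."""
    url = "https://api.twitter.com/2/tweets"
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    params = {"ids": ",".join(tweet_ids)}  # up to 100 IDs per request
    payload = requests.get(url, headers=headers, params=params).json()
    available = {tweet["id"] for tweet in payload.get("data", [])}
    # Deleted, suspended, or protected tweets are listed under "errors".
    missing = {error["resource_id"]: error.get("title", "unknown")
               for error in payload.get("errors", [])}
    return available, missing

# Re-run the check periodically: once a tweet is deleted or its account
# suspended, it drops out of "data" and reappears under "errors".
available, missing = check_availability(collected_ids)
print(f"{len(missing)} of {len(collected_ids)} tweets no longer available")
```

Running the check at regular intervals over the campaign window yields a time series of tweet decay rather than a single snapshot.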
The 2019 UK General Election appears to have been relatively trouble-free. On the eve of the vote, only 6.7% of election-related tweets had been removed and fewer than 2% of the accounts were no longer operational. This figure is in line with previous studies reporting that on average 4% of tweets disappear, but contrasts with the referendum campaign, in which 33% of the tweets leading up to the referendum vote had been removed. Only about half of the most active accounts that tweeted about the referendum continued to operate publicly, and 20% of all accounts were no longer active. These accounts were particularly prolific: while Twitter suspended fewer than 5% of all accounts, the suspended accounts had posted nearly 10% of the entire conversation about the referendum. Partisan affiliation was also a good predictor of tweet decay in the referendum campaign: more messages affiliated with the Leave campaign disappeared than the entire universe of tweets affiliated with the Remain campaign.
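Decay figures of this kind can be computed by comparing two snapshots of the same collection, one captured during the campaign and one on the eve of the vote. The file and column names below are assumed for illustration, not taken from the studies cited above.

```python
import pandas as pd

# Illustrative snapshots of the same tweet collection (assumed filenames):
# each file holds one row per tweet with tweet_id and user_id columns.
campaign = pd.read_csv("ge2019_tweets_campaign.csv")
eve = pd.read_csv("ge2019_tweets_eve.csv")

# Share of tweets and accounts from the campaign snapshot that are
# no longer present in the eve-of-vote snapshot.
tweet_decay = 1 - eve["tweet_id"].nunique() / campaign["tweet_id"].nunique()
account_decay = 1 - eve["user_id"].nunique() / campaign["user_id"].nunique()

print(f"Tweets removed: {tweet_decay:.1%}")
print(f"Accounts no longer operational: {account_decay:.1%}")
```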
Ephemerality is perhaps an expected affordance of social media communication, but it is not an expected design of political communication and deliberation across social platforms. Influence operations can exploit this affordance by offloading problematic content that is removed from platforms before the relentless – though time-consuming – news cycle has successfully corrected the narratives championed in highly volatile social media campaigns. This amounts to an involuntary but spontaneous gaslighting of social platforms: the low persistence and high ephemerality of social media posts are leveraged to transition from one contentious and unverified political frame to the next before mechanisms for checking and correcting false information are in place.
Ultimately, the politics of deletion allow for daisy-chaining multiple disinformation campaigns that disappear as soon as rectifying information or alternative stories start to emerge.
Campaigners adopting the “Firehose of Falsehood” model can offload social media messages rapidly, repetitively, and continuously over multiple channels without commitment to consistency or accuracy. High-volume posting of social media messages can be effective because individuals are more likely to be persuaded if a story, however confusing, appears to have been reported repeatedly and by multiple sources. In this context, counterpropaganda methods and the fact-checking of social media posts are particularly ineffective. On social media platforms, as it turns out, nobody will know you are a troll.