Spitting Images: Tracking Deepfakes and Generative AI in Elections Methodology

October 10, 2024
4 min read

The year 2024 will see the largest electorate in history, with more than 60 countries—approximately 49% of the global population—holding national elections. At the same time, generative artificial intelligence has democratized the ability to create realistic fake or altered images, videos, and audio recordings—including those of political candidates. Spitting Images is an interactive tool tracking significant deepfake campaigns and instances of generative AI use around global elections.  

A global map showing and categorizing political deepfake instances can help voters, policymakers, and researchers understand the prevalence and potential impact of AI on elections. By identifying patterns and allowing users to see these incidents unfold in real time, the map provides a comprehensive geographic overview. This breadth enables a comparison of deepfake usage across different regions, highlighting common tactics and unique challenges political actors face across countries.

Methodology

The tracker’s start date is September 1, 2023, about one month prior to Slovakia’s September 30 parliamentary elections. In the days leading up to that election, deepfake audio clips spread across social media, purportedly capturing Slovakia’s liberal party leader discussing politically damaging topics such as vote rigging and raising the price of beer. This highly visible instance of generative AI electoral interference marked one of the first publicly recognized and extensively covered deepfake uses ahead of the 2024 election cycle.

The Spitting Images map tracks the following information:
•    category of deepfake: generative AI image, video, audio, or mixed media
•    timeline of each deepfake’s deployment
•    associated elections and countries
•    descriptions of the content
•    related media coverage

This project focuses specifically on instances of deepfakes and AI-generated audiovisual content (audio, video, images, or combinations thereof), rather than on all uses of generative AI (for example, generated text). This project defines a potentially "election-related" deepfake as one that involves a politician’s likeness; is spread or created by a political candidate or party; or relates directly to political candidates, political parties, elections, or electoral processes (for example, polling results or the ballot process). Thus, the project includes deepfakes occurring not only in the immediate leadup to an election, but also in the weeks and months surrounding it.

It would be difficult, if not impossible, to report with high confidence on all deepfakes posted to the internet, regardless of their reach and impact. Nor would doing so likely help in understanding AI’s electoral impacts. Therefore, the project does not include every instance of AI-generated material posted online. Instead, it focuses on deepfakes or generative AI instances that have garnered attention from the media, among experts, and/or through fact-checking organizations.

A deepfake instance is included if it gained enough online traction to generate international media coverage or fact-checking analysis. If a report of a deepfake appeared only in a smaller or local media outlet, GMF Technology seeks to identify the original source of the deepfake and to corroborate the coverage with a secondary media outlet, fact-checking organization, academic study, or expert analysis before including it. Each recorded deepfake must be featured in credible reporting to appear on the map. If a reported story concerns a campaign of deepfake media (several deepfakes with the same narrative distributed within a similar timeframe), the entry covers the entire campaign rather than each individual piece of content. The GMF Technology team includes language explaining whether a deepfake has been verified by experts or independent sources, or whether the use of generative AI is contested. If an alleged AI-enabled instance is proven to be authentic media created without generative AI, it is deleted from the map.

While non-consensual sexual or intimate deepfakes are included in the map when they meet all other inclusion criteria, the map does not link to the original content or to sources that link to the original content. With this approach, GMF Technology aims to raise awareness about the prevalence and issues surrounding non-consensual sexual deepfakes impacting politicians and elections, without inadvertently contributing to their dissemination.

Spitting Images: Tracking Deepfakes and Generative AI in Elections is continuously updated by the GMF Technology team to reflect new media coverage of electoral deepfakes. The dataset behind the interactive map is based on open-source information. GMF Technology regularly monitors Google News alerts for media coverage of deepfakes in countries holding national elections and also relies on crowd-sourced media coverage of deepfakes, including from incidentdatabase.ai, generative AI research from Check Point, and other public sources. While news monitoring focuses on national elections, if a deepfake concerning a local or municipal election is the subject of mainstream media reporting, GMF Technology will include it in the tracker.

The map focuses on English-language media, but includes select examples of non-English media for countries that may not attract international English-language coverage. If a primary source is in a language other than English, the translated title follows in parentheses.

The map is a living document, and we welcome electoral deepfake submissions, feedback, insights, or documented corrections to keep it up to date and accurate. Please email any feedback or additional instances for inclusion in the tracker to [email protected].