This year promises to be a banner one for elective government, with billions of people – more than 40% of the world’s population – able to vote in elections.
But nearly five months into 2024, some government officials are quietly wondering why the looming risk of AI has apparently not yet materialized. Even as voters in Indonesia and Pakistan have gone to the polls, officials see little evidence that viral deepfakes are distorting election results, according to a recent article in Politico, which cited “national security officials, tech company executives and outside watchdog groups.” AI, they said, did not have the “mass impact” they expected.

That is a painfully short-sighted view. The reason? AI could be disrupting elections right now – we just don’t know it.
The problem is that officials are looking for a Machiavellian version of the Balenciaga Pope. Remember the AI-generated images of Pope Francis in a puffer jacket that went viral last year? That’s what many now expect from generative AI tools – which can summon human-like text, images and videos en masse – content as easy to spot as earlier persuasion campaigns, such as the Macedonian fake-news sites that backed Donald Trump or the Russian accounts that spread divisive political content on Twitter and Facebook. So-called astroturfing was easy to identify when an army of bots said the same thing thousands of times.