Photo Credit: Pexels
If you thought the past two presidential elections were chaotic, you haven’t seen anything yet. The scope of what was possible in 2016 and 2020 pales in comparison with what’s possible using artificial intelligence (AI). With so many people experimenting with AI chatbots and AI content-generation tools, it’s only a matter of time before political supporters, pollsters, and campaign managers decide to test the limits of what’s possible with AI.
AI has the potential to wreak havoc on elections
A simple example involving pop superstar Katy Perry is all you need to realize just how convincing AI-generated imagery has already become. By now, you’ve probably seen the image of Katy Perry wearing an amazing flower dress at the 2024 Met Gala in New York City. But here’s the thing – Katy Perry wasn’t even in New York; she never attended the Met Gala, and she never wore that dress. Yet the image was so convincing that it fooled everyone – even Katy Perry’s mother!
And that’s the problem – if AI is good enough to fool your mom, it’s obviously good enough to fool millions of potential voters. So imagine the possibilities if people create AI-generated images or videos of Donald Trump or Joe Biden. What would happen if an image surfaced of Trump or Biden doing something truly reprehensible? And what would happen if that image appeared as a nasty “October surprise,” just days before the election?
We’re already at a point where even mainstream media outlets have a hard time distinguishing between real and AI-generated videos. Take a famous example from July 2022, when Joe Biden gave brief remarks about the events of January 6. The Democrats posted the following “You can’t be pro-insurrection and pro-American” video featuring Joe Biden:
Two years later, it’s still hard to tell whether the clip is real or fake. For one thing, Joe Biden doesn’t blink once in the entire 17-second clip – normally a clear tip-off that footage is AI-generated. And there’s something about the voice and the face that’s just, well, off. Yet the White House says the video was completely real. So we have a real problem here: it’s getting to the point where even highly educated, well-informed voters might have trouble discerning fact from fiction.
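If you’re curious how that “no blinking” tip-off could be checked in practice, here’s a minimal, illustrative sketch – not a tool any newsroom actually uses – that counts closed-eye frames in a video using MediaPipe Face Mesh and OpenCV. The landmark indices and the 0.2 eye-aspect-ratio threshold are common rules of thumb, and “clip.mp4” is just a placeholder filename.

```python
# Illustrative sketch of the "no blinking" heuristic described above.
# Assumptions: `pip install mediapipe opencv-python`, Python 3.8+, and a local
# video file named "clip.mp4" (a placeholder, not a real file from this story).
import math
import cv2
import mediapipe as mp

# MediaPipe Face Mesh landmark indices around one eye:
# corner, upper lid (x2), corner, lower lid (x2) -- a common choice for EAR.
EYE = [33, 160, 158, 133, 153, 144]

def eye_aspect_ratio(p):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); small values mean a closed eye."""
    p1, p2, p3, p4, p5, p6 = p
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2.0 * math.dist(p1, p4))

def count_closed_eye_frames(video_path, ear_threshold=0.2):
    """Return (frames with a detected face, frames where the eye looks closed)."""
    cap = cv2.VideoCapture(video_path)
    total, closed = 0, 0
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue  # skip frames where no face was found
            lm = result.multi_face_landmarks[0].landmark
            pts = [(lm[i].x, lm[i].y) for i in EYE]
            total += 1
            if eye_aspect_ratio(pts) < ear_threshold:
                closed += 1
    cap.release()
    return total, closed

if __name__ == "__main__":
    total, closed = count_closed_eye_frames("clip.mp4")
    print(f"{closed} of {total} face frames show a closed eye")
```

For context, a 17-second clip at roughly 30 frames per second contains about 500 frames, and a real person would typically blink several times in that span – so zero closed-eye frames would be the kind of anomaly this sketch is looking for.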
Social media to the rescue?
Against this backdrop, it’s easy to see why the biggest social media platforms are tightening their AI policies ahead of the election. They don’t want to be blamed for throwing the election to either candidate. Thus, TikTok, Meta, and YouTube are all instituting new rules requiring AI-generated content to be clearly labeled as such. So, for example, that Katy Perry-at-the-Met-Gala image now carries all kinds of “this might have been created by AI” disclaimers.
AI companies are taking new steps to ward off potential problems as well. OpenAI, the company behind ChatGPT and DALL-E, is partnering with Microsoft on a $2 million fund to spot deepfakes, and OpenAI CEO Sam Altman has been outspoken about the potential dangers of AI.
So, in theory, the guardrails are in place, and we shouldn’t have anything to worry about in the 2024 presidential election. But, as we know from experience, expecting Silicon Valley’s social media giants to regulate themselves is a lost cause. So keep an eye out for AI deepfakes and other examples of sophisticated AI manipulation as we head into the final months of the year.