Starting in November, Google will begin cracking down on “synthetic content,” which it defines as any image, video, or audio content created with the help of artificial intelligence. From now on, politicians who build political ads with AI-generated content will have to disclose that fact.
The big fear, of course, is that AI could determine the fate of the 2024 presidential election. We’re not just talking about the potential for “deep fakes” to fool voters, either. We’re talking about the stealthy use of AI-generated content to push a political narrative that simply is not true. If you’ve ever spent time on a platform like Midjourney or DALL-E, both of which use generative AI to create images from text prompts, you’re probably aware of just how easy this is to do.
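To get a sense of just how low the barrier is, here is a minimal sketch of a text-to-image request using OpenAI’s Python client, the standard programmatic route to DALL-E. The prompt is a hypothetical example of my own, and the model name and parameters reflect the API as commonly documented and may change:

```python
# Minimal sketch: generating an image from a text prompt with OpenAI's
# Python client. The prompt below is a made-up example; model name and
# parameters are assumptions based on the public API and may change.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.images.generate(
    model="dall-e-3",
    prompt="A politician shaking hands with a rival at a press conference",
    n=1,
    size="1024x1024",
)

# Each result includes a URL pointing to the generated image
print(response.data[0].url)
```

A few lines of code, a one-sentence prompt, and a plausible-looking photo comes back in seconds. That is the whole point, and the whole problem.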
AI and a new era of dirty politics
The one example that everyone is talking about, of course, is the 30-second political ad that the Ron DeSantis campaign team ran back in June. At around the 25-second mark, the video contained images of former President Donald Trump hugging Dr. Anthony Fauci.
Donald Trump became a household name by FIRING countless people *on television*
But when it came to Fauci… pic.twitter.com/7Lxwf75NQm
— DeSantis War Room 🐊 (@DeSantisWarRoom) June 5, 2023
The ad suggested that maybe, just maybe, Trump was supporting Fauci during the entire COVID lockdown era, and that we shouldn’t trust him now. But here’s the thing: the images were completely fake. They’re just like the thousands of other fake Donald Trump images out there, many of them showing him wearing an orange prison jumpsuit, and all of them created with the help of AI.
Another example of the kind of ad Google is trying to clamp down on is a 30-second campaign spot from the RNC that purported to show a dystopian United States after President Biden’s reelection in 2024.
The only problem, of course, is that the ad was built entirely with AI imagery. Scenes of migrants surging over the border and government troops imposing martial law in San Francisco were completely fake. Maybe that would have been obvious to the average U.S. voter; either way, Google now requires a specific disclosure for any AI-generated content.
However, if you read the fine print, it’s easy to see how politicians might get around the rules. For one thing, Google’s definition of “synthetic content” is somewhat arbitrary: it does not apply to certain routine AI-assisted edits, such as image resizing, color correction, or background edits. The basic idea, says Google, is that you simply can’t make it appear that someone is doing or saying something they did not say or do.
If Google goes first, who’s next?
It’s easy to see how Google’s effort to get ahead of the AI-generated content issue will lead other tech companies to do the same thing. Facebook, for example, has a very sophisticated AI operation in Silicon Valley and could decide to crack down on any AI-generated status updates appearing on its platform. The same goes for X (formerly Twitter), which might decide to censor any memes featuring “synthetic content.”
One thing is sure: nobody wants to be accused of letting “the bots” win the election, as purportedly happened in 2016. Back then, Facebook was blamed for letting bots that trafficked in misinformation and disinformation run rampant. Since nobody wants to let the AI bots win in 2024, don’t be surprised if the rules and guidelines keep tightening as we get deeper into the election cycle.