If you’re worried about the potential negative impact of social media on the 2024 presidential election, you shouldn’t be. That’s according to Meta, the parent company of Facebook, Instagram, and WhatsApp. Meta is going to great lengths to convince us that everything is under control, and that the content you see on Facebook or Instagram can absolutely be trusted as we head into the final months of the 2024 presidential campaign.
Who’s worried about deep fakes and AI chatbots?
First and foremost, Meta says it is putting stringent new guidelines in place to address the rise of artificial intelligence (AI). Advertisers will now be required to disclose when they use AI tools to create, alter, or influence a political ad. In short, if an ad uses so-called “AI methods” to depict a person or event, or to digitally alter how a real person or event appears, the advertiser will have to tell people the ad was AI-generated.
You get the idea – Meta doesn’t want political candidates (or their supporters) to pull a fast one on an unsuspecting electorate. In theory, the new rules should stop fake AI-generated images from being shared across social media and going viral before they can be debunked. They should also head off deep fakes and anything similar that might confuse voters, such as AI-generated audio that appears to come from a particular candidate.
No repeat of 2016 or 2020?
Of course, what Meta really has in mind is avoiding a repeat of the 2016 or 2020 presidential elections. Both of those elections were plagued by concerns that they had somehow been manipulated through social media. People still talk about the “Russian bots” that supposedly helped elect former President Donald Trump back in 2016. And the 2020 campaign was dominated by the “fake news” and “misinformation” appearing across Meta’s social media platforms. Many voters didn’t know what was true and what was fake. Nearly four years later, people are still divided on whether the story of Hunter Biden’s “laptop from hell” was true or not.
Meta also wants to avoid any controversy surrounding a last-minute “October surprise” that completely changes the face of an election. To do that, Meta will block any new political ads during the final week of the campaign. Older ads that have already been vetted will be allowed to run, but new ads will be strictly forbidden.
And, on top of all that, Meta says it has an army of people working around the clock. It claims more than 40,000 people working in “safety and security,” along with more than 100 partners worldwide combating misinformation and election lies across a variety of languages.
And it says it has invested more than $20 billion since 2016 to guarantee that every election is 100%, positively squeaky-clean. It has done everything possible to crack down on hate speech and campaign bullying. And it has put into place tight new guidelines for “state-controlled media” in order to prevent state-run propaganda from tipping the scales of an election.
So what could possibly go wrong?
Despite all of these steps by Meta, do you somehow get the feeling that it just won’t be enough? In the post-pandemic era, the whole mail-in-your-ballot, “vote from home” approach seems to have enormous potential for abuse – especially at a time when thousands of undocumented migrants are streaming over the U.S. border every day and we seem to have lost control over who’s actually in the country.
So keep your eyes and ears open. Meta may think it has solved the problem of election integrity for social media, but the 2024 election could still end up being the most divisive and contentious one yet.