If you think about it, much of modern advertising is really just a clever illusion. For 30 or 60 seconds at a time, you are temporarily suspended in an imaginary world where paid actors attempt to convince you that buying a certain product will instantly change your life. Sure, the products are real, but not much else is. So it’s perhaps not surprising that digital advertisers continue to experiment with new technologies that blur the line between fantasy and reality. Cartoon avatars and CGI were just the beginning – we’re now headed into a brave new world filled with holograms, virtual reality, augmented reality and various other forms of “mixed reality” – including new face-swapping and voice-generating technologies powered by AI.
The ESPN deepfake ad
The example that everyone is talking about now is a new ESPN and State Farm commercial promoting a 10-part documentary series on Michael Jordan and the Chicago Bulls. Advertisers – unable to meet in person due to economic lockdown measures – instead used a little computer tomfoolery to create a “deepfake” video ad featuring ESPN SportsCenter anchor Kenny Mayne. In basic terms, the advertisers superimposed the mouth and voice of the 60-year-old Mayne onto footage of his 38-year-old self. As a result, the Kenny Mayne of 1998 appeared to make prophetic statements about State Farm’s sponsorship of an ESPN documentary more than 20 years in the future. Well played, ESPN.
If you weren’t watching closely, this might have been a little startling. What the heck did I just see? Only by re-watching the video on a platform like YouTube could you catch what happened. The telltale sign of any deepfake (at least for now) is that the mouth moves in a way that’s not quite natural. And ESPN made clever references within the ad itself to the fact that this was a bit of computer-generated wizardry, even if the network never came out and clearly labeled the commercial as a “deepfake.”
Implications of using deepfakes in ads
Which, of course, raises all sorts of troubling questions. Just as social media influencers are now required to disclose when they are being paid to promote a product, shouldn’t advertisers be required to disclose when they are using computer-generated fakes to promote one? Presumably, Kenny Mayne was in on the joke, but what if an advertiser decides to create a deepfake ad without the consent of a celebrity endorser? Imagine the chaos that would ensue if advertisers started using celebrity likenesses without permission as part of new deepfake promotions.
By now, you’ve probably seen your fair share of deepfakes – like the new Elon Musk deepfake available for Zoom, or the famous deepfake of President Obama appearing to say all sorts of outlandish things, courtesy of actor Jordan Peele. Put another way, the deepfake genie is now out of the bottle. The creators of the Elon Musk deepfake for Zoom have made their software publicly available on GitHub, so any enterprising coder can now play around with this technology. Combine that sort of software with a massive facial recognition database, and it might become possible to make any face in the world say anything you’d like. If you want more proof, just type “deepfakes” into YouTube’s search bar and browse the examples.
That might have scary implications for the worlds of politics or religion, but far fewer for the world of advertising. Remember – advertising has always been fake. (If you’re not convinced, just re-watch an old episode of “Mad Men” to see how Don Draper’s team at Sterling Cooper crafted campaigns designed to appeal to people’s emotions and needs.) The game hasn’t changed, only the technology. As long as advertisers are ethical about how they use people’s likenesses, and as long as they let audiences in on what they are doing, prepare to be entertained – and even amused – by whatever comes next.