
Is it getting harder to know what is real?

Updated: Jan 7, 2021

First posted on LinkedIn, December 18th, 2020


Is it getting harder to know what is real? I wouldn't be surprised if your answer is yes, especially with news headlines claiming fake news and videos of famous figures, or even deceased celebrities, appearing in ads or giving speeches that never happened. This phenomenon, called Deepfakes, is sparking discussion and interest all over the internet, but this new medium is especially intriguing for the AdTech industry.


So how do Deepfakes work? First, an algorithm needs a suitable amount of training data, for example videos and audio of famous movie stars or politicians. Then a generative adversarial network (GAN) pits two neural networks against each other: a generator creates the fake video while a discriminator tries to detect whether it is fake. The video goes through successive iterations until the final product can no longer be detected as fake by the discriminator. The results are becoming more and more sophisticated. Just take a look at these Deepfakes of former US president Barack Obama giving speeches he never made.
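The adversarial feedback loop described above can be illustrated with a deliberately simplified sketch. This is a hypothetical toy, not a real deepfake pipeline: real GANs train two deep neural networks on video frames, whereas here the "generator" is just a number nudged toward the statistics of "real" data until a threshold-based "detector" can no longer flag it.

```python
# Toy illustration of the adversarial loop behind Deepfakes: a generator
# refines its output until a discriminator can no longer flag it as fake.
# All names and values here are made up for illustration.

REAL_MEAN = 5.0   # stand-in for the statistics of real footage
THRESHOLD = 0.1   # the discriminator's detection sensitivity

def discriminator_flags_fake(sample: float) -> bool:
    """Flag the sample as fake if it deviates too far from real data."""
    return abs(sample - REAL_MEAN) > THRESHOLD

def train_generator(start: float = 0.0, lr: float = 0.3):
    """Iteratively nudge the generator's output toward the real data
    until the discriminator is fooled; return (output, iterations)."""
    output, iterations = start, 0
    while discriminator_flags_fake(output):
        output += lr * (REAL_MEAN - output)  # feedback from detection
        iterations += 1
    return output, iterations

output, iterations = train_generator()
print(f"Fooled the discriminator after {iterations} iterations")
```

Each pass through the loop corresponds to one of the "different iterations" mentioned above: the generator's output moves closer to real data precisely because the discriminator keeps rejecting it.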


The potential for misuse is obvious, and there will surely need to be stronger ethical guidelines around how this technology is used. We may still be a few years away from the point where we can no longer tell what is real, but it looks like we are heading in that direction. One industry that seems to be a first mover is advertising: advertisers are already catching on to the potential and starting to implement this technology in some creative ways.


Here is an example from ESPN and State Farm, who released a game-changing commercial during the screening of the NBA documentary "The Last Dance".


What makes this interesting is how the advertisers chose to let the audience know the video is fake. Kenny Mayne, a well-known sports announcer, appears in footage from the '90s making incredibly specific and accurate predictions about 2020, even going as far as telling the audience where this very video will be used. It is an incredibly smart and funny way to tell the audience that what they are watching is not real. In doing so, the advertisers are setting a standard, perhaps even an unwritten rule: when using Deepfake technologies in advertising, it is important for building trust to let the audience know that what they are seeing is not real. But as this video shows, that doesn't have to be done with a warning label over the footage. Since the technology is so new, there is plenty of room for creativity within an ethical framework.


David Beckham speaks nine languages to launch Malaria Must Die Voice Petition



Here is another clever example of how advertisers are using Deepfakes. Reading the title of this video, you might think, "Wow, I didn't know David Beckham speaks nine languages." Of course he doesn't, so once again advertisers are finding creative ways to let the audience know that what they are watching is not real.


As a new technology, Deepfakes present many opportunities for brands and marketers, especially while the world is still facing unprecedented restrictions from the global COVID-19 pandemic. Min Sun, the Chief AI Scientist at Appier, says this technology is a good opportunity because access to talent for spokesperson-led campaigns is now limited. He goes on to say that Deepfakes let you use a local actor to play a part, then alter the footage and add the spokesperson in later.


Advertisers and marketers, as early adopters, may be the first to realize the commercial value of Deepfakes, but they might also be the ones to lead the way on the ethical use of the technology. Tech ethicist David Polgar states in an article from The Drum: "The general public strongly desires to know if what they are interacting with is real." He continues, "That doesn't mean people don't want synthetic media, it just means that blurring the line between real and synthetic without transparency is disrespectful." Polgar goes on to say, "The advertising industry can and should take a stance and clearly distinguish between the two."


If the industry takes a stance and creates self-imposed norms around transparency and the ethical use of Deepfakes, people will become more accustomed to their use in media, and the technology will be more widely accepted and available instead of feared and regulated.

The AdTech industry has an opportunity here to lead the way in setting the standard for letting audiences know that what they're seeing is not real. Marketers understand that building a relationship with customers is key to success, and that relationship is predicated on trust. Since how generative adversarial networks work is not exactly common knowledge, using the technology without transparency will lead to distrust, and that is not good for brand loyalty.


With such an intriguing technology, the lure will be strong for advertisers. The question is whether they can lead the way in creating ethical norms around how these technologies are used, and set the ethical trend for future uses of Deepfakes in other industries.


Written by Boice Lin, Head of Global Partnership at Appier and Advisor for Aiforgood Asia
