The lines blur between truth and fiction.
Let’s play a game. The GIF below contains a real video of Russian President Vladimir Putin speaking and also a deepfake — a video generated by a computer algorithm in which a person in an existing video is replaced with someone else’s image.
Can you tell which one below is the real Putin and which one is the fake?
If you guessed the left one as the real one, then you’re correct.
Let’s try another one. Which one is the real Obama, and which one is the fake?
Did you say the right one was the real one? If so, you’re right on the money!
What’s the point of this so-called “game”? Well, it turns out that this isn’t much of a game anymore. As deepfake technology grows more precise, it’s becoming harder and harder to tell the difference between what is real and what is fake on the internet.
This has huge implications for the vast quantities of information shared on social media and other sources of information. Can we trust the videos of our nation’s leaders that we see on our Facebook home pages? If not, then we must inform ourselves about the danger that deepfakes pose.
Deepfakes are artificial intelligence programs that use machine learning to create strikingly realistic videos and audio clips of events that never happened. To create a deepfake, two machine learning networks are paired in a system called a generative adversarial network (GAN), according to CSO.
In a GAN, one network is the “forger” (formally, the generator) while the other is the “detector” (the discriminator); the two compete against each other, hence the “adversarial” in the name. A data set of real video footage is fed to the “forger,” which analyzes the subject under different lighting conditions and from different angles, according to IEEE Spectrum.
The network then “learns” what the subject looks like and can use this information to modify others’ faces. The “detector” works to recognize the forgery. When it does, it reports its results to the “forger,” which learns from its mistakes and creates a new fake. The cycle repeats until the “detector” can no longer spot a fake, according to CSO.
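The forger-versus-detector cycle described above can be sketched in miniature. The example below is a hypothetical toy, not a real deepfake system: instead of video frames, the “forger” is a two-parameter linear generator learning to imitate numbers drawn from a bell curve, and the “detector” is a simple logistic classifier, written in plain NumPy. Real deepfakes swap in deep neural networks, but the feedback loop is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "real footage" the forger must imitate: numbers drawn from N(4, 1).
REAL_MEAN, REAL_STD = 4.0, 1.0

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Forger (generator): g(z) = a*z + b, which starts out producing fakes
# centered far from the real data.
a, b = 1.0, 0.0
# Detector (discriminator): D(x) = sigmoid(w*x + c), the probability
# it assigns to x being real.
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Detector update: push D(real) toward 1 and D(fake) toward 0 ---
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    # Gradient ascent on log D(real) + log(1 - D(fake))
    w += lr * np.mean((1 - s_real) * real - s_fake * fake)
    c += lr * np.mean((1 - s_real) - s_fake)

    # --- Forger update: use the detector's feedback to make the next
    # batch of fakes harder to catch (push D(fake) toward 1) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    s_fake = sigmoid(w * fake + c)
    grad_x = (1 - s_fake) * w   # d log D(fake) / d fake
    a += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

# After training, the forger's output should cluster near the real data.
samples = a * rng.normal(0.0, 1.0, 10000) + b
print(f"mean of generated samples: {samples.mean():.2f}")
```

The forger here uses the standard “non-saturating” trick of maximizing log D(fake) rather than minimizing log(1 − D(fake)), which keeps its learning signal alive early on, when the detector easily rejects every fake.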
Producing a deepfake is remarkably easy, at least according to “u/deepfakes,” the Reddit account whose popularity gave the technology its name.
In an interview with Vice, the creator of the “u/deepfakes” account said they used TensorFlow, an open-source machine learning library that Google makes freely available. They explained that all you would need is a few hours and a commercially available graphics card.
Who uses deepfakes? Today, deepfakes are used for things ranging from museum visits to blockbuster movies. The Dali Museum in St. Petersburg, Florida, uses a deepfake of Salvador Dali as a tour guide for visitors to provide a surreal experience, according to The Verge.
Also, by using existing footage of Paul Walker, the producers of “Furious 7” created a deepfake-style recreation of the actor, since Walker had shot only half of the movie before dying tragically in a car crash, according to Business Insider.
Even though the producers of that movie had the consent of Walker’s family to use his likeness, in other instances deepfake creators use a celebrity’s likeness without permission. One notable incident came this April, when videos of Jay Z rapping Shakespeare’s “To be or not to be” soliloquy and Billy Joel’s “We Didn’t Start the Fire” were uploaded to YouTube, according to Forbes.
Not only did the videos mimic Jay Z’s voice, but they also copied his distinctive flow. Jay Z filed copyright strikes against the videos, but technically speaking, the works don’t belong to him: Shakespeare’s plays are in the public domain, and “We Didn’t Start the Fire” belongs to Billy Joel.
The only thing that “belongs” to Jay Z is his voice, which is not protected by copyright. Thus, YouTube has refused to take down the videos and is currently locked in a legal battle with Jay Z.
Beware, reader. The deepfake issue goes beyond the courts. Take deepfake pornography. With deepfake technology growing ever more accessible, anyone’s face could be grafted into pornography, posted online, and exposed to thousands of onlookers.
Indeed, this isn’t some underground trade that only insiders know about. Deepfake pornography has already victimized popular celebrities such as Gal Gadot, Scarlett Johansson, and Kristen Bell. But it has alarming implications for everyday people as well: it can be used as a tool for public harassment and bullying, in which targets’ faces are inserted into humiliating and dehumanizing pornographic videos. Some call this revenge porn. The victims, especially celebrities, have spoken out against deepfake pornography, helping bring the issue into the spotlight.
Take presidential campaigns. Given how fast rumors spread on social media, a deepfake video could be posted to exaggerate a candidate’s military record, to make accusations and disparaging comments about an opponent, or to show individuals falsely claiming they had an affair with the candidate.
For example, a deepfake video of Nancy Pelosi that gathered more than 2.5 million views on Facebook alone in a matter of weeks shows a version of her who slurs her words and appears confused, making it seem that she is drunk, according to CBS News.
The video evidently served as a dirty campaign tactic to undermine Pelosi. More importantly, it caught the attention of U.S. intelligence officials, who are weighing the ramifications that deepfakes could have on the 2020 presidential election. The potential of these AI-manipulated videos ultimately reveals the ways people can deceive, intimidate, slander, undermine trust, and misattribute words and actions to others.
So how have corporations and the government responded? When a deepfake video of Mark Zuckerberg was posted on Instagram in June 2019, many hoped that a mega-corporation such as Facebook would finally put an end to the spread of these deepfake videos. However, despite all the rumors and controversy, Facebook removed neither that video nor the doctored video of a seemingly drunk Nancy Pelosi.
Citing its fake video policy, Facebook insisted that it could only reduce the videos’ exposure by filtering them out of the “Explore” and trending pages, according to Vice. This policy, however, is one of the gaps that lets deepfakes slip through the cracks of our virtual world and blur the lines between truth and fiction.
In December 2018, a Senate bill proposed to make it illegal to “create, with the intent to distribute, a deep fake with the intent that the distribution of the deep fake would facilitate criminal or tortious conduct under Federal, State, local, or Tribal law.” Yet the bill would change little about current law, under which it has always been a crime to “facilitate” a crime.
At this point, it seems that deepfakes and the algorithms behind them should simply be wiped off the face of the earth, right? Not so fast. As AI technology becomes more precise and its output more seamless, the rate at which deepfakes are successfully detected shrinks each year.
In 2018, because deepfake models weren’t trained to mimic blinking, their videos showed unnatural blinking patterns, and researchers working on deepfake detection algorithms quickly caught onto that tell. Soon after, however, many deepfake producers began training their models on the blinking patterns of their subjects as well. Deepfake technologies will always find a way to counter deepfake-detecting algorithms by fixing whatever flaws the detectors manage to catch. People will continue to fall for the trap, and this game of cat-and-mouse will only persist.
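To see how fragile such a detector is, here is an illustrative sketch of a blink-rate check of the kind researchers used in 2018. The function names, thresholds, and the use of a per-frame eye-openness signal (such as the eye aspect ratio, which drops when the eyes close) are assumptions for the example, not a production detector: it simply counts blinks and flags clips whose blink rate is implausibly low for a real human.

```python
import numpy as np

def count_blinks(ear, threshold=0.2, min_frames=2):
    """Count blinks in a sequence of per-frame eye-openness values.

    A blink is a run of at least `min_frames` consecutive frames in which
    the eye-openness value drops below `threshold` (eyes closed).
    """
    closed = np.asarray(ear) < threshold
    blinks, run = 0, 0
    for frame_closed in closed:
        if frame_closed:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink still in progress at the end of the clip
        blinks += 1
    return blinks

def looks_synthetic(ear, fps=30, min_blinks_per_min=6):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(ear) / fps / 60.0
    return count_blinks(ear) / max(minutes, 1e-9) < min_blinks_per_min
```

A forger who learns about this check only has to make their model blink every few seconds, and the detector goes blind again, which is exactly the cat-and-mouse dynamic described above.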
So what can we do to stop this madness? We’ve seen how businesses and the federal government have tried to tackle this issue and how they have largely failed. Despite their valiant efforts, the fact of the matter is that the spread of deepfakes cannot be prevented. Instead of trying to fight deepfakes themselves, we must address the underlying ignorance that makes them so effective. How might we do this, you may ask? First, we shouldn’t trust everything we see on social media. Before blindly sharing a video that aligns with our beliefs, we should take a holistic look at both sides of the argument; that knowledge puts us in a position to judge whether the media is trustworthy. Second, if we notice something that seems incongruous, we should take the initiative to verify the information and quickly warn others of any misinformation.
Without a doubt, deepfakes pose an incredible threat to our society. Yet, we can’t fight fire with fire (algorithms with more algorithms). Instead, we need to address the root of the problem by promoting awareness because it’s only a matter of time before you’ll hear about another celebrity being the face of some unwanted scandal.