A few weeks ago, California Governor Gavin Newsom signed into law Assembly Bill No. 730 (AB 730), which prohibits, until 2023, the distribution of “materially deceptive audio or visual media” of a candidate with the intent to damage that candidate’s reputation.
Such media, also known as “deepfakes,” have become a significant point of interest ahead of the 2020 presidential election, primarily due to the widespread distribution of doctored videos such as one in which House Speaker Nancy Pelosi appeared to slur her speech during a press conference in May.
This bill was signed into law in conjunction with Assembly Bill No. 602 (AB 602), which bans deepfakes of a pornographic nature made without the express consent of the individual(s) depicted.
Though such laws provide some degree of control over deceptive videos, they are largely ineffective, as there is no reliable method by which the production or distribution of deepfakes can be screened for malicious intent.
These videos are often so realistic that even experts may have difficulty discerning whether a video has been modified for subversive purposes.
According to the Associated Press (AP), notable organizations such as the American Civil Liberties Union (ACLU) of California urged Newsom to veto the bill.
“Despite the author’s good intentions, this bill will not solve the problem of deceptive political videos; it will only result in voter confusion, malicious litigation, and repression of free speech,” legislative director of the ACLU Kevin Baker told the AP.
The creation of deepfakes is not a particularly new phenomenon. Hollywood has long used CGI to recreate the likenesses of deceased actors, but the technology involved was often so expensive and complex that it remained unavailable to the general population, according to the magazine The Week in its article, “Rise of the deepfakes.”
However, in December 2017, the public was alerted to how prevalent deepfakes had become when a Reddit user by the name of “Deepfakes” began to post alarmingly realistic videos in which celebrities’ faces were superimposed onto pornographic footage.
Less than a month later, FakeApp, a free and widely distributed program that allowed users to easily create their own deepfakes using artificial intelligence (AI), had reportedly been downloaded over 120,000 times, according to The Week.
Such heightened interest in this technology is extremely alarming.
Deepfakes can be extremely dangerous in that they misinform the public about political figures, thus undermining campaigns, and can even compromise the reliability of evidence submitted in a court of law.
Hence, criminalizing the circulation of manipulated media, as well as requiring disclaimers to accompany such media, would appear to be a sound solution.
However, legislation such as the recently introduced Congressional DEEPFAKES Accountability Act (formally, the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act), though a step in the right direction, is not a viable or enforceable solution to the prevalence of deepfake technology or the malicious production of misleading videos.
Moreover, such solutions are temporary; after the law sunsets in 2023, public figures and ordinary citizens alike would again be susceptible to being made the subjects of incriminating doctored videos, unless extreme measures, such as severely restricting the technology used to create deepfakes, were taken.
Even if the legislation set out in Assembly Bills 730 and 602 could be enforced, lawmakers must still consider the long-term consequences of deepfakes and ensure that any legislation addressing this issue maintains some degree of permanence beyond the context of the upcoming presidential election, in order to more thoroughly address the widespread impact of deepfakes.