Not everything is what it seems: Deepfakes, a legal perspective

Written By: Michael Walsh

Deepfakes are spoofed images or videos created using machine learning algorithms. Deepfake algorithms use tools such as TensorFlow, a free, open-source machine learning platform developed by Google, to produce digitally manipulated spoofs that are nearly indistinguishable from unmodified footage. These algorithms draw on existing photos and videos from the internet to build a simulated version of a person’s face and then superimpose that spoofed face onto someone else’s body.[1] If such an algorithm is trained with enough data, which should not be difficult to procure (considering that 200 million selfies were published to Google Photos in 2016), it can generate a dynamic version of a fake face (or that of a notable figure) and superimpose that digital mask onto the target body, producing imitations that are difficult to distinguish from the unmanipulated source videos.[2]
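For readers curious about the underlying mechanics, the sketch below illustrates the shared-encoder, dual-decoder autoencoder design commonly described for face swaps, written in TensorFlow’s Keras API since TensorFlow is the platform mentioned above. The image resolution, layer sizes, and training details are simplified assumptions for illustration, not the implementation of any particular tool.

```python
# A minimal sketch of the shared-encoder / dual-decoder autoencoder design
# commonly described for face-swap deepfakes. Shapes, layer sizes, and the
# training setup are illustrative assumptions, not a production pipeline.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_encoder():
    # Compresses a 64x64 face crop into a shared latent representation.
    inp = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(256, activation="relu")(x)
    return Model(inp, latent, name="shared_encoder")

def build_decoder(name):
    # Reconstructs a face from the latent code; one decoder per identity.
    inp = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(inp)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")  # trained on person A's face crops
decoder_b = build_decoder("decoder_person_b")  # trained on person B's face crops

# One autoencoder per identity: the same encoder feeds different decoders.
face_in = layers.Input(shape=(64, 64, 3))
model_a = Model(face_in, decoder_a(encoder(face_in)))
model_b = Model(face_in, decoder_b(encoder(face_in)))
model_a.compile(optimizer="adam", loss="mae")
model_b.compile(optimizer="adam", loss="mae")

# After each model is trained on its own identity's images (not shown), a
# "swap" is simply running person B's frame through person A's decoder:
# swapped_face = decoder_a(encoder(frame_of_person_b))
```

Because the encoder learns face structure common to both identities, feeding one person’s expression through the other person’s decoder is what produces the superimposed “mask” described above.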

More recently, researchers have demonstrated that the technology can track intricate facial expressions and superimpose them in real time.[3] The underlying technology can supplement our creative imaginations by streamlining computer-generated (CG) and “generative adversarial network” (GAN) effects, which are used to create much of the computer-generated imagery in modern movies,[4] but bad actors inevitably use the technology for nefarious purposes such as creating antagonistic political spoofs[5] or fake celebrity pornography.[6]
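The “generative adversarial network” label refers to a pair of models, a generator and a discriminator, trained against each other. The toy sketch below, again using TensorFlow’s Keras API with made-up layer sizes and a small grayscale image format, is included only to make that generator-versus-discriminator structure concrete; it is not a recipe for producing realistic imagery.

```python
# A toy generative adversarial network (GAN) sketch in TensorFlow/Keras.
# Architectures and sizes are illustrative assumptions only.
import tensorflow as tf
from tensorflow.keras import layers, Sequential

latent_dim = 100

# Generator: maps random noise to a 28x28 grayscale image.
generator = Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(7 * 7 * 64, activation="relu"),
    layers.Reshape((7, 7, 64)),
    layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="sigmoid"),
])

# Discriminator: predicts whether an image is real or generated.
discriminator = Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2D(64, 4, strides=2, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])

# During training (not shown), the discriminator learns to separate real
# images from generated ones while the generator learns to fool it; the
# adversarial back-and-forth is what drives increasingly convincing output.
noise = tf.random.normal([1, latent_dim])
fake_image = generator(noise)        # shape (1, 28, 28, 1)
score = discriminator(fake_image)    # estimated probability the image is "real"
```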

The potential for abuse is evident, and surveys and scholarship suggest that industry leaders, policymakers, and legal professionals are taking a particular interest in the trajectory of deepfakes.[7] Several bills have also been introduced in Congress that would require federal agencies to develop reports on the legal implications of deepfake and GAN technology.[8]

State Legislative Countermeasures

Several state bills have been introduced to quell the potential abuse of deepfake technology. Texas banned the distribution of deepfaked videos intended to sway elections.[9] California proposed a bill imposing criminal and civil consequences for the exchange of nonconsensual deepfake pornography; although that bill ultimately died under the time requirements of Art. IV, Sec. 10(c) of the California Constitution, the considerable civil penalties originally proposed in A.B. 1280[10] were effectively modified and consolidated into A.B. 602.[11] Virginia also recently passed legislation prohibiting the “malicious” dissemination of manipulated video “with the intent to coerce, harass, or intimidate.”[12] New York followed suit by passing a bill banning the use of a “digital replica” in pornographic works.[13] Most of these bills require a showing of some derivative of malice or an intent to harm in order to avoid running afoul of free speech protections.

Tort Theories

While state legislatures and Congress invest in developing statutory protections specific to deepfakes, the Electronic Frontier Foundation contends that several existing legal theories already protect against deepfake abuse, including extortion, harassment, false light, defamation, and intentional infliction of emotional distress.[14]

Regardless, deepfake and GAN technology has spurred considerable interest and concern in the public and legal communities alike. It is imperative to carefully consider ethical, technical, and legal solutions that preserve the benefits of the technology while mitigating its risks.


[1] David Güera, Deepfake Video Detection Using Recurrent Neural Networks, Video and Image Processing Laboratory (VIPER), Purdue University (2018), https://engineering.purdue.edu/~dgueraco/content/deepfake.pdf [https://perma.cc/KQZ7-9BVA].

[2] Kevin Roose, Here Come the Fake Videos, Too, New York Times (March 4, 2018), https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html [https://perma.cc/52NQ-QXUL].

[3] Matthias Nießner, Face2Face: Real-time Face Capture and Reenactment of RGB Videos, http://www.niessnerlab.org/papers/2019/8facetoface/thies2018face.pdf [https://perma.cc/7B4B-H6G5].

[4] Dave Itzkoff, How ‘Rogue One’ Brought Back Familiar Faces, New York Times (Dec. 27, 2016), https://www.nytimes.com/2016/12/27/movies/how-rogue-one-brought-back-grand-moff-tarkin.html [https://perma.cc/8JKZ-S646].

[5] Maheen Sadiq, Real v Fake: debunking the ‘drunk’ Nancy Pelosi footage – video, The Guardian (May 24, 2019), https://www.theguardian.com/us-news/video/2019/may/24/real-v-fake-debunking-the-drunk-nancy-pelosi-footage-video [https://perma.cc/FF33-F3ZW]; James Vincent, Watch Jordan Peele use AI to make Barack Obama deliver a PSA about fake news, The Verge (Apr. 17, 2018), https://www.theverge.com/tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed [https://perma.cc/3GPZ-TK66].

[6] Samantha Cole, AI-Assisted Fake Porn Is Here and We’re All Fucked, Vice (Dec. 11, 2017), https://www.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn [https://perma.cc/7GM8-Z47R].

[7] Matthew Ferraro, Deepfake Legislation: A Nationwide Survey—State and Federal Lawmakers Consider Legislation to Regulate Manipulated Media, Wilmer Hale (Sept. 25, 2019), https://www.wilmerhale.com/en/insights/client-alerts/20190925-deepfake-legislation-a-nationwide-survey [https://perma.cc/EJ2Q-AESV]; Miles Brundage et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (Feb. 2018), https://arxiv.org/pdf/1802.07228.pdf [https://perma.cc/8B2E-RPHH]; Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Cal. L. Rev. 1753 (2019).

[8] H.R. 3600, 116th Cong. (2019), https://www.congress.gov/bill/116th-congress/house-bill/3600 [https://perma.cc/F2B7-5RRD]; H.R. 3494, 116th Cong. §§ 707, 715 (2019), https://www.congress.gov/bill/116th-congress/house-bill/3494/text [https://perma.cc/P684-WKRT]; S. 1348 116th Cong. (2019), https://www.congress.gov/bill/116th-congress/senate-bill/1348 [https://perma.cc/7C4Q-U8WF]; H.R. 4355, 116th Cong. (2019), https://www.congress.gov/bill/116th-congress/house-bill/4355 [https://perma.cc/9MK9-9ZYH]; H.R. 3230, 116th Cong. (2019), https://www.congress.gov/bill/116th-congress/house-bill/3230 [https://perma.cc/W27A-EDKC].

[9] Tex. S.B. 751 (2019), https://www.capitol.state.tx.us/BillLookup/History.aspx?LegSess=86R&Bill=SB751 [https://perma.cc/4ZFX-QM3S].

[10] A.B. 1280 (Cal. 2019), https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1280 [https://perma.cc/3VWL-NNUJ].

[11] A.B. 602 (Cal. 2019), https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602 [https://perma.cc/YC88-72SE].

[12] Va. Code Ann. § 18.2-386.2 (2019), https://law.lis.virginia.gov/vacode/title18.2/chapter8/section18.2-386.2/ [https://perma.cc/D9GL-S646].

[13] S. 5959C (N.Y. 2019), https://www.nysenate.gov/legislation/bills/2019/s5959/amendment/c [https://perma.cc/C7VJ-EJAX].

[14] David Greene, We Don’t Need New Laws for Faked Videos, We Already Have Them, Electronic Frontier Foundation (Feb. 13, 2018), https://www.eff.org/deeplinks/2018/02/we-dont-need-new-laws-faked-videos-we-already-have-them [https://perma.cc/3MFV-EHUN].
