The Use of AI Deepfakes: From Pornography to Political Campaigns

From the classic novel 1984 to the start of Donald Trump’s political campaign in 2016, the term ‘fake news’ has circulated as a buzzword and likely lingers in every person’s mind as a concern about the media they consume.[1] As artificial intelligence becomes more accessible, this reality is becoming more palpable and widespread. The victims of this fake news, however, are not just the consumers of this media but also those depicted in it. Through the use of ‘deepfakes,’ the lines between reality and fiction are blurred more than ever.

What is a Deepfake?

Deepfakes are highly realistic, yet altered, media that make it appear as if a person is doing something they are not via a ‘face-swap,’ or placing one person’s face onto another’s body.[2] AI and machine learning create these deepfakes.[3] The technology was once incredibly expensive and difficult to use, but it has become quite accessible over time.[4] Harmless, humorous videos that use this technology exist on the internet, depicting things like Nicolas Cage acting in movies he has never been in, such as Indiana Jones or Terminator 2. Creative minds have also used the technology to develop comical images, like Pope Francis wearing a large, extravagant puffer jacket.[5] However, because the technology is so realistic, it creates serious concerns for those depicted in deepfake media doing unfavorable activities.[6]

Figure 1: Pope Francis in Puffy Winter Jacket[7]

Deepfake Pornography 

One of the most common uses of deepfakes is to create pornographic content.[8] A study indicated that 96% of all deepfake videos online are pornographic.[9] Not only is the majority of deepfake content pornographic, but this kind of pornography is also heavily sought after.[10] As of 2023, nearly 150,000 explicit deepfake videos with over 3.8 billion views appeared across 30 sites.[11] Those videos depict everyone from the most famous, well-known stars to innocent individuals, their unsuspecting faces spliced onto pornographic content.[12] Creators of this content graft the faces of female celebrities like Emma Watson and Taylor Swift onto the bodies of actual porn stars.[13] Other sites even offer a service through which consumers can place the faces of real-life colleagues or friends onto other nude bodies.[14]

Sites that exclusively show deepfake pornographic material host the majority of deepfake videos on the internet.[15] These sites generally carry their own advertising, showing a growing market in this area.[16] However, the individuals depicted in these videos are often victims, as the videos use their faces in these acts without their consent.[17]

Unfortunately, it is not only mega superstars like Taylor Swift who are the victims of this content but also minor children.[18] In New Jersey, a group of over 30 high school girls were victims of deepfake pornography.[19] As this phenomenon grows and victims accumulate, the question arises: what will our legislatures do to protect victims from being depicted this way?

The Legislature’s Response 

Currently, no federal legislation regulating the use of deepfake technology exists.[20] Although no federal laws currently address this issue, growing movements urge lawmakers to act against this escalating phenomenon.[21] For example, on February 24, 2024, hundreds of individuals signed a letter asking lawmakers to (1) fully criminalize deepfake child pornography, (2) create criminal repercussions for creators and distributors of deepfake porn, and (3) require software developers to prevent their products from being used to create harmful deepfakes, with repercussions for those who fail to do so.[22] Members of Congress have introduced several bills with many of these goals in mind, none of which have succeeded.[23]

Technology, particularly AI, develops at a rapid rate. Naturally, our lawmakers feel compelled to respond swiftly, too.[24] With no federal laws enacted, many states have taken matters into their own hands.[25] Indiana, Texas, Virginia, and Hawaii are just some of the states that have enacted broad laws creating criminal penalties, ranging from one to five years of jail time, for those who share non-consensual deepfake pornography.[26] New York has expanded existing laws that criminalize revenge porn, the non-consensual posting of an ex-partner’s nude photos or videos after a breakup, to include deepfakes as well.[27] Still, even with these laws, no known records exist to date of a state prosecuting someone for sharing non-consensual deepfake pornography.[28] In addition, the fast-paced innovation of deepfake technology and artificial intelligence makes it increasingly difficult for legislation to keep up with the new and unique challenges it poses for society.[29]

The Real Perp: Individuals or Corporations?

While current state legislation mainly targets the individuals creating and distributing this content, some of the largest and most influential technology companies play a major role in perpetuating the deepfake pornography circulating online.[30] Google, Amazon, X (formerly Twitter), and Microsoft all own companies that drive viewers to sites hosting AI deepfake porn.[31] Individuals use Google to search for deepfake porn, people circulate posts showing the content across X, and Amazon’s web servers host many deepfake porn websites.[32]

Brandie Nonnecke, a specialist in tech policy, suggests that self-governance, not legislation, is the answer to the growing issue.[33] She argues these companies should implement checks ensuring that when content uses someone’s likeness, the person has agreed first.[34] In her words, the solution is for these companies to “grow a conscience.”[35] Other activists and company shareholders ask that AI deepfake websites be delisted entirely from search engines and web servers.[36]

Placing the fate of these victims in the hands of private companies may seem problematic, but Google claims to have taken some action to protect them.[37] A Google spokesperson said that the company’s ranking system is designed to keep people from being shocked by these deepfake sites when they search.[38] Google has also implemented a way for victims of involuntary deepfake pornography to report and request the removal of pages that include this content.[39]

Deepfakes and Politics 

The issue of non-consensual deepfake pornography is growing, and its victims come from all walks of life. However, deepfake technology also presents a growing threat to society’s trust in journalism and political campaigning.[40] Fake news is already a concern for Americans, and the realistic nature of deepfakes makes it even more likely that consumers will believe the fake is real.[41] This concern has even attracted the attention of U.S. intelligence officials, who warn of the dangers this technology poses to U.S. elections.[42]

Regulating this use of deepfake technology poses unique challenges for legislatures because of the nature of copyright law and the technology itself.[43] Because deepfakes are not exact copies of material but transformations of content, copyright laws work against legislatures that may want to regulate them.[44] The legal landscape is also challenging to maneuver, since a complete ban on the technology would likely raise serious First Amendment issues.[45]

The most glaring danger this technology poses in American media may not be the spread of misinformation but the creation of distrust in every media source.[46] This phenomenon has been called the ‘information apocalypse’ or ‘reality apathy.’[47] It presented itself in the most recent presidential election between Kamala Harris and Donald Trump.[48] On Truth Social, Donald Trump posted AI-generated images of Taylor Swift and her fans supporting his candidacy.[49] In response to the fake images, Taylor Swift publicly endorsed Kamala Harris on Instagram.[50] But for celebrities who are neither ready nor willing to endorse a presidential candidate, AI-generated images like these pose challenges both for them and for the fans who encounter the images.

Politicians themselves are also victims of deepfake technology.[51] In 2019, an altered video of Nancy Pelosi speaking slowly, as if intoxicated, circulated across social media.[52] Videos like this are more challenging for the person depicted to discredit, especially once the video has already damaged their credibility in the eyes of some viewers. Additionally, in 2018, a political party in Belgium created a deepfake of Donald Trump to encourage its citizens to sign a climate petition.[53]

California is just one state attempting to stop this misinformation in media and political campaigns.[54] In September 2024, California Governor Gavin Newsom signed the California AI Transparency Act.[55] The legislation will require providers of generative AI systems to (a) make available an AI detection tool, (b) offer an option to disclose when an image is AI-generated, (c) include a latent disclosure in AI-generated content, and (d) contractually require licensees to maintain the system’s capability to include such latent disclosures in content it creates or alters.[56] The law will not go into effect until January 2026, but these disclosures may help prevent the ‘reality apathy’ that media consumers face.[57] However, it would do nothing about the use of someone’s likeness to endorse something or someone they do not support or even find reprehensible.

Recourse for those Affected

Using celebrities’ likenesses through AI-generated images and videos is a growing phenomenon with widespread repercussions.[58] Even those who choose not to be in the spotlight may become victims of such content.[59] Because of the protections of copyright law and the First Amendment, those affected may have little recourse.[60] And despite the few state criminal statutes outlawing non-consensual AI-generated deepfake porn, no one has been charged with such a crime to date.[61] Even California’s law targeting misinformation in AI images does nothing to prevent someone’s likeness from being used in those images without their consent in the first place.[62] Is the only hope for these victims to trust that companies like X and Google will act on their own to stop this? For now, that appears to be their best option.

The blurring of reality and fiction has been a widespread fear for many years. While some lawmakers attempt to thwart this reality, technology advances much faster than legislatures can keep up. In addition, the market for this content is growing and incredibly lucrative. Lawmakers must acknowledge that deepfake technology threatens not only those in the public eye but every politician, person, and child. The solution is not clear, but the problem is real and growing fast.

[1] See George Orwell, 1984 (1949) (1984 is a novel that explores an imaginary future where the government controls every aspect of people’s lives and discusses the dangers of altering media to encourage a political agenda).

[2] Alex Alexandrou & Marie-Helen Maras, Determining Authenticity of Video Evidence in the Age of Artificial Intelligence and in the Wake of Deepfake Videos, 23 Int’l J. Evid. & Proof 255 (2018).

[3] Id.

[4] Id. at 256.

[5] AI Generated Image of Pope Francis in Puffy Winter Jacket, Wikimedia Commons (2023), https://commons.wikimedia.org/wiki/File:Pope_Francis_in_puffy_winter_jacket.jpg.

[6] Alexandrou, supra note 2, at 256.

[7] AI Generated Image of Pope Francis in Puffy Winter Jacket, supra note 5.

[8] Henry Ajder et al., The State of Deepfakes: Landscape, Threats, and Impact, 1 (2019).  

[9] Id.

[10] Davey Alba and Cecelia D’Anastasio, Google and Microsoft are Supercharging AI Deepfake Porn, Bloomberg Law (Aug. 24, 2023), https://news.bloomberglaw.com/artificial-intelligence/google-and-microsoft-are-supercharging-ai-deepfake-porn.

[11] Id.

[12] Id.

[13] Id.

[14] Id.

[15] Ajder, supra note 8, at 6.

[16] Id.

[17] Susie Ruiz-Lichter, Why the Taylor Swift AI Scandal is Pushing Lawmakers to Address Pornographic Deepfakes, 14 Nat’l L. Rev 354 (2024).  

[18] Id.

[19] Id.

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Madyson Fitzgerald, States Race to Restrict Deepfake Porn as it Becomes Easier to Create, Stateline (Apr. 10, 2024, 5:00 AM), https://stateline.org/2024/04/10/states-race-to-restrict-deepfake-porn-as-it-becomes-easier-to-create/.

[25] Id.

[26] Id.

[27] Id.

[28] Alba, supra note 10.

[29] Mika Westerlund, The Emergence of Deepfake Technology: A Review, 9 Technol. Innov. Manag. Rev. 39, 44 (2019).

[30] Alba, supra note 10.

[31] Id.

[32] Id.

[33] Id.

[34] Id.

[35] Id.

[36] Id.

[37] Id.

[38] Id.

[39] Google Search Help, Remove Explicit Non-consensual Fake Imagery from Google, https://support.google.com/websearch/answer/9116649?sjid=16280261518274850936-NA (last visited Feb. 21, 2025).

[40] Westerlund, supra note 29, at 40.

[41] Id.

[42] Id. at 42.

[43] Id. at 44.

[44] Id.

[45] Scott Nover, South Korea Banned Deepfakes. Is that a Realistic Solution for the US?, GZERO AI (Oct. 8, 2024), https://www.gzeromedia.com/gzero-ai/south-korea-banned-deepfakes-is-that-a-realistic-solution-for-the-us#toggle-gdpr.

[46] Westerlund, supra note 29, at 43.

[47] Id.

[48] Amanda Silberling, Could Trump’s AI-generated Taylor Swift Endorsement be Illegal?, TechCrunch (Aug. 19, 2024), https://techcrunch.com/2024/08/19/could-trumps-ai-generated-taylor-swift-endorsement-be-illegal/?guccounter=1.

[49] Id.

[50] Ana Faguy & Madeline Halpert, Taylor Swift Endorses Harris in Post Signed ‘Childless Cat Lady’, BBC News (Sept. 11, 2024), https://www.bbc.com/news/articles/c89w4110n89o.

[51] Westerlund, supra note 29, at 43.

[52] Id.

[53] Id.

[54] Arsen Kourinian, Howard W. Waltzman, Mickey Leibner, New California Law will Require AI Transparency and Disclosure Measures, Mayer Brown (Sept. 23, 2024), https://www.mayerbrown.com/en/insights/publications/2024/09/new-california-law-will-require-ai-transparency-and-disclosure-measures.

[55] Id.

[56] Id.

[57] Id.

[58] Ajder, supra note 8, at 6.

[59] Alba, supra note 10.

[60] Id.

[61] Id.

[62] Kourinian et al., supra note 54.
