
Un-Nudify Me: Removal Options for Deepfake Pornography Victims

December 13, 2023

The AI Deepfake Pornography Pandemic

The rapid advancement of artificial intelligence (AI) technology has created new challenges that significantly disrupt established legal principles. One facet of AI with large potential for abuse is its ability to seamlessly replicate a person’s likeness using AI generators, leading to the proliferation of deepfakes.

Deepfakes, or AI-generated images and videos of people engaging in actions that never occurred, have caused concern across multiple industries, posing threats to the intellectual property, personality rights, and online reputation of every person. The disturbing trend of deepfake pornography adds a further layer of complexity, leading victims to question their legal options for removing the damaging content.

In February, we explored the legal implications of AI-generated nude images in our article Nudify Me. At the time, the targets of deepfake pornography were primarily celebrities and other famous people. As predicted, the issue has rapidly expanded to target a wide variety of individuals, including children. Recent articles in the Washington Post and the Wall Street Journal provide more detailed accounts of the disturbing issues faced by child victims of AI-generated nude images. Make no mistake: AI-generated revenge porn of a minor is child pornography, or child sexual abuse material (CSAM). This means that children likely can be, and will be, federally prosecuted for generating nudes of their classmates.

How AI Works

Just a quick primer on AI as it relates to generative images. AI, or more precisely ML (machine learning), is a type of computer programming that trains an algorithm to recognize patterns. The program then generates similar patterns based on a command. “Draw me a picture of a dog playing poker” would require the program to know what a “dog” is and what a poker game looks like. Indeed, it would have to know what an image is, how to generate an image, what style and context to use, and a host of other things. To generate an image that accurately meets your prompt, the computer must learn from a “training set” of images, which likely consists of billions of images. Based on the content of those images (some have dogs in them) and their descriptions (“my dog Fido”), the AI algorithm “learns” not only what a “dog” looks like, but what different breeds of dogs look like. That training set may be (and often is) any available image on the public Internet, including images from social media platforms like Facebook, LinkedIn, Instagram, or X.
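To make this concrete, here is a minimal sketch of the kind of text-to-image call described above, using the publicly available Hugging Face diffusers library. The library, the model name, and the prompt are illustrative assumptions for this primer, not tools discussed elsewhere in this article.

    # Minimal sketch: text-to-image generation with a pretrained diffusion model.
    # Assumes the third-party "diffusers" and "torch" packages are installed;
    # the model name below is one publicly available example, not an endorsement.
    import torch
    from diffusers import StableDiffusionPipeline

    # Load weights that were "trained" on a large set of captioned images.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")  # requires a GPU; omit .to("cuda") to run (slowly) on CPU

    # The article's example prompt: the model maps each term to learned patterns.
    image = pipe("a dog playing poker").images[0]
    image.save("dog_poker.png")

The point of the sketch is that the program produces a brand-new image from learned patterns; its output did not exist anywhere before the prompt was run.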

For AI-generated pornography, the images can be derivative or synthetic. For a derivative image, the actor could take a picture of, say, Jennifer Lawrence in a bikini downloaded from the Internet, run it through an AI website tool like “nudify me” (or the dozens of others we won’t promote here), and generate that same picture, but without the bikini. In fully synthetic mode, a program like Midjourney or DALL-E can be prompted with “photorealistic image of Jennifer Lawrence naked on a beach at sunset in 4k…” and the AI program will generate the image based on its knowledge of the terms “beach,” “sunset,” “naked,” and, of course, Jennifer Lawrence. In both cases, however, the AI program “knows” what Jennifer Lawrence looks like the same way you or I do: based on pictures of her from movies, TV, magazines, and the World Wide Web. In each case, the resulting nude image did not exist until the AI program created it.

For the purposes of this article, we will discuss only the scenario in which AI-generated pornography depicts an identifiable person in a sexually explicit situation that never occurred. This is different from the generic “revenge porn” image, where the person depicted may have created or participated in the creation of the image but objects to its spread.

You Can’t Sue What You Can’t See

The first problem with AI-generated porn, and indeed with any harmful internet content, is that the Internet is a vast, and often gated, place. Before a victim can take action, he or she must know the existence and location of the offending images, as well as who is responsible for their creation, posting, and dissemination. This is no small task. While commercial image-scanning services can be effective at scanning the internet for content that infringes intellectual property rights, these services are not yet at a point where the software can reliably find AI images that merely resemble an identifiable person. Thus, if you want to find AI-generated deepfakes of yourself, you may be limited to conducting simple Google searches.

But this method will not find content behind a gated site, or in a person’s direct messages, texts, or emails. Unfortunately, this means that most victims learn of the existence of AI-generated deepfakes from another person.
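As a practical illustration, the kind of search a victim or an attorney might run can be automated against a search index. Below is a minimal sketch using Google’s Custom Search JSON API; the API key, search engine ID, and name are placeholders, and, as noted above, this surfaces only publicly indexed pages.

    # Minimal sketch: checking public search results for a person's name.
    # Assumes the third-party "requests" package and Google Custom Search
    # JSON API credentials; API_KEY and SEARCH_ENGINE_ID are placeholders.
    import requests

    API_KEY = "YOUR_API_KEY"
    SEARCH_ENGINE_ID = "YOUR_CSE_ID"

    def search_name(name: str, extra_terms: str = "") -> list[str]:
        """Return result URLs from the first page of a name search."""
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={
                "key": API_KEY,
                "cx": SEARCH_ENGINE_ID,
                "q": f'"{name}" {extra_terms}'.strip(),
            },
            timeout=30,
        )
        resp.raise_for_status()
        return [item["link"] for item in resp.json().get("items", [])]

    # Gated sites, direct messages, texts, and emails will never appear here.
    for url in search_name("Jane Doe"):
        print(url)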

The goals of the victims of revenge porn are (1) removal; (2) prevention of continued dissemination; (3) investigation into the source of the image; and (4) compensation and/or punishment. These goals often conflict, however, as litigating over deepfake images may raise the profile of the images and exacerbate the harm to the victim. For the purposes of this article, we will focus on the number-one goal of most deepfake pornography victims: content removal.

How to Remove AI Generated Nude Images

The DMCA Takedown

The victim of AI-generated nude images posted on a standalone website, like PornHub, may be able to use the Digital Millennium Copyright Act (DMCA) to remove the content. The DMCA generally requires the removal of materials posted in violation of copyright law. However, it is not clear whether a fully deepfake image created by a training model that includes a copyrighted image (particularly a non-registered copyrighted image) falls under the DMCA. The language of the statute (and a recent California federal AI case) suggests that it does not.

The DMCA requires the person requesting the takedown to certify, under penalty of perjury, that they are the owner (or the agent of the owner) of a copyrighted work that is being infringed, and to provide a copy of, or link to, the allegedly infringed work. A digital doppelganger may or may not constitute an infringing derivative work, and therefore the DMCA may or may not apply. Nevertheless, a takedown request may be directed to the site’s DMCA agent even if it is not based on the DMCA. Other ways to contact the hosting site include emails to abuse@offendingsite or similar addresses; there is a lot of legwork to be done here. Protocols like “whois” may give you more information about the “owner” or registrant of the site.
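The “whois” lookup mentioned above can also be scripted. Here is a minimal sketch assuming the third-party python-whois package; registrant details are frequently redacted by privacy services, so treat the output as a starting point rather than an answer.

    # Minimal sketch: looking up a site's registrant with WHOIS.
    # Assumes the third-party "python-whois" package (pip install python-whois).
    import whois

    record = whois.whois("example.com")  # placeholder domain

    # Registrant fields are often redacted, but the registrar and any
    # listed contact emails can still point toward who hosts the site.
    print(record.registrar)
    print(record.emails)         # may include an abuse@ contact address
    print(record.creation_date)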

Terms and Conditions Violations

Every social media platform establishes its own terms of use, which outline acceptable and unacceptable use of these platforms.  While these terms may differ slightly across social media sites, they uniformly prohibit certain types of content. Examples include spam, nudity, hate speech, violent material, and impersonation. If the content in question violates these guidelines, users may have the option to report and remove it.

Reporting procedures vary among platforms, ranging from content flagging to detailed web portal submissions. While this can be a straightforward means to address content issues, it’s important to note that platforms retain full discretion in determining whether a violation has occurred.

The Court Order

Obtaining a court order is another way that content, like deepfakes, can be removed. A court order is an instruction issued by a judge that compels individuals and websites to comply with specified terms, which can include removal. To secure a court order for content removal, a person must file a lawsuit against the person responsible for publishing the content.

In the case of deepfake pornography, a person may have state civil claims for violations of revenge porn statutes, deepfake statutes, harassment or threat statutes, or privacy-related tort law.

Google and Other Search Engine Removals

In addition to removal at the source, the victim of AI deepfakes may be able to have search engines like Google “delist” or “delink” the offending imagery, so that a general search for the content will not yield the image. Google and other search engines have specific portals that allow an individual to request removal of content from search engine indexing.

Google’s portal specifically states that it will remove the following content from search results:

  • Content shows (a person) nude, in a sexual act, or in an intimate state. (This may include, but is not limited to “revenge porn.”)
  • Content falsely portrays (a person) in a sexual act, or in an intimate state. (This is sometimes known as a “deep fake” or “fake pornography.”)
  • Content incorrectly associates (a person) with pornography.

Google’s recent incorporation of deepfake pornography into its removal portal marks a positive trend, highlighting how various websites are actively working to provide solutions for victims of AI-generated revenge porn.

Other Resources

There are numerous organizations that provide support and resources for the victims of AI-generated revenge porn. For example, organizations like the Cyber Civil Rights Initiative provide options for requesting the delisting of revenge porn, which may also work for AI-generated pornography. The Federal Trade Commission also provides guidance on steps a person can take to mitigate the risks of revenge porn. Additionally, child victims of deepfake pornography can contact the National Center for Missing and Exploited Children (NCMEC) and review available resources for CSAM victims.

Conclusion

AI-generated images are a huge issue, further complicated by an underdeveloped legal landscape. Thankfully, there are removal options for victims. If you are dealing with AI-generated nude images, a content removal attorney can help you weigh your options for removal. Please contact Internet & Defamation Attorney Alexandra Arko (ALA@kjk.com; 216.716.5642) or Cyber Security attorney Mark Rasch (MDR@kjk.com; 301.547.6925) for more information.