Kim Kardashian vs. Deepfakes

Can the Kardashian approach to taking down ultra-realistic fake videos work for others?

Kim Kardashian West leaves the White House on Thursday.
Alex Wong/Getty Images

Just a few weeks ago, a doctored video of House Speaker Nancy Pelosi speaking with falsely slurred speech made waves in media and led to a congressional investigation. It was a high-profile example of a “deepfake,” which scholars Danielle Citron and Robert Chesney have defined as “hyper-realistic digital falsification of images, video, and audio.” Deepfakes have also come for Mark Zuckerberg, with a widely shared video in which he ironically appears to comment on the dangers of deepfakes, and Kim Kardashian West, in a video that similarly portrays her speaking about digital manipulation.

Falsified photos, audio, and video aren’t new. What’s different and frightening about today’s deepfakes is how sophisticated the digital falsification technologies have become. We risk a future where no one can really know what is real—a threat to the foundation of global democracy. However, the targets of deepfake attacks are likely concerned for more immediate reasons, including the risks of a false video depicting them doing or saying something that harms their reputation.

Policymakers have suggested various solutions, including amending Section 230 of the Communications Decency Act (which basically says that platforms aren’t responsible for content uploaded by their users) and crafting laws that would create new liability for creating or hosting deepfakes. But there is currently no definitive legal answer on how to stop this problem. In the meantime, some targets of deepfakes have used a creative but flawed method to fight these attacks: copyright law.

Recently, there have been reports that YouTube took down that deepfake depicting Kardashian based on copyright grounds. The falsified video used a substantial amount of footage from a Vogue interview. What likely happened was that Condé Nast, the media conglomerate that owns Vogue, filed a copyright claim with YouTube. It may have used the basic YouTube copyright takedown request process, a process based on the legal requirements of the Digital Millennium Copyright Act.

It’s easy to understand why some may turn to an already-established legal framework (like the DMCA) to get deepfakes taken down. There are no laws specifically addressing deepfakes, and social media platforms are inconsistent in their approaches. After the false Pelosi video went viral, tech platforms reacted in different ways. YouTube took down the video. Facebook left it up, but added flags and pop-up notifications to inform users that the video was likely a fake.

However, copyright law isn’t the solution to the spread of deepfakes. The high-profile deepfake examples we’ve seen so far mostly appear to fall under the “fair use” exception to copyright infringement.

Fair use is a doctrine in U.S. law that allows for some unlicensed use of material that would otherwise be copyright-protected. To determine whether a specific case qualifies as fair use, we look to four factors: (1) purpose and character of the use, (2) nature of the copyrighted work, (3) amount and substantiality of the portion taken, and (4) effect of the use upon the potential market.

This is a very broad overview of an area of law with thousands of cases and possibly an equally high number of legal commentaries on the subject. However, generally speaking, there’s a strong case to be made that most of the deepfakes we’ve seen so far would qualify as fair use.

Let’s use the Kardashian deepfake as an example. The doctored video used Vogue interview video and audio to make it seem like Kardashian was saying something she did not actually say—a confusing message about the truth behind being a social media influencer and manipulating an audience.

The “purpose and character” factor seems to weigh in favor of the video being fair use. It does not appear that this video was made for a commercial purpose. It’s arguable that the video was a parody, a form of content often deemed to be “transformative use” for fair use analysis. Basically, this means that the new content added or changed the original content so much that the new content has a new purpose or character.

As for the nature of the copyrighted work, the video interview likely lies somewhere between a news item (more likely to qualify as fair use) and a creative film (less likely to qualify as fair use). One factor that could weigh against this video being fair use is simply the amount of the copyrighted work that was used. This deepfake may have used a substantial amount of the original interview’s video and audio content. However, depending on how long and involved the original interview was, it’s possible this snippet used only a small portion of the original.

One key factor in fair use analysis is whether the new use (the deepfake in this example) would have a negative effect upon the market value of the original (the interview). Here, it is unlikely that watching this deepfake would make people less likely to watch or purchase access to the original interview. It’s also unlikely (though this is an arguable point) that the deepfake would somehow cause harm to the market value of the original. The two videos are too different in scope and character, and people would likely know that the two are different and do not come from the same producer.

It is understandable that some targets of deepfakes may use the copyright takedown process as an easy way to remove deepfakes. But the issue here isn’t copyright infringement. Other legal avenues already exist: In the United States, individuals may be able to sue over deepfakes on legal doctrines including privacy torts (particularly “false light”), right of publicity, harassment, defamation, and more. It may still make sense for legislators to create specific laws targeted at deepfakes, too.

The existing legal avenues aren’t without potholes. As with any problematic content online, it can be difficult to find the people responsible. Posters are sometimes anonymous or hard to reach legally for various reasons. It is much easier and more direct to simply sue the platform.

The deepfake problem might not be a copyright problem, then, but it is a problem that underscores the greater issue: the power of tech platforms and how to regulate social media in a way that protects user rights.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.