Band-Aids Don’t Fix Bullet Holes: The Bad Blood Between Deepfakes and Adequate Solutions

Ainsley Marlette, Contributing Member 2023-2024

Intellectual Property and Computer Law Journal

I. Introduction

The rapid advance of Artificial Intelligence (“AI”) has created growing interest in the consequences such technology brings to society.[1] AI has several subfields, one of which is deepfakes. Deepfakes use AI to manipulate existing content, swapping one person’s likeness for another’s to create image, audio, and video hoaxes.[2] Questions surrounding AI’s legal and regulatory governance have raised concerns about personhood, as it becomes more challenging to differentiate between what is real and what is fake.

Deepfake AI images and videos of celebrities and ordinary people alike have become increasingly prominent in the digital world.[3] Celebrity Jihad is an entire pornographic website featuring AI-generated pornography.[4] On January 25, 2024, deepfake images of Taylor Swift began circulating on the internet at a frightening speed, particularly on the social media platform X. The trending search “Taylor Swift Artificial Intelligence” quickly amassed 27 million views and roughly 260,000 likes within just 19 hours.[5] In response, X blocked all searches for “Taylor Swift,” pledged to remove the deepfake images from the platform, and promised to take “appropriate actions” against the accounts that shared them.[6]

The incident on X raises significant concerns for celebrities and ordinary individuals alike. Can a person sue for damages caused by deepfakes on social media? If so, who is liable for the damage? This article argues that deepfakes are not protected speech under the First Amendment, and that social media companies should be held responsible for deepfakes that are not removed quickly.

II. Background

What Are Deepfakes?

The term “deepfake” is a portmanteau of “deep learning” and “fake,” referring to hyper-realistic, digitally manipulated works.[7] Deepfakes include face swaps, audio clips, facial reenactments, and lip-synching.[8] Deepfakes debuted in 2017, when an anonymous Reddit user posted existing pornographic videos with the performers’ faces swapped for those of celebrities.[9] Deepfakes are the product of AI applications that merge, combine, replace, and superimpose images and video clips to create fake products that appear authentic.[10] Historically, the term carries a negative connotation due to the dubious nature of the concept. Most commonly, deepfakes appear in illicit contexts, such as non-consensual pornography (particularly revenge porn), bullying, and fake news.[11]

While deepfakes often carry a negative connotation, they have also been put to positive uses. For example, the 2020 HBO documentary “Welcome to Chechnya” used deepfake technology to hide the identities of Russian LGBTQ refugees.[12] Another example is David Beckham, the English football icon, who used a variation of deepfake technology (visual synthesis) to translate a video promoting his Malaria No More campaign into nine different languages.[13] On a sillier note, deepfake technology has been used to resurrect Tupac Shakur and insert him into a Snoop Dogg music video.[14] Still, malicious uses loom over the term. A 2021 study found that 90-95% of deepfake videos are nonconsensual pornography, with most of the victims being underage women.[15]

The growing sophistication of the technology only increases the damage that deepfakes can cause, as its output becomes more difficult to detect.[16] While data on this topic is sparse, one study suggests that people cannot reliably detect deepfakes and that raising awareness of the issue does not improve their detection accuracy.[17] Thus, many viewers assume that the content in front of them on social media is genuine.

Deepfake Regulation

State and Federal Legislation

Given the incipient state of deepfake technology, the legislative and legal regime governing such systems has yet to fully develop.[18] As of this writing, no federal legislation addresses the potential threats of deepfake technology within the United States.[19] However, Congress passed the National Defense Authorization Act, which, in Section 5709, requires the Director of National Intelligence to report on the foreign weaponization of deepfakes and deepfake technology.[20] While this Act protects national security at the international level, it does not address the threat deepfakes pose within the United States.

The lack of federal legislation has led some states to enact their own deepfake laws. To date, however, only ten states have legislation that specifically targets those who create and share explicit deepfake content: California, Florida, Georgia, Hawaii, Illinois, Minnesota, New York, South Dakota, Texas, and Virginia.[21] California and New York have laws that specifically allow people to sue deepfake creators in civil court.[22] As of September 2023, Louisiana, Massachusetts, and New Jersey were the only states with proposed laws regulating deepfakes.[23] While existing laws may provide some recourse against harmful uses, this patchwork of protection leaves residents of most of the United States vulnerable to creators who use their likenesses in deepfakes. If the content originates outside the relevant state’s jurisdiction, the legislation is inapplicable, leaving victims of deepfakes with no opportunity for recourse.[24]

Constitutional Amendment

The First Amendment states, “Congress shall make no law … abridging the freedom of speech.”[25] Importantly, the Supreme Court has interpreted the First Amendment to broadly protect artistic expression. Protection extends to all mediums: “pictures, films, paintings, drawings, and engravings, both oral utterance and the printed word.”[26] Considering this wide range of protected expression and the Framers’ intent, deepfakes would likely receive protection under the First Amendment. Does this mean, however, that nonconsensual pornographic images created by deepfakes are protected as free speech? Likely not, as First Amendment protection carves out exceptions for categories such as obscenity and child pornography.[27]

Communications Decency Act

The Communications Decency Act of 1996 (CDA) added Section 230 to the Communications Act of 1934, protecting online service providers from legal liability for content created by users of their services.[28] Section 230 was drafted and enacted to respond to the liability concerns of online providers and ensure that individuals would be able to freely participate online.[29]

Section 230(c)(1) provides that websites and the people who run them cannot be held responsible for content that other people post. Courts have developed a three-part test to determine whether Section 230’s shield bars a claim of liability.[30] The court considers three factors: (1) whether the defendant provides a platform for people to share content, (2) whether the content at issue was created by someone using the website rather than by the website itself, and (3) whether the lawsuit blames the website for what someone else said or did.[31] The defendant must satisfy all three parts of this test to gain the benefit of Section 230’s liability protections.

There are, however, situations where the liability shield does not apply. Section 230(e) provides that the shield has no effect on federal criminal law, intellectual property law, consistent state law, or the Electronic Communications Privacy Act of 1986.[32] For example, Section 230 would not shield a computer service provider from prosecution under 18 U.S.C. § 2252, the federal prohibition on material involving the sexual exploitation of minors, if the elements of that offense are met.

III. Discussion

What Can Taylor Swift Do?

Deepfakes can cause irrevocable emotional, financial, and reputational harm.[33] But who is liable for inflicting this harm? The short answer is that no one knows. As the legal landscape surrounding AI liability continues to evolve, it appears that current legal frameworks suggest companies and individuals, not AI itself, will face potential liability for its actions.[34]

In states with deepfake laws, like California, Swift could sue the creator of the images directly. As previously mentioned, however, if the content originated outside a state with such laws, she would have no opportunity for recourse there. Moreover, even if Swift could identify the creator, suing them would not provide a complete solution, for two reasons. First, creators’ anonymity makes them difficult to hold accountable. Second, the widespread availability of deepfake technology means another creator could easily replicate the act. A more promising avenue would be for Swift to sue the platform that made such images accessible to the world within a matter of hours.[35]

Examining the Legal Challenges and Potential Recourse

So, could Swift sue X? Under the courts’ current interpretation of § 230 of the Communications Decency Act, the answer is no. Since 1996, courts have interpreted the relevant statutory language as “creating a broad exemption from liability,” including for the negligent publishing of offensive or unlawful content.[36] For example, in Dart v. Craigslist, the plaintiff sued Craigslist on the theory that the website’s adult section constituted a public nuisance.[37] The district court held that “[a] claim against an online service provider for negligently publishing harmful information created by its users treats the defendant as the ‘publisher’ of that information.”[38] To lose the exemption from liability granted by 47 U.S.C. § 230 based on content posted by third parties, the provider must materially contribute to the creation of the material, either by controlling the content posted by the third party or by taking actions to ensure the creation of unlawful material.[39] Even in extreme cases, such as child sex trafficking ads on a defendant’s website, courts enforce absolute immunity, even when the substantive facts underlying a plaintiff’s claim are compelling.[40] Indeed, 47 U.S.C. § 230(c)(1) provides that no provider or user of an interactive computer service shall be treated as the publisher or speaker of information provided by another content provider.[41]

Nevertheless, there is an interesting dichotomy between how 47 U.S.C. § 230 plays out and how it was designed. It was created, in part, to incentivize a free and open internet offering “a myriad of avenues for intellectual activity.” While the text provided broad protection, it was never designed to create absolute immunity, only a limited “safe harbor from liability for online providers engaged in self-regulation.” At its inception, § 230 was meant to be a “Good Samaritan” provision protecting users from online indecency.[42] Of the more than 300 reported decisions addressing immunity claims, a majority have found the website entitled to immunity from liability.[43] A law meant to encourage Good Samaritan efforts quickly developed into one shielding bad Samaritans from liability.[44]

Recently, legal debates around § 230 have focused on whether platforms are liable when they promote illegal content.[45] This issue was a core question in the Supreme Court case Gonzalez v. Google LLC.[46] Ultimately, the Court did not decide the § 230 issue in Gonzalez. Taylor Swift’s public experience with the dark side of deepfakes may be what finally moves an area desperate for change. Since January 25, 2024, some states have already taken significant steps. For example, a Missouri lawmaker has introduced a bill known as the “Taylor Swift Act,” which is intended to offer legal safeguards against AI deepfake violations.[47] Similarly, multiple Tennessee lawmakers have proposed bills expanding the penalties for, and limitations on, AI.[48]

IV. Conclusion

While it is unfortunate that these legal issues have only come to light in response to Taylor Swift’s deepfakes, the incident might be the push needed for legislation to catch up with the modern age of technology. Hopefully, Swift gets some sense of justice to remedy the unimaginable pain the incident has caused. But the truth is, no one really knows how that will happen or what it will look like. Perhaps that justice will come in the form of increasing the public’s knowledge of such disastrous and malicious acts of internet violence.[49]


[1] Adam Thierer, Andrea Castillo O’Sullivan & Raymond Russell, Artificial Intelligence and Public Policy 3 (Mercatus Ctr. at Geo. Mason Univ. 2017), https://deliverypdf.ssrn.com/delivery.php?ID=662005122118001104084115116108019030024072085041037020022064114014064110031095030007055063002017114034038000068002023126077118024015017023051112026000007088106089037064037001106126084084076093104068005127117104113064107120118108074020021077106118120&EXT=pdf&INDEX=TRUE.

[2] TechTarget, https://www.techtarget.com/whatis/definition/deepfake.

[3] Taijuan Moorman, X Restores Searches for Taylor Swift Following Sexually Explicit Deepfake Images, USA Today (Jan. 30, 2024), https://www.usatoday.com/story/entertainment/celebrities/2024/01/29/taylor-swift-x-twitter-searches-no-results-explicit-ai-images/72393604007/.

[4] Brut. (@Brutamerica), TikTok (Jan. 25, 2024), https://www.tiktok.com/@brutamerica/video/7328059697255255339?lang=en.

[5] Elizabeth Napolitano, X Blocks Searches for “Taylor Swift” After Explicit Deepfakes Go Viral, CBS News (Jan. 29, 2024), https://www.cbsnews.com/news/taylor-swift-deepfakes-x-search-block-twitter/.

[6] Id.

[7] Mika Westerlund, The Emergence of Deepfake Technology: A Review, 9 Tech. Innovation Mgmt. Rev. 39, 40 (2019), https://timreview.ca/article/1282.

[8] James Vincent, Why We Need a Better Definition of ‘Deepfake,’ The Verge (Jan. 30, 2024, 6:22 PM), https://www.theverge.com/2018/5/22/17380306/deepfake-definition-ai-manipulation-fake-news.

[9] Meredith Somers, Deepfakes, Explained, MIT Mgmt. Sloan Sch. (July 21, 2020), https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained.

[10] Id.; see also Westerlund, supra note 7.

[11] Dave Johnson, What Is a Deepfake? Everything You Need to Know About the AI-Powered Fake Media, Bus. Insider (Aug. 10, 2022), https://www.businessinsider.com/guides/tech/what-is-deepfake; see also Westerlund, supra note 7.

[12] Id.

[13] Mike Leaño, Deepfakes Have Productive Enterprise Uses Too, Frontier Enterprise (Jan. 26, 2023), https://www.frontier-enterprise.com/deepfakes-have-productive-enterprise-uses-too/#:~:text=%E2%80%9CDeepfake%20technology%20can%20foster%20accessibility,to%20be%20used%20in%20research.

[14] Id.

[15] Caroline Quirk, The High Stakes of Deepfakes: The Growing Necessity of Federal Legislation to Regulate This Rapidly Evolving Technology, Princeton Legal Journal (June 19, 2023), https://legaljournal.princeton.edu/the-high-stakes-of-deepfakes-the-growing-necessity-of-federal-legislation-to-regulate-this-rapidly-evolving-technology/#_ftnref2.

[16] Shouvik Das, Deepfake Makers: Why is it so Hard to Catch Them?, MINT: Technology (Nov. 9, 2023, 11:08 PM), https://www.livemint.com/technology/why-is-it-hard-to-catch-those-who-make-deepfakes-11699550983350.html#:~:text=A%20deepfake%20is%20more%20sophisticated,powerful%20hardware%20and%20software%20tools.; https://timreview.ca/sites/default/files/article_PDF/TIMReview_November2019%20-%20D%20-%20Final.pdf.

[17] Nils C. Köbis et al., Fooled Twice: People Cannot Detect Deepfakes But They Think They Can, iScience 1 (Nov. 19, 2021), https://www.cell.com/action/showPdf?pii=S2589-0042%2821%2901335-3.

[18] Jack Langa, Deepfakes, Real Consequences: Crafting Legislation to Combat Threats Posed by Deepfakes, 101 B.U. L. Rev. 761, 774 (2021).

[19] See Quirk, supra note 15.

[20] Id.

[21] Elliott Davis Jr., These States Have Banned the Type of Deepfakes That Targeted Taylor Swift, U.S. News (Jan. 30, 2024, 4:06), https://www.usnews.com/news/best-states/articles/2024-01-30/these-states-have-banned-the-type-of-deepfake-porn-that-targeted-taylor-swift.

[22] Clare Stouffer, What Are Deepfakes? How They Work and How to Spot Them, Norton (Nov. 1, 2023), https://us.norton.com/blog/emerging-threats/what-are-deepfakes.

[23] Id.

[24] Asha Hemrajani, China’s New Legislation on Deepfakes: Should the Rest of Asia Follow Suit?, The Diplomat (March 8, 2023), https://thediplomat.com/2023/03/chinas-new-legislation-on-deepfakes-should-the-rest-of-asia-follow-suit/#:~:text=It%20specifically%20prohibits%20the%20production,using%20artificial%20intelligence%20(AI).

[25] U.S. Const. amend. I.

[26] Kaplan v. California, 413 U.S. 115, 119 (1973).

[27] See Hemrajani, supra note 24.

[28] Telecommunications Act of 1996, Pub. L. No. 104-104 (1996), https://www.govinfo.gov/app/details/PLAW-104publ104; 47 U.S.C. § 230 (2011).

[29] Kelly O’Hara & Natalie Campbell, What is Section 230 and Why Should I Care About It?, Internet Soc’y (Feb. 21, 2023), https://www.internetsociety.org/blog/2023/02/what-is-section-230-and-why-should-i-care-about-it/.

[30] Universal Commc’n Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 418 (1st Cir. 2007).

[31] Id.

[32] 47 U.S.C. § 230 (2011).

[33] Joey Schneider, Missouri Lawmaker Introduces ‘Taylor Swift Act’ to Fight AI Deepfakes, NewsNation (Feb. 10, 2024, 8:48 PM), https://www.newsnationnow.com/business/tech/missouri-taylor-swift-act-ai/.

[34] Hannah Albarazi, These Are the High-Stakes AI Legal Battles to Watch in 2024, Law360 (Jan. 1, 2024, 8:02 AM), https://www.law360.com/articles/1774888/these-are-the-high-stakes-ai-legal-battles-to-watch-in-2024?copied=1.

[35] DadChats, TikTok.

[36] Hill v. StubHub, Inc., 727 S.E.2d 550, 561 (N.C. Ct. App. 2012).

[37] Id. at 560.

[38] Id.

[39] Hill, 727 S.E.2d at 561.

[40] Id.

[41] 47 U.S.C. § 230(c)(1) (2011).

[42] Mary Graw Leary, The Indecency and Injustice of Section 230 of the Communications Decency Act, 41 Harv. J.L. & Pub. Pol’y 553, 573 (2018).

[43] Hill, 727 S.E.2d at 558.

[44] See Leary, supra note 42.

[45] Alan Z. Rozenshtein, Interpreting the Ambiguities of Section 230, Brookings (Oct. 26, 2023), https://www.brookings.edu/articles/interpreting-the-ambiguities-of-section-230/.

[46] Gonzalez v. Google LLC, 598 U.S. 617 (2023).

[47] See Schneider, supra note 33.

[48] Angele Latham, Taylor Swift, Deep Fakes, Free Speech and the Push in Tennessee to Regulate AI, The Tennessean (Feb. 12, 2024, 12:26 PM), https://www.tennessean.com/story/news/politics/2024/02/12/taylor-swift-first-amendment-push-ai-regulation-tennessee-united-states/72399975007/.

[49] See DadChats, supra note 35.
