Safe Harbor? What the Supreme Court’s Decision in Twitter, Inc. v. Taamneh Means for Content Hosting Sites

Noah Cothern, Contributing Member 2023-2024

Intellectual Property and Computer Law Journal

Introduction

Social media platforms are widely used by people from all nations and backgrounds. As of 2022, 4.59 billion people worldwide used social media.[1] This represents roughly 59 percent of the global population.[2] The titanic scale of this active userbase presents unique challenges for the operators of these platforms. Leading platforms such as Facebook and Twitter permit users to upload any desired content subject to certain moderation policies.[3] These platforms use manual, automated, and AI-assisted review to check whether content violates those policies.[4] Despite this, illicit content frequently slips through the review process and becomes available to all users. Amid these failures of content moderation, there are growing calls for social media companies to be held accountable for damage caused by content posted on their platforms.[5] In May 2023, the Supreme Court decided Twitter, Inc. v. Taamneh, 598 U.S. 471, a case with vital implications for the future of social media.[6]

This article explores the future implications of Taamneh for content hosting companies in the U.S. Part II provides the background of the Taamneh decision. Part III argues that Taamneh was a decision limited in several key respects. As social media companies continue to expand their platforms to play more active roles, further litigation is likely.

Background

In early 2017, Abdulkadir Masharipov entered the Reina nightclub located in Istanbul, Turkey, and fired over 120 bullets into a crowd of people.[7] Masharipov’s attack killed 39 people and injured 69 others.[8] He carried out the attack on behalf of the Islamic State of Iraq and Syria (ISIS).[9]

One of the people killed in the attack was Nawras Alassaf, whose family brought suit against Facebook, Inc., Google, Inc., and Twitter, Inc.[10] The family alleged that the companies aided and abetted ISIS’ attack and were therefore liable for civil damages under 18 U.S.C. §2333(d)(2), added by the Justice Against Sponsors of Terrorism Act (JASTA), because the platforms provided an outlet for terrorist organizations to distribute propaganda and seek recruits.[11]

18 U.S.C. §2333 provides that any U.S. national “injured … by reason of an act of international terrorism” may bring suit in federal court and may recover treble damages.[12] However, bringing suit against one who commits international terrorism is virtually impossible in most circumstances. Congress therefore passed JASTA, which expanded 18 U.S.C. §2333 to provide for a form of secondary liability.[13] Now, under 18 U.S.C. §2333(d)(2), any U.S. national injured by an act of international terrorism may assert liability against “any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.”[14] There is an additional requirement that the act of international terrorism must have been committed by “an organization designated as a foreign terrorist organization” under 8 U.S.C. §1189.[15] The parties never disputed that the attack was an act of international terrorism committed by a designated foreign terrorist organization.[16] The dispute was therefore whether Twitter, Facebook, and Google’s conduct constituted “aid[ing] and abett[ing], by knowingly providing substantial assistance.”[17]

When Congress passed JASTA, it included what it intended to be the proper legal framework for discerning civil aiding and abetting liability: Halberstam v. Welch, 705 F.2d 472 (D.C. Cir. 1983).[18] Halberstam concerned whether the live-in companion of a burglar could be liable as an accomplice for a murder he committed.[19] While she was unaware of the murder, she had helped his criminal enterprise by falsifying financial records for him.[20] The Court laid out a three-part test to determine liability: (1) the party whom the defendant aids must perform a wrongful act that causes an injury; (2) the defendant must be generally aware of his role as part of an overall illegal or tortious activity at the time that he provides the assistance; and (3) the defendant must knowingly and substantially assist the principal violation.[21] To determine what assistance is “substantial,” the Court provided six factors: (1) the nature of the act assisted; (2) the amount of assistance; (3) whether the defendant was present at the time of the illegal act; (4) the relation to the tortious actor; (5) the defendant’s state of mind; and (6) the duration of the assistance.[22]

The Court cautioned that “rigidly” following Halberstam’s test “risks missing the mark,” but that the test provides a “conceptual core” that has underscored aiding-and-abetting liability for centuries.[23]

With this rough test in hand, the Supreme Court found that the defendants in Taamneh did not intentionally provide substantial assistance to ISIS because the nexus between defendants and ISIS was “far removed.”[24] When the nexus is too attenuated, a plaintiff must demonstrate “culpable participation through intentional aid” that is “pervasive and systemic.”[25] Defendants had not actively aided the dissemination of terrorist content; rather, they “failed to do ‘enough’ to remove” it.[26] Importantly, there was no evidence that defendants provided any assistance to the Reina terrorists in particular.[27] The Court therefore found in favor of the defendants and reversed a judgment of the Ninth Circuit to the contrary.[28]

Discussion

Limiting Factors of Taamneh

Taamneh contained certain elements that temper the broader application of its holding. Content hosting platforms are unlikely to be able to rely on Taamneh to claim immunity from all 18 U.S.C. §2333(d)(2) claims in the future. The manner in which the Court distinguished Taamneh from Halberstam illustrates this point. Linda Hamilton was not present at the scene of the murder committed by her companion.[29] She did, however, systematically and intentionally assist his burglaries by laundering his stolen goods.[30] Plaintiffs insisted that Halberstam demonstrates that aiding and abetting ISIS generally satisfies § 2333(d)(2).[31] However, the Court disagreed with this reading, as Twitter was never alleged to have been used in any way by the ISIS terrorist cell that actually committed the Reina attack.[32] This raises the bar for secondary liability, but not to the level of complete immunity.

Plaintiffs in Taamneh constructed allegations demonstrating that ISIS used defendants’ platforms as a means of communication and that defendants’ recommendation algorithms suggested ISIS’ content to users most likely to be interested in that content.[33] However, these arguments lost much of their force because none of defendants’ platforms were used to plan or coordinate the Reina attack.

The Court, however, affirmatively held the door open for similar arguments made on new facts. It pointed out that “defendants overstate the nexus that § 2333(d)(2) requires between the alleged assistance and the wrongful act.”[34] It reiterated that a defendant need not know “all the particulars of the primary actor’s plan” to be liable for aiding and abetting.[35] Further, “even more remote support can still constitute aiding and abetting in the right case.”[36] (emphasis added). In some extreme situations, a defendant’s support to an enterprise can be so systemic that the defendant is aiding and abetting every wrongful act committed by that enterprise.[37] The Court did not stop there; it went further into the realm of speculation to state that, where a platform offers its routine services in an “unusual way” or selectively chooses to promote content made by particular terrorist groups, plaintiffs could establish liability with a lesser showing of scienter.[38] The Court’s unusually extended emphasis on counterfactuals may indicate an eagerness to consider future challenges to social media platforms.

Broadening Factors of Taamneh

Despite the language of the Court making clear that the defendants could have been liable, Taamneh nevertheless contains many elements that provide broad protections for platform hosts. For example, courts do not ordinarily view phone manufacturers as liable for illegal deals made over a call, yet the Court made this precise analogy to the defendants’ conduct.[39] According to the Court, the “mere creation” of platforms, even when used by bad actors for illegal and terrible ends, is not itself culpable.[40] Additionally, the Court held that recommendation algorithms – at least as they existed at the time of the Reina attack – do not constitute substantial assistance because the algorithms “appear agnostic” about the nature of the content recommended.[41]

This view could, however, be challenged by cases testing these recommendation algorithms at their logical endpoints. In a hypothetical situation where it could be conclusively shown that a terrorist organization’s rise to power would not have occurred but for the dissemination of propaganda performed exclusively by recommendation algorithms, it would be a stretch to declare that the algorithms provided no “substantial assistance” under the Halberstam test. Similarly, as recommendation algorithms become more integrated with artificial intelligence and more zealously distribute content, it stands to reason that the “passive assistance” the Court found in Taamneh would eventually fail to describe the role these algorithms play.[42] The Court’s phone-manufacturer analogy becomes less and less apt as content hosting platforms continue to grow in sophistication. Indeed, major social media platforms have recently invested heavily in the burgeoning AI industry with goals of “helping [organizations] reach the audiences they care about most.”[43]

Despite these future possibilities, as the law currently stands, companies that host publicly accessible platforms employing recommendation algorithms are protected from civil liability to the extent they give no further aid to the content of terrorist organizations. Ultimately, however, in a case where a plaintiff suing under § 2333(d)(2) properly alleges both that the terrorists who committed an attack made active use of a social media platform to facilitate the attack and that the platform took steps beyond providing its ordinary services – such as with more zealous, sophisticated, AI-powered algorithms – the holding in Taamneh would be in jeopardy. As platforms grow more sophisticated, such a case becomes more likely.

Conclusion

The boundaries of civil liability for tortious content posted on social media platforms remain unsettled. Taamneh provides a single point of reference, but a complete framework has yet to emerge. The advent of the AI boom will continue to challenge conventional notions of what it means to aid and abet. As social media companies enhance and expand their platforms, the passive and uninterested role they played in Taamneh is likely to become a more active and engaged one.


[1] Stacy Jo Dixon, Number of Social Media Users Worldwide From 2017 to 2027, Statista, (Aug. 29, 2023) https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/.

[2] Id.

[3] The X Rules, TWITTER, https://help.twitter.com/en/rules-and-policies/x-rules (last visited Sep. 14, 2023), see also Facebook Community Standards, META, https://transparency.fb.com/policies/community-standards/ (last visited Sep. 14, 2023).

[4] Rem Darbinyan, The Growing Role Of AI In Content Moderation, Forbes, (June 14, 2022, 6:45 AM) https://www.forbes.com/sites/forbestechcouncil/2022/06/14/the-growing-role-of-ai-in-content-moderation/?sh=6eb3e2b74a17.

[5] Michael D. Smith and Marshall Van Alstyne, It’s Time to Update Section 230, Harvard Business Review, (Aug. 12, 2021) https://hbr.org/2021/08/its-time-to-update-section-230.

[6] Devin Dwyer, Supreme Court Sides with Twitter, Google in High-Stakes Cases on Social Media, Terrorism, ABC News, (May 19, 2023, 11:37 AM)  https://abcnews.go.com/Politics/supreme-court-shields-twitter-social-media-giants-liability/story?id=99426988.

[7] Twitter, Inc. v. Taamneh, 598 U.S. 471, 478-79 (2023).

[8] Id. at 479.

[9] Id. at 478.

[10] Id. at 479.

[11] Id.

[12] 18 U.S.C. §2333(a).

[13] Taamneh, 598 U.S. at 483.

[14] 18 U.S.C. §2333(d)(2).

[15] Id.

[16] Taamneh, 598 U.S. at 484.

[17] Id.

[18] Id. at 485.

[19] Halberstam v. Welch, 705 F.2d 472, 474 (D.C. Cir. 1983).

[20] Id. at 475.

[21] Id. at 477.

[22] Id. at 488.

[23] Taamneh, 598 U.S. at 493.

[24] Taamneh, 598 U.S. at 506-07.

[25] Id. at 506.

[26] Id.

[27] Id.

[28] Id. at 507.

[29] Halberstam, 705 F.2d at 475.

[30] Taamneh, 598 U.S. at 495.

[31] Id. at 494.

[32] Taamneh, 598 U.S. at 498.

[33] Id.

[34] Id. at 495.

[35] Id.

[36] Id. at 496.

[37] Id.

[38] Id. at 502.

[39] Id. at 499.

[40] Id.

[41] Id.

[42] Id.

[43] Santosh Janardhan, Reimagining Our Infrastructure for the AI Age, Meta, (May 18, 2023) https://about.fb.com/news/2023/05/metas-infrastructure-for-ai/. See also Lora Kolodny, Elon Musk Plans Tesla and Twitter Collaborations with xAI, His New Startup, CNBC, (July 14, 2023, 7:04 PM) https://www.cnbc.com/2023/07/14/elon-musk-plans-tesla-twitter-collaborations-with-xai.html.
