Supreme Court to Address Big Tech Immunity Under Section 230

October 14, 2022

Last week, the Supreme Court granted certiorari in two cases challenging Section 230 of the Communications Decency Act. The outcome of the Supreme Court’s review could change how big tech and social media giants moderate content on their sites.

What is Section 230?

Section 230 of the Communications Decency Act of 1996 generally protects tech and media companies from legal liability for publications made by third-party users, stating:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

While Section 230 contains some exceptions, the protections are broad. Supporters of Section 230 are concerned that placing a legal obligation on websites to moderate content posted by third parties would impose an unduly burdensome duty that would chill free speech and innovation. Opponents counter that Section 230 enables websites to turn a blind eye to misconduct occurring on their platforms, resulting in content moderation policies that are applied inconsistently.

A ruling from the Supreme Court narrowing the scope of the immunities granted by Section 230 would change the internet as we know it. As technology and user-generated content have evolved, so have discussions about the scope of Section 230. In 2020, Justice Clarence Thomas acknowledged the issues presented by Section 230 and signaled that change might be coming, stating:

“Paring back the sweeping immunity courts have read into §230 would not necessarily render defendants liable for online misconduct. It simply would give plaintiffs a chance to raise their claims in the first place. Plaintiffs still must prove the merits of their cases, and some claims will undoubtedly fail. Moreover, States and the Federal Government are free to update their liability laws to make them more appropriate for an Internet-driven society.”

The Cases Under Review

The cases under the Supreme Court’s review are Gonzalez v. Google and Taamneh v. Twitter. In both cases, Petitioners claim that Google and Twitter permitted ISIS terrorists to use their platforms to organize, recruit and facilitate terrorist attacks that resulted in the deaths of Petitioners’ family members. Petitioners seek recourse against the social media platforms themselves.

In Gonzalez v. Google, the Plaintiff brought various civil claims against Google, alleging that Google assisted ISIS in spreading videos on its YouTube platform because YouTube’s algorithm recommended ISIS videos to users based on their account characteristics. The district court and the United States Court of Appeals for the Ninth Circuit dismissed the claims, citing Google’s immunity under Section 230. The Supreme Court agreed to review this decision and will determine whether Section 230’s protections extend to a social media website’s algorithmic recommendations.

In Taamneh v. Twitter, Plaintiffs brought claims against Facebook, Twitter and Google, alleging that the platforms aided and abetted ISIS members by failing to take aggressive action to prevent terrorism on their platforms. The Northern District of California dismissed Plaintiffs’ claims, and Plaintiffs appealed to the United States Court of Appeals for the Ninth Circuit, which declined to consider the application of Section 230. The Supreme Court will also review this decision and decide whether Section 230 immunity applies to a website that regularly works to identify and prevent terrorism but fails to take aggressive action.

The cases under review have the potential to redefine the scope of a social media platform’s immunities under Section 230.

Current Exceptions to Section 230 Immunity

Section 230 immunity is not absolute. Courts have held that service providers that agree to remove content but fail to do so are not protected by Section 230, as seen in the 2009 ruling in Barnes v. Yahoo!, Inc. Likewise, when a website provider has knowledge of criminal activity and negligently fails to warn users of that risk, Section 230 protections do not apply, as held in Doe v. Internet Brands, Inc.

Congress has also taken steps to chip away at Section 230 protection by passing the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), which removes Section 230 immunity for websites facilitating or promoting online sexual exploitation. Specifically, FOSTA provides a civil cause of action for violations of federal trafficking laws and permits victims to sue the perpetrators of their trafficking and anyone who “knowingly benefits . . . from participation in a venture which that person knew or should have known” was engaging in sex trafficking. In practice, however, this exception has proven extremely weak in court when applied to websites hosting Child Sexual Abuse Material (CSAM).

For example, in Jane Does, et al. v. Reddit, Inc., the Ninth Circuit Court of Appeals found that a platform hosting CSAM on its servers is still protected by Section 230 if the content was published by third-party users of the site and is not present as a result of the platform’s own conduct. The court held that the CSAM was not present as a result of the platform’s own conduct even though the victims had asked the platform to remove it, the platform removed it only sometimes, the platform did nothing to prevent future reposting, and the platform derived advertising revenue from the content.

While the Ninth Circuit declined to remove Section 230 immunity when violations of FOSTA were alleged in Jane Does v. Reddit, it did acknowledge the shortfalls of FOSTA, stating that the statute:

“Retains only a limited capacity to accomplish its original goal of allowing trafficking victims to hold websites accountable. However, this is a flaw, or perhaps a feature, that Congress wrote into the statute, and is not one we can rewrite by judicial fiat.”

Undoubtedly, exceptions to Section 230 remain extremely limited, insulating social media sites from civil liability in most circumstances. This makes the Supreme Court’s review of Gonzalez v. Google and Taamneh v. Twitter all the more significant.

How An Overhaul of Section 230 Could Impact Content Removal

As user-generated content grows, so do concerns about content moderation. Because internet platforms moderate content according to each site’s terms of use, infringing content is defined differently by each website, and removal requests are processed inconsistently, often on a site-by-site and case-by-case basis. Additionally, as technology evolves, removal processes have become increasingly automated, meaning that a ‘bot’ may conduct a review in lieu of a person.

The Supreme Court’s adjudication of Gonzalez v. Google and Taamneh v. Twitter could have major implications for tech and social media companies by stripping immunity from platforms that fail to properly moderate their content and algorithms. Currently, people harmed by online actors typically have valid civil claims only against the perpetrator. For example, if a person is being impersonated on Facebook, the cause of action lies against the person operating the impersonation account, not Facebook. Likewise, if a person is the target of a defamatory fake review campaign on Google, the cause of action lies against the publisher of the content, not Google. A narrowing of Section 230 could permit causes of action against service providers like Facebook and Google themselves.

With the future of content moderation at stake, it is clear that these cases will be closely followed by advocates on both sides of the Section 230 debate. If you have questions about the Communications Decency Act, or if you are the victim of an online attack and want to weigh your options, please contact KJK Defamation & Content Removal Attorney Alexandra Arko (ALA@kjk.com; 216.716.5642).