
Social Media Wars: Section 230 & Big Tech

November 19, 2020

On Nov. 17, 2020, both Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg testified before the United States Senate Judiciary Committee on issues related to the social media sites’ actions, which some politicians decried as “censorship.” The title of the hearing itself revealed the nature of the questioning – it was a hearing on “Breaking the News: Censorship, Suppression, and the 2020 Election.”

Republicans and Democrats both seem to agree that social media platforms are biased. They simply disagree about the nature of that bias. Democrats, for example, have denounced Facebook as a “right wing echo chamber” with a “conservative bias,” driven by algorithms that encourage right wing engagement in general and extremist behavior in particular. If you measure “engagement” per post in the United States, the most engaged political sites are those of conservatives like Dan Bongino, Ben Shapiro, David Harris, Jr., Franklin Graham and “Blue Lives Matter,” as well as that of President Trump himself. Democrats argue that the algorithms built into these and other platforms tend to force individuals down various “rabbit holes” of conspiracy theories and disinformation. They also criticize social media organizations for what they perceive as inaction in 2016, when foreign governments and nonstate actors perpetrated a massive disinformation campaign designed to promote chaos and disrupt that year’s election.

On the other hand, conservatives point out that certain conservative content – such as reporting on information found on Hunter Biden’s computer – was restricted from reposting by Facebook and Twitter, and that other conservative content – like allegations of massive election fraud and election stealing – has been either blocked or flagged with warning labels by the social media outlets. Conservatives complain that social media outlets have a liberal bias and that they censor conservative voices, even to the extent of “deplatforming,” or removing certain voices from their platforms. Conservatives have vowed to reform “big tech” and move to more conservative, less moderate and less moderated social media platforms like MeWe and Parler.

Each side alleges bias, prejudice and censorship. Each wants the tech giants regulated, or worse – broken up. But neither side can agree on the nature of the bias, or the nature of the remedy. Indeed, they cannot even agree what these social media outlets actually are.

Decent Communications

In 1994, the “Wolf of Wall Street” – Stratton Oakmont Investments – was upset by something posted on a Prodigy (remember Prodigy?) “Money Talks” message board alleging (correctly, it turns out) that Stratton Oakmont was a fraud and a scam. The investment company sued Prodigy, which at the time served two functions: it provided its own BBS (Bulletin Board Service) or message forums, and it provided dial-up access to the World Wide Web. The New York Supreme Court had to decide what Prodigy was, and determined that it acted as a “publisher” of the content of its users. It had the ability to moderate and control content, to edit or review, to decide what was published and what was not, and to remove offending or actionable content. As a publisher, it had liability under ordinary principles of defamation law. In fact, the user who posted the comments about Stratton Oakmont could not have caused “harm” to the investment company but for the broad distribution provided by the Prodigy service. The Court held that, as a publisher, Prodigy was liable for the content it disseminated – even if that content was generated by third-party users or subscribers.

In response to that case, Congress passed Section 230 of the Communications Decency Act, which provided that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In plain English, the statute determined that a provider of interactive computer services – an ISP like Prodigy, a forum, a website hosting service, a social media site like MySpace (remember that?), Facebook, YouTube or Twitter, or even a communications platform like Signal, Slack, Zoom, Google Meet or Microsoft Teams – was not liable as a “speaker” for what OTHERS said or published on its platform. The statute provided broad immunity to these providers not only for the content that was disseminated or distributed on their platforms, but ultimately for their decisions about whether to distribute that content.

While “social media” sites like Facebook and Twitter did not exist when Section 230 was passed, to a great extent the purpose of that provision was to encourage their creation. Indeed, the CDA says that its purpose is:

  1. To promote the continued development of the Internet and other interactive computer services and other interactive media;
  2. To preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation;
  3. To encourage the development of technologies which maximize user control over what information is received by individuals, families and schools who use the Internet and other interactive computer services;
  4. To remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material; and
  5. To ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking and harassment by means of computer.

Section 230, for good and for ill, encouraged the creation of forums where speakers could speak. Speakers remained liable for the content of their own speech – the law did nothing to change the law of defamation, privacy or other torts with respect to the “speakers.” What the law did was encourage the creation of the platforms where people engaged in that speech. Without the legal protection afforded by Section 230, there would be no Facebook, no Twitter, no Parler. If the purpose was to encourage a lot of speech – well, it worked.

Backpage and Child Abuse

Section 230 did contain certain limitations, both in its original form and as subsequently amended. For example, when the roommate-matching site Roommates.com required users to state discriminatory housing preferences, courts held that Section 230 did not shield it from claims under federal anti-discrimination laws, because it had helped develop the offending content. When Backpage (and Craigslist) were found to have either permitted or actively encouraged the posting of “escort” ads (the service, not the car), courts struggled with whether Section 230 gave them absolute immunity for ads which encouraged sexual abuse and slavery. While Craigslist agreed to voluntarily remove the ads, Backpage initially did not. Ultimately, Congress carved out an exception to Section 230 indicating that these entities might be held liable under criminal sex trafficking laws. The law also required these providers to enable parental controls and to report certain Child Sexual Abuse Material to the National Center for Missing and Exploited Children. So the “immunity” provided by Section 230, broad as it is, is not absolute.

What IS Social Media?

At the Judiciary Committee hearing, Congress continued to struggle with defining what social media platforms actually are. Chairman Lindsey Graham repeatedly analogized Facebook and Twitter to newspapers and publishers – entities which pick and choose what articles to print and what messages to deliver, and act as “editors” and “publishers” of articles. Graham pointed out that newspapers are liable for defamatory statements they publish, and suggested that social media outlets should be held to the same standard – in other words, that Congress should reinstate the holding in Stratton Oakmont. However, the same critics also want the social media platforms to stop “censoring” the speech of conservatives. These propositions – liability for user-generated content and additional liability for restricting user-generated content – are, of course, inconsistent if not incompatible.

This goes to the role of social media. Mark Zuckerberg, the CEO of Facebook, asserts that Facebook is a mere platform, a place where people can meet and talk. Initially, to the chagrin of liberals and Democrats, he took the position (well, he claimed to take the position) that Facebook could not or would not remove content even if that content were offensive, inflammatory or factually inaccurate. The remedy for offensive speech, he surmised, was not removal of that speech, but more speech: in the battle for ideas, truth will ultimately win out. His service was a mere forum; what people said and did on that platform was for them to decide. In his testimony before the Committee, he continued to cling to that position, albeit in a modified form, recognizing that certain content is not protected speech (e.g., child pornography, advocacy of violence and “fighting words”) and that other harmful speech can and should be “regulated” by the platform itself. Indeed, even Parler, which claims to take a no-censorship approach, prohibits materials that are harmful to minors (including otherwise protected pornography and nudity), speech which advocates criminality, speech which promotes terrorism, and speech which is offensive or tortious (defamation, invasion of privacy, false light and violations of rights of identity).

There are various efforts afoot to amend Section 230. The Justice Department has proposed eliminating immunity protection for social media sites’ decisions to remove or restrict access to content, exposing them to liability for such decisions. While a social media site would not automatically be deemed a “publisher” of content that it chose to permit to be disseminated, it could be liable if, for example, it permitted the dissemination of content that promoted “terrorism” or “violent extremism,” content that promoted “self-harm,” or other content that was “unlawful.” And social media sites could restrict content only if they had “an objectively reasonable belief that the material falls within” those categories.

The DOJ 230 proposal would also remove immunity if it were alleged that the entity acted “purposefully and with the conscious object to promote, solicit, or facilitate material or activity by another content provider that the service provider knew or had reason to know would violate federal criminal law.” Remember, the Section 230 immunity as currently constructed is not simply immunity from liability; it is immunity from suit. Thus, if a party (or a prosecutor) alleged that the materials a provider carried solicited, facilitated or violated ANY federal criminal law (and at last count there were more than 14,000 federal criminal laws and regulations), the provider could be sued as long as the suit alleged that the provider “had reason to know” that the content of a third party violated the law.

Under that standard, things like ads for state-licensed and regulated recreational and medicinal marijuana facilities – or postings by users about such facilities – would promote a violation of federal criminal law. Facebook could be sued or prosecuted if it let someone call ham “turkey ham,” advocate the free-ranging of pigs, post a video of a dog scaring livestock, or suggest that shrubbery be removed from a federal park. If someone on social media “consults, combines, confederates, or corresponds with any pirate or robber upon the seas,” then the social media site could be prosecuted. If someone online advocates the “impairing” of a government-made fence, wall, or enclosure, the platform could be prosecuted for that act. The “knew or had reason to know” standard is not a very high bar, since the provider might be deemed to “know” everything that anyone posted on its site. There’s a difference between a site itself seeking to further criminal activity (aiding and abetting, criminal facilitation, conspiracy) and a site simply being used as a platform by others to commit crimes. This provision would make social media sites responsible for policing – in every sense of the word – the actions of their subscribers.

Another proposed provision would remove the immunity in civil cases if the site “knowingly disseminated” material or engaged in activity, had “actual notice” that the material was objectionable, failed to remove or restrict access to the material and report it to law enforcement where required by law, and failed to preserve related evidence. Another provision would remove immunity where the provider has received a final court judgment indicating that illegal content or activity is on its platform but fails to remove that content within a reasonable time. It would also provide immunity to the platform for the act of removing content as ordered by a court.

The Barr proposal would also require providers to “state plainly and with particularity the criteria” they use to moderate content; if they don’t, they won’t have acted in “good faith” in removing or restricting content, and will essentially be deemed publishers of that content. They won’t be able to change their minds and restrict new and offensive content that they hadn’t thought of, since that action would not be “consistent with the terms of service and use” and would not be consistent with their “prior representations” regarding their content moderation policies. Thus, if Facebook said that it would restrict foreign state-sponsored misinformation during a presidential campaign, but did not say it would do so during a recount, then restricting Russian disinformation during the recount would forfeit its immunity. The proposal would also require providers to give those whose content is restricted under these policies notice of the restriction, an opportunity to respond, and (presumably) the ability to challenge the restriction – unless doing so would end up notifying a terrorist or criminal, would interfere with law enforcement or would risk imminent harm to others.

All told, the Barr/DOJ proposal to change Section 230 would make providers publishers for many purposes – and would limit their ability to decide what content to have on their sites and what content to prohibit.

In the end, as the hearing demonstrated, everyone wants content they like to be widely available. Everyone wants to say what they want online, and not have it restricted or restrained, particularly in a way they think is arbitrary. But they want to keep out the lies and offensive content of the folks with whom they disagree. That’s hardly a standard that could be enforced by the courts, much less by social media. And that’s why the hearings were more thunder than illumination.

If you have any questions regarding Section 230 or any of the content covered above, reach out to Mark Rasch at mdr@kjk.com or 301.547.6925.


KJK publications are intended for general information purposes only and should not be construed as legal advice on any specific facts or circumstances. All articles published by KJK state the personal views of the authors. This publication may not be quoted or referred to without our prior written consent. To request reprint permission for any of our publications, please use the “Contact Us” form located on this website. The mailing of our publications is not intended to create, and receipt of them does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the author and do not necessarily reflect those of KJK.