The President and free speech: consequences of Twitter’s fact-checking indication

Amélie Heldt, Leibniz Institute for Media Research, Hans-Bredow-Institut, Hamburg, Germany

PUBLISHED ON: 04 Jun 2020

After Twitter labelled a tweet by Donald Trump as ‘potentially misleading’ and indicated that it was fact-checking the statement made, the US President signed an ‘Executive Order on Preventing Online Censorship’, mainly targeting a piece of legislation that provides immunity from liability for internet services and is often referred to as the ‘Twenty-Six Words that Created the Internet’. The dispute itself is not a new one: although a heavy user of Twitter, Trump has been accusing social media platforms of discriminating against conservative viewpoints and of unfairly penalising right-wing users. Nevertheless, this Executive Order marks a new level of escalation and an unprecedented threat to social media. The situation illustrates how torn we are when it comes to intermediary immunity, or rather liability, because of the challenging questions it raises with regard to freedom of expression and the protection of deliberative space. Changing the rules for platform immunity is particularly complicated in the US due to the broad scope of application of freedom of speech, even more so because of a doctrinal cul-de-sac in First Amendment theories.

Divergent expectations?

While we consider social media platforms to be important parts of the digital public sphere (they host much of our daily communication) and consequently expect them to take responsibility and protect democratic values, we refute the idea that platforms should become arbiters of speech. In other words, we criticise the platforms’ power over online speech, yet we want them to fix problems that go beyond corporate responsibility. Of course, the companies benefitting from the attention economy are neither neutral nor innocent bystanders: their services have an undeniable effect on society. Nevertheless, it is important to note that the legal regime created for ‘interactive computer services’ was meant to stimulate ‘freedom of speech in the new and burgeoning Internet medium’ (Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997)). The current administration, by contrast, favours a law on ‘platform fairness’ that would take away the platforms’ discretion over content.

Section 230 under attack

Under section 230 (c) (1) of the Communications Decency Act (CDA), platforms are, in principle, not liable for user-generated content because they are not considered publishers or editors: ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider’. This provision from 1996 paved the way for services hosting third-party content without a legal obligation to monitor it. It is often referred to as a highly relevant piece of internet regulation due to its beneficial effect on the internet economy, since it shields providers from liability. It also leaves it to the platforms’ discretion ‘to restrict access to or availability of material’ that they consider unwanted under section 230 (c) (2) CDA. At the same time, it is blamed as the ground on which harmful content can be uploaded and propagated online, and it has been the subject of controversy in recent years. The criticism expressed in the Executive Order boils down to the claim that platforms are not neutral when it comes to user-generated content: they curate content and, hence, become editors. A narrower scope of application would lead to stricter intermediary liability for third-party content and would probably change the platform economy.

From a German perspective, a moderate form of liability for unlawful content might seem relatively reasonable, since our legal system allows speech-restricting laws if they meet the constitutional requirements of Art. 5 (2) German Basic Law. However, the underlying principle of “the same rules online as offline” cannot be transferred to the First Amendment: its scope of application is much broader, and the federal legislature is barred from passing laws that could limit freedom of speech for US citizens, which in turn means that there are only very rare exceptions to the strict scrutiny applied under the First Amendment. Besides, social media platforms are considered speakers themselves and are hence protected by the First Amendment against the coercive power of the state.1 On the flip side, information intermediaries have gained such extensive power over online communication that they are no longer mere “pipes” through which third-party content simply flows, but rather global actors that govern speech in the digital public sphere, which is why some consider them quasi-state actors.

Social media platforms are private, not state actors

Whether large social media platforms should be considered state actors and therefore be bound by the First Amendment has been extensively discussed in legal scholarship in recent years (inter alia: Zatz, 1998; Berman, 2000; Citron & Richards, 2017; Langvardt, 2017; Peters, 2017; Wu, 2017; Keller, 2018). Under the state action doctrine, private parties are not required to respect the fundamental rights of third parties enshrined in the Bill of Rights; those rights apply against the state. The public forum doctrine was developed by the Supreme Court to guarantee First Amendment rights in spaces that ‘have immemorially been held in trust for the use of the public, and, time out of mind, have been used for purposes of assembly, communicating thoughts between citizens, and discussing public questions’ (Hague, 307 U.S. at 515). Only when private parties fall under the public function or the entanglement exception can they be treated as state actors and, potentially, provide a public forum.

The Executive Order cites two Supreme Court cases (Packingham v. North Carolina; Pruneyard Shopping Center v. Robins) dealing with the question of whether private actors provide a public forum. In Packingham, the Supreme Court did refer to social media platforms as the ‘modern public square’ of the digital age, but it did so in order to strike down a law under which the government could restrict access to such platforms. The case was precisely about preventing the state from wielding power over access to social media, without calling the platforms public fora in the doctrinal sense. So far, courts have refrained from applying the company-town analogy (Marsh v. Alabama) to other private properties used for expressive activities. Recently, courts have repeatedly emphasised that social media platforms are not state actors under the current doctrine. Knight First Amendment Institute v. Trump showed that while platforms might host governmental speech, which can turn the space in question into a designated public forum under the doctrine, the platforms themselves remain private actors. In PragerU v. YouTube, the Ninth Circuit affirmed that hosting speech is not a ‘traditional, exclusive public function’ and that ‘despite YouTube’s ubiquity and its role as a public-facing platform, it remains a private forum’. (I have elaborated in this open access paper on the extent to which platforms can de facto host public discourse and at the same time enforce their own rules on users, i.e. moderate content, without providing a public forum in the legal sense.)

Consistent ambiguity vis-à-vis platform liability

The results of this case law are, again, two-sided. On the one hand, it gives platforms immense power over what can be said, and, in the US, a power over speech that the state itself does not have. On the other, it allows them to moderate content and to ban content perceived as harmful, precisely because they are not bound by the First Amendment. They can ban misinformation and mark user-generated content as potentially misleading, even if the user is a government official. They can serve the public interest in times of pandemic-related uncertainty by providing access to trusted third-party sources. They can facilitate the propagation of images of police violence and governmental abuses of power. Ultimately, they decide over ‘post-truth politics’ to some extent, without any legitimation apart from large user numbers. It goes without saying that the doctrinal debate on state actors and public fora is more nuanced and complicated than can be reproduced in this short opinion piece. Still, it is essential to bear in mind that the larger constitutional framework described here is built on the arguments of democracy, truth, and autonomy (Emerson, 1963; Yemini, 2020). The goal is not to uphold the ‘free marketplace of ideas’ no matter what, but to protect the societal goals enshrined in freedom of speech.

References

  • Berman, P. S. (2000). Cyberspace and the state action debate: The cultural value of applying constitutional norms to private regulation. U. Colo. L. Rev., 71, 1263.

  • Citron, D. K., & Richards, N. M. (2017). Four Principles for Digital Expression (You Won't Believe #3). Wash. U. L. Rev., 95, 1353.

  • Emerson, T. (1963). Toward a General Theory of the First Amendment. Yale L. J., 72, 877.

  • Keller, D. (2018). Internet platforms: observations on speech, danger, and money. Hoover Institution's Aegis Paper Series, (1807).

  • Langvardt, K. (2017). Regulating online content moderation. Geo. LJ, 106, 1353.

  • Peters, J. (2017). The Sovereigns of Cyberspace and State Action: The First Amendment's Application – Or Lack Thereof – To Third-Party Platforms. Berkeley Tech. LJ, 32, 989.

  • Wu, T. (2017). Is the First Amendment Obsolete?. Knight First Amendment Institute’s Emerging Threats series.

  • Yemini, M. (2020). Missing in 'State Action': Toward a Pluralist Conception of the First Amendment. Lewis & Clark L. Rev., 23, 1149.

  • Zatz, N. D. (1998). Sidewalks in cyberspace: Making space for public forums in the electronic environment. Harv. JL & Tech., 12, 149.

Footnotes

1. See also the Center for Democracy and Technology’s lawsuit against the Executive Order on Preventing Online Censorship, June 2, 2020.
