Hybrid institutions for disinformation governance: Between imaginative and imaginary

Martin Fertmann, Research Program 2: "Regulatory Structures and the Emergence of Rules in Online Spaces", Leibniz-Institute for Media Research/Hans-Bredow-Institut, Hamburg, Germany, m.fertmann@leibniz-hbi.de
Bharath Ganesh, Media Studies, University of Groningen, Netherlands
Robert Gorwa, Berlin Social Science Center (WZB), Germany
Lisa-Maria Neudert, Oxford Internet Institute, University of Oxford, United Kingdom

PUBLISHED ON: 16 May 2022

This opinion piece is part of a miniseries advancing key reflections in disinformation governance. Bridging the gap between social science findings and policy proposals, the texts address fundamental questions to foster future research agendas: (i) how should underlying conceptions, most notably notions of “truth”, guide the increasing weight that states and companies give to curbing disinformation? (ii) how can such an intricate empirical field inform potential governance solutions? and (iii) can these conceptual and empirical challenges benefit from institutional innovation as a way to reconfigure traditional power dynamics? This article discusses the potential of new institutions for disinformation governance; the other pieces in the series focus on empirical evidence and conceptual challenges.

When it comes to disinformation, who should be the arbiter of the much contested idea of truth, if anyone: private platforms, state authorities, or some other institution? Some argue that state authority has proven ill-equipped to protect users from harm, given that only a fraction of online falsehoods are actually illegal under current legislation. Private platforms have in turn created complex rules moderating the various forms of harmful but legal content on their networks. However, their rule-making power is largely unchecked, and users and government actors alike have grown wary of the power companies wield over public discourse, as is evident in the many ongoing global conversations around ‘platform regulation’.

Given the oft-decentralized, polyarchic nature of platform governance, and the many interests and stakeholders involved, policy discussions around issues like disinformation frequently feature a hopeful discussion of potentially innovative regulatory approaches that could be marshaled to respond quickly and effectively to issues of platform governance. Whether through more classic co-regulation, or through the creation of informal industry-led self-regulatory organizations following government pressure, at their best, these efforts are built upon past experience from media law, telecommunications regulation, and corporate governance in other complex industries, like finance. At their worst, they are ‘shadow regulation’ or ‘jawboning’, a way for states to exert pressure on influential private gatekeepers and get them to act against perfectly legal content: out of sight and out of mind. What are some of the risks, benefits, and most important developments in this ‘hybrid governance’ landscape, both in the disinformation space, as well as in other areas of user-generated content moderation?

The Turn to Informal and “Hybrid” Institutions for Platform Governance

For more than a decade, large platforms have established voluntary organizational arrangements to take over, or at least advise on, content decisions. While the roots of this can be found in the earliest efforts by the European Union to engage telecommunications providers and emerging internet companies in roundtables, declarations, and voluntary principles around issues like child safety (Livingstone et al., 2013), informal arrangements with some degree of civil society, government, and industry involvement have become an increasingly popular remedy across virtually all user-generated content domains. Examples range from multi-stakeholder dialogue formats for industry standard-setting, such as the Global Network Initiative (GNI) launched in 2008 (Samway, 2016), to a host of individual companies’ initiatives, such as Google’s “Advisory Council on the Right to be Forgotten” (back in 2014) in the area of data protection and privacy.

As public attention on content moderation issues has grown, more and more companies have created “Councils”: bodies of independent experts offering guidance on content questions have been instituted by Twitter (2016), Twitch (2020) and TikTok (2021). These may be best described as a form of “stakeholder engagement”, functioning with little institutional structure (e.g. without a formal charter, voting processes, or other clearly defined functions). They largely remain opaque, lack accountability, and do little to communicate their activities publicly. Meta’s highly publicized Oversight Board is an exception in this regard, featuring more institutional substance in the form of clearly defined governance documents, as well as procedures for fulfilling its functions as an advisory and appeals body (Klonick, 2020; Douek, 2019), even though critics still question its fundamentally narrow remit. Despite their differences, what these industry-led efforts have in common is that, as firm-level self-regulatory bodies, they engage in governance below the threshold of illegality (or even below the threshold of private rule violations).

Through these kinds of initiatives, as well as agreements like the EU Code of Conduct on combatting illegal online hate speech and the EU Code of Practice on disinformation, companies’ private rule-making and enforcement practices have been shaped by government (and sometimes, civil society) preferences. Alongside informal channels, this is also happening through formal ones: legislators are increasingly integrating platform rule-making and enforcement practices into their regulatory machinery, both through informal coordination and through new institutions. European regulation is introducing new arbitration boards in the Digital Services Act (Art. 18), likely influenced by the independent co-regulatory boards and arbitration committees that can be part of the moderation process under § 3 (6) and § 3c of the German NetzDG law. Meanwhile in Germany, the debate around new institutions for platform governance is picking up pace with the governing parties’ agreement to create (not yet defined) “Social Media Councils” (Plattformräte) in their 2021 coalition treaty (p. 17).

The Case of the Global Internet Forum to Counter Terrorism (GIFCT)

Do current forms of hybrid governance privilege corporate power and interests alongside those of the largest national governments? The Global Internet Forum to Counter Terrorism (GIFCT) was established by major tech companies in 2017 to coordinate efforts to moderate terrorist content across different platforms. Its central ‘product’ is a hash-sharing database, accessible to GIFCT’s 18 member companies, that contains digital fingerprints (‘hashes’) of images, video, audio, and text considered to promote ‘violent extremism’. Member companies run automated matching functions on all new material uploaded by their users and block likely matches before they appear, allowing them to act at a scale and speed impossible with user flagging and human moderators in the loop.
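To make the mechanism concrete, the following is a minimal, purely illustrative sketch of hash-based upload screening. It assumes a simplified exact-match lookup against a small in-memory set; the names (shared_hashes, fingerprint, screen_upload) and the placeholder hash values are hypothetical, and GIFCT’s actual systems, which reportedly also rely on perceptual hashing to catch near-duplicates, are proprietary and not publicly documented.

```python
import hashlib

# Hypothetical shared hash list, standing in for a cross-platform
# hash-sharing database. In practice, entries are contributed by
# member companies and are not public.
shared_hashes = {
    "9a0364b9e99bb480dd25e1f0284c8555",  # placeholder values for illustration
    "45c48cce2e2d7fbdea1afc51c7c6ad26",
}

def fingerprint(content: bytes) -> str:
    """Compute a simple exact-match fingerprint of an uploaded file.
    Real systems often use perceptual hashes so that slightly altered
    copies of the same material also match."""
    return hashlib.md5(content).hexdigest()

def screen_upload(content: bytes) -> bool:
    """Return True if the upload should be blocked because its
    fingerprint matches an entry in the shared database."""
    return fingerprint(content) in shared_hashes

# Example: a platform checks each new upload before publishing it.
if __name__ == "__main__":
    upload = b"example user-uploaded file bytes"
    if screen_upload(upload):
        print("Upload blocked: matched shared hash database")
    else:
        print("Upload allowed")
```

Because the check is a single lookup per upload rather than a human review, this kind of matching is what allows removal decisions to scale to billions of uploads, which is also why errors in the underlying database propagate across all participating platforms at once.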

The organization has been actively critiqued by civil society, with widespread concerns about the secrecy and opacity of the database and the false positives its matching systems might generate. Scholars and civil society have pointed out that GIFCT’s hash database routinely includes only jihadist content, while material from other types of terrorism is added only after violence has occurred. This is because GIFCT relies on the UN Security Council’s sanctions lists to define which content belongs in the hash-sharing database, and no extreme right-wing groups are sanctioned by the UN Security Council. Despite extensive research and reports into these ‘taxonomical’ issues, the organization has not indicated that it will make any changes to these processes, effectively attributing this double standard to states’ refusal to sanction right-wing extremist groups.

In 2019 and 2020, GIFCT undertook significant reforms to include a wider range of stakeholders: it developed an independent advisory committee with members from civil society, academia, and government, while maintaining an operational board composed of corporate members; established working groups that included academics, policy makers, and member companies; and actively began funding research. Despite these important incremental developments, GIFCT remains a form of ‘private multilateralism’ in which corporate interests and procedures dominate the design and operation of the institution, even with the inclusion of ‘stakeholders’ (Borelli, 2021). While GIFCT has made minor improvements in transparency, its transparency reports remain underdeveloped and sparse. Despite the extensive response and ongoing criticism from NGOs and civil society, GIFCT’s improvements on human rights remain incomplete: neither the organization nor the hash database has been meaningfully restructured.

Lessons from GIFCT: The Potential Pitfalls of Hybrid Governance

One concern articulated by some observers of GIFCT, and of other developments in the realm of automated content moderation more broadly, is that we could see a ‘GIFCT-ification’ (or ‘hash-ification’) of other content areas. As the underlying technology has already diffused from the copyright space (with systems like ContentID) into areas of national security and public safety (child sexual abuse material, violent extremism), what is stopping governments from, for instance, pressuring companies to remove Russian war propaganda or other forms of disinformation even more widely, or from getting firms to set up a similar institution to go after other forms of politically undesirable content?

Informal governance initiatives can be powerful, and can lead to changes in firm standards and processes that may affect billions of people around the world. For that reason, we need far more research into these institutions, and into the emerging processes of ‘hybrid’ platform governance more broadly, to better assess their effects on the status quo. How do these kinds of governance initiatives interact with the incentives, biases, and power dynamics that are already baked into how platforms moderate content? What are the long-term effects on public, formal regulation: do these efforts make effective government-led regulation less likely by dampening the demand of policymakers or the public for transformative policy change? It is clear that blurring the lines between private and public regulation has advantages for at least some of the actors involved (Gorwa, 2019); does civil society also benefit, and is it able to meaningfully use these fora to push its agenda and improve user rights, despite its relatively scarce resources and institutional power?

As of right now, these are difficult questions to answer empirically. Many of these fora are closed to researchers; governments have yet to meaningfully develop processes for monitoring and measuring the effect of their policy interventions. Nevertheless, given the complexity and speed at which the current platform regulation landscape is developing, ‘hybrid’ and informal governance is likely here to stay, and these questions should be central to policy researchers in the years to come.

Footnote

The miniseries is a product of discussions with a group of experts convened by Clara Iglesias Keller, Stephan Dreyer, Martin Fertmann and Keno Potthast at the Digital Disinformation Hub of the Leibniz-Institute for Media Research | Hans-Bredow-Institut, on 24 February 2022. We thank all the participants for their input in the discussions reflected in this miniseries.

References

Borelli, M. (2021). Social media corporations as actors of counter-terrorism. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448211035121

Douek, E. (2019). Facebook’s “Oversight Board”: Move fast with stable infrastructure and humility. North Carolina Journal of Law & Technology, 21(1). https://scholarship.law.unc.edu/ncjolt/vol21/iss1/2

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407

Klonick, K. (2020). The Facebook Oversight Board: Creating an independent institution to adjudicate online free expression. Yale Law Journal, 129, 2418. https://ssrn.com/abstract=3639234

Livingstone, S., Ólafsson, K., & Staksrud, E. (2013). Risky social networking practices among “underage” users: Lessons for evidence-based policy. Journal of Computer-Mediated Communication, 18(3), 303–320. https://doi.org/10.1111/jcc4.12012

Samway, M. (2016). The Global Network Initiative: How can companies in the information and communications technology industry respect human rights? In D. Baumann-Pauly & J. Nolan (Eds.), Business and Human Rights: From Principles to Practice.
