Between evidence and policy: Bridging the gap in disinformation regulation

Pranav Manjesh Bidare, Center for Internet and Society, Stanford University, United States
Stephan Dreyer, Media Law & Media Governance, Leibniz-Institute for Media Research/Hans-Bredow-Institut, Hamburg, Germany
Clara Iglesias Keller, Digital Disinformation Hub, Leibniz-Institute for Media Research/Hans-Bredow-Institut, Hamburg, Germany, c.keller@leibniz-hbi.de

PUBLISHED ON: 16 May 2022

This opinion piece is part of a miniseries advancing key reflections in disinformation governance. In bridging the gap between social science findings and policy proposals, the texts address fundamental questions to foster future research agendas: (i) how should underlying conceptions, most notably notions of “truth”, support the increasing weight that states and companies dedicate to curbing disinformation? (ii) how can such an intricate empirical field inform potential governance solutions? and (iii) can these conceptual and empirical challenges profit from institutional innovation, as a way to reconfigure traditional power dynamics? This article discusses empirical evidence for governing disinformation. The other pieces focus on conceptual challenges and on the potential of new institutions for disinformation governance.

The expansion of digital platforms has contributed to numerous transformations in (social) communication. Flows of information and attention have become more complex, as private actors opaquely engage in automated content curation. Among the many concerns regarding these dynamics is their influence on the distribution of disinformation – understood here as information that is false or distorted, purposefully created and distributed to cause harm.

In recent years, we have witnessed unprecedented growth in the literature on what has been referred to as an “information disorder”. Multidisciplinary research has sought to understand the actors, causes, and effects of digital disinformation and its surrounding phenomena, as well as the effectiveness of countermeasures. In a similar vein, digital disinformation has inspired regulatory initiatives in various national contexts. These initiatives are manifold: some criminalise speech, others limit data usage, and still others propose broader regulatory frameworks for digital platforms’ business models.

Many of these strategies rely on disputed empirical findings about how disinformation mechanisms operate and what effects they have – or on no empirical findings at all. This gap between scientific evidence and policy calls the legitimacy of these regulatory initiatives into question: adopting measures whose necessity is contested, or not backed by evidence, undermines their justification. The mismatch also threatens the proposed policies’ potential to meet the ends that inspired them.

The evidence: what we do (and do not) know

The social media era has provided new channels and means for distributing disinformation, and therefore more opportunities for gathering empirical evidence. Aspects that have received considerable scholarly attention include how effectively disinformation is perceived as credible, as well as its (varying) influence on behaviour. These and other assessments of disinformation’s concrete repercussions (i.e. how people receive and distribute information) are further complicated by indications that individuals often spread disinformation not because they believe it, but as a way to express their identity and reach audiences.

At the same time, a lot remains up for debate. First, empirical evidence is still centred on experiences in the Global North, and it is often reductively transferred to different contexts. Second, the jury is still out on a number of questions, such as the actual relevance of political bots and the proper methodology for identifying them, or the degree to which disinformation affects electoral outcomes: current evidence suggests that it does not change voters’ stances, but can strengthen existing convictions where the narrative fits. More investigation would also be welcome on specific questions, like the diversity of actors on the supply side of disinformation (its “ecosystem”) and what this means for building statutory and non-statutory responses.

Against this intricate empirical background, regulators around the world find themselves under pressure to address disinformation effectively. The distribution of false information through digital means is a pervasive societal phenomenon; it stands among other abuses of freedom of expression, such as hate speech or defamation, that democracies need to cope with. It continues to pose severe risks to society and democracy, appearing under different guises that include electoral propaganda, war propaganda, and false public health information – which often fuels scepticism about public health initiatives, as in anti-vaccine and anti-mask campaigns, or even denial that a threat to public health exists.

It would be intuitive to assume that this growing body of research will provide a consistent empirical basis for this complex task. However, proposing evidence-based policy for disinformation has proved challenging.

The policy: motivations and evidence-based regulation

Policy initiatives often rely on contested – or plainly debunked – assumptions about how disinformation works. This is the case, for instance, with mechanisms that impose individual criminal liability for distributing disinformation (as in Ethiopia), even though elements like intention are often conflated with the volatile effects of disinformation on individual behaviour. Similarly, some strategies restrict the disputed dynamics of “inauthentic behaviour” (proposed, for instance, in Brazil).

We can speculate about a few reasons for this mismatch between evidence and policy. The information disorder is inseparable from social circumstances and the power clashes that shape political order, and finding a legitimate role for statutory regulation is indeed challenging. We can also read the mismatch as a sign that political pressure to regulate disinformation may run ahead of empirical evidence, or even that policy making in this field is often driven by crowd-pleasing motivations.

In any case, this is a problem, because there is a legitimate expectation that policy proposals will establish at least some link between empirical evidence, motivations, mechanisms, and outcomes. By choosing strategies that address popular but controversial aspects of disinformation, regulators risk not only failing to achieve their goals, but also overly restricting fundamental rights, especially the right to freedom of expression. If we do not have a reliable methodology to distinguish bots from real profiles, how can we oblige platforms to flag, or even delete, these accounts without threatening legitimate speech? Similarly, if we target individuals with criminal liability, how will we mitigate possible chilling effects? Granular prosecution of individual online behaviour is unlikely to produce even minimally relevant effects on structured disinformation campaigns.

Bridging evidence and policy

It takes two sides to bridge the gap between research and policy on disinformation. First, disinformation scholars and researchers must actively expand research to shed more light on the still-obscure parts of the disinformation ecosystem, allowing for more nuanced policy and civil society solutions.

Regulators, on the other hand, have more to overcome. They must acknowledge that part of the solution to the “information disorder” lies beyond policy making, and put the role of legislation on disinformation into perspective. This means realising that statutory legislation is not the right instrument for policing speech by imposing one version of the truth, much less under criminal sanctions. Digital disinformation policy certainly profits, however, from a broader conversation about platforms’ business models and limits on personal data usage.

The focus of regulatory approaches must shift from defining disinformation (and from tailored strategies that are likely to be over-restrictive of rights and possibly ineffective) towards engaging with and understanding nuanced empirical findings. These findings show that disinformation is fundamentally a social, human-centric problem, whose relevance depends on technological and political circumstances that transcend debates about what is real and what is not.

Footnote

The miniseries is a product of discussions with a group of experts convened by Clara Iglesias Keller, Stephan Dreyer, Martin Fertmann and Keno Potthast at the Digital Disinformation Hub of the Leibniz-Institute for Media Research | Hans-Bredow-Institut on 24 February 2022. We thank all the participants for their input in the discussions reflected in this miniseries.