Truth, intention and harm: Conceptual challenges for disinformation-targeted governance

Naomi Appelman, Institute for Information Law, University of Amsterdam, Netherlands, n.appelman@uva.nl
Stephan Dreyer, Media Law & Media Governance, Leibniz-Institute for Media Research/Hans-Bredow-Institut, Hamburg, Germany
Pranav Manjesh Bidare, Center for Internet and Society, Stanford University, United States
Keno C. Potthast, Leibniz-Institute for Media Research/Hans-Bredow-Institut, Hamburg, Germany

PUBLISHED ON: 16 May 2022

This opinion piece is part of a miniseries advancing key reflections in disinformation governance. While bridging the gap between social science findings and policy proposals, the texts address some fundamental questions to foster future research agendas: (i) how should underlying conceptions, most notably notions of “truth”, support the increasing weight that states and companies dedicate to curbing disinformation? (ii) how can such an intricate empirical field inform potential governance solutions? and (iii) can these conceptual and empirical challenges profit from institutional innovation, as a way to reconfigure traditional power dynamics? This article discusses conceptual challenges. The other pieces focus on empirical evidence and the potential of new institutions for disinformation governance.

Political debates across the world call for ambitious legal and statutory regulation of disinformation. While these ambitions are likely to face several challenges in their realisation, key challenges appear at the very outset, emerging from the lack of a legally workable concept of disinformation. The criteria of current definitions of disinformation are only to a very limited extent, if at all, suitable as starting points for governance or regulatory measures. We need to come up with alternative criteria that enable us to navigate questions of (state and platform) power, freedom of expression, and the proportionality of countermeasures. This opinion piece shows why the current focus on truth/falsehood and intention bears both risks and uncertainties from a legal point of view, and suggests that we instead focus on the potential for harm.

The definitional issue

Most definitions of disinformation, which mainly stem from communication research and aim to delineate the phenomenon from other forms of information disorder, centre on two criteria: the falsehood of a statement, and the issuer’s intention to mislead recipients (Wardle & Derakhshan, 2017; Tandoc et al., 2018; HLEG, 2018). We argue that both criteria are unsuitable for determining disinformation for the purposes of uniform legal regulation.

Truth and falsehood

Truth itself is a contested concept. It is the subject of many different (philosophical) schools of thought and, without delving too deeply into that debate, it cannot be conceived of as an absolute. Rather, it feeds on subjectively and intersubjectively perceived facts, and reality should be considered the realm where common subjective perceptions align into a shared understanding of the world. Truth is therefore mainly a social construct, subject to a societal process of negotiation. Against this background, determining falsehood is a highly complex societal process that runs into three core challenges in a democratic society based on the rule of law and fundamental rights:

  • First of all, to prevent state power from dictating an “official truth”, the determination of truth must happen through society negotiating truths and falsehoods, as well as determining the rules for such negotiations. Truth-seeking has to happen within society as a form of deliberation through public discourse, instead of merely being a pursuit of the state and its authorities. The new Russian speech laws, for instance, serve as a contemporary example of the perils of state-authorised truth.
  • Second, it is often hard to prove the absolute falsehood of all doubtful statements. The realm of objectively false statements remains very small, even within a shared understanding of the world, while the bulk of popularly contested disinformation manifests as statements that are difficult, if not impossible, to objectively disprove. While the law provides procedures and systems to “find the truth”, it does not sufficiently aid in this effort: there is no useful gradation to measure small or large deviations from the truth, and legal truths remain non-scalable and are themselves mere social or procedural constructions.
  • And third, most of the content we talk about when we discuss disinformation is not illegal and cannot simply be removed. Importantly, freedom of expression protects not only truthful but also untruthful expression, or even lies. When mixed statements contain doubtful factual claims and opinions, or when contested factual statements are used as a way to express one’s opinion or worldview, the content might be “awful but lawful” and still protected by freedom of expression rights. Even if such content qualifies as disinformation, it cannot, from the perspective of freedom of expression, simply be restricted by statutory means only because it is false.

Intention to mislead

The second criterion that we are examining is “intent” as a defining characteristic of disinformation. Proving the intention to mislead requires proof both of the speaker’s awareness of the falsehood and of an intention to deceive or mislead others, or to create harm. Legally proving intention has always been cumbersome, and in many cases this evidence cannot be brought forward; for example, when a statement consists of partial truths (malinformation), when doubtful information is disseminated unknowingly (misinformation), or when the person making the statement knows that the content is false but either has no intention to deceive or the content itself lacks the potential to deceive.

As difficult as it already is to legally prove intent in the case of an initial publication, it is almost impossible to do so in the case of further dissemination, e.g. when a person knowingly shares satirical, ironic, or socio-critical content, or when someone spreads conspiracy theories that they believe in. Crucially, people often share doubtful statements because they identify with their content or message; sharing (doubtful) statements can itself be seen as a form of expression. Through such subsequent dissemination without intent to harm, disinformation turns into misinformation. Moreover, the deceptive potential of the harmful type of content we are examining is independent of any intention, making this criterion seemingly irrelevant to effectively addressing the spread of this content. Overall, holding on too tightly to the criterion of ‘intent’ in disinformation policies could either result in the regulation of opinions and sentiments or prove wholly impotent to address the actual spread of the harmful content.

Alternative criterion from a legal perspective: potential for harm

As we have shown above, the criteria that underpin definitions of disinformation run into severe hurdles when it comes to operationalising them for governance and regulation aimed at tackling its spread. A far more useful question would be whether, and in which constellations, the content in question creates harm. The legal question, therefore, would be whether the content is endangering or violating legal interests. Such risks might in some cases stem from the content of a statement itself, although often they stem from its wide reach and/or visibility, and sometimes from its potential to mislead due to its look, feel, or source.

Where content unjustifiably interferes with protected rights and positions, statutory law may indeed provide legal countermeasures. The greater the potential damage to a legal interest, the stronger the legal countermeasures might be.

If we attempt to use the potential negative effects of a doubtful statement as a criterion, the real problem with regulating disinformation by law becomes clear: the concrete harmful effects are very difficult, if not impossible, to establish. Clearly, we need much more research on the actual effects of doubtful information. Currently, the evidence of measurable and tangible effects of debatable content is rather limited (see the op-ed “Between evidence and policy: bridging the gap in disinformation regulation”). Potentially, doubtful information affects a variety of legal interests, inter alia, individual autonomy/freedom of choice, unimpaired individual political opinion formation, freedom of information, or personality rights, but also supra-individual legal interests such as diversity of opinions, integrity of elections, and trust in democratic institutions. This plurality of potentially affected rights and interests understandably increases the complexity of any operationalisation of potential harm.

By replacing truth/falsehood and intention with harm as the legally relevant criterion for governing doubted statements, as suggested here, we believe governance will be better suited to tackle disinformation: while truth is something that should primarily be constructed by society, harm is something that law has always taken as a starting point for (re-)balancing interests. Our take is deeply rooted in libertarianism and classical liberalism. While harm is, of course, also constructed, this construction is done by law, taking into account the (human) rights granted to all parties involved. Law is able to balance conflicting rights and to find an equilibrium between the two or more affected rights, ideally enabling all of them to persist. Truth/falsehood, in contrast, is basically an either-or question, whereas harm offers a continuum of transgressions of personal rights. The formula here is: the more infringement potential a statement shows, the more the affected rights and interests weigh in the balancing. While harm, and even more so potential harm, is a rather uncertain concept, it can be more helpful when we talk about a "spectrum of right decisions" in dealing with disinformation.

Requirements for potential countermeasures

Building on these points, we can identify three levels of harm and the appropriate regulatory machinery to address these risks:

  • When a statement results in an imminent and severe danger to legally protected rights, traditional statutory law can provide general regulatory measures that are both prompt and proportionate, e.g. criminal law or obligations to delete content, although most content commonly understood to be disinformation will likely not fall into this category.
  • In cases where a statement shows reasonable potential to infringe on protected rights, a balancing test (for instance, freedom of expression vs. personal rights) must assess whether legal measures can be applied. This has to happen on a case-by-case basis, potentially in court or court-like procedures.
  • Finally, and likely applicable to the majority of doubtful claims: the uncertain potential of a statement to infringe individual rights renders state measures likely disproportionate. These cases must be addressed through two approaches: first, supporting trust in journalism, specifically to strengthen communicators who have both a vested interest in and an obligation to seek truth; and second, making doubts visible and thus supporting the societal process of negotiating truth (e.g. through reporting tools and flagging mechanisms, or by tagging or labelling content with the results of a fact-checking exercise).

Outlook: The issue of platform power

Governing content at this last level, though, falls mainly within the platforms’ own sphere of action. Acting on such statements – which in practice are by far the most common type – shows how platforms’ assessments and decisions also shift discursive power away from society and towards private actors. This issue of platform power arises with regard to, inter alia, the definition of and decision on falsehood, the definition and potential prioritisation of journalistic content, the selection of fact-checkers and (sometimes) their internal organisation and procedures, the design and visibility of flags, tags, or warnings, and the decision on countermeasures such as deprioritisation, demonetisation, or deletion of a statement. There is a strong need to deliberate on minimum standards for human rights considerations in platforms’ decisions and procedures, e.g. regarding uniform processes, the possibility for affected persons to object to a decision, and transparent and auditable processes. While all these issues are commonplace in platform governance discussions, deciding on the truth or falsehood of statements involves a specific risk: meddling with the societal negotiation of truth, with far-reaching impact on society and culture. As a result, establishing and shaping bodies and procedures that can effectively regulate disinformation also requires due consideration and care (see the op-ed “Hybrid institutions for disinformation governance: Between imaginative and imaginary”).

Footnote

The miniseries is a product of discussions with a group of experts convened by Clara Iglesias Keller, Stephan Dreyer, Martin Fertmann and Keno Potthast at the Digital Disinformation Hub of the Leibniz-Institute for Media Research | Hans-Bredow-Institut on 24 February 2022. We thank all the participants for their input in the discussions reflected in this miniseries.

References

European Commission, Directorate-General for Communications Networks, Content and Technology. (2018). A multi-dimensional approach to disinformation: Report of the independent High Level Group on fake news and online disinformation. Publications Office of the European Union. https://data.europa.eu/doi/10.2759/739290

Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “Fake News”: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143

Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making (Report DGI(2017)09). Council of Europe.
