The perils of legally defining disinformation

EU policy considers disinformation to be harmful content, rather than illegal content. However, EU member states have recently been making disinformation illegal. This article discusses the definitions that form the basis of EU disinformation policy, and analyses national legislation in EU member states applicable to these definitions, in light of freedom of expression and the proposed Digital Services Act. The article discusses the perils of defining disinformation in EU legislation, and of including provisions requiring online platforms to remove illegal content.


Introduction
European Union policy on disinformation has been premised upon the notion that disinformation is not per se illegal, but it is harmful, and the European Commission has made a distinction between illegal content (such as child sexual abuse material or hate speech) and harmful content (such as disinformation) (European Commission, 2020a; 2020b). As the EU's independent High-Level Expert Group on fake news and online disinformation (HLEG) emphasised, disinformation is 'not necessarily illegal', but it can 'nonetheless be harmful for citizens and society at large', and falls 'outside already illegal forms of speech', such as defamation, hate speech, and incitement to violence (HLEG, 2018, p. 10). Indeed, the EU Code of Practice on Disinformation recognises that the notion of disinformation is 'without prejudice to binding legal obligations' and emphasises the 'delicate balance which any efforts to limit the spread and impact of otherwise lawful content must strike' (European Commission, 2018a).
However, there is a growing realisation that some EU member states may in fact make disinformation illegal, and have been increasingly doing so during the Covid-19 pandemic (Commissioner for Human Rights, 2020). Indeed, the European Commission recently admitted that some EU member states 'already had provisions, including of criminal nature, related to disinformation and one Member State introduced a specific new criminal offence for spreading of disinformation during the state of emergency' (European Commission, 2020b, p. 11). The Commission only named Hungary. However, based on the results of a survey of legislation in 27 EU member states (Betzel et al., 2021), conducted by the European Regulators Group for Audiovisual Media (ERGA),¹ this article demonstrates that many other EU member states have national provisions that apply to the notion of disinformation, including criminal legislation. This includes, for example, Lithuania's Law on the Provision of Information to the Public, which explicitly prohibits disseminating disinformation (Art. 19(2)); Malta's Criminal Code, which prohibits spreading false news (Art. 82); and France's Freedom of the Press Law, which prohibits publication of false news (Art. 27).
Notably, there is a lack of in-depth scholarship in EU legal studies focusing on the current definitions of disinformation, and on whether national legislation in EU member states may actually apply to these definitions (Betzel et al., 2021; Craufurd Smith, 2020; Nuñez, 2020). As such, the purpose of this article is to discuss the national legislation applicable to the definitions of disinformation and, crucially, to examine the implications for European policy, in particular the EU's proposed Digital Services Act (DSA, 2020). The article discusses the perils that may be involved in defining disinformation in EU legislation, and in including provisions requiring platforms to remove 'illegal content' (DSA, Art. 2(g)), which may end up being applicable to overbroad laws at national level criminalising false news and false information. This is because, rather than being merely harmful content, disinformation may in fact be illegal content at national level.

1. Ronan Ó Fathaigh and Natali Helberger were co-authors of Betzel et al., Notions of Disinformation and Related Concepts (ERGA, 2021). The authors are very grateful to ERGA for the results of the survey conducted.
The article is structured as follows: Section 1 begins with a discussion of the most prominent definitions of disinformation that are the basis of EU disinformation policy. Section 2 then describes the ERGA survey, and examines EU member states' legislation that is applicable to disinformation, particularly a plethora of false news and false information laws that are in operation. Section 3 then assesses the implications for the right to freedom of expression, the free flow of information in Europe's digital single market, and the EU's planned Digital Services Act.

Policy definitions of disinformation
The increased regulatory and societal attention towards the impact of disinformation on democratic society has generated an enormous amount of research on the many types of disinformation, the data-driven mechanisms that underpin its distribution, its impact on democracy, and how to tackle its spread (Möller et al., 2020, trans.; Bayer et al., 2019). However, as stated, there has been surprisingly little attention paid in legal scholarship to the specific legal definitions of disinformation, and how these definitions relate to current legal frameworks in Europe. In contrast, in disciplines outside of legal scholarship, such as journalism and media studies, and science and technology studies, there is a rich literature on the definitional problems associated with the notions of disinformation and false information (e.g., Andersen and Søe, 2020; Epstein, 2020). Notably, scholars such as Van Hoboken et al. (2019) have begun to unpack the definitions of disinformation, building upon the work of the likes of Tandoc et al. (2018), who similarly examined the many scholarly definitions of the related notion of 'fake news'. Building on this work, this section analyses influential definitions that form the basis of EU disinformation policy, in order to relate these in the subsequent sections to relevant national provisions and, finally, draw out the implications for European policy.
When the definition of disinformation is explicitly discussed, the general consensus seems to be that there is no clear, uniform or legal definition ("Joint Declaration on Freedom of Expression", 2020; Tambini, 2020; Van Hoboken et al., 2019; Nyakas et al., 2018). However, within the European context there does seem to be a convergence towards three influential definitions that have come a long way in harmonising and standardising the academic and policy debate on disinformation.
These definitions are from Wardle and Derakhshan (2017), the European Commission (EC) (2018a), and the High-Level Expert Group on fake news and online disinformation (Directorate-General for Communications Networks, Content and Technology, 2018). The EC definition is of particular importance, as it is the current policy definition, and is implemented in the EU Code of Practice on Disinformation (European Commission, 2018b). Although other definitions will be touched upon for reference, these three will form the core of the analysis. In an influential report for the Council of Europe, Wardle and Derakhshan analysed disinformation in the wider context of information disorder and, using the dimensions of harm and falseness, contrasted it with misinformation and malinformation: 'Mis-information is when false information is shared, but no harm is meant. Dis-information is when false information is knowingly shared to cause harm. Mal-information is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere' (Wardle & Derakhshan, 2017, p. 20).
Subsequently, in their report for the EC, the HLEG defined disinformation as 'false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit' (HLEG, 2018, p. 10). The EC, in turn, defined disinformation as 'verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm' (European Commission, 2018a, s. 2.1). Compared with Wardle and Derakhshan's focus on 'false information', both later definitions broaden the falseness element, with the EC adding 'misleading' information (2018a, s. 2.1) and the HLEG similarly expanding the definition to include 'inaccurate' information (2018, p. 10). These differences in formulation could have an even more extensive effect on freedom of expression were these definitions to function as legal terms as opposed to policy terms (Radu, 2020, p. 3; Meyer et al., 2020, p. 16). The EC definition thus frames disinformation expansively, but still separates disinformation, and policy dedicated to combating it, from already regulated expression. The EC has repeatedly emphasised this distinction, stating for example that its policy on disinformation is 'without prejudice to the applicable legal rules at Union or national level relating to the issues discussed, including disinformation containing illegal content' (2018a, s. 2.1). As such, the leading policy definition of disinformation in the EU seems to conceptualise it as outside of current categories of illegal content. The distinction EU policy seems to make between, on the one hand, disinformation as harmful content and, on the other, already regulated forms of illegal content, does limit the scope of the concept. However, it also risks missing ways in which enhanced enforcement of already existing legal norms could contribute to limiting the spread of disinformation.
Analysing these three influential definitions of disinformation using the five different elements of factual nature, harm, intent, profit and dissemination primarily revealed that all three definitions are exceedingly broad, insufficiently specified and, as such, not fit to function as a legal category; they should, rather, continue to be considered as indicating a policy domain. This also leaves ample room for EU member states to interpret these terms widely.

National legislation applicable to disinformation
This section describes and discusses the range of legislation at EU member-state level which is applicable to the definitions of disinformation. The analysis was based on legal desk research, and also on the responses to a survey sent in August 2020 by ERGA to national audio-visual regulatory authorities (NRAs) in 27 EU member states (Betzel et al., 2021).

Finally, there are provisions specifically targeting false information during elections, which also align with the definitions of disinformation. The most notable is France's 2018 Law on the fight against the manipulation of information, which provides that, during the three months prior to an election, a judge may order any proportionate and necessary measures to stop the dissemination of 'any allegation or charge of an inaccurate or misleading fact likely to alter the sincerity of the forthcoming vote, which is deliberately, artificially or in an automated and massive manner, disseminated through an online public communication service' (Law on the fight against the manipulation of information, Art. 1).
As such, the provision requires intention (i.e., 'deliberately'), false information (i.e., an 'inaccurate or misleading fact'), and potential harm (i.e., a 'fact likely to alter the sincerity of the forthcoming vote'), in addition to the information being disseminated artificially or in an automated and massive manner. Thus, the provision is limited to potential harm to an election, and does not extend to harms such as public order or public harm. The definition is further limited by the required method of dissemination, namely artificial or automated dissemination on a massive scale. Further, in Austria, the Criminal Code makes it an offence to disseminate 'false news' during an election that is likely to influence voters (Criminal Code (Austria), Art. 264).
Based on the foregoing, we can draw a number of conclusions on the definitions applicable to disinformation. First, we can identify three key elements that emerge from the various definitions applicable to disinformation at member-state level, namely (a) false information, (b) disseminated with a specific intention (malicious or bad faith), and (c) causing certain harms. In addition, while most of the definitions contain some core common elements, they also differ widely in detail, such as requiring a behavioural change or the use of a certain method of dissemination. In particular, there are varying specific harms mentioned, including economic harm, public harm, personal harm, personal dignity, harm to elections, and harm to public health measures (e.g., Covid-19 measures). Further, many definitions focus on the act of communication to the public, rather than being solely limited to the creator of the content. The method of dissemination can also be a factor, as in France, where the law explicitly incorporates the amplification of false information by requiring that the false information be disseminated artificially or in an automated and massive manner.
We also identify the type of national legislation applicable to disinformation, which is in many instances of a criminal nature. Thus, one of our main findings is that the notion of disinformation, rather than merely concerning harmful content, is in fact illegal content in a number of EU member states, and most notably, subject to criminal law and sanctions. Moreover, we note that national regulatory approaches differ considerably in terms of scope, addressee, and legal sanctions.
These differences, and the push at the level of member states to declare disinformation illegal, have potentially far-reaching implications for fundamental rights.

Implications for fundamental rights and EU policy
This section examines the implications of the findings from the comparative analysis of national legislation applicable to disinformation. First, we discuss the compatibility of laws applicable to disinformation with the right to freedom of expression, as guaranteed under international and European human rights law. Second, we consider how substantial divergences in national legislation may also raise obstacles to the free flow of information and the freedom to provide services, and constitute serious obstacles for the EU harmonisation project and internal market (European Parliament, 2020, § 15). Building on this, we assess what our findings mean for the EU going forward, particularly in relation to the Digital Services Act, which seeks to impose new societal responsibilities on platforms in the area of content moderation, and as such can also inspire (legal) debates outside Europe. In light of the foregoing, there are considerable issues with the compatibility of national legal provisions applicable to disinformation with freedom of expression.

The Digital Services Act and disinformation
The different national approaches to regulating disinformation trigger not only concerns with respect to fundamental rights, and freedom of expression in particular, but also in relation to the freedoms under the EU Treaties and the objectives of the internal market. According to Art. 26(2) of the Treaty on the Functioning of the European Union, the internal market comprises 'an area without internal frontiers in which the free movement of goods, persons, services and capital is ensured'. The national approaches also differ in what exactly is subject to regulatory intervention: the mere distribution of disinformation, or also the production thereof (e.g., Lithuania). It is not difficult to see how the disparities in national approaches can create considerable legal uncertainty and obstacles for any provider of digital content services in Europe. Indeed, such fragmentation and uncertainty is not that uncommon when it comes to internet policy. This is certainly true for digital platforms but also, for example, for the websites of national news organisations, which will be forced to familiarise themselves with a growing array of national regulations of online media content. Given the breadth and relative ambiguity of some of the definitions, there is a real danger that, say, a YouTube news video that critically discusses the way European member states have so far approached the Covid-19 crisis could be considered 'news that can potentially harm civil order or the public's trust towards the State or its authorities or cause fear or worry among the public or harm in any way the civil peace and order' in the sense of Article 50 of the Criminal Code of Cyprus, or to involve 'reckless disregard for its truth or falsity at times of special order' in the sense of Section 337(2) of the Hungarian Criminal Code.
The different national approaches to regulating disinformation can be seen as part of a broader trend at the level of member states to adopt regulations that deal with the impact of social media platforms on national media systems, democratic processes and the realisation of public values (Schulz, Potthast, & Helberger, 2021, trans.). In response to the emerging divergent national approaches, the European Commission has proposed the Digital Services Act. Crucially, in the DSA, the concept of 'illegal content' is central. Article 2 of the DSA defines 'illegal content' as 'any information, which, in itself or by its reference to an activity, including the sale of products or provision of services is not in compliance with Union law or the law of a Member State, irrespective of the precise subject matter or nature of that law' (DSA, 2020, Art. 2(g)). This definition captures all the instances of disinformation as defined in the national laws described in the previous section. By referring to the national regulations, the DSA essentially incorporates the divergent national approaches that do conceptualise disinformation as 'illegal content' firmly into the system of the Act. From the perspective of the realisation of the internal market, the DSA can therefore be expected to further accentuate national divergences rather than harmonise the national approaches to disinformation.
In addition, there are a number of further provisions in the DSA which are potentially applicable to disinformation. First, under Article 26 DSA, so-called 'Very Large Online Platforms' (VLOPs) will be required to carry out risk assessments of 'significant systemic risks' stemming from the functioning of their services, including the 'intentional manipulation of their service', by 'means of inauthentic use or automated exploitation of the service, with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, or actual or foreseeable effects related to electoral processes and public security' (DSA, 2020, Art. 26).
A second provision of particular relevance is Article 8 DSA, which will create an explicit legal mechanism for national judicial and administrative authorities to issue orders for online platforms to 'act against' specific user content that is deemed 'illegal content'. Platforms are required to inform the national authority 'without undue delay' of the effect given to the order, and the action taken (DSA, Art. 8(2)). Crucially, as discussed above, illegal content is given quite a broad definition under the DSA. As such, all of the national provisions at EU member-state level on disinformation, false news and false information are applicable under Article 8 DSA. Thus, Article 8 DSA will create an explicit EU law mechanism to facilitate national judicial and administrative authorities ordering platforms to remove content under national provisions applicable to disinformation. This has profound implications for freedom of expression online in Europe. This is because, where individuals are prosecuted under such national laws for their online expression, or an administrative authority finds online content violates national law, Article 8 DSA will now make it easier for national authorities to go a step further, with platforms being instrumentalised to allow national authorities to also have such content removed from the online environment. Indeed, it may also indirectly incentivise platforms to introduce harsher content moderation in response to such instrumentalisation.
Further, Article 14 puts in place a notice-and-action mechanism for illegal content, with platforms required to put in place mechanisms to 'allow any individual or entity to notify them of the presence on their service of specific items of information that the individual or entity considers to be illegal content' (DSA, 2020, Art. 14(1)), and platforms must process any notices that they receive, and take their decisions, 'in a timely, diligent and objective manner' (DSA, 2020, Art. 14(6)). Thus, because of the broad definition of illegal content, and because some member states criminalise the distribution of disinformation, this notice-and-action mechanism will also apply to disinformation. This is another instance where the DSA essentially takes national rules that ban disinformation and gives them effect vis-à-vis platforms.

Conclusion
This article has discussed the influential definitions that form the basis of EU disinformation policy, and subsequently analysed the national legislation in EU member states that is applicable to these definitions. An open question is to what extent the DSA will be able to take away obstacles to the free flow of information that result from the disparate national approaches to tackling disinformation.
Furthermore, the considerable freedom that the DSA leaves VLOPs, at least, to decide how to deal with disinformation as a systemic risk creates new risks from the perspective of freedom of expression. It also lays bare the potential frictions with national approaches to the regulation of disinformation. As became apparent from the national comparison, most member states that have regulated disinformation treat it as a matter of national security, peace, public order, and the like, and many have adopted regulations in the area of criminal law. Crucially, the competencies of the European Commission in this area are very limited. The DSA appears to create a unified framework for platforms to deal with unlawful disinformation, but upon closer scrutiny it does little to remove the underlying disparities in the national approaches, or to provide a unified framework for how platforms should deal with those disparities. This will be a problem particularly for smaller platforms that offer their services across borders but cannot afford a team of lawyers that speaks all member states' languages.
What, then, are the possibilities moving forward? One option is for the European Commission to clarify that the DSA's definition of illegal content also includes content that has been identified as unlawful under national disinformation laws. Doing so, however, would trigger a difficult follow-up question about the extent to which the DSA amounts to maximum harmonisation, and whether national rules would need to be adapted.