The perils of legally defining disinformation

Ronan Ó Fathaigh, Institute for Information Law, University of Amsterdam, Netherlands, R.F.Fahy@uva.nl
Natali Helberger, Institute for Information Law (IViR), University of Amsterdam, Netherlands
Naomi Appelman, Institute for Information Law, University of Amsterdam, Netherlands

PUBLISHED ON: 04 Nov 2021 DOI: 10.14763/2021.4.1584

Abstract

EU policy considers disinformation to be harmful content, rather than illegal content. However, EU member states have recently been making disinformation illegal. This article discusses the definitions that form the basis of EU disinformation policy, and analyses national legislation in EU member states applicable to the definitions of disinformation, in light of freedom of expression and the proposed Digital Services Act. The article discusses the perils of defining disinformation in EU legislation, and of including provisions that require online platforms to remove illegal content, which may end up applying to overbroad national laws criminalising false news and false information.
Citation & publishing information
Received: June 8, 2021 Reviewed: August 11, 2021 Published: November 4, 2021
Licence: Creative Commons Attribution 3.0 Germany
Funding: For N. Helberger, the research was in part funded by the European Research Council (grant no. 638514), and was conducted under the PERSONEWS ERC-STG project. Further, R. Ó Fathaigh and N. Helberger received funding from the Dutch Media Authority (Commissariaat voor de Media) for their contribution to Betzel et al., Notions of Disinformation and Related Concepts (ERGA, 2021).
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Disinformation, EU law, EU legislation, Freedom of expression, Platforms
Citation: Ó Fathaigh, R. & Helberger, N. & Appelman, N. (2021). The perils of legally defining disinformation. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1584

Introduction

European Union policy on disinformation has been premised upon the notion that disinformation is not per se illegal, but is harmful, and the European Commission has made a distinction between illegal content (such as child sexual abuse material or hate speech) and harmful content (such as disinformation) (European Commission, 2020a; 2020b). As the EU’s independent High-Level Expert Group on fake news and online disinformation (HLEG) emphasised, disinformation is ‘not necessarily illegal’, but it can ‘nonetheless be harmful for citizens and society at large’, and falls ‘outside already illegal forms of speech’, such as defamation, hate speech, and incitement to violence (HLEG, 2018, p. 10). Indeed, the EU Code of Practice on disinformation recognises that the notion of disinformation is ‘without prejudice to binding legal obligations’ and emphasises the ‘delicate balance which any efforts to limit the spread and impact of otherwise lawful content must strike’ (European Commission, 2018b).

However, there is a growing realisation that some EU member states may in fact make disinformation illegal, and have increasingly been doing so during the Covid-19 pandemic (Commissioner for Human Rights, 2020). Indeed, the European Commission recently admitted that some EU member states ‘already had provisions, including of criminal nature, related to disinformation and one Member State introduced a specific new criminal offence for spreading of disinformation during the state of emergency’ (European Commission, 2020b, p. 11). The Commission only named Hungary. However, based on the results of a survey of legislation in 27 EU member states (Betzel et al., 2021), conducted by the European Regulators Group for Audiovisual Media Services (ERGA),1 this article demonstrates that many other EU member states have national provisions that apply to the notion of disinformation, including criminal legislation. These include, for example, Lithuania’s Law on the Provision of Information to the Public, which explicitly prohibits disseminating disinformation (Art. 19(2)); Malta’s Criminal Code, which prohibits spreading false news (Art. 82); and France’s Freedom of the Press Law, which prohibits the publication of false news (Art. 27).

Notably, there is a lack of in-depth scholarship in EU legal studies focusing on the current definitions of disinformation, and on whether national legislation in EU member states may actually apply to these definitions (Betzel et al., 2021; Craufurd Smith, 2019; Nuñez, 2020). As such, the purpose of this article is to discuss the national legislation applicable to the definitions of disinformation and, crucially, to examine the implications for European policy, in particular the EU’s proposed Digital Services Act (DSA, 2020). The article discusses the perils that may be involved in defining disinformation in EU legislation, and in including provisions requiring platforms to remove ‘illegal content’ (DSA, Art. 2(g)), which may end up applying to overbroad national laws criminalising false news and false information. This is because, rather than being merely harmful content, disinformation may in fact be illegal content at national level.

The article is structured as follows: Section 1 begins with a discussion of the most prominent definitions of disinformation that are the basis of EU disinformation policy. Section 2 then describes the ERGA survey, and examines EU member states’ legislation that is applicable to disinformation, particularly a plethora of false news and false information laws that are in operation. Section 3 then assesses the implications for the right to freedom of expression, the free flow of information in Europe’s digital single market, and the EU’s planned Digital Services Act.

1. Policy definitions of disinformation

The increased regulatory and societal attention towards the impact of disinformation on democratic society has generated an enormous amount of research on the many types of disinformation, the data-driven mechanisms that underpin its distribution, its impact on democracy, and how to tackle its spread (Möller et al., 2020, trans.; Bayer et al., 2019). However, as stated, surprisingly little attention has been paid in legal scholarship to the specific legal definitions of disinformation, and to how these definitions relate to current legal frameworks in Europe. In contrast, in disciplines outside legal scholarship, such as journalism and media studies, and science and technology studies, there is a rich literature on the definitional problems associated with the notions of disinformation and false information (e.g., Andersen and Søe, 2020; Epstein, 2020). Notably, scholars such as Van Hoboken et al. (2019) have begun to unpack the definitions of disinformation, building upon the work of the likes of Tandoc et al. (2018), who similarly examined the many scholarly definitions of the related notion of ‘fake news’. Building on this work, this section analyses the influential definitions that form the basis of EU disinformation policy, in order to relate them in the subsequent sections to relevant national provisions and, finally, to draw out the implications for European policy.

When the definition of disinformation is explicitly discussed, the general consensus seems to be that there is no clear, uniform or legal definition (“Joint Declaration on Freedom of Expression”, 2020; Tambini, 2020; Van Hoboken et al., 2019; Nyakas et al., 2018). However, within the European context there does seem to be a convergence towards three influential definitions that have come a long way in harmonising and standardising the academic and policy debate on disinformation. These definitions are from Wardle and Derakhshan (2017), the European Commission (EC) (2018a) and the High-Level Expert Group on fake news and online disinformation (Directorate-General for Communications Networks, Content and Technology, 2018). The EC definition is of particular importance, as it is the current policy definition, and is implemented in the EU Code of Practice on Disinformation (European Commission, 2018b). Although other definitions will be touched upon for reference, these three form the core of the analysis. In an influential report for the Council of Europe, Wardle and Derakhshan analysed disinformation in the wider context of information disorder and, using the dimensions of harm and falseness, contrasted it with misinformation and malinformation: ‘Mis-information is when false information is shared, but no harm is meant. Dis-information is when false information is knowingly shared to cause harm. Mal-information is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere’ (Wardle & Derakhshan, 2017, p. 20). Subsequently, in its report for the EC, the HLEG defined disinformation as ‘false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit’ (HLEG, 2018, p. 10). Finally, the EC, in its 2018 Communication on tackling online disinformation, considers disinformation to be ‘verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm’, with public harm understood as ‘threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens' health, the environment or security’ (2018a, s. 2.1).

In analysing these definitions, we build upon the work of Van Hoboken et al., who employ four elements that, to a certain extent, are common to the different definitions: the veracity or misleading nature of the information, social harm, the intention of the actor, and economic gain (2019, p. 17). Similarly, Tandoc et al. map out the different definitions of the much-maligned term ‘fake news’ along the dimensions of facticity and intent (2018, p. 138), and Möller et al. (2020, trans.) subsequently employ these dimensions to map out the different types of disinformation (p. 11). The ERGA report follows the mapping of the different definitions of disinformation made in Van Hoboken et al., but rightly notes that Bayer et al. (2019) add two further elements: that the information relates to a matter of public interest and, crucially, that the information is strategically disseminated (p. 18). Both elements are glaringly missing from the three influential definitions central to this paper. For this reason, whether the information relates to a matter of public interest will not function as one of the criteria for analysis; strategic dissemination, by contrast, is retained precisely because its absence from the three definitions is itself revealing. Building on these different classifications, and avoiding unnecessary overlap, this section analyses the three prominent definitions according to (i) the factual or misleading nature of the information, (ii) harm, (iii) the intention of the actor, (iv) economic gain, and finally, (v) strategic dissemination.

Even though the three definitions have made substantial steps towards standardising what disinformation is, and the Code of Practice on Disinformation claims that the EC and HLEG definitions of disinformation fully overlap (European Commission, 2018b, p. 1), we will see that their scope differs considerably on closer inspection. Crucially, all three definitions share some articulation of the first three elements, although there are important differences in nuance. With regard to the factual nature of the information, Wardle and Derakhshan refer to ‘information that is false’ (2017, p. 20), whereas the EC specifies this by referring to ‘verifiably false’ information but, simultaneously, expands the definition by including ‘misleading information’ (2018a, s. 2.1), and the HLEG similarly expands the definition to include ‘inaccurate’ information (2018, p. 10). These differences in formulation can have far-reaching consequences for freedom of expression where the definitions function as legal terms rather than policy terms (Radu, 2020, p. 3; Meyer et al., 2020, p. 16). The large variety of types of ‘false’ information and, consequently, the considerable difference in scope of the three definitions is underlined by the typologies that Möller et al. (2020, p. 13) made of disinformation, and Tandoc et al. (2018, p. 148) of fake news, along a continuum of high to low ‘facticity’. Besides these discrepancies in scope, the underlying assumption of all three definitions that the ‘veracity’ of information can be established at all is highly contested in several fields, such as communication science (Farkas & Schou, 2018; Waisbord, 2018), casting doubt on the usefulness of these definitions.

Similarly, all three definitions explicitly refer to the harm created by the disinformation. Wardle and Derakhshan cast their net the widest by including harm to ‘a person, social group or country’ (2017, p. 20), while the EC and the HLEG both refer only to ‘public harm’ (2018a, s. 2.1; 2018, p. 10). This public harm is, subsequently, defined by the HLEG as ‘threats to democratic political processes and values, which can specifically target a variety of sectors, such as health, science, education, finance and more’ (2018, p. 10). The EC aligns itself with this definition, adding only ‘policymaking processes’ (2018a, s. 2.1). Notably, under all three definitions the harm need not actually have occurred for the information to qualify as disinformation. However, there are two striking differences between Wardle and Derakhshan and the HLEG on the one hand, and the EC on the other, in how they include the element of harm. The EC ends its definition of disinformation with ‘and may cause public harm’, which implies that whether the disinformation can, or actually has, caused public harm is not material to its qualification as disinformation. This contrasts with Wardle and Derakhshan and the HLEG, who consider harm one of the material conditions for disinformation. Interestingly, this immediately connects to the third element: intent. In this regard, the definitions of both Wardle and Derakhshan and the HLEG require the intent of the actor to be directed at causing the harm, whereas the EC requires the intent to be directed at deceiving the public. Tandoc et al. (2018), similarly to the EC, understand intent to be directed at misleading people, not at causing harm (p. 147). Because it ties intent to misleading people, the definition employed by the EC is strikingly broad, as it includes any false information made with the intention to deceive the public. Further, all three definitions are silent on how the intentions of the different actors involved in the creation and distribution of disinformation can intersect. As such, these definitions seem to presuppose a single actor behind disinformation, as opposed to a network or collection of different actors.

Further, the element of economic gain is notable, as it does not appear in the Wardle and Derakhshan definition, while it plays a very prominent role in the definitions of the EC and the HLEG. In both, economic gain is treated as an alternative, or additional, aim of spreading disinformation: the goal of disinformation is either economic gain, or to ‘intentionally deceive the public’ (EC) or to ‘intentionally cause public harm’ (HLEG). This considerably widens the scope of these definitions to any type of false information in a commercial context. Consequently, on the text of the definitions the EC and HLEG have developed, any misleading advertisement would qualify as disinformation.

The final element, identified by Bayer et al. as ‘strategic dissemination’, refers not to a property of the information or the people behind it, but to how those people act upon the information (2019, p. 12). Bayer et al. identify the manner of dissemination, strategic or assisted by AI, as one of the most defining characteristics of online disinformation (2019, p. 12). All three influential definitions do include an action: the ‘creation’ of the information for Wardle and Derakhshan, its ‘creation, presentation and dissemination’ for the EC, and its ‘design, presentation and promotion’ for the HLEG. In both the EC and HLEG definitions it is unclear whether these different actions are cumulative requirements, and how these closely related notions should be differentiated. Although two of the three definitions do contain a dissemination element, they omit the strategic character of that dissemination, and how it is facilitated via online communication. This omission is striking, as the current policy interest in disinformation is directly related to, and to a large extent sparked by, the new possibilities and challenges posed by online communication and, specifically, social media platforms (Van Hoboken et al., 2019; Bayer et al., 2019; EC, 2018a, s. 2.1).

Besides the definitions themselves, a very clear way in which both the HLEG and the EC have narrowed the scope of their definitions of disinformation is by separating them from existing legal categories. The HLEG stated this most firmly in declaring that disinformation does not overlap with any existing legal norm and that, as such, its report ‘does not deal with issues arising from the creation and dissemination online of illegal content, which are subject to regulatory remedies under EU or national laws, nor with other forms of deliberate but not misleading distortions of facts such as satire and parody’ (2018, p. 12). The EC draws the line less decisively, but still separates disinformation, and the policy dedicated to combating it, from already regulated expression. The EC has repeatedly emphasised this distinction, stating for example that its policy on disinformation is ‘without prejudice to the applicable legal rules at Union or national level relating to the issues discussed, including disinformation containing illegal content’ (2018a, s. 2.1). As such, the leading policy definition of disinformation in the EU seems to conceptualise it as falling outside current categories of illegal content. The distinction EU policy seems to make between, on the one hand, disinformation as harmful content and, on the other, already regulated forms of illegal content does limit the scope of the concept. However, it also risks missing ways in which enhanced enforcement of already existing legal norms could contribute to limiting the spread of disinformation.

Analysing these three influential definitions of disinformation along the five elements of factual nature, harm, intent, profit and dissemination reveals, above all, that all three definitions are exceedingly broad and insufficiently specified and, as such, unfit to function as a legal category; rather, they should continue to be regarded as indicating a policy domain. This also leaves ample room for EU member states to interpret these terms widely.

2. National legislation applicable to disinformation

This section describes and discusses the range of legislation at EU member-state level which is applicable to the definitions of disinformation. The analysis was based on legal desk research, and on the responses to a survey sent in August 2020 by ERGA to national audio-visual regulatory authorities (NRAs) in 27 EU member states (Betzel et al., 2021). The NRAs were asked whether there was national legislation which specifically defined disinformation, or whether there was legislation applicable to the notion of disinformation. Based on comparative law desk research and the ERGA survey responses, national legislation in 11 EU member states was examined, namely Austria, Croatia, Cyprus, the Czech Republic, France, Greece, Hungary, Lithuania, Malta, Romania, and Slovakia. Our review shows that a number of EU member states have legislation applicable to these notions of disinformation, including, most worryingly from a freedom of expression perspective, criminal law provisions carrying possible prison sentences upon conviction.

First, Lithuania is the only EU member state that has an explicit statutory prohibition on disinformation, and specifically defines the term in its legislation. Under Article 19 of the Law on the Provision of Information to the Public, it is prohibited to disseminate ‘disinformation’ and information which is slanderous and offensive to a person or degrades human dignity and honour. Notably, the legislation defines disinformation as ‘intentionally disseminated false information’ (Art. 2). As such, this aligns with Wardle and Derakhshan’s definition, in that it contains the elements that (a) the information must be false, (b) there must be a specific intention, and (c) certain harms must be caused. However, it is limited to harm caused to a specific person, and does not include public harm; nor is there any requirement of economic gain (as envisaged in the EC’s definition). Further, the focus is on the dissemination of disinformation, not its creation.

Importantly, while Lithuania is the only EU member state with a specific legislative definition of disinformation, a number of EU member states have legislation that aligns with the EC’s and Wardle and Derakhshan’s definitions of disinformation without using the term. Instead, the most common legislative provisions applicable to disinformation (i.e., ‘information that is false and deliberately created to harm a person, social group, organisation or country’) are rules contained in criminal laws on ‘false news’ and ‘false information’, which are in force in Austria, Croatia, Cyprus, the Czech Republic, France, Greece, Hungary, Malta, Romania, and Slovakia.

This finding is consistent with the Communication issued by the European Commission during the Covid-19 pandemic in 2020, noting that some EU member states had criminal law provisions related to disinformation (European Commission, 2020b). The Commission only named Hungary for introducing a ‘specific new criminal offence for spreading of disinformation during the state of emergency’ (European Commission, 2020b, sec. 6). However, it is important to discuss the plethora of laws in other EU member states that are similarly applicable to disinformation. These can be grouped into false news or false information laws contained in criminal legislation and non-criminal legislation; false news laws enacted during the Covid-19 pandemic; and laws targeting false news and false information during elections.

First, in terms of criminal laws applicable to disinformation, a number of examples are important to set out. Beginning with Malta, Article 82 of the Criminal Code criminalises the spreading of false news, and makes it an offence to ‘maliciously spread false news which is likely to alarm public opinion or disturb public good order or the public peace or to create a commotion among the public or among certain classes of the public’ (Criminal Code (Malta), Art. 82). Notably, the offence carries a possible three-month prison sentence. This false news provision aligns with Wardle and Derakhshan’s definition of disinformation, as there is a requirement of (a) false information, (b) intention (i.e., maliciously), and (c) harm (i.e., to public opinion or good order). It also partly aligns with the European Commission’s definition of disinformation (‘false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm’), but lacks the element of economic gain.

Similar to Malta, there are six other examples of false news provisions applicable to disinformation. First, in France, while not criminal legislation, Article 27 of the Law on the Freedom of the Press prohibits the dissemination of ‘false news attributed to third parties when made in bad faith, has disturbed the public peace, or has been likely to disturb it’, and carries a possible fine of € 45,000 (Law on the Freedom of the Press, Art. 27). Again, this aligns with the definition of disinformation, as it involves (a) false information, (b) intention (i.e., made in bad faith), and (c) harm (i.e., disturbing public peace). Second, in Croatia, the Law on Misdemeanours against Public Order and Peace makes it an offence to spread ‘false news’ that will ‘disturb the peace and tranquility of citizens’, with the offence carrying a possible 30-day prison sentence (Law on Misdemeanours against Public Order and Peace, Art. 16). Third, in Greece, Article 191 of the Criminal Code contains a lengthy provision which criminalises the dissemination of ‘false news’ which causes ‘fear to an indefinite number of people or to a certain circle or category of persons, who are thus forced to carry out unplanned acts or to cancel them, at the risk of causing damage to the country’s economy, tourism or defence capacity or disrupt its international relations’ (Criminal Code (Greece), Art. 191). Notably, this seems to require a showing of specific actions (e.g., cancellations), rather than mere influence on beliefs or opinions. Fourth, in the Slovak Republic, Section 361 of the Criminal Code makes it a criminal offence where a person ‘deliberately creates the danger of serious concerns among the population of a certain location or at least a part thereof by disseminating a false alarming news’, and the offence carries a possible two-year prison sentence (Criminal Code (Slovak Republic), Sec. 361). Fifth, in the Czech Republic, the Criminal Code also criminalises ‘intentionally caus[ing] a threat of serious concernment of at least a portion of population of a certain area by spreading alarming news that is untrue’, which carries a possible two-year prison sentence (Criminal Code (Czech Republic), Sec. 357). Finally, the Criminal Code of Cyprus makes it an offence to disseminate ‘false news’ or ‘news that can potentially harm civil order or the public’s trust towards the State or its authorities or cause fear or worry among the public or harm in any way the civil peace and order’, an offence carrying a possible two-year prison sentence (Criminal Code (Cyprus), Art. 50).

In terms of legislation enacted during the Covid-19 pandemic targeting disinformation, Hungary’s legislation is perhaps the most commented upon. Under Covid-19 legislation, Section 337(2) of the Criminal Code on ‘fearmongering’ was amended, and now provides that ‘publishing a statement one knows to be false or with a reckless disregard for its truth or falsity at times of special legal order with intent to obstruct or prevent the effectiveness of protective measures shall be construed a felony offense and shall be punishable by imprisonment between one to five years’ (Criminal Code (Hungary), Sec. 337(2)). Thus, the provision requires (a) false information, (b) intention, and (c) a specific harm related to Covid-19 (i.e., obstructing or preventing the effectiveness of Covid-19 protective measures). It should also be noted that Section 338 of the Criminal Code makes it an offence to ‘state or disseminate any untrue fact, which is capable of disturbing public peace’, carrying a possible three-year prison sentence (Criminal Code (Hungary), Sec. 338). In addition, in Romania, a 2020 Presidential Decree permitted the communications regulator to order the removal of online content that ‘promotes false news’ regarding Covid-19 and Covid-19 prevention measures (Decree no. 195 (2020) on the establishment of the state of emergency, Art. 54).

Finally, there are provisions specifically targeting false information during elections, which also align with the definitions of disinformation. The most notable is contained in France, under the 2018 Law on the fight against the manipulation of information, which provides that during the three months prior to an election, a judge may order any proportionate and necessary measures to stop the dissemination of ‘any allegation or charge of an inaccurate or misleading fact likely to alter the sincerity of the forthcoming vote, which is deliberately, artificially or in an automated and massive manner, disseminated through an online public communication service’ (Law on the fight against the manipulation of information, Art. 1). As such, the provision requires intention (i.e., ‘deliberately’), false information (i.e., an ‘inaccurate or misleading fact’), and potential harm (i.e., a fact ‘likely to alter the sincerity of the forthcoming vote’), in addition to dissemination in an artificial or automated and massive manner. The provision is thus limited to potential harm to an election, and does not extend to harms such as public order or public harm. The definition is further limited in that the method of dissemination is itself an element: the information must have been disseminated artificially or automatedly, and on a massive scale. Further, in Austria, the Criminal Code makes it an offence to disseminate ‘false news’ during an election where it is likely to influence voters (Criminal Code (Austria), Art. 264).

Based on the foregoing, we can draw a number of conclusions on the definitions applicable to disinformation. First, we can identify three key elements that emerge from the various definitions applicable to disinformation at member state level, namely (a) false information, (b) disseminated with a specific intention (malicious or in bad faith), and (c) causing certain harms. In addition, while most of the definitions contain some core common elements, they also differ widely in detail, such as requiring a behavioural change or the use of a certain method of dissemination. In particular, the specific harms mentioned vary, including economic harm, public harm, personal harm, harm to personal dignity, harm to elections, and harm to public health measures (e.g., Covid-19 measures). Further, many definitions focus on the act of communication to the public, rather than being limited solely to the creator of the content. The method of dissemination is also a factor in some instances, most notably in France, whose law explicitly incorporates the amplification of false information by requiring that the false information have been disseminated artificially or in an automated and massive manner.

We also identify the type of national legislation applicable to disinformation, which is in many instances of a criminal nature. Thus, one of our main findings is that disinformation, rather than merely constituting harmful content, is in fact illegal content in a number of EU member states and, most notably, subject to criminal law and sanctions. Moreover, we note that national regulatory approaches differ considerably in terms of scope, addressee, and legal sanctions. These differences, and the push at member state level to declare disinformation illegal, have potentially far-reaching implications for fundamental rights.

3. Implications for fundamental rights and EU policy

This section examines the implications of the findings from the comparative analysis of national legislation applicable to disinformation. First, we discuss the compatibility of laws applicable to disinformation with the right to freedom of expression, as guaranteed under international and European human rights law. Second, we consider how substantial divergences in national legislation may also raise obstacles to the free flow of information and the freedom to provide services, and constitute serious obstacles for the EU harmonisation project and the internal market (European Parliament, 2020, § 15). Building on this, we assess what our findings mean for the EU going forward, particularly in relation to the Digital Services Act, which seeks to impose new societal responsibilities on platforms in the area of content moderation, and which as such can also inspire (legal) debates outside Europe.

3.1 Freedom of expression

The first issue concerns the implications, especially for the right to freedom of expression, of the finding that 11 EU member states have laws applicable to disinformation; that right is guaranteed under European fundamental rights law (EU Charter, 2012; ECHR, 1950) and international human rights law (ICCPR, 1966). Indeed, the European Commission itself warned that criminal laws passed during the Covid-19 pandemic defining crimes in relation to disinformation can lead to ‘self-censorship’, and raise ‘particular concerns as regards freedom of expression’ (European Commission, 2020b, p. 11). In this regard, international human rights law is quite clear that such laws are incompatible with freedom of expression. The four international special mandates on freedom of expression have stated that laws containing prohibitions on the dissemination of ‘false news’ which are ‘vague and ambiguous’ are ‘incompatible’ with international standards on freedom of expression, and ‘should be abolished’ (Joint Declaration, 2020, s. 2(a)). In particular, the UN Special Rapporteur on freedom of expression has emphasised that disinformation is an ‘extraordinarily elusive concept to define in law’, susceptible to providing executive authorities with ‘excessive discretion to determine what is disinformation, what is a mistake, what is truth’ (“Joint Declaration on Freedom of Expression”, 2020, para. 42). As such, the penalisation of disinformation is ‘disproportionate’ under international human rights law (“Joint Declaration on Freedom of Expression”, 2020, para. 42). Further, the UN Human Rights Committee has found that prosecution for the ‘crime of publication of false news’, on the ground that the news was false, is in ‘clear violation’ of the right to freedom of expression (Human Rights Committee, 1999, para. 24).

A similar position exists under European human rights law in relation to laws on false information. The European Court of Human Rights has unanimously held that a prosecution for ‘dissemination of false information’ under Ukraine’s election legislation violated the right to freedom of expression under Article 10 ECHR (Salov v. Ukraine, 2005). The Court held that Article 10 ECHR ‘as such does not prohibit discussion or dissemination of information received even if it is strongly suspected that this information might not be truthful’ (Salov v. Ukraine, 2005, para. 113). In a similar vein, as Van Hoboken et al. (2020) have discussed, the European Court has also delivered three unanimous judgments concerning a provision under election legislation in Poland, where election candidates can apply to a court for an order prohibiting publication of campaign material or statements containing ‘untrue data or information’, and where the court is required to examine the application ‘within 24 hours’ (Brzeziński v. Poland, 2019, para. 28). Notably, the Court found in all three judgments that proceedings under this law violated Article 10. Crucially, in its 2019 judgment in Brzeziński v. Poland, the Court unanimously found a violation of Article 10, as the national courts had ‘immediately classified as lies’ statements made by a local politician during an election, and ‘[b]y following such an approach the domestic courts effectively deprived [the politician] of the protection afforded by Article 10’ (Brzeziński v. Poland, 2019, para. 58). Similarly, in Kwiecień v. Poland, the Court found serious deficiencies in proceedings concerning ‘untrue information’ during an election, and even held that the ‘fairness of the proceedings may be called into question’ (Kwiecień v. Poland, 2007, para. 55). In Kita v. Poland, the Court likewise unanimously held that there had been a violation of Article 10 in ‘untrue information’ proceedings, finding that the national courts had ‘unreservedly qualified all of [the statements] as statements which lacked any factual basis’, and that the ‘standards applied’ by the national courts were ‘not compatible with the principles embodied in Article 10’ (Kita v. Poland, 2008, para. 51). In light of the foregoing, there are considerable issues with the compatibility with freedom of expression of national legal provisions applicable to disinformation.

3.2 The Digital Services Act and disinformation

The different national approaches to regulating disinformation trigger concerns not only with respect to fundamental rights, and freedom of expression in particular, but also in relation to the freedoms under the EU Treaties and the objectives of the internal market. According to Art. 26(2) of the Treaty on the Functioning of the European Union, the European ‘internal market shall comprise an area without internal frontiers in which the free movement of goods, persons, services and capital is ensured in accordance with the provisions of the Treaties’ (Treaty on the Functioning of the European Union, Art. 26(2)). The freedom to offer services and products across the entire EU without frontiers is one of the four freedoms central to the European project, together with the free movement of goods, capital, and persons. An important objective of the EU’s various regulatory initiatives in the context of media and media markets has so far been the abolition of obstacles to the free flow of services. The idea of a single European market is particularly relevant in the context of digital services, which by nature are no longer bound by national borders, and creating a regulatory and economic environment in which digital services can flow freely has been a central strategic ambition of the European Commission for the past decade (European Commission, 2015).

In the previous section, we described the different national initiatives to define and regulate disinformation, and how greatly the national approaches diverge. The approaches already differ at the definitional level. While Lithuania explicitly refers to and defines disinformation, other countries speak of ‘false news which is likely to alarm public opinion or disturb public good order or the public peace’ (Malta), ‘false news’ that will ‘disturb the peace and tranquility of citizens’ (Croatia), false news that causes fear (Greece), or false news that can ‘potentially harm civil order or the public’s trust towards the State’ (Cyprus). Some national definitions require an element of malicious or economic intent; others do not. Some prohibitions are contained in national criminal laws with the threat of serious criminal sanctions, including prison (as in Hungary); in other member states, violations will ‘only’ trigger administrative sanctions. The national approaches also differ in what exactly is subject to regulatory intervention: the mere distribution of disinformation, or also its production (e.g., Lithuania). It is not difficult to see how these disparities can create considerable legal uncertainty and obstacles for any provider of digital content services in Europe, and such fragmentation and uncertainty is not uncommon in internet policy. This is certainly true for digital platforms, but also, for example, for the websites of national news organisations. News organisations will be forced to familiarise themselves with a growing array of national regulations of online media content. Moreover, given the breadth and relative ambiguity of some of the definitions, there is a real danger that, say, a YouTube news video that critically discusses the way European member states have so far approached the Covid-19 crisis could be considered ‘news that can potentially harm civil order or the public’s trust towards the State or its authorities or cause fear or worry among the public or harm in any way the civil peace and order’, in the sense of Article 50 of the Criminal Code of Cyprus, or a statement published with ‘reckless disregard for its truth or falsity at times of special legal order’, in the sense of Section 337(2) of the Hungarian Criminal Code.

The different national approaches to regulating disinformation can be seen as part of a broader trend at the level of member states to adopt regulations that deal with the impact of social media platforms on national media systems, democratic processes and the realisation of public values (Schulz, Potthast, & Helberger, 2021, trans.). In response to these emerging divergent national approaches, the European Commission in late 2020 published a landmark piece of proposed EU legislation—the Digital Services Act (DSA)—which would impose a whole set of new ‘uniform rules’ for digital platforms, directly applicable in all 27 EU member states (DSA, 2020, Art. 1). An important objective of the DSA is hence to harmonise the different national approaches, which is also why the Commission has opted for a regulation rather than a directive (DSA, 2020, Recital 106 and Explanatory Memorandum, p. 3). In addition to updating certain provisions on hosting providers in the E-Commerce Directive (Directive 2000/31/EC), the DSA is the planned European response to the growing impact of digital platforms and to the need to find new ways to hold them accountable for the systemic risks that they facilitate or even create, including the dissemination of disinformation through their services. As the European Commission explains, ‘[t]he approximation of national regulatory measures at Union level concerning the requirements for providers of intermediary services is necessary in order to avoid and put an end to fragmentation of the internal market and to ensure legal certainty’ (DSA, 2020, Preamble).

Crucially, in the DSA, the concept of ‘illegal content’ is central. Article 2 of the DSA defines ‘illegal content’ as ‘any information, which, in itself or by its reference to an activity, including the sale of products or provision of services is not in compliance with Union law or the law of a Member State, irrespective of the precise subject matter or nature of that law’ (DSA, 2020, Art. 2(g)). This definition captures all the instances of disinformation as defined in the national laws described in the previous section. By referring to national law, the DSA essentially incorporates firmly into the system of the Act those divergent national approaches that conceptualise disinformation as ‘illegal content’. From the perspective of the realisation of the internal market, the DSA can therefore be expected to accentuate national divergences rather than harmonise the national approaches to disinformation.

In addition, there are a number of further provisions in the DSA which are potentially applicable to disinformation. First, under Article 26 DSA, so-called ‘Very Large Online Platforms’2 (VLOPs) will be required to carry out risk assessments of ‘significant systemic risks’ stemming from the functioning of their services, including the ‘intentional manipulation of their service’, by ‘means of inauthentic use or automated exploitation of the service, with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, or actual or foreseeable effects related to electoral processes and public security’ (DSA, 2020, Art. 26). The elements of this provision mirror the elements of the definitions of disinformation discussed in Section 1, but at the (systemic) dissemination level. Notably, Recital 57 DSA gives further examples of intentional manipulation of a platform’s service, such as the ‘creation of fake accounts, the use of bots, and other automated or partially automated behaviours, which may lead to the rapid and widespread dissemination of information that is illegal content or incompatible with an online platform’s terms and conditions’ (DSA, 2020, Recital 57). In so doing, the DSA’s concept of ‘intentional manipulation’ seems to add yet another dimension to the concept of disinformation, by including the misleading nature of the form in which information is disseminated (namely through the inauthentic or automated use of platforms’ services to spread information).

Crucially, under Article 27 DSA, certain platforms will be required to put in place ‘effective mitigation measures’ tailored to these systemic risks. These measures can include platforms ‘adapting content moderation or recommender systems’, adapting ‘terms and conditions’, targeting measures to limit the display of advertisements, and ‘reinforcing’ internal processes or supervision of their activities, in particular as regards detection of systemic risk (DSA, Art. 27(1)(a)-(e)). That said, the Article 26 risk assessments are in a sense self-assessments, subject to independent audit (DSA, 2020, Art. 28), and as such leave considerable leeway to VLOPs to define the risks; Article 27 DSA likewise leaves it to the discretion of VLOPs to decide whether, and which, ‘reasonable, proportionate and effective mitigation measures’ they will undertake. The European Commission may issue guidelines, recommendations and best practices (DSA, Art. 27(3)) that platforms can decide to follow. An open question is how the rather flexible framing of Article 27 DSA, which leaves discretion to platforms on whether and how to act, relates to the national provisions applicable to disinformation, particularly those that criminalise and essentially ban the dissemination of disinformation. Notably, the DSA is silent on which procedures and mechanisms platforms should have in place to deal with the different national provisions and bans on disinformation.

Second, under Article 30 DSA, certain online platforms are required to establish publicly available repositories containing all online advertisements displayed on their platforms. The purpose of these advertisement repositories is to facilitate supervision of the risks associated with online advertising, including ‘manipulative techniques and disinformation with a real and foreseeable negative impact on public health, public security, civil discourse, political participation and equality’ (DSA, 2020, Recital 63).

Third, under Article 35 DSA, the European Commission and a newly established advisory board (the European Board for Digital Services) will facilitate the drawing up of codes of conduct to address ‘systemic’ risks. Importantly, the DSA’s recitals state that systemic risks can include ‘disinformation’ or manipulative and abusive activities, including ‘coordinated operations aimed at amplifying information, including disinformation, such as the use of bots or fake accounts for the creation of fake or misleading information, sometimes with a purpose of obtaining economic gain, which are particularly harmful for vulnerable recipients of the service’ (DSA, 2020, Recital 68). These codes of conduct would include commitments to take specific ‘risk mitigation measures’ and would be subject to regular monitoring and evaluation by the Commission and the new Board. The recitals also mention that codes of conduct under Article 35 DSA can build upon already established self-regulatory measures, including the EU Code of Practice on Disinformation (DSA, Recital 68). Thus, while the DSA’s provisions neither specifically mention nor define disinformation, the new rules on systemic risks (Arts. 26 and 27), advertisement repositories (Art. 30), and codes of conduct (Art. 35) are, according to the recitals, clearly envisioned as applicable to disinformation. However, there are two additional provisions in the proposed DSA which may radically alter current EU policy on disinformation, and upon which our national legislative findings may bear.

The first of these is Article 8 DSA, which will create an explicit legal mechanism for national judicial and administrative authorities to issue orders requiring online platforms to ‘act against’ specific user content deemed ‘illegal content’. Platforms are required to inform the national authority ‘without undue delay’ of the effect given to the order and the action taken (DSA, Art. 8(2)). Crucially, as discussed above, illegal content is given quite a broad definition under the DSA. As such, all of the national provisions at EU member-state level on disinformation, false news and false information are applicable under Article 8 DSA. Article 8 DSA will thus create an explicit EU law mechanism facilitating national judicial and administrative authorities in ordering platforms to remove content under national provisions applicable to disinformation. This has profound implications for freedom of expression online in Europe: where individuals are prosecuted under such national laws for their online expression, or where an administrative authority finds that online content violates national law, Article 8 DSA will make it easier for national authorities to go a step further and instrumentalise platforms to have such content removed from the online environment. Indeed, it may also indirectly incentivise platforms to introduce harsher content moderation in response to such instrumentalisation.

Further, Article 14 establishes a notice-and-action mechanism for illegal content: platforms are required to put in place mechanisms that ‘allow any individual or entity to notify them of the presence on their service of specific items of information that the individual or entity considers to be illegal content’ (DSA, 2020, Art. 14(1)), and must process any notices they receive, and take their decisions, ‘in a timely, diligent and objective manner’ (DSA, 2020, Art. 14(6)). Thus, because of the broad definition of illegal content, and because some member states criminalise the distribution of disinformation, notice-and-action mechanisms will also apply to disinformation. This is another instance where the DSA essentially extrapolates national rules banning disinformation and gives them effect vis-à-vis platforms.

Conclusion

This article has discussed the influential definitions that form the basis of EU disinformation policy, and subsequently analysed the national legislation in EU member states applicable to those definitions, in light of the fundamental right to freedom of expression and the proposed Digital Services Act. The analysis can be summarised in three points. First, in terms of the definitions of disinformation in current EU policy, the European Commission’s definition may be criticised for being too broad and vague to function as a legal definition, from the perspective of legal certainty, effectiveness and freedom of expression. Indeed, certain elements of the Commission’s definition invite criticism, such as ‘misleading information’, which introduces a subjective criterion. In this regard, we identify a number of elements that are common to the current prominent definitions—the EC (2018a), HLEG (2018) and Wardle and Derakhshan (2017)—and that should be clarified to arrive at a more workable policy definition of disinformation. These elements are: (a) false information, (b) disseminated with a specific intention, and (c) targeting certain public harms. Crucially, the method of dissemination, such as automated amplification techniques, is lacking in these definitions and should be addressed in order to acknowledge the role of online platforms in the dissemination of disinformation. Such a definition should function only as a policy definition, not a legal definition. Second, our analysis demonstrates that there are considerable discrepancies across national approaches to disinformation in EU member states, and a worrying trend toward criminalisation. As such, it can be argued that current EU policy on disinformation has done little in terms of harmonisation. Third, we have argued that these laws are quite problematic from a freedom of expression perspective, and also from an internal market perspective, due to the diverging national approaches. Based on this analysis, a number of concluding points can be made.

Notably, while the EU’s proposed Digital Services Act is an ambitious piece of legislation designed to harmonise national approaches to illegal content online, our research suggests that the DSA may end up amplifying national differences and thereby at least in parts achieve exactly the opposite of what it intended to do, namely harmonisation. This is because the DSA’s definition of illegal content actually refers to legislation at member-state level, and would therefore capture all the national legislation applicable to disinformation that we discuss. Overall, the DSA does seek to harmonise the procedures with which platforms should approach the dissemination of disinformation on their services. It can be doubted, however, to what extent the DSA will be able to take away obstacles to the free flow of information as a result of the disparate national approaches to tackling disinformation. The considerable freedom that the DSA leaves at least VLOPs to decide on how to deal with disinformation as a systemic risk, furthermore, creates new risks from the perspective of freedom of expression. It also lays bare the potential frictions with national approaches to the regulation of disinformation. As became apparent from the national comparison, most member states that have regulated disinformation consider disinformation as a matter of national security, peace, public order, etc., and many adopted regulations in the area of criminal law. Crucially, the competencies of the European Commission in this area are very limited. The DSA appears to create a unified framework for platforms to deal with unlawful disinformation, but upon closer scrutiny does little to remove the underlying disparities in the national approaches, or provide a unified framework of how platforms should deal with those disparities. Indeed, the DSA does little to harmonise national approaches, or point to ways for platforms to deal with national disparities. This will be a problem particularly for smaller platforms that offer their services across borders but that cannot afford a team of lawyers that speaks all member states’ languages.

What, then, are the possibilities moving forward? One option is for the European Commission to clarify that the DSA’s definition of illegal content also includes content that has been identified as unlawful under national disinformation laws. Doing so, however, would trigger a difficult follow-up question about the extent to which the DSA constitutes maximum harmonisation, and whether national rules would need to be adapted. It would also raise the question of how far the European Commission’s competency to regulate matters of disinformation really reaches, especially where national regulations are motivated by concerns about national security and a healthy public debate. And of course, including disinformation within the definition would raise acute freedom of expression concerns. Alternatively, the European Commission could narrow down the scope and definition of 'illegal' content under Art. 2 DSA to exclude national laws applicable to disinformation and false information. Doing so, however, would endanger the harmonisation project and, to some extent, defeat the overall objective of the DSA to create a unified framework for dealing with the systemic risks that VLOPs create (disinformation being one of the most pertinent of those risks). It would also risk turning a blind eye to those national solutions that are difficult to square with fundamental European values, such as freedom of expression. Either way, the regulation of disinformation exposes a growing tension between national and European competencies and regulatory objectives with respect to the digital economy.

The question of how to move forward in tackling disinformation, especially at a European level, and how to apply a brake to its criminalisation, thus presents the European Commission with a difficult conundrum. Given the limited competencies of the EU in matters of criminal law, national security and culture, and the importance of freedom of expression, one organisation that seems well positioned to play a larger role in coordination and standard-setting with regard to European disinformation policy is the Council of Europe (CoE), an international organisation specialising in human rights and based in Strasbourg. The CoE has 47 member states (including all EU member states), which are subject to the European Convention on Human Rights and its guarantee of freedom of expression. The CoE has long played a prominent role, for example, in the decriminalisation of defamation in Europe (Ó Fathaigh, 2013; McGonagle, 2016), and in providing guidance for all its members on a broad range of matters related to the media, digital technology and human rights, including the human rights implications of the increasingly prominent role of digital platforms (Council of Europe, 2018). As a standard-setting institution in relation to freedom of expression, the CoE should play a more prominent role in EU policy on disinformation (McGonagle, 2018, trans.). The European Commission and the EU can continue to focus on procedural questions around illegal content and on guidance for how online platforms should deal with disinformation, with a complementary role for the CoE on the question of content-based restrictions on disinformation, so as to tackle the diverging approaches of the member states on both fronts. The legislative journey of the DSA3 offers a unique opportunity to reassess the European approach to disinformation, a chance that should not be wasted.

References

Andersen, J., & Søe, S. O. (2020). Communicative actions we live by: The problem with fact-checking, tagging or flagging fake news – the case of Facebook. European Journal of Communication, 35(2), 126–139. https://doi.org/10.1177/0267323119894489

Bayer, J., Bitiukova, N., Bard, P., Szakács, J., Alemanno, A., & Uszkiewicz, E. (2019). Disinformation and Propaganda – Impact on the Functioning of the Rule of Law in the EU and its Member States. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3409279

Betzel, M., Nyakas, L., Papp, T., Kelemen, L., Monori, Z., Varga, Á., Marrazzo, F., Matějka, S., Ó Fathaigh, R., & Helberger, N. (2021). Notions of Disinformation and Related Concepts (ERGA Report) [Report]. European Regulators Group for Audiovisual Media Services. https://erga-online.eu/wp-content/uploads/2021/03/ERGA-SG2-Report-2020-Notions-of-disinformation-and-related-concepts-final.pdf

Brzeziński v. Poland (application no. 47542/07). (2019). European Court of Human Rights.

Charter of Fundamental Rights of the European Union, Pub. L. No. C 326/391 (2012).

Convention for the Protection of Human Rights and Fundamental Freedoms, Pub. L. No. 213 U.N.T.S. 221 (1950). https://www.echr.coe.int/Documents/Archives_1950_Convention_ENG.pdf

Council of Europe. (2018). Recommendation CM/REC(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries. https://go.coe.int/CXCbe

Council of Europe Commissioner for Human Rights. (2020, March 4). Press freedom must not be undermined by measures to counter disinformation about COVID-19. Commissioner for Human Rights. https://www.coe.int/en/web/commissioner/view/-/asset_publisher/ugj3i6qSEkhZ/content/press-freedom-must-not-be-undermined-by-measures-to-counter-disinformation-about-covid-19

Craufurd Smith, R. (2019). Fake news, French Law and democratic legitimacy: Lessons for the United Kingdom? Journal of Media Law, 11(1), 52–81. https://doi.org/10.1080/17577632.2019.1679424

Criminal Code (as amended by the Media and Defamation Act, 2018) Malta. (n.d.). https://justice.gov.mt/en/pcac/Documents/Criminal%20code.pdf

Criminal Code, Austria. https://www.jusline.at/gesetz/stgb

Criminal Code, Czech Republic. https://www.legislationline.org/download/id/6370/file/Czech%20Republic_CC_2009_am2011_en.pdf

Criminal Code, Greece. https://www.lawspot.gr/nomikes-plirofories/nomothesia/poinikos-kodikas-nomos-4619-2019

Criminal Code, Hungary. https://net.jogtar.hu/jogszabaly?docid=a1200100.tv

Criminal Code, Slovak Republic. https://www.legislationline.org/download/id/3763/file/Slovakia_CC_2005_en.pdf

Decree signed by the President of Romania, Mr. Klaus Iohannis, regarding the establishment of the state of emergency on the Romanian territory. (2020). https://www.presidency.ro/ro/media/decrete-si-acte-oficiale/decret-semnat-de-presedintele-romaniei-domnul-klaus-iohannis-privind-instituirea-starii-de-urgenta-pe-teritoriul-romaniei

Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’). (2000). https://eur-lex.europa.eu/eli/dir/2000/31/oj

Directorate-General for Communications Networks, Content and Technology. (2018). High Level Expert Group on fake news and online disinformation [Report]. European Commission. https://op.europa.eu/s/tRk9

Epstein, B. (2020). Why It Is So Difficult to Regulate Disinformation Online. In S. Livingston & W. L. Bennett (Eds.), The Disinformation Age: Politics, Technology, and Disruptive Communication in the United States (pp. 190–210). Cambridge University Press. https://doi.org/10.1017/9781108914628.008

European Broadcasting Union. (2018). Fake News and the Information Disorder [Position Paper]. EBU. https://www.ebu.ch/news/2018/04/fake-news-and-the-information-disorder

European Commission. (2015). A Digital Single Market Strategy for Europe COM(2015) 192 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52015DC0192

European Commission. (2018a). Tackling Online Disinformation: A European Approach COM/2018/236 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52018DC0236

European Commission. (2018b). EU Code of Practice on Disinformation. https://www.europeansources.info/record/eu-code-of-practice-on-disinformation/

European Commission. (2020a). Digital Services Act package: Deepening the Internal Market and clarifying responsibilities for digital services Ares(2020)2877686. https://eur-lex.europa.eu/legal-content/EN/PIN/?uri=pi_com:Ares(2020)2877686

European Commission. (2020b). Communication on Tackling COVID-19 disinformation—Getting the facts right JOIN/2020/8 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020JC0008

European Commission. (2020c). Assessment of the Code of Practice on Disinformation – Achievements and areas for further improvement SWD(2020) 180 final. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=69212

European Parliament. (2020). Resolution on the Digital Services Act and fundamental rights issues posed (Report (2020/2022(INI))). Committee on Civil Liberties, Justice and Home Affairs. https://www.europarl.europa.eu/doceo/document/A-9-2020-0172_EN.html

Farkas, J., & Schou, J. (2018). Fake News as a Floating Signifier: Hegemony, Antagonism and the Politics of Falsehood. Javnost - The Public, 25(3), 298–314. https://doi.org/10.1080/13183222.2018.1463047

Human Rights Committee. (1999). Concluding Observations of the Human Rights Committee: Cameroon. https://www.refworld.org/docid/3ae6b01014.html

Human Rights Council. (2020). Disease pandemics and the freedom of opinion and expression A/HRC/44/49. https://www.undocs.org/A/HRC/44/49

International Covenant on Civil and Political Rights, Pub. L. No. 999 U.N.T.S. 171 (1966). https://treaties.un.org/doc/publication/unts/volume%20999/volume-999-i-14668-english.pdf

Joint Declaration on Freedom of Expression and Elections in the Digital Age. (2020). Organization for Security and Co-operation in Europe. https://www.osce.org/representative-on-freedom-of-media/451150

Joint declaration on freedom of expression and “fake news”, disinformation and propaganda. (2017). Organization for Security and Co-operation in Europe. https://www.osce.org/fom/302796

Kita v. Poland (application no. 57659/00). (2008). European Court of Human Rights.

Kwiecień v. Poland (application no. 51744/99). (2007). European Court of Human Rights.

Law 2018-1202 of 22 December 2018 on the fight against the manipulation of information, France. https://www.legifrance.gouv.fr/loda/id/JORFTEXT000037847559/

Law of 29 July 1881 on freedom of the press, France (1881). https://www.legifrance.gouv.fr/loda/id/JORFTEXT000000877119/

Law on Misdemeanours against Public Order and Peace, Croatia. https://www.zakon.hr/z/279/Zakon-o-prekr%C5%A1ajima-protiv-javnog-reda-i-mira

Marsden, C., Meyer, T., & Brown, I. (2020). Platform values and democratic elections: How can the law regulate digital disinformation? Computer Law & Security Review, 36, 105373. https://doi.org/10.1016/j.clsr.2019.105373

Matasick, C., Alfonsi, C., & Bellantoni, A. (2020). Governance responses to disinformation. In OECD Working Papers on Public Governance (Vol. 39). https://www.oecd-ilibrary.org/content/paper/d6237c85-en

McGonagle, T. (2017). “Fake news”: False fears or real concerns? Netherlands Quarterly of Human Rights, 35(4), 203–209. https://doi.org/10.1177/0924051917738685

McGonagle, T. (2018). De Raad van Europa en online desinformatie: Laveren tussen zorgen en zorgplichten? [The Council of Europe and online disinformation: Navigating between concerns and duties of care?]. Mediaforum, 30. https://dare.uva.nl/search?identifier=286d7d2f-f17e-416b-b7b4-5c2864ec6124

McGonagle, T., McGonagle, M., & Ó Fathaigh, R. (2016). Freedom of expression and defamation: A study of the case law of the European Court of Human Rights (O. Andreotti, Ed.). Council of Europe Publishing.

Meyer, T., Marsden, C., & Directorate-General for Parliamentary Research Services (European Parliament). (2019). Regulating disinformation with artificial intelligence: Effects of disinformation initiatives on freedom of expression and media pluralism. Publications Office. https://data.europa.eu/doi/10.2861/003689

Möller, J., Hameleers, M., & Ferreau, F. (2020). Typen von Desinformation und Misinformation [Types of disinformation and misinformation] [Report]. die medienanstalten. https://www.die-medienanstalten.de/publikationen/weitere-veroeffentlichungen/artikel?tx_news_pi1%5Bnews%5D=4859&cHash=97354e7f535acb7ffc8b058839960131

Nuñez, F. (2019). Disinformation Legislation and Freedom of Expression. UC Irvine Law Review, 10, 783–798.

Nyakas, L., Heuser, B., Cavallaro, R., Donde, M., McQuarrie, L., Honoré, R., Hari, M., & Hradický, M. (2018). Internal Media Plurality in Audiovisual Media Services in the EU: Rules and Practices (ERGA Report) [Report]. European Regulators Group for Audiovisual Media Services. https://erga-online.eu/wp-content/uploads/2019/01/ERGA-2018-07-SG1-Report-on-internal-plurality-LQ.pdf

Ó Fathaigh, R. (2013). Article 10 and the chilling effect principle. European Human Rights Law Review, 3, 304–313.

Penal Code, Cyprus. http://www.cylaw.org/nomoi/enop/non-ind/0_154/full.html

Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC. (2020). https://eur-lex.europa.eu/legal-content/en/TXT/?uri=COM:2020:825:FIN

Radu, R. (2020). Fighting the ‘Infodemic’: Legal Responses to COVID-19 Disinformation. Social Media + Society, 6(3), 205630512094819. https://doi.org/10.1177/2056305120948190

Salov v. Ukraine (application no. 65518/01). (2005). European Court of Human Rights.

Schulz, W., Potthast, K., & Helberger, N. (2021). Wissenschaftskommunikation und Social Media zwischen Rechtsschutz und Regulierungsbedarf [Science communication and social media between legal protection and the need for regulation]. Berlin-Brandenburgische Akademie der Wissenschaften.

Tambini, D. (2020). Media freedom, regulation and trust: A systematic approach to information disorder [Background Paper]. Council of Europe Conference of Ministers Responsible for Media and Information Society. https://rm.coe.int/cyprus-2020-new-media/16809a524f

Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “Fake News”: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143

Treaty on the Functioning of the European Union (Consolidated Version) 2012/C 326/01. (2012). Official Journal of the European Union. https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:C:2012:326:FULL:EN:PDF

Van Hoboken, J., Appelman, N., Ó Fathaigh, R., Leerssen, P., McGonagle, T., van Eijk, N., & Helberger, N. (2019). The legal framework on the dissemination of disinformation through Internet services and the regulation of political advertising [Report]. Dutch Ministry of the Interior and Kingdom Relations. https://www.ivir.nl/publicaties/download/Report_Disinformation_Dec2019-1.pdf

Van Hoboken, J., & Ó Fathaigh, R. (2021). Regulating Disinformation in Europe: Implications for Speech and Privacy. UC Irvine Journal of International, Transnational, and Comparative Law, 6, 9–36.

Waisbord, S. (2018). Truth is What Happens to News: On journalism, fake news, and post-truth. Journalism Studies, 19(13), 1866–1878. https://doi.org/10.1080/1461670X.2018.1492881

Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an interdisciplinary framework for research and policy making (Report DGI(2017)09). Council of Europe. https://rm.coe.int/090000168076299d

Footnotes

1. Ronan Ó Fathaigh and Natali Helberger were co-authors of Betzel et al., Notions of Disinformation and Related Concepts (ERGA, 2021). The authors are very grateful to ERGA for making the results of the survey available.

2. According to Article 25(1) DSA, these are online platforms that provide their services to 45 million or more average monthly active recipients.

3. The DSA is subject to the normal legislative procedure and is at time of writing (August 2021) being simultaneously discussed in Parliament and the Council.
