Unpacking the “European approach” to tackling challenges of disinformation and political manipulation

The European Commission (EC) has recognised the exposure of citizens to online disinformation and micro-targeting of voters based on the unlawful processing of personal data as one of the major challenges for European democracies. In response, the EC has put in place several measures creating a "European approach". This paper analyses the approach to identify the key principles upon which it is based, and the extent to which it takes into account the complexities of the challenges identified. The initial conclusions are that, while being a significant step in the creation of a common EU answer to disinformation and political manipulation, the "European approach" requires further elaboration, primarily to include additional layers of transparency and public oversight.


INTRODUCTION
In recent years, the spread of disinformation on online platforms and micro-targeted, data-driven political advertising have become a serious concern in many countries around the world, in particular as regards the impact these practices may have on informed citizenship and democratic systems. In April 2019, for the first time in the country's modern history, Switzerland's supreme court overturned a nationwide referendum on the grounds that the voters were not given complete information and that it "violated the freedom of the vote". While in this case it was the government that had failed to provide correct information, the decision still comes as another warning about the conditions under which elections are held nowadays and as a confirmation of the role that accurate information plays in this process. There is limited and sometimes even conflicting scholarly evidence as to whether today people are exposed to more diverse political information or trapped in echo chambers, and whether they are more vulnerable to political disinformation and propaganda than before (see, for example: Bruns, 2017, and Dubois & Blank, 2018). Yet, many claim so, and cases of misuse of technological affordances and personal data for political goals have been reported globally.
The decision of Switzerland's supreme court has particularly resonated in Brexit Britain, where the campaign ahead of the European Union (EU) membership referendum left too many people feeling "ill-informed" (Brett, 2016, p. 8). Voters in the United Kingdom were not receiving complete or even truthful information, and there are also concerns that they might have been manipulated by the use of bots (Howard & Kollanyi, 2016) and by the unlawful processing of personal data (ICO, 2018a, 2018b). The same concerns were raised in the United States during and after the presidential elections in 2016. Several studies have shown evidence of the exposure of US citizens to social media disinformation in the period around the elections (see: Guess et al., 2018, and Allcott & Gentzkow, 2017). In other parts of the world, such as in Brazil and in several Asian countries, the means and platforms for the transmission of disinformation were somewhat different, but the associated risks have been deemed even higher. The most prominent world media, fact checkers and researchers systematically reported on the scope and spread of disinformation on the Facebook-owned and widely used messaging application WhatsApp in the 2018 presidential elections in Brazil. Freedom House warned that elections in some Asian countries, such as India, Indonesia, and Thailand, were also afflicted by falsified content.
Clearly, online disinformation and unlawful political micro-targeting represent a threat to elections around the globe. The extent to which certain societies are more resilient or more vulnerable to the impact of these phenomena depends on different factors, including, among other things, the status of journalism and legacy media, levels of media literacy, the political context and legal safeguards (CMPF, forthcoming). Different political and regulatory traditions play a role in shaping the responses to online disinformation and data-driven political manipulation. Accordingly, these range from doing nothing to criminalising the spread of disinformation, as is the case with Singapore's law, 1 which came into effect in October 2019.
While there seems to be growing agreement that regulatory intervention is needed to protect democracy, concerns over the negative impact of inadequate or overly restrictive regulation on freedom of expression remain. In his recent reports (2018, 2019), UN Special Rapporteur on Freedom of Expression David Kaye warned against regulation that entrusts platforms with even more power to decide on content removals within very short time frames and without public oversight. Whether certain content is illegal or problematic on other grounds is not always a straightforward decision and often depends on the context in which it is presented. Therefore, as highlighted by the Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression (2019), requiring platforms to make these content moderation decisions in an automated way, without built-in transparency, and without notice or timely recourse for appeal, carries risks for freedom of expression.
The European Commission (EC) has recognised the exposure of citizens to large-scale online disinformation (2018a) and micro-targeting of voters based on the unlawful processing of personal data (2018b) as major challenges for European democracies. In response to these challenges, and to ensure citizens' access to a variety of credible information and sources, the EC has put in place several measures which aim to create an overarching "European approach".
This paper provides an analysis of this approach to identify the key principles upon which it builds, and to what extent, if at all, they differ from the principles of "traditional" political advertising and media campaign regulation during the electoral period. The analysis further looks at how these principles are elaborated and whether they reflect the complexity of the challenges identified. The focus is on the EU as it is "articulating a more interventionist approach" to the relations with the online platform companies (Flew et al., 2019, p. 45).

THE ROLE OF (SOCIAL) MEDIA IN ELECTIONS
The paper starts from the notion that a healthy democracy is dependent on pluralism and that the role of (social) media in elections and the transparency of data-driven political advertising are among the crucial components of any assessment of the state of pluralism in a given country.
In this view, pluralism "implies all measures that ensure citizens' access to a variety of information sources, opinion, voices etc. in order to form their opinion without the undue influence of one dominant opinion forming power" (EC, 2007, p. 5;Valcke et al., 2009, p. 2).
Furthermore, it implies the relevance of citizens' access to truthful and accurate information.
The media have long been playing a crucial role in election periods: serving, on one side, as wide-reaching platforms for parties and candidates to deliver their messages, and, on the other, helping voters to make informed choices. They set the agenda by prioritising certain issues over others and by deciding on time and space to be given to candidates; they frame their reporting within a certain field of meaning and considering the characteristics of different types of media; and, if the law allows, they sell time and space for political advertising (Kelley, 1963). A democracy requires the protection of media freedom and editorial autonomy, but asks that the media be socially responsible. This responsibility implies respect of fundamental standards of journalism, such as impartiality and providing citizens with complete and accurate information.
As highlighted on several occasions by the European Commission for Democracy through Law (so-called Venice Commission) of the Council of Europe (2013, paras. 48, 49): "The failure of the media to provide impartial information about the election campaign and the candidates is one of the most frequent shortcomings that arise during elections".
Access to the media has been seen as "one of the main resources sought by parties in the campaign period" and to ensure a level playing field "legislation regarding access of parties and candidates to the public media should be non-discriminatory and provide for equal treatment" (Venice Commission, 2010, para. 148). The key principles of media regulation during the electoral period are therefore media impartiality and equality of opportunity for contenders.
Public service media are required to abide by higher standards of impartiality than private outlets, and audiovisual media are more broadly bound by rules than the printed press and online media. These stricter rules are justified by the perceived stronger effects of audiovisual media on voters (Schoenbach & Lauf, 2004) and by the fact that television channels benefit from the public and limited resource of the radio frequency spectrum (Venice Commission, 2009, paras. 24-28, 58).
In the Media Pluralism Monitor (MPM), 2 a research tool supported by the European Commission and designed to assess risks to media pluralism in EU member states, the role of media in the democratic electoral process is one of 20 key indicators. It is seen as an aspect of political pluralism, and the variables against which the risks are assessed have been elaborated in accordance with the above-mentioned principles. The indicator assesses the existence and implementation of a regulatory and self-regulatory framework for the fair representation of different political actors and viewpoints on public service media and private channels, especially during election campaigns. The indicator also takes into consideration the regulation of political advertising, namely whether restrictions are imposed to allow equal opportunities for all political parties and candidates.
The MPM results (Brogi et al., 2018) showed that rules to ensure the fair representation of political viewpoints in news and informative programmes on public service media channels and services are imposed by law in all EU countries. It is, however, less common for such regulation and/or self-regulatory measures to exist for private channels. A similar approach is observed in relation to political advertising rules, which are more often and more strictly defined for public service than for commercial media. Most countries in the EU have a law or another statutory measure that imposes restrictions on political advertising during election campaigns to allow equal opportunities for all candidates. Even though political advertising is "considered as a legitimate instrument for candidates and parties to promote themselves" (Holtz-Bacha & Just, 2017, p. 5), some countries do not allow it at all. Where there is a complete ban on political advertising, public service media provide free airtime on principles of equal or proportionate access. Where paid political advertising is allowed, it is often restricted to the campaign period, and regulation seeks to set limits on, for example, campaign resources and spending, the amount of airtime that can be purchased and the timeframe in which political advertising can be broadcast. In most countries there is a requirement for transparency about how much was spent on advertising in the campaign, broken down by spending on different types of media. For traditional media, the regulatory framework requires that political advertising (like any other advertising) be properly identified and labelled as such.
Television remains the main source of news for citizens in the EU (Eurobarometer, 2018a, 2017). However, the continuous rise of online sources and platforms as resources for (political) news and views (Eurobarometer, 2018a), and as channels for more direct and personalised political communication, calls for a deeper examination of the related practices and the potential risks to be addressed. The ways people find and interact with (political) news, and the ways political messages are shaped and delivered to people, have been changing significantly with the global rise and popularity of online platforms and the features they offer. An increasing number of people, and especially the young, are using them as doors to news (Newman et al., 2018, p. 15; Shearer, 2018). Far less developed, however, is the regulation of political advertising in the online environment. According to the available MPM data (Brogi et al., 2018; and preliminary data collected in 2019), only 11 countries (Belgium, Bulgaria, Denmark, Finland, France, Germany, Italy, Latvia, Lithuania, Portugal and Sweden) have legislation or guidelines requiring transparency of online political advertisements. In all cases, it is the general law on political advertising during the electoral period that also applies to the online dimension.
Political advertising and political communication more broadly take on different forms in the environment of online platforms, which may hold both promises and risks for democracy (see, for example, Valeriani & Vaccari, 2016; and Zuiderveen Borgesius et al., 2018). There is still limited evidence on the reach of online disinformation in Europe, but a study conducted by Fletcher et al. (2018) suggests that even if the overall reach of publishers of false news is not high, they achieve significant levels of interaction on social media platforms. Disinformation online comes in many different forms, including false context, imposter, manipulated, fabricated or extreme partisan content (Wardle & Derakhshan, 2017), but always with an intention to deceive (Kumar & Shah, 2018). There are also different motivations for the spread of disinformation, including financial and political ones (Morgan, 2018), and different platform affordances affect whether disinformation spreads better as organic content or as paid-for advertising. Vosoughi et al. (2018) have shown that on Twitter disinformation organically travels faster and further than true information, due to technological possibilities but also due to human nature, which is more inclined to spread content that is surprising and emotional, as disinformation often is. On Facebook, on the other hand, the successful spread of disinformation may be significantly attributed to advertising, claim Chiou and Tucker (2018).
Accordingly, platforms have put in place different policies towards disinformation. Twitter has recently announced a ban on political advertising, while Facebook continues to run it and exempts politicians' speech and political advertising from third-party fact-checking programmes.
Further to the different types of disinformation, and the different affordances of platforms and their policies, there are "many different actors involved and we're learning much more about the different tactics that are being used to manipulate the online public sphere, particularly around elections", warns Susan Morgan (2018, p. 40). Young Mie Kim and others (2018) have investigated the groups that stood behind divisive issue campaigns on Facebook in the weeks before the 2016 US elections, and found that most of these campaigns were run by groups which did not file reports to the Federal Election Commission. These groups, clustered by the authors as non-profits, astroturf/movement groups, and unidentifiable "suspicious" groups, sponsored four times more ads than those that did file reports to the Commission. In addition to the variety of groups playing a role in political advertising and political communication on social media today, a new set of tactics is emerging, including the use of automated accounts, so-called bots, and data-driven micro-targeting of voters (Morgan, 2018). Bradshaw and Howard (2018) have found that governments and political parties in an increasing number of countries, across different political regimes, are investing significant resources in using social media to manipulate public opinion. Political bots, as they note, are used to promote or attack particular politicians, to promote certain topics, to fake a follower base, or to get opponents' accounts and content removed by reporting them on a large scale. Micro-targeting, as another tactic, is commonly defined as a political advertising strategy that makes use of data analytics to build individual or small-group voter models and to address them with tailored political messages (Bodó et al., 2017).
These messages can be drafted with the intention to deceive certain groups and to influence their behaviour, which is particularly problematic in the election period, when decisions of high importance for democracy are made, tensions are high and the time for correction or reaction is scarce.
The main fuel of contemporary political micro-targeting is data gathered from citizens' online presentation and behaviour, including from their social media use. Social media have also been used as a channel for the distribution of micro-targeted campaign messages. This political advertising tactic came into the spotlight with the Cambridge Analytica case reported by journalist Carole Cadwalladr in 2018. Her investigation, based on information from whistleblower Christopher Wylie, revealed that the data analytics firm Cambridge Analytica, which worked with Donald Trump's election team and the winning Brexit campaign, harvested the personal data of millions of people's Facebook profiles without their knowledge and consent, and used it for political advertising purposes (Cadwalladr, 2018).

The EC considers online platforms as covering a wide range of activities, but the European approach to tackling disinformation is concerned primarily with "online platforms that distribute content, particularly social media, video-sharing services and search engines" (EC, 2018a). This paper employs the same focus and hence the same narrow definition of online platforms. The main research questions are: which are the key principles upon which the European approach to tackling disinformation and political manipulation builds; and to what extent, if at all, do they differ from the principles of "traditional" political advertising and media campaign regulation in the electoral period? The analysis further seeks to understand how these principles are elaborated and whether they reflect the complexity of the challenges identified.
For this purpose, the 'European approach' is understood in a broad sense (EC, 2018f). Looking through the lens of pluralism, this analysis uses a generic inductive approach, a qualitative research approach that allows findings to emerge from the data without pre-defined coding categories (Liu, 2016). This methodological decision was made because this exploratory research sought not only to analyse the content of the above-listed documents, but also the context in which they came into existence and how they relate to one another.

TWO BIRDS WITH ONE STONE: THE EUROPEAN APPROACH IN CREATING FAIR AND PLURAL CAMPAIGNING ONLINE
The Commission guidance on the application of the GDPR in the electoral context (EC, 2018d) underlines that it "applies to all actors active in the electoral context", including European and national political parties, European and national political foundations, platforms, data analytics companies and public authorities responsible for the electoral process. Any data processing should comply with the GDPR principles, such as fairness and transparency, and serve specified purposes only. The guidance provides relevant actors with additional explanation of the notions of "personal data" and "sensitive data", be it collected or inferred. Sensitive data may include political opinions, ethnic origin, sexual orientation and the like, and the processing of such data is generally prohibited unless one of the specific justifications provided for by the GDPR applies. This can be the case where the data subject has given explicit, specific, fully informed consent to the processing; where the information is manifestly made public by the data subject; where the data relate to "the members or to former members of the body or to persons who have regular contact with" it; or where processing "is necessary for reasons of substantial public interest" (GDPR, Art. 9, para. 2). In a statement adopted in March 2019, the EDPB points out that derogations for special data categories should be interpreted narrowly. In particular, the derogation for cases where a person makes his or her 'political opinion' public cannot be used to legitimate inferred data. Bennett (2016) also warns that the vagueness of several terms used to describe exceptions from the application of Article 9(1) might lead to confusion or inconsistencies in interpretation as the processing of 'political opinions' becomes increasingly relevant for contemporary political campaigning.
The principles of fairness and transparency require that individuals (data subjects) be informed of the existence of the processing operation and its purposes (GDPR, Art. 5). The Commission's guidance clearly states that data controllers (those who decide on the means and purposes of processing, such as political parties or foundations) have to inform individuals about key aspects of the processing of their personal data, including why they receive personalised messages from different organisations; what the source of the data is when it was not collected directly from the person; how data from different sources are combined and used; and whether automated decision-making has been applied in the processing.
Despite their strengthened powers and an explicit call to act more in the political realm (EC, 2018d), to date we have not seen many investigations by DPAs into political parties under the GDPR. An exception is UK Information Commissioner Elizabeth Denham. In May 2017, she announced the launch of a formal investigation into the use of data analytics for political purposes following the wrongdoings exposed by journalists, in particular Carole Cadwalladr, during the EU referendum, involving parties, platforms and data analytics companies such as Cambridge Analytica. The report of November 2018 concludes that there are risks in relation to the processing of personal data by many political parties. Particular concerns include the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence, a lack of fair processing and the use of third-party data analytics companies with insufficient checks around consent (ICO, 2018a, p. 8).
Unpacking the "European approach" to tackling challenges of disinformation and political manipulation As a result of the investigation, the ICO sent 11 letters to the parties with formal warnings about their practices, and in general it became the largest investigation conducted by a DPA on this matter and encompassing different actors, not only political parties but also social media platforms, data brokers and analytics companies.
Several cases have been reported where the national adaptation of the GDPR does not fully meet the requirements of recital 56 GDPR, which establishes that personal data on people's political opinions may be processed "for reasons of public interest" if "the operation of the democratic system in a member state requires that political parties compile" such personal data, and "provided that appropriate safeguards are established". In November 2018, a question was raised in the European Parliament on the data protection law adapting Spanish legislation to the GDPR, which allows "political parties to use citizens' personal data that has been obtained from web pages and other publicly accessible sources when conducting political activities during election campaigns". As the member of the European Parliament Sophia in 't Veld, who posed the question, highlighted: "Citizens can opt out if they do not wish their data to be processed. However, even if citizens do object to receiving political messages, they could still be profiled on the basis of their political opinions, philosophical beliefs or other special categories of personal data that fall under the GDPR". The European Commission was also urged to investigate the Romanian GDPR implementation over similar concerns. Further to the reported challenges with the national adaptation of the GDPR, in November 2019 the EDPS issued the first ever reprimand to an EU institution. The ongoing investigation into the European Parliament was prompted by the Parliament's use of the US-based political campaigning company NationBuilder to process personal data as part of its activities relating to the 2019 EU elections.

COMBATING DISINFORMATION
In contrast to the legally binding GDPR, the instruments aimed at combating disinformation rest primarily on self-regulation and on the principles of transparency, diversity and credibility of information. Transparency, in this view, means that it should be clear to users where the information comes from, who the author is and why they see certain content when an automated recommendation system is employed. Furthermore, a clearer distinction between sponsored and informative content should be made, and it should be clearly indicated who paid for an advertisement. The diversity principle is strongly related to strengthening so-called quality journalism, 4 to rebalancing the disproportionate power relations between media and social media platforms, and to increasing media literacy levels. Credibility, according to the EC, is to be achieved by entrusting platforms to design and implement systems that would indicate the trustworthiness of content and its sources. The CoP builds on the principles expressed in the Commission's Communication (2018a) through the actions listed in Table 1. For the purpose of this paper the actions are not presented in the same way as in the CoP. They are instead slightly reorganised under the following three categories: disinformation; political advertising; and issue-based advertising.

Table 1: Actions under the Code of Practice, reorganised by category

Disinformation
- To disrupt advertising and monetisation incentives for accounts and websites which consistently misrepresent information about themselves
- Limiting the abuse of platforms by unauthentic users (misuse of automated bots)
- Implementing rating systems (on trustworthiness) and report systems (on false content)
- To invest in technology to prioritise "relevant, authentic and authoritative information" in search, feeds and other ranked channels
- Resources for users on how to recognise and limit the spread of false news

Political advertising
- To clearly label paid-for communication as such
- To publicly disclose political advertising, including actual sponsor and amounts spent
- Enabling users to understand why they have been targeted by a given advertisement

Issue-based advertising
- To publicly disclose issue-based advertising, conditioned on developing a working definition of "issue-based advertising" which does not limit freedom of expression and excludes commercial advertising

In the statement on the first annual self-assessment reports by the signatories of the CoP, the Commission acknowledged that some progress has been achieved, but warned that it "varies a lot between signatories and the reports provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny". The European Regulators Group for Audiovisual Media Services (ERGA) has been supporting the EC in monitoring the implementation of the commitments made by Google, Facebook and Twitter under the CoP, particularly in the area of political and issue-based advertising. In June 2019, ERGA released an interim report as a result of the monitoring activities carried out in 13 EU countries, based on the information reported by the platforms and on the data available in their online archives of political advertising.
While it stated "that Google, Twitter and Facebook made evident progress in the implementation of the Code's commitments by creating an ad hoc procedure for the identification of political ads and of their sponsors and by making their online repository of relevant ads publicly available", it also emphasised that the platforms have not met a request to provide access to the overall database of advertising for the monitored period, which "was a significant constraint on the monitoring process and emerging conclusions" (ERGA, 2019, p. 3). Furthermore, based on the analysis of the information provided in the platforms' repositories of political advertising (e.g., Ad Library), the information was "not complete and that not all the political advertising carried on the platforms was correctly labelled as such" (ERGA, 2019, p. 3).
The EC still needs to provide a comprehensive assessment of the implementation of the commitments under the CoP after the initial 12-month period. However, it is already clear that the lack of transparency of the platforms' internal operations and decision-making processes remains an issue and represents a risk. If platforms are not amenable to thorough public auditing, an adequate assessment of the effectiveness of self-regulatory measures becomes impossible. The Recommendation (EC, 2018e) further advises member states to set up a national election network, involving national authorities with competence for electoral matters, including data protection commissioners, electoral authorities and audiovisual media regulators. This recommendation is further elaborated in the Action plan (EC, 2018c) but, because of practical obstacles, national cooperation between authorities has not yet become a reality in many EU countries.

KEY PRINCIPLES AND SHORTCOMINGS OF THE EUROPEAN APPROACH
This analysis has shown that the principles contained in the above-mentioned instruments, which form the basis of the European approach to combating disinformation and political manipulation, are: data protection; transparency; cooperation; mobilising the private sector; promoting diversity and credibility of information; raising awareness; and empowering the research community.
Data protection and transparency principles related to personal data collection, processing and use are contained in the GDPR. The requirement to increase the transparency of political and issue-based advertising and of automated communication is currently directed primarily towards platforms, which have committed themselves to label and publicly disclose the sponsors and content of political and issue-based advertising, as well as to identify and label automated accounts. Unlike in traditional media landscapes where, in general, media on the same territory were broadcasting the same political advertising and messages to their audiences, in the digital information environment political messages are targeted and shown only to specific profiles of voters, with limited ability to track which messages were targeted to whom. To increase transparency at this level would require platforms to provide a user-friendly repository of political ads, including searchable information on actual sponsors and amounts spent. At the moment, they struggle with how to identify political and issue-based ads, to distinguish them from other types of advertising, and to verify ad buyers' identities (Leerssen et al., 2019).
Furthermore, the European approach fails to impose similar transparency requirements on political parties to provide searchable and easy-to-navigate repositories of the campaign materials they use. A research project monitoring the campaign during the 2019 European elections showed that the parties, groups and candidates participating in the elections were largely not transparent about their campaign materials: materials were not readily available on their websites or social media accounts, nor did the parties respond to direct requests from researchers (Simunjak et al., 2019). This suggests that while it is relevant to require platforms to provide more transparency on political advertising, it is perhaps even more relevant to demand this transparency directly from political parties and candidates in elections.
In the framework of transparency, the European approach also fails to emphasise the need for political parties to officially declare to authorities, under a specific category, the amounts spent on digital (including social media) campaigning. At present, in some EU countries (for example Croatia, see: Klaric, 2019), authorities with competence in electoral matters do not consider social media to be media and accordingly do not require spending on social media and other digital platforms to be reported in a transparent manner. This represents a risk, as the monitoring of the latest EP elections has clearly shown that the parties spent both extensive time and resources on their social media accounts (Novelli & Johansson, 2019).
The diversity and credibility principles stipulated in the Communication on tackling online disinformation and in the Action plan require platforms to indicate the trustworthiness of information, to label automated accounts, to close down fake accounts, and to prioritise quality journalism. At the same time, no clear definition of, or criteria for, determining whether a piece of information or a source is trustworthy and whether it represents quality journalism are provided. Entrusting platforms with making these choices, without the possibility of auditing their algorithms and decision-making processes, represents a potential risk for freedom of expression.
The signatories of the CoP have committed themselves to disrupting advertising and monetisation incentives for accounts and websites which consistently misrepresent information about themselves. But what about accounts that provide accurate information about themselves yet occasionally engage in campaigns which contain disinformation? For example, a political party may use data to profile and target individual voters or small groups of voters with messages that are not completely false but are exaggerated, taken out of context or framed with an intention to deceive and influence voters' behaviour. As already noted, disinformation comes in many different forms, including false context and imposter, manipulated or fabricated content (Wardle & Derakhshan, 2017). While the work of fact-checkers and the flagging of false content are not useless here, in the current state of play they are far from sufficient to tackle the problems of disinformation, including in political advertising and especially in dark ads. The efficiency of online micro-targeting depends largely on data and profiling; therefore, if effectively implemented, the GDPR should be of use here by preventing the unlawful processing of personal data.
Another important aspect of the European approach is stronger sanctions in cases where the rules are not respected. This entails increased powers for authorities, first and foremost the DPAs, and increased fines under the GDPR. Data protection in the electoral context is difficult to ensure if cooperation between the different authorities with competence for electoral matters (such as data protection commissioners, electoral authorities and audio-visual media regulators) is not established and operational. While the European approach strongly recommends cooperation, it is not easily achievable at member state level, as it requires significant investment in capacity building and in providing channels for cooperation. In some cases, it may even require amendments to the legislative framework. The cooperation of regulators of the same type at the EU level is sometimes hampered by the fact that their competences differ in different member states.
The CoP also contains a commitment to "empowering the research community". This means that the CoP signatories commit themselves to supporting research on disinformation and political advertising by providing researchers with access to data sets, or by collaborating with academics and civil society organisations in other ways. However, the CoP does not specify how this cooperation should work, the procedures for granting access and the kinds of data covered, or which measures researchers should put in place to ensure appropriate data storage, security and protection. In their reflection on the platforms' progress under the Code, three Commissioners warned that the "access to data provided so far still does not correspond to the needs of independent researchers".

CONCLUSIONS
This paper has given an overview of the developing European approach to combating disinformation and political manipulation during an electoral period. It provided an analysis of the key instruments contained in the approach and drew out the key principles upon which it builds: data protection; transparency; cooperation; mobilising the private sector; promoting diversity and credibility of information; raising awareness; empowering the research community.
The principles of legacy media regulation in the electoral period are impartiality and equality of opportunity for contenders. This entails balanced and non-partisan reporting, as well as equal or proportionate access to media for political parties (be it free or paid-for). If political advertising is allowed, it is usually subject to transparency and equal-conditions requirements: campaign advertising expenditure needs to be broken down by type of media and reported to the competent authorities. The regulatory framework also requires that political advertising be properly labelled as such.
In the online environment, the principles applied to legacy media require further elaboration as the problem of electoral disinformation cuts across a number of different policy areas, involving a range of public and private actors. Political disinformation is not a problem that can easily be compartmentalised into existing legal and policy categories. It is a complex and multi-layered issue that requires a more comprehensive and collaborative approach when designing potential solutions. The emerging EU approach reflects the necessity for that overall policy coordination.
The main fuel of online political campaigning is data. Therefore, the protection of personal data, and especially of "sensitive" data, from abuse becomes a priority of any action that aims to ensure free, fair and plural elections. The European approach further highlights the importance of transparency. It calls on platforms to clearly identify political advertisements and who paid for them, but it fails to emphasise the importance of requiring candidates and political parties to provide a repository of all the material used in the campaign. Furthermore, the approach lacks a stronger requirement for political parties to report on the amounts spent on different types of communication channels (including legacy, digital and social media), as well as a requirement for platforms to provide more comprehensive and workable data on sponsors and spending in political advertising.
The European Commission's communication of the European approach claims that it aims to address all actors active in the electoral context, including European and national political parties and foundations, online platforms, data analytics companies and public authorities responsible for the electoral process. However, it seems that the current focus is primarily on the platforms, and in a way that enables them to shape the future direction of actions in the fight against disinformation and political manipulation.
As regards the principle of cooperation, many obstacles, such as differences in the competences and capacities of the relevant national authorities, have not been fully taken into account. Elections are primarily a national matter, so the protection of the electoral process, as well as the protection of media pluralism, falls primarily within the competence of member states. Yet, if the approach to tackling disinformation and political manipulation is to be truly European, there should be more harmonisation between authorities and between the approaches taken at national level.
While being a significant step in the creation of a common EU answer to the challenges of disinformation and political manipulation, especially during elections, the European approach requires further elaboration, primarily to include additional layers of transparency. This entails transparency of political parties and of other actors on their actions in the election campaigns, as well as more transparency about internal processes and decision-making by platforms especially on actions of relevance to pluralism, elections and democracy. Furthermore, the attempt to propose solutions and relevant actions at the European level faces two constraints.
On the one hand, it faces the power of global platforms shaped in the US tradition, which to a significant extent differs from the European approach in balancing freedom of expression and data protection. On the other hand, the EU approach confronts the resilience of national political traditions in member states, in particular if the measures are based on recommendations and other soft instruments.
European Commission. (2018). Commission Recommendation on election cooperation networks, online transparency, protection against cybersecurity incidents and fighting disinformation campaigns in the context of elections to the European Parliament. Retrieved from https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-cybersecurityelections-recommendation-5949_en.pdf