Reading between the lines and the numbers: an analysis of the first NetzDG reports

Approaches to regulating social media platforms and the way they moderate content have been debated within legal and social scholarship for some time now. European policy makers have been asking social media platforms for faster and more effective responses to the dissemination of hate speech and disinformation. After a failed attempt to push social media platforms to self-regulate, Germany adopted the Network Enforcement Act (NetzDG), which forces platforms to ensure that “obviously unlawful content” is deleted within 24 hours. It also obliges all platforms that receive more than 100 complaints per calendar year about unlawful content to publish biannual reports on their activities. This provision is designed to clarify how content is moderated and complaints are handled on social networks. The initial reports published after the NetzDG came into force reveal the law’s weak points, chiefly their low informative value. When it comes to important takeaways for new regulation against hate speech and better channelled content moderation, the reports do not live up to the expectations of German lawmakers. This paper analyses the legislative reasoning behind the reporting obligation, the main outcomes of the reports from the major social networks (Facebook, YouTube, and Twitter), and why the reports are unsuitable as grounds for further development of the NetzDG or any similar regulation.


INTRODUCTION
Good content, bad content? What seems reasonable to some might be offensive to others.
Depending on our social norms, laws and culture, we tend to categorise what we see on the internet through a process of content selection that either fits our expectations and needs or does not (Gillespie, 2018, p. 197). This shapes our perceptions of the content disseminated by users on social media platforms, and these inherent discrepancies are one reason why regulating online speech remains an unresolved issue for lawmakers. In the legislative process one question persists: how to improve content moderation in the light of long-established legal provisions. The minimum European lawmakers tend to agree upon is the legitimacy of taking down unlawful content. At the very least, this is what the German government assumed when it issued a law that made it mandatory for the largest social media platforms to remove obviously unlawful content within 24 hours. In doing so, Germany became one of the first countries to issue a so-called anti-"hate speech" law, and the law has been the target of fairly constant criticism since it was enacted. Not only do critics label it as unconstitutional (Gersdorf, 2017; Schulz, 2018), but it is also cited in the international discussion as a bad example of platform regulation (Special Rapporteur for the UN: Kaye, 2018). The NetzDG was meant to enhance the protection of users against hate speech and to provide more clarity on the way platforms handle and moderate unlawful content (Wischmeyer, 2018, p. 7). However, as this paper will show, there is still no certainty about either of these goals, mainly because the reports do not provide well-defined results. Many expected the published reports to offer more substantial insights into content moderation policies, an expectation that seems to have been disappointed (Gollatz et al., 2018).
The main aim of this article is to provide an analysis of the reports published by the major social media platforms, including insights into the implementation of the NetzDG, while focusing on the obligation to ensure user-friendly complaint procedures. The analysis shows that it is not enough to require companies to publish transparency reports if the information they contain has no real informative value. This article shows that the law might have incentivised the platforms to remove hate speech faster, but that there is no certainty about its effect due to the lack of substantial information in the reports. Facebook, for example, does not fully comply with the obligation to supply an easily recognisable complaint procedure; therefore its number of complaints cannot be considered conclusive. After introducing the NetzDG in general, and the reasons why it is deemed unconstitutional in the German debate, the present article dives deeper into the rationale of the NetzDG. This law can serve as an example of "new-school speech regulation", that is, a type of regulation aimed at the owners of the digital infrastructure instead of the speakers themselves (Balkin, 2014, p. 2298). Starting from the published complaint figures, this article examines the implementation of complaint tools and eventually shows that the reports constitute no factual ground for a re-evaluation of the NetzDG or for a similar regulatory project, even though the reporting obligation is a key provision. The law leaves room for interpretation with regard to its implementation, and this has somewhat hollowed out the obligation to publish transparency reports, as the figures do not reflect the full picture. The reporting obligation under NetzDG might, however, serve as a counterexample in the discussion on transparency and the corresponding reports.
It shows that we need to formulate transparency rules more clearly so that the data collected can serve the purpose of iteration for both the companies and the state. In addition, German law now forces social networks to become more active. They are obliged to implement procedures that ensure obviously unlawful content is deleted within 24 hours of receiving a complaint. If there is any doubt regarding a takedown decision, the procedure may take up to seven days. After that deadline, a final decision on the lawfulness of a post must be reached and unlawful content needs to be removed, that is, either blocked or deleted. The fines for a breach of this obligation can reach up to €50 million. In addition to complying with this operational provision, social media platforms are obliged to publish biannual reports, which will be addressed and analysed below.

MAIN ALLEGATIONS REGARDING THE VIOLATION OF CONSTITUTIONAL LAW
In order to fully appreciate the importance of the published reports, it is helpful to know more about the context of the NetzDG. This law has been under attack ever since its first draft was made public. Not only was it perceived as an ad hoc legislative reaction; more importantly, it is considered a loophole that transfers public responsibility to a private actor. It has been criticised from many perspectives, and an exhaustive description would go beyond the scope of this article (cf. Schulz, 2018, passim). However, a brief overview is necessary to comprehend the context in which the first NetzDG reports were published. In short, a wide array of scholars, politicians and activists have demanded the abrogation of the NetzDG or a revised version in the near future. Liberal politicians have opposed the law in court and there have been counter-proposals (e.g., the Green Party's proposal from January 2019). The criticism did not diminish over the course of 2018, although the consequences for free speech were not as severe as expected; instead, the numbers below show that the NetzDG did not really have an impact on content moderation. Nonetheless, a law has no legitimation to stay in effect if it is deemed unconstitutional, which is why it is still expected to be revised. The following passage gives a general overview of the allegations made against the NetzDG. The procedural points of criticism are omitted because they are more technical, inherent to the German legal system, and add little to the present argument. Others relate to human rights infringements and are partly transferrable to similar law projects in other jurisdictions (cf. Funke, 2018); they are, therefore, more relevant to this paper. The main focus of the (non-procedural) criticism relates to possible violations of freedom of speech in various ways.
The de facto obligation for social networks to delete manifestly unlawful content within 24 hours has raised questions pertaining to the potential overblocking of content and to the privatisation of the judiciary due to the interpretation and application of criminal law by private companies. These two elements combined can in turn have chilling effects on freedom of speech, and we will take a closer look at them below. First, however, one must bear in mind that the question of whether the content targeted by NetzDG may be deleted is not my main concern: first, because such content is illegal, and second, because takedown is still the most effective tool social media platforms make use of when it comes to hate speech (Citron & Norton, 2011, p. 1468; Klonick, 2018, p. 12). The main source of scepticism is the shift of responsibility towards private companies as a corollary of the obligations that have been laid upon them (Guggenberger, 2017b, p. 2582). In sum, scholars agree that the NetzDG is not really an exemplar of methods for fighting hate speech online (Gersdorf, 2017, p. 447; Guggenberger, 2017a, p. 101; Balkin, 2018, p. 28).
The German Basic Law allows lawmakers to restrict fundamental rights under certain conditions, and freedom of speech may be constrained by general laws according to art. 5 (2) Basic Law. This includes criminalising offensive speech, and it can have a horizontal effect between private parties when private actors require one another to observe the law. This is acceptable as long as it does not result in overblocking. Overblocking is the term used when content is deleted or blocked for no substantial reason, because an incentive arises to delete immediately rather than perform more thorough checks (Richter, 2017; Holznagel, 2018, p. 369). Criminal offences related to the protection of honour (such as libel) mostly overlap with the categories used by social networks in their "community guidelines". However, this overlap is not preserved when it comes to the specific elements of a criminal offence. One would need to know and practise (national) criminal law and consider the context of the generated content (Wimmers & Heymann, 2017, p. 100). These various parameters make it difficult to parse unlawful content in a short timeframe, and that is what the social media platforms have been insisting upon in recent years when justifying the slow removal of hate speech. The problem lies in the risk that there is simply not enough time to make accurate takedown decisions, coupled with the high pressure of being fined. In this scenario, the net result could be overblocking (Kaye, 2018, p. 7). In accordance with section 3 (2) NetzDG, social networks must provide a procedure that ensures the deletion of obviously unlawful content within 24 hours of a user complaint. If they fail to do so, as mentioned earlier, they risk a fine of up to €50 million, which makes the incentive for decisions in favour of takedowns stronger than before the implementation of the NetzDG (Schiff, 2018, p. 370).
However, the fear of overblocking does not seem to have materialised when looking at the takedown numbers in the reports published by the companies concerned by the NetzDG (Wischmeyer, 2018, p. 20).
By contrast, the critique of a substantial shift of responsibility from the judiciary to the platforms themselves is still under discussion. This is mainly due to the wording of sec. 3 (2) Nr. 2 NetzDG, that is, the duty to delete "content that is manifestly unlawful". Generally, content-related regulation has to be as neutral as possible with regard to the opinions expressed, i.e., it is subject to a strict proportionality test (the so-called "Wechselwirkung"). The scope of application of a content-targeting law must be sufficiently precise to avoid too much room for interpretation, which could, in the case of the NetzDG, result in an overly broad definition of legal terms and an unwarranted removal of content. Using an unspecified legal term such as "manifestly", while the law is applied by a private party and not by a judge, puts at risk the judiciary's power to interpret and apply the law. Clear legal definitions and specific criteria are necessary to constrain the platforms' discretion (Nolte, 2017, p. 556-558; Liesching, 2018a, p. 27; Wischmeyer, 2018, p. 15-16; Belli, Francisco, & Zingales, 2017, p. 52; Nunziato, 2014, p. 10); leaving the interpretation of a key term of the bill unspecified is considered unconstitutional (FCC: BVerfGE 47, 109, 121; Wimmers & Heymann, 2017, p. 97-98; Reuter, 2018, p. 86) because this kind of interpretation is a core function of the judiciary.
Unspecified legal terms are usually only filled with meaning by court rulings. Until then, they can be loaded with interpretations which might be revised later. In order to decide whether a statement is still within the boundaries of the law and therefore still protected by freedom of speech, a judge has to examine the requirements mentioned above and potentially balance the fundamental rights of both parties. The result can then be applied by private parties as a standard or a guideline. What is "manifestly" illegal? The NetzDG's explanatory memorandum defines it as follows: "Content is manifestly unlawful if no in-depth examination is necessary to establish the unlawfulness within the meaning of sec. 1 (3)." This sentence does not explain at which point an examination is considered "in depth"; it leaves the original question of how to define "manifestly" unanswered. Still, users' right to take the platforms' decisions to court remains intact when social networks delete user-generated content under NetzDG. This possibility makes it unlikely that the NetzDG will be abrogated on the grounds of the 'privatisation' argument. Nevertheless, the complexity of this assignment, deciding whether content is unwanted but perhaps not unlawful, is a core element of the public debate around content moderation (Kaye, 2018, p. 4). Because of the important implications for users' freedom of speech, one cannot help but wonder why the German lawmakers delegated this task to social media platforms instead of enhancing their own law enforcement capacities (Schmitz & Berndt, 2018, p. 7; Wischmeyer, 2018, p. 15-16; Buermeyer, 2017).

THE OBLIGATION TO IMPLEMENT A COMPLAINT PROCEDURE
All in all, German lawmakers were aiming for a faster response from social networks when content is reported as unlawful. This resulted in the obligation to ensure a procedure that guarantees a reaction to "manifestly unlawful content" within 24 hours of receipt of the complaint. As mentioned above, this provision is one of the most criticised because of the uncertain effects it could have on free speech (Keller, 2018, p. 2). This topic is rightly at the centre of the debate because scholars are only beginning to learn more about the effects of these rules on the behaviour of both the platforms and the users (Gollatz et al., 2018). However, this paper focuses more on the way social networks have implemented the obligation to ensure a complaint procedure. This will later contribute to measuring the informational value of the reports (see infra, section 4).
According to sec. 3 (1) NetzDG, social networks have "to supply users with an easily recognisable, directly accessible and permanently available procedure for submitting complaints about unlawful content." The implementation of this obligation is decisive for the relevance of the reports when it comes to conducting meaningful evaluations and regulatory impact assessments. I evaluated the significance of the reports in view of the accessibility of the complaint tool for users and the comparability of the reported figures. As mentioned above, the number of complaints filed could be significant for the regulatory goals only if the data had been collected and presented uniformly, that is, if the provision had been implemented identically regardless of the company carrying out its legal obligations. One needs to bear in mind (again) the explanatory memorandum of the NetzDG, which states that social networks have to provide a "user-friendly procedure for submitting complaints about unlawful content". Furthermore, the procedure must be "easily recognisable, immediately accessible and always available". The memorandum does not provide any further expectations or provisions concerning the implementation of the complaint tool. The second half of the memorandum's section on the complaint procedure contains the requirements for the way complaints are handled once submitted by a user. It does not elaborate any further on the way social networks should design the complaint procedure as such, leaving the concrete implementation of complaint procedures, for the most part, at the platforms' discretion. This aspect was probably not expected to be as decisive as it now appears, given that the reports show that at least one platform violates this provision.
The criteria mentioned above regarding the expected "user-friendly procedure" will therefore be at the centre of the remarks below on the informative value of the reports.

THE NETZDG REPORTING OBLIGATION
In order to gain a better understanding of the way social networks moderate user-generated content and how they decide whether or not to remove it, the German lawmakers included a biannual reporting obligation. According to section 2 NetzDG: "Providers of social networks which receive more than 100 complaints per calendar year about unlawful content shall be obliged to produce half-yearly German-language reports on the handling of complaints about unlawful content on their platforms, covering the points enumerated in subsection (2), and shall be obliged to publish these reports in the Federal Gazette and on their own website no later than one month after the half-year concerned has ended. The reports published on their own website shall be easily recognisable, directly accessible and permanently available." The NetzDG's explanatory memorandum states that the reporting obligation is required "in order to create the necessary transparency for the general public", a requirement that was similarly formulated a while ago by scholars and activists (Gillespie, 2018, p. 199). The secondary goal of the reporting obligation is to provide numbers and facts "necessary in the interest of an effective impact assessment". The German Parliament is currently discussing a revision of the law following proposals that range from a complete abrogation of the law to light adaptations. Changes made to the NetzDG will be based, at least partly, on the reports.
The whole NetzDG project also serves as an example (for better or worse) for similar legislative undertakings. The reports are, therefore, central to a better development of the legislative tool, not only at the national level, but also in answering the challenge posed by content moderation in general. As mentioned above, the German approach was quite a push forward due to political circumstances and public pressure. There is so far no equivalent in other jurisdictions, and no standard solutions or best practices have been established across borders because, when it comes to balancing content moderation and freedom of expression, the issues that arise are too numerous and too varied.
Hate speech, fake news, copyright infringements, to name just a few of the issues that arise, are often confused in the public debate, and their respective definitions differ from one country to another. Considering that social media platforms act globally, a one-size-fits-all solution would reduce costs. At the same time, such an extensive approach could threaten freedom of expression through overly broad scopes of application, leading to more restrictive regimes. To design a new regulatory framework, it is, therefore, necessary to monitor the effectiveness of its application. On that account, implementing a reporting obligation in sec. 2 NetzDG was necessary to improve this type of regulation (Eifert, 2017, p. 1453). The memorandum also states that producing the reports shall "ensure a meaningful and comprehensive picture of how they [the social networks] deal with complaints". As sec. 2 (2) NetzDG determines the minimum requirements for the reports, the memorandum justifies the reporting obligation with the special role of social networks. They are "crucial to the public debate" and must take on their "increased social responsibility". Rather than providing numbers without context, the reports are supposed to help understand the connection between the grounds on which social networks delete or block unlawful content and the provisions of the law. Unfortunately, this expectation was not fulfilled.
The minimum requirements for the reports include specific points under sec. 2 NetzDG, such as providing the "number of incoming complaints about unlawful content" (nr. 3), the "number of complaints for which an external body was consulted" (nr. 6) and the "number of complaints in the reporting period that resulted in the deletion or blocking of the content at issue" (nr. 7). The numbers listed under nr. 7 need to be broken down according to the reasons for the specific complaint, which makes them particularly interesting with regard to the regulatory goal. The explanatory memorandum of the NetzDG is quite brief on that point: it merely mentions "the interests of transparency and the effectiveness of the complaint management" as the reason for that specific point and then refers to the comments on sec. 3 NetzDG ("Handling of complaints about unlawful content"). This part nevertheless reveals the tight connection between the reports and the handling of complaints. As a result, the obligation to report mainly serves to enhance transparency, which goes hand in hand with an effective impact assessment of the new law, as stated in the memorandum, and the long-term goal of developing this regulatory framework sensibly. These goals are important criteria for the evaluation of the published reports. It will become clear later in this article that they were perhaps underrated and minimised by the platforms, as suggested by the title.

MAIN RESULTS OF THE FIRST ROUND OF REPORTS
According to sec. 1 (1) NetzDG, only social networks with more than two million users have to comply with its rules and, therefore, with the reporting obligation in sec. 2. In view of the user numbers on the largest social networks, the minimum of 100 complaints per calendar year (the threshold for having to publish reports) was easily reached by Facebook, YouTube and Twitter. Their reports demonstrate many similarities in the way they handle content moderation, but they also feature notable differences as far as the numbers of complaints are concerned. The three reports from Facebook, YouTube and Twitter were analysed for this article shortly after their publication, in August 2018. The overall result, as will be explained below, is that provisions for these types of reports need to be precise if one wishes to gather meaningful data. In substance, the reports show that social media platforms tend to moderate content on the grounds of their own community guidelines more than on the basis of national criminal law. I presume the reason is that this allows them to react on a global scale rather than a national one. Furthermore, social media platforms tend to use terms and tonalities in their community guidelines that are very similar to the vocabulary used in the NetzDG, making it rather unclear to the user where the differences lie (cf. Celeste, 2018).
That similarity between the reports noted, the divergence between the complaint figures is quite significant.

COMMUNITY GUIDELINES ARE PRIORITISED
As the reports show, the content review process follows a two-step approach on all three platforms. After being notified of a user complaint, the first check is made on the grounds of community guidelines or standards (both terms are used synonymously hereinafter). If the content violates these internal rules, the reviewer takes it down. Only if the result of that review is negative, and if the user also submitted a complaint under NetzDG provisions (not only community guidelines), is the content further checked for NetzDG infringements. It remains unclear how much content taken down as hate speech under community guidelines could also have been blocked for violating German criminal law.
To submit a complaint under the NetzDG, the user either has to tick an additional NetzDG box (in the case of YouTube and Twitter) or, in the case of Facebook, go to the "Help Centre" and follow a specific NetzDG complaint link. In the next subsection, I will take a closer look at how each platform implemented the NetzDG complaint procedure and the subsequent effects on its complaint numbers. The reports do not state whether complaints were examined for NetzDG violations even if they were not flagged as such by users. Nevertheless, it appears that YouTube, Twitter and Facebook all prioritise their own community guidelines, since none of them offers the option to immediately submit a complaint under NetzDG (which is not mandatory under sec. 3 (2) NetzDG).
A reason for this prioritisation could be the subsequent takedown options. So-called unwanted content, that is, content that violates community guidelines, will be deleted globally, whereas content that is unlawful under German penal law (and therefore subject to removal under NetzDG) can only be blocked in Germany. Considering that content might be illegal in several countries, deleting it under community guidelines might be more effective than taking it down for a single country, with the possibility of repeating this action in another country further down the line. This raises questions of freedom of expression in privately owned communication spaces, especially with regard to collateral censorship (Eifert, 2017, p. 1452; Balkin, 2018, p. 6). Although from a European perspective there is a big overlap between unlawful and unwanted content, the definitions do not completely intersect. From a communications science perspective, one may ask which consequences the NetzDG provisions have on the way social networks formulate their community guidelines on a global scale, because they could adjust their own policies to fit the broadest definition of hate speech (Gollatz et al., 2018, p. 6). This also suggests that adapting the community guidelines to national (criminal) law in order to decrease the differences between community rules and German law could, in turn, have a massive influence on another country's regulation of social networks. However, there is no certainty that platforms will follow that path, because the cost of adapting to national legislation could be too high. Instead, they could broaden their community guidelines to cover multiple definitions of hate speech, eventually restricting more speech than necessary under the respective national regulations (Eifert, 2017, p. 1452; Belli, Francisco, & Zingales, 2017, p. 46; Koebler & Cox, 2018).
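The two-step review flow described in the reports can be made concrete with a schematic sketch. The following Python snippet is purely illustrative: the function names, the string-matching "rules" and the placeholder terms are my own assumptions, not platform code, and the legal check is in reality a context-dependent review by trained human moderators.

```python
# Illustrative sketch of the two-step review flow described in the reports.
# All function names and matching rules are invented for illustration;
# real reviews are performed by human moderators, not by string checks.

def violates_guidelines(content: str) -> bool:
    # Placeholder for a check against the global community guidelines.
    return "guideline-violation" in content

def unlawful_under_german_law(content: str) -> bool:
    # Placeholder for a (human, context-dependent) legal review under
    # the offences listed in sec. 1 (3) NetzDG.
    return "netzdg-violation" in content

def review(content: str, flagged_under_netzdg: bool) -> str:
    # Step 1: community guidelines are checked first; a hit leads to
    # global deletion, and the NetzDG check never takes place.
    if violates_guidelines(content):
        return "delete globally (guidelines)"
    # Step 2: German criminal law is only considered if the guidelines
    # check was negative AND the user explicitly flagged under NetzDG.
    if flagged_under_netzdg and unlawful_under_german_law(content):
        return "block in Germany (NetzDG)"
    return "keep online"
```

The sketch makes the reporting gap visible: content removed in step 1 never reaches the NetzDG check, so it never appears in the NetzDG complaint statistics, even where it would also have been unlawful under German criminal law.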

CONTENT MODERATION: HOW?
Under the reporting obligation of sec. 2 NetzDG, it is mandatory to describe the personnel and organisational resources deployed to comply with the provisions of the law. The reports show that human content reviewers are by no means replaced by machines (Delort et al., 2011, p. 24).
None of the three social networks examined for this paper relies solely on algorithms or artificial intelligence to recognise and review unlawful content or to handle user complaints. Given the amount of data uploaded by users, filters and other technologies are in use and undergo constant optimisation. Yet, in order to properly review content that might be unlawful, platforms still heavily depend on human moderators (Roberts, 2016; Matsakis, 2018).
The role of moderators in the review process is even more important when it comes to evaluating content that does not violate community standards but is potentially unlawful (see infra, section 3.1). Cases that do not violate community guidelines but might be punishable under German criminal law are generally more complex and, as a result, require more sophisticated review. The cost of content moderation therefore increases when it has to comply with the law rather than with community guidelines, because applying legal provisions is more complex than applying internal rules. From a cost-benefit point of view, platforms would prefer to minimise staff overheads (Read, 2019), which is why the way they address this challenge is worth paying attention to.
Through a partnership with a German company named Arvato, Facebook has a team specially dedicated to NetzDG complaints, numbering approximately 65 employees (as of August 2018).
The staff speak German and are trained to handle complaints that fall within the scope of section 1 (3) NetzDG. As an initial step, the "Community Operations" team reviews the reported content to determine whether or not it violates Facebook's Community Standards. If the issue is taken further, the second part of the two-step approach begins: a so-called "Legal Takedown Operation" team takes over, whose members are specially trained to review content for potential illegality. The report does not mention any software that would support the work of the Facebook reviewers. This might indeed not be necessary, since the number of complaints under NetzDG seems quite manageable (see infra, section 3.3). In general, Facebook makes use of AI to identify content that clearly violates its community standards (Koebler & Cox, 2018), but relies on approximately 15,000 moderators worldwide to review unwanted content (Newton, 2019), as hate speech is still difficult to identify automatically with any precision (Koebler & Cox, 2018).
YouTube has integrated tools such as content filters into its upload process. Its NetzDG report states that every uploaded video is already filtered for unlawful content, and YouTube interprets this measure as additional compliance with the provisions of sec. 1 (3) NetzDG. YouTube had to deal with copyright infringements long before the hate speech problem became so virulent and has therefore been using its filtering software, Content ID, to manage copyright issues. Since June 2017, YouTube has also integrated machine learning in order to optimise the filtering of illegal content (YouTube, 2017). Furthermore, similar to Facebook's approach, there is a team dedicated to NetzDG-flagged content, that is, content that does not violate community guidelines but is reported by a user or a complaints body as a NetzDG violation. For the time being, the team dedicated solely to NetzDG complaints numbers approximately 100 employees.
According to Twitter's report, over 50 people work on NetzDG complaints. The report does not include any further information on the technological tools used to support that team in any way.
Given Twitter's massive deletion of (presumably) fake accounts in July 2018 on the one hand, and the immense volume of content constantly uploaded on the other, it is quite likely that Twitter also uses filters to detect unwanted content. The company does not disclose which technological resources it uses to find and review unwanted content such as hate speech.
However, it probably makes use of algorithms and machine learning to support human reviewers and reduce the amount of labour involved in the process. Such filters could also be used by Twitter to proactively detect content that is potentially unlawful under NetzDG, but it does not mention them in its report (this information is not mandatory under the NetzDG reporting provisions).
Since automated technologies are not yet able to detect cases involving sensitive or context-related issues that can be classified as hate speech, satire, awareness- or education-related material, or even politically sensitive content, they are unable to handle all types of complaints. Although studies show that the technology is getting better at detecting hate speech and offensive language using machine learning (Gaydhani et al., 2018), researchers tend to agree that so far no technology is capable of recognising hate speech beyond very clear cases of offensive speech (Gröndahl et al., 2018, p. 3). YouTube's statement on this subject is clear and straightforward: "Algorithms are not capable of recognising the difference between terror propaganda and critical coverage of such organisations or between inciting content and political satire." The inability of filters to recognise unwanted content could be the cause of unjustified content removal. Several cases, including satire and innocuous pictures, were left uncommented on by the platforms concerned; these could be the result of human error, but perhaps also of algorithmic and intelligent systems' failure to distinguish unwanted content from uncontentious content. These cases have been discussed in the media, but none of the platforms disclosed on which grounds the content was removed (Schmitz & Berndt, 2018, p. 31). Taken together, the point on the "resources deployed to comply with the NetzDG" constitutes only one of many sources of speculation around the use of technology in content moderation, and the reports are not specific enough to draw further conclusions in that area.

NUMBER OF COMPLAINTS
The biggest divergence between the reports lies in the number of complaints received by Twitter on the one hand and Facebook on the other, especially since Facebook has the most users and the fewest complaints (in both absolute and relative terms). As already mentioned, the implementation of a complaint tool for users is crucial to the numbers that later constitute the central part of the report. After reading the reports and exploring the complaint mechanisms, the correlation between the flagging tool and the complaints filed seems obvious, but it should be analysed carefully. The way section 3 (1) NetzDG is implemented constitutes the only meaningful difference between the social networks, and it is why Facebook's numbers are significantly lower. Regarding the implementation of a NetzDG complaint tool, two approaches can be observed: either the NetzDG complaint procedure is incorporated within the flagging tool of the social network or it is located somewhere else. In the latter approach, the usual flagging tool does not include the option "complaint under NetzDG" at first glance. While Google and Twitter chose to include the NetzDG complaint in their flagging tools (visible in the first step described above), Facebook placed access to its NetzDG complaint procedure separately from the content, under its imprint and legal information.
To be more specific, Facebook's complaint form under the NetzDG is not incorporated into the feedback function next to the contentious post. Users who want to report third-party content will first be offered the report categories under Facebook's community guidelines when clicking on a button to "give feedback". Categories include nudity, violence, harassment, suicide or self-harm, fake news, spam, hate speech and illegal sales. In addition to Facebook's reporting tool, a complaint under NetzDG can be submitted by using an external link located next to Facebook's "Impressum" (the company's imprint and legal information). This raises the question of whether or not this type of implementation of sec. 3 NetzDG is sufficient, which I examine below. Before analysing the consequences of this implementation in the next section, one has to bear in mind that the NetzDG does not oblige social networks to incorporate their respective complaint procedure into pre-existing complaint mechanisms.

INHERENT BIAS DUE TO A DIVERGENT IMPLEMENTATION
According to sec. 3 (1) NetzDG, social networks have to implement an "easily recognisable, directly accessible and permanently available procedure for submitting complaints".

USER-FRIENDLY COMPLAINT PROCEDURE
As aforementioned, to comply with sec. 3 (1) NetzDG, platforms do not have to connect existing reporting tools to the NetzDG procedure. The latter must, however, be user-friendly, that is, an "easily recognisable, directly accessible and permanently available procedure". The legislative memorandum does not provide further details on how these requirements shall be translated in the design of the complaint procedure. Hence, the question is not about the margin of discretion, but whether or not a procedure such as Facebook's is meeting these criteria. First of all, the link to the NetzDG complaint form is not easily recognisable for users since it is located far away from what users are able to immediately see when using the platform's complaints procedure. When a user sees a post that he or she believes to be unlawful, the feedback tool alongside it only shows the categories of the community guidelines and does not mention the possibility of reporting it under NetzDG provisions. The detour via an external link located next to Facebook's "Impressum" can hardly be described as "easily recognisable". Providing a NetzDG link next to a website's imprint is easily recognisable if you are looking for Facebook's general company information, but not if your goal is to report hate speech. That being said, it is "permanently available" when a user accesses Facebook in Germany.
Looking at the low volume of complaints under NetzDG in Facebook's case, one cannot help but connect the remote location of the complaint link with the numbers in Facebook's report. 886 cases in the first half-year of 2018 do not correlate with the high number of users and Facebook's constant struggle with unwanted content when it comes to hate speech (and other contentious posts) (Pollard, 2018). Libel, defamation or incitement to violence have been a constant issue for the world's largest social network (Citron & Norton, 2011, p. 1440), and Facebook only recently started to uncover some of its takedown rules (Constine, 2018). The latter have been kept secret for a long time, opening up speculation as to Facebook's real takedown policies. Social media platforms are often criticised for their lack of transparency when it comes to policing speech (Citron & Norton, 2011, p. 1441; Ranking Digital Rights, 2018 Index, Section 6).
Every Facebook user has access to a reporting tool (the feedback button next to a post), regardless of the NetzDG provisions, but he or she might not be aware of the additional possibility provided by the NetzDG -which makes this case quite special. Not only is this additional complaint procedure well hidden, but once a user is presented with the NetzDG complaint form (on Facebook), he or she will be warned that any false statement could be punishable by law (even though this rule does not apply to statements made to private parties). On the one hand, this might discourage people who wish to complain for no genuine reason and reduce the costs of unnecessary review loops. On the other hand, it could prevent users from reporting potentially unlawful content, which is cause for concern as it may result in chilling effects. The notion of chilling effects comes from US First Amendment scholarship and was introduced by a US Supreme Court ruling in 1952. The "chilling effects" concept essentially means that an individual will be deterred from a specific action under the "potential application of any civil sanction" (Schauer, 1978, p. 689). If a user -who is probably unsure of the lawfulness of third-party content -tries to use the complaint procedure and is confronted with the warning that any false statement could lead to legal steps, deterrence seems likely. Again, Facebook's NetzDG report is too short and unspecific to infer anything about chilling effects from the raw numbers alone.
This risk is nevertheless not to be underestimated, and all these elements combined suggest that Facebook is pushing its users away from the NetzDG complaint procedure.

BYPASSING THE LEGISLATIVE GOAL?
The concrete implementation of this complaint procedure is decisive for its usability, but it is also relevant for the informational value of the figures featured in the reports. This raises the question of whether implementing the complaint procedure the way Facebook did could mean bypassing the legislative goal of the NetzDG. The implementation of the complaint procedure should in the first place -as stated above -protect users, not serve reporting purposes. A company's compliance with this obligation is therefore related only collaterally to the informational value of the reports. Nonetheless, it is important when it comes to evaluating how the law contributes to the enhanced protection of users. The arguments above have shown that although social networks have fulfilled their transparency obligation by publishing reports, there is actually very little that we can conclude from them. The Facebook case makes it even more difficult to use the figures as the foundation for further development. Compliance alone does not lead to insightful data. Speculating on the reasons why Facebook decided to keep users away from the NetzDG complaint procedure will not lead anywhere, but what is certain is that the law was implemented in a rather symbolic way. One may raise doubts about how precise the formulation of sec. 3 (1) NetzDG regarding the implementation of an "easily recognisable" complaint procedure is. However, it would be too easy to blame it on the wording alone.
The argument was made that Germany could not expect platforms to "self-regulate in its [in Germany's] interest" (Fagan, 2017, p. 435) because of the relatively small size of its market on a global scale. On the contrary, if a company wants to do business in several countries it needs to respect each country's laws, especially if the laws in a specific country are, in principle, in line with the company's own guidelines -just as most of the offences enumerated in the NetzDG overlap with the "hate speech" category of platforms' community guidelines. In the case of Germany, there is no legal obligation to prioritise the relevant legal norms over the platform's community guidelines (as long as the unlawful content will be taken down), but that does not set aside the obligation to implement a user-friendly NetzDG complaint procedure. The subsequent question is: does Facebook's failure to implement an easily-recognisable complaint procedure according to sec. 3 (1) NetzDG mean that it is also bypassing the general legislative goal?
The answer is that, even though Facebook's complaint procedure is very likely to violate the legal provision (see supra, section 4.1), it has -in sum -achieved the outlined objective, that is, to remove hate speech more quickly. The German government's overall goal in 2017 was to force social networks to respond faster to alleged offences on their platforms. That is why the time span for takedown decisions is limited to 24 hours and why any breach of the obligation to ensure this type of fast-track procedure would be severely fined. The legislator wanted to remove unlawful content from sight as quickly as possible while ensuring the users' right to due process. These reports confirm there was a need to address the issue of verbal coarsening and to protect digital communication spaces from hate speech. All three examined platforms name hate speech as the first source of complaints under the NetzDG. The social networks all implemented additional reporting tools (as part of the mandatory procedure), deployed additional resources, and responded to the majority of complaints within 24 hours. To that extent the main requirements were met. The entry into force of the NetzDG led to larger and more specialised reviewer teams who could potentially provide a more granular review procedure. Under these circumstances it would be wrong to conclude that any of the platforms explicitly bypassed the primary legislative goal. Facebook's implementation nevertheless undermines the significance of the reports, since the numbers produced cannot be taken into account for an advanced evaluation.

CONCLUSION
The discussion around content moderation by social media platforms and its regulation is still unresolved on many levels. More work needs to be done on the relation between private rules for content moderation and national laws, including the question of prioritisation. This is also true for the enforcement of rules and the role of non-human content review in that process. We have seen that platforms cannot rely solely on technology, such as upload filters, to carry out the task of content moderation, since the technology is still not fully capable of recognising hate speech. Although most of the criticism of the NetzDG with regard to constitutional law remains valid, the reports analysed in this paper show that there are other aspects that deserve attention, such as the implementation of complaint procedures. Unfortunately, despite the data they contain, the reports analysed constitute no reliable basis for screening and further development of this obligation. This is mainly due to the fact that the biggest player, Facebook, has dodged the obligation of creating an accessible and user-friendly NetzDG complaint procedure, preferring to manoeuvre users towards its own feedback form featuring its own categories of community standards. There was no change in this regard in the second round of reports. As long as platforms prioritise their own community rules, the effects on online speech remain more or less the same as before the coming into force of the NetzDG, making it almost impossible to truly evaluate the impact of such regulation.
We can nonetheless wonder about the added value of an additional complaint tool within the platform's feedback mechanisms. Since (most) social media platforms operate globally, moderating content on the basis of global community guidelines is more cost-effective than if it were conducted on the basis of national regulation. Thus, the NetzDG reports could lead to the conclusion that this type of additional feedback tool, which would vary from country to country (because of national regulations), is ineffective and therefore unnecessary. As mentioned in the last section, the NetzDG did push the platforms to eventually take action against hate speech, an achievement which should not be downplayed. Perhaps ensuring a faster review of user content by a more specialised content moderator is a sufficient goal for this type of law. The only conclusion to be drawn is that, for the time being and for the sake of acting on a global scale, social media platforms will prioritise their community guidelines when it comes to moderating user content.
I would like to thank Stephan Dreyer and Nikolas Guggenberger for their valuable feedback on an earlier draft of this paper. Thank you to the peer reviewers for reading and evaluating this article.