Notice and takedown under the GDPR: an operational overview

Daphne Keller, The Center for Internet and Society, Stanford University, United States of America

PUBLISHED ON: 29 Oct 2015

This is the third of a series of posts about the pending EU General Data Protection Regulation (GDPR), and its consequences for intermediaries and user speech online. In an earlier introduction and FAQ, I discuss the GDPR’s impact on both data protection law and internet intermediary liability law. Developments culminating in the GDPR have put these two very different fields on a collision course -- but they lack a common vocabulary and are in many cases animated by different goals. Laws addressing concerns in either field without consideration for the concerns of the other can do real harm to users’ rights to privacy, freedom of expression, and freedom to access information online.

Cross-posted to Stanford Law School’s CIS Blog

Disclosure: I previously worked on "Right to Be Forgotten" issues as Associate General Counsel at Google.

This is the third post in a series analysing the EU’s pending General Data Protection Regulation (GDPR). The previous post reviewed high-level problems with the GDPR’s process for erasing content posted online by internet users. The process disproportionately burdens the rights of internet users to seek and impart information online. Those rights could be much better protected, without sacrificing remedies for people whose privacy has been violated, if the GDPR treated erasure of user-generated content separately from erasure of data collected directly by companies based on user behaviour and used for back-end processes such as profiling. The GDPR could then apply standard, well-tested procedural checks to limit erroneous or bad faith removals of lawful user-generated content.  

This post goes into more detail about the Regulation’s exact language and the removal process. It will walk through each step an intermediary would follow to erase user-generated content based on the GDPR’s Right to Be Forgotten provisions.  

For the person requesting content removal on the basis of privacy or data protection rights, the removal process will be something of a black box – though increasing calls for transparency could change that somewhat in practice. From the perspective of the person whose speech rights are affected, it’s an even blacker box. In many cases, the speakers may not know that their content has been challenged and taken down; if they notice that it’s gone, they won’t know why. From the perspective of the people seeking information online, the process is entirely opaque. They’ll never know what they’re missing.   

From the intermediary’s perspective, the process is an operational challenge, requiring an ongoing investment of time, personnel, legal analysis and engineering work. Not all intermediaries will choose to make that investment, or to go through the process described here. Financial incentives for companies to simply honour all removal requests, or to err on the side of removal in case of doubt, are extremely strong. An intermediary risks sanctions of up to 0.5% - or 2%, or even 5%, depending on which draft provision you read - of its annual global turnover every time it chooses to keep user content online (Art. 79). Those are dangerously high figures for any company, and particularly for intermediaries handling multiple takedown requests. As a result, the GDPR will likely lead to the frequent erasure of lawful, free expression by internet users.

The GDPR draft provisions cited here all appear in the European Data Protection Supervisor’s comparison document.

1. The removal request comes in

In practice and under many intermediary liability laws or model rules, an intermediary may receive an initial removal request but not have enough information to evaluate it until later, when the requester sends more information. The GDPR does not clearly distinguish between those two stages, which creates problems I describe below.

A. The initial communication arrives

The first thing that happens is that the intermediary gets a removal request, asserting that content put online by another internet user violates the requester’s rights. If it does not have a well-designed intake form for requests (because it’s a small company, for example), or if the requester did not use the form, further communication will often be necessary for clarification. Such back and forth is common, because removal requests often inadvertently omit key information like the location of the offending content or the legal right it is said to violate.  

This part of the process would be greatly improved for both the requester and the intermediary if they could rely on existing intermediary liability law or best practices regarding the information that must be included in removal requests. Those rules tell intermediaries, as a procedural matter, when to proceed to the request evaluation stage; and they ensure the intermediary has the information it needs when that stage comes. For people requesting removal, clear rules help them submit an actionable request on the first try, and tell them when the ball is in the intermediary’s court to respond.

The GDPR could help requesters and companies by implementing form-of-request requirements modeled on the DMCA, the Manila Principles, or even existing national-law guidelines for complaints to data protection authorities (DPAs). These need not be onerous.  Guidelines commonly call for the requester to provide things like her contact information, the exact location of the content, and the legal basis for removal. For Right to Be Forgotten requests, Microsoft’s Bing removal request form suggests a useful additional element: the requester must explain any public role she has in her community.
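To make this concrete, here is a minimal sketch, in Python and purely for illustration, of the information such an intake form might collect and check. The field and function names are my own; nothing in the GDPR, the DMCA, or the Manila Principles prescribes this structure. It simply encodes the elements listed above: contact information, the exact location of the content, the legal basis for removal, and (following Bing’s form) any public role the requester holds.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RemovalRequest:
    """Hypothetical intake record for a Right to Be Forgotten removal request."""
    requester_name: Optional[str] = None
    contact_email: Optional[str] = None
    content_urls: List[str] = field(default_factory=list)  # exact location of the content
    legal_basis: Optional[str] = None                       # e.g. inaccuracy, outdated data
    public_role: Optional[str] = None                       # per Bing's form: requester's role in public life

def missing_fields(req: RemovalRequest) -> List[str]:
    """Return the intake elements still needed before the request can be evaluated."""
    missing = []
    if not req.requester_name or not req.contact_email:
        missing.append("contact information")
    if not req.content_urls:
        missing.append("exact location of the content")
    if not req.legal_basis:
        missing.append("legal basis for removal")
    return missing
```

Under rules like these, an intermediary would treat a request with missing elements as not yet ready for evaluation, and would reply asking the requester for the listed items.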

The GDPR allows the intermediary to ask the requester for ID at this stage, if there is a reasonable doubt as to her identity (Art. 10.2, 12.4a, Council draft). Intermediaries can also reject requests that are “manifestly unfounded or excessive”; by doing so they assume the burden of proof for that conclusion (Art. 12.4, Council draft).

B. The requester provides any additional information needed for the intermediary to evaluate her claim

At some point, with or without further communication with the requester, the intermediary acquires enough information to make a judgment about honouring the removal request (or, is presumed by law to have enough information). You could think of this as the point when the request becomes procedurally ripe or valid, in the same way that a court pleading becomes procedurally valid by meeting legal filing formalities. Once it is reached, the intermediary can turn to the substantive legal claim being asserted.

The GDPR requires a one-month turnaround time for most removal requests. Hopefully this only begins once the request is procedurally ripe, and evaluation is possible. The GDPR should say so clearly, though: the one-month clock should not start ticking the minute the first communication comes through the door, unless that communication provides the necessary, specified information (Art. 12.2).
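To show what turns on this, here is a small illustrative sketch of the deadline calculation under the reading I am urging: the clock runs from the date the request becomes complete, not from first contact. The one-month period comes from Art. 12.2; everything else here, including the function name and the thirty-day approximation of a month, is my own assumption.

```python
from datetime import date, timedelta

ONE_MONTH = timedelta(days=30)  # rough approximation of the Art. 12.2 one-month period

def response_deadline(first_contact: date, became_complete: date) -> date:
    """Deadline under the reading urged above: the clock runs from the date the
    request became procedurally ripe (all required information supplied).
    first_contact is kept as a parameter only to emphasise that it is NOT the trigger."""
    return became_complete + ONE_MONTH

# Example: an incomplete request arrives 1 March; the requester supplies the
# missing URL on 20 March. The deadline runs from 20 March, not 1 March.
deadline = response_deadline(date(2016, 3, 1), date(2016, 3, 20))
```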

2. The intermediary restricts public access to the disputed content

Now, the first really unusual thing happens: in most cases, the intermediary must take the challenged content offline immediately, before weighing the public interest and perhaps before even looking at the content.  The GDPR calls this “restriction” of processing. The restriction provisions have changed language and location from draft to draft, and are difficult to parse. But they appear to mean that intermediaries take challenged content offline first, and ask questions later, subject to some unclear exceptions. They may even mean that the intermediary must temporarily remove content as soon as the initial complaint identifies the content’s location, even before the requester clarifies the basis for the legal claim. To my knowledge, this is unprecedented. No other intermediary liability system gives one user this kind of instantaneous veto power over another user’s expression.1

The rest of this subsection will parse the GDPR’s dense legislative language about restriction of content. Readers who don’t like that kind of thing should probably skip ahead to Step 3. The basic overview is this: (a) most provisions clearly say that “restricted” content must be rendered publicly inaccessible; (b) almost any removal request to an intermediary can trigger the restriction obligation; and (c) exceptions to automatic takedown exist, but they aren’t very clear or meaningful. Minor amendments could, and should, clarify these exceptions to solve the problem I identify here.

A. What does it mean to restrict content?

The GDPR says that restricting content means making it inaccessible to the public. As the Parliament draft explains, restricted data is no longer “subject to the normal data access and processing operations and cannot be changed anymore” - including, presumably, by the person who uploaded it (Parl. Art. 17(4)). The Council draft provides that restricted data: 

may, with the exception of storage, only be processed with the data subject's consent or for the establishment, exercise or defence of legal claims or for the protection of the rights of another natural or legal person or for reasons of important public interest. (Art. 17a(3); see also EDPS draft Art. 19a)

In other words, restricted data is kept in storage and not otherwise processed unless an exception applies.

The Council draft definition of “restriction of processing” introduces the only ambiguous note. It says restriction is “the marking of stored personal data with the aim of limiting their processing in the future” (Art. 4(3a)). For intermediaries, arguably this could mean “marking” back-end copies of user-generated content, but not restricting normal public access. That’d be odd and inefficient as a technical matter, but at least it wouldn’t burden anyone’s speech and information rights in advance of knowing whether the takedown request is valid. It’s not likely to be what is meant, though, because the same Council draft includes the Art. 17a(3) language quoted above, which permits nothing beyond storage of restricted data unless an exception applies.

More likely, this anomalous definition just reflects the GDPR drafters’ focus on back-end stored user data, rather than on public-facing online content. A good revision to the GDPR could track exactly this distinction. By expressly excluding user-generated content from the restriction provisions, drafters could avoid significant problems that the restriction provisions create for online expression and information rights.

B. What kinds of requests trigger content restriction?

In theory, not all requests should trigger content restriction. The GDPR says restriction applies only to processing, and to requests, predicated on specific, enumerated legal grounds (Parl. Art. 17a(3); Art. 6). Those grounds may effectively cover all processing of user-generated content by intermediaries, though.

One listed basis for restriction is when content’s “accuracy is contested by the data subject, for a period enabling the controller to verify the accuracy of the data” (Council Art. 17a(1)(a); Parl. Art. 17.4(a)). In other words, claims that would once have sounded in defamation, and been subject to well-developed defenses, now lead to immediate suspension of content. The content can be reinstated when the controller “verif[ies] the accuracy of the data” - generally meaning never, because finding the truth behind real-world disputes is not what intermediaries do well. Interestingly, the problems with asking anyone but courts to adjudicate questions of accuracy were flagged by the Article 29 Working Party in its Costeja recommendations, which said that DPAs should await judicial determinations in cases of ongoing dispute about accuracy. The GDPR nonetheless puts this responsibility in the hands of intermediaries.

The other basis for restriction is even broader, but harder to piece together from the GDPR text. It arises when an intermediary’s initial processing of user-generated content took place on the basis of “legitimate interests” that outweighed the privacy rights of people mentioned or depicted in that content (Art. 6.1.(f)). Under data protection law, this is the legal basis for most, possibly all processing of such data by intermediaries. So this basis for restriction seems to apply generally to intermediaries facing content removal requests.

To figure out whether to restrict in response to such a request, an intermediary must perform a multi-step, circular analysis, which hinges almost entirely on balancing poorly defined “legitimate interests”.2 This “legitimate interests” analysis is very similar to the analysis the intermediary is supposed to perform later, to decide whether to permanently erase the content. The “legitimate interests” basis for temporary restriction should be different from the “legitimate interests” basis for permanent erasure, though. The two analyses happen at different times, and an intermediary is supposed to restrict “pending the verification whether the legitimate grounds of the [intermediary] controller override those of the data subject,” i.e. restriction is provisional until the erasure decision is made (Council 17a(1)(c)). But given the confusing similarity of the standards, and the clear intention that some content be restricted from public access right away, we should not be surprised to see quick and sloppy -- and permanent -- removal decisions being made  immediately upon receipt of challenges to online expression.

Similar GDPR language in another draft, which may or may not mean the same thing, says restriction lasts until “the controller demonstrates compelling legitimate grounds for the processing which override the interests, fundamental rights and freedoms of the data subject” (Parliament Art 19(1)).3 Here again, the standard for ending restriction is unclear. It might boil down to the same vague, widely criticised standard set by the Court of Justice of the European Union (CJEU) in the Costeja “Right to Be Forgotten” case - but expressed in a lot more words.

C. Exceptions to the restriction requirement 

Intermediaries can decline to restrict content “for reasons of important public interest” or to protect “the rights of another natural or legal person”.4 (Council 17a(3)).  It’s unclear if these exceptions set a higher or lower bar than the “legitimate interests” standard intermediaries are supposed to apply at other points in their analysis of removal requests. Arguably, these exceptions protect even less content than the CJEU’s Costeja standard: Costeja lets Google reject de-indexing requests based on the “preponderant interest of the general public,” while the GDPR lets intermediaries leave content up, during the time it takes to evaluate a removal request, based on an “important public interest” (Costeja Par. 97).   Intermediaries willing to bet real money that they know the difference between “preponderant” and “important” can choose their actions accordingly. Intermediaries flummoxed by these standards will simply take the content offline without additional review.

Another exception permits content to remain publicly available at this stage “in order to protect the rights of another natural or legal person”. This seems more promising. Content removal requests, almost by definition, affect the rights of another person - the content’s publisher. Intermediaries or publishers could even argue that every request to remove public content (as opposed to every request to erase user data in back-end storage) qualifies for this exception. It is unrealistic to expect intermediaries broadly to take this position, of course, given uncertainty about whether DPAs or courts would agree, and given that errors expose the company to fines heavy enough to sink a business. But GDPR drafters could easily modify this part of the statute to protect online speakers from having legal content suppressed, by specifying that pre-review restriction is never appropriate in the case of user-generated content.
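Pulling subsections A through C together, the restriction logic as I read the Council draft might be sketched roughly as follows. This is my reconstruction, not the Regulation’s; the predicate names are invented, and a real intermediary would still face all the ambiguities discussed above about what counts as an “important public interest” or whose “rights” qualify.

```python
def must_restrict(accuracy_contested: bool,
                  objection_to_legitimate_interest_processing: bool,
                  important_public_interest: bool,
                  protects_rights_of_another_person: bool) -> bool:
    """Rough reconstruction of Council Arts. 17a(1) and 17a(3): restrict (take the
    content out of public view) if a qualifying ground is asserted, unless one of
    the exceptions applies. How to evaluate each predicate is left undefined."""
    ground_asserted = accuracy_contested or objection_to_legitimate_interest_processing
    exception_applies = important_public_interest or protects_rights_of_another_person
    return ground_asserted and not exception_applies
```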

3. The intermediary decides whether to permanently remove the content

The intermediary now comes to the crux of the issue: Has the complainant made a claim strong enough to override the interests of the person who put the content online in the first place – as well as the interests of all the people who might want to see it, and the interests of the intermediary itself? If so, the content gets erased. I’ll write about how the GDPR shapes that substantive decision - the merits of the “Right to Be Forgotten” claim - later.  One interesting procedural wrinkle is that, according to the Article 29 Working Party, in difficult cases the intermediary may at this stage consult with the user who uploaded the information. I’ll talk later about the limited practical value of this possibility. For now, I assume that the intermediary agrees to remove the content.    

4. The content is removed

For a search index, presumably what the GDPR calls “erasure” is instead meant to be the more limited de-linking mandated in the Costeja case.5 Following the CJEU’s ruling, this means removing links, titles, snippets and cache copies of the webpage from only certain search results: those generated by searching on the requester’s name.
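As a rough illustration of how name-based de-linking differs from erasure at the source, the sketch below suppresses de-listed URLs only for queries containing the requester’s name. It illustrates the Costeja outcome described above, not any actual search engine’s implementation; the data structures and names are invented.

```python
from typing import Dict, List, Set

def delist_for_name_queries(query: str,
                            results: List[Dict[str, str]],
                            requester_name: str,
                            delisted_urls: Set[str]) -> List[Dict[str, str]]:
    """Suppress de-listed URLs only for queries on the requester's name.
    The underlying page stays online and still appears for other queries."""
    if requester_name.lower() not in query.lower():
        return results
    return [r for r in results if r["url"] not in delisted_urls]
```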

For hosting platforms, complying with a GDPR removal request has far greater impact on internet users’ free expression and access to information.6 Deletion by a host often eliminates the only place on the internet where particular material can be found. In practice, for ephemeral content like social media posts, it is often the author’s only copy as well. Content deleted by an internet host may be well and truly gone from the world. Given these dramatically greater consequences, the standards applied by hosts in making removal decisions should be very different than those applied by search engines – with much greater weight given to free expression concerns. As a procedural matter, it also seems more than reasonable that a host should postpone final erasure until the speaker has an opportunity to defend her speech. But while the difference between de-indexing search results and deleting content at its source is widely commented upon in academic and policy discussions, I know of no written guidelines for hosts. The GDPR provides none.7

5. The requester and downstream recipients are told about the removal

The primary person the intermediary must tell about removal is, of course, the requesting data subject (Art. 12.2). In addition, to help that person enforce his or her data protection rights, the intermediary must also pass information about the removal downstream, so that whoever received content from the intermediary can also delete it.8 (Art. 13). If the intermediary has unlawfully made the data public, it must attempt to undo the damage by tracking down recipients and telling them to delete any copies or links (Art 17.2).

These pass-through provisions, while potentially valuable with respect to traditional data controllers - a hospital that shared patient information with an insurer, for example - are an odd fit for intermediaries. The content they handle typically originates with a third party, and passes through the intermediary’s technical systems without human review. If that content was “illegal” ab initio under the GDPR, perhaps because of special rules governing sensitive data, must the intermediary then ransack its logs to find and communicate deletion requests to other users who saw it? Would the person requesting removal – say, someone who was the subject of an ugly Facebook post – even want to risk the potential Streisand Effect from this publicity?   

6. The intermediary discloses identifying information about the user who posted the content

The GDPR also creates a troubling disclosure obligation in cases where the intermediary got the disputed content from someone other than the person requesting removal - which is the case in most notice and takedown situations. The intermediary is supposed to tell the requester “the identity and the contact details of the controller” - in other words, the internet user - who provided the content (Art. 14a Council). While there are arguments that users posting on social media or other hosting platforms do not qualify as controllers, those arguments have fared poorly in court and in analysis from academics and the Article 29 Working Party.9 Users who post their expression online are probably controllers, and the GDPR disclosure requirement probably applies to their personal data held by an intermediary. The intermediary can be compelled, based on an unverified complaint, to unmask anonymous speakers - sharing their personal information without consent.

The disclosure requirement may be a sensible provision for traditional data controllers - say, a lender that shared information with a credit reporting agency. But it is dangerous for online platforms. It provides a means for companies and individuals to identify and potentially retaliate against people who say things about them that they do not like. The GDPR specifies no legal process or protection for the rights of those users, but does provide exceptions for cases in which “disclosure is expressly laid down by Union or Member State law to which the controller is subject” (14a.4(c) Council, sic). Presumably this section is intended to limit disclosure of user data to situations where there is valid legal process and the disclosure complies with the legal protections of the GDPR itself. But this is far from clear, and badly in need of redrafting to clearly prohibit disclosure absent adequate legal protection for the speaker.10

I can only assume that the drafters were not considering this situation, or its tremendous impact on anonymous online speech, given their keen interest in anonymity and pseudonymity in other parts of the GDPR. Here again, viewing the issue through the lens of intermediaries’ Notice and Takedown process illuminates disturbing consequences for internet users who seek and impart information on the internet. And, again, simply excluding online content providers from this provision of the GDPR would solve an important problem.

See new footnote11 for updated analysis as of 17 November 2015.

7. The person who put content online is not told of its erasure

Finally, there is the one person who is not supposed to be told about the removal: the person whose speech is being erased. The GDPR leaves intact legal provisions that regulators have interpreted to prohibit notice to the content’s publisher under existing law.  In its guidelines for Google’s implementation of the Costeja decision, the Article 29 Working Party said that there is no legal basis for Google to routinely tell webmasters when their content is delisted.12

The idea that the person who put content online should not know when it is erased or de-linked makes some sense from a pure data protection perspective. The idea is that the requester is exercising a legal right to make the company stop processing her information.  Talking to a publisher or webmaster about the request is just more unauthorised processing. More pragmatically, a person whose privacy is violated by online content probably will not want the perpetrator to know of her efforts to remove it.

Viewed through the lens of intermediary liability, due process, or free expression rights, by contrast, this looks pretty outrageous. It gives all procedural protections to the accuser, and none to the accused. The resulting harms to individuals and companies are real: a small business can lose access to customers; a speaker can have her opinions silenced; all through a secret process which in most cases provides no notice or opportunity for defense. There is a reason that intermediary liability model rules like the Manila Principles, and existing laws like the US DMCA, allow or even require companies to let users know when their content is deleted, and give users a chance to challenge removal decisions. A response or “counternotice” from the content provider serves as an important check on both improper removal requests and intermediary error in processing them. The risk of error - or laziness, or risk-aversion - by the intermediary is the reason why routine, pre- or post-removal notice to the accused internet user is so important. If notice only happens when the intermediary figures out that a removal request is problematic - as Article 29 suggested in its Guidelines - many improper deletions of legal content will go uncorrected.

Notice to the person who put the content online can lead to better decision-making, by bringing someone who knows the context and underlying facts - and who is motivated to defend her own expression - into the conversation. Importantly, it also opens up possibilities for better, more proportionate, and well-tailored solutions. While intermediaries have a binary13 choice – take content down or leave it up – a content creator can do much better: she can reword a bad phrase, update or annotate a news story, take down a picture from a blog post while leaving lawful text intact. She can preserve the good parts of her online speech while removing the bad. Removal or correction of content at its source can also provide a better outcome for a person whose rights it violated, since the infringing content is no longer out there to be found on other sites or shared by other means.

A number of courts have looked to content creators to take measures like this in the “Right to Be Forgotten” context. A recent, post-Costeja data protection case from the Constitutional Court of Colombia is an example. After weighing the opposing fundamental rights at issue, the court ordered an online news source to (1) update its article about the plaintiff, and (2) use simple technical tools to prevent search engines from listing the information in search results. Jef Ausloos and Aleksandra Kuczerawy report a similar case in Belgium. The idea of putting this decision and technical control in the hands of the publisher is not new - well before Costeja, the Italian DPA did the same thing with archived news articles about old criminal convictions.14

Giving publishers notice and opportunity to defend their online expression would make the GDPR’s removal process more fair; avoid unnecessary harm to free expression and information access online; and introduce better tools to redress privacy harms to the person requesting removal. Right now, the GDPR is putting decisions about the publisher’s content in the hands of her accuser and a technology company, instead - and giving both of those parties incentives to disregard her rights.

The GDPR creates a process that fails to protect internet users’ rights to free expression and access to information. Simple text changes could eliminate many of these shortcomings, while still providing relief for people harmed by content online. Lawmakers can and should make those changes while there is still time.

Footnotes

1. Though most laws would not preclude removals outside the intermediary liability framework, such as voluntary removals and in some countries government requests, from working like this.

2. Council Article 17a(1)(c) says a data subject has a right to restriction where “he or she has objected to processing pursuant to Article 19(1) pending the verification whether the legitimate grounds of the controller override those of the data subject”. The referenced article, 19.1, applies to processing of “personal data concerning him or her which is based on points (e) or (f) of Article 6(1); the first sentence of Article 6(4) in conjunction with point (e) of Article 6(1) or the second sentence of Article 6(4). The controller shall no longer process the personal data unless the controller demonstrates compelling legitimate grounds for the processing which override the interests, rights and freedoms of the data subject or for the establishment, exercise or defence of legal claims”. The relevant referenced basis for processing at 6(1)(f) is that “processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child”.

3. The words “demonstrate” and “verification” here sound oddly as if the intermediary is proving something to a third party, such as a DPA, but nothing else in the Regulation indicates this.

4. In at least one draft, this exception applies only to interests of a “natural person,” not a legal person.  Under this version, the interests of the intermediary or publishing company would not be a factor (EDPS draft Art. 19a.).

5. The GDPR does not actually say this. But if the GDPR rendered Costeja obsolete, a more expert data protection lawyer than I would surely have pointed it out by now.

6. This discussion assumes that the GDPR erasure requirements apply to hosts. As mentioned in the introduction, there are complicated questions about this, and about whether hosting services are data processors or data controllers. The GDPR does not resolve them. But exempting hosts from the erasure provisions seems politically unrealistic.

7. Important ECHR precedent reinforces this perspective. In Węgrzynowski and Smolczewski v. Poland, the court declined to purge online news archives of even clearly defamatory material, saying the articles could instead be annotated. It wrote, “[t]he Court accepts that it is not the role of judicial authorities to engage in rewriting history by ordering the removal from the public domain of all traces of publications which have in the past been found, by final judicial decisions, to amount to unjustified attacks on individual reputations.” (Par. 65)

8. This provision refers to “transfer” of data, which arguably doesn’t include intermediaries’ display of user-generated content to other internet users.

9. See the Ryneš and Lindqvist cases, and discussion in Bezzi et al., Privacy and Data Management for Life, pp. 70-71.

10. On a more philosophical note, there is a neat parallel between privacy and anonymity lurking here. The “Right to Be Forgotten” lets an individual dissociate herself from past words or acts, so that her public identity is free of them. Anonymity does the reverse. It lets an individual dissociate her words or acts from her self, so that the words stand on their own, free from association with her.

11. 17 November 2015 update: Miquel Peguera has pointed out that 14a(1) appears to require the intermediary controller to disclose its own contact information. He is right, as usual. Language I should have cited instead appears at 14a(2)(g) and 15(1)(g) of the Council draft.

14a(2)(g) requires the intermediary to disclose “from which source the personal data originate, unless the data originate from publicly accessible sources.” Language at 14a(2) may limit this disclosure based on “circumstances and context,” but does not clearly address the privacy concerns of the internet user whose information would be disclosed.

15(1)(g) additionally provides the data subject with the right to obtain information, including “where the personal data are not collected from the data subject, any available information as to their source.”

12. Notice about removals to people seeking content online is another important check on over-removal. Erroneous removals on other legal grounds are often identified and corrected by this means. Google has tried to address this for the “Right to Be Forgotten” through near-ubiquitous notices on search results pages. These don’t really tell anyone what content was removed, though, and the Article 29 Working Party has said Google would violate the law if it did.

13. There are other logical possibilities, but most – like taking a scene out of a hosted video – would endanger the intermediary’s protections under the eCommerce Directive or other intermediary liability law.

14. See discussion of cases at notes 121-123 in Tamò and George, Oblivion, Erasure and Forgetting in the Digital Age.
