Intermediary liability and user content under Europe’s new data protection law

Daphne Keller, The Center for Internet and Society, Stanford University, United States of America

PUBLISHED ON: 08 Oct 2015

This is the first in a series of posts about the pending EU General Data Protection Regulation (GDPR), and its consequences for intermediaries and user speech online.

Cross-posted to Stanford Law School’s CIS Blog

Disclosure: I previously worked on "Right to Be Forgotten" issues as Associate General Counsel at Google.

A big new law is coming, and a lot of companies doing business online aren’t going to like it. Neither will many advocates of civil liberties for internet users. Europe’s pending General Data Protection Regulation (GDPR) updates and overhauls EU data protection law – the law that produced this week’s Schrems case and last year’s “Right to Be Forgotten” ruling in the EU. Data protection has long been a field considered arcane and impenetrable by many US lawyers. Most businesses and other entities outside Europe have rightly paid little attention in the past because the law seemingly did not apply to them. But if draft GDPR provisions circulating at this near-final stage in the lawmaking process are enacted into law, that’s about to change. Companies that previously fell outside data protection jurisdiction, including those with minimal ties to Europe, are being brought within its scope. For many, compliance will entail meaningful costs in money and engineering time. And online companies that deal in content – whether as creators and online publishers or as technical intermediaries – may find themselves receiving unprecedented erasure demands from European citizens or regulators. Going forward, if users around the world find their Facebook reminiscences about European acquaintances disappearing – or can’t find tweets about individuals who settled fraud allegations with the US Federal Trade Commission – this law will likely be the reason.

The GDPR is in many other respects a very good law. Europe already provides more robust legal privacy protections than many countries, including the US; this will make those protections even stronger and advance global norms around the privacy rights of internet users.1 And it should surprise no one that European lawmakers, angered by the Snowden revelations and the US government’s lackadaisical response, want more control over what personal data leaves Europe and how it is safeguarded. But the GDPR has many other consequences, intended or unintended, for free expression, innovation, and the cost of doing business on the internet. Those deserve much more public discussion than they are currently getting.

Impact on internet intermediaries

Over the coming months, I will be unpacking these elements of the GDPR in a series of blog posts.2 My focus will mostly be on how the law affects internet intermediaries – and through them, users’ ability to receive and impart information using the internet. Some aspects I discuss, like jurisdiction and the “Right to Be Forgotten,” will be important for other kinds of online entities as well. The series isn’t about privacy under the GDPR, and it won’t focus on data protection law governing collection and use of user data in logs or other back-end storage systems. Great coverage of privacy aspects is available from public interest groups, law firms, and other sources.

A major goal of this series is to foster better conversation between data protection experts and practitioners focused on other parts of internet law – particularly intermediary liability and free expression. My own background is in internet law. I am not a data protection lawyer. In my previous role as Associate General Counsel for Google, I had an immersive real-world education in data protection, most recently in relation to the CJEU’s “Right to Be Forgotten” ruling in Costeja. But there are other areas of data protection law where I am a relative novice. My hope is that data protection practitioners, as well as other internet law mavens, will leave comments here or otherwise reach out with feedback, including criticism. These posts will later be aggregated in a single publication, which will be greatly improved by your comments.

A brief background on data protection law and intermediary liability

The law of data protection is generally very foreign to US lawyers. But some version of it exists in many countries around the world, not just in Europe,3 and provides important rights to citizens. Data protection is enshrined in the EU Charter of Fundamental Rights as a right distinct from privacy: a broad right to limit processing of all information relating to oneself, not just information that invades personal privacy. Where it conflicts with other fundamental rights, including rights to receive and impart information, the rights at issue must be balanced. The 1995 Data Protection Directive sets out a detailed framework for the data protection right, including specific legal grounds for entities to process personal data. It also establishes regulatory bodies for enforcement. National and sub-national Data Protection Agencies (DPAs) are the primary enforcers, and have ongoing relationships with many regulated entities. For most internet companies, the foremost data protection issue has been, and will continue to be, the backend processing of data about users – maintaining account information, for example, or tracking behaviour on a site.

The law of intermediary liability limits and defines the legal responsibility of technical intermediaries for content posted online by third parties. In the US, key intermediary liability laws are the DMCA for copyright and CDA 230 for defamation, invasion of privacy, and most other concerns. In the EU, intermediary liability is governed by Articles 12-15 of the eCommerce Directive, as implemented in the national laws of member states. Protected intermediaries generally have no obligations to police, and no liability for unlawful user content until they know about it.4 To comply with these laws, intermediaries operate notice and takedown systems to remove content when notified that it violates the law. In theory intermediaries should only remove user content if the notice is correct and the content actually is illegal – but intermediaries often delete content based on inaccurate or bad faith accusations, leading to over-removal of internet users’ lawful speech.5

Historically, many lawyers have not drawn a connection between data protection and the law of intermediary liability. The two fields use very different vocabularies, and are for the most part interpreted, enforced and litigated by different practitioners. A lawyer who views an issue through the lens of intermediary liability and one who views the same issue through the lens of data protection may have trouble even understanding each other’s concerns.

But if the two fields were ever really separate, the CJEU’s 2014 “Right to Be Forgotten” ruling in the Costeja case changed that. The court ruled that Google had to de-list certain search results when users searched for the plaintiff’s name. It prescribed what is effectively a notice and takedown system to remove search results, but arrived at this remedy through the language and logic of data protection – with no reference to Europe’s intermediary liability rules.6 Costeja follow-on cases will likely force lower courts to grapple more directly with questions about how the two areas of law fit together. Even as those cases progress, however, EU legislators are overhauling the governing law by replacing the Data Protection Directive with the pending GDPR.

Legislative process for the GDPR

The GDPR has been in the works since January 2012, when the European Commission proposed a comprehensive update and reform of the 1995 Data Protection Directive. A number of drafts7 from different EU governance bodies have been released since.8 The GDPR is now in a final ‘trilogue’ process, in which remaining differences will be resolved. One announced timeline put finalisation as early as December, though such deadlines often slip. The law will come into force two years after its publication date. Because it is a Regulation rather than a Directive, it will not have to be implemented as separate legislation in each member state of the EU. Rather, it will automatically go into effect. The GDPR covers a lot of ground, with provisions addressing everything from data portability, to coordination between national DPAs, to company codes of conduct and appointment of data protection officers. A good summary of the process and overall issues as of June is here, and a substantive Q&A from the European Parliament is here.

There is a chance that some of the sound and fury around the GDPR will come to nothing, if provisions of the GDPR are obviated by other sources of law – such as one of the pending trade agreements with the US, or laws arising from the EU’s new Digital Single Market (DSM) initiative. This possibility of preemption could explain why trade and business groups have been relatively unengaged with the GDPR. But the DSM process is in its infancy, and trumping the GDPR through a trade agreement seems like a long shot. European lawmakers do not seem disposed to make major concessions to the US right now on issues of privacy and data protection. And to the extent that US trade negotiators are seeking such concessions, their priorities may not lie with the issues I identify here.

Final passage of the GDPR will not necessarily answer the questions raised in this series about intermediaries and user access to information. Practitioners have significant unresolved differences about how certain points in the 1995 Directive should be interpreted; the GDPR probably won’t change that. Existing drafts are unclear on some key points, and seem likely to remain so – there can be good reasons for negotiators to choose constructive ambiguity, leaving room for DPA or court interpretation after the law is enacted. The upshot is that we will not necessarily see expert consensus on everything the GDPR means, and what parts of the law it has changed, even once its language is finalised.

Ambiguous drafting, intentional or not, will likely leave room for litigation and policy battles about the GDPR’s impact on internet intermediaries and user free expression. But it is clear that overall the Regulation moves the needle in a troubling direction for online innovation and civil liberties. It extends jurisdiction to a vast new group of internet companies, imposing burdensome regulatory obligations on companies that have never heard of this law. It extends “Right to Be Forgotten” content erasure requirements, applying European legal standards to require deletion of content that is legal in other places. By the same token, it puts decisions balancing European users’ speech and privacy rights into the hands of foreign technology companies, instead of national courts. And it tilts the playing field for the people whose rights are affected: it expands rules and institutions to vindicate privacy rights, but has no countervailing increase in resources or legal channels to protect speech and information rights. These issues merit much closer consideration before the GDPR is finalised and brought into effect.

Questions going forward

A preview of substantive issues to be covered in future installments is at the bottom of the cross-post on the Stanford Law School Center for Internet and Society Blog, in the FAQ section. Issues briefly discussed there include:

  • The GDPR’s greatly expanded jurisdiction provisions
  • The Controller/Processor distinction under data protection law
  • The GDPR’s “Right to Be Forgotten” provisions
  • The GDPR’s relationship to the eCommerce Directive
  • The GDPR’s free expression provisions
  • Legal process for interpreting the GDPR and adjudicating disputes

This FAQ also delves into a handful of thorny and arcane questions, which may delight the hearts of intermediary liability and data protection nerds, but will be outside the scope of the series.

Footnotes

1. One important new provision of the GDPR establishes user rights to data portability, for example. (Art. 18)

2. Many thanks to Neal Cohen, who reviewed this work with a data protection practitioner’s keen eye. Remaining mistakes are my own.

3. The title of Professor Graham Greenleaf’s article on point is telling: Global Data Privacy Laws: 89 Countries, and Accelerating.

4. Protection varies with the nature of the service. Providers in the “mere conduit” category do not have knowledge-based removal obligations, and all intermediaries can lose legal protection if they are too actively involved in managing content.

5. See, e.g., Jennifer Urban and Laura Quilter’s 2006 review of DMCA removals and Daniel Seng’s more recent work in the same area; Bits of Freedom’s study of Dutch intermediaries and Oxford PCMLP’s study on UK and US intermediaries; and Rishabh Dara’s detailed study of over-removals by Indian intermediaries.

6. Some argue that Costeja de-indexing should not be called “removal,” because it leaves the same results available for different queries. In intermediary liability parlance, this kind of partial suppression of content would still be called a “removal.” Search engines are not expressly covered by the eCommerce Directive intermediary liability provisions, but many national courts or laws have protected them as intermediaries. The protection of any intermediaries with respect to data protection-based content removal requests is further complicated by eCommerce Directive Article 1.5(b), which some argue excludes that content from the immunities created by Articles 12-15. Miquel Peguera addresses the connection between the two Directives in more depth here.

7. This discussion does not distinguish between drafts except where differences are relevant.

8. Winston and Strawn has a good, long list of links to drafts and commentary, mostly from government and business sources. A pdf and app comparing major drafts are available from the European Data Protection Supervisor here.

2 Comments

Jörg Pohle

10 October, 2015 - 09:51

Thank you very much for your blog post.
I am a bit surprised that you don't analyze the power relationships between the different actors, as they are fundamentally different in the copyright area and in the privacy/data protection area: while copyright holders are much more powerful than those who consume their copyrighted materials, data subjects are generally much less powerful than data controllers. This means that whereas copyright law is designed to protect the economically powerful actors and their profit interests against consumers (and therefore reproduces this power imbalance), data protection law is designed to protect the (structurally much) weaker data subjects against the information and decision-making power of data controllers (and data processors), and therefore aims at leveling (or at least at limiting) the power imbalances between these actors.
Could you please explain why you simply ignore this difference? How would your analysis change if it were aware of the power imbalances in both fields?
Thank you very much.


Daphne Keller

31 October, 2015 - 21:14

Hi, Jörg-

Thanks so much for commenting, and sorry I didn't see this sooner. I don't get notices when comments go up on this blog; this was just pointed out to me yesterday.

You ask a good question and I think you are completely right that power imbalances matter. If the issue is the balance of power between an individual data subject and a big company -- Google, their bank, even a large retailer -- then I agree that this imbalance will nearly always exist. The issue I am more worried about is the balance of power between the data subject and the other individual who put content about them online. You can imagine situations where one or the other has more clout. (The data subject may have less power if she's a private person whose teenage arrest for disorderly conduct was reported in a big newspaper; and the online publisher is the newspaper. The data subject has more power if she owns a large business and the publisher is a former employee complaining about unfair treatment at work. Etc., etc.) Or in many situations they are likely to be pretty equal. In any case there usually won't be the kind of significant disparity you were talking about with data subjects and big, corporate data controllers.

The power of the corporate data controller does matter a lot for the relationship between the data subject and the person whose online content she wants to have erased, though. The GDPR's notice and takedown process lets the data subject harness a lot of the power of the corporate controller, through a set of requirements and incentives for the controller to do what the data subject asks it to do. Those requirements and incentives probably make sense when the dynamic is data-subject-versus-big-data-controller, and the data at issue is back-end logs or other information collected about the user's online behavior. But when a third party's online expression is at issue, the GDPR's rules effectively line up the interests and powers of the data subject and the big company data controller, against the person who put content online. Most of what I argue in my current set of posts (#2 and #3) is that the GDPR should have fair procedures to restore that balance between the two individuals. I don't think those changes would give more power to the intermediary data controller, but I have to admit I hadn't framed the question this way until your helpful question, so I may have to think about that more.

The other thing I'd add, in response to your point about copyright and power, is this. If a notice and takedown legal tool makes it easy to get online content taken down -- as the US DMCA does to some degree, and as the GDPR does much more -- then it becomes an attractive tool for everyone, not just the intended users. The DMCA may mostly be used by big content companies these days, but it is still also used by small business competitors trying to take down each other's websites, or religious groups trying to stifle criticism, etc. The same is and will be true of the Right to Be Forgotten. It's intended to help regular people, and hopefully that is mostly what it will do. But there will still be people or companies pursuing other agendas who find that this is their best avenue to get rid of internet content they don't like. (For RTBF a company would have to file a complaint in the name of an individual, but that's not hard.) The procedural improvements I recommend are all about correcting for those "abusive" removal requests, while still providing a streamlined path for legitimate removal requests to be carried out.

I hope this is a helpful response. I really appreciate the question, it gives me a lot to think about.
