News and Research articles on Facebook

Social media platforms use the term 'borderline' for content that may violate their own content policies. This means categorising content as potentially unwanted (e.g., harmful or inappropriate) and sanctioning legitimate expressions of opinion, hence putting lawful speech in a twilight zone.

Transnational collective actions for cross-border data protection violations

Federica Casarosa, European University Institute
PUBLISHED ON: 16 Sep 2020 DOI: 10.14763/2020.3.1498

Although the GDPR paves the way for a coordinated EU-wide legal action against data protection infringements, only a reform of private international law rules can enhance the opportunities of data subjects to enforce their rights.

Anchoring the need to revise cross-border access to e-evidence

Sergi Vazquez Maymir, Vrije Universiteit Brussel
PUBLISHED ON: 16 Sep 2020 DOI: 10.14763/2020.3.1495

The percentages and figures used in the impact assessment accompanying the European Commission’s e-evidence package strongly influence how the problem is framed, and limit the assessment of cross-border access to e-evidence to technical and efficiency considerations.

Geopolitics, jurisdiction and surveillance

Monique Mann, Deakin University
Angela Daly, University of Strathclyde
PUBLISHED ON: 16 Sep 2020 DOI: 10.14763/2020.3.1501

The internet is a forum for geopolitical struggle as states wield power beyond their terrestrial territorial borders through the extraterritorial geographies of data flows. This exertion of power across multiple jurisdictions, and via the infrastructure of transnational technology companies, creates new challenges for traditional forms of regulatory governance and the protection of human rights.

Going global: Comparing Chinese mobile applications’ data and user privacy governance at home and abroad

Lianrui Jia, University of Toronto
Lotus Ruan, University of Toronto
PUBLISHED ON: 16 Sep 2020 DOI: 10.14763/2020.3.1502

This paper examines data and privacy governance by four China-based mobile applications and their international versions, including the role of the state. It also highlights the role of platforms in gatekeeping mobile app privacy standards.

What if Facebook goes down? Ethical and legal considerations for the demise of big tech

Carl Öhman, University of Oxford
Nikita Aggarwal, University of Oxford
PUBLISHED ON: 11 Aug 2020 DOI: 10.14763/2020.3.1488

This paper examines the ethical and legal issues arising from the closure of a data-rich firm such as Facebook and provides four policy recommendations to mitigate the resulting harms to society.

Back up: can users sue platforms to reinstate deleted content?

Matthias C. Kettemann, Leibniz Institute for Media Research | Hans-Bredow-Institut
Anna Sophia Tiedeke, Leibniz Institute for Media Research | Hans-Bredow-Institut
PUBLISHED ON: 4 Jun 2020 DOI: 10.14763/2020.2.1484

Can platforms delete whatever content they want? Not everywhere, say the authors of this paper, which shows why certain social networks ‘must carry’ some content – and how users in some jurisdictions can force the companies to allow them into their communicative space.

Transparency in artificial intelligence

Stefan Larsson, Lund University
Fredrik Heintz, Linköping University
PUBLISHED ON: 5 May 2020 DOI: 10.14763/2020.2.1469

Transparency is a multifaceted concept used by various disciplines (Margetts, 2011; Hood, 2006). Recently, it has seen a resurgence in contemporary discourses around artificial intelligence (AI). For example, the ethical guidelines published by the EU Commission’s High-Level Expert Group on AI (AI HLEG) in April 2019 state transparency as one of seven key requirements for the realisation of ‘trustworthy AI’, and it has also made a clear mark in the Commission’s white paper on AI, published in February 2020. In fact, “transparency” is the single most common, and one of the five key principles emphasised in the vast number – a …

Data-driven elections: implications and challenges for democratic societies

Colin J. Bennett, University of Victoria
David Lyon, Queen's University
PUBLISHED ON: 31 Dec 2019 DOI: 10.14763/2019.4.1433

In the wake of the Facebook/Cambridge Analytica scandal, it is timely to review the state of the debate about the impact of data-driven elections and to identify key questions that require academic research and regulatory response. The papers in this collection, by some of the world’s most prominent elections researchers, offer that assessment.

Is the “European approach” an adequate response to the challenges of disinformation and political manipulation, especially in election periods?

The regulation of online political micro-targeting in Europe

Tom Dobber, University of Amsterdam
Ronan Ó Fathaigh, University of Amsterdam
Frederik J. Zuiderveen Borgesius, Radboud University
PUBLISHED ON: 31 Dec 2019 DOI: 10.14763/2019.4.1440

This paper discusses how online political micro-targeting is regulated in Europe, from the perspective of data protection law, freedom of expression, and political advertising rules.

Voter preferences, voter manipulation, voter analytics: policy options for less surveillance and more autonomy

Jacquelyn Burkell, The University of Western Ontario
Priscilla M. Regan, George Mason University
PUBLISHED ON: 31 Dec 2019 DOI: 10.14763/2019.4.1438

Personalised political messaging undermines voter autonomy and the electoral process. Use of voter analytics for political communication must be regulated.