This paper process-traces how European policymakers have delegated regulatory responsibilities to private certification and monitoring bodies acting as regulatory intermediaries. It explores how regulators can use such intermediaries to constrain or incentivise self-regulation operating in their shadow, rather than relying on direct modes of regulation.
Can public authorities in the EU continue using US cloud services in light of the EU Court’s view of the US surveillance regime? Maybe, but it will require a lot of work.
This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, and Stefania Milan.
This article assesses the bidirectional interaction between meso- and macro-level data governance frameworks.
This editorial introduces ten research articles, which form part of this special issue, exploring the governance of “European values” inside data flows.
This article considers the unique challenges of platform policies aimed at the off-platform misbehaviour of users through the case of Twitch.
How have app stores governed the global app response to the coronavirus pandemic? An exploratory systematic mapping of COVID-19 pandemic response apps.
The spread of hate speech and disinformation on social media has contributed to inflaming conflicts and mass atrocities as seen in Myanmar. Is the doctrine of information intervention a solution to escalations of violence?
Black box algorithms and the rights of individuals: no easy solution to the “explainability” problem
The design of modern machine learning systems should take into account not only their effectiveness in solving a given problem, but also their impact on the rights of individuals. Implementing this goal may involve applying technical solutions proven in the IT industry, such as event logs or certification frameworks.
The potential for recommendation algorithms to amplify extremist content has become a policy concern in recent years. We conduct a novel empirical experiment on three platforms (YouTube, Reddit, and Gab) to test this phenomenon. We find that YouTube’s “Recommended for you” system does promote extreme content. We synthesise the findings into the policy…