What happens when legal principles meet speculative storytelling and role-play workshops? Our interdisciplinary team found new ways to govern smart glasses, leading to the ‘ethics of interactions’ framework.
News and Research articles on AI governance
China is recasting artificial intelligence as a tool of infrastructure diplomacy, a strategic shift that confronts the innovation-led paradigm and navigates the risks of fragmented global governance.
Given the ubiquity of fairness as a normative criterion in tech policy, this op-ed warns of particular risks to its legitimising potential which may, in the long term, damage the standing of fairness as a crowd-pleaser.
European AI regulations often overlook the experiences of marginalised communities disproportionately impacted by algorithmic biases. This op-ed explores how AI-driven tools exacerbate discrimination against immigrants and minority groups, calling for more inclusive policy frameworks.
This opinion piece discusses the recently adopted Artificial Intelligence Act (AIA) by the European Parliament, highlighting its goals and regulatory structure. The authors argue that the Act's predominantly rule-based approach may not effectively balance innovation and regulation.
Klinger & Hacker highlight the risk of “public interest AI” simply becoming a marketing label, despite its potential, due to the inherent misalignment between for-profit goals and public interest aspirations.
Is the current regulatory focus on AI misguided? AI-based services are produced in agile production environments that are decades in the making and concentrated in the hands of a few companies. This article illustrates how AI is only the latest output of these production environments, gives an overview of the socio-technical as well as political-economic concerns these environments raise, and argues why they may be a better target for policy and regulatory interventions.
This article explores some conditions and possibilities for public contestability in AI governance: a critical attribute of governance arrangements designed to align AI deployment with the public interest.
This article provides an initial analysis of the EU AI Act's approach to general-purpose artificial intelligence, arguing that the regulation marks a significant shift from reactive to proactive AI governance, while concerns about its enforceability, democratic legitimacy and future-proofing remain.
Introduction

Globally, there are now over 800 AI policy initiatives from the governments of at least 60 countries, most introduced after 2016. The United Kingdom (UK) is at the forefront of AI governance efforts, at least quantitatively: it is second only to the United States (US) in the number of national-level AI policies released (OECD.AI, 2021) and ranks top for the number of mentions of AI in legislative documents between 2016 and 2021 (Zhang et al., 2022). According to the OECD, the US has released 55 documents, the UK 53, the EU 59, and China 22. These figures do not show that the UK has produced better outcomes than other countries that have published fewer …
Smart cities need citizen participation, robust data protection, non-discrimination and AI governance to effectively address the challenges of ever-changing technologies, function creep and political apathy.
The article identifies critical blindspots in current European AI policies and explores the impact of AI technologies in the media and communications sector, based on a novel multi-level analytical framework.
On the inadequacy of the risk-based approach for generative and general-purpose AI.
In this article, I propose a distinction between individual harm, collective harm and societal harm caused by artificial intelligence (AI), and focus particularly on the latter. By listing examples and identifying concerns, I provide a conceptualisation of AI’s societal harm so as to better enable its identification and mitigation. Drawing on an analogy with environmental law, which also aims to protect an interest affecting society at large, I propose governance mechanisms that EU policymakers should consider to counter AI’s societal harm.
Introduction: transparency in AI

Transparency is a multifaceted concept used across various disciplines (Margetts, 2011; Hood, 2006). Recently, it has seen a resurgence in contemporary discourses around artificial intelligence (AI). For example, the ethical guidelines published by the EU Commission’s High-Level Expert Group on AI (AI HLEG) in April 2019 list transparency as one of seven key requirements for the realisation of ‘trustworthy AI’, and it has also made a clear mark in the Commission’s white paper on AI, published in February 2020. In fact, “transparency” is the single most common, and one of the key five principles emphasised in the vast number – a …