News and Research articles on AI governance

The Artificial Intelligence Act. Taking normative imbalances seriously

Michał Araszkiewicz, Jagiellonian University
Grzegorz J. Nalepa, Jagiellonian University
Radosław Pałosz, Jagiellonian University

PUBLISHED ON: 17 Dec 2024

This opinion piece discusses the Artificial Intelligence Act (AIA), recently adopted by the European Parliament, highlighting its goals and regulatory structure. The authors argue that the Act's predominantly rule-based approach may not effectively balance innovation and regulation.

Misguided: AI regulation needs a shift in focus

Agathe Balayn, Delft University of Technology (TU Delft)
Seda Gürses, Delft University of Technology (TU Delft)

PUBLISHED ON: 30 Sep 2024

Is the current regulatory focus on AI misguided? AI-based services are produced in agile production environments that are decades in the making and concentrated in the hands of a few companies. This article illustrates how AI is only the latest output of these production environments, gives an overview of the socio-technical as well as political-economic concerns these environments raise, and argues why they may be a better target for policy and regulatory interventions.

Contesting the public interest in AI governance

Tegan Cohen, Queensland University of Technology (QUT)
Nicolas P. Suzor, Queensland University of Technology (QUT)

PUBLISHED ON: 30 Sep 2024 DOI: 10.14763/2024.3.1794

This article explores some conditions and possibilities for public contestability in AI governance, a critical attribute of governance arrangements designed to align AI deployment with the public interest.

General-purpose AI regulation and the European Union AI Act

Oskar J. Gstrein, University of Groningen
Noman Haleem, University of Groningen
Andrej Zwitter, University of Groningen

PUBLISHED ON: 1 Aug 2024 DOI: 10.14763/2024.3.1790

This article provides an initial analysis of the EU AI Act's approach to general-purpose artificial intelligence, arguing that the regulation marks a significant shift from reactive to proactive AI governance, while concerns about its enforceability, democratic legitimacy and future-proofing remain.

Artificial intelligence regulation in the United Kingdom: a path to good governance and global leadership?

Huw Roberts, University of Oxford
Alexander Babuta, The Alan Turing Institute
Jessica Morley, University of Oxford
Christopher Thomas, The Alan Turing Institute
Mariarosaria Taddeo, University of Oxford
Luciano Floridi, University of Bologna

PUBLISHED ON: 26 May 2023 DOI: 10.14763/2023.2.1709

Introduction

Globally, there are now over 800 AI policy initiatives from the governments of at least 60 countries, most of them introduced after 2016. The United Kingdom (UK) is at the forefront of AI governance efforts, at least quantitatively: it is second only to the United States (US) in the number of national-level AI policies released (OECD.AI, 2021) and ranks first for the number of mentions of AI in legislative documents between 2016 and 2021 (Zhang et al., 2022). According to the OECD, the US has released 55 documents, the UK 53, the EU 59, and China 22. These figures do not demonstrate that the UK is producing better outcomes than other countries that have published fewer …

Substantively smart cities – Participation, fundamental rights and temporality

Philipp Hacker, European University Viadrina Frankfurt
Jürgen Neyer, European University Viadrina Frankfurt

PUBLISHED ON: 31 Mar 2023 DOI: 10.14763/2023.1.1696

Smart cities need citizen participation, robust data protection, non-discrimination and AI governance to effectively address the challenges of ever-changing technologies, function creep and political apathy.

Governing artificial intelligence in the media and communications sector

Jo Pierson, Hasselt University
Aphra Kerr, Maynooth University
Stephen Cory Robinson, Linköping University
Rosanna Fanni, Centre for European Policy Studies (CEPS)
Valerie Eveline Steinkogler, Vrije Universiteit Brussel
Stefania Milan, University of Amsterdam
Giulia Zampedri, Vrije Universiteit Brussel

PUBLISHED ON: 21 Feb 2023 DOI: 10.14763/2023.1.1683

The article identifies critical blindspots in current European AI policies and explores the impact of AI technologies in the media and communications sector, based on a novel multi-level analytical framework.

Beyond the individual: governing AI’s societal harm

Nathalie A. Smuha, KU Leuven

PUBLISHED ON: 30 Sep 2021 DOI: 10.14763/2021.3.1574

In this article, I propose a distinction between individual harm, collective harm and societal harm caused by artificial intelligence (AI), and focus particularly on the latter. By listing examples and identifying concerns, I provide a conceptualisation of AI’s societal harm so as to better enable its identification and mitigation. Drawing on an analogy with environmental law, which also aims to protect an interest affecting society at large, I propose governance mechanisms that EU policymakers should consider to counter AI’s societal harm.

Transparency in artificial intelligence

Stefan Larsson, Lund University
Fredrik Heintz, Linköping University

PUBLISHED ON: 5 May 2020 DOI: 10.14763/2020.2.1469

Introduction: transparency in AI

Transparency is a multifaceted concept used across various disciplines (Margetts, 2011; Hood, 2006). Recently, it has seen a resurgence in contemporary discourses around artificial intelligence (AI). For example, the ethical guidelines published by the EU Commission's High-Level Expert Group on AI (AI HLEG) in April 2019 list transparency as one of seven key requirements for the realisation of 'trustworthy AI', and it has also made a clear mark in the Commission's white paper on AI, published in February 2020. In fact, "transparency" is the single most common, and one of the five key principles emphasised in the vast number – a …