News and research articles on artificial intelligence

Follow along as Members of the European Parliament navigate bias and discrimination in AI and explore their perspectives on regulatory measures, shedding light on the complexities of their understanding and paving the way towards informed policy development.

Artificial intelligence regulation in the United Kingdom: a path to good governance and global leadership?

Huw Roberts, University of Oxford
Alexander Babuta, British Library
Jessica Morley, University of Oxford
Christopher Thomas, British Library
Mariarosaria Taddeo, University of Oxford
Luciano Floridi, University of Bologna
PUBLISHED ON: 26 May 2023 DOI: 10.14763/2023.2.1709

Introduction

Globally, there are now over 800 AI policy initiatives from the governments of at least 60 countries, most of them introduced after 2016. The United Kingdom (UK) is at the forefront of AI governance efforts, at least quantitatively: it is second only to the United States (US) in the number of national-level AI policies released (OECD.AI, 2021), and it ranks first for the number of mentions of AI in legislative documents between 2016 and 2021 (Zhang et al., 2022). According to the OECD, the US has released 55 documents, the UK 53, the EU 59, and China 22. These figures do not demonstrate that the UK has produced better outcomes than other countries that have published fewer …

Governing artificial intelligence in the media and communications sector

Jo Pierson, Hasselt University
Aphra Kerr, Maynooth University
Stephen Cory Robinson, Linköping University
Rosanna Fanni, Centre for European Policy Studies (CEPS)
Valerie Eveline Steinkogler, Vrije Universiteit Brussel
Stefania Milan, University of Amsterdam
Giulia Zampedri, Vrije Universiteit Brussel
PUBLISHED ON: 21 Feb 2023 DOI: 10.14763/2023.1.1683

The article identifies critical blindspots in current European AI policies and explores the impact of AI technologies in the media and communications sector, based on a novel multi-level analytical framework.

Artificial emotional intelligence beyond East and West

Daniel White, University of Cambridge
Hirofumi Katsuno, Doshisha University
PUBLISHED ON: 11 Feb 2022 DOI: 10.14763/2022.1.1618

Artificial emotional intelligence refers to technologies that perform, recognise, or record affective states. More than merely a technological function, however, it is also a social process whereby cultural assumptions about what emotions are and how they are made are translated into composites of code, software, and mechanical platforms. This essay illustrates how aspects of cultural difference are both incorporated and elided in projects that equip machines with emotional intelligence.

Whiteness in and through data protection: an intersectional approach to anti-violence apps and #MeToo bots

Renee Shelby, Northwestern University
Jenna Imad Harb, Australian National University
Kathryn Henne, Australian National University
PUBLISHED ON: 7 Dec 2021 DOI: 10.14763/2021.4.1589

This analysis of digital technologies aimed at supporting survivors of sexual and gender-based violence illustrates how they reaffirm normative whiteness.

This paper introduces a socio-technical typology of bias in data-driven machine learning and artificial intelligence systems. It argues that a clear distinction must be made between different concepts of bias in such systems in order to analytically assess and politically critique these systems. By analysing the controversial Austrian “AMS algorithm” as a case study among other examples, this paper defines the following three types of bias: purely technical, socio-technical, and societal.

Programming the machine: gender, race, sexuality, AI, and the construction of credibility and deceit at the border

Lucy Hall, University of Amsterdam
William Clapton, University of New South Wales
PUBLISHED ON: 7 Dec 2021 DOI: 10.14763/2021.4.1601

This article considers the use of AI border control technologies to manage security risks and the gendered, racialised, and sexualised impacts this may have on both regular travellers and asylum seekers.

This op-ed is part of a series of opinion pieces edited by Amélie Heldt in the context of a workshop on the Digital Services Act Package hosted by the Weizenbaum Institute for the Networked Society on 15 and 16 November 2021 in Berlin. This workshop brought together legal scholars and social scientists to get a better understanding of the DSA Package, in detail and on a meta level.

Governing “European values” inside data flows: interdisciplinary perspectives

Kristina Irion, University of Amsterdam
Mira Burri, University of Lucerne
Ans Kolk, University of Amsterdam
Stefania Milan, University of Amsterdam
PUBLISHED ON: 30 Sep 2021 DOI: 10.14763/2021.3.1582

This editorial introduces ten research articles, which form part of this special issue, exploring the governance of “European values” inside data flows.

This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, and Stefania Milan.

Introduction

The entrenchment and establishment of particular rights have from the outset been part of the advancement of the European project and of how the European Union (EU) has defined itself. References to ‘European values’ are often rooted in an understanding of this commitment to rights seen to uphold certain principles about democracy and the relationship between market, state, and citizens. Although the notion that Europe is premised on a set of exceptional values is contentious, Foret and Calligaro argue …

Beyond the individual: governing AI’s societal harm

Nathalie A. Smuha, KU Leuven
PUBLISHED ON: 30 Sep 2021 DOI: 10.14763/2021.3.1574

In this article, I propose a distinction between individual harm, collective harm and societal harm caused by artificial intelligence (AI), and focus particularly on the latter. By listing examples and identifying concerns, I provide a conceptualisation of AI’s societal harm so as to better enable its identification and mitigation. Drawing on an analogy with environmental law, which also aims to protect an interest affecting society at large, I propose governance mechanisms that EU policymakers should consider to counter AI’s societal harm.