News and Research articles on Artificial intelligence

Regulating high-reach AI: On transparency directions in the Digital Services Act

Kasia Söderlund, Lund University
Emma Engström, Institute for Futures Studies
Kashyap Haresamudram, Lund University
Stefan Larsson, Lund University
Pontus Strimling, Institute for Futures Studies
PUBLISHED ON: 26 Mar 2024 DOI: 10.14763/2024.1.1746

Focusing on recommender systems used by dominant social media platforms as an example of high-reach AI, this study explores the directionality of transparency provisions introduced by the Digital Services Act and highlights the pivotal role of oversight authorities in addressing risks posed by high-reach AI technologies.

According to recent studies, Generative Artificial Intelligence (AI) output discriminates against women. In tests of ChatGPT, terms such as “expert” and “integrity” were used to describe men, while women were associated with “beauty” or “delight”. Similar patterns emerged when Alpaca, a Large Language Model developed at Stanford University, was used to produce recommendation letters for potential employees.

The road to regulation of artificial intelligence: the Brazilian experience

Laura Schertel Mendes, Goethe-Universität Frankfurt am Main
Beatriz Kira, University of Sussex

PUBLISHED ON: 21 Dec 2023

Brazil is currently examining a comprehensive AI bill to establish a rights-based and risk-based regulatory framework. In contrast to notions of legal transplant or the influence of the Brussels effect, Brazil seeks to carve its own path, addressing the nation's distinct challenges and opportunities.

Follow along as Members of the European Parliament navigate bias and discrimination in AI and explore their perspectives on regulatory measures, shedding light on the complexities of their understanding and paving the way towards informed policy development.

Artificial intelligence regulation in the United Kingdom: a path to good governance and global leadership?

Huw Roberts, University of Oxford
Alexander Babuta, British Library
Jessica Morley, University of Oxford
Christopher Thomas, British Library
Mariarosaria Taddeo, University of Oxford
Luciano Floridi, University of Bologna
PUBLISHED ON: 26 May 2023 DOI: 10.14763/2023.2.1709

Globally, there are now over 800 AI policy initiatives, from the governments of at least 60 countries, with most being introduced after 2016. The United Kingdom (UK) is at the forefront of AI governance efforts, at least quantitatively, being second only to the United States (US) in terms of the number of national-level AI policies released (OECD.AI, 2021) and ranking top for the number of mentions of AI in legislative documents between 2016 and 2021 (Zhang et al., 2022). According to the OECD, the US has released 55 documents, the UK 53, the EU 59, and China 22. These figures do not evidence the UK producing better outcomes than other countries that have published fewer …

Governing artificial intelligence in the media and communications sector

Jo Pierson, Hasselt University
Aphra Kerr, Maynooth University
Stephen Cory Robinson, Linköping University
Rosanna Fanni, Centre for European Policy Studies (CEPS)
Valerie Eveline Steinkogler, Vrije Universiteit Brussel
Stefania Milan, University of Amsterdam
Giulia Zampedri, Vrije Universiteit Brussel
PUBLISHED ON: 21 Feb 2023 DOI: 10.14763/2023.1.1683

The article identifies critical blindspots in current European AI policies and explores the impact of AI technologies in the media and communications sector, based on a novel multi-level analytical framework.

Artificial emotional intelligence beyond East and West

Daniel White, University of Cambridge
Hirofumi Katsuno, Doshisha University
PUBLISHED ON: 11 Feb 2022 DOI: 10.14763/2022.1.1618

Artificial emotional intelligence refers to technologies that perform, recognise, or record affective states. More than merely a technological function, however, it is also a social process whereby cultural assumptions about what emotions are and how they are made are translated into composites of code, software, and mechanical platforms. This essay illustrates how aspects of cultural difference are both incorporated and elided in projects that equip machines with emotional intelligence.

This paper introduces a socio-technical typology of bias in data-driven machine learning and artificial intelligence systems. It argues that a clear distinction must be made between different concepts of bias in such systems in order to analytically assess and politically critique these systems. By analysing the controversial Austrian “AMS algorithm” as a case study among other examples, this paper defines the following three types of bias: purely technical, socio-technical, and societal.

Whiteness in and through data protection: an intersectional approach to anti-violence apps and #MeToo bots

Renee Shelby, Northwestern University
Jenna Imad Harb, Australian National University
Kathryn Henne, Australian National University
PUBLISHED ON: 7 Dec 2021 DOI: 10.14763/2021.4.1589

This analysis of digital technologies aimed at supporting survivors of sexual and gender-based violence illustrates how they reaffirm normative whiteness.

Programming the machine: gender, race, sexuality, AI, and the construction of credibility and deceit at the border

Lucy Hall, University of Amsterdam
William Clapton, University of New South Wales
PUBLISHED ON: 7 Dec 2021 DOI: 10.14763/2021.4.1601

This article considers the use of AI border control technologies to manage security risks and the gendered, racialised, and sexualised impacts this may have on both regular travellers and asylum seekers.

This op-ed is part of a series of opinion pieces edited by Amélie Heldt in the context of a workshop on the Digital Services Act Package hosted by the Weizenbaum Institute for the Networked Society on 15 and 16 November 2021 in Berlin. This workshop brought together legal scholars and social scientists to get a better understanding of the DSA Package, in detail and on a meta level.

Governing “European values” inside data flows: interdisciplinary perspectives

Kristina Irion, University of Amsterdam
Mira Burri, University of Lucerne
Ans Kolk, University of Amsterdam
Stefania Milan, University of Amsterdam
PUBLISHED ON: 30 Sep 2021 DOI: 10.14763/2021.3.1582

This editorial introduces ten research articles, which form part of this special issue, exploring the governance of “European values” inside data flows.