News and Research articles on Artificial intelligence

Balancing public interest, fundamental rights, and innovation: The EU’s governance model for non-high-risk AI systems

Michael Gille, Hamburg University of Applied Sciences
Marina Tropmann-Frick, Hamburg University of Applied Sciences
Thorben Schomacker, Hamburg University of Applied Sciences
PUBLISHED ON: 30 Sep 2024 DOI: 10.14763/2024.3.1797

The article takes an in-depth look at the AI Act’s governance approach to non-high-risk AI systems and provides a multi-perspective analysis of the challenges that the EU’s regulation of AI brings about.

Balancing efficiency and public interest: The impact of AI automation on social benefit provision in Brazil

Maria Alejandra Nicolás, Federal University of Latin American Integration
Rafael Cardoso Sampaio, Federal University of Paraná
PUBLISHED ON: 30 Sep 2024 DOI: 10.14763/2024.3.1799

The Brazilian Social Security Management Office's AI system reduces the waiting list but increases automatic refusals, harming beneficiaries and increasing inequality in the delivery of public services to the poorest and elderly people.

Misguided: AI regulation needs a shift in focus

Agathe Balayn, Delft University of Technology (TU Delft)
Seda Gürses, Delft University of Technology (TU Delft)

PUBLISHED ON: 30 Sep 2024

Is the current regulatory focus on AI misguided? AI-based services are produced in agile production environments that are decades in the making and concentrated in the hands of a few companies. This article illustrates how AI is only the latest output of these production environments, gives an overview of the socio-technical as well as political-economic concerns these environments raise, and argues why they may be a better target for policy and regulatory interventions.

Contesting the public interest in AI governance

Tegan Cohen, Queensland University of Technology (QUT)
Nicolas P. Suzor, Queensland University of Technology (QUT)
PUBLISHED ON: 30 Sep 2024 DOI: 10.14763/2024.3.1794

This article explores some conditions and possibilities for public contestability in AI governance, a critical attribute of governance arrangements designed to align AI deployment with the public interest.

Interview with Friederike Rohde: The environmental impact of AI as a public interest concern

Theresa Züger, Alexander von Humboldt Institute for Internet and Society

PUBLISHED ON: 30 Sep 2024

Rohde cautions that the economic structure within which AI is currently being developed is, unfortunately, at odds with the public interest. At the same time, she believes that intelligent algorithms and digitalisation can contribute to environmental and climate protection in tangible ways.

In this interview, Meyer describes her thinking as a funder and enabler of public interest technology: how to decide whether a project is in the public interest or constitutes a public good, and how such projects and products interact with the market, including the importance of free and open source solutions.

Introduction to the special issue on AI systems for the public interest

Theresa Züger, Alexander von Humboldt Institute for Internet and Society
Hadi Asghari, Alexander von Humboldt Institute for Internet and Society
PUBLISHED ON: 30 Sep 2024 DOI: 10.14763/2024.3.1802

As the debate on public interest AI is still a young and emerging one, we see this special issue as a way to help establish this field and its community by bringing together interdisciplinary positions and approaches.

The principle of proportionality not only addresses the conflict among competing interests under Article 15(1)(h) GDPR but also shapes the justifications for public interest restrictions on the right of access to AI decision-making information.

General-purpose AI regulation and the European Union AI Act

Oskar J. Gstrein, University of Groningen
Noman Haleem, University of Groningen
Andrej Zwitter, University of Groningen
PUBLISHED ON: 1 Aug 2024 DOI: 10.14763/2024.3.1790

This article provides an initial analysis of the EU AI Act's approach to general-purpose artificial intelligence, arguing that the regulation marks a significant shift from reactive to proactive AI governance, while concerns about its enforceability, democratic legitimacy and future-proofing remain.

The European approach to regulating AI through technical standards

Mélanie Gornet, Institut Polytechnique de Paris
Winston Maxwell, Institut Polytechnique de Paris
PUBLISHED ON: 16 Jul 2024 DOI: 10.14763/2024.3.1784

The AI Act will require high-risk AI systems to comply with harmonised technical standards, including for the protection of fundamental rights: what problems might arise when mixing technical standards and fundamental rights?

This paper empirically explores how AWS, Microsoft Azure, and Google Cloud strategically attempt to operationalise infrastructural power in AI development and implementation through their ecosystems for cloud AI.

The intensified digital divide: Comprehending GenAI

Mennatullah Hendawy, Center for Advanced Internet Studies (CAIS)

PUBLISHED ON: 14 Jun 2024

In the swiftly evolving digital landscape, the advent of generative artificial intelligence (GenAI) is heralding unprecedented changes in how we interact, work, and innovate. However, this technological renaissance brings to the fore a critical yet often overlooked consequence: the widening of the existing digital divide.

Regulating high-reach AI: On transparency directions in the Digital Services Act

Kasia Söderlund, Lund University
Emma Engström, Institute for Futures Studies
Kashyap Haresamudram, Lund University
Stefan Larsson, Lund University
Pontus Strimling, Institute for Futures Studies
PUBLISHED ON: 26 Mar 2024 DOI: 10.14763/2024.1.1746

Focusing on recommender systems used by dominant social media platforms as an example of high-reach AI, this study explores the directionality of transparency provisions introduced by the Digital Services Act and highlights the pivotal role of oversight authorities in addressing risks posed by high-reach AI technologies.

According to recent studies, generative artificial intelligence (AI) output discriminates against women. In tests of ChatGPT, terms such as “expert” and “integrity” were used to describe men, while women were associated with “beauty” or “delight”. The same pattern appeared when Alpaca, a large language model developed by Stanford University, was used to produce recommendation letters for potential employees.

The road to regulation of artificial intelligence: The Brazilian experience

Laura Schertel Mendes, Goethe-Universität Frankfurt am Main
Beatriz Kira, University of Sussex

PUBLISHED ON: 21 Dec 2023

Brazil is currently examining a comprehensive AI bill to establish a rights-based and risk-based regulatory framework. In contrast to notions of legal transplant or the influence of the Brussels effect, Brazil seeks to carve its own path, addressing the nation's distinct challenges and opportunities.