The article takes an in-depth look at the AI Act’s governance approach to non-high-risk AI systems and provides a multi-perspective analysis of the challenges that the EU’s regulation of AI brings about.
Klinger & Hacker highlight the risk that “public interest AI”, despite its potential, becomes a mere marketing label due to the inherent misalignment between for-profit goals and public interest aspirations.
The Brazilian Social Security Management Office's AI system reduces the waiting list but increases automatic refusals, harming beneficiaries and increasing inequality in the delivery of public services to the poorest and elderly people.
Is the current regulatory focus on AI misguided? AI-based services are produced in agile production environments that are decades in the making and concentrated in the hands of a few companies. This article illustrates how AI is only the latest output of these production environments, gives an overview of the socio-technical as well as political-economic concerns these environments raise, and argues why they may be a better target for policy and regulatory interventions.
This article explores some conditions and possibilities for public contestability in AI governance; a critical attribute of governance arrangements designed to align AI deployment with the public interest.
Rohde cautions that the economic structure within which AI is currently being developed is unfortunately at odds with the public interest. At the same time, she believes that intelligent algorithms and digitalisation can, in essence, contribute to environmental and climate protection in tangible ways.
In this interview, Meyer describes her thinking as a funder and enabler of public interest technology: how to decide whether a project serves the public interest or constitutes a public good, and how such projects and products interact with the market, including the importance of free and open source solutions.
Why it does not make sense to move faster when heading the wrong way.
As the debate on public interest AI is still a young and emerging one, we see this special issue as a way to help establish this field and its community by bringing together interdisciplinary positions and approaches.
The principle of proportionality not only addresses the conflict among competing interests under Article 15(1)(h) GDPR but also shapes the justifications for public interest restrictions on the right of access to AI decision-making information.
This article provides an initial analysis of the EU AI Act's approach to general-purpose artificial intelligence, arguing that the regulation marks a significant shift from reactive to proactive AI governance, while concerns about its enforceability, democratic legitimacy and future-proofing remain.
The AI Act will require high-risk AI systems to comply with harmonised technical standards, including for the protection of fundamental rights: what problems might arise when mixing technical standards and fundamental rights?
This paper empirically explores how AWS, Microsoft Azure, and Google Cloud strategically attempt to operationalise infrastructural power in AI development and implementation through their ecosystems for cloud AI.
In the swiftly evolving digital landscape, the advent of generative artificial intelligence (GenAI) is heralding unprecedented changes in how we interact, work, and innovate. However, this technological renaissance brings to the fore a critical yet often overlooked consequence: the widening of the existing digital divide.
European digital and internet policies must take account of digital policy and infrastructure in the Middle East, particularly in the Gulf region, expanding the geographical and regulatory frontiers of debates on the European digital space.
Focusing on recommender systems used by dominant social media platforms as an example of high-reach AI, this study explores the directionality of transparency provisions introduced by the Digital Services Act and highlights the pivotal role of oversight authorities in addressing risks posed by high-reach AI technologies.
According to recent studies, generative artificial intelligence (GenAI) output discriminates against women. In tests of ChatGPT, terms such as “expert” and “integrity” were used to describe men, while women were associated with “beauty” or “delight”. Similar patterns emerged when Alpaca, a large language model developed at Stanford University, was used to produce recommendation letters for prospective employees.
Brazil is currently examining a comprehensive AI bill to establish a rights-based and risk-based regulatory framework. In contrast to notions of legal transplant or the influence of the Brussels effect, Brazil seeks to carve its own path, addressing the nation's distinct challenges and opportunities.
Bloomberg recently reported that Visa handles 90 million disputes annually. Resolving disputes at that scale requires an innovative dispute resolution method: CODR.