Considering the nature of AI technology and the contents of the EU AI Act, the Act's external impact is better understood in terms of experimentalist governance than in terms of the much-cited 'Brussels effect'.
Article 10 of the EU’s AI Act puts data governance at the heart of bias mitigation in high-risk AI systems but offers little guidance on implementation. Delegating these challenges to technical standardisation bodies raises both feasibility and legitimacy concerns, posing a significant test for the institutions now tasked with defining AI fairness in practice.
In her op-ed, the author argues that the AI Act overlooks the challenges posed by the use of generative AI in the literary industry. She calls for European legislation that takes into account the specific conditions and cultural value of original literary production.
This opinion piece discusses the Artificial Intelligence Act (AIA), recently adopted by the European Parliament, highlighting its goals and regulatory structure. The authors argue that the Act's predominantly rule-based approach may not effectively balance innovation and regulation.
This article evaluates how to reconcile the AI Act's Art. 50 transparency provisions applicable to AI-generated text with news readers' perceptions of manipulation and empowerment.
The article takes an in-depth look at the AI Act’s governance approach to non-high-risk AI systems and provides a multi-perspective analysis of the challenges that the EU’s regulation of AI brings about.
This article provides an initial analysis of the EU AI Act's approach to general-purpose artificial intelligence, arguing that the regulation marks a significant shift from reactive to proactive AI governance, while concerns about its enforceability, democratic legitimacy and future-proofing remain.
The AI Act will require high-risk AI systems to comply with harmonised technical standards, including for the protection of fundamental rights: what problems might arise when mixing technical standards and fundamental rights?
In the swiftly evolving digital landscape, the advent of generative artificial intelligence (GenAI) is bringing unprecedented changes to how we interact, work, and innovate. However, this technological renaissance also brings to the fore a critical yet often overlooked risk: the widening of the existing digital divide.
How will generative AI affect elections? On the eve of the European Parliament elections, alarm bells are ringing.
According to recent studies, the output of generative artificial intelligence (AI) discriminates against women. In tests of ChatGPT, terms such as "expert" and "integrity" were used to describe men, while women were associated with "beauty" or "delight". Similar results were found when Alpaca, a large language model developed by Stanford University, was used to produce recommendation letters for potential employees.
This article critically examines how three AI initiatives articulate corporate responsibility for human rights regarding long-term risks posed by smart city AI systems.