Follow along as Members of the European Parliament navigate bias and discrimination in AI and share their perspectives on regulatory measures, shedding light on the complexities of their understanding and paving the way towards informed policy development.
News and Research articles on Artificial intelligence
Introduction
Globally, there are now over 800 AI policy initiatives from the governments of at least 60 countries, most of them introduced after 2016. The United Kingdom (UK) is at the forefront of AI governance efforts, at least quantitatively, being second only to the United States (US) in the number of national-level AI policies released (OECD.AI, 2021) and ranking first for the number of mentions of AI in legislative documents between 2016 and 2021 (Zhang et al., 2022). According to the OECD, the US has released 55 documents, the UK 53, the EU 59, and China 22. These figures do not show that the UK produces better outcomes than other countries that have published fewer …
News media discourses on datafication and automation have become more sensitive to data risks, but their complexity is a challenge for informing lay audiences about root causes and solutions.
This article critically examines how three AI initiatives articulate corporate responsibility for human rights regarding long-term risks posed by smart city AI systems.
ChatGPT’s contribution will be minimal, given the ways its AI will, by design, perpetuate social bias and thereby put its business model at risk.
The article identifies critical blindspots in current European AI policies and explores the impact of AI technologies in the media and communications sector, based on a novel multi-level analytical framework.
On the inadequacy of the risk-based approach for generative and general purpose AI.
Artificial emotional intelligence refers to technologies that perform, recognise, or record affective states. More than merely a technological function, however, it is also a social process whereby cultural assumptions about what emotions are and how they are made are translated into composites of code, software, and mechanical platforms. This essay illustrates how aspects of cultural difference are both incorporated and elided in projects that equip machines with emotional intelligence.
Feminist theories have extensively debated consent in sexual and political contexts. But what does it mean to consent when we are talking about our data bodies feeding artificial intelligence (AI) systems?
This analysis of digital technologies aimed at supporting survivors of sexual and gender-based violence illustrates how they reaffirm normative whiteness.
This paper introduces a socio-technical typology of bias in data-driven machine learning and artificial intelligence systems. It argues that a clear distinction must be made between different concepts of bias in such systems in order to analytically assess and politically critique these systems. By analysing the controversial Austrian “AMS algorithm” as a case study among other examples, this paper defines the following three types of bias: purely technical, socio-technical, and societal.
This article considers the use of AI border control technologies to manage security risks and the gendered, racialised, and sexualised impacts this may have on both regular travellers and asylum seekers.
This op-ed is part of a series of opinion pieces edited by Amélie Heldt in the context of a workshop on the Digital Services Act Package hosted by the Weizenbaum Institute for the Networked Society on 15 and 16 November 2021 in Berlin. This workshop brought together legal scholars and social scientists to get a better understanding of the DSA Package, in detail and on a meta level.
This editorial introduces ten research articles, which form part of this special issue, exploring the governance of “European values” inside data flows.
This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, Stefania Milan.
Introduction
The entrenchment and establishment of particular rights has from the outset been part of the advancement of the European project and how the European Union (EU) has defined itself. References to ‘European values’ are often rooted in an understanding of this commitment to rights seen to uphold certain principles about democracy and the relationship between market, state and citizens. Although the notion that Europe is premised on a set of exceptional values is contentious, Foret and Calligaro argue …
In this article, I propose a distinction between individual harm, collective harm and societal harm caused by artificial intelligence (AI), and focus particularly on the latter. By listing examples and identifying concerns, I provide a conceptualisation of AI’s societal harm so as to better enable its identification and mitigation. Drawing on an analogy with environmental law, which also aims to protect an interest affecting society at large, I propose governance mechanisms that EU policymakers should consider to counter AI’s societal harm.
How to provide society with legal protection against the harm caused by AI? Considerations against the background of tort liability in Polish and European law
Smart technologies can respond to feedback in ways that range from clever to mind-boggling. The question is how they affect human agency.
Feel like living in a dystopia? Take a deep breath, get a strong coffee, and let us challenge your ideas of where reality ends, and sci-fi begins…
A young entrepreneur finds herself battling uphill against GDPR enforcement organisations in an effort to bring her AI personal assistant to market. Can her technological wits outsmart the legal shortcuts these organisations have in place?