News and Research articles on Bias

Article 10 of the EU’s AI Act puts data governance at the heart of bias mitigation in high-risk AI systems but offers little guidance on implementation. Delegating these challenges to technical standardisation bodies raises both feasibility and legitimacy concerns, posing a significant test for the institutions now tasked with defining AI fairness in practice.

Follow Members of the European Parliament as they grapple with bias and discrimination in AI, and explore their perspectives on regulatory measures, shedding light on how they understand these issues and paving the way towards informed policy development.

This paper introduces a socio-technical typology of bias in data-driven machine learning and artificial intelligence systems. It argues that different concepts of bias in such systems must be clearly distinguished in order to assess them analytically and critique them politically. By analysing the controversial Austrian “AMS algorithm” as a case study among other examples, this paper defines three types of bias: purely technical, socio-technical, and societal.