News and Research articles on Automated decision-making

Fairness as crowd-pleaser

Lee Andrew Bygrave, University of Oslo

PUBLISHED ON: 24 Jul 2025

Given the ubiquity of fairness as a normative criterion in tech policy, this op-ed warns of particular risks to its legitimising potential, which may, in the long term, damage the standing of fairness as a crowd-pleaser.

While transparency is often championed as the key to addressing the risks of automated decision-making (ADM) in public governance, this op-ed argues that a narrow focus on explainability overlooks deeper systemic issues such as power imbalances, commercial influence, and weakened accountability. To address these issues, mechanisms that promote transparency must operate alongside efforts to enhance citizen engagement and other methods of oversight and accountability to better protect democratic values.

This op-ed defends the Universal Inscrutability Argument by clarifying what legal explainability actually requires: justifying reasons for institutional decisions, not access to individual motivations. The argument holds that legal standards for explainability should rest on the former, not the latter.

The violence of the majority: Rethinking AI positionality in decision-making

Mennatullah Hendawy, Center for Advanced Internet Studies (CAIS)

PUBLISHED ON: 13 Jan 2025

Mennatullah Hendawy critically examines how AI systems often perpetuate societal inequities by prioritising majority perspectives and marginalising underrepresented groups. Drawing on examples such as predictive policing and agricultural tools in the Global South, she underscores the importance of considering the positionality of AI creators.