Regulating AI without the people: Europe’s technocratic drift

Ahmed Alrawi, University of Virginia, Karsh Institute Digital Technology for Democracy, United States of America

PUBLISHED ON: 26 May 2025

“As a victim of the child benefit scandal, I have been neglected and robbed of my dignity again” (Geerdink, 2021, para. 11), stated Franciska Manuputty, a single mother in the Netherlands who was falsely accused of childcare benefits fraud due to an algorithmic system used by the Dutch tax authorities. She was ordered to repay €30,000, leading to severe financial hardship, including reliance on food banks and the threat of eviction. Batya Brown, another Dutch citizen, also received a letter wrongly accusing her of fraud in the same scandal, reflecting the broader impact of flawed algorithmic systems in welfare policies (Amnesty International, 2023).

Manuputty’s case shows how AI-driven systems in welfare policies can reinforce racial and economic inequities, the very type of harm the EU AI Act aims to address. In the Netherlands, the AI system used to detect welfare fraud disproportionately targeted families with dual nationalities, leading to wrongful accusations and severe financial hardship. The Dutch Data Protection Authority later found that the system had illegally used nationality as a variable, a practice Amnesty International likened to “digital ethnic profiling” (Amnesty International, 2023).

In March 2024, the European Union passed its landmark Artificial Intelligence Act (EU AI Act), promoted as a legal framework that incorporates fundamental rights protections and attends to civil liberties and potential social harms. The core purpose of the law is to regulate AI activities that pose high risks to individuals’ rights and digital communication practices, representing a breakthrough in digital governance and in the operational systems of Information and Communication Technologies (ICTs). Nevertheless, problems lie beneath this ambitious effort, chiefly in how the EU’s regulation of AI is increasingly directed by technocratic processes that sideline the very publics it claims to protect.

Rather than anchoring the law in civil society, local communities, or the segments of society most affected by algorithmic systems, the regulatory process has centered on the voices of industry lobby groups, big tech companies, and elite political networks. In this op-ed, I argue that such top-down governance is not only democratically thin but also risks reproducing the very social harms, such as bias, exclusion, and state surveillance, that regulation is meant to prevent.

AI systems are not neutral

The core problem with AI systems is that they are not neutral tools within societies but are embedded in a broader context characterised by inequality and bias against individuals (Duke & Giudici, 2025). Numerous studies have demonstrated how AI systems, when implemented without democratic oversight, can perpetuate biases and produce algorithmic decisions that deprive marginalised groups, including immigrants, racialised communities, and rural populations, of their rights (Coeckelbergh, 2024; Jungherr, 2023; Alrawi, 2023). Still, the EU policymaking process that shapes AI governance remains predominantly driven by technical experts, with limited opportunities for input or contestation from affected communities. The exclusion of these voices not only undermines the legitimacy of AI governance frameworks but also perpetuates the very disparities such policies aim to address.

A technocratic process with limited accountability

The EU AI Act offers clear evidence of this technocratic drift. While the drafting process included consultation phases, these were largely inaccessible to ordinary citizens and nonprofit civil society groups. Instead, organisations such as DigitalEurope and the European Round Table for Industry exerted noticeable influence, shaping the regulatory language in ways that prioritise innovation and economic growth while disregarding public input (Corporate Europe Observatory, 2025; Balayn & Gürses, 2024). In many instances, ethical and societal concerns were reframed as technical issues, and regulatory obligations were treated as constraints on innovation rather than as essential safeguards for democratic oversight. This imbalance reflects what political theorists describe as “post-democratic” policymaking, where democratic forms remain in place but real power shifts to unelected technocratic actors (Crouch, 2019, p. 136).

Critics have consistently questioned this imbalance. Civil society networks, including the European Digital Rights (EDRi) association and Amnesty International, have called for participatory impact assessments and stronger human rights safeguards. Ordinary EU citizens living under a so-called democratic system of digital governance need institutions to treat data governance as a public issue rather than a merely technical one. Without these shifts, the EU AI Act risks institutionalising a system in which the rules of automated decision-making are shaped far from public scrutiny and civic deliberation.

Voices from the margins are still missing

Although EU AI policy is formally centered on regulating AI systems, it remains detached from the lived realities of those most affected by flawed algorithmic decisions. Marginalised communities, particularly immigrants, Roma groups, and digitally underserved rural residents, often bear the brunt of AI-driven tools (Bueno Patin & Stapper, 2025; Pham & Davies, 2024). Their exclusion from these processes not only undermines the legitimacy of AI governance but also prevents the development of policies that address real-world social harms. Current EU AI policies lack clear, systematic guidelines for integrating civil society perspectives into existing frameworks (Mügge, 2024). Instead, elite private tech corporations invoke abstract notions such as European values and trustworthy AI as blanket justifications, without clarifying what those values mean in diverse, lived contexts. This exclusion reflects a deeper techno-policy problem in which harm is assessed through abstract, non-representative metrics rather than through the everyday experiences of those governed by the technology.

Toward participatory AI governance

To move beyond this technocratic drift, the European Union must incorporate stronger forms of democratic deliberation into AI policymaking. Several practical interventions are possible.

First, member states should be required to conduct localised public forums such as citizens’ assemblies, town halls, or deliberative workshops before adopting national AI implementation plans. These forums must be accessible and inclusive, with targeted outreach to low-income, migrant, and rural populations. Examples from participatory budgeting and climate assemblies demonstrate that structured deliberation can generate informed and balanced input that technical experts may overlook.

Second, the European Commission should establish an independent observatory to monitor the social impacts of AI systems across different regions. This observatory should rely on qualitative fieldwork, site visits, and participatory research. It must be entirely separate from industry-led initiatives and grounded in partnerships between academia and civil society. Such a body would help counterbalance the centralised influence of corporate stakeholders by providing contextual insights from affected communities.

Third, policymakers should adopt public reasonability, not just technical risk, as a central guiding principle. This approach treats AI systems not merely as tools to be optimised, but as political instruments whose legitimacy depends on widespread social consent. A rights-based, community-informed framework should be central to future revisions of the Act.

Conclusion: Democratising AI policy formation

Europe prides itself on leading the globe in ethical AI regulation. Yet, leadership is not merely about drafting rules; it is about how those rules are made, who shapes them, and whose voices count. If Europe’s AI governance continues to rely on elite consultations and obscure processes, it will only reinforce the legitimacy crisis facing democratic institutions more broadly. The promise of AI regulation lies not just in reducing risk but in opening new possibilities for inclusive and participatory digital futures. That will require regulators to look beyond codebooks and compliance checklists and toward the communities whose lives are shaped by automated decisions every day. Expanding civic participation in AI governance is not only a democratic necessity but also a prerequisite for building equitable and resilient digital systems.