Brussels effect or experimentalism? The EU AI Act and global standard-setting
Abstract
Supporters of the EU’s Artificial Intelligence Act have presented it as promising to define the global standard for AI regulation, thus emulating the widely heralded ‘Brussels effect’. This paper juxtaposes this expectation with the alternative position of experimentalist governance, which instead sees the EU’s AI Act as one approach to AI regulation among others and conceives of its interaction with other regulatory approaches across the world in a more cooperative and open-ended way. The paper explores the differences between these two theories along two lines. First, comparing the nature of AI with established digital technologies, it finds that the fundamental uncertainty that accompanies AI calls for regulators across the world to adopt a more interventionist approach that balances its promises and perils. Secondly, looking at the contents of the AI Act, the analysis establishes that it remains very procedural, reflecting a distinctively incremental and open-ended approach. On the whole, these analyses suggest that the external impact of the AI Act is more in line with experimentalist governance than with the theory of the Brussels effect.
Introduction
The Artificial Intelligence Act (Regulation 2024/1689) that the European Union (EU) adopted in spring 2024 is widely seen as marking a major step in AI regulation worldwide. Typically, in self-congratulatory fashion, the Council of the EU celebrated the adoption of the AI Act in the following terms:
Today the Council approved a ground-breaking law [emphasis added] aiming to harmonise rules on artificial intelligence, the so-called artificial intelligence act. The flagship legislation follows a ‘risk-based’ approach, which means the higher the risk to cause harm to society, the stricter the rules. It is the first of its kind in the world and can set a global standard [emphasis added] for AI regulation (Council of the EU, 2024).
This quote expresses a distinct understanding of the EU’s role in the worldwide regulation of AI, namely that of a global frontrunner and standard setter. The underlying logic of this understanding has been very much developed and propagated by Anu Bradford’s account of the ‘Brussels effect’ (Bradford, 2019; Vogel, 1995). The Brussels effect implies that, by adopting ambitious product standards, the EU is able to drive product standards to a higher level in markets elsewhere in the world. The effect relies on the twin causes of the EU’s reputation as a regulator and the exceptional size of the EU’s consumer market. Given the EU’s strong reputation as a regulator, other jurisdictions are likely to adopt EU standards as a template (the de jure Brussels effect) and, given the size of the EU’s consumer market, companies may choose to internalise EU standards since, by doing so, their products meet standards everywhere (the de facto Brussels effect). Bradford (2019) demonstrates this effect for competition policy, environmental standards, health and safety regulation, and data protection and hate speech on the internet. What is more, EU policy makers have self-consciously adopted the logic of the Brussels effect in the way they frame their policies (Vestager & Bradford, 2021). Taking up these cues, and especially the way the EU’s GDPR has affected data privacy regulation worldwide (Bradford, 2019, Chapter 5), observers have started to speculate whether the Brussels effect will also apply to the EU’s AI Act (Siegmann & Anderljung, 2022; Engler, 2022; Bradford, 2023).
Since the AI Act has only been adopted recently and much of its implementation is still work in progress, it is too soon to evaluate empirically whether the act has indeed come to effectively set the worldwide standard. Instead, this article offers an ex-ante evaluation of the likelihood that the EU AI Act can be expected to have the Brussels effect, taking account, on the one hand, of the characteristics of the emerging market of AI models and applications (Section 3) and, on the other hand, of the form and contents of the AI Act itself (Section 4). Both perspectives suggest that the Brussels effect may not only overestimate the global impact of the EU AI Act but, more fundamentally, miscast the relations between the EU and the rest of the world in competitive rather than more cooperative terms. To capture such an alternative dynamic, I build on the theory of experimentalist governance (Sabel & Zeitlin, 2008, 2012). Hence, before turning to the examination of the two questions, the next section first lays out the debate on the Brussels effect in digital policymaking and introduces experimentalism as an alternative perspective.
Theorising EU digital regulation in its global context
In a globalised world, the way that regulation from the European Union interacts with the regulation of other global jurisdictions varies (Young, 2015). Two dimensions of variation stand out in particular. For one, the EU can be a rule-setter for the rest of the world or a rule-taker, for instance in regulatory domains that have come to be dominated by the US. The second major dimension is whether the interactions of the EU with other regulatory systems are framed in terms of competition or in terms of cooperation (Young, 2015, p. 1239). The frame of competition highlights the economic and security interests involved in product markets. In contrast, the frame of cooperation underlines that regulation around the world often serves similar objectives, like health and safety, and that there generally is an overarching interest in having shared standards.
The EU is considered to be particularly well-positioned to be a global rule-setter. This is not merely because of the size of its market, but in particular due to its established regulatory capacity (Bach & Newman, 2007) as well as its inclination to set high standards (Bradford, 2019, p. 37). The ability of the EU to become a global rule-setter has also been recognised to depend on the global regulatory context (in particular the density of international institutions) (Newman & Posner, 2015) and on the nature of the object of regulation (the product) involved. As regards the object of regulation, Bradford (2019, Chapter 2) highlights the conditions of ‘inelasticity’ and ‘non-divisibility’. ‘Inelasticity’ essentially refers to the limited ability of producers to divert their activities from the regulating market, like the EU. ‘Non-divisibility’ refers to the difficulty that firms may have in trying to separate their supply for the EU market from that for the rest of the world, or to the situation in which they can only do so at such disproportionate costs that it is more efficient to simply apply the EU standard across the board.
The term ‘Brussels effect’ has come to be attached to cases in which the EU has been particularly successful in turning its demanding product standards into the worldwide norm (Bradford 2019). In the digital domain, the most celebrated example of the Brussels effect is the EU’s General Data Protection Regulation (GDPR). As Bradford (2019, Chapter 5) shows, even if the GDPR sets relatively demanding standards for data protection, other countries have been keen to meet these as well so as to ensure the free flow of data in and from the EU. Notable examples are the way the GDPR inspired the 2018 California Consumer Privacy Act (Gunst & De Ville, 2021) and the 2020 Brazilian data protection law (Gadoni Canaan, 2023). More important even than this de jure Brussels effect of legislative alignment with the EU, is the de facto Brussels effect through which key digital companies (like Apple and Google) have come to adopt the GDPR standards for their global market as a whole. In the latter respect, the relative indivisibility of dataflows and the internet certainly was a major contributing factor.
In her most recent book, Digital Empires: The Global Battle to Regulate Technology, Bradford claims that “Artificial intelligence may well be the next frontier of the Brussels Effect”, adding that the EU AI Act “has the potential to shape the development of AI globally” (Bradford, 2023, p. 348). As the subtitle of the book indicates, Bradford’s assessment emerges from an analysis that frames the regulation of the digital domain (and particularly of AI) in strikingly competitive terms. She sees the global digital order as being shaped by the competition between three regulatory ‘empires’: the market-driven model of the US, the state-centred Chinese approach, and the rights-based regime of the EU. Rather than expecting one digital empire to prevail, Bradford sketches a future in which different regimes will continue to co-exist in a complex balance. She does recognise the ascendancy of the Chinese model, particularly in the non-democratic world (Bradford, 2023, Chapter 8). Notably, however, for the democratic part of the world, she is remarkably optimistic about the EU rights-driven model (Bradford, 2023, p. 366), which she sees as serving citizens’ needs and ultimately expects to prevail over the more narrowly focused American market-driven model. To be sure, Bradford’s (2023) account of the regulation of AI was bound to remain somewhat speculative as it was written before the final AI Act was adopted by the EU. Notably, in her chapter on digital regulation in the US, the issue of AI is completely absent, and the whole book contains no reference to ChatGPT or any other large language model.
In any case, before getting carried away by the Brussels effect, it is useful to also consider the possibility that the EU’s regulatory approach may not prevail. Among the various alternative dynamics that one can envisage, the most relevant one for the purposes of comparison is ‘experimentalist governance’ (Sabel & Zeitlin, 2008, 2012), since it not only suggests that the EU need not be the global rule-setter but also highlights the more cooperative elements in the interaction between regulatory regimes rather than framing them as an inevitable battle. Charles Sabel and Jonathan Zeitlin (2008, 2012) theorise ‘experimentalist governance’ as a setting in which independent jurisdictions are collectively involved in responding to a given challenge (like the rise of AI technology) and do so under conditions of pervasive uncertainty about the most effective instruments to be used. Crucially, experimentalist governance presumes that there is no central rule-setter. Hence, the different jurisdictions retain considerable discretion in defining their own policy priorities and adopting their own instruments. In the experimentalist frame, such a setting is bound to invite a process of joint monitoring of each other’s efforts and results and, in this process, regulators will selectively copy practices from each other, depending also on their judgement of the similarities between the different policy contexts. Experimentalist governance can thus be seen as a context of continuous policy monitoring and learning among regulatory actors that each retain their autonomy. Over time, it may well lead to convergence on policies that have proven effective. However, to the extent that policy contexts are found to be structurally different from each other, policies may also remain divergent.
The Brussels effect and experimentalist governance are not diametrical opposites; they are better regarded as different points on a spectrum. On this spectrum, the Brussels effect embodies the extreme at which the EU is widely recognised as the worldwide rule-setter. For proponents of the Brussels effect, the AI Act promises to position the EU as a global standard setter for other (democratic) regulatory regimes and for tech firms around the world. In contrast, experimentalist governance suggests a more nuanced position, which rather frames the EU as engaged in a more horizontal process of exchanging regulatory practices. It sees the EU AI Act as just one intervention among a wide range of policy responses worldwide that seek to regulate AI and that, ultimately, do share certain overarching ‘framework goals’ (Sabel & Zeitlin, 2008), such as, in the case of AI, concerns about safety and innovation. The AI Act will certainly influence regulatory norms elsewhere around the world. But its impact is bound to remain limited, and EU regulation can be expected to be influenced just as much by policy initiatives from others in return (Newman, 2015). In marked contrast to the belligerent framing of Bradford’s Digital Empires (2023), experimentalist governance offers a less confrontational and more collaborative perspective on the way AI regulation interacts across the world.
Debate on the possible ‘Brussels effect’ of the EU AI Act already took off well ahead of its formal adoption. Siegmann and Anderljung (2022) are particularly confident about the impact that the EU’s AI Act will have on AI regulation worldwide. They present an extensive inventory of the de facto and de jure channels through which the AI Act may affect regulatory practices beyond the EU. Engler (2022) adopts a more nuanced position. He certainly expects the AI Act to have a global impact on tech firms and other jurisdictions, but he also expects great variation in the ability of, particularly, companies to circumvent the act’s requirements, to differentiate their EU operations from the rest of the world, and to successfully push back against an all too strict implementation of the act. Pagallo (forthcoming) adopts an even more sceptical position. He expresses serious doubts about the global impact of the AI Act, pointing to the many ambiguities in the act and to the likelihood that the technological development of AI may quickly outpace the regulation. Gikay (2024) goes even further in suggesting that the AI Act risks turning the EU into a laggard rather than a frontrunner in AI, as he sees it as a case of over-regulation that will stifle technological innovation and drive it away from the continent. Almada and Radu (2024) present an altogether different argument. They claim that the AI Act’s objective to protect fundamental rights is compromised by the Act relying on the much more limited template of product safety legislation. They do not exclude the possibility that the AI Act will be successful in becoming a global standard. However, to the extent that it does, it would propagate an approach that fails to meet the professed objective of protecting fundamental rights and may even facilitate their erosion.
These assessments were all written before the AI Act officially entered into force. Less than a year after that moment, and with many of its implementation instruments still under development, it remains too early to conclude whether the Act is setting the global standard. Still, first impressions are less encouraging than EU policy makers may have hoped for. One hopeful sign for the global coordination of AI regulation was the ‘Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet’ (Elysée, 2025) that was adopted at the end of the AI Action Summit in Paris in February 2025. Although this statement echoes many of the concerns of the EU AI Act, it does not explicitly refer to it as a major source of inspiration. What is more, while the statement was supported by all EU member states and China, the US and the UK declined to sign it.
In general, changes in the global political context have not helped the global reach of the EU’s AI Act. Most importantly, the re-election of Donald Trump has led the US government to identify closely with the global interests of US tech firms. Pushback against the EU’s attempts to implement its digital regulations and to impose sanctions for non-compliance features prominently in its communication. This pressure, in combination with similar signals from European companies and politicians, has led the European Commission to consider pausing the implementation of the AI Act (Bertuzzi, 2025). At the same time, China’s interest in regulatory collaboration seems to have waned after the successful launch of DeepSeek, an exceptionally effective Chinese AI model, which has significantly increased Chinese confidence in its AI capabilities (Singer & Sheehan, 2025).
Rather than establishing the impact of the AI Act on other jurisdictions empirically, this article offers an ex-ante evaluation of its likely impact and discusses the likelihood that this will deviate from the Brussels effect of the GDPR by zooming in on two aspects. The first concerns the nature of the object of regulation, that is, AI technology. Here the key question is: are the nature of AI technology, the concerns it raises, and the way its market is likely to be structured similar to those of the digital technologies (internet platforms in particular) for which the Brussels effect has been claimed? This question, which is addressed in the next section, focuses on the preconditions for the de facto Brussels effect, specifically the public concerns involved and the structure of the sector to be regulated.1 The second aspect (Section 4) concerns the contents of the AI Act. Here the key question is whether the substance of the AI Act justifies the expectation that it will come to set the global standard. If the EU AI Act is indeed to set the global standard for AI regulation, then one would expect its contents to be as clear and coherent as possible. In contrast, from the perspective of experimentalist governance, one rather expects many of its provisions to remain open-ended and ambiguous, and even to remain open to input from experiences elsewhere.
How AI is not just another digital technology
There are three fundamental respects in which the nature of AI differs from the previous celebrated cases of the Brussels effect in the digital domain, such as the GDPR as well as the EU’s regulation of online hate speech (first through the voluntary Code of Conduct on Countering Illegal Hate Speech Online (2016) and since 2022 through the Digital Services Act (Bradford, 2019, Chapter 5)). These differences affect the kind of position that the EU can claim in the global regulatory domain. The first, and most fundamental, is that AI raises a wide range of fundamental uncertainties rather than a limited range of well-specified risks. The second difference, which follows from the first, is that government regulation of AI is not necessarily just a burden for companies; for them, regulation also becomes a critical instrument to instil trust in their products. Thirdly, equating AI regulation with previous EU regulation of the digital domain is too quick in assuming that the market of AI models and systems is bound to be concentrated around a few major tech giants, as is the case for most of the internet economy. These three differences make it less likely that the EU AI Act will give rise to the Brussels effect and suggest that experimentalist governance may offer the more appropriate perspective on the EU’s place in the global regulation of AI.
Looking at the kind of risks to be regulated against, one should note that the celebrated cases of the Brussels effect in the digital domain addressed public concerns like privacy, defamation, and disinformation that had clear precedents in the pre-digital world. Hence, they could build on well-established policies on, for instance, privacy rights and information law. To be sure, many of the risks associated with AI can also be seen as extensions of existing concerns that have already been addressed for established digital technologies. Still, AI often leads to a significant exacerbation of the concerns (Weidinger et al., 2022; Floridi, 2024). That typically applies to privacy, where the established concerns about data protection morph into a much broader concern about mass surveillance and profiling by state actors. Similarly, concerns about disinformation are amplified by AI technologies that can systematically mobilize fake texts, images, voices and videos. Such concerns become even more prominent because of the possibilities that AI invites for malicious use and the tendencies of many large language models towards bias and discrimination.
Moreover, AI also raises distinctive concerns that are more fundamental in character and have a broader societal impact (Bengio et al., 2024; Weidinger et al., 2022). One of these is the use of AI for military purposes, specifically the execution of military tasks by autonomous weapons (e.g., Maas et al., 2023). A second, very prominent concern is that AI may have major and disruptive impacts on the job market and on the capacity of (well-educated) workers to earn their own living (Bengio et al., 2024, p. 54). Furthermore, more than established digital technologies, AI raises environmental concerns in terms of energy and (cooling) water consumption (Bengio et al., 2024, pp. 59–60; Weidinger et al., 2022, pp. 220–1). Last but not least, there are concerns about the so-called existential risk that AI systems might take on a will and a purpose of their own at the cost of humanity.
Overall, the crucial aspect that distinguishes AI from preceding digital technologies is the fundamental uncertainty and risk that accompany it. As a consequence, AI is not regulated merely against specific risks; it calls for more integral regulation. Such integral regulation requires a broader vision of the stakes involved in AI, both in terms of opportunities for innovation, societal uptake, and economic gain and in terms of a general recognition of its potential risks. The contours of such a vision are clearly apparent in the way the purpose of the EU AI Act (Art. 1.1) is defined: to “promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights [….] and to support innovation”.
From this fundamental difference between AI and previous digital technologies in terms of uncertainty and risks follows a second difference, which lies in the way that companies will evaluate regulation of the technology. As companies are insufficiently able to guarantee the necessary trust by themselves, government regulation plays a key role in directly and indirectly vouching for the trustworthiness of AI applications and in creating an ecology in which consumers can feel secure that the applications that are developed can be trusted (Smuha, 2021). Partly this is a question of product regulation or labelling; partly it also involves building up a liability system that makes clear where responsibility is assigned if the use of applications leads to harm or unintended costs. It is exactly on this point that Bradford claims that it may be “possible for the EU to capture a commercial advantage if consumers prefer AI applications that adhere to high regulatory standards and are hence easier to trust” (Bradford, 2023, p. 138; referring to European Commission, 2021, p. 2).
The recognition that an active market in AI systems and applications relies on the guaranteed safety of the products challenges the stark opposition that Bradford (2023) draws between the market-driven model promoted by the US and the rights-based model of the EU. Timo Seidl (2024, p. 2028) has already pointed out that neither of the two jurisdictions consistently lives up to Bradford’s characterisations and, particularly, that the EU often intersperses its rights-based rhetoric with market-oriented positions. More fundamentally, however, one can question whether a market of AI applications and models is viable in the absence of active state interference. Besides the essential role of the state in vouching for the safety of AI applications, there is also the fact that states are in fact the main customers for many large-scale applications.
Ultimately, in regulating AI, all jurisdictions need to reconcile, on the one hand, facilitating (market-driven) innovation and, on the other, preventing risks for society and individuals and the values and rights they embody. Different regulatory regimes may prioritise different societal values and strike the balance in different ways. Still, worldwide agreements like the Paris ‘Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet’ (Elysée, 2025) and international scientific reports (e.g., Bengio et al., 2024) suggest that there is a rather broad consensus on the key societal concerns involved. Rather than indicating an inherent incompatibility between the different regulatory models (Bradford, 2023), these shared orientations emerge as typical ‘framework goals’ as experimentalist governance conceives of them (Sabel & Zeitlin, 2008).
A third issue to consider is that the Brussels effect depends on the structure of the market and the number of suppliers involved. Particularly, much of the de facto Brussels effect hinges on digital technology operating through networks or platforms where it is very hard and inefficient to separate the supply to the EU market from that to the rest of the world. The operation of this effect is further facilitated by the fact that much of it hinges on a few big, globally operating companies. In general, the internet economy has come to be dominated by a small group of companies (Moore & Tambini, 2018), since these have been able to exploit the network benefits of their historical prominence over potential contenders (Culpepper & Thelen, 2020). Such network effects are particularly strong for internet platforms as these do not merely offer a service but also provide an infrastructure on which consumers and other businesses come to rely (Rahman, 2018). Even if the GDPR has a very broad range of application, the compliance of major platform companies (in particular, Meta, Amazon, TikTok, LinkedIn, Uber, and Google) has been critical to its implementation (dataprivacymanager.net, 2025). The more recent EU regulations of the Digital Services Act and the Digital Markets Act display an even more direct focus on a relatively small group of big companies that operate the Very Large Online Platforms (VLOPs). Once such major companies and platforms are forced to adjust their policies in the EU, such adjustments are likely to be applied worldwide, either because it is more efficient for the firms involved or because other jurisdictions adopt policies towards them similar to the EU’s.
Much remains uncertain about exactly how the market for AI models and applications will develop. However, it is unlikely to adopt an oligopolistic structure similar to that of established digital platforms (Kowalski et al., 2024). Scale and speed in capturing the market certainly offer advantages. This is reflected in the way the initial frontrunning AI enterprises (like OpenAI, DeepMind, Anthropic or Mistral) are closely associated with established Big Tech conglomerates, especially because the latter can offer access to extensive reservoirs of (training) data. Still, even the importance of scale is challenged by the emergence of more efficient, smaller-scale models like DeepSeek (Gibney, 2025).
More fundamentally, AI applications do not have the network and infrastructural effects that internet platforms have. They rather operate as plug-ins that allow users to substitute one model for another without significant side effects. This has important consequences for the kinds of business models that are to make these systems profitable (Nathan, 2024). Revenues are likely to remain dependent on specific applications, which involves intermediaries (like software developers or car makers) that tailor the AI systems to their specific user needs. Compared to internet platforms, the market of AI applications is thus likely to remain much more fragmented, especially to the extent that these applications rely on individual purchases rather than networked structures.
This market structure has crucial consequences for regulation. Above all, it means that AI applications are far from non-divisible, which is a crucial condition of the de facto Brussels effect (Bradford, 2019, p. 53). Providers can easily differentiate their applications for different markets, certainly where AI is embedded in specific devices like cars or medical technology. As Engler (2022) notes, “many commercial [smartphone] apps are already designed and provided specifically for different countries” (n.p.). Alternatively, major AI developers (e.g., Palantir) may rely on public clients rather than consumer markets, for instance for military applications, environmental and health monitoring, crowd surveillance, and crime detection (Van Noordt & Misuraca, 2022; Hickok, 2024). In those cases, public authorities can gain considerable leverage over the direction that AI systems take.2 In both kinds of markets (consumer markets and public clients), legislators are likely to coordinate their regulations and to learn from each other, as experimentalist governance would suggest, but there is little reason to expect one worldwide standard to emerge.
In short, the kind of regulatory challenges that AI raises are different from those raised by preceding digital technologies on which the Brussels effect has been modelled. A major aspect is the fundamental uncertainty and inestimable risks associated with AI. This condition means that, more than in preceding digital technologies, regulation plays an essential role in constituting the AI economy and that firms take an active interest in regulation rather than seeing it as a constraint only. More generally, it means that market-based and rights-based considerations cannot simply be opposed to each other in the regulation of AI, but that legislators have to determine how these considerations are best reconciled. At the same time, AI lends itself to a wide range of applications and these applications are unlikely to have the network effects that have so much shaped the business structure of the big internet firms. While these conditions suggest that jurisdictions have considerable freedom to adopt their own approach to AI and can see and compare how they will play out, the particular uncertainties that AI raises also invite a more integral approach to the regulation of this kind of technology. Since the EU AI Act promises to mark a major step in this regard, it is appropriate to take a closer look at its contents.
Deciphering the EU AI Act
The very fact that the EU decided to move early to establish an integral regulatory framework for AI – before the full potential of the technology has unfolded and ahead of other jurisdictions – has limited its ability to set the ‘gold standard’ in AI regulation (Gstrein, 2022). The overarching concern of the EU AI Act is to ensure that the AI systems and applications on the EU internal market will not violate the health, safety, and fundamental rights of European citizens (cf. Art. 1.1).3 However, in the absence of much experience to build upon, the AI Act adopts a markedly procedural strategy to ensure these protections. How this procedural framework will play out very much depends on the administrative instruments that will be developed and the ways these will be applied. What is more, AI developers and deployers retain a considerable degree of freedom in regulating their own practices as long as they carefully document them and justify their choices. Thus, the AI Act promises to be more a laboratory of collective learning than a pathway to heavy-handed administrative interventions.
This is not an easy framework for other jurisdictions to copy, as the Brussels effect would predict. The procedural framework of the AI Act rather suggests that it itself remains open to revision and fine-tuning, and that part of this fine-tuning is likely to be inspired by experiences in other jurisdictions. The most strikingly experimentalist feature of the AI Act is that it essentially offers competing models of regulation, not only in the way it allows for the arbitration between different risk-based categories but also because of the insertion of specific provisions on general-purpose AI models besides the original risk-based structure.
The main aspect on which the AI Act sets out concrete and substantive norms is the specification of the kinds of AI systems that are considered to carry unacceptable risks and are prohibited altogether (Art. 5). A second aspect where the AI Act has direct effect is the establishment of a system of registration, to be maintained by the Commission, that is to keep track of the AI systems on the EU market (Arts. 49, 52 and 71). Thirdly, the AI Act contains some relatively concrete transparency obligations towards consumers and citizens, most notably the ‘bot disclosure’ obligation in Article 50 (Veale & Zuiderveen Borgesius, 2021, p. 106), which requires providers and deployers of AI systems to ensure that natural persons are properly informed whenever they are interacting with, or exposed to, content that is AI generated. While the absolute prohibitions in particular may well be copied by other jurisdictions, such well-specified rules are the exception in the AI Act.
Initially, the AI Act was to be structured on the basis of the much-heralded ‘risk-based approach’ (Veale & Zuiderveen Borgesius, 2021). The logic behind this approach is that AI systems and applications are assigned to different regulatory regimes based on the level of risk associated with them. Notably, however, the AI Act construes ‘risks’ in a remarkably narrow way (Almada & Petit, 2025, p. 94). Following traditional notions of product regulation, the Act (esp. Art. 9) calls upon providers and deployers to prevent risks from malfunctioning, with some attention to pre-empting potential malign use. In fact, the Act does not so much define the level of risk in relation to specific potential harms but rather relies on the areas in which AI systems are employed as a proxy (Art. 6). Thus, the critical category of ‘High-Risk AI Systems’ is defined on the basis of a list of usage domains that are enumerated in Annex III to the Act and include: biometrics; critical infrastructure; education and vocational training; employment, workers management and access to self-employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes.
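The domain-as-proxy logic of Article 6 and Annex III can be rendered as a minimal sketch. The code below is purely illustrative and mine, not the Act's: the domain labels paraphrase Annex III, and the many further conditions and exemptions (e.g. the derogation in Art. 6(3)) are omitted.

```python
# Illustrative sketch of the Annex III domain-as-proxy logic (Art. 6).
# Labels paraphrase Annex III; real classification involves further
# conditions and exemptions that are deliberately omitted here.

ANNEX_III_DOMAINS = {
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def is_high_risk(usage_domain: str) -> bool:
    """Risk is inferred from the area of use, not from specific harms."""
    return usage_domain in ANNEX_III_DOMAINS

print(is_high_risk("law enforcement"))            # True
print(is_high_risk("video game recommendation"))  # False
```

The sketch makes the paper's point visible: the classifier never inspects the system's actual behaviour or potential harms, only the domain label attached to its use.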
The way the AI Act conceptualises risk thus bypasses much of the debate on AI risks as it is being waged worldwide (Bengio et al., 2024, pp. 59-60; Weidinger et al., 2022). There is little to nothing in the EU AI Act that directly identifies the more systemic risks that have been associated with AI. In notable contrast to Biden’s Executive Order (Biden, 2023, Section 2(c)), the AI Act does not address the potential impact of AI on workers and jobs, and it is also completely silent on the existential threats that are widely highlighted in popular discussions of AI.
The risk-based taxonomy has also shifted considerably in the course of the legislative process. The category of ‘High-Risk AI Systems’ is by far the most prominent, as 44 of the Act’s 113 articles are specifically concerned with it. Only one article in the Act directly addresses the category of AI systems with unacceptable risk, while the initial categories of limited and minimal risk have been more or less collapsed into a residual category, ‘non-high-risk’ (Articles 6.3, 79 and 80). Most notably, however, triggered by the public release of ChatGPT in November 2022, a new category was inserted into the AI Act, namely ‘general-purpose AI models’ (Helberger & Diakopoulos, 2023).
The upshot of this restructuring has been that, instead of the initial risk-based categorisation, most of the AI Act is now organised around the distinction between high-risk AI systems and general-purpose AI models. For high-risk applications, providers and professional deployers of AI systems are to maintain a risk-management system (Art. 9) that is to systematically monitor and prevent any risks to health, safety and fundamental rights that the intended use of the AI system may give rise to. The risk-management system is essentially subject to a process of self-certification whereby the provider assesses the conformity of the risk-management system with the established requirements (Art. 43). Once high-risk AI systems have entered the market, they are, like other products, subject to the controls of national market surveillance authorities (Art. 74); only when there are clear transnational implications does coordination across national boundaries take place (Art. 79). It is up to the market surveillance authorities to determine whether AI systems present a risk and, if they believe this to be the case, to carry out the necessary evaluations and to impose any demands or prohibitions on providers.
In contrast, for the regulation of general-purpose AI models, the powers to supervise and enforce the requirements are entrusted all but exclusively to the AI Office that has been newly established within the European Commission (Arts. 64 and 88). Providers of general-purpose AI models (or, when based in third countries, their authorised representatives) are required to document and publish extensive technical information (Arts. 53 and 54). On top of the general measures for general-purpose AI models, the Act moreover includes special provisions for general-purpose AI models that are classified as being ‘with systemic risk’ given their high impact capabilities (for now, presumed above a cumulative training compute of 10²⁵ floating-point operations (FLOPs)) (Ch. V, Art. 51.2 and Section 3; Wachter, 2024, Section II.A). Providers of these general-purpose AI models with systemic risks are to systematically test their models, to adopt appropriate measures to mitigate any risks that arise, and to track any serious incidents that they may cause (Art. 55).
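The compute threshold of Article 51(2) reduces to a simple decision rule, sketched below for illustration only: the Commission can adjust the threshold over time and may also designate models on other grounds, none of which is captured here.

```python
# Illustrative sketch of the Art. 51(2) presumption: a general-purpose AI
# model trained with more than 10^25 floating-point operations is presumed
# to have high impact capabilities and hence systemic risk. The threshold
# is adjustable by the Commission; other designation routes are omitted.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e24))  # False: below the threshold
print(presumed_systemic_risk(3e25))  # True: presumption applies
```

The starkness of this single-number rule underlines how provisional the Act's treatment of general-purpose models is: the threshold is a placeholder for risk assessments that do not yet exist.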
Thus, the AI Act adopts fundamentally different regulatory strategies towards these two kinds of technologies: while the regulation of high-risk AI applications follows a rather distributed, bottom-up logic with most competences exercised at the national level, the strategy towards general-purpose AI models is markedly centralised at the EU level (Almada & Petit, 2025, pp. 101–102). Even if this was not necessarily by design, the co-existence of these two strategies reflects a rather experimentalist orientation since, ultimately, one of the two may come to dominate the other. Effective regulation of general-purpose (or foundational) models can alleviate many of the risks in their (downstream) deployment in specific AI systems. Alternatively, one can also imagine that most of the regulatory controls eventually concentrate at the application level at the end of the value chain and that these retrospectively constrain the choices that are made, and the risks that are allowed, upstream. Regardless of which strategy prevails, this construction is unlikely to be copied wholesale by other jurisdictions and rather suggests that the EU itself is keeping its options open.
When it comes to the development of technological standards, the procedural nature of the AI Act and the different approaches towards different kinds of technologies come together. While, in line with the procedural orientation of the whole Act, the development of actual standards is delegated, there is a notable difference in the way this process is envisaged for the two kinds of AI technologies. For the standards for high-risk AI systems, the Commission is mostly to rely on European standardisation organisations (most notably, the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC)), as it would with other product specifications (Art. 40.2; Wachter, 2024, p. 690). In contrast, as regards the evaluation of general-purpose AI models, the Act envisages the development of ‘codes of practice’ as a kind of collective process under the direction of the AI Office in the European Commission, in which governmental agencies, civil society organisations, academic experts, and AI firms cooperate (Art. 56). This signals the greater uncertainty surrounding these general-purpose technologies as well as the recognition that regulators may find it hard to keep up with the technology unless the firms, and other societal actors, are involved directly. At the same time, in both kinds of standard-developing procedures, the EU standards are likely to be coordinated with developments and views elsewhere in the world. For one, the AI firms and experts involved in developing ‘codes of practice’ for general-purpose AI models are bound to rely on insights from markets outside of the EU. Equally, established EU standardisation bodies like CEN and CENELEC are themselves heavily enmeshed in transatlantic and global networks in which they coordinate the standards they develop (Engler, 2022; Shahin, 2024).
Thus, the way the AI Act proceduralises the development of standards decreases the chances that the EU will operate as a unilateral standard-setter, as suggested by the Brussels effect, and rather positions the EU as engaged in a collective learning process, as envisaged by experimentalist governance.
As a final point, even when it comes to sanctions, the AI Act prioritises a procedural approach over direct rule-setting and enforcement. Certainly, the Act foresees the possibility of imposing substantial fines of up to 15 million euro or 3% of the annual turnover of AI firms that fail to comply (Arts. 99 and 101).⁴ However, such fines only become likely as the conclusion of a long process involving many back-and-forths. For one, as discussed in the previous paragraph, the standards that are to apply still need further specification and codification and, given its advanced expertise and experience, that process is bound to closely involve the AI industry itself. Secondly, even where there is a clear and determinate legal basis to act upon, the AI Act typically foresees an iterative process in which risks are first identified and examined and in which the responsible provider or deployer is then given the right to respond and to adopt mitigating actions before punitive measures are activated. Unless AI systems give rise to major sudden crises, this incremental way of proceeding is likely to prevail under the AI Act.
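For a rough sense of the fine ceiling in Article 99, the following sketch applies the higher of the two caps (the flat amount or the turnover percentage), as is the usual construction for undertakings in EU digital legislation; the helper function and its simplifications are my own, and the separate, higher ceilings for prohibited practices are omitted.

```python
# Illustrative sketch of the Art. 99 fine ceiling: EUR 15 million or 3% of
# total worldwide annual turnover, whichever is higher (for undertakings).
# The distinct, higher caps for prohibited practices are not modelled.

def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(15_000_000, 0.03 * annual_turnover_eur)

print(max_fine_eur(100_000_000))    # flat cap applies: 15 million
print(max_fine_eur(2_000_000_000))  # 3% of turnover applies: 60 million
```

The arithmetic shows why the percentage cap matters mainly for the largest providers: only above 500 million euro of turnover does the 3% figure exceed the flat amount.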
Overall, the AI Act adopts a thoroughly procedural approach that leaves many standards to be determined and many options open. Thus, the Act itself already displays a rather experimentalist orientation in which provisions may eventually evolve in different directions. It may even be the case that some provisions (for instance on general-purpose models) will come to yield an expansive regulatory system while others remain a dead letter. Although other jurisdictions may well come to use parts of the AI Act (like the provisions on AI systems with unacceptable risk), an Act that does not itself point in a clear direction is unlikely to become the gold standard for others, as the Brussels effect would have it.
Conclusion
Bradford’s theory of the Brussels effect has helped to boost the confidence of EU regulators in regulating the digital domain and has stimulated systematic and strategic thinking about the impact that EU rules can have on regulators and businesses elsewhere in the world. Yet, the upshot of the double analysis above (in Sections 3 and 4 respectively) is that it is far from certain that the EU’s AI Act will have a Brussels effect comparable to, most notably, the GDPR. As an alternative theory of the external impact of EU digital regulation, I have put forward experimentalist governance. From the perspective of experimentalist governance, the EU emerges less as a global standard-setter and more as a regulator that is itself still involved in a learning process and that offers its own interventions as an example to regulators elsewhere in the world.
This article offers some preliminary evidence to suggest that experimentalist governance may in fact be the more appropriate theory through which to interpret the external impact of the AI Act. First of all, in Section 3, I argued that, more than with established digital technologies, uncertainty about the potential harms of AI remains ubiquitous, and that this actually creates a demand for regulation, not least by AI firms themselves, in all jurisdictions. What is more, the structure of the market for AI also remains deeply uncertain, and it is by no means foretold that it will display the same network effects that characterise much of the internet economy. Instead, many AI applications may well proliferate in a much more fragmented and dispersed market, while at the same time some of the more expansive and prominent applications will be directly contracted by public authorities, which can directly impose the desired specifications. In turn, the analysis in Section 4 highlighted how the AI Act itself displays a rather incremental and open-ended approach. The Act is built upon a shifting understanding of what needs to be regulated and how it is best conceived, and it remains very much procedural in character rather than defining substantive and concrete standards. Most notably, the AI Act proposes two regulatory strategies that adhere to rather distinct logics: a decentralised strategy for high-risk AI systems and a centralised EU-level strategy for general-purpose models.
On the whole, then, both the structure of the AI domain and the nature of the AI Act suggest that the external impact of the AI Act is more in line with experimentalist governance than with the theory of the Brussels effect. Clearly, the EU has been ahead of the game in getting its AI regulation codified in a (quasi-)comprehensive Act. However, given the high level of uncertainty surrounding AI, the AI Act inevitably remains very open about the exact risks that are to be monitored and about the kind of tests that are appropriate for that. Thus, the AI Act emerges very much as an adaptive instrument, and the experimental nature of the regulation has been internalised into the very operation and potential evolution of the Act.
Some EU policy makers may find it difficult to relinquish the ambition to become the lodestar of international AI regulation, as implied by the notion of the Brussels effect. However, abandoning the Brussels effect also means abandoning the combative metaphor of ‘digital empires’ and the inherent rivalries between them. In contrast, experimentalist governance starts from the premise that governments across the world share an interest in actively engaging with, and learning from, each other in developing AI regulation. Although their interests are likely to diverge at points, shared standards are more likely to emerge from conversation than from competition. Importantly, this conversation is not restricted to the three digital ‘empires’ (the US, China, and the EU) but will take place in multilateral fora (such as the AI Safety Summit) and will also include smaller countries and countries from the Global South (Farhad, 2025). While it is unlikely that AI regulation will soon be integrated into a global regime, we are bound to see growing and significant coordination across regulatory regimes. The EU may well be able to claim its own niche in this global game (for instance with respect to consumer protection), but its regulatory efforts are likely to remain in conversation (rather than competition or domination) with those of other regulatory powers across the world.
Acknowledgements
An earlier version of this paper was presented at the workshop ‘The Artificial Administrative State: Democracy in the Age of AI’, European University Institute 6-7 June 2024, and at the 12th Biennial Conference of the ECPR Standing Group on the EU, Universidade NOVA, Lisbon, 19-21 June 2024. Particular thanks to Madalina Busuioc, Deirdre Curtin and Alvaro Oleart, and to the journal reviewers Ali Mert Gurkan, Maria Lorena Flórez Rojas, Ricard Espelt, and Frédéric Dubois.
References
Almada, M., & Petit, N. (2025). The EU AI Act: Between the rock of product safety and the hard place of fundamental rights. Common Market Law Review, 62(1), 85–120.
Almada, M., & Radu, A. (2024). The Brussels side-effect: How the AI Act can reduce the global reach of EU Policy. German Law Journal, 25(4), 646–663. https://doi.org/10.1017/glj.2023.108
Bach, D., & Newman, A. L. (2007). The European regulatory state and global public policy: Micro-institutions, macro-influence. Journal of European Public Policy, 14(6), 827–846. https://doi.org/10.1080/13501760701497659
Bengio, Y., et al. (2024). International scientific report on the safety of advanced AI: Interim report (No. DSIT 2024/009). https://assets.publishing.service.gov.uk/media/66474eab4f29e1d07fadca3d/international_scientific_report_on_the_safety_of_advanced_ai_interim_report.pdf
Bertuzzi, L. (2025, May 26). EU Commission eyes pausing AI Act’s entry into application. MLex. https://www.mlex.com/mlex/articles/2344845/eu-commission-eyes-pausing-ai-act-s-entry-into-application
Biden, J. (2023, October 30). Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. The White House, Washington DC.
Bradford, A. (2020). The Brussels effect: How the European Union rules the world (1st edn). Oxford University Press. https://doi.org/10.1093/oso/9780190088583.001.0001
Bradford, A. (2023). Digital empires: The global battle to regulate technology (1st edn). Oxford University Press. https://doi.org/10.1093/oso/9780197649268.001.0001
Busuioc, M. (2022). AI algorithmic oversight: New frontiers in regulation. In Maggetti, M., Di Mascio, F., & Natalini, A. (Eds), Handbook of regulatory authorities (pp. 470–486). Edward Elgar Publishing. https://doi.org/10.4337/9781839108990.00043
Council of the EU. (2024, May 21). Artificial intelligence (AI) act: Council gives final green light to the first worldwide rules on AI [Press release]. https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/#:~:text=It%20is%20the%20first%20of,both%20private%20and%20public%20actors.
Culpepper, P. D., & Thelen, K. (2020). Are we all Amazon Primed? Consumers and the politics of platform power. Comparative Political Studies, 53(2), 288–318. https://doi.org/10.1177/0010414019852687
Data Privacy Manager. (2025, March 3). 20 biggest GDPR fines so far [2025]. https://dataprivacymanager.net/5-biggest-gdpr-fines-so-far-2020/
Elysée. (2025, February 11). Statement on inclusive and sustainable artificial intelligence for people and the planet. Elysée. https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet
Engler, A. (2022, August 6). The EU AI Act will have global impact, but a limited Brussels effect. Brookings. https://www.brookings.edu/research/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/
European Parliament and Council. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
European Commission. (2021). European approach to artificial intelligence (No. COM (2021) 205 final).
Farhad, S. (2025). Passengers in flight: AI governance capacity in the global south. Digital Society, 4(2). https://doi.org/10.1007/s44206-025-00195-6
Floridi, L. (2024). Introduction to the special issues. American Philosophical Quarterly, 61(4), 301–307. https://doi.org/10.5406/21521123.61.4.01
Gadoni Canaan, R. (2023). The effects on local innovation arising from replicating the GDPR into the Brazilian General Data Protection Law. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1686
Gibney, E. (2025). China’s cheap, open AI model DeepSeek thrills scientists. Nature, 638, 13–14.
Gikay, A. A. (2024). Risks, innovation, and adaptability in the UK’s incrementalism versus the European Union’s comprehensive artificial intelligence regulation. International Journal of Law and Information Technology, 32. https://doi.org/10.1093/ijlit/eaae013
Gstrein, O. J. (2022). European AI regulation: Brussels effect versus human dignity? Zeitschrift Für Europarechtliche Studien, 25(4), 755–772. https://doi.org/10.5771/1435-439x-2022-4-755
Gunst, S., & Ville, F. (2021). The Brussels effect: How the GDPR conquered Silicon Valley. European Foreign Affairs Review, 26(3), 437–458.
Helberger, N., & Diakopoulos, N. (2023). ChatGPT and the AI Act. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1682
Hickok, M. (2024). Public procurement of artificial intelligence systems: New risks and future proofing. AI & SOCIETY, 39(3), 1213–1227. https://doi.org/10.1007/s00146-022-01572-2
Kowalski, K., Volpin, C., & Zombori, Z. (2024). Competition in generative AI and virtual worlds. Competition Policy Brief, 3. https://competition-policy.ec.europa.eu/document/download/c86d461f-062e-4dde-a662-15228d6ca385_en
Maas, M., Lucero-Matteucci, K., & Cooke, D. (2023). Military artificial intelligence as a contributor to global catastrophic risk. In S. Beard, M. Rees, C. Richards, & C. Rojas (Eds), The era of global risk: An introduction to existential risk studies (pp. 237–284). Open Book Publishers.
Moore, M., & Tambini, D. (Eds). (2018). Digital dominance: The power of Google, Amazon, Facebook, and Apple. Oxford University Press.
Nathan, A. (2024). Interview with Jim Covello. Top of Mind. Goldman Sachs, Global Macro Research, 129.
Newman, A. (2015). European data privacy regulation on a global stage. In Zeitlin, J. (Ed.), Extending experimentalist governance? The European Union and transnational regulation (pp. 226–246). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198724506.003.0009
Newman, A. L., & Posner, E. (2015). Putting the EU in its place: Policy strategies and the global regulatory context. Journal of European Public Policy, 22(9), 1316–1335. https://doi.org/10.1080/13501763.2015.1046901
Pagallo, U. (Forthcoming). Why the AI Act won’t trigger a Brussels effect. AI Approaches to the Complexity of Legal Systems. https://ssrn.com/abstract=4696148
Rahman, K. S. (2018). The new utilities: Private power, social infrastructure, and the revival of the public utility concept. Cardozo Law Review, 39(5), 1621–1689.
Sabel, C. F., & Zeitlin, J. (2008). Learning from difference: The new architecture of experimentalist governance in the EU. European Law Journal, 14(3), 271–327. https://doi.org/10.1111/j.1468-0386.2008.00415.x
Sabel, C. F., & Zeitlin, J. (2012). Experimentalist governance. In Levi-Faur, David (Ed.), The Oxford handbook of governance (pp. 169–184). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199560530.013.0012
Seidl, T. (2024). Charting the contours of the geo-tech world. Geopolitics, 29(5), 2033–2045. https://doi.org/10.1080/14650045.2024.2333358
Shahin, J. (2024). Dancing to the same tune? EU and US approaches to standards setting in the global digital sector. Journal of European Integration, 46(7), 1111–1131. https://doi.org/10.1080/07036337.2024.2398430
Siegmann, C., & Anderljung, M. (2022). The Brussels effect and artificial intelligence: How EU regulation will impact the global AI market. Centre for the Governance of AI. https://cdn.governance.ai/Brussels_Effect_GovAI.pdf
Singer, S., & Sheehan, M. (2025). China’s AI policy at the crossroads: Balancing development and control in the DeepSeek era. Carnegie Endowment for International Peace. https://carnegie-production-assets.s3.amazonaws.com/static/files/Singer%20Sheehan%20-%20Chinese%20AI%20Policy%20Eras.pdf
Smuha, N. A. (2021). From a ‘race to AI’ to a ‘race to AI regulation’: Regulatory competition for artificial intelligence. Law, Innovation and Technology, 13(1), 57–84. https://doi.org/10.1080/17579961.2021.1898300
Van Noordt, C., & Misuraca, G. (2022). Artificial intelligence for the public sector: Results of landscaping the use of AI in government across the European Union. Government Information Quarterly, 39(3), 101714. https://doi.org/10.1016/j.giq.2022.101714
Veale, M., & Zuiderveen Borgesius, F. (2021). Demystifying the draft EU Artificial Intelligence Act—Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 22(4), 97–112. https://doi.org/10.9785/cri-2021-220402
Vestager, M., & Bradford, A. (2021, May 21). Europe’s digital future. Project Syndicate. https://www.project-syndicate.org/onpoint/eu-regulations-for-the-digital-economy-by-margrethe-vestager-and-anu-bradford-2021-05
Vogel, D. (1995). Trading up: Consumer and environmental regulation in a global economy. Harvard University Press.
Wachter, S. (2024). Limitations and loopholes in the EU AI Act and AI liability directives: What this means for the European Union, the United States, and beyond. Yale Journal of Law & Technology, 26(3), 671–718. https://doi.org/10.2139/ssrn.4924553
Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P.-S., Mellor, J., Glaese, A., Cheng, M., Balle, B., Kasirzadeh, A., Biles, C., Brown, S., Kenton, Z., Hawkins, W., Stepleton, T., Birhane, A., Hendricks, L. A., Rimell, L., Isaac, W., … Gabriel, I. (2022). Taxonomy of risks posed by language models. 2022 ACM Conference on Fairness Accountability and Transparency, 214–229. https://doi.org/10.1145/3531146.3533088
Young, A. R. (2015). The European Union as a global regulator? Context and comparison. Journal of European Public Policy, 22(9), 1233–1252. https://doi.org/10.1080/13501763.2015.1046902
Footnotes
1. Note, however, that the de jure Brussels effect relies to a significant extent on the de facto effect, since if the EU’s leverage over the regulated companies is more limited then it also becomes a less compelling example for other jurisdictions.
2. Note, however, that such leverage is not necessarily benign since non-democratic public authorities may well use AI as a means of surveillance and oppression. Also, democratic countries may be tempted to employ AI technology in faulty or even malicious ways (Busuioc, 2022).
3. The AI Act also explicitly celebrates the potential benefits of AI technology and the innovation that it can spur. However, of the thirteen chapters of the Act, only one (Chapter VI on ‘Measures in Support of Innovation’) is explicitly geared towards promoting innovation in AI.
4. Notably, these maximum fines are just a bit lower than the ones possible under the GDPR, which have a maximum of 20 million euro or 4% of annual turnover.