Artificial intelligence regulation in the United Kingdom: a path to good governance and global leadership?

Huw Roberts, Oxford Internet Institute, University of Oxford, United Kingdom, huw.roberts@oii.ox.ac.uk
Alexander Babuta, The Alan Turing Institute, British Library, United Kingdom
Jessica Morley, Oxford Internet Institute, University of Oxford, United Kingdom
Christopher Thomas, The Alan Turing Institute, British Library, United Kingdom
Mariarosaria Taddeo, Oxford Internet Institute, University of Oxford, United Kingdom
Luciano Floridi, Department of Legal Studies, University of Bologna, Italy

PUBLISHED ON: 26 May 2023 DOI: 10.14763/2023.2.1709

Abstract

On 29 March 2023 the United Kingdom (UK) government published its AI Regulation White Paper, a “proportionate and pro-innovation regulatory framework” for AI designed to support innovation, identify and address risks, and establish the UK as an “AI superpower”. In this article, we assess whether the approach outlined in this policy document is appropriate for meeting the country’s stated ambitions. We argue that the proposed continuation of a sector-led approach, which relies on existing regulators addressing risks that fall within their remits, could support contextually appropriate and novel AI governance initiatives. However, a growing emphasis from the central government on promoting innovation through weakening checks, combined with domestic tensions between Westminster and the UK’s devolved nations, will undermine the effectiveness and ethical permissibility of UK AI governance initiatives. At the same time, the likelihood of the UK’s initiatives proving successful is contingent on relationships with, and decisions from, other jurisdictions, particularly the European Union. If left unaddressed in subsequent policy, these factors risk transforming the UK into a reluctant follower, rather than a global leader, in AI governance. We conclude this paper by outlining a set of recommendations for UK policymakers to mitigate the domestic and international risks associated with the country’s current trajectory.
Citation & publishing information
Received: December 11, 2022 Reviewed: March 2, 2023 Published: May 26, 2023
Licence: Creative Commons Attribution 3.0 Germany
Funding: Huw Roberts’ research was partially supported by a research grant for the AI*SDG project at the University of Oxford’s Saïd Business School.
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: AI governance, United Kingdom, Brussels Effect, Ethics, Artificial intelligence
Citation: Roberts, H. & Babuta, A. & Morley, J. & Thomas, C. & Taddeo, M. & Floridi, L. (2023). Artificial intelligence regulation in the United Kingdom: a path to good governance and global leadership?. Internet Policy Review, 12(2). https://doi.org/10.14763/2023.2.1709

Introduction

Globally, there are now over 800 AI policy initiatives, from the governments of at least 60 countries, most of them introduced after 2016. The United Kingdom (UK) is at the forefront of AI governance efforts, at least quantitatively: it is second only to the United States (US) in the number of national-level AI policies released (OECD.AI, 2021) and ranks top for the number of mentions of AI in legislative documents between 2016 and 2021 (Zhang et al., 2022).1 These figures do not evidence that the UK is producing better outcomes than countries that have published fewer governance documents, but they are indicative of the keen interest the UK is taking in ensuring that AI is governed appropriately. In 2021, the UK published its centrepiece National AI Strategy, which outlined a ten-year plan for AI in the UK, including the lofty ambition of developing the “most trusted and pro-innovation system of [AI] governance in the world”. In its recently published AI Regulation White Paper (2023), the UK government proposed a “pro-innovation approach to AI regulation”, which explains how it plans to fulfil this ambition.

While AI governance initiatives in the UK and other states are still nascent, distinct approaches are beginning to emerge (Radu, 2021; Roberts et al., 2021, 2022). Given this diversity of approaches, it is important to contextualise the UK’s trajectory and to consider its strengths and weaknesses in light of the government’s aspirations. This is the task of this paper. More specifically, this paper will undertake a contextualised analysis of the UK’s proposed approach to AI governance and assess it against two criteria established in recent government policy documents, namely:

  • Whether the UK approach will establish the “most trusted and pro-innovation system of governance” (National AI Strategy, 2021).

  • Whether this approach will facilitate the UK “lead[ing] the international conversation on AI governance” (AI Regulation White Paper, 2023).

While these criteria are framed in nationalistic and competitive terms – something that is common globally in the framing of national AI policy documents (Fuchs, 2022; Ossewaarde & Gulenc, 2020) – reasonable aspirations lie behind this rhetoric. Regarding criterion 1, we will assess whether the UK’s approach promotes the development and use of ethically permissible AI, which preserves fundamental rights and mitigates potential individual and societal harms.2 Regarding criterion 2, we will assess whether the UK’s approach to governing AI will strengthen the country’s international influence, for instance through developing novel and impactful initiatives that are emulated elsewhere. The rationale for this ambition, which can be inferred from relevant policy documents, is to achieve the reputational benefits of leadership, notably an ability to attract AI companies, and to promote international alignment with UK values (Foreign Affairs Committee, 2022; International Tech Strategy, 2023; Pro-innovation review, 2023). In assessing both criteria, we will situate these points within the broader international AI governance and geopolitical landscape, to provide a contextualised understanding of the international dynamics that impact these proposals and of the potential alternative governance options.

We argue that the UK’s sector-led approach – which delegates responsibility for governing AI to existing regulators who focus on applications falling within their regulatory remits – has fostered several contextually appropriate and novel governance initiatives, and that the proposed continuation of this approach, as outlined in the AI Regulation White Paper, may lead to the development of other globally leading initiatives. However, a growing emphasis from the central government on promoting innovation through weakening checks, combined with tensions between Westminster and the UK’s devolved nations, will undermine the effectiveness and ethical permissibility of UK AI governance initiatives. This risk is particularly high for general-purpose AI systems that impact multiple sectors. At the same time, the degree to which UK AI governance initiatives will be effective and globally leading will be heavily influenced by relationships with, and decisions from, other jurisdictions, particularly the European Union (EU). Accordingly, for the UK to fulfil its ambition of producing “trustworthy”, “pro-innovation”, and “world leading” AI governance, a change of direction in UK policy is needed, one that strengthens sectoral regulatory powers, capacities, and coordination, while positioning the UK internationally as an agile and innovative AI regulator.

To make this argument, the remainder of our paper is structured as follows. Section 1 outlines the evolution of UK AI policy following the country’s vote to leave the EU in 2016. Section 2 presents the strengths of the UK’s sector-led approach to AI governance. Section 3 considers the domestic constraints that may undermine the UK’s ambitions, with Section 4 analysing accompanying international constraints. Finally, we conclude the paper by offering policy recommendations to support the UK in achieving its stated ambitions.

The UK approach to AI governance

Following the UK’s 2016 vote to leave the EU (Brexit), the UK government singled out growth through emerging technologies, including AI, as a central priority for the country (Lynskey, 2017; Schlesinger, 2022). AI and big data were jointly identified as one of four “Grand Challenges” where the UK could “lead the world” in the future (Department for Business, Energy & Industrial Strategy, 2017), and nearly £1bn was committed to the research, development, and adoption of AI in the UK (Department for Science, Innovation & Technology, 2019). Concurrently, efforts were made to develop appropriate governance mechanisms for AI. For instance, several government bodies were established to promote effective and ethical governance, including the Office for AI, the Centre for Data Ethics and Innovation (CDEI), and the NHS AI Lab. Outside of government, several other bodies emerged to advise on and scrutinise the use and governance of AI, including the House of Lords Select Committee on AI and the Ada Lovelace Institute.

Early efforts at AI governance: 2018-2021

Against this backdrop, the UK government explicitly outlined its first national-level position on AI governance in 2018, in an official response to a report by the House of Lords Select Committee on AI. The government agreed with the conclusion reached by the Select Committee that,

Blanket AI-specific regulation, at this stage, would be inappropriate. We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed (AI in the UK: Ready, willing and able, 2018).

This official position signalled to different government departments and regulators that responsibility lay with them for governing AI within their jurisdictions.3 From 2018 onwards, several regulators began to develop governance measures to cover their specific jurisdictions. The Information Commissioner’s Office (ICO), the UK’s data protection authority, has been one of the most active in this space, for instance producing a Guide to AI Audits (2019, 2022). Alongside this, in 2020, three of the UK’s key regulators with remits relevant to AI – the Competition and Markets Authority (CMA), the ICO, and the Office of Communications (Ofcom) – established the Digital Regulation Cooperation Forum (DRCF). This body, which the Financial Conduct Authority (FCA) also joined in April 2021, was designed to promote formal collaboration and deeper cooperation among regulators in governing digital technologies (Schlesinger, 2022). Other bodies, such as the UK’s equalities regulator, were slower to act and produced no guidance on AI from 2018 to 2021.

This sector-led approach also implicitly signalled that the UK’s devolved nations – Scotland, Wales, and Northern Ireland – would have a high degree of policy autonomy in certain areas. Since the late 1990s, many areas of policymaking have been formally devolved to the Scottish Parliament, the Welsh Parliament, and the Northern Ireland Assembly (Mitchell, 2013). Devolved powers include some policy areas relevant to AI, such as healthcare and education, while other areas, including data protection, are “reserved” for Westminster.

The devolved nature of some areas of UK policymaking facilitated Scotland publishing its National AI Strategy in 2021, six months before the UK’s. This document outlined a vision for Scotland to “become a leader in trustworthy AI” that will support the country to become “fairer”, “greener”, “more prosperous”, and “outward-looking”. The Strategy considers governance, stressing that actions should be guided by the Organisation for Economic Co-operation and Development’s (OECD) AI ethics principles and UNICEF’s principles for AI and children (Scotland’s Artificial Intelligence Strategy, 2021). However, this discussion is kept high level, with little offered in the way of specific mechanisms for governing these technologies. While the Welsh government was comparatively inactive in AI policymaking from 2018 to 2021, the Digital Strategy for Wales (2021) emphasises the importance of using AI “ethically and with integrity” to steer “data driven innovation”.

Devolution, combined with the cross-cutting nature of AI, which touches both “reserved” powers held by Westminster and “devolved” powers held by the devolved parliaments, helps to rationalise the sector-led approach within the UK. The diverse and uncodified nature of regulatory powers in the devolved nations (Mitchell, 2013) means that the regulatory scope for AI held by each governance entity in the UK is distinct, complex, and likely to evolve (McHarg, 2010). For instance, data protection is a reserved power, so ICO guidance on AI and data protection, based on legislation from Westminster, applies across the UK. This does not, however, always mean there is uniformity in interpretation, implementation, operationalisation, or the wider ‘governance’ of the regulation. In healthcare, for example, while general data protection remains a reserved power, ‘information governance’ is typically devolved, meaning that each of the devolved nations has different data strategies in place and different rules governing, for example, access to data for secondary purposes, including the development of AI. Likewise, in some cases, regulatory responsibility is shared between UK-wide and devolved regulators, as with equalities law in Scotland, where both the Equality and Human Rights Commission and the Scottish Human Rights Commission have remit. Brexit has further complicated this picture: EU law previously provided consistency between the four UK nations in many devolved areas of governance, and some of these powers have now been repatriated to devolved administrations (Spisak & Britto, 2021).4 Accordingly, for the UK to introduce a cross-cutting AI regulation like the EU’s AI Act, it would need to be premised on reserved powers (such as data protection), consented to by the devolved administrations, or forced through by Westminster in breach of the Sewel Convention.5

Given the decentralised approach to AI governance supported in this period, it was difficult to define a coherent “UK” vision for AI beyond an emphasis on policy autonomy for different sectoral regulators and devolved governments. This approach also created potential issues for consistency and effective governance. While some UK regulators took the initiative to address the fragmented digital regulatory environment by creating the DRCF, open questions remained – and still remain – as to whether a decentralised approach can successfully manage the risks of these technologies.

The UK’s National AI Strategy: 2021 onwards

In September 2021, the UK government published its National AI Strategy, a ten-year plan to “maintain” the country’s status as a “global AI superpower” and develop “the most trusted and pro-innovation governance framework in the world”. Regarding governance, the Strategy specifically emphasised that “it is now the time to decide whether our existing [sector-based] approach remains the right one”, due to concerns about regulatory mandates and consistency (National AI Strategy, 2021). However, beyond these high-level statements, the Strategy did little to update the UK’s national position on AI governance. Instead, it suggested that a subsequent White Paper would explicate the national approach.

After the government solicited stakeholder feedback via a consultation policy paper in summer 2022, the AI Regulation White Paper was published in March 2023. It emphasises that the UK should continue on the trajectory outlined in 2018 by focusing on context-specific and, at least initially, non-statutory governance. The rationale for this approach is that it will limit new regulatory burdens that may hinder innovation, while providing sufficient flexibility to deal with new technological advances. The “lighter touch” nature of this policy is consistent with the UK’s broader post-Brexit approach to regulation, which promotes “proportionality” and “non-regulatory approaches” (Department for Business, Energy & Industrial Strategy, 2021), as well as with the government’s strong emphasis on utilising technology to promote innovation.6 However, because of the recognised risk of regulatory inconsistencies, gaps, and overlaps, the White Paper proposes three mechanisms to improve regulatory coordination. First, it defines “the core characteristics of AI”, which are designed to bound what constitutes AI without being overly prescriptive, while providing an understanding that is robust to technological changes. The core characteristics outlined are:

  • Adaptiveness: the logic of AI decision-making can be difficult to determine because it is based on learning rather than instructions expressly programmed with human intent;

  • Autonomy: AI automates complex cognitive tasks, meaning decisions can be made without human intent or ongoing control.

Second, the document establishes a set of cross-sectoral principles that should be tailored and applied by regulators governing AI within their remits. These principles are: (1) safety, security, and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. Like many other AI policy documents, the principles are grounded in the OECD AI Principles (2019), with the guidance offered generally aligning with that developed in other jurisdictions (Floridi & Cowls, 2019; Jobin et al., 2019). However, uncertainty remains over how these principles will be applied in practice, with the government noting that “it is not yet clear how responsibility and liability for demonstrating compliance with the AI regulatory principles will be, or should ideally be, allocated to existing supply chain actors within the AI lifecycle” (AI Regulation White Paper, 2023, p. 55). These principles are also initially being established on a non-statutory basis, meaning regulators are provided with no new enforcement powers. The government anticipates introducing, at a later and unspecified date, a statutory duty on regulators to have due regard to these principles. This delay is due to a fear that introducing statutory requirements too early would harm innovation.

Third, and most notably, the White Paper establishes a set of central government functions to “identify, assess, prioritise and monitor cross-cutting AI risks that may require government intervention” (AI Regulation White Paper, 2023, p. 50). A range of activities are proposed to achieve these goals, including central regulatory guidance on the implementation of the principles, a cross-economy AI risk register to support risk assessments, a horizon-scanning function to identify future risks, a coordination function to clarify regulator responsibilities and promote joined-up guidance where appropriate, an innovation function to support companies in navigating regulatory complexity, and an international function to promote alignment with international initiatives. These functions sound promising on paper, but little detail is provided about the resources available to them or how they will be structured.

On top of establishing these mechanisms to coordinate regulatory efforts, the White Paper explicitly addresses the question of the territorial application of the AI regulatory framework, stressing that it applies to the whole of the UK. It states that reserved powers relating to data protection and equalities law underpin the UK-wide application, and that the introduction of any new statutory requirements will be subject to consultation with the devolved administrations.

The UK’s central emphasis on taking a light-touch, “pro-innovation” strategy breaks from the regulatory approach taken by many other “early mover” governments (Roberts et al., 2023). Some jurisdictions, such as the EU, Canada, and Brazil, have introduced cross-cutting AI-specific regulations that establish new hard law requirements for different types of AI systems. Others, such as China, have introduced application-specific regulation, for instance for recommender systems and generative AI. Given the sector-based approach taken in the UK, it is unlikely that a cross-cutting AI regulation comparable to the EU’s draft AI Act will be introduced. There is a possibility of application-specific regulation being introduced if the new central government functions find gaps in the proposed framework; however, given the time it will take to establish these functions and subsequently introduce legislation, and given government reluctance to establish new statutory requirements, application-specific regulation appears unlikely in the immediate term.

Keeping pace or leading the field

The UK’s sector-led governance approach can be considered a pragmatic option on account of its strong emphasis on (1) context and (2) flexibility. Regarding context, the ethical risks associated with different AI capabilities are highly context-specific, meaning generalised initiatives may not be appropriate. For instance, it is reasonable to expect a high degree of scrutiny over the use of AI systems for high-risk medical decisions, while less scrutiny may be required for other lower-impact areas, such as supply chain logistics. AI is also a general-purpose technology encompassing various subfields (Lipsey et al., 2005). Different AI techniques or applications may require specific regulatory measures, which imprecise guidance or overarching regulation may not adequately capture (Theodorou & Dignum, 2020). Accordingly, a ‘one-size-fits-all’ approach to ethical governance may be neither appropriate nor adequate.

Regarding flexibility, a context-sensitive approach to governance that does not rely on a single piece of primary legislation may grant the UK more agility in managing sector-specific AI risks. Several examples of this context-specific regulatory approach can already be seen, most notably in relation to public sector uses of AI, where departments and regulators have released best practice or guidance for complying with existing regulation in areas such as equalities law, public sector procurement, and algorithmic transparency.7 For instance, the College of Policing’s Authorised Professional Practice for Live Facial Recognition (LFR) sets out non-statutory but binding official national guidance for how police forces in England and Wales should deploy LFR technology to ensure compliance with relevant legal frameworks, including data protection and human rights legislation (College of Policing, 2022). These examples demonstrate the benefits of a sector-based approach that can adapt to new technological developments without requiring new statutes. It should be noted, however, that the quality of protections afforded by guidance documents depends on the strength of the primary legislation on which they are based.

This light-touch flexibility contrasts with the EU’s approach, which some civil society organisations have suggested is too rigid to provide adequate and lasting protections (European Digital Rights et al., 2021). For instance, the EU’s proposed framework is arguably already struggling to address the risks associated with large language models (LLMs), forcing policymakers to revise the draft AI Act (Volpicelli, 2023). Such revisions are still feasible while the regulation is being finalised, but they will not be possible once it is passed into law.

The UK’s decentralised approach has also encouraged a multitude of different government bodies to consider AI governance as it applies to their regulatory remits (i.e., rather than relying on a single or small number of specified AI regulators). Encouraging interventions from a diversity of regulatory authorities allows each body to come up with novel solutions based on their existing governance approaches and expertise, which has, in turn, produced innovative governance initiatives.8 There are numerous examples of the UK undertaking innovative regulatory interventions, ranging from guidance on “medical apps” (Medicines and Healthcare Products Regulatory Agency & Javid, 2021), to governing third-party cookies (Competition and Markets Authority, 2023), and promoting privacy-enhancing technologies (CDEI, 2022b). Here, we deep dive into two types of regulatory intervention that are being pioneered in the UK, which we believe hold particular promise.

First is the UK’s strong emphasis on AI assurance, understood here as processes to ensure that the development or use of an AI system is ethical, legally compliant, or simply that the system is functioning as claimed. Typical assurance techniques for AI include impact assessments and bias audits, with certification schemes to signal compliance seen as an important future goal (CDEI, n.d.). Providing this sort of assurance is designed to improve trust in AI and ultimately support the adoption of these systems (Freeman et al., 2022). The CDEI published an AI Assurance Roadmap in 2021, which outlines a five-year vision for the UK to “have a thriving and effective AI assurance ecosystem” based on “strong, existing professional services firms, alongside innovative start-ups and scale ups, [who] will provide a range of services to build justified trust in AI.” This plan provides a clear direction for developing a market-based approach to assurance that supports regulators in managing and monitoring compliance, while enabling industry to develop innovative assurance measures (Clark & Hadfield, 2019). Subsequent work by the CDEI includes engagement exercises with industry on AI assurance to understand current practices and how to overcome barriers to effective AI assurance across different sectors (CDEI, 2022a). The DRCF published a report on Auditing Algorithms (2022) that complements this vision by considering the role that regulators and third party auditors could play in a future UK AI assurance landscape. This report was followed by a call for input to inform regulatory choices. Although shared rules for this assurance ecosystem are currently lacking, the UK has funded the AI Standards Hub, led by The Alan Turing Institute, which is designed to support UK stakeholders in developing and adopting technical standards, including for assurance (UK Standards for AI, 2022). On top of this, a nascent ecosystem for auditing is emerging in the UK, based on new start-ups and existing professional services firms expanding their offerings into this area.9
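To make the notion of a bias audit concrete, the sketch below shows one narrow check such an audit might include: measuring the demographic parity difference in a model’s decisions across a protected attribute. This is a minimal illustration in Python with hypothetical column names, not the methodology of the CDEI or any UK regulator; real assurance exercises also involve impact assessments, documentation, and process review.

```python
# Minimal, illustrative sketch of one bias-audit metric: demographic parity
# difference. Column names ("approved", "sex") are hypothetical; a real audit
# would apply many metrics alongside documentation and process review.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  outcome: str,
                                  protected: str) -> float:
    """Largest gap in positive-outcome rates between protected groups."""
    rates = df.groupby(protected)[outcome].mean()  # per-group approval rate
    return float(rates.max() - rates.min())

# Toy decision log for a hypothetical credit-approval system.
decisions = pd.DataFrame({
    "sex":      ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [1,    0,   0,   1,   1,   1,   0,   1],
})

gap = demographic_parity_difference(decisions, "approved", "sex")
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.50 = 0.25
```

Whether a given gap is acceptable is a contextual judgement for the regulator or auditor concerned, which is precisely why certification schemes and shared standards matter for this ecosystem.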

The UK’s approach to developing an AI assurance ecosystem can be characterised by its collaborativeness, particularly with industry. There are risks to this approach, with the prospect of using audits for “ethics washing” particularly concerning if effective standards and certification schemes are not developed (Floridi, 2019; Goodman & Trehu, 2022). Nonetheless, the comparative benefits of the UK’s approach should also be recognised. Notably, while the EU has begun to develop comparable audit requirements as part of the AI Act (i.e., conformity assessments and post-market monitoring), there is still significant ambiguity surrounding hard and soft law aspects of the regulation (e.g., about which types of system will require a third-party audit, or how adherence to voluntary codes of conduct could be assessed), and the top-down approach has left considerations of how to actually stimulate an audit ecosystem secondary (Mökander et al., 2022). Accordingly, there is scope for the UK to pioneer a third-party assurance ecosystem that leverages market mechanisms to support regulators in achieving ethical and innovation-friendly outcomes.

These efforts could also support the country’s ambition to exert global influence in technical standards-making bodies, like the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). This could include leading the development of standards guiding auditing practices, which could in turn be adopted in other jurisdictions. For example, ISO/IEC standards – or aspects of these standards – influenced by the UK’s approach to AI assurance could be adopted to form the European Standards being developed by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) to support the implementation of the EU AI Act (European Commission, 2022). Additionally, the UK’s AI assurance approach could influence assessment criteria for best practice in the voluntary codes of conduct for low or minimal risk AI systems, where guidance is currently lacking at the EU level.

The second notable area where the UK has pioneered regulatory innovation is the use of regulatory sandboxes, understood here as controlled environments where organisations can trial innovations under the oversight of a regulator, often using real data. While regulatory sandboxes were not developed specifically for AI technologies – they were launched in 2015 by the UK’s FCA to support financial technology companies (Cornelli et al., 2020) – they have since been applied to AI. The ICO has been particularly active in using regulatory sandboxes for data-driven applications, with any organisation under the remit of UK data protection law able to apply to test its product and receive expert advice on legal compliance from the ICO’s sandbox team. Alongside this regulator-led initiative, the LawTech sandbox is a government-backed, private sector initiative designed to incubate innovative uses of AI in the legal sector across the UK, while ensuring they comply with regulation. Finally, the AI Regulation White Paper commits to establishing a multi-regulator sandbox specifically focused on AI.

These sandbox initiatives encourage innovation as well as active regulatory oversight and direct feedback from regulators, which are beneficial for the fast-moving environment of AI. The success of the sandbox model can be seen in both the positive feedback from those participating in the scheme (Truby et al., 2022) and the widespread take-up of regulatory sandboxing across the globe (The World Bank, 2020). This includes the EU, where regulatory sandboxing has been promoted in the draft AI Act (Ranchordas, 2021; First Regulatory Sandbox, 2022). Continued leadership in regulatory sandboxing would support the ethical, or at least legal, use of AI, while demonstrating UK competence in AI governance.

The UK’s sector-led approach holds much promise. It promotes flexibility and governance through cooperation, which theoretically allows it to keep pace with AI innovations in a relatively resource-efficient way. The above examples are indicative of the type of regulatory innovation that this approach could facilitate, with new governance initiatives likely to emerge as different regulators’ and devolved governments’ interest in this area matures. As such, while the actual outcome of UK AI governance is uncertain, there is reason to believe that the UK’s strategy can deliver contextually appropriate and novel AI governance.

Domestic political challenges

When read in isolation, the approach proposed in the AI Regulation White Paper seems reasonable. However, it may face considerable domestic and international challenges going forward. One notable domestic challenge relates to regulatory capacity and coordination. This is something the government has recognised: a report commissioned by the UK government from the Alan Turing Institute concluded that “significant readiness gaps” were present and that shared sources of AI expertise were necessary (Aitken et al., 2022). Given the recognised shortage of AI competency within the UK (Department for Digital, Culture, Media & Sport, 2021a) and generally uncompetitive government salaries compared to industry, it is unsurprising that a decentralised approach may leave government departments and regulators struggling to attract sufficient talent. The UK’s commitment to establishing central functions to support individual regulators could help to relieve this issue, yet the effectiveness of these functions will be contingent on resourcing and structure. Currently, no clarity has been provided regarding either factor.

Deregulation

When considered in light of the broader deregulatory policy context, there is reason to be sceptical about the future resourcing and powers of these central functions and individual regulators. Following Brexit, the UK introduced deregulatory initiatives in areas ranging from human rights to financial regulation. This general policy shift, which has accelerated since the Covid-19 pandemic, has led to a weakening of the powers and independence of a number of regulatory and oversight bodies. We offer three cases of this deregulatory trend impacting UK AI governance and consider how this trajectory undermines the country’s aspirations for ethical and pro-innovation outcomes.

First, the UK government has sought to weaken the country’s data protection regime by replacing the General Data Protection Regulation (GDPR) and the corresponding Data Protection Act (2018), which remained in place following Brexit. The draft Data Protection and Digital Information Bill, which would replace this legislation, contains several worrying elements. One particularly damaging proposal is the erosion of the independence of the UK’s data protection authority, the ICO. For instance, the bill introduces government approval requirements for ICO codes of practice and statutory guidance (Data Protection Bill, 2022, 124D), which are the ICO’s main policy tools for governing AI. More subtly, the bill introduces a growth and innovation duty for the ICO, alongside its data protection duty (Data Protection Bill, 2022, 120D). When read in tandem, these proposals weaken the ICO’s rights-based focus and provide the government with powers to intervene if AI policy initiatives are not sufficiently “pro-innovation”. This erosion of regulatory independence heightens the potential for politicised interventions, undermining the ICO’s ability to act independently and protect consumer data effectively.

Regarding the process for revising UK data protection law, most respondents to the government consultation disagreed with both of the above changes, with the government choosing to pursue them regardless (Department for Digital, Culture, Media & Sport, 2022). On top of this, 30 civil society groups accused the government of acting unlawfully in the consultation by failing to consult with groups disproportionately affected by the law changes (Waterfield, 2022). These examples signal that aspects of the central government’s “pro-innovation” approach to data protection have been democratically problematic and arguably unpopular.

Second, the purpose of the UK’s CDEI has shifted in line with the central government’s emphasis on innovation through weakening checks. Initially, the CDEI was established with a “mandate to advise government” and published reports with recommendations that “the government is then bound to consider and respond publicly to” (CDEI, 2019). The CDEI was also to receive independent statutory footing to provide it with the necessary distance from the government to generate meaningful recommendations. Since 2021, official language surrounding the CDEI has shifted towards a new focus on “working with organisations across the UK to develop, test and refine approaches to trustworthy data use and AI governance” (Burch, 2021). Alongside this, mention of statutory independence is now absent from official CDEI documentation.

This change of direction weakens the independent accountability and oversight functions that were initially envisaged for the CDEI. Such weakening is particularly damaging for transparency, as a requirement to officially respond to CDEI recommendations would have forced the government to publicly rationalise the direction and particulars of its approach to AI policy. While the CDEI’s new advisory role may have some benefits, particularly regarding the number of government projects it can impact, the lack of formal powers the body possesses undermines its potential effectiveness. Specifically, the CDEI’s partnerships are based on a demand-side model, meaning the CDEI is disincentivised from pushing recommendations that would be unpopular with government partners, even when they are necessary for ethical outcomes. Even if the CDEI did put forward ethically sound recommendations, the voluntary and opaque partnership model means that they can be ignored by government bodies when they are deemed inconvenient.

Third, proposed efforts to strengthen regulatory oversight through introducing new bodies have faced difficulties. The Digital Markets Unit (DMU) was established in 2021 to deal with the novel challenges of anti-competitive advantages held by companies with a “strategic market status” in digital markets (i.e., Big Tech). However, it is yet to be provided with the legal powers to conduct the envisaged interventions, such as outlining legally enforceable competition conduct requirements for companies that hold a strategic market status, and new powers to make pro-competition interventions relating to interoperability and data access (Cardell, 2022). The intention to introduce such legislation was recently announced, yet there was widespread speculation beforehand that this would not take place (Rutter Pooley, 2022). Several senior politicians have spoken out against regulatory intervention, including by the DMU (Grylls et al., 2022), while few senior figures in the government have voiced support for effective digital regulation to counterbalance this deregulatory emphasis (Dickson et al., 2022). This process can be compared with the EU’s Digital Markets Act, which serves a similar purpose but came into force in 2022. The case of the DMU is indicative of the difficulties regulatory bodies may face if their powers need strengthening or their remits need expanding to manage the risks of AI effectively. Given the novel challenges raised by AI, and given that regulators are taking on extra responsibilities previously held by EU institutions (Spisak & Britto, 2021), there is a real possibility that such strengthening will be needed, and hence that these difficulties will arise.

The UK’s deregulatory approach is misguided for ensuring both ethical outcomes and innovation. Considering the former, weak protections and oversight will allow harms to materialise that stronger checks could have avoided. For instance, the emphasis on developing co-regulatory initiatives with the private sector will fail to prove ethically fruitful if regulators are not sufficiently resourced or empowered to ensure that regulatory capture does not take place (Clark & Hadfield, 2019). While some scholars have expressed scepticism about the effectiveness of government interventions for ensuring ethical outcomes (Chomanski, 2021), the alternative of industry self-regulation has been demonstrably flawed. Specifically, there have been several examples of large technology companies firing teams designed to provide ethical oversight when their work does not align with core business interests (Simonite, 2021; Schiffer & Newton, 2023).

In terms of innovation, the UK’s approach to governance rests on the false assumption that deregulation engenders innovation (Floridi & Taddeo, 2016). Weak checks will lead to more scandals that undermine public trust in AI, which is already low in the UK compared to other “important AI markets” (Drake et al., 2021). In turn, organisations could face legal challenges based on existing frameworks, such as data protection and equality law, when the public or civil society organisations are unhappy with the use of a system (Babuta et al., 2018). Given the UK’s post-Brexit “bonfire” of retained EU law, under which around 4,000 pieces of EU-derived law could be repealed or amended as early as the end of 2023 (McDonald, 2023), there will be significant uncertainty for the public and private sectors over whether their systems are legally compliant. It is therefore reasonable to expect hesitancy from organisations developing and deploying AI systems, particularly in higher-risk contexts, negatively impacting efforts to innovate.

General-purpose AI

The UK’s decision to simultaneously take a predominantly sector-led and deregulatory approach is particularly risky at a time of rapid advances in general-purpose AI (Whittlestone, 2022). General-purpose AI systems are designed to have a wide range of possible applications and can thus be applied across different contexts. For example, an image recognition system may be used both for identifying potholes in a road and for detecting signs of skin cancer in a medical context (Future of Life Institute, 2022). In particular, large language models (LLMs), such as OpenAI’s ChatGPT, have recently seen increasing commercialisation and integration into products across sectors.

The proliferation of LLMs and other general-purpose AI systems creates a unique set of upstream and downstream challenges for regulators (Küspert et al., 2023). For instance, the resources required to develop LLMs, combined with their cross-sector applicability, could further drive economic concentration around a handful of AI companies. Similarly, these systems can facilitate a proliferation of homogeneous, low-quality, or false information (Huang & Siddarth, 2023), while also posing threats to cybersecurity through the production of malicious content (Helberger & Diakopoulos, 2023). These risks may already fall within existing regulators’ remits; however, it is unclear whether existing powers are sufficient to address the breadth and complexity of the risks at hand. Moreover, because these systems will impact multiple sectors in different ways, it will be difficult for even a well-resourced central government function to adequately coordinate multiple stakeholders. This risk will be exacerbated where regulators interpret and apply the AI regulatory principles differently, potentially leading to confusion, or to businesses trying to game the system by searching for the regulatory path of least resistance. Given the broader deregulatory direction of the UK, it seems unlikely that a sector-based approach will be sufficient for addressing general-purpose AI.

Devolution

While the AI Regulation White Paper acknowledges coordination challenges between UK regulators, and the potential difficulties that general-purpose AI may pose to a sector-based approach, it fails to discuss potential tensions associated with the devolved governance of AI. Two short paragraphs in the White Paper discuss the territorial arrangements of AI governance in the UK, concluding that reserved powers are sufficient for the current proposals and that any new statutory requirements will be subject to consultation with the devolved administrations.

Again, considering the broader context demonstrates why this approach may prove problematic in practice. The UK is arguably moving away from a “devolve and forget” model of devolution towards a more interventionist strategy by Westminster (Clear, 2023). A stark example of this interventionism is Westminster’s recent response to Scotland’s Gender Reform Bill – designed to make it easier for people in Scotland to change their gender – with Section 35 veto powers from the Scotland Act (1998) used for the first time to block the bill. Westminster’s rationale for using this “last resort” veto power was that, although gender reform is a devolved matter, the bill has potential implications for reserved powers, particularly equalities law. Given that AI is a general-purpose technology that impacts multiple reserved and devolved powers, including contentious and ambiguously devolved equalities and human rights issues, there is a risk of misalignment between the approaches taken by Westminster and the devolved administrations. This could materialise either through a devolved nation introducing an AI-specific regulation that Westminster deems to encroach on reserved powers, or through disagreements between the newly established central government AI functions and devolved regulators. The notable difference between the UK central government’s “pro-innovation” emphasis and Scotland’s vision of an “ethical digital nation” (Edinburgh Innovations, 2022) makes such disagreements more than hypothetical. These internal tensions may prove challenging for Westminster’s “pro-innovation” vision for UK AI governance.

International political constraints

The second set of challenges facing the UK’s AI governance aspirations is geopolitical. The UK central government has outlined a strong intention to establish a unique post-Brexit governance regime for AI, but the geopolitical reality constrains its ability to do so. In particular, the EU’s shadow continues to loom large over UK policymaking.

One area where the EU’s continued influence can be seen is in data protection adequacy agreements. For data to flow freely from the EU to the UK, the European Commission needs to be satisfied that the country offers a level of data protection equivalent to that in the EU. Currently, the adequacy of UK policy is recognised on account of the GDPR and Law Enforcement Directive being retained in domestic legislation post-Brexit (Department for Digital, Culture, Media & Sport, 2021b). However, it is unclear whether data protection policy will be considered adequate after the UK undertakes its “pro-innovation” reforms. Indeed, the adequacy agreement reached stipulates a sunset clause for the first time, causing the agreement to expire after four years unless renewed (Kazim et al., 2021). If the UK wishes to maintain this adequacy agreement and the benefits of free-flowing personal data from the EU,10 as it has repeatedly stated it intends to (Department for Digital, Culture, Media & Sport, 2022; Department for Digital, Culture, Media & Sport, 2021b), then its efforts to produce novel AI governance initiatives through data protection reform will likely be constrained.

Even if UK data protection reforms are undertaken in a way that does not meet adequacy standards, leading to recognition being dropped, the AI governance initiatives of the EU and other states will maintain some influence over the UK’s domestic governance efforts.

One of the most notable areas in this respect concerns Northern Ireland. To avoid a hard border between Northern Ireland and the Republic of Ireland, the Northern Ireland Protocol was designed to protect the economic integrity of both the UK and the EU’s Single Market. Specifically, it establishes that Northern Ireland remains part of the UK’s customs union, while also aligning with EU Single Market product regulations to avoid customs checks (Duparc-Portier & Figus, 2022). Aspects of this Protocol were amended by the “Windsor Framework” in February 2023, which revised various details of the agreement in relation to customs, certain product regulations, and governance procedures. Regarding AI, the degree to which EU law will apply to Northern Ireland is unclear. Given that, in many cases, the EU draft AI Act regulates AI as a product (Veale & Borgesius, 2021), provisions relating to physical goods integrating AI could apply in Northern Ireland when the Act comes into force. Precisely whether this is the case depends on whether the AI Act is deemed to revise the product regulations listed within the Northern Ireland Protocol or whether it is interpreted as introducing a new standalone product regulation (European Scrutiny Committee, 2021). If interpreted as the former, then the AI Act will theoretically apply to Northern Ireland, unless the “Stormont Brake” is used to veto the application of EU law in this area.11 However, if this veto is applied – and it is only meant to be used in exceptional circumstances that have a “significant impact specific to everyday life” (The Windsor Framework, 2023) – then there is a risk of political tensions between the EU and UK worsening, undermining efforts at cooperation. If the AI Act is interpreted as introducing a new standalone product regulation that falls within the scope of the Protocol but neither amends nor replaces a Union act listed in the Annexes to the Protocol, then, under Article 13(4), the Withdrawal Agreement Joint Committee will need to discuss whether the AI Act applies to Northern Ireland, if requested by either the EU or the UK (Revised Protocol to the Withdrawal Agreement, 2020). Hypothetically, applying the AI Act to Northern Ireland could lead to AI systems that are legal in England, Scotland, and Wales contravening aspects of the AI Act, making them non-permissible in Northern Ireland (European Scrutiny Committee, 2021).

More generally, it is likely that the EU’s proposed AI Act will exert extraterritorial policy influence on the UK through the so-called “Brussels Effect”, which externalises EU laws to other jurisdictions. Because of the regulatory capacity and market size of the EU, it can set governance standards that companies follow globally due to legal, economic, and technical incentives. In turn, companies pressure other governments to avoid added regulatory burdens on top of the EU’s rules, leading to regulatory alignment between jurisdictions. The GDPR is a case in point: numerous multinational companies chose to adopt its privacy standards globally, and several governments subsequently emulated key provisions (Bradford, 2020).

The degree to which the Brussels Effect impacts the UK will depend on the particular type of AI application. For high-risk AI systems used on internationally connected platforms, it appears likely that EU rules will be adopted globally by organisations, due to the technical difficulty and regulatory burden that would come from following multiple competing requirements. Likewise, for AI systems integrated into products, it is probable that global manufacturers will integrate the EU’s new requirements for AI into their existing conformity assessment procedures. However, the reliance on standards for explicating these rules – a process largely driven by market actors – will provide UK companies with some influence in determining how the rules are explicated. For less burdensome changes, such as transparency requirements for human interaction with AI, it is possible that companies which already localise their services will feel less pressure to change their practices in the UK (Engler, 2022). A Brussels Effect will also be lacking for those systems which pose minimal risk and are only subject to voluntary codes of conduct, leaving the UK with a higher degree of policy autonomy in this area.

There is a prospect of the Brussels Effect for AI governance becoming a “Transatlantic effect”, with the EU and US working more closely on AI and digital governance; see, for example, recent initiatives at the EU-US Trade and Technology Council (TTC), including a joint AI Roadmap that marks an early step in developing joint methodologies and metrics for AI risk management (Bertuzzi, 2022). Consequently, EU and EU-US policies could ultimately be followed by organisations operating in the UK, rendering the UK’s more permissive domestic regulatory regime effectively obsolete for several types of AI.

Finally, the effectiveness of UK AI governance initiatives that are potentially more stringent than those currently proposed in the EU and US will also be impacted by their reception abroad. For example, the UK’s vision for AI assurance is pioneering globally, with government departments and regulators considering how to stimulate an effective third-party assurance market. However, it is unlikely that the UK market alone will be sufficient to sustain such a market: larger markets will need to buy into this vision for it to be worth the investment by private sector actors. It is currently unclear whether the EU or the US will follow a similar approach to the UK. In the EU, there is uncertainty as to what a third-party audit market may look like, with the explication in policy documents surrounding the EU AI Act and the Digital Services Act remaining vague (Mökander et al., 2022). In the US, the 2022 Algorithmic Accountability Act is gaining little traction in Congress, meaning that, at a federal level, the choice to submit to an external audit for AI will be left to companies, at least in the immediate future (Mökander & Floridi, 2022).12 As such, the UK will need to convince other governments and companies of the worth of its vision for an effective market ecosystem for assurance, if it is to materialise in practice.

Conclusions

In recent years, the UK has taken a keen interest in AI governance, with its sector-led approach facilitating contextually appropriate governance, as well as experimentation and innovation from regulators and government departments. Many of these initiatives have been pioneering and demonstrate that the UK has the potential to be a global leader in AI governance. Despite this progress and the promise of many UK initiatives, the country’s efforts going forward look set to be constrained by domestic and international political factors.

Domestically, UK regulators and devolved governments face coordination and capacity challenges. While progress has been made in understanding and addressing coordination challenges between regulators, notably through the commitment to establish several central government AI governance functions, the UK’s broader deregulatory policy shift risks undermining these efforts. The White Paper provides regulators with no new powers or funding to support governance efforts and offers little in terms of concrete next steps or timelines for central government support capacities. Alongside this, numerous laws that may have provided protections are being amended or repealed, particularly as part of a post-Brexit “bonfire” of retained EU law. Given the rapid proliferation of general-purpose AI systems that have a cross-cutting impact across sectors, the robustness of the UK’s light-touch, context-focused approach will be immediately challenged. It seems unlikely that current proposals will be adequate for governing these technologies.

Regarding devolved governments, there has been little acknowledgement from Westminster of the potential coordination problems faced. The ambiguity surrounding how reserved and devolved powers relate to AI creates a real risk of regulatory divergence, which could lead to ineffective protections for citizens on the one hand, and a confusing regulatory environment for companies on the other, undermining a central aim of the UK’s overarching approach. Existing efforts to promote greater regulatory consistency among the UK’s nations, such as through the Internal Markets Act (2020), have led to worsening relations between Westminster and devolved governments (Armstrong, 2022). This indicates that efforts to ensure regulatory alignment among the UK nations may lead to a further deterioration in the relationship between some devolved governments and Westminster.

As a first step to addressing these coordination and capacity challenges, the central government should immediately move to place the AI principles outlined in the White Paper on a statutory footing. This would provide regulators with enhanced powers to address the risks of AI and place an onus on them to treat AI as a regulatory priority. Without this statutory requirement, substantive and effective cooperation between regulators with distinct incentives will prove challenging (Chomanski, 2021). On top of this, concrete details about central government AI coordination functions should be provided, including timelines and funding. Without a well-resourced central function, the UK’s sector-led approach is likely to be too disjointed to effectively address the risks of general-purpose AI systems that can be applied across multiple sectoral contexts. Given the recent proliferation of general-purpose AI systems, establishing these central functions must be a key priority for the government. One possible avenue for quickly establishing central government AI support functions for regulators is to repurpose and expand the CDEI, given that the foundations for many of the proposed functions already exist in this body. Delaying the empowerment of regulators and the establishment of coordinating functions exacerbates the risk of an ineffective response.

While these powers are being legislated for, UK regulators should turn their attention to the risks posed by LLMs and other general-purpose AI. A first step in this respect is clarifying how existing UK law applies to these technologies. The UK’s data protection authority and medical regulator have produced initial guidance (Almond, 2023; Ordish, 2023), but more work is needed from a wider range of bodies. In particular, regulators should reflect on whether they are sufficiently empowered and coordinated to address general-purpose AI systems and, if not, whether additional technology-specific regulation is required. The DRCF should take the lead in coordinating this response. Addressing this question now is central to producing timely protections for citizens, but it could also help the UK further its international aspiration of being a global leader in AI regulation.

Efforts should also be made to ensure coordination and collaboration between the four nations of the UK. Given the decentralised approach taken by the UK and current political tensions, this is likely to be challenging. One avenue that could be explored is establishing an Interministerial Group on Digital Governance, which would provide a space for regular ministerial-level discussion between the UK nations on AI governance. Given the stated importance of AI to UK growth, ensuring consistency between the UK nations should be treated as a high priority rather than left to ad hoc meetings between central and devolved government bodies. While imperfect at capturing the breadth of sectoral initiatives, such a group could be an effective way of identifying where regulatory overlap or inconsistency is emerging between nations and of sharing best practices. It could also mark a wider effort at conciliation and improved dialogue amongst the UK nations.

Internationally, the UK faces challenges both in terms of the influence other jurisdictions will indirectly assert over the country’s domestic AI governance efforts and in terms of potential constraints on the UK’s international influence. The AI Regulation White Paper and the UK’s International Technology Strategy, also published in March 2023, provide a promising starting point for mitigating these risks. Both documents place a strong emphasis on cooperation and on taking a multi-stakeholder approach, which are prerequisites for the UK jointly shaping international rules. However, more work is needed to clarify the UK’s vision and to act collaboratively in practice. Building on its strengths, the UK should promote itself as an agile regulatory environment that can find novel solutions to the gaps left by more rigid approaches. This could involve pioneering best practices for AI regulatory sandboxes or privacy-enhancing technologies, while playing a leading role in standards creation for AI assurance. In parallel, the UK should focus on improving strained ties with the EU, which appears to be a prerequisite for participating in the collaborative initiatives from which it is currently excluded, such as the EU-US Trade and Technology Council (Lanktree, 2023). Being explicit about the UK’s role, and using it to collaborate with and complement the approaches of other governments, is crucial for sharing the leadership role in international AI governance efforts. Ultimately, it is this type of collaboration that will support the UK in achieving its ambitions for AI governance at home and abroad.

References

A pro-innovation approach to AI regulation. (2023). [White Paper]. UK Government. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

AI in the UK: Ready, willing and able? - Government response to the select committee report (pp. 1–42). (2018). [Policy Paper]. UK Government. https://www.gov.uk/government/publications/ai-in-the-uk-ready-willing-and-able-government-response-to-the-select-committee-report

Aitken, M., Leslie, D., Ostmann, F., Pratt, J., Margetts, H., & Dorobantu, C. (2022). Common regulatory capacity for AI (pp. 1–102) [Report]. The Alan Turing Institute. https://doi.org/10.5281/ZENODO.6838946

Almond, S. (2023, April 3). Generative AI: eight questions that developers and users need to ask. https://ico.org.uk/about-the-ico/media-centre/blog-generative-ai-eight-questions-that-developers-and-users-need-to-ask/

Armstrong, K. A. (2022). The governance of economic unionism after the United Kingdom Internal Market Act. The Modern Law Review, 85(3), 635–660. https://doi.org/10.1111/1468-2230.12706

Artificial intelligence strategy: Trustworthy, ethical and inclusive. (2021). [Strategy]. Scottish Government. http://www.gov.scot/publications/scotlands-ai-strategy-trustworthy-ethical-inclusive/

Babuta, A., Oswald, M., & Rinik, C. (2018). Machine learning algorithms and police decision-making: Legal, ethical and regulatory challenges (Whitehall Reports, pp. 1–35) [Report]. RUSI. https://static.rusi.org/201809_whr_3-18_machine_learning_algorithms.pdf.pdf

Bertuzzi, L. (2022, November 21). Digital infrastructure, AI roadmap tangible results of transatlantic cooperation. EURACTIV. https://www.euractiv.com/section/digital/news/digital-infrastructure-ai-roadmap-tangible-results-of-transatlantic-cooperation/

Bowers, P. & Parliament and Constitution Centre. (2005). The Sewel Convention [Research Brief]. House of Commons Library. https://researchbriefings.files.parliament.uk/documents/SN02084/SN02084.pdf

Bradford, A. (2020). The Brussels effect: How the European Union rules the world (1st ed.). Oxford University Press. https://doi.org/10.1093/oso/9780190088583.001.0001

Burch, F. (2021, September 10). Enabling trustworthy innovation to thrive in the UK. Centre for Data Ethics and Innovation Blog. https://cdei.blog.gov.uk/2021/09/10/enabling-trustworthy-innovation-to-thrive-in-the-uk/

Cardell, S. (2022, November 28). Ensuring digital market outcomes that benefit people, businesses and the wider UK economy [Speaker’s Notes]. 4th Annual BIICL/Linklaters Tech Antitrust Roundtable. https://www.gov.uk/government/speeches/sarah-cardell-ensuring-digital-market-outcomes-that-benefit-people-businesses-and-the-wider-uk-economy

Central Digital & Data Office. (2020). Data ethics framework [Guidance]. UK Government. https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-2020

Central Digital & Data Office. (2021). Algorithmic transparency standard. UK Government. https://www.gov.uk/government/publications/algorithmic-transparency-data-standard

Centre for Data Ethics and Innovation (CDEI). (n.d.). AI assurance guide [Guide]. Retrieved 2 June 2022, from https://cdeiuk.github.io/ai-assurance-guide

Centre for Data Ethics and Innovation (CDEI). (2019). Centre for data ethics (CDEI) 2 year strategy [Independent report]. https://www.gov.uk/government/publications/the-centre-for-data-ethics-and-innovation-cdei-2-year-strategy/centre-for-data-ethics-cdei-2-year-strategy

Centre for Data Ethics and Innovation (CDEI). (2021). The roadmap to an effective AI assurance ecosystem [Independent report]. https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem

Centre for Data Ethics and Innovation (CDEI). (2022a). Industry temperature check: Barriers and enablers to AI assurance [Independent report]. UK Government. https://www.gov.uk/government/publications/industry-temperature-check-barriers-and-enablers-to-ai-assurance

Centre for Data Ethics and Innovation (CDEI). (2022b). U.K. and U.S. governments collaborate on prize challenges to accelerate development and adoption of privacy-enhancing technologies [Press release]. UK Government. https://www.gov.uk/government/news/uk-and-us-governments-collaborate-on-prize-challenges-to-accelerate-development-and-adoption-of-privacy-enhancing-technologies

Chomanski, B. (2021). The missing ingredient in the case for regulating big tech. Minds and Machines, 31(2), 257–275. https://doi.org/10.1007/s11023-021-09562-x

Clark, J., & Hadfield, G. K. (2019). Regulatory markets for AI safety (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2001.00078

Clear, S. (2023, January 25). How the UK government’s veto of Scotland’s gender recognition bill brought tensions in the union to the surface. The Conversation. http://theconversation.com/how-the-uk-governments-veto-of-scotlands-gender-recognition-bill-brought-tensions-in-the-union-to-the-surface-198181

College of Policing. (2022). Authorised professional practice for live facial recognition (LFR) [Guidance]. https://www.college.police.uk/app/live-facial-recognition

Competition and Markets Authority. (2023). Investigation into Google’s ‘privacy sandbox’ browser changes [Investigation Report]. UK Government. https://www.gov.uk/cma-cases/investigation-into-googles-privacy-sandbox-browser-changes

Cornelli, G., Doerr, S., Gambacorta, L., & Merrouche, O. (2020). Inside the regulatory sandbox: Effects on Fintech funding (Discussion Paper No. 15502). CEPR Press. https://doi.org/10.2139/ssrn.3727816

Data protection and digital information bill, 143, House of Commons, Session 2022-23, 58/3 (2022). https://publications.parliament.uk/pa/bills/cbill/58-03/0143/220143.pdf

Department for Business, Energy & Industrial Strategy. (2017). Industrial strategy: Building a Britain fit for the future [White Paper]. UK Government. https://www.gov.uk/government/publications/industrial-strategy-building-a-britain-fit-for-the-future

Department for Business, Energy & Industrial Strategy. (2021). Reforming the framework for better regulation: A consultation [Consultation document]. UK Government. https://www.gov.uk/government/consultations/reforming-the-framework-for-better-regulation

Department for Business, Energy & Industrial Strategy, Department for Digital, Culture, Media & Sport, Department for Science, Innovation and Technology, & Office for Artificial Intelligence. (2022). Establishing a pro-innovation approach to regulating AI [Policy Paper]. UK Government. https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement

Department for Digital, Culture, Media & Sport. (2021a). 9 key findings from understanding the UK AI labour market: 2020 report [Report Findings]. UK Government. https://www.gov.uk/government/publications/understanding-the-uk-ai-labour-market-2020/9-key-findings-from-understanding-the-uk-ai-labour-market-2020-report

Department for Digital, Culture, Media & Sport. (2021b). EU adopts ‘adequacy’ decisions allowing data to continue flowing freely to the UK [Press release]. UK Government. https://www.gov.uk/government/news/eu-adopts-adequacy-decisions-allowing-data-to-continue-flowing-freely-to-the-uk

Department for Digital, Culture, Media & Sport. (2022). Data: A new direction—Government response to consultation [Consultation document]. UK Government. https://www.gov.uk/government/consultations/data-a-new-direction/outcome/data-a-new-direction-government-response-to-consultation

Department for Digital, Culture, Media & Sport, Office for Artificial Intelligence, & Philp, C. (2022). New UK initiative to shape global standards for artificial intelligence [Press release]. UK Government. https://www.gov.uk/government/news/new-uk-initiative-to-shape-global-standards-for-artificial-intelligence

Department for Science, Innovation and Technology, Department for Business and Trade, Office for Artificial Intelligence, Department for Digital, Culture, Media & Sport, & Department for Business, Energy & Industrial Strategy. (2019). AI sector deal [Policy Paper]. UK Government. https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal

Department for Science, Innovation and Technology & Foreign, Commonwealth & Development Office. (2023). The UK’s international technology strategy [Policy Paper]. GOV.UK. https://www.gov.uk/government/publications/uk-international-technology-strategy/the-uks-international-technology-strategy

Department for Science, Innovation and Technology, Office for Artificial Intelligence, Department for Digital, Culture, Media, & Sport, & Department for Business, Energy & Industrial Strategy. (2020). Guidelines for AI procurement [Guidance]. UK Government. https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement

Dickson, A., Manancourt, V., & Stolton, S. (2022, April 18). Distractions plague UK’s post-Brexit tech plan. Politico. https://www.politico.eu/article/distractions-plague-post-brexit-tech-plan/

Digital Regulation Cooperation Forum. (2022). Auditing algorithms: The existing landscape, role of regulators and future outlook [Discussion Paper]. https://www.gov.uk/government/publications/findings-from-the-drcf-algorithmic-processing-workstream-spring-2022/auditing-algorithms-the-existing-landscape-role-of-regulators-and-future-outlook

Digital strategy for Wales: How we will use digital, data and technology to improve the lives of people in Wales. (2021). [Strategy]. Welsh Government. https://www.gov.wales/digital-strategy-wales-html

Drake, A., Keller, P., Pietropaoli, I., Puri, A., Maniatis, S., Tomlinson, J., Maxwell, J., Fussey, P., Pagliari, C., Smethurst, H., Edwards, L., & Blair, S. W. (2021). Legal contestation of artificial intelligence-related decision-making in the United Kingdom: Reflections for policy. International Review of Law, Computers & Technology, 36(2), 251–285. https://doi.org/10.1080/13600869.2021.1999075

Duparc-Portier, G., & Figus, G. (2022). The impact of the new Northern Ireland protocol: Can Northern Ireland enjoy the best of both worlds? Regional Studies, 56(8), 1404–1417. https://doi.org/10.1080/00343404.2021.1994547

Edinburgh Innovations. (2022). Building trust in the digital era: Achieving Scotland’s aspirations as an ethical digital nation [Independent report]. University of Edinburgh. https://www.gov.scot/publications/building-trust-digital-era-achieving-scotlands-aspirations-ethical-digital-nation/pages/2/

Engler, A. (2022). The EU AI Act will have global impact, but a limited Brussels Effect [Report]. Brookings. https://www.brookings.edu/research/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/

European Commission. (2021). Data protection: Commission adopts adequacy decisions for the UK [Press release]. https://ec.europa.eu/commission/presscorner/detail/en/ip_21_3183

European Commission. (2022). Draft standardisation request to the European Standardisation Organisations in support of safe and trustworthy artificial intelligence [Draft request]. https://ec.europa.eu/docsroom/documents/52376

European Digital Rights, Access Now, Panoptykon Foundation, epicenter.works, AlgorithmWatch, European Disability Forum, Bits of Freedom, Fair Trials, PICUM, & ANEC. (2021). An EU artificial intelligence act for fundamental rights: A civil society statement [Statement]. EDRI. https://edri.org/wp-content/uploads/2021/12/Political-statement-on-AI-Act.pdf

European Scrutiny Committee. (2021). Fourth report of session 2021–22 [Report]. House of Commons. https://committees.parliament.uk/publications/6461/documents/70485/default/

First regulatory sandbox on artificial intelligence presented. (2022). [Press release]. European Commission. https://digital-strategy.ec.europa.eu/en/news/first-regulatory-sandbox-artificial-intelligence-presented

Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1–14. https://doi.org/10.1162/99608f92.8cd550d1

Floridi, L., Holweg, M., Taddeo, M., Amaya Silva, J., Mökander, J., & Wen, Y. (2022). CapAI - A procedure for conducting conformity assessment of AI systems in line with the EU Artificial Intelligence Act [Report]. University of Oxford. http://dx.doi.org/10.2139/ssrn.4064091

Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society, 374(2083), 1–5. https://doi.org/10.1098/rsta.2016.0360

Foreign Affairs Committee. (2022). Encoding values: Putting tech at the heart of UK foreign policy—Government response to the committee’s third report (Report Third Report of Session 2022–23). House of Commons Committee. https://publications.parliament.uk/pa/cm5803/cmselect/cmfaff/170/report.html

Freeman, L., Batarseh, F. A., Kuhn, D. R., Raunak, M. S., & Kacker, R. N. (2022). The path to a consensus on artificial intelligence assurance. Computer, 55(3), 82–86. https://doi.org/10.1109/MC.2021.3129027

Fuchs, C. (2022). Policy discourses on robots and artificial intelligence (AI) in the EU, the USA, and China. In C. Fuchs, Digital humanism: A philosophy for 21st century digital society (pp. 155–171). Emerald Group Publishing Limited. https://doi.org/10.1108/978-1-80382-419-220221006

Future of Life Institute. (2022). General purpose AI and the AI Act (pp. 1–6) [Research Paper]. https://futureoflife.org/wp-content/uploads/2022/08/General-Purpose-AI-and-the-AI-Act-v5.pdf

Goodman, E. P., & Trehu, J. (2022). AI audit washing and accountability. https://doi.org/10.2139/ssrn.4227350

Government Digital Service (GDS) & Office for Artificial Intelligence (OAI). (2019). A guide to using artificial intelligence in the public sector [Guidance]. UK Government. https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector

Grylls, G., Scott, G., Swinford, S., & Zeffman, H. (2022, June 10). Boris Johnson has until autumn to save his job, Lord Frost declares. The Times. https://www.thetimes.co.uk/article/speaking-out-against-boris-johnson-was-my-duty-claims-jeremy-hunt-mftnd07n3

Helberger, N., & Diakopoulos, N. (2023). ChatGPT and the AI Act. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1682

HM Treasury, & Vallance, P. (2023). Pro-innovation regulation of technologies review: Digital technologies (pp. 1–14) [Policy Paper]. UK Government. https://www.gov.uk/government/publications/pro-innovation-regulation-of-technologies-review-digital-technologies

HMG response to the House of Lords Communications and Digital Select Committee’s report on digital regulation (pp. 1–9). (2022). [Recommendation]. UK Parliament. https://committees.parliament.uk/publications/9464/documents/161530/default/

Huang, S., & Siddarth, D. (2023). Generative AI and the digital commons [Working Paper]. https://cip.org/research/generative-ai-digital-commons

Information Commissioner’s Office (ICO). (2022a). A guide to ICO audit: Artificial intelligence (AI) audits [Guide]. https://ico.org.uk/media/for-organisations/documents/4022651/a-guide-to-ai-audits.pdf

Information Commissioner’s Office (ICO). (2022b). Guidance on AI and data protection. [Guidance]. https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-ai-and-data-protection/

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Kazim, E., Almeida, D., Kingsman, N., Kerrigan, C., Koshiyama, A., Lomas, E., & Hilliard, A. (2021). Innovation and opportunity: Review of the UK’s national AI strategy. Discover Artificial Intelligence, 1(1), Article 14. https://doi.org/10.1007/s44163-021-00014-0

Küspert, S., Moës, N., & Dunlop, C. (2023). The value chain of general-purpose AI. The Ada Lovelace Institute. https://www.adalovelaceinstitute.org/blog/value-chain-general-purpose-ai/

Lanktree, G. (2023, March 5). Biden rebuffs UK bid for closer cooperation on tech. Politico. https://www.politico.eu/article/biden-united-kingdom-technology-rishi-sunak-cooperation-trade/

Lipsey, R. G., Carlaw, K. I., & Bekar, C. T. (2005). Economic transformations: General purpose technologies and long-term economic growth. OUP Oxford. https://www.researchgate.net/publication/227468040_Economic_Transformations_General_Purpose_Technologies_and_Long-Term_Economic_Growth

Lynskey, O. (2017). LSE law Brexit special #7: Brexit and the UK’s tech industry (Policy Brief No. 26–2017; LSE Law Policy Briefing Series, pp. 1–4). The London School of Economics and Political Science (LSE). https://www.ssrn.com/abstract=2941375

Making government deliver for the British people. (2023). [Policy Paper]. UK Government. https://www.gov.uk/government/publications/making-government-deliver-for-the-british-people/making-government-deliver-for-the-british-people-html

McDonald, A. (2023, January 18). UK’s post-Brexit bonfire plan passes in House of Commons. Politico. https://www.politico.eu/article/uks-post-brexit-bonfire-bill-passes-in-house-of-commons/

McHarg, A. (2010). Devolution and the regulatory state: Constraints and opportunities. In D. Oliver, T. Prosser, & R. Rawlings (Eds.), The Regulatory State: Constitutional implications (pp. 67–91). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199593170.001.0001

Medicines and Healthcare products Regulatory Agency, & Javid, S. (2021). Consultation on the future regulation of medical devices in the United Kingdom [Consultation document]. https://www.gov.uk/government/consultations/consultation-on-the-future-regulation-of-medical-devices-in-the-united-kingdom

Mitchell, J. (2013). Devolution in the UK. Manchester University Press.

Mökander, J., Axente, M., Casolari, F., & Floridi, L. (2022). Conformity assessments and post-market monitoring: A guide to the role of auditing in the proposed European AI regulation. Minds and Machines, 32(2), 241–268. https://doi.org/10.1007/s11023-021-09577-4

Mökander, J., & Floridi, L. (2022). From algorithmic accountability to digital governance. Nature Machine Intelligence, 4(6), 508–509. https://doi.org/10.1038/s42256-022-00504-5

National AI strategy. (2021). [White Paper]. UK Government. https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version

OECD.AI. (2021). Database of national AI policies [Data set]. https://oecd.ai/en/dashboards

Ordish, J. (2023, March 3). Large language models and software as a medical device. MedRegs Blog. https://medregs.blog.gov.uk/2023/03/03/large-language-models-and-software-as-a-medical-device/

Ossewaarde, M., & Gulenc, E. (2020). National varieties of artificial intelligence discourses: Myth, utopianism, and solutionism in west European policy expectations. Computer, 53(11), 53–61. https://doi.org/10.1109/MC.2020.2992290

Radu, R. (2021). Steering the governance of artificial intelligence: National strategies in perspective. Policy and Society, 40(2), 178–193. https://doi.org/10.1080/14494035.2021.1929728

Ranchordas, S. (2021). Experimental regulations for AI: Sandboxes for morals and mores. Morals & Machines, 1. https://doi.org/10.5771/2747-5174-2021-1-86

Revised protocol to the Withdrawal Agreement. (2020). [Protocol]. UK Government. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/840230/Revised_Protocol_to_the_Withdrawal_Agreement.pdf

Roberts, H., Cowls, J., Hine, E., Mazzi, F., Tsamados, A., Taddeo, M., & Floridi, L. (2021). Achieving a ‘good AI society’: Comparing the aims and progress of the EU and the US. Science and Engineering Ethics, 27(6), Article 68. https://doi.org/10.1007/s11948-021-00340-7

Roberts, H., Cowls, J., Hine, E., Morley, J., Wang, V., Taddeo, M., & Floridi, L. (2022). Governing artificial intelligence in China and the European Union: Comparing aims and promoting ethical outcomes. The Information Society, 39(2), 79–97. https://doi.org/10.1080/01972243.2022.2124565

Roberts, H., Ziosi, M., Cailean, O., Saouma, L., Belias, A., Buchser, M., Casovan, A., Kerry, C. F., Meltzer, J. P., Mohit, S., Ouimette, M. E., Renda, A., Stix, C., Teather, E., Woolhouse, R., & Zeng, Y. (2023). A comparative framework for AI regulatory policy (pp. 1–56) [Report]. The International Centre of Expertise on Artificial Intelligence in Montreal (CEIMIA). https://ceimia.org/wp-content/uploads/2023/02/Comparative-Framework-for-AI-Regulatory-Policy.pdf

Rutter Pooley, C. (2022, May 4). Why the UK will pay for its delay over Big Tech. Financial Times. https://www.ft.com/content/33349af8-8b62-429e-b74b-13a8a476813b

Schiffer, Z., & Newton, C. (2023, March 14). Microsoft just laid off one of its responsible AI teams. Platformer. https://www.platformer.news/p/microsoft-just-laid-off-one-of-its

Schlesinger, P. (2022). The neo‐regulation of internet platforms in the United Kingdom. Policy & Internet, 14(1), 47–62. https://doi.org/10.1002/poi3.288

Simonite, T. (2021, June 8). What really happened when Google ousted Timnit Gebru. Wired. https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/

Spisak, A., & Britto, D. (2021). After Brexit: Divergence and the future of UK regulatory policy (pp. 1–58) [White Paper]. Tony Blair Institute for Global Change. https://institute.global/policy/after-brexit-divergence-and-future-uk-regulatory-policy

The Windsor framework: A new way forward. (2023). [Command Paper]. UK Government. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1138989/The_Windsor_Framework_a_new_way_forward.pdf

The World Bank. (2020). Key data from regulatory sandboxes across the globe [Data set]. https://www.worldbank.org/en/topic/fintech/brief/key-data-from-regulatory-sandboxes-across-the-globe

Theodorou, A., & Dignum, V. (2020). Towards ethical and socio-legal governance in AI. Nature Machine Intelligence, 2(1), 10–12. https://doi.org/10.1038/s42256-019-0136-y

Truby, J., Brown, R. D., Ibrahim, I. A., & Parellada, O. C. (2022). A sandbox approach to regulating high-risk artificial intelligence applications. European Journal of Risk Regulation, 13(2), 270–294. https://doi.org/10.1017/err.2021.52

Turner Lee, N., & Lai, S. (2021, December 20). Why New York City is cracking down on AI in hiring. TechTank Brookings. https://www.brookings.edu/blog/techtank/2021/12/20/why-new-york-city-is-cracking-down-on-ai-in-hiring/

Veale, M., & Zuiderveen Borgesius, F. (2021). Demystifying the draft EU Artificial Intelligence Act—Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 22(4), 97–112. https://doi.org/10.9785/cri-2021-220402

Volpicelli, G. (2023, March 3). ChatGPT broke the EU plan to regulate AI. Politico. https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/

Walker, P. (2022, July 22). Truss vows to scrap remaining EU laws by end of 2023 risking ‘bonfire of rights’. The Guardian. https://www.theguardian.com/politics/2022/jul/22/bonfire-of-rights-truss-vows-to-scrap-remaining-eu-laws-by-end-2023

Waterfield, S. (2022, June 13). Data Reform Bill consultation ‘rigged’ say civil rights groups. Tech Monitor. https://techmonitor.ai/policy/privacy-and-data-protection/data-reform-bill-consultation-dcms-nadine-dorries

Whittlestone, J. (2022). Response to “Establishing a pro-innovation approach to regulating AI” (pp. 1–13) [White Paper]. Centre for Long-Term Resilience. https://www.longtermresilience.org/post/response-to-establishing-a-pro-innovation-approach-to-regulating-ai

Zhang, D., Maslej, N., Brynjolfsson, E., Etchemendy, J., Lyons, T., Manyika, J., Ngo, H., Niebles, J. C., Sellitto, M., Sakhaee, E., Shoham, Y., Clark, J., & Perrault, R. (2022). The AI index 2022 annual report (Version 1, pp. 1–229) [Report]. Stanford Institute for Human-Centered AI. https://doi.org/10.48550/arXiv.2205.03468

Acknowledgements

We would like to thank Christopher Boniface, Angela Daly, Frédéric Dubois, Hadrien Pouget, and Janis Wong for their extremely helpful feedback during the drafting of this paper.

Huw Roberts, Chris Thomas and Alexander Babuta previously worked at the Centre for Data Ethics and Innovation, while Luciano Floridi was a member of the advisory board. Huw Roberts has also worked as a contractor for the Digital Regulation Cooperation Forum.

Mariarosaria Taddeo wishes to acknowledge that she serves as the non-executive president of the board of directors of Noovle Spa.

Huw Roberts’ research was partially supported by a research grant for the AI*SDG project at the University of Oxford’s Saïd Business School.

Footnotes

1. According to the OECD, the US has released 55 documents, the UK 53, the EU 59, and China 22.

2. Note that we consider the promotion of AI here because it is part and parcel of the UK’s stated pro-innovation approach. While it may reasonably be argued that no AI is the best course of action in certain instances, such a discussion is beyond the scope of this paper.

3. While the UK’s decentralised approach to AI governance is generally referred to as being ‘sector-based’ or relating to ‘sector-specific regulators’, a number of UK regulators actually have cross-sectoral (horizontal) remits, adding an additional layer of complexity to coordinating this approach. For example, the ICO and the Equality and Human Rights Commission (EHRC) respectively regulate information rights, and equalities and human rights, across all sectors of the UK economy. These remits will intersect with each other, as well as with those of sector-based regulators in healthcare, finance, transport, and so on.

4. The Internal Market Act (2020) aims to minimise disruptive differences to trade by introducing a “mutual recognition” principle, which establishes that products and services that can lawfully be sold in one part of the UK can be sold in all parts.

5. Note that devolution does not alter the principle of parliamentary sovereignty in the UK, meaning Westminster could theoretically introduce rules covering AI, as it has done in other areas of policymaking. However, the Sewel Convention establishes that “Westminster would not normally legislate with regard to devolved matters in Scotland without the consent of the Scottish parliament” (Bowers, 2005, p. 2).

6. As an example, the government established a Department for Science, Innovation and Technology in February 2023, which has taken on leadership of AI and data policy and is tasked with “positioning the UK at the forefront of global scientific and technological advancement” (Making Government Deliver, 2023).

7. See: A Guide to Using AI in the Public Sector (2019), Guidelines for AI Procurement (2020), Algorithmic Transparency Standard (2021), Data Ethics Framework (2020).

8. A reasonable retort to this point is that a dedicated AI regulator may be better placed to develop novel governance initiatives appropriate for AI. However, an AI-specific regulator would also have several drawbacks compared to a sector-led approach, for instance: (i) following a single regulatory tradition, if these powers are embedded within an existing regulator (e.g., the ICO); (ii) a lack of institutional knowledge from adjacent policy areas, if a new regulatory body is established; and (iii) a narrower set of powers for regulatory experimentation, if AI governance responsibility is centred in a single body. Accordingly, it is reasonable to suggest that the decentralisation of AI governance responsibility has facilitated the development of novel governance initiatives.

9. For instance, Holistic AI is a British startup focused specifically on AI auditing, while larger firms such as EY have been developing algorithmic auditing capabilities.

10. In the UK government’s own words, “this free flow of personal data [from the EU] supports trade, innovation and investment, assists with law enforcement agencies tackling crime, and supports the delivery of critical public services sharing personal data as well as facilitating health and scientific research” (Department for Digital, Culture, Media & Sport, 2021b).

11. The Stormont Brake is triggered if at least 30 of the 90 members of the Northern Ireland Assembly vote to block the adoption of updated EU Single Market rules. If this happens then a discussion about this objection is opened between the Northern Ireland Assembly, Westminster, and Brussels (The Windsor Framework, 2023).

12. It should be acknowledged that some local government initiatives are emerging in the US which introduce audit requirements (Turner Lee & Lai, 2021).
