Who’s at stake? The (non)performativity of “stakeholders” in UK tech policy
Abstract
The term "stakeholder" features prominently in discourses surrounding tech policy. It is used as a marker for engaging with wider organisations and publics beyond government and the companies that make technologies. But what does this term do in practice? What roles does it create or deny, what power structures does it open or close? This study of 194 tech policy documents produced by the UK government over a span of five years uncovers the different ways the term "stakeholder" is used in practice. By comparing the use of the word with whose ideas are actually cited in the documents, it highlights the discursive gaps between what is claimed and what is represented in the voices that shape policy. The results are analysed through queer performativity, including institutional non-performativity and peri-performative framings, to assess the roles that are imposed on different groups, and the different hierarchies and power structures the stakeholder constructs in current UK policy practices.
INTRODUCTION
Tech policy in the UK has undergone a rapid increase in activity and prominence in recent years. Following the adoption of the European General Data Protection Regulation (GDPR) in 2016, and in the context of UK law diverging after its departure from the EU, there has been a flurry of proposed regulations, including multiple digital strategies (for example, DSIT and DCMS, 2019; DSIT et al., 2021) and a major AI white paper (DSIT and Office for AI, 2023). The UK has raced to lead the world in tackling the escalating harms of technologies, for example with the Online Safety Act 2023, while promoting its values and interests. This has led to a wealth of documents: policy proposals, consultations, guidance, codes and others. These have been created by departments and regulators from across government, from the Department for Culture, Media and Sport (DCMS) and Department for Business, Energy and Industrial Strategy (BEIS) to the Central Digital and Data Office and the National Audit Office, the Information Commissioner’s Office (ICO) and OfCom, and the Children’s Commissioner, among many others, including an increase in cross-departmental collaborations. And yet the UK’s industrial strategy-driven approach has lagged behind, for example, the EU and China in terms of effective regulation that meets the needs of everyday citizens, and the effects of the policy frenzy remain contested, with the Online Safety Act, the flagship regulation enacted so far, being labelled by London Mayor Sadiq Khan as “not fit for purpose” (Elgot and Courea, 2024) in response to specific crises.
In part, the difficulties are due to the sheer breadth of social and technical problems to consider when attempting to manage the wide variety of issues with which tech policy must contend, intersecting with many other policy areas from health to education to policing to business to energy and the environment. Tech policy is also notable for the continued contestation of key terms. There has been much debate, for example, around the definition of “AI” in legislation (the EU AI Act revised and expanded its definitions and categories throughout its development), with Imogen Parker of the Ada Lovelace Institute pointing out how a lack of definitional clarity “leave[s] the public sector without actionable or cohesive frameworks for decision-making” (2025). When the very things being regulated are still being defined, it is a key moment in which to ensure that these cross-cutting and far-reaching policies embed the terms and values that represent the different groups with interests at stake in the debates. This issue of stakeholders is important. Many documents claim to consider or represent the interests of “wide-ranging stakeholders”, or to consult with stakeholders. But what exactly does “stakeholder” mean? Regardless of who is consulted, whose interests actually follow through to influence policy? Who counts as a stakeholder in different facets of tech policy? Is stakeholder used to represent ideals or the interests of those whose lives are at stake? Who has a stake in the benefits, the harms, the decisions?
In this research we examined 194 documents of UK tech policy to uncover how widely the term stakeholder is used. We wanted to look beneath the rhetoric of consultation to see what sources get cited in the final presentation of policies, and by extension whose interests are really represented. The project is based on the following key questions:
- How widely is the term “stakeholder” employed in UK tech policy?
- Whose views are actually represented in policy documents?
- How does this representation differ between technology areas, issues, or departments?
- What does the use of the term stakeholder create, conceal or exclude?
Answering these questions gets to the heart of what it means to have a stake in technology, and who is at stake in the way tech policy is developed. Pursuing these arguments highlights the need to look at exclusion and marginalisation not only in a practical sense but also at the discursive level. The use of the term stakeholder implies certain roles, yet they remain ill-defined, malleable and open to being operationalised for political ends. Stakeholder engagement could be a tool for widening participation in tech policy, or it could become a veil for assigning power to those with existing influence or a financial stake in steering regulation away from more transformative or redistributive aims. It is important to consider approaches to analysing these discourses and the different ways that the same term can be used to create power or deny it.
The analysis is therefore framed through the lens of performativity, bringing queer feminist principles to understanding the ways that power is embedded in dominant discourses, roles and behaviours. Building on the work of Judith Butler (1990; 2018), we applied the idea that speech and naming can create certain identities and power structures when people take on (or are forced to take on) certain roles. This approach helps us to understand when the labelling of a group as having a stake in policy actually constitutes that role and affords influence or brings their interests to the fore. We used Sara Ahmed’s (2006) idea of non-performativity, in which terms like inclusion are used to conceal their opposite: the exclusion of marginalised groups. This highlights intersectional concerns and a focus on examining the potentially tokenistic nature of terms like stakeholder when it comes to actually empowering the groups most affected by institutional power. We also considered Eve Kosofsky Sedgwick’s (2003) concept of peri-performativity, to understand how the ways we talk about these practices can shape wider understandings and set the stage for future developments. This offers important reflections on the discursive allocation of performative roles, and the self-referential element of government agendas in defining who has access to policy. Across these theories, we were interested in how the term stakeholder might be used in different ways to privilege different groups, to analyse what power dynamics it creates and to answer our core question of who is at stake in UK tech policy.
LITERATURE ON STAKEHOLDERS
Freeman’s (1984) foundational text on ‘management in turbulent times’ situated stakeholder theories as emerging from business and organisational contexts, as a counter to focusing solely on the priorities of shareholders and an acknowledgement of a wider social responsibility. The widespread focus on stakeholders could therefore be said to be less a matter of civic duty and more rooted in an organisational strategy serving business interests. The term has also been shown to be problematic due to its mercenary origin (Sharfstein, 2016) in holding stakes in a bet or having a financial investment in an issue, which undercuts the wider focus beyond shareholding that the theories attempted to promote. The term stakeholder has also received criticism due to its connotations of stakeholder colonialism, in which “discourses of corporate citizenship […] are defined by narrow business interests” and stakeholder theory “represents a form of stakeholder colonialism that serves to regulate the behavior of stakeholders” (Banerjee, 2008). Even expansions into organisational ethics (Phillips, 2003) remain rooted in market-driven motivations responding to a cultural and political climate that began to demand and scrutinise corporate responsibility. Across these problematic framings of stakeholders as a framework for organisational policy is a tendency towards staking a claim rather than examining what or who is at stake. This form of decision-making, then, acts as a foil or distraction, not necessarily promoting the interests of anyone outside of existing positions of power and influence.
Phillips (2003) suggests that the power in the term stakeholder is due to its conceptual breadth as the term can be interpreted in a number of ways and holds different meanings to different people. However, the remit of organisational ethics places stakeholderism as an optional (if ideal) choice for a business, fundamentally for its own greater gain, rather than the policy and legal contexts that carry a responsibility to empower different publics and protect the vulnerable. An area where these concerns have been demonstrated and discussed is public health. Here the breadth of the term has been expressed as a severely limiting factor, rendering stakeholder engagement tokenistic (Sharfstein, 2016), with calls for greater specificity in who exactly is being counted as a stakeholder.
Within tech policy, previous work on stakeholders has also focused on the governance of private companies, particularly those labelled as ‘platforms’ (see, for example, Gorwa, 2019). Where studies have examined the impact of stakeholders on legislation, regulation and public policy, they have focused on global governance, particularly of Internet protocols and infrastructure (existing often outside of states on privately owned systems) (Belli, 2015; Epstein et al., 2016; Hofmann, 2016; Liaropoulos, 2016), of platform regulation and accountability (existing within the private sector in conflict with states) (Barrett and Kreiss, 2019; Mannan and Schneider, 2020) including fake news (Marda and Milan, 2018), of cyber security and privacy measures (existing between states and with the private sector) (e Silva, 2013; Wolff, 2016; Bauer and van Eeten, 2009; Janssen et al., 2020), and of society-wide principles and values for the development of AI and related emerging technologies (especially transnational regulation) (Cath et al., 2018; Floridi et al., 2018; Hickman and Petrin, 2021; Smuha, 2021; Wilson, 2022). Other more specific areas have drawn on the application of technology to fields already interested in the role of stakeholders, such as AI in healthcare (Scott et al., 2021) or online voting (Burkell and Regan, 2019). Niklas and Dencik (2021), in their thematic analysis of the discourses in stakeholder submissions to the EU White Paper on AI, found that ‘fundamental rights’ was the most prominent form of rights (closely followed by privacy and human rights), although references to discrimination within the White Paper itself focused on issues surrounding data (bias, quality, etc.) rather than a deeper engagement with the interactions between uses of AI and rights. They also found that ‘social rights’, while a growing part of wider discourse, was almost absent from stakeholder discourses.
The findings of that study suggest that contributions by stakeholders (which, as we found in this study, were dominated by corporate entities) fall back on dominant narratives of legal minimum requirements rather than alternative ways of organising power and technology.
However, stakeholder engagement in such literature is often held up as an answer to corporate and/or state power. Even though power asymmetries and dependencies are often identified (for example, van Dijck et al., 2019), further work is needed to problematise the normative effects of invoking ‘stakeholders’ and to highlight the power asymmetries that remain in dominant modes of stakeholder engagement. In much of the current regulatory landscape, multistakeholder approaches still allow disproportionate influence by private companies over regulation, creating loopholes such as data-trafficking between related corporate entities (Kokas, 2023). In this sense, stakeholderism acts as a code for self-regulation, which in turn corresponds to influence over regulation, essentially allowing private companies to write their own regulation. Areas such as Internet governance have identified criteria for meaningful engagement (Malcolm, 2015). But while stakeholderism is often predicated on the idea of an equal footing, this discursive flattening also hides the power asymmetries that remain, and the role of the term stakeholder in sustaining such power.
There remains a question of defining the stakeholder, though it is one that we (at least partially) reject. Stakeholder can mean different things in different contexts, and lacks a stable definition. There is growing consensus around who the term stakeholder might include; Gorwa (2019), for example, offers firms, government actors and non-governmental actors as a tripartite system of formal and informal platform governance. However, this often falls back on those who are already engaged in policy discussions or wield other mechanisms to influence technology practices. It highlights a separation between those involved (to varying degrees) in building policy and regulation, and those whom such policy is supposed to protect, those subject to the technologies and policies. The claiming of a ‘stake’ in policy discussions - or the awarding of such a stake to another party - is itself a gesture that creates power and voice for those who are often already engaged in the conversation. A key contribution of this article is a shift in focus from what or who a stakeholder is to what the term stakeholder does: the different ways it is applied to different groups and the different roles and power structures such a term constitutes. This also leads to our critique of the discourses that mobilise the term stakeholder through its focus on who holds a stake, in order to address the often obscured matter of who is at stake. We return to this question in our analysis and discussion.
PERFORMING THE ROLE OF STAKEHOLDER
In order to assess the term stakeholder not as a state of being, with a stake as a thing to be ‘held’, but instead as an active gesture that constitutes power relations in the way it is used, the theoretical framework for this study is rooted in queer notions of performativity. The understanding of performativity builds on that developed by Judith Butler (1990; 2018) in relation to the social construction of gender identity and applied by Garfield Benjamin to technology in relation to privacy (2020), trust (2023), and the terminology surrounding data (2021). This form of performativity encapsulates the two processes of socially constructing norms and roles. On one hand, it is the way individual acts of naming and speaking constitute the identity or role that is being performed. This can be a purposeful or voluntary act of creating identity by adopting specific roles. On the other, it is the social and cultural norms that shape, reinforce and enforce the roles that people are expected to perform. This can be an oppressive set of structures that constitute and impose power asymmetries. In terms of gender, these processes are the way individuals constitute their own identity through each act of speech, dress or mannerism, as well as the set of social norms that are simultaneously constituted through these individual acts and constitute the unequal roles that individuals are expected to perform. In terms of the stakeholder, then, we are concerned with acts of labelling that lay claim to a stake in policy, to a voice in the discussions leading to policy, and to power and influence over the interests and values embedded within such policy. While the definitions underpinning tech policy are still in flux, who is involved in developing a consensus on what counts as one technology or another, and what the purpose or limits of such technologies should be, carries a timely importance in relation to normative power.
This performative framework therefore provides a useful lens through which to examine the role of the term stakeholder in UK tech policy, what the term does to construct certain power relations (and not others). Each use of the term helps constitute what it means to be a stakeholder. This includes what that role entails, what power or influence it provides, and who is allowed to occupy that role. By extension, each act of labelling or invoking stakeholders contributes to claims of who has a stake in developing policy, whose voice counts and whose interests are represented. But the wider ways in which making reference to stakeholders is constituted across the gamut of policy documents and processes imposes norms and power structures that perpetuate inequalities in who counts and whose voice is represented. This shapes further uses of the term and normalises specific meanings of the role of stakeholder that call into question who is really being included.
This understanding of how “stakeholder” is performatively constituted in tech policy can be extended by two further theoretical developments. One of these is its inverse, what Sara Ahmed calls non-performativity (2006). Developing a critique of institutional inclusion initiatives, Ahmed outlines how the act of speaking a role or function can be used to obscure its absence. Like diversity and inclusion, the role of stakeholder can be used in ways that signify representation of different interests without making any changes to the power structures and privileges that focus on the same views time and time again. This includes the vague references to “wider stakeholders” or “a range of stakeholders” or even the more obvious “other stakeholders” that relegates such inclusion to an othered category of generic recognition, often in the form of consultation without true representation.
The other theoretical position that informs our discussion is peri-performativity as outlined by Eve Kosofsky Sedgwick (2003). This is the set of acts about acts, the allo-referential practices that constitute not speech or roles themselves but the contexts in which they are produced. This pushes the collective norms of performativity further by encouraging us to think about the instability of power structures and how references to stakeholders establish the contexts in which those stakeholders might be represented. Peri-performativity comes into play when the role of stakeholder is performed on behalf of the individual or group concerned. This is a significant part of the discourse of stakeholder engagement in policy processes as stakeholders are often referred to in generic or categorical senses imposed on individuals or groups, rather than being a role or identity purposefully performed by those whose lives are ‘at stake’.
What this means for our analysis is examining the different ways stakeholders are invoked as roles, groups or methods for performatively (or non-performatively) generating the values and interests behind tech policy. With these tools in hand, we can metaphorically set the stage for the stakeholder to enter the spotlight of critique within our consideration of UK tech policy documents.
METHODS
The study used a mixed methods approach, building quantitative and qualitative data upon which to analyse the use of the term stakeholder and the discursive way it constitutes different forms of power and exclusion. Mixed methods approaches have been used in technology policy analysis to bridge gaps between existing quantitative and qualitative approaches (Saheb and Saheb, 2024). We add to the growing body of literature gathering technology policy documents through key governmental and intergovernmental sources (Corrêa et al., 2023), further honed here by the focus on the particular regulatory context of UK public bodies. We followed an inductive approach to document selection and coding (Schiff et al., 2021). Our focus on a specific theoretical framing led to a more discourse-analytic approach akin to work such as Niklas and Dencik (2021), viewed through the lens of performativity (Benjamin, 2020; 2021); but where those studies sought key themes present in the documents, our coding and analysis were more purposive in their focus on “stakeholder” as a term.
Policy documents
The key aim when selecting documents was to include a range of reports, policy and guidance documents that demonstrated how different voices and perspectives were involved in influencing the design or implementation of policy. The documents were all drawn from UK tech policy, as this was the focus of the study given the particular context of separation from the EU and the consequent lack of attention in the literature compared to EU and US contexts. The principal investigator generated a list of relevant documents using an inductive approach. This began with the UK Digital Strategy as a key overarching statement of the government’s regulatory agenda around digital technologies. From this, the reports it cited were considered for inclusion, leading to a snowball method following ‘references’, ‘further reading’, ‘related topics’ or key word categories within reports and the government websites hosting them.
A number of inclusion and exclusion criteria were followed. To be included, documents had to be hosted by a UK government department or body (i.e. from a .gov.uk, .ico.org.uk, .ofcom.org.uk domain or similar). Documents were included if they fell within the time range of 2017 to 2022 (or the very beginning of 2023). This period represented a specific ‘era’ in UK tech policy characterised by greater attention not only across different government departments but also in the media. Inclusion focused on documents related directly to the reflexively emerging categories of ‘data’, ‘AI’ or ‘platforms’, based on how the documents themselves defined the underlying technologies with which they were concerned, but also wider relevant areas including digital literacy or the use of digital technologies as an enabling factor (such as economic and industrial strategies).
Documents were excluded if they did not present analysis or policy outcomes, for example surveys conducted by external media organisations or Parliamentary Committee minutes. The rationale for this decision was the research questions’ focus on how specific perspectives or voices were included in policy outcomes. So, for example, a government department’s response to consultation would be included, as it demonstrates which submitted evidence was selected to align with government priorities, while the broader submissions themselves do not necessarily equate to influence or inclusion within the policy process; the consultation method does not guarantee inclusion. Our observations surrounding this are discussed below. A study of committee meetings and evidence submissions would be a potential avenue for further study, with a slightly different emphasis on who is invited or which political actors shape discussions within groups of elected representatives. The aims of the present study instead resulted in a focus on the civil service: government departments and bodies responsible for developing and implementing policy. The list was then checked by the full team to ensure fit. These criteria led to a body of 194 documents.
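As an illustrative sketch only (document selection in the study was conducted manually), the hosting and date criteria above could be expressed as a simple filter. The suffix list here is indicative rather than the complete set of qualifying domains:

```python
from urllib.parse import urlparse

# Indicative, not exhaustive: domain suffixes treated as UK government bodies.
INCLUDED_SUFFIXES = (".gov.uk", ".ico.org.uk", ".ofcom.org.uk")
# 2017 to 2022, plus the very beginning of 2023.
INCLUDED_YEARS = range(2017, 2024)

def meets_criteria(url: str, year: int) -> bool:
    """Check a candidate document against the hosting-domain and date criteria."""
    host = urlparse(url).hostname or ""
    hosted_ok = any(host == s.lstrip(".") or host.endswith(s)
                    for s in INCLUDED_SUFFIXES)
    return hosted_ok and year in INCLUDED_YEARS
```

The topical criteria (‘data’, ‘AI’, ‘platforms’ and wider relevant areas) resist this kind of mechanical expression, which is one reason the selection relied on reflexive human judgement rather than automation.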
Analysis
Documents were sorted by year and by authoring department(s), before being labelled according to categories emerging through reflexive coding of the data set. This was undertaken across two axes. The first was the technology area the document applied to or focused on, grouped into ‘Data’, ‘AI’, ‘Platforms’ and ‘Other’. The second axis was the topic of a document’s policy area or type, again coded based on reflexive reading of themes from the documents’ titles, authoring departments or key words. This included areas such as children, business, health, identity, information security, public sector or society which identified whose interests were ‘at stake’ in the policy, as well as the labels of standards or strategy which identified a document framed as shaping further policy implementation.
In order to examine which interests and voices were being included in policies, or were being presented as those with influence, we categorised the references cited within the policy documents. Themes emerged that in part align with, for example, Gorwa’s (2019) tripartite categories but required further separation to view the different types of actors present. ‘Industry’ formed its own category, standing to some extent apart from ‘Media’, which covered private interests presented through journalistic sources rather than the direct influence of a specific company. To this end, ‘Government’ sources related specifically to other UK government reports, while ‘Government (international)’ demonstrated links to other national contexts and supranational organisations such as the UN or related agencies. This decision delineated the internal focus that emerged in many of the reports, discussed in our findings below. Non-governmental actors were further separated into ‘Academia’, ‘Civil society’, ‘Media’ (overlapping with Industry when it related to regulation of online platforms, for example, but defined as journalistic sources, which were often cited in the documents) and ‘Other’. Marking out these different categories that emerged in the data provided us with a more granular impression of which types of organisations were prioritised by government departments authoring reports, demonstrating a general sense of insider circles of influence and, more pertinently, the peri-performative construction of being a stakeholder discussed below in the performative analysis. The analysis of citational practices was undertaken across the full list of 194 documents, in part to include a larger data set and in part so that some comparisons could later be drawn between those using or not using the term stakeholder.
The word stakeholder was found in a subset of 141 of the documents. We further coded its use, with themes emerging reflexively after an initial reading and then adapted and agreed between the researchers, categorising the different ways the stakeholder was used. The first categorisation was temporal, as documents tended to focus on one or both of past engagement with stakeholders (in developing the policy) or future expectation (in implementing the policy). The second set of categories that emerged concerned how the term was used to identify specific or general groups. This was divided into: specifically named stakeholders or types of stakeholders; ‘key’ or ‘relevant’ stakeholders; ‘a range of’ or ‘diverse’ stakeholders; ‘other’ or ‘some’; and finally the generic unqualified use of ‘stakeholders’ by itself.
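The qualifier categories above can be sketched as a set of simple pattern matchers. These patterns are hypothetical simplifications for illustration: the study’s coding was performed reflexively by the researchers rather than by script, and the ‘specifically named’ category in particular required manual judgement:

```python
import re

# Illustrative patterns for three of the qualifier categories; the real coding
# captured many more phrasings and was agreed manually by the research team.
QUALIFIER_PATTERNS = {
    "key/relevant": re.compile(r"\b(?:key|relevant)\s+stakeholders?\b", re.I),
    "range/diverse": re.compile(r"\b(?:a range of|diverse)\s+stakeholders?\b", re.I),
    "other/some": re.compile(r"\b(?:other|some)\s+stakeholders?\b", re.I),
}
# Matches any mention, including hyphenations such as multi-stakeholder.
ANY_MENTION = re.compile(r"\bstakeholders?\b", re.I)

def code_mentions(text: str) -> dict:
    """Count qualified mentions per category, plus remaining 'generic' mentions."""
    counts = {label: len(p.findall(text)) for label, p in QUALIFIER_PATTERNS.items()}
    counts["generic"] = len(ANY_MENTION.findall(text)) - sum(counts.values())
    return counts
```

Even this toy version makes the coding problem visible: an unqualified occurrence can only be identified negatively, as a mention that no qualifier pattern claims, which mirrors the unattributed ‘generic’ usage discussed in the findings.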
The quantitative data we created about the documents reveals initial findings. Differences across categories showed who was and was not being included or made visible as ‘stakeholders’ within the policy documents. However, the function of this data was also to inform the discourse analysis we conducted using the performative theoretical framework. Qualitative observations were therefore made throughout this process, particularly in relation to the uses of the term stakeholder, in order to contextualise the quantitative data and support the wider discussion. For example, observations were made in relation to the combination of the temporality and type of stakeholder mentioned, such as “we engaged with industry stakeholders” compared with “we will engage with diverse stakeholders”. This enabled us to examine in more detail the way that the term stakeholder embeds power in different ways, and to situate the findings within these broader narratives of tech policy.
FINDINGS
Policy documents
The 194 reports analysed were weighted towards more recent years, showing the rapid rise in interest and activity in tech policy. Forty-three different organisations sole-authored reports, rising to 52 when co-authoring is included. Figure 1 shows the distribution of authoring departments over time.
Authors ranged from major government departments such as DCMS, with direct legislative remit over technology policy, to a range of specific regulators, offices and other bodies. These may have a more specific remit over a particular aspect of tech policy, such as the UK Council for Internet Safety, the National Data Guardian, the Office for AI, the Geospatial Commission or NCSC, or they may cover a different area that overlaps with issues in technology, such as the Race Disparity Unit, Children’s Commissioner, Committee on Standards in Public Life, or devolved governments. The most prominent author of reports was the Department for Culture, Media and Sport (DCMS), with 33 sole-authored reports, followed by the Information Commissioner’s Office (ICO) at 20, and the Centre for Data Ethics and Innovation (CDEI) with 11, though the latter is notably a part of DCMS. The Department for Business, Energy and Industrial Strategy (BEIS), the Cabinet Office (and particularly its Central Digital and Data Office), OfCom, the Children’s Commissioner, and the NHS or the Department of Health and Social Care (DHSC) also featured in a number of reports. Twenty-five reports were cross-departmental. These were collaborations between major departments, for example the three reports co-authored by DCMS and BEIS; between bodies working on related areas, like DHSC working with NHS England and BEIS working with OfGem and InnovateUK; or between a department and a more specific body, such as the Cabinet Office operating in a facilitating role working variously with the Geospatial Commission, Scottish Government and others.

The number of organisations counts sub-bodies separately, like NHS, NHSX, NHS Digital and NHS England, as they each give a particular voice, focus and framing to policy, and will each have different strategic priorities and levels of engagement with technology. For example, NHSX is more likely to define technology strategy or standards for use in the health sector, as in the report “Artificial Intelligence: How to get it right”, while the broader NHS would situate tech as an enabler for wider needs in the health sector, such as “The Topol Review: Preparing the healthcare workforce to deliver the digital future”. The long tail of organisations working on tech policy shows the breadth and scale of the issues faced. Combined with the cross-departmental collaborations, this also demonstrates the dominance of technology as a solution within government rhetoric and priorities, and the need to integrate these aims into existing policy programmes and specific operational areas.
The categories over time (shown in Figures 2 and 3, and intersecting in Figure 4) show some interesting fluctuations. For example, 2019 saw a comparative spike in focus on AI. This could be due to a number of factors. The same year saw AI directives, strategies or principles from the US (including the establishment of the Center for Security and Emerging Technology), Singapore, The Netherlands, South Korea, Japan and Australia. It could also be seen as the next step in technology policy after the GDPR’s coming into force the previous year. These AI documents could also be seen as a period of scoping before the 2023 push towards an AI White Paper as a counterpart to the development of the EU AI Act. Subsequently, however, data became a core focus, echoing government rhetoric and priorities around data-driven policy and solutions. Across topic areas, strategy and standards increased over time as overarching government policy became elaborated into specific strategies for specific departments or policy areas. Business-centred reports remained a key component, again echoing government narratives around prioritising industrial strategies and seeing digital technologies as a key enabler for growth agendas. Other smaller scale observations include an increase in strategy documents in 2020 followed by an increase in society-wide issues the following year, perhaps suggesting responses to changes in strategy and the need to examine the impact on issues such as inequality and the effects of policy in specific areas.



Stakeholder
One hundred and forty-one of the total 194 documents used the term ‘stakeholder’ (including plurals or hyphenations such as multi-stakeholder). Figures 5 and 6 show this over time. Notable spikes occur in 2019, which aligns with a focus on strategy where engagement with wider external actors may be more likely, and in 2022. The latter perhaps demonstrates a more recent focus on stakeholder engagement, positioning the findings of this paper with increasing importance as policy-makers seek to include - or to allude to including - more varied perspectives in how tech policy is implemented across different sectors. Our focus is now on the 141 documents that mentioned stakeholders. Of these, 70% made reference to stakeholders in past engagement, while 80% referred to them as part of future plans (for themselves or others). Slightly under half of these documents mentioned specific types of stakeholders (46%), while there were also common references to “other stakeholders” (47%), “key” or “relevant” stakeholders (41%) and “diverse” or “a range of” stakeholders (39%). The generic use of “stakeholders” without any qualifier was more dominant, being used by 81% of the documents that mentioned stakeholders at all. The total number of uses places the generic use significantly higher, at 703 compared to between 109 and 146 for more qualified uses, while past and future uses were more even at 543 and 562 respectively. Figure 7 shows the relative proportion of different mentions across temporal and use case categories.
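The counts above rest on matching the term and its variants across document text. A minimal sketch of such matching (illustrative only: the pattern, function name and sample texts are our assumptions, not the study’s actual code) might look like:

```python
import re

# Hypothetical pattern capturing "stakeholder", plurals and hyphenated
# forms such as "multi-stakeholder", case-insensitively.
PATTERN = re.compile(r"\b\w*-?stakeholders?\b", re.IGNORECASE)

def count_mentions(text: str) -> int:
    """Count occurrences of 'stakeholder' and its variants in a document."""
    return len(PATTERN.findall(text))

# Illustrative document texts, not drawn from the corpus.
docs = [
    "We consulted key stakeholders and a multi-stakeholder forum.",
    "This guidance covers data protection obligations.",
]
mentioning = [d for d in docs if count_mentions(d) > 0]
print(len(mentioning), count_mentions(docs[0]))  # 1 document mentions; 2 variants found
```

Classifying each match by its qualifier (“key”, “other”, “a range of”, or none) then yields the proportions reported above.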


Documents with the highest use of the term stakeholder were mostly government department reports detailing responses to consultations. These documents are important as they show how the potentially more inclusive consultation process translates into who is actually being represented as the policy is developed. They were, however, dominated by generic uses of the term stakeholder, not specifying who or even what type of stakeholder was being referenced. This speaks to a problem of opacity within the policy-making process, concealing influence and not validating representation within the consultation process. It is not clear whether these “stakeholders” include the full breadth of submissions or only a few insider groups already involved in the discussion. This use of the term stakeholder could therefore be considered non-performative insofar as it remains unattributed. It does not give the role of stakeholder to a particular organisation, group or interest, and thereby fails to acknowledge those who have a stake in the discussion. Instead, following the institutional inclusion narratives that Ahmed critiqued, this use of the stakeholder in consultation becomes more a discursive mobilisation of the term in support of existing policy agendas. It suggests that it is enough to listen to “some” stakeholders without thinking about “who” those stakeholders actually are.

The highest use of specific stakeholders was in relation to business topics. This demonstrates the historical use of the stakeholder as having a financial stake, falling back on the definitions of colonial or corporate stakes, ownership and influence discussed above (Sharfstein, 2016; Banerjee, 2008). The focus on industrial strategy and growth highlights a political agenda driven by specific economic principles, supported by our observations on documents such as the Competition and Markets Authority reports on digital advertising on online platforms, which prioritised references to “industry” stakeholders. This use of specific stakeholders gives a stake, gives a voice, to those already in control of such systems rather than the users and especially marginalised groups who would be affected by their decisions.
Other topics with a high use of the term stakeholder overall were strategy, which is appropriate given the consultative process already mentioned, as well as information security topics, which builds on multi-stakeholderism as a key part of areas such as internet governance, national security and supporting smaller organisations in their cyber security practices. Of the vast majority of documents that did refer to stakeholders in some way, most did so many times over. The few organisations that never made any reference to stakeholders were those with only one or two reports in the data set. As our sampling of policy documents was conducted prior to identifying their use of the term stakeholder, this is unlikely to be a sampling artefact. Instead, this finding is more likely due to these examples being more specific applications within other policy sectors, where the word stakeholder may be set aside in favour of naming specific interested and involved parties. This demonstrates the potential for a more accountable approach towards engaging with stakeholders by naming directly those who are involved in decision-making and those who are affected.
Some documents, such as those by the NHS, NDG and NAO, referred almost or entirely exclusively to past engagement, which also (with varied usage) demonstrates a certain level of accountability in including different perspectives in the policy process. But many others, including many of those by the ICO, CDEI, OfCom, BEIS, DHSC, and HM Treasury, leant much more heavily towards future imperatives (in the case of the ICO in particular, this was often as an instruction to other organisations rather than its own plans). This shows a split between those making grand strategies for others to implement or to work out details later, and the more operational side of building from existing relationships. Take the difference between DHSC and the NHS, for example: the government department focused on overarching strategy and the expectation that stakeholder engagement would follow, while the health service, already embedded with professionals, patients and suppliers, brings that engagement into its policy documents.
These findings show that the use of stakeholder tends to performatively entrench the existing power of “industry stakeholders” or nameless but clearly already engaged and empowered “key stakeholders”. Meanwhile, they also construct a false sense of inclusion through the non-performative use of generic or “other stakeholders”. This creates a significant risk of a veil of accountability, and raises serious questions over established processes such as consultation. When it is unclear who is influencing policy, whose voices and interests are being represented, the indicators from specific uses suggest that the stakeholder becomes a foil for amplifying historical power and privilege, often along political and/or economic lines, and in doing so excludes the needs of those most affected by technologies, who already suffer a lack of agency in how data, AI, platforms and other areas are used to shape their lives.
Citations
How do these uses of the term stakeholder compare to the voices actually (and identifiably) represented in policy documents? We return now to the full data set of 194 documents to consider which sources were being cited within the documents. Figure 8 provides an overview of the total citation sources represented in the data set over time. Half (50.4%) of all citations across all reports assessed were from government departments, while almost two thirds (125/194) had 50% or more sources from other government reports, showing a circularity and self-repetition across government departments and policies. Only 13 documents had no citations of government sources, though all but two of these had no citations at all. These 11 documents that had no citations included three that were based on direct surveys, while the remainder were strategies and standards in areas like Defence. The most cited source outside of government was industry, at 15% of all citations, followed by academia with 13%, media with 8%, civil society with 7% and government (international) with 6%.

The dominance of government sources shows a consistent and common agenda through authoritative sources. It follows that larger or strategic documents would go on to shape many future policy documents, often in more specific areas of implementation. This tracks with how tech policy has spread across UK government departments as more areas have had to grapple with the impact of data, AI, platforms and the like on their particular remit. However, this insular sourcing of views carries a high risk of political single-mindedness and a closed agenda. Once particular strategies are put in place, they echo across different policy areas, which potentially overrides the needs of specific contexts – specific stakeholders and specific people or groups whose lives are ‘at stake’ – not to mention the wider expertise that could be brought in to inform these more specific areas. While it is again perhaps an effect of standard policy practices, the homogenisation and setting in stone of policy agendas suggests the exclusion of alternative voices, particularly those most relevant to specific implementations of policy strategy.
We found that higher levels of government sources tended to correspond to lower inclusion of external sources and less diversity between sources (the standard deviation of relative citation counts was generally significantly higher for such documents, supported by qualitative observations of the spread across other categories). This produces greater unevenness between voices; by contrast, documents drawing more on any non-government source tended to show a more diverse and even array of citations across different sectors. Therefore, documents reliant on government sources tend to escalate the insularity and circularity either internally or with specific ‘insider’ groups. The dominance of internal citation suggests that established government narratives become entrenched across departments and homogenise policy agendas and priorities, which could be concerning when specific areas like health or education require more directed engagement with sector-specific knowledge. However, it was reassuring in this sense that health, and social or ethical issues, were among the areas that most highlighted work by civil society and academia. Seven such reports relied more on academia than all other sources combined. Meanwhile, the reports by the Children’s Commissioner were among the most reliant on civil society, and interestingly these did not mention stakeholders, despite the Children’s Commissioner having a trend towards higher levels of engagement with affected groups (specifically, children and parents).
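The unevenness indicator described above can be sketched as follows (an illustrative reconstruction under our own assumptions about bucketing and naming, not the study’s actual code): each document’s citations are grouped by source sector, converted to relative shares, and the standard deviation of those shares is taken as a measure of concentration, with higher values indicating citations clustered in fewer sectors.

```python
from statistics import pstdev

SECTORS = ["government", "industry", "academia", "media", "civil_society"]

def citation_spread(counts: dict[str, int]) -> float:
    """Std. deviation of a document's relative citation shares across sectors.

    Higher values mean citations are concentrated in fewer sectors.
    """
    total = sum(counts.get(s, 0) for s in SECTORS)
    shares = [counts.get(s, 0) / total for s in SECTORS]
    return pstdev(shares)

# Hypothetical documents: one government-dominated, one more evenly sourced.
insular = {"government": 18, "industry": 1, "academia": 1}
diverse = {"government": 5, "industry": 4, "academia": 4,
           "media": 3, "civil_society": 4}
print(citation_spread(insular) > citation_spread(diverse))  # True
```

The insular document’s shares (0.9, 0.05, 0.05, 0, 0) deviate far more from an even split than the diverse document’s, so its spread value is substantially higher.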
Four reports were more reliant on industry than all other sources combined. The reports whose share of industry sources was more than one standard deviation above average were, perhaps unsurprisingly, within the areas of business, finance and cyber security. More surprising, perhaps, was that reports by devolved governments were also in this group, which shows the focus of regional interest in technology and/or the areas where devolved governments are able to claim more agency over their engagement with technologies. Media reporting was unsurprisingly highest concerning online habits and platforms. If we take into account the private ownership of media outlets and therefore combine them with industry, however, then we see this group dominating the influence over: the regulation of online platforms (in which they have significant self-interest); business strategy; and, perhaps most alarmingly, the UK Digital Strategy 2017.
The report with the most evenly distributed sources across the categories was the ICO’s ‘Big data, AI, machine learning and data protection’. This is interesting as the ICO generally relies on a large number of self-citations, pointing readers to its wealth of guides and reports on the minutiae of data protection obligations. Against this generally narrow focus are reports such as this one which show a broader engagement with, for example, academia and civil society in establishing the groundwork for future policy work. This suggests that there are key moments, key documents where broader influence is possible and encouraged.
The UK Digital Strategy 2017 and 2022 show an interesting divergence. Where the 2017 strategy was varied in its sources, by 2022 this had narrowed significantly to other government documents. This could partially be explained by the increase in government work on the area during that time, so there was simply more to draw on, or that the 2022 strategy was more of an update than new policy. But the period also saw further work by academia, civil society and others, so it still represents a shift in focus. Similar shifts towards centralised government sources over time can be seen in the difference between the National AI Strategy and the strategy’s Action Plan the following year; while both were dominated by government sources, this was even more pronounced (58% to 79%) in the action plan. The same shift was also seen in the difference between interim and final reports, such as the CMA’s Online Platforms and Digital Advertising Market study. The interim report was more diverse, with minimal deviation seen only in industry (above average) and civil society (under), whereas the final report showed the trend towards government sources, in this case alongside industry.
This, in the context of the overall dominance of government sources, suggests that there are key moments – often more underlying research reports – where external parties can gain influence or make alternative voices heard. Then, once these are set in stone, the engagement narrows towards a more focused government narrative. This raises concerns surrounding the method of engagement with stakeholders, particularly when consultations list all submissions but do not directly match these in a transparent way to actual government policy.
DISCUSSION
This is a useful point to revisit the meaning of the term stakeholder within UK tech policy. Our findings show that a consistent definition remains impossible to pin down. However, our analysis within a performative framework is instructive in saying, if not what a stakeholder is then what the term stakeholder does to give a stake to certain types of actors and not others. We therefore offer the following division of the ways stakeholder is used:
Performative uses of the term stakeholder are those that give a stake, give voice, to particular actors or groups. This is seen in specifying specific parties, such as “industry stakeholders” or “government stakeholders”, which makes up almost a quarter of occurrences. This use constitutes those groups as having a stake, often aligning with financial incentives for businesses being regulated or public sector bodies whose expenditure may be affected. When this was observed in strategy-setting reports, this suggests a centralisation of agendas and deprioritisation of the groups most marginalised by the use of the technology. This is seen in our findings with the highest instances of specific named stakeholders being used in business and by organisations like the Competition and Markets Authority, which raises questions about influence and suggests an unwillingness for regulators to take a strong stance in enforcing laws around data protection and algorithmic manipulation, for example. Sometimes this use may denote status without specificity, such as “key stakeholders”, implying an opaque list of stakeholders able to wield power. These uses often align with past engagement within the policy process, designating a role of insider and granting influence without transparency or accountability.
Non-performative uses are those that deny a stake while making generally vague reference, performing a legitimising function. This use often aligns with future plans or expectations, a generalised aim rather than a specific commitment to concrete activities, and with descriptions such as “wider stakeholders” or “diverse stakeholders”. While this language could be used to leave open the question of possible involvement, the tendency for past engagement to narrow down onto expected groups already featuring heavily in citational practices suggests it is not performing that function. We found that while mention of an unspecified “range” of (“wider”, “diverse”) stakeholders accounted for less than an eighth of cases, it aligned heavily with the future tense. The non-performative stakeholder is therefore a nonspecific statement of noncommittal intent. A stronger use that performatively constituted inclusion would specify which groups will be included in future policy decisions, even if space was left open to engagement with as yet undetermined groups (indeed, this dual approach would be encouraged to outline a clear and transparent plan for inclusion while also recognising the need to keep adapting and expanding whose voices are heard). The non-performative “range of” stakeholders is instead more an embodiment of shallow inclusion initiatives that feign openness while restricting the actual influence of those often most affected. In tech policy that tends to include users, vulnerable groups such as those from queer and trans* communities, or those marginalised by race, class, education and/or gender. The lack of specificity elides accountability for the exclusion of such groups, while the use of the term stakeholder in both performative and non-performative ways creates the discursive illusion of equality.
Peri-performativity in the use of the stakeholder is the third-person allocation of the role to others. The combination of performative and non-performative roles enacted in the policy documents we studied shows a set of speech acts that enable some groups to performatively constitute their own role as stakeholder, to claim their own stake, while non-performatively denying the same stakes to others. The former are often those already holding the resources, influence and power to make their voice known through invitations to roundtables, clear pathways of citation, and named identification as a “key stakeholder” for automatic inclusion in a discussion. The latter, however, are often those who are at stake in such discussions, and who instead rely on the representation of their interests by third parties, often civil society or academic groups who are themselves less represented in citational practices than industry actors. At other times, the absence of specificity may constitute influence, such as report authors pointing to unnamed stakeholders whose contributions align with the predetermined agenda, notable in our findings in the cherry-picking that occurs in government responses to consultations or in the roll-out of policies from the Central Digital and Data Office (part of the centralised Cabinet Office) to other sectors. This use of the stakeholder therefore also highlights the self-referential nature of government policy. While central strategy followed by implementation in specific policy areas is to be expected, the common lack of engagement with sector-specific affected groups is concerning. Similarly, the decrease in external perspectives in central strategy over time suggests a closing off of who can be deemed a stakeholder, and an entrenching of the stakes along political and financial lines. The risk here is not merely that policy agendas become fixed but that who has a say becomes peri-performatively restricted and iterated.
Figure 9 shows an indicative distribution of these types of stakeholders according to the categories emerging from the data set as well as our analysis of who is implied but not mentioned in the discourse of stakeholder inclusion. It is by no means a complete mapping, but acts as a guide to the landscape of who is afforded a stake in tech policy in different settings and who is at stake in the results of the policy process. This diagram shows the divergence even within government as well as the varied scope of other sector actors. Those with more power to be able to perform the role of stakeholder are able to influence policy in a more direct way. Those able to performatively hold a stake are often those most likely to benefit from a policy, while those non-performatively denied it are often those at stake in the potential harms. This asymmetric distribution of stakes raises questions about the value, accountability and legitimacy of existing policy processes. This is true in policy more widely, but the particular contestations of terms, values and aims within tech policy make this an all the more pertinent area of concern.

Our findings show the dominance of a rhetoric of consultation within government policy which instils a peri-performative power upon government departments and bodies in defining who ‘counts’ as a valid stakeholder, whose voice should be heard. Rather than the content of these consultations, which is an alternative direction for future research, the focus of our analysis on the inclusion of consultation outcomes within government policy documents highlights the peri-performative work going on. This process of policy defining who has a stake furthers the performative construction of the stakeholder as a form of legitimacy after the fact. Consultation sits in the very middle of Sherry Arnstein’s ladder of citizen participation (1969), in the middle of ‘tokenism’. This is perhaps why consultation has become so aligned with stakeholder engagement as both performatively assigning power and non-performatively excluding those already marginalised. Only by making a peri-performative shift, redefining who a stakeholder is and what influence they can have, can the process be used to embed social justice within tech policy. While policy processes have been slow to evolve, Pohle (2016) sees tech governance processes (specifically the multistakeholder nature of Internet governance) as a site of production that has required wider engagement ‘in the making’. Participation and representation should take place in a more meaningful way, earlier in the process, and as an ongoing part of its design and development.
Performativity is not about a single act, but the iterative repetition of roles and norms. A single policy proposal might start to shift the balance, but greater representation of a truly wide range of stakeholders (especially those currently marginalised from policymaking) must be an ongoing transformation. Any analysis of specific stakeholders in a given context has a time-limited usefulness (Brugha and Varvasovszky, 2000). Those affected by tech policy may change as new technologies, application areas, sociocultural trends or political shifts arise. The risk of solidifying the discourse of core policies – as we identified in UK tech policy – means that those roles and expectations are performed again and again across different departments and application areas. This has a genericising and normalising effect that reduces the space available for new stakeholders to engage, which is particularly concerning when overarching policy is applied to specific contexts such as health or education. A key alternative seen in the documents we analysed is avoiding the term in favour of naming specific actors and groups. This increases accountability and transparency for the policy-making process and encourages a firmer commitment to engaging with different groups in the future. Including the voices of groups not usually part of the policy process also necessitates looking for more creative and inclusive approaches to decision-making. Situated within our queer feminist critique, this suggests that the notion of having a “stake” could be performed in more radical ways than tokenistic forms of stakeholderism, ways that provide specific roles for greater involvement of affected groups in policy and generate new and unexpected avenues for future tech policy, leading to processes that are more contextual, engaged and socially just.
Potential limitations of the study are its focus on UK tech policy over a specific timescale. Working on policy always entails a closing off for analysis, and regularly updating the research would be favourable. This also opens up future directions for the research, which include focusing on specific components of the policy-making process. For example, the same analysis could be applied to Parliamentary Committee meetings to assess the processes of scrutiny and who is called up to give evidence and in what ways. Similarly, specific key consultations could be analysed to relate the evidence submitted to the government response documents and final policy in order to assess exactly how far the term stakeholder is used to justify existing policy decisions or amplify already dominant voices. The methods could also be applied to different jurisdictional and policy contexts, or to documents from other sectors such as civil society reports or academic publications in order to gain different, outsider, perspectives. Other indicators could also be examined, such as the relation of the word stakeholder to its wider context, perhaps with corpus methods. We are also working on expanding our own engagement with other groups such as civil society, policy-makers and different publics in order to reach greater mutual understanding of the narratives, power structures and barriers towards more effective tech policy that prioritises the most vulnerable.
CONCLUSION
This study has shown the different ways the term stakeholder entrenches existing priorities and power inequalities in UK tech policy. By analysing 194 policy documents, we compared uses of the term to citational practices, to provide an image of which sectors and interests are influencing policy. Our findings show that, as it stands, the ‘stakeholder’ in UK tech policy performatively constructs power for those who already have it or whose interests align with prevailing government agendas. Meanwhile, the term also non-performatively excludes those who are currently marginalised or most negatively affected by proposals, and peri-performatively restricts who has what kind of voice within the policy-making process. In an area that cuts across different sectors and remains highly contested in its definitions, aims and outcomes, the ability to assign or claim the role of stakeholder is itself a form of discursive power over who may be involved in defining tech policy.
We therefore suggest the following recommendations for policy-makers:
- Stop using the term stakeholder: specificity improves transparency, accountability and representation. Our findings show that documents with a deeper engagement with particular affected groups (such as reports by the Children’s Commissioner) name their participants directly instead of using the catch-all term stakeholder, which promotes transparency over whose voices are being included. They are also more likely to cite a wider array of external sources, which shows a broader alignment between avoiding the problematic term stakeholder and seeking a more diverse set of perspectives. Conversely, the term stakeholder is more often than not used to obfuscate the concentration of existing power or offer merely tokenistic inclusion of more diverse or marginalised perspectives;
- Representation not consultation: stakeholder approaches, as currently employed, do not result in adequate representation, as evidenced by the dominance of powerful industry voices in defining tech policy. Examples of more positive representation align with lower use of the term stakeholder, as participant groups are specifically named and their particular needs, perspectives and contributions acknowledged (such as the connected but different roles of parents, children and teachers). Alternative, more representative (and earlier in the process) modes of engagement, such as direct public engagement (such as citizen assemblies, rapid consultation and co-creation), mediation (such as platform tools for collective deliberation or convening different groups through trusted third parties in more accessible settings) or creative approaches (such as world-building, artistic responses, speculative design and prototyping) are needed. Working more closely with civil society (who are underrepresented in the citations) and academia can help enable this;
- Make policy more flexible: fixing one set of influences is harmful to representative processes and limits adaptive and contextual policy-making. The insularity and circularity we found in citational practices shows a significant risk of entrenching specific values that may not be applicable under political, technological and social change. Seeing policy as an ongoing, reflexive and auditable process would improve representation and accountability;
- Increase transparency around key moments for intervention: our findings showed that there are a select few policy documents that set the tone and engage with wider groups, which are then overrepresented in citations in following policies. Government should be more clear about these and organisations should make additional efforts to seek out opportunities to engage at this stage.
Tech policy is still mired in contested terms and inequalities of access and understanding in decision-making processes. It is therefore an important point at which to intervene and ask who is being given a stake in these discussions. The performative analysis presented in this article offers a useful framework for understanding who is able to lay claim to such a stake, and who has a role of empty legitimisation thrust upon them. We argue that, rather than amplifying the influence of those already able to hold a stake, the focus should shift towards elevating the needs of those whose voice, rights and lives are at stake.
References
Ahmed, S. (2006). The nonperformativity of antiracism. Meridians, 7(1), 104–126. https://doi.org/10.2979/MER.2006.7.1.104
Arnstein, S. R. (1969). A ladder of citizen participation. Journal of the American Institute of Planners, 35(4), 216–224. https://doi.org/10.1080/01944366908977225
Barrett, B., & Kreiss, D. (2019). Platform transience: Changes in Facebook’s policies, procedures, and affordances in global electoral politics. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1446
Bauer, J. M., & Van Eeten, M. J. G. (2009). Cybersecurity: Stakeholder incentives, externalities, and policy options. Telecommunications Policy, 33(10–11), 706–719. https://doi.org/10.1016/j.telpol.2009.09.001
Belli, L. (2015). A heterostakeholder cooperation for sustainable internet policymaking. Internet Policy Review, 4(2). https://doi.org/10.14763/2015.2.364
Benjamin, G. (2020). From protecting to performing privacy. Journal of Sociotechnical Critique, 1(1), 1–30. https://doi.org/10.25779/ERX9-HF24
Benjamin, G. (2021). What we do with data: A performative critique of data ‘collection’. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1588
Benjamin, G. (2023). Mistrust issues: How technology discourses quantify, extract and legitimise inequalities. Bristol UP.
Brugha, R., & Varvasovszky, Z. (2000). Stakeholder analysis: A review. Health Policy and Planning, 15(3), 239–246. https://doi.org/10.1093/heapol/15.3.239
Burkell, J., & Regan, P. M. (2019). Voter preferences, voter manipulation, voter analytics: Policy options for less surveillance and more autonomy. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1438
Butler, J. (1990). Gender trouble. Routledge.
Butler, J. (2018). Notes towards a performative theory of assembly. Harvard UP.
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial intelligence and the ‘good society’: The US, EU, and UK approach. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-9901-7
Corrêa, N. K., Galvão, C., Santos, J. W., Del Pino, C., Pinto, E. P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., & Oliveira, N. (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10). https://doi.org/10.1016/j.patter.2023.100857
DSIT & DCMS. (2019). National data strategy [Policy Paper]. UK Government. https://www.gov.uk/guidance/national-data-strategy
DSIT & Office for AI. (2023). AI regulation: A pro-innovation approach [Policy Paper]. UK Government. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach
DSIT, Office for AI, DCMS, & BEIS. (2021). National AI Strategy [Policy Paper]. UK Government. https://www.gov.uk/government/publications/national-ai-strategy
E Silva, K. (2013). Europe’s fragmented approach towards cyber security. Internet Policy Review, 2(4). https://doi.org/10.14763/2013.4.202
Elgot, J., & Courea, E. (2024). Online Safety Act not fit for purpose after far-right riots, says Sadiq Khan. The Guardian. https://www.theguardian.com/media/article/2024/aug/08/online-safety-act-not-fit-for-purpose-far-right-riots-sadiq-khan
Epstein, D., Katzenbach, C., & Musiani, F. (2016). Doing internet governance: Practices, controversies, infrastructures, and institutions. Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.435
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People – An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Freeman, R. E. (1984). Strategic management: A stakeholder approach. Pitman.
Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2), 1–22. https://doi.org/10.14763/2019.2.1407
Hickman, E., & Petrin, M. (2021). Trustworthy AI and corporate governance: The EU’s ethics guidelines for trustworthy artificial intelligence from a company law perspective. European Business Organization Law Review, 22(4), 593–625. https://doi.org/10.1007/s40804-021-00224-0
Hofmann, J. (2016). Multi-stakeholderism in internet governance: Putting a fiction into practice. Journal of Cyber Policy, 1(1), 29–49. https://doi.org/10.1080/23738871.2016.1158303
Janssen, H., Cobbe, J., & Singh, J. (2020). Personal information management systems: A user-centric privacy utopia? Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1536
Kokas, A. (2023). Data trafficking and the international risks of surveillance capitalism: The case of Grindr and China. Television & New Media, 24(6), 673–690. https://doi.org/10.1177/15274764221137250
Liaropoulos, A. (2016). Exploring the complexity of cyberspace governance: State sovereignty, multistakeholderism, and power politics. Journal of Information Warfare, 15(4), 14–26.
Malcolm, J. (2015). Criteria of meaningful stakeholder inclusion in internet governance. Internet Policy Review, 4(4). https://doi.org/10.14763/2015.4.391
Mannan, M., & Schneider, N. (2020). Exit to community: Strategies for multi-stakeholder ownership in the platform economy. Georgetown Law Technology Review, 4(1).
Marda, V., & Milan, S. (2018). Wisdom of the crowd: Multistakeholder perspectives on the fake news debate [White paper]. Internet Policy Review Observatory, Annenberg School of Communication. https://papers.ssrn.com/sol3/Delivery.cfm?abstractid=3184458
Niklas, J., & Dencik, L. (2021). What rights matter? Examining the place of social rights in the EU’s artificial intelligence policy debate. Internet Policy Review, 10(3). https://doi.org/10.14763/2021.3.1579
Parker, I. (2025). What are we really talking about when we talk about AI? Global Government Forum. https://www.globalgovernmentforum.com/what-are-we-really-talking-about-when-we-talk-about-ai/
Phillips, R. (2003). Stakeholder theory and organizational ethics. Berrett-Koehler.
Pohle, J. (2016). Multistakeholder governance processes as production sites: Enhanced cooperation ‘in the making’. Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.432
Saheb, T., & Saheb, T. (2024). Mapping ethical artificial intelligence policy landscape: A mixed method analysis. Science and Engineering Ethics, 30(2), 9. https://doi.org/10.1007/s11948-024-00472-6
Schiff, D., Borenstein, J., Biddle, J., & Laas, K. (2021). AI ethics in the public, private, and NGO sectors: A review of a global document collection. IEEE Transactions on Technology and Society, 2(1), 31–42. https://doi.org/10.1109/TTS.2021.3052127
Scott, I. A., Carter, S. M., & Coiera, E. (2021). Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health & Care Informatics, 28(1). https://doi.org/10.1136/bmjhci-2021-100450
Sedgwick, E. K. (2003). Touching feeling: Affect, pedagogy, performativity. Duke UP.
Sharfstein, J. M. (2016). Banishing “stakeholders”. The Milbank Quarterly, 94(3), 476.
Smuha, N. A. (2021). Beyond the individual: Governing AI’s societal harm. Internet Policy Review, 10(3). https://doi.org/10.14763/2021.3.1574
Van Dijck, J., Nieborg, D., & Poell, T. (2019). Reframing platform power. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1414
Wilson, C. (2022). Public engagement and AI: A values analysis of national strategies. Government Information Quarterly, 39(1), 101652. https://doi.org/10.1016/j.giq.2021.101652
Wolff, J. (2016). What we talk about when we talk about cybersecurity: Security in internet governance debates. Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.430