How platform power undermines diversity-oriented innovation

Paula Helm, University of Amsterdam, Netherlands

PUBLISHED ON: 26 Jun 2024 DOI: 10.14763/2024.2.1780

Abstract

The paper contributes to the further development of platform studies by looking at the early stages of platform development. It draws on insights from an exemplary research and innovation project that aimed to develop a "diversity-aware online social platform". The article takes a co-productive stance and is thus characterised by both critical analysis and active engagement. From this insider perspective, it discusses how emerging tensions were navigated in the attempt to develop a new diversity-oriented platform, and how this affected the enactment of the idealistic motive to promote diversity that formed the initial objective of the project. The empirical observations are then brought into conversation with insights from the fields of critical innovation and platform studies. These perspectives are mobilised to fathom how Silicon Valley agendas influence innovation imperatives that focus on rapid scalability through automation, privileging a narrow problem-solving mentality. It is argued that these imperatives limit the adoption of alternative platform models from the early stages of planning, design, and development in ways that trap innovators in the same logics that created the problems they are trying to respond to in the first place.
Citation & publishing information
Published: June 26, 2024
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The author has declared that no competing interests exist that have influenced the text.
Keywords: Innovation, Platform power, Platform design, Big data, Algorithms
Citation: Helm, P. (2024). How platform power undermines diversity-oriented innovation. Internet Policy Review, 13(2). https://doi.org/10.14763/2024.2.1780

This paper is part of Locating and theorising platform power, a special issue of Internet Policy Review guest-edited by David Nieborg, Thomas Poell, Robyn Caplan and José van Dijck.

Introduction

Powerful digital platforms have faced a multitude of criticisms over the past 15 years. These include accusations that they spread misinformation (Helberger, 2020), polarise populations (Vaidhyanathan, 2022), manipulate user behaviour (Susser et al., 2019), invade people's privacy without offering them meaningful choice (Helm & Seubert, 2020), centralise power (van Dijck et al., 2019), create a new class of precariously employed gig workers (Van Doorn, 2017), and, last but not least, develop infrastructures that exploit resources (Crawford, 2021). These are serious accusations with different facets, but they all have one thing in common: they point to consequences of platformisation as it has taken shape in previous years. In contrast, in their early days, most platforms presented themselves as initiatives aimed at improving the world: Google with its proclaimed vision of making knowledge accessible to everyone; Facebook with its emancipatory promise to connect users across geographic distances and enable them to create and share content; or Airbnb and Uber, whose marketing strategies were originally imbued with idealistic, even communitarian notions and claimed to unleash the sharing economy by helping people leverage the untapped resources of others. Regardless of whether these initial promises were ever to be taken at face value, how did they become dubious, even turn into their opposite? And how do new actors try to avoid this dialectic and do things differently? What challenges do they face? How are these challenges situated within broader dynamics of innovation and tech development?

Taking up these questions, in this article I will address the challenges I faced while navigating an ethics work package in a four-year Horizon Europe project called "WeNet - the Internet of Us".1 WeNet denotes the name of a new online social platform whose development was at the heart of the project. Its overarching aim was to provide a diversity-oriented alternative to the similarity-based matching solutions offered by powerful corporate players. The latter are accused not only of causing filter bubbles, but also of contributing to socio-political polarisation, thereby posing a serious threat to the very foundations of our democracies (Bruns, 2019). Funded by a Research & Innovation scheme, the project brought together various actors from academia and industry. Academic actors included Trento University in Italy, the Open University of Cyprus, Ben Gurion University of the Negev in Israel, Aalborg University in Denmark, Tübingen University in Germany, the London School of Economics in the UK, the IDIAP Research Institute in Switzerland, the Artificial Intelligence Research Institute in Spain, the National University of Mongolia, the Universidad Católica in Paraguay, Amrita University in India, Jilin University in China, and the Instituto Potosino in Mexico. Industry actors included Martel Innovate, a Swiss consultancy, and U Hopper, an Italy-based AI developer.

To realise its bold ambition of creating a socio-technical solution to problems amplified by similarity-based matching, the project set out to develop a diversity-centred paradigm for machine-mediated social relations. The key technical innovation was supposed to be a family of computational models for diversity-aware matching. The idea was that learning models would construct diversity profiles based on people's behaviours and interactions. Data on these were collected via user surveys, I-log instruments, and a chat-bot application called Ask4Help (Giunchiglia et al., 2022). The chat-bot ran on top of the online social platform; both were developed specifically for this project.

While the platform constituted the site for the development of diversity-aware applications to enact diversity, the chat-bot allowed users to send queries that would then be matched with the most suitable candidate from the platform community. The chat-bot application hence fulfilled a dual purpose as both a living lab and a data collection instrument. Diversity-aware matching builds on comprehensive user profiling, which is why, in effect, most of the work and attention in the project revolved around collecting data. The resulting profiles were to be used to train a set of algorithms that could automatically connect people in a personalised manner. To ensure high ethical standards, incentive mechanisms such as gamification elements, as well as design features like nudges and alerts, were implemented on the platform and the chat-bot application. As such, ethical considerations were supposed to go beyond compliance with, for example, data protection requirements and extend towards the careful navigation of diversity and the protection of vulnerable groups of people. To fulfil the public benefit orientation required for EU-funded projects, the services provided were aimed at supporting community help among university students.
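To give a concrete, if simplified, sense of what distinguishes diversity-aware from similarity-based matching, the following sketch contrasts the two scoring logics. It is a minimal illustration under invented assumptions (set-based profiles, a Jaccard overlap measure, a single required competence per query), not a description of the project's actual models.

```python
def overlap(a: set, b: set) -> float:
    """Jaccard similarity of two attribute sets (hypothetical profile encoding)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity_match(query: set, candidates: list) -> set:
    """Conventional affinity matching: the most similar profile wins."""
    return max(candidates, key=lambda c: overlap(query, c))

def diversity_aware_match(query: set, candidates: list, required: str) -> set:
    """Diversity-aware matching: filter for the complementary competence the
    query asks for, then prefer the candidate least similar to the requester,
    so that matches cut across rather than reproduce existing affinities."""
    able = [c for c in candidates if required in c]
    return max(able, key=lambda c: 1.0 - overlap(query, c))

# Toy usage with made-up student profiles:
alice = {"computer_science", "urban", "vegan"}
others = [{"computer_science", "urban", "statistics"},
          {"nursing", "rural", "statistics", "guarani"}]
print(diversity_aware_match(alice, others, required="statistics"))
# -> the rural nursing student, not the near-identical peer
```

Even in this toy form, the sketch shows why such matching depends on comprehensive profiling: the quality of any match, similar or diverse, is bounded by what the profiles capture.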

Having been responsible for the ethics “work package” in this context, in the following I will reflect on the outlined project by focusing on our attempts to enact values related to diversity through alternative platform design. While I acknowledge the specificities of dealing with diversity, I am at the same time convinced that the story I am telling would be a similar one if we had focused our work on, say, solidarity, sustainability, inclusion, or any other value of comparable importance. Furthermore, with its focus on the inception and development phase of platforms, the analysis of the outlined case complements a rich body of existing research studying digital platforms that are already on the market (e.g. van Dijck et al., 2018). Rather than examining platforms that are in full swing, I focus on the early, visionary stages of platform development: fine-tuning the initial idea, acquiring financial support, building a team and infrastructure, identifying and operationalising the various steps that need to be taken, and preparing “diffusion” among “early adopters” (Rogers, 2003).

Engaging my empirical analysis in conversation with scholarship from critical innovation and platform studies, I focus on creating a better understanding of the broader dynamics that complicate processes of alternative platform development and on illuminating how these challenges manifest in concrete practice. I will argue that even though projects like the one discussed here follow an explicit public benefit orientation, being financed through tax revenues, they are nevertheless faced with requirements for “market-readiness”. These haunt them already at early stages, pressuring actors to take decisions that may run counter to their normative causes. The angle of analysis I am contributing is important for understanding platform power, as it sheds light on how established Silicon Valley platforms shape innovation cultures not just in the US but also in Europe, thereby taking hold not only of the present political economy, but also of the imaginaries that determine the paths of future interventions.

To substantiate this argument, the paper starts by introducing the method of socio-technical integration research (STIR), in whose tradition this research falls (Fisher, 2019). Following this, the contested concept of diversity is introduced in order to clarify the normative horizon that drove my work on the case under discussion. The paper then turns to the intricacies and practicalities of building an alternative platform as a means of promoting diversity in the sense described before. It also analyses how these practicalities are entangled with the conditions to which our endeavour had to conform and within which it had to prove itself. The empirical insights are then situated within broader discussions of recent innovation imperatives and their relation to Silicon Valley agendas. The paper concludes by pointing to the more profound reasons why the idea of enacting a normative objective through alternative platform design ultimately proved only rudimentarily achievable. Besides obvious factors such as a lack of resources or the imperative of market valorisation, I argue that these reasons are to be located in a certain mindset that breaks down complex matters of concern into narrow problem-solution equations. This strategy may make sense from a standpoint that seeks to translate every demand for new solutions into a technical product that can then be sold. But its limitations show clearly when one considers innovation more broadly as sociotechnical transformation.

STIR: Socio-technical integration research

From what position do I tell the story of the project that set out to build a diversity-centred online platform? This is an important question, as this story would surely be a different one had it been told by, say, “the big data guy” or “the design team”. The short answer to the question of positionality is that I am telling this story from the perspective of “the ethicist”. But in reality, it is not quite that easy, because I do not identify merely as an ethicist, but as an ethnographer, STS researcher, and designer as well. As such, my method can best be described through the paradigm of socio-technical integration research (STIR).

“Sociotechnical integration” refers to an “activity whereby technical experts take into account the societal dimensions of their work as an integral part” (Fisher, 2019, p. 1139). STIR is a popular method applied in this context, as it serves to create “generative critique”, bringing together analytical rigour based on empirical analysis with an interventionist impetus to improve processes and products in the context of research & innovation (Smolka, 2020). By combining established methods of data collection (in this case participant observation, focus groups, and in-depth analysis of survey results and design options) with normative assessments of ethical concerns, it facilitates explorations of alternative ways of doing research in collaborative processes. Socio-technical integration research hence expands armchair ethics towards an empirically grounded approach (Pols, 2015).

STIR brings social scientists and/or ethicists into a technoscience space to engage in a dialogue that supports reflection by illuminating the integration of socio-ethical concerns in real time. Questions about this integration are important both for understanding the capacity of experts to shape technoscience and for informing institutional policy and design (Fisher et al., 2015). In practice, the STIR method distinguishes between three phases, collectively referred to as “mid-stream modulations” (Fisher & Schuurbiers, 2013, p. 99).

Phase 1 of mid-stream modulation consists of “de facto modulation”, in which socio-ethical dimensions play an implicit role in technoscientific practices, usually in the form of conceptual work to specify the norms that should inform the design and, at best, also the methodology. This phase is highlighted in the second section of this paper, where I discuss my conceptual work on the key concept of “diversity”. Phase 2 of STIR is referred to as “reflexive modulation”. It describes the heightened awareness of de facto modulation that took place in the present project in the period between collecting and analysing the survey results and the launch of the first round of I-log and chat-bot pilots. The reflection during modulation phase 2 took the form of a series of focus group workshops with the design and survey teams of the project, as well as with selected groups of survey respondents (university students) who had agreed to participate in the I-log and chat-bot experiments.

After taking stock of the insights gathered after the first and before the second round of pilots, and discussing the consequences of these observations for the set-up of follow-up pilots, the third phase of the STIR process, called “deliberate modulation”, took place (Fisher & Schuurbiers, 2013, pp. 100-101). This phase is characterised by concrete socio-ethical intervention, informed by the conceptual work of phase 1 and the reflective/analytical work of phase 2. In the current project, interventionist activities mostly revolved around design choices regarding the pilots, the affordances provided by the app, as well as the curation of content and ethical conditions for safe matching.

Diversity: Difference and inclusion

For the present project, diversity is the key concept and thus requires careful consideration. Yet diversity is fraught with complications, because it is one of those concepts that has been so brutally corrupted that it requires enormous effort to restore the complexity that attending to it demands (Ahmed, 2012). Beyond its simplifying appropriations, on a more analytical level, diversity is complicated because of its material-discursive status as a moral-epistemic hybrid (Potthast, 2014). Diversity is used both as a descriptive and a normative category. When considering diversity in the context of digital platform development and deployment, it is crucial to unveil this dual meaning to better distinguish between (a) the actual notions of difference that underlie the technical implementation of diversity as a design strategy, and (b) the values we associate with it as the normative orientation of design.

Diversity as an instrumental value

When reviewing the ethics policies of large tech companies, diversity is regularly listed as a core corporate value. However, a closer look reveals that the conceptualisation in such guidelines often lacks complexity, as it is reduced to simplistic but easily measurable categories like gender, race, or age (Chi et al., 2021). Ruha Benjamin aptly describes this as “cosmetic diversity” (2019). Cosmetic diversity is not just superficial. It is problematic. First, because it clouds our eyes to the ambiguity of diversity as an instrumental, and thus conditional, value. Second, because relying on a performative commitment to diversity without appropriate action might come at the expense of more critical concepts such as equality or justice (Ahmed, 2012). Third, because such portrayals enact diversity as a kind of resource that can be "exploited". Iris Young warned early on against such capitalist appropriations of the concept, where diversity is instrumentalised as something that "enriches me", or as a means of optimally valorising people or enhancing the performance of institutions and organisations. Instead, diversity is about how we can live together in pluralistic societies in an inclusive, participatory, and nondiscriminatory way (Young, 1990). To clarify this difference, anthropologist Anna Tsing speaks of "meaningful diversity, that is, diversity that changes things," as opposed to scalable diversity, which accepts only what can be incorporated into pre-existing standards without adaptation (Tsing, 2012).

Problematised and conceptualised in the ways proposed by scholars like Ahmed, Young, or Tsing, diversity is an instrumental value, important for the realisation of other values (Vertovec, 2012). The UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions supports this idea of linking the preservation of diversity with values such as tolerance, inclusion, and dignity (UNESCO, 2005). It is also important to recognise the limits of the value of diversity: sometimes it can be important to create safe spaces for people who share the same vulnerabilities (Helm, 2018).

Diversity as a descriptive category

Conceptually, diversity is used to describe the differences between people, plants, things, etc. As a descriptive category, diversity helps software engineers define differences between users and eventually classify elements of their character (e.g. skills and practices) that can complement the attributes of others to create sophisticated profiles as a basis for “diversity-aware matching” (Schelenz et al., 2021). While seeking to build a diversity-centred social platform, the project employed a conceptual understanding of diversity that focuses on the interaction of different users at the community level. Furthermore, it is important to recognise that diversity is not a static category, but subject to constant change.

Conceived thus, diversity can also be used as a design strategy, which aligns well with understandings of diversity as an instrumental value, but should be based on a reflexive integration of perspectives that are at once normative and descriptive. In recommender systems, for example, diversity is used as a strategy to distribute the recommended items to increase user satisfaction (Miyamoto et al., 2018) or, more normatively, to promote democratic principles. Helberger discusses diversity in the context of media and messaging recommendations, arguing that diversity is a goal, but also a strategy, to enable a variety of options and thus increase user autonomy (Helberger, 2019). In this context, the term "diversity by design" is used to describe "the idea that it is possible to create an architecture or service that helps people make diverse choices" (Helberger, 2011, p. 442). Given the portrayal of diversity as an instrumental but not unconditional value, in the context of the discussed project I have argued for adopting Helberger's approach, but expanding it to focus not only on the inclusion of diverse choices but also on protections for safe, autonomous, and meaningful interactions in online social networks (Helm et al., 2022).
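To illustrate how diversity can operate as a design strategy inside a recommender pipeline, consider the minimal greedy re-ranking sketch below. It implements a generic relevance-versus-redundancy heuristic (in the spirit of maximal marginal relevance); the trade-off parameter and the stub scoring functions are illustrative assumptions, not the approach of any system cited here.

```python
def rerank_for_diversity(items, relevance, pairwise_sim, k=5, lam=0.7):
    """Greedily build a recommendation list: each step picks the item with
    the best blend of relevance and dissimilarity to what is already
    selected. `lam` sets the relevance/diversity trade-off
    (1.0 = relevance only, 0.0 = diversity only)."""
    selected, pool = [], list(items)
    while pool and len(selected) < k:
        def score(item):
            redundancy = max((pairwise_sim(item, s) for s in selected), default=0.0)
            return lam * relevance(item) - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy usage: items are (id, topic) pairs; relevance and similarity are stubs.
items = [(1, "sports"), (2, "sports"), (3, "politics"), (4, "culture")]
top = rerank_for_diversity(
    items,
    relevance=lambda it: 1.0 / it[0],               # lower ids more relevant
    pairwise_sim=lambda a, b: float(a[1] == b[1]),  # same topic = similar
    k=3,
)
print(top)  # mixes topics instead of returning both "sports" items first
```

The point of such a strategy, in Helberger's sense, is architectural: the list a user sees is deliberately composed of unlike options rather than the k most similar ones.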

When considering the complexity of diversity as a design strategy, it becomes clear that if a sociotechnical system is earnest about promoting diversity, it must be built to be as participatory and adaptable as it is decentralised and transparent. Given these requirements arising from my conceptualisation of "meaningful diversity" as a simultaneously normative and descriptive category, the question arises to what extent it is possible to realise these requirements by building an alternative platform ecosystem as a socio-technical innovation. In the next section, I will first outline “the project” in its initial set up, followed by an examination of the intricacies of its operationalisation, focusing on the various tensions we had to navigate.

The Project: Navigating tensions in alternative platform design

Set-up: Between cake and cherry

At the operational heart of the project that inspired this paper was the infrastructural, ethical, technical, and social ambition to not just build an add-on to an existing platform ecosystem, but to develop a holistic alternative. This bold idea was to be realised under the Horizon Europe funding programme, launched under the “Grand Societal Challenges” scheme. This way of formulating research agendas implies, by the very vastness of its ambition, a preference for automated top-down solutions, whether intentionally or not. Paradoxically, many public benefit-oriented research and innovation projects, such as the one discussed here, fall into the category of "social innovations", which traditionally focus on bottom-up dynamics and local alternatives rather than top-down, technocratic interventions (Musa & Rodin, 2016). In the present context, this ambiguity becomes especially salient given the orientation towards diversity.

To realise the ambitious idea of building an alternative, diversity-centred platform ecosystem, the project followed a divide-and-conquer methodology (Bentley, 1980). This methodology is typically used in computer science research to break down complex, intertwined tasks into manageable work packages (WPs), with the ethics research traditionally being WP 9: the last in a row of WPs focused not only on innovation but also on original research. Despite the project's holistic ambition of combining engineering, psychology, participatory design, and ethics perspectives to serve a public benefit, developing the algorithms took centre stage in project activities, as this was how the project was to prove its capacity to operate at the level of grand societal challenges. Developing a new family of algorithms was essential for the innovative claim of the project, as only this would enable the coordination of a platform community at scale, by intelligently matching users, recommendations, and incentives based on diversity rather than the usual affinity matching (Gillespie, 2018). For developing these algorithms, however, diversity was approached not in its normative sense as a value, but in its descriptive sense of difference, defined as the varying personal traits and social practices that relate and distinguish individuals and communities (Schelenz et al., 2021).

The fact that the algorithms were considered the project's main innovation asset, highlighted as the one new product that would come out of it, was also reflected in the distribution of funding, the majority of which revolved around generating and preparing the data and user profiles needed for training. With the data collection and modelling thus constituting the cake, the public benefit, diversity design, and ethics orientation represented the cherries. This hierarchy in prioritisation was also reflected in the decline of test user numbers through the various phases of the project (Figure 1). This decline can arguably be interpreted as a result of the inconsistency between the project's official diversity message, which was welcomed by the local teams and test students, and its effective operationalisation, which aimed to develop scalable top-down solutions. The project started with surveys as the main tool for data collection (administered to over 40,000 students), then moved to mobile collection via I-log geolocation (which included a subset of a few hundred survey participants), and finally to the chat-bot application (which was the key tool for experimenting with diversity design and ethical incentives, but by then attracted only a few dozen test users).

Figure 1: Pilot trajectory from 1) surveys, via 2) I-log, to 3) user behaviour on the chat-bot application “Ask4Help”.

Pilots: Between scale and customisation

During the practical implementation of the project, the requirement to demonstrate the potential of our research to tackle “grand societal challenges” led to a host of conflicts vis-à-vis our diversity orientation. This orientation aspired to account for the different sensitivities, vulnerabilities, and constraints not just of individuals but also with a view to cultural differences among the communities involved (both within and outside the EU). While acknowledging such differences required us to attend to details, listen to local concerns, and customise approaches where necessary, grand societal challenges, by definition, favour bird's-eye-view methodologies.

To complicate things further, innovativeness has been diagnosed as becoming increasingly synonymous with smooth market integration (Pfotenhauer et al., 2022). In this project, which targeted the domain of online platforms, that market is dominated by a few powerful global players operating across domains, also referred to as GAFAM: Google, Apple, Facebook, Amazon, and Microsoft (van Dijck et al., 2018). Given the tight grip of these infrastructural players over an entire sector, smooth market integration was seen (by our reviewers) as characterised by adaptability not to local needs, but to already established corporate ecosystems. In practice, this demands the development of abstracted but targeted solutions that can be incorporated into existing ecosystems and from there scaled up quickly (Rieder, 2020), hence running fundamentally at odds with the idea of meaningful diversity, as outlined above.

For example, while a critical mass of comparable data is essential for the technical implementation of automated matching algorithms, co-design is a key methodological component for enacting meaningful diversity, as it is crucial for translating the normative goal of accounting for difference into practice. However, enacting diversity methodologically through co-design implies radically adapting pilots to different socio-geographical contexts. This became apparent as we explored the needs of local communities through the initial surveys, as well as, in some locations, focus groups and interviews. One example where local customisation was pushed through rigorously is the Mexico pilot. In this case, the team in charge locally insisted on customisation as a prerequisite for their participation.

They advocated that diversity should also include focusing on various local challenges (Helm et al., 2023). In this case, partners chose the topic of the obesity epidemic, which affects large parts of the Mexican population (Meegahapola et al., 2021). They intended to focus the idea of community help on students supporting each other in their efforts to develop healthier eating habits. This idea was met with great support from the local population, resulting in exceptionally high participation rates.

Taking inspiration from this example, the Paraguayan partners would have also liked to focus on a specific scenario dealing with a matter urgent to their local population, like "car sharing among the local student population". But this idea only came up at a later stage and its realisation then proved too complicated. Eventually, the Paraguay site adopted the generic scenario of “Asking for Help”, which the European partners envisioned as the default version for all partners.

Data: Between big and local

Further tensions revolved around the comparability and size of data sets. The latter is important with respect to innovation imperatives, as only big data sets allow the training of robust, self-learning algorithms. In this regard, the scale of data may be beneficial to diversity when diversity is viewed from a purely individual perspective (Seaver, 2021). Conversely, for many customised pilots to address the diverse needs of local communities, we would have had to develop customised algorithmic models. However, no single site involved enough test users to produce the data required to train such models, not to mention the enormous effort and resources that this would have demanded. Customisation would also have been economically unattractive, especially for small countries like Paraguay, because the potential customer base would be too small for any private actor to invest. However, a top-down approach, in which Paraguayan test users are exposed to a European perspective on what diversity means and what a relevant use case looks like, would mean deviating from the original idea of putting diversity at the centre.

Similar issues, arising from the privileging of high-tech solutionism in the service of marketisation-driven scaling aspirations, emerged with regard to the surveys that we distributed to test users. Given the differences between contexts, it became clear that a survey addressing the diversity of social practices could not be conceived exclusively from a European perspective. For example, questions around mobility and housing were first designed from the perspective of a European student persona but caused confusion among students in other countries. Surely, transportation realities are different for a person living in Ulaanbaatar than for a person living in Copenhagen. However, many different surveys would, again, have resulted in a variety of distinct, insufficiently big data sets. As a compromise, it was agreed to work with one single survey that would incorporate questions speaking to various living realities. This resulted in an enormous questionnaire that caused many test users to drop out midway.

In summary, the emphasis on large-scale automated solutions came at the expense of methodological considerations to enact meaningful diversity through co-design. Apart from tensions with the technical requirements for automation, this would have required many more local experts to carry out the co-design process to a quality that would justify the label “diversity-aware”. Instead, design, evaluation, implementation, and local adaptations took place under rushed conditions and in severely underfunded teams. In addition to problems with the unequal distribution of funds within the European consortium, this also had to do with the agreements between the EU and non-EU countries, under which partners from non-EU countries can only receive small amounts of funding, mostly limited to the collection of data and rarely extending to the implementation of original research (Evroux, 2023).2

The willingness to commit to the inclusion of diverse international partners, but within a policy that makes it difficult to use this inclusion in its more meaningful dimensions, reflects a tendency towards cosmetic notions of diversity on the part of the EU funding instrument, which I have criticised above (Benjamin, 2019). Diversity is here reduced to the scalable dimension of the number of nationalities of test users involved, without a willingness to engage with the deeper dimensions that this kind of diversity entails (Helm et al., 2023). Such deeper engagement with diversity would have required proper customisation, but that was not reconcilable with the pursuit of scalability through automation, which was central to justifying the project's potential for innovation in terms of market attractiveness.

Content: Between moderation and curation

In terms of managing content shared through the chat-bot application, conceptualising meaningful diversity as both a descriptive category and a design strategy meant constantly juggling two demands: on the one hand, leveraging the diversity of students' practices and competencies to enhance interaction and inclusion in their community; on the other, helping students safely navigate the very diversity they would encounter. However, the tensions described around the design of the pilots already made it clear that, from the perspective of innovation reviewers and those project partners aligned with them, the idea of diversity remained attractive only insofar as it could be realised through technical solutions that enabled automation and scaling. This position came into conflict with the ethical requirements of said juggling act.

Countering a narrow, technical view of diversity-awareness as difference-management, in our capacity as ‘the ethics team’ we advocated for what we called “diversity curation” (Helm et al., 2022). By using the term “curation” instead of “matching” or “moderation” (Gillespie, 2018), we emphasised the explicitly ethical impetus that resulted from our identification of diversity, as explained above, as a moral-epistemic hybrid. Conceived of as such, diversity is even more multifaceted and pervasive than content. By curation, we thus do not simply refer to the management or exhibition of diversity. Rather, our aim was to highlight the term's literal meaning: curation derives from the Latin word for care, which, according to Tronto and Fisher, can be described as "everything we do to care for, that is, preserve, and repair 'our world' so that we can live together in it as best we can" (1990, p. 36).

To curate diversity meaningfully, not only as a descriptive but also as a normative category, I further advocated for grounding our work in the principles of “Design Justice”, which highlight that technology always reflects the positionalities and value hierarchies of its designers, developers, customers, and clients (Costanza-Chock, 2020). On such a view, co-design must take centre stage if diversity-awareness is to be taken seriously. This idea of caring for diversity through 1) distinguishing between meaningful and cosmetic diversity and 2) considering our own positionalities was immediately embraced by some of the partners, most notably the design team. Yet it remained incomprehensible to others. This caused tensions, because the conceptual propositions I made would have demanded concrete actions.

Following the classical set-up of projects of this kind, WP 9, the ethics work package, was intended to be a supporter of the technical and design solutions developed in WPs 1-8. That implies providing concepts, guidelines, and frameworks (STIR phase 1) and engaging in reflective discussions around ethics (STIR phase 2), but it usually does not go as far as making concrete interventions into design, methodologies, or engineering processes (STIR phase 3). The limitations of such a framing of the role of ethics in a project like this become clear when considering our unpreparedness to deal with unexpected, but very sensitive, queries that students posed via the chat-bot application. These were probably triggered by the social isolation that came with the Covid-19 pandemic and its corresponding lockdown measures. Such requests included: “How do you deal with exam anxiety?” “Do you also struggle with isolation during Corona?” Or, even more concerningly: “Do you sometimes think about suicide?” How should one deal with such queries after initially anticipating much less controversial requests, pertaining to mutual support in concrete activities, such as cooking together or helping each other prepare for exams?

If one is to take the idea of curating, as in caring for, diversity seriously, dealing with such queries requires measures such as establishing round-the-clock helplines answered by well-trained experts. Yet this was never foreseen in the budget.

Consequently, discussions quickly turned to the (im)possibilities of dealing with “sensitive content” technically. As NLP expertise was not sufficiently represented in the consortium, the supposedly ideal solution, that is, the automatic detection of sensitive requests, for example through sentiment analysis (Gorwa et al., 2020), was dismissed. This was not least because some of the pilots took place in communities speaking under-resourced languages, such as Jopara in Paraguay, for which NLP resources are still lacking (Agüero-Torales et al., 2021). In terms of curating shared content, automated top-down solutions further seemed inappropriate from an ethico-political perspective, given the diversity of the cultures represented and the different vulnerabilities associated with them. For example, the topics of religious doubt and emancipation came up in queries like: “Do you ever question the value of marriage?” “Do you sometimes doubt religion?” These questions were considered uncontroversial from the perspective of a user persona situated in a liberal society, such as Great Britain. However, from the perspective of Indian test users living in conservative family constellations, they might turn out to be very sensitive, even potentially dangerous, if matched with the wrong person.

Evaluation: Between innovation and value-orientation

These examples make clear that when taking the idea of diversity curation seriously, developing automated “solutions” cannot mark the limit of the imaginative horizon. Instead, building local capacity by establishing multilingual teams of diversity-aware humans in the loop becomes necessary. Such teams could not just handle sensitive requests but also process complaints about misconduct. This is also where the balancing of protection and inclusion becomes relevant: inclusion in the name of diversity may be desired, but limiting it to create safe spaces may sometimes be required (Helm et al., 2022). Yet, since we had neither the resources for human diversity curators nor the means to create robust automated solutions, we eventually ended up with a design strategy: the chat-bot would offer students a drop-down menu in which they could indicate that a request touches on a sensitive topic. If this option was chosen, advice on caution and, if available, university counselling services would appear, as well as options to narrow down the group of potential respondents and anonymise the request. This solution was not satisfactory. But it was born out of serious engagement with a value matter among interdisciplinary teams of researchers working under severe time and resource constraints.
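Stated as a conditional flow, the compromise amounts to something like the following sketch. All field names, messages, and routing options are hypothetical reconstructions of the design logic described above, not the application's actual code.

```python
from dataclasses import dataclass

@dataclass
class HelpRequest:
    text: str
    sensitive: bool = False   # set by the student via the drop-down menu
    anonymous: bool = False   # anonymisation offered for sensitive requests
    audience: str = "all"     # optionally narrowed group of respondents

def prepare_request(req: HelpRequest, counselling_info: str = "") -> dict:
    """Route a chat-bot query according to the self-declared sensitivity flag."""
    notices = []
    sender, audience = "named", req.audience
    if req.sensitive:
        notices.append("Please proceed with caution: replies come from fellow students.")
        if counselling_info:  # shown only where a local service is on file
            notices.append(counselling_info)
        if req.anonymous:
            sender = "anonymous"
        if audience == "all":  # default to a narrower respondent pool
            audience = "trusted_peers"
    return {"text": req.text, "sender": sender,
            "audience": audience, "notices": notices}
```

Even in this toy form, the limits of the compromise are visible: everything hinges on students' own assessment of what counts as sensitive, which is one reason why the solution remained unsatisfactory.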

While we were discussing internally how to reconcile our methodological orientation towards diversity with technical requirements and resource constraints, we faced critical assessments from the business community during the interim evaluation. The criticism was directed at the holistic and value-centred design approach (Simon et al., 2020) by which we sought to create a comprehensive alternative to existing platform models. What we all agreed on, and the criticism we faced brought this to the fore, was that we did not want to let ourselves be reduced to a mere add-on to an already established platform ecosystem and broken down into pieces. This stance, however, was considered unreasonable from an industry perspective. Here, the logic would have suggested we develop extensive data sets, use them to train a technical solution, and then, after testing in different contexts, abstract that solution from these very contexts so as to make it scalable. From the point of view of this logic, our alternative platform idea, with all its methodological and ethical dimensions, was not considered innovative.

Innovation as scalability through automation

In order to explore the reasons for the criticism we faced, and to clarify to what extent the logic behind this criticism is also related to the various difficulties we encountered in operationalising our ideas, it is important to situate the project in the broader context of critical innovation and platform studies. To this end, I will now illuminate how the specific criteria we encountered for distinguishing between successful and unsuccessful innovation correspond to different innovation regimes, and what links can be drawn between these regimes and the corresponding market imperatives.

In doing so, I follow a diagnosis formulated by the STS-informed innovation studies scholars Pfotenhauer and Jasanoff (2017), who claim that a fetishisation of scalability enabled through technical automation is driving current paths of innovation. Pfotenhauer and Juhl (2015) have further examined how Silicon Valley agendas are reflected in prevailing innovation models, such as the MIT model, and how this, in turn, has become a global standard, including in the EU. The MIT model is characterised by exactly the criteria we were confronted with in the criticism we faced: scalability, integration into existing platform infrastructures, and a favouring of high-tech solutions (Gaglio et al., 2019). Of course, these ideas have been key success factors in the private sector since the dawn of capitalism, and the pursuit of scalability is one of the primary concerns around which startup ecosystems revolve (Haas et al., 2019). However, the automation-driven dynamics of contemporary technology development have added further drivers that not only make diffusion, scalability, integrability, and high-tech based solutionism ever more central but also exert pressure on the pace at which they should be achieved. This is reflected in the almost grotesque notion underlying EU-funded R&I projects such as ours, namely that scalable socio-technical innovations can be developed within four years on the basis of rigorous research and high ethical standards.

This problem of excessive, often contradictory requirements can be partly attributed to the framework conditions of the EU funding instrument. However, on a more substantive level, it is also to be understood as the result of pressures exerted on these very instruments by powerful technology companies. These players increasingly set the terms for what can be deemed innovative and what not. Yet these terms may run counter to more idealistic standards, such as ethical consideration and public benefit orientation, which remain relevant in EU frameworks, thus creating the contradictory demands that researchers are then faced with.

The framing of EU funding schemes in broad terms such as “grand societal challenges” can be seen as another indicator of the grip of Silicon Valley imaginaries on innovation regimes (Pfotenhauer & Jasanoff, 2017). Saving the world implies scale, and scale can best be reached through automation. The power of innovation as a rhetorical and political tool to legitimise the allocation of large amounts of resources follows the sociotechnical imaginary of stylising innovation as a panacea for wide-ranging problems. These include global warming, poverty, pandemics, corruption, or, in the present case, polarisation and the systematic privileging of certain content (Chun, 2018). However, the notion of innovation as a panacea is problematic, not only because it obscures the fact that the very formulation of a problem as a deficit to which a technical innovation can respond is itself political (Bozalek, 2020). It is also problematic because it presupposes that the results of an intervention must be capable of going beyond their initial testing context to reach scale. This presupposition can very quickly come into conflict with meaningful diversity.

Following Anna Tsing's theory of meaningful versus scalable diversity, it becomes clear that scalability is not an intrinsic property of a solution or product, but one that is produced by emphasising certain aspects and omitting others. Constructing scalability is a problem from the standpoint of meaningful diversity because, for something to be scalable, it must be designed to reduce the complexity of a problem and its associated solutions to isolated parameters that can then be more easily abstracted from the context of the specific domain or community for which it was developed (Engels et al., 2019). This abstraction work enables automation, making the innovation scalable in that the number of applications can be significantly increased without major adaptation (Tsing, 2012). This idea of scalability obviously contradicts the idea of meaningful diversity curation outlined above. In the present case, it is exemplified by the emphasis on the geographical diversity of partners, while the different needs and vulnerabilities of their student communities are omitted.

Another aspect of the pressures that caused friction is primarily economic, but also technical, and is accelerated by dynamics of platformisation. By platformisation, I here refer to the process through which digital communication and information infrastructures are being centralised in the hands of a few powerful Silicon Valley players, thereby placing them in a position of dominance over an entire market section: the one we were addressing. This market section is characterised both by monopolisation of power and by spanning across domains, from social networking and geolocation tracking to instant messaging and cultural production (to name just a few relevant to the case at hand) (van Dijck et al., 2018).

Silicon Valley companies like Facebook or Google are primarily interested in acquiring simple but scalable components as add-ons (Rietveld et al., 2020). In this way, they can decentralise their data collection while centralising data processing (Helmond, 2015). Innovations in this market are hence considered attractive only if they are technically compatible with these players' standards (Caplan & boyd, 2018). This perspective was clearly reflected in the critical feedback we received from reviewers. Ironically, this implies that social innovations, which are traditionally characterised by the novelty and creativity of the alternatives they offer, must simultaneously ensure that they fit into predefined pathways. This includes not only being cut into pieces but also ensuring that these pieces can then be automated to meet the scaling demands of those who might want to purchase them. In the feedback we got, this prospect of rapid scalability was clearly seen as a requirement for market-readiness, where the market equates to powerful Silicon Valley platforms.

Values in platform innovation

Reflecting on my above analysis, a major factor that undermines ideas of meaningful diversity enacted through alternative platform design can be identified: the pressure to do the abstraction work necessary to meet upscaling demands, expressed in Silicon Valley agendas of innovation. These agendas are all about undermining alternatives while supporting the development of creative new add-ons to already existing ecosystems. This creates a dynamic of homogenisation of solutions, clearly a problem from the point of view of diversity. This homogenisation of the extent and variety of innovations is not limited to technical innovations, as Caplan and boyd (2018) have shown in their analysis of isomorphism, but also extends to social innovation. Silicon Valley agendas create this limitation in four steps: 1) by defining the horizon under which innovation can take place; 2) by determining what constitutes an “innovative” solution; 3) by framing such solutions through factors of scalability which, given the scope and pace at which they are to be achieved, almost inevitably call for automation; and, in this way, 4) by determining which and whose value conceptions are prioritised, not only at launch but already at inception.

The Silicon Valley innovation imperatives described above speak to a concentration of power that requires new entrants to develop narrow solutions to easily isolated and well-packaged problems. Value issues, however, such as caring for meaningful diversity, are multifaceted and require complex sociotechnical approaches to address them constructively while avoiding dialectical dynamics of ethics washing. Their serious enactment therefore resists being broken down into scalable pieces. Given this tension, I conclude, with reference to the above analysis, that one of the main reasons that hampered our attempt to promote a particular value through alternative platform design is the narrow notion of innovation to which we had to conform. The logic behind this notion tends to reduce complex issues of concern to narrow problem-solving equations. This may make sense from a purely technical perspective if automation is the goal, but not if one thinks of innovation more broadly as socio-technical innovation that is inspired by the needs of local communities and drives their enactment, in part through algorithmic and design strategies, but also through human capacity building. Such a multimodal understanding of socio-technical innovation should have been at the core of this project if it wanted to reclaim diversity in its complexity as a moral-epistemic hybrid.

Yet needs and meaning often go beyond the possibilities of automation, at least in light of our current technical capabilities. Therefore, automation is far too limited a horizon for what diversity demands. The idea of meaningful curation of diversity, which I have highlighted throughout this paper, requires building local capacity in the form of people who care about diversity and also have the means to do so. So far, machines are not capable of caring; at best, they function. Given this limitation, algorithmic automation may be a means to achieve scale and thus innovation, if the latter is narrowly defined as market readiness, but it is not a means to promote diversity, let alone harness and curate it.

However, algorithmic automation is encouraged by current innovation regimes driven by Silicon Valley agendas, which are applied even to non-profit research projects like ours, publicly funded by the EU. This becomes clear when looking at the evaluation criteria and the way projects like ours are set up: they start promisingly by tackling a major socio-ethical challenge, but then, instead of developing alternatives, they break their challenge down into isolated, narrow, but scalable technical solutions. This is exactly in line with what Silicon Valley agendas aim for. But these agendas, too, are not all-encompassing. There are actors in academia, the EU funding system, and industry who are pushing policy reforms to complicate them. They understand that it is not productive to make innovation synonymous with market readiness, nor with automation or scaling. Instead, we need a horizon of innovation that expands towards finding new ways to tackle challenges that the old ways cannot solve, because it is the old ways that caused them in the first place.

Acknowledgements

I thank the guest editors of the special issue and all participants of the workshop “Locating & Theorizing Platform Power”. In particular, I am grateful to David Nieborg, Natali Helberger and Ranjit Singh for their valuable feedback and to the managing editor of Internet Policy Review, Frédéric Dubois.

References

Agüero-Torales, M., Vilares, D., & López-Herrera, A. (2021). On the logistical difficulties and findings of Jopara sentiment analysis. Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, 95–102. https://doi.org/10.18653/v1/2021.calcs-1.12

Ahmed, S. (2012). On being included: Racism and diversity in institutional life. Duke University Press. https://doi.org/10.1515/9780822395324

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity.

Bentley, J. L. (1980). Multidimensional divide-and-conquer. Communications of the ACM, 23(4), 214–229. https://doi.org/10.1145/358841.358850

Bozalek, V. (2020). Rendering each other capable: Doing response-able research responsibly. In Navigating the postqualitative, new materialist and critical posthumanist terrain across disciplines (pp. 135–149). Routledge. https://doi.org/10.4324/9781003041177

Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426

Caplan, R., & boyd, danah. (2018). Isomorphism through algorithms: Institutional dependencies in the case of Facebook. Big Data & Society, 5(1). https://doi.org/10.1177/2053951718757253

Chi, N., Lurie, E., & Mulligan, D. K. (2021). Reconfiguring diversity and inclusion for AI ethics (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2105.02407

Chun, W. H. K. (2018). Queerying homophily. In C. Apprich, W. H. K. Chun, F. Cramer, & H. Steyerl (Eds.), Pattern Discrimination (pp. 59–98). Meson Press. https://doi.org/10.25969/mediarep/12350

Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press. https://doi.org/10.7551/mitpress/12255.001.0001

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t

Engels, F., Wentland, A., & Pfotenhauer, S. M. (2019). Testing future societies? Developing a framework for test beds and living labs as instruments of innovation governance. Research Policy, 48(9). https://doi.org/10.1016/j.respol.2019.103826

Evroux, C. (2023). The EU’s global approach to research and innovation [Briefing]. European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733550/EPRS_BRI(2022)733550_EN.pdf

Fisher, E., O’Rourke, M., Evans, R., Kennedy, E. B., Gorman, M. E., & Seager, T. P. (2015). Mapping the integrative field: Taking stock of socio-technical collaborations. Journal of Responsible Innovation, 2(1), 39–61. https://doi.org/10.1080/23299460.2014.1001671

Fisher, E., & Schuurbiers, D. (2013). Socio-technical integration research: Collaborative inquiry at the midstream of research and development. In N. Van Doorn, D. Schuurbiers, I. van de Poel, & M. Gorman (Eds.), Early engagement and new technologies: Opening up the laboratory (Vol. 16, pp. 97–110). Springer. https://doi.org/10.1007/978-94-007-7844-3_5

Gaglio, G., Godin, B., & Pfotenhauer, S. (2019). X-innovation: Re-inventing innovation again and again. NOvation: Critical Studies of Innovation, 1, 1–16. https://doi.org/10.5380/nocsi.v0i1.91158

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029

Giunchiglia, F., Bison, I., Busso, M., Chenu-Abente, R., Rodas Britez, M., De Götzen, A., Kun, P., Ganbold, A., Chagnaa, A., Gaskell, G., Bidoglia, M., Cernuzzi, L., Hume, A., Zarza, J. L., Miorandi, D., Caprini, C., Schelenz, L., Helm, P., Gatica-Perez, D., … Sierra, C. (2022). A worldwide diversity chat application pilot on interactions and social practices (2021—2nd wave) (Technical Report #DISI-2001-DS-06). University of Trento. https://iris.unitn.it/handle/11572/353704

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945

Haas, S., Quinn, B., & Baskin, J. (2019, May 1). How to move fast: Innovation at speed and scale [Podcast]. Inside the Strategy Room — McKinsey & Company. https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/how-to-move-fast-innovation-at-speed-and-scale

Helberger, N. (2011). Diversity by design. Journal of Information Policy, 1, 441–469. https://doi.org/10.5325/jinfopoli.1.2011.0441

Helberger, N. (2019). On the democratic role of news recommenders. Digital Journalism, 7(8), 993–1012. https://doi.org/10.1080/21670811.2019.1623700

Helberger, N. (2020). The political power of platforms: How current attempts to regulate misinformation amplify opinion power. Digital Journalism, 8(6), 842–854. https://doi.org/10.1080/21670811.2020.1773888

Helm, P. (2018). Treating sensitive topics online: A privacy dilemma. Ethics and Information Technology, 20, 303–313. https://doi.org/10.1007/s10676-018-9482-4

Helm, P., de Götzen, A., Cernuzzi, L., Hume, A., Diwakar, S., Ruiz Correa, S., & Gatica-Perez, D. (2023). Diversity and neocolonialism in Big Data research: Avoiding extractivism while struggling with paternalism. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231206802

Helm, P., Michael, L., & Schelenz, L. (2022). Diversity by design?: Balancing the inclusion and protection of users in an online social platform. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 324–334. https://doi.org/10.1145/3514094.3534149

Helm, P., & Seubert, S. (2020). Normative paradoxes of privacy: Literacy and choice in platform societies. Surveillance & Society, 18(2), 185–198. https://doi.org/10.24908/ss.v18i2.13356

Helmond, A. (2015). The platformization of the web: Making web data platform ready. Social Media and Society, 1(2). https://doi.org/10.1177/2056305115603080

Meegahapola, L., Ruiz-Correa, S., Robledo-Valero, V. D. C., Hernandez-Huerfano, E. E., Alvarez-Rivera, L., Chenu-Abente, R., & Gatica-Perez, D. (2021). One more bite? Inferring food consumption level of college students using smartphone sensing and self-reports. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(1), 1–28. https://doi.org/10.1145/3448120

Miyamoto, S., Zamami, T., & Yamana, H. (2018). Improving recommendation diversity across users by reducing frequently recommended items. 2018 IEEE International Conference on Big Data (Big Data), 5392–5394. https://doi.org/10.1109/BigData.2018.8622314

Musa, M., & Rodin, J. (2016). Scaling up social innovation. Stanford Social Innovation Review, 14(2). https://doi.org/10.48558/5VP8-F519

Pfotenhauer, S., & Jasanoff, S. (2017). Panacea or diagnosis? Imaginaries of innovation and the ‘MIT model’ in three political cultures. Social Studies of Science, 47(6), 783–810. https://doi.org/10.1177/0306312717706110

Pfotenhauer, S., Laurent, B., Papageorgiou, K., & Stilgoe, J. (2022). The politics of scaling. Social Studies of Science, 52(1), 3–34. https://doi.org/10.1177/03063127211048945

Pols, J. (2015). Towards an empirical ethics in care: Relations with technologies in health care. Medicine, Health Care and Philosophy, 18(1), 81–90. https://doi.org/10.1007/s11019-014-9582-9

Potthast, T. (2014). The values of biodiversity: Philosophical considerations connecting theory and practice. In D. Lanzerath & M. Friele (Eds.), Concepts and values in biodiversity (pp. 132–146). Routledge.

Rieder, B. (2020). Engines of order: A mechanology of algorithmic techniques. Amsterdam University Press. https://doi.org/10.2307/j.ctv12sdvf1

Rietveld, J., Ploog, J. N., & Nieborg, D. B. (2020). The coevolution of platform dominance and governance strategies: Effects on complementor performance outcomes. Academy of Management Discoveries, 6(3). https://doi.org/10.5465/amd.2019.0064

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.

Schelenz, L., Bison, I., Busso, M., De Götzen, A., Gatica-Perez, D., Giunchiglia, F., Meegahapola, L., & Ruiz-Correa, S. (2021). The theory, practice, and ethical challenges of designing a diversity-aware platform for social relations. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 905–915. https://doi.org/10.1145/3461702.3462595

Seaver, N. (2021). Care and scale: Decorrelative ethics in algorithmic recommendation. Cultural Anthropology, 36(3), 509–537. https://doi.org/10.14506/ca36.3.11

Simon, J., Wong, P.-H., & Rieder, G. (2020). Algorithmic bias and the Value Sensitive Design approach. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1534

Smolka, M. (2020). Generative critique in interdisciplinary collaborations: From critique in and of the neurosciences to socio-technical integration research as a practice of critique in R(R)I. NanoEthics, 14, 1–19. https://doi.org/10.1007/s11569-019-00362-3

Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, autonomy, and manipulation. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1410

Tronto, J., & Fisher, B. (1990). Toward a feminist theory of caring. In E. K. Abel & M. K. Nelson (Eds.), Circles of care: Work and identity in women’s lives (pp. 36–54). State University of New York Press.

Tsing, A. L. (2012). On Nonscalability: The living world is not amenable to precision-nested scales. Common Knowledge, 18(3), 505–524. https://doi.org/10.1215/0961754X-1630424

United Nations Educational, Scientific and Cultural Organization. (2005). The 2005 Convention on the protection and promotion of the diversity of cultural expressions (CLT-2016/WS/7; pp. 1–52). https://en.unesco.org/creativity/convention

Vaidhyanathan, S. (2022). Antisocial media: How Facebook disconnects us and undermines democracy. Oxford University Press. https://doi.org/10.1093/oso/9780190056544.001.0001

van Dijck, J., Nieborg, D., & Poell, T. (2019). Reframing platform power. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1414

van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Van Doorn, N. (2017). Platform labor: On the gendered and racialized exploitation of low-income service work in the ‘on-demand’ economy. Information, Communication & Society, 20(6), 898–914. https://doi.org/10.1080/1369118X.2017.1294194

Vertovec, S. (2012). "Diversity” and the social imaginary. European Journal of Sociology, 53(3), 287–312. https://doi.org/10.1017/S000397561200015X

WeNet. (n.d.). Welcome to WeNet. WeNet — Internet of Us. https://www.internetofus.eu/

Young, I. M. (1990). Justice and the politics of difference. Princeton University Press. https://doi.org/10.2307/j.ctvcm4g4q

Footnotes

1. For more information, see WeNet (n.d.).

2. Some of the countries involved, such as India, were unable to receive any funding at all due to the lack of bilateral agreements, while for others better conditions have been fought for in previous years.