From threat to opportunity: Gaming the algorithmic system as a service
Abstract
Gaming the system – i.e., strategic attempts to manipulate the input(s) for, or one’s interactions with, an algorithmic system to try to secure a better outcome than intended by the system’s design – is commonly portrayed as a threat to online platforms and services. Tech companies often use this gaming concern to justify their reluctance to provide algorithmic transparency. In this paper, however, we will explore a new business model in the digital economy that we call gaming-the-system-as-a-service (GaaS). In this model, transparency promises are wrapped into an assisted gaming service and sold as a premium feature. This way, the alleged risk of transparency – gaming the system – is turned into a monetisation feature for service providers. As such, GaaS is a typical example of how tech companies can attempt to turn regulatory pressures (e.g., to provide more insight into how their algorithmic curation and recommendation systems work) into a commercial opportunity. To begin to rethink our normative and regulatory approaches to the interface of transparency and gaming, we perform a first exploration of several potential challenges posed by this new business model. First, GaaS is entwined with an incentive structure that is hostile to consumers and exploitative in nature. Second, GaaS is essentially a pay-to-win feature, raising questions of equality and fairness. Third, the commodification of transparency through GaaS can ‘taint’ and erode transparency as an important democratic value.
Introduction
Call it a permanent mud wrestling match, or an eternal game of tug of war: lawmakers demanding more transparency and online platforms and service providers in the digital economy pushing back against those demands.1 Lawmakers often rely on transparency obligations as a core feature of their strategy to tame the power of digital technology companies (Diakopoulos, 2020). For example, the European Union’s recent legislative agenda aimed at the technology sector – the Digital Services Act (DSA), the Digital Markets Act (DMA), and the Artificial Intelligence Act (AIA) – contains several transparency obligations. Companies typically do not welcome (additional) transparency obligations. An often-heard argument against transparency measures is the gaming-the-system argument: transparency of (often algorithmic) core systems used for decision-making, curation, and/or ranking can provide users with helpful information to game those systems (Kroll et al., 2017, p. 639; Bambauer & Zarsky, 2018, p. 15). In this context, ‘gaming’ refers to strategic attempts to manipulate the input(s) for, or one’s interactions with, an algorithmic system to try to secure a better outcome than intended by the system’s design. Petre et al. (2019) point out that online platforms and services “routinely denigrate these activities as system-gaming or manipulation” to signal that such practices are understood as problematic undermining of the ‘proper’ functioning of the platforms or services (p. 1). Following the gaming-the-system argument, transparency obligations are thus mainly portrayed as risks to the business operations of platform and service providers in the digital economy.
The gaming argument against transparency may, however, need to be nuanced in answer to new commercial strategies in the digital economy. In this space where transparency obligations are often portrayed as risks by service providers themselves, one can observe those same service providers offering premium features that look suspiciously similar to – ironically – services that promise gaming of algorithmic ranking and outcomes. In what follows, we explore a new business model we call gaming-the-system-as-a-service (GaaS). The core idea behind GaaS is that users can be charged a premium to help them game the system of a service they use to secure (better chances of) better outcomes. Most interestingly, GaaS and algorithmic transparency are closely related. The promise of GaaS can be understood as being predicated on promises related to transparency. To see why, consider the fact that gaming works best when one has some degree of privileged insight into how the system works. In the context of GaaS, however, one often deals with indirect transparency at best. The transparency is indirect because the insight into the workings of the system is partial and offered through a service that is designed and controlled by the system owner. So, rather than providing complete transparency2 on the entire system which allows users to exploit that information to devise strategies for gaming themselves, the offered GaaS service pre-structures the opportunities for gaming. This pre-structuring can take the shape of a gaming service that combines limited information with specific tools, or of a service structured around gaming with the mandatory ‘help’ of the provider’s employees. Put simply, the alleged risk of algorithmic transparency – i.e. gaming the system – is actually turned into a monetisation feature.
In this paper, we understand algorithmic transparency in two ways. Typically, it is seen from a techno-centric perspective, focusing on openness and disclosure about how algorithms work technically. This involves examining data sets, parameters, and models to provide insights into their inner workings, relating to concepts like explainable and interpretable algorithms (Diakopoulos, 2020). This type of transparency is often aimed at experts, regulators, or stakeholders who need to understand the precise mechanics of the system to secure fairness and accountability. However, as Mittelstadt et al. (2019) show, this technical perspective on transparency is not the only type of transparency that matters; there is also another type that involves everyday explanations and is more embodied for humans. For this type, mathematically-founded algorithms are translated into human terms (Larsson & Heintz, 2020), and transparency emerges from the user’s practical experiences and interactions with the platform (Haresamudram et al., 2023). This is what Haresamudram et al. call ‘interaction transparency,’ or embodied transparency, which refers to the way transparency is experienced and understood through direct interaction with a system or platform, rather than through abstract or purely technical explanations (Haresamudram et al., 2023). This form of transparency often creates “a nuanced understanding” and “rich, contextual, situated explanations” of platforms (Haresamudram et al., 2023, pp. 97-98). For example, consider how including a pet in the photos on your profile may (seemingly) improve the performance of your profile in Tinder’s recommendation algorithm (Wang, 2023). Although Tinder’s algorithm remains mysterious in terms of its precise technical operation, users can still gain a contextual understanding of how it works through their interactions, forming nuanced and everyday insights. This embodied transparency is vital for users (often non-experts) who may not grasp algorithmic details but care deeply about how platforms affect them and their interactions.
This paper builds on two examples – one already operational, and one that was announced but later abandoned – to explore and explain how these two types of algorithmic transparency can be monetised. The FICO Score example demonstrates how a platform can package its algorithmic transparency as a premium service, revealing to credit consumers how the FICO Score works and how their scores are influenced by different data points and weights. The other example is Tinder Concierge, a premium service that was announced (and later abandoned); rather than revealing technical details of algorithms and data sets, it promised paid-for coaching to help users gain nuanced insights and contextual understanding of how the algorithm works (such as how certain photos or actions may enhance their profile visibility) in order to, hopefully, secure more (and ‘better’) matches. In practice, when platforms monetise algorithmic transparency, they usually blend these two types and engage with them to varying extents.
These two examples serve as a first exploration, but they reflect a more general phenomenon of monetising algorithmic transparency as a gaming service. The FICO case shows how transparency as a gaming service is already operational, while the Tinder Concierge case tells us something about the types of initiatives big platform providers are actively considering and experimenting with. We find it instructive to not only discuss examples of services that have already been implemented, but also take seriously services that are ‘only’ considered by platforms. In a fast-developing platform economy, discussions on the ethics and regulation of platform services require an ongoing anticipatory mindset and with it a willingness to discuss the ongoing experiments of platforms; a merely reactive mindset will undermine our ability to develop future-proof and creative analyses for the platform economy.
As we will discuss in detail later, we increasingly live in what Citron and Pasquale referred to as a “scored society,” where predictive algorithms rank crucial aspects of individuals' lives (Citron & Pasquale, 2014). These algorithm-driven ranking systems often include reward and punishment mechanisms, encouraging users to optimise their positions. However, their opaque nature makes it challenging for users to understand how to improve their rankings, creating an opportunity for gaming services to emerge. Whether through technical or embodied explanations, algorithmic transparency can be monetised as a premium or commercial product, enabling users to game the system for their benefit. This might sound like a win-win situation. Users benefit from gaming as a service, while companies balance the risk of system manipulation with additional revenue from users paying for premium gaming services and continued engagement with their services. This seemingly mutual benefit creates a potentially advantageous business model.
However, if these business models which flip transparency and gaming risks upside down (from alleged risk to business opportunity) indeed start to materialise and proliferate, we may have to rethink our normative and regulatory approaches to them. In this article we make a start by also exploring several possible challenges posed by gaming-the-system-as-a-service. First of all, GaaS is entwined with an incentive structure that is hostile to consumers and exploitative in nature. If a service provider wants to offer GaaS, this introduces the incentive for the service provider to actively make or keep the workings of their systems opaque or unpredictable precisely because uncertainty concerning how the system works serves as a precondition for offering GaaS. Second, GaaS is essentially a pay-to-win feature, raising questions of equality and fairness. Depending on the context where GaaS is introduced, granting advantages to those who can pay a premium can lead to unfair and unjust (market) outcomes. Third and last, the commodification of transparency through GaaS can ‘taint’ and erode transparency as an important democratic value. When transparency is reduced to a commodity, it not only opens the door to manipulating its presentation for commercial interests but also weakens people’s motivation to actively engage in critical thinking or resist unfair practices in GaaS.
This article is structured as follows. In Section 2 we discuss transparency as a regulatory philosophy and unpack the alleged threat of gaming the system resulting from transparency obligations. Here we also discuss how GaaS can position itself in the space typically occupied by discourses on (the need for) transparency obligations. In Section 3 we turn to the phenomenon of GaaS. We use the examples of FICO Score (already implemented) and Tinder Concierge (announced but not implemented) to illustrate core features of GaaS. In Section 4 we discuss potential challenges posed by GaaS, namely the exploitative incentive structure it introduces for platforms and service providers, the unfairness of pay-to-win, and the erosion of transparency as a democratic value.
2. Transparency as an obligation and the alleged threat of gaming the system
2.1 Transparency as a regulatory philosophy
Before we turn to GaaS, we first want to briefly discuss transparency and the alleged risk of gaming the system in the digital economy. The principle of transparency has, of course, a long history in the context of policy and regulation. Already in 1913 Brandeis coined the now famous phrase that “sunlight is said to be the best of disinfectants” (Brandeis, 1913, p. 1). Suffice it to say, the core idea that transparency can serve as an important precondition for accountability by making information available that allows for the inspection and evaluation of entities or actors is not a new one. As Ananny and Crawford (2018, p. 974) summarise it: “The implicit assumption behind calls for transparency is that seeing a phenomenon creates opportunities and obligations to make it accountable and thus to change it”.
So, the principle of transparency and its link to accountability is not a recent invention. What is a relatively new development, though, is the more explicit embrace by legislators of transparency as a pronounced regulatory principle in response to the increasing use of, generally put, algorithmic decision-making systems whose functioning seems opaque to outsiders (Pasquale, 2015; Leerssen, 2023). According to Morozovaite (2024) “the recurring theme in all examined (proposed) legal instruments [i.e., the DSA, DMA, and AIA] is a strong emphasis on transparency obligations” (p. 253). One can clearly see this in the DSA, where transparency obligations are at the core of the legislative philosophy behind the act. The DSA contains not only more general provisions on, for instance, transparency reporting obligations for providers of intermediary services (Article 15) and providers of online platforms (Article 24), but also specific recommender system transparency obligations (Article 27). Article 27 mandates online platforms to “set out in their terms and conditions, in plain and intelligible language, the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters”. Because the recommender engines used by platforms are algorithmically driven, Article 27 can rightly be understood as a provision aimed at what is often called algorithmic transparency.
2.2 Transparency (claims) as a strategic tool and the space it affords for GaaS
To understand the precise relation between transparency and accountability, it can be helpful to distinguish between what can be called the openness dimension and the epistemic dimension of transparency. The openness dimension refers to making information public and inspectable; bringing information out in the open, so to say. Birkinshaw (2006), for instance, emphasises that “openness is very similar to transparency” (p. 190). One can think of the government releasing documents after a freedom of information request, or of a leak such as the Panama Papers where large amounts of previously inaccessible documentation suddenly become available. The openness of information is, however, not the same as the comprehensibility, explainability, and/or usability of information. Either the nature of the information (e.g., highly technical documentation) or the sheer amount of information can make it difficult to truly understand or process the now-open information. This is why transparency is often thought to have an important epistemic dimension as well: “transparency also requires external receptors capable of processing the information made available” (Heald, 2006, p. 25). When the epistemic dimension is taken seriously, true transparency also requires that public/open information must be understandable to its target audience.
Transparency’s openness dimension and its epistemic dimension can, of course, be misaligned – not all information that is made public is also understandable, and not everything that is explained in an understandable manner is backed by publicly accessible and verifiable information. It is precisely in this potential for misalignment, we argue, that one can find the inherent political and strategic nature of transparency (Wang, 2022). Transparency is always afforded by an actor with particular interests and incentives, and to an actor (or several actors) with particular interests and incentives. One actor’s transparency can be another actor’s incomprehensibility. Because our argument focuses on how the space of transparency discourse as it is shaped by (especially) recent regulation can potentially be monetised by service providers with gaming-the-system-as-a-service, we are mostly interested in how transparency claims can be put to strategic use. A service provider can, for instance, claim that providing some additional explanation of how certain functions/systems work counts as practising transparency, in an attempt not to make actual documentation public. Or, vice versa, an actor can make public large amounts of highly technical documentation which can be very difficult to make sense of without additional explanatory guidance. We are, therefore, not interested in being the arbiters of what defines real or true transparency. For our argument it is much more important to acknowledge how different types of actors tend to make different transparency claims to achieve different – often self-serving – outcomes.
2.3 Proxies and gaming
It is precisely in the context of sweeping transparency regulation such as the EU’s DSA, DMA, AIA package that online service providers and platforms decry the risk of their systems being gamed. Cofone and Strandburg (2019) provide a very helpful overview of the debate on algorithmic transparency and the concern of ‘gaming the system’. They draw on literature in (empirical) legal studies, computer science, and game theory to explain how gaming can occur and when it is and isn’t a realistic risk. “Fundamentally, the gaming threat stems from decision maker reliance on proxies for criteria” (Cofone & Strandburg, 2019, p. 626). As a user of a system, knowledge of which proxies are used for decision making can allow one to (try to) exploit those proxies to (try to) steer the eventual outcome/decision in the desired direction. The use of proxies in algorithmic decision-making contexts is inevitable, because they are used “when the ideal decision-making criteria are unascertainable as a practical matter or simply unknowable” (Cofone & Strandburg, 2019, p. 635). Take, for instance, a dating app. If we assume that the dating app decides on matches based on which matches have the highest chance to develop into a successful relationship3, it immediately becomes clear that the ideal decision-making criterion – i.e., a successful relationship – is situated in the future and unknowable at the time of matching people. So, proxies have to be used to approximate, as it were, the ideal decision-making criterion.
Not all proxies function in the same manner, though, and not all types of proxies are equally suitable for gaming. Cofone & Strandburg (2019, pp. 636-640) describe three layers of proxies. The first layer concerns input data that goes into an algorithmic decision-making procedure. In a dating app, this can be data concerning one’s age, sexual preferences, and interests. The second layer uses the input (and possibly other) data “to compute a predicted value of the outcome variable that is only a proxy for that individual’s “true” outcome value” (Cofone & Strandburg, 2019, p. 638). In the dating app example, this would concern the ways in which input data will be used to compute a predicted value for an outcome value such as ‘predicted chance that a match leads to a date’. The third layer concerns how the outcome value chosen by the designer of the system at the second level itself serves “as a proxy for the ideal decision criterion” (Cofone & Strandburg, 2019, p. 638). For a dating app, the ideal decision criterion – which is not directly measurable and/or knowable, hence the need for the use of proxies – would be something like ‘the two people matching will develop a successful relationship’.
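To make this layered structure concrete, the minimal Python sketch below models a purely hypothetical dating-app pipeline. It is a sketch under invented assumptions, not a description of any real app: the profile fields stand in for first-layer input proxies, the made-up scoring function computes the second-layer outcome variable (‘predicted chance that a match leads to a date’), and the threshold decision that consumes this value stands in for the unknowable third-layer criterion of ‘a successful relationship’.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    # Layer 1: input-data proxies, partly supplied and partly controlled by the user
    age: int
    shared_interests: int      # number of interests overlapping with the other profile
    swipe_selectivity: float   # fraction of profiles the user swipes left on (behavioural proxy)

def predicted_date_chance(a: Profile, b: Profile) -> float:
    """Layer 2: an invented formula computing the outcome variable
    'predicted chance that a match leads to a date'."""
    interest_term = min(a.shared_interests, 10) / 10
    selectivity_term = (a.swipe_selectivity + b.swipe_selectivity) / 2
    age_gap_penalty = min(abs(a.age - b.age) / 20, 1.0)
    return max(0.0, 0.5 * interest_term + 0.4 * selectivity_term - 0.3 * age_gap_penalty)

def show_match(a: Profile, b: Profile, threshold: float = 0.4) -> bool:
    """Layer 3: the predicted value serves as a proxy for the ideal but
    unknowable criterion ('a successful relationship') when deciding
    whether to surface the match at all."""
    return predicted_date_chance(a, b) >= threshold

# A first-layer gaming attempt: directly changing one's own input data.
alice = Profile(age=34, shared_interests=3, swipe_selectivity=0.6)
bob = Profile(age=29, shared_interests=3, swipe_selectivity=0.8)
print(show_match(alice, bob))   # False with truthful inputs (score ~0.36)
alice.age = 29                  # misstating one's age
print(show_match(alice, bob))   # True after the change (score ~0.43)
```

The closing lines preview the most direct form of gaming, discussed next: changing first-layer input data oneself. Influencing the behavioural proxy (swipe_selectivity) would already require anticipating how the system registers and processes one’s behaviour.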
Exploitation of proxies for the purpose of gaming can happen in different ways, depending on the layer one ‘attacks’. At the first input data layer, one can sometimes simply put in different data oneself. For example, in the dating app example, one can put a different age in one’s profile to game the system in an attempt to get matches in a different age bracket. An algorithmic decision-making system may also rely on input data that is gathered by turning behaviour into input data. For example, using cookies an app may gather information on one’s online search and click behaviour and turn those data into input data on a person’s preferences. In this example, one cannot game the system directly by changing the input on preferences oneself but one has to change one’s behaviour (e.g., search for specific content) to trick the system into working with specific input data. In cases where one can change input data oneself, there is of course a more direct relation between one’s gaming attempt and the preferred outcome. If one tries to game by changing behaviour, one also has to anticipate how the system will register and process one’s behaviour and turn it into input data. When we turn to the second and third layer, it becomes increasingly difficult for a person to effectively and predictably game the system. Influencing the predicted value for the outcome variable at the second layer directly is very difficult to do, simply because at the second layer an outcome variable is computed based on a formula that is often not known by users. A dating app user, for instance, typically has no direct knowledge of how the outcome variable ‘is a suitable potential match’ is calculated and, moreover, typically does not have a complete overview of all data that goes into the formula. It follows that if said dating app user wants to game the matching algorithm, the first input data layer is the most logical and convenient ‘point of attack’. The described difficulties in gaming the second layer of proxies carry over directly to the third layer, where the predicted outcome variable of layer two itself becomes a proxy for the real ideal decision criterion the service providers aim to capture.
Notably, these three layers of proxies map onto the different types of algorithmic transparency mentioned earlier. These different layers often appear as model-oriented transparency, which involves a technical description of the algorithm’s inner workings – from data input (data points) and computation models (mathematical models to compute variables) to the generated content (real-world actionable results). This type of transparency can be very helpful for experts and regulators who often need to grasp the precise mechanics of the system to ensure accountability and fairness. However, this technical information can sometimes be very difficult for users (often non-experts) to translate into everyday language, making it less practical for them to game the system. So, gaming the system sometimes also relies on a more embodied form of transparency, which operates through what Bucher (2017) calls “algorithmic imaginaries” or the understanding of how the algorithm works through daily interactions with the platform. It is about users gaining insight into the system by engaging with it in a tangible, experiential manner. This form of transparency is often nuanced and contextual, emerging from the user’s practical experiences and interactions with the platform. For example, instead of providing users with a detailed technical breakdown of an algorithm, a platform might let users understand its workings by observing the outcomes of their interactions. Through these interactions, users can form a more intuitive and situated understanding of the system, making it feel more relevant and accessible than technical explanations alone. The detailed relationship between GaaS and algorithmic transparency is summarised in Table 1. With this clarification, we can explain in the next section how gaming through algorithmic transparency can be (re)packaged as a commercial service.
| Algorithmic transparency in GaaS | Technical transparency (a detailed technical breakdown of an algorithm) | Embodied transparency (a situated understanding of the algorithm through users’ everyday interaction with platforms) |
| --- | --- | --- |
| Data layer (raw data collection) | Disclosing the categories of data collected, how it’s collected, stored, and pre-processed before being fed into the algorithm. | Through interaction with the app, users learn which types of data seem to improve their recommendation rate. |
| Computation layer (a calculation of predicted values) | Explaining the specific models used, the features derived from the input data, and how these features are weighted and combined to compute a predicted value or score. | Users begin to see patterns in their interactions, which help them gain an embodied understanding of how their behaviour and input data affect their predictions. |
| Consequence layer (actionable outcomes) | Explaining how the predicted values (e.g., match scores) are used to make decisions within the app. | Users perceive the effectiveness of their profiles and interactions in terms of real-life outcomes, like the number of matches that lead to dates. |
3. From a threat to an opportunity: gaming the system as a service
In this section, we will explore how gaming as a service can work through algorithmic transparency. By focusing on two main examples – the FICO Score system and Tinder Concierge – we will look at two types of gaming through monetising algorithmic transparency. First, there is gaming the system by monetising the technical form of algorithmic transparency (FICO example). This often involves providing a paid informational report that shows how the algorithm works on a technical level, giving users the opportunity to game the system. Second, there is gaming the system by monetising the embodied form of algorithmic transparency (Tinder example). This usually involves offering paid expertise or coaching services to help users improve their understanding of the algorithm, enabling them to game the system more effectively. In discussing these two examples, we do not mean to suggest that the FICO Score is purely about technical transparency or that the proposed Tinder Concierge service would function as a purely embodied type of transparency.4 In the real world, these two types of algorithmic transparency often blend together in GaaS. The distinction between the two is mostly analytical, although the FICO score may lean more towards technical transparency, while the proposed Tinder Concierge service would lean more towards embodied transparency.
3.1 The FICO Score
Let us consider the first example – the FICO Score – which is already an everyday phenomenon in our digital society. In the US, the FICO Score, owned by Fair Isaac Corporation (FICO), is the most widely used system for predicting consumers’ creditworthiness, or how likely they are to repay their bills.5 The FICO Score is a typical automated decision-making system, and FICO Scores are the numeric representation or snapshot of individuals’ credit profiles. This scoring system has been extensively used in various domains and increasingly determines crucial parts of American people’s lives. For example, when individuals apply for credit cards, car loans, or mortgages, banks often check their FICO Scores to determine whether the application can be granted and on what terms (Pasquale, 2015). In some cases, an applicant’s FICO score can also be the deal breaker for getting a job or renting an apartment (Lauer, 2017). Some dating apps even incorporate FICO scores into their platforms to determine the dateability of users (Wang, 2022).
Like most algorithmic systems, the FICO Score algorithm is largely hidden from the public. What’s interesting, however, is that some degree of algorithmic transparency can be monetised by charging credit consumers for certain services. Since 2012, FICO has offered an app called myFICO, which provides paid services to help credit consumers understand how FICO Scores work and how to improve them.6 For example, if an individual’s car loan or mortgage application is denied, they can buy a one-time myFICO credit report for $19.95 based on one major credit bureau, or for $59.85 based on all three major credit bureaus (myFICO, n.d.-b). Each credit report is accessible for one month after it is purchased.
In these credit reports, users can gain insight into all three layers of information about the FICO algorithm. In the data layer, the report clearly outlines which types of data are used to calculate credit scores. It covers four main categories: personally identifiable information (PII), credit accounts, credit inquiries, and public records and collections. It also explains how these sources of information are gathered. For instance, a FICO Score is derived solely from data collected from credit reports at the three major credit bureaus: Experian, TransUnion, and Equifax.
More importantly, this paid service provides detailed insights into the predicted values (the second layer) at the computation level of the algorithm. For example, it discloses the components that make up the FICO Score and the weight each component carries in the FICO Score (see Figure 1). As shown in this figure, payment history has a 35 percent impact on the FICO Score, while amounts owed accounts for 30 percent. The length of credit history makes up 15 percent, new credit weighs in at 10 percent, and credit mix accounts for the remaining 10 percent. What’s more, the report also discloses how these weights play out in a user’s personalised situation. If a user clicks on ‘payment history’, for instance, it may indicate a status such as ‘bad’, ‘good’, or ‘exceptional’ to clearly show how much the current credit situation (e.g., one late repayment in the last month) can be improved. By showing how the scoring algorithm works and what aspects can be improved in a personalised case, these paid myFICO reports make it easier for credit consumers to improve their credit performance and scores.
[Figure 1: The components of the FICO Score and their relative weights]
What’s more, this paid report can also disclose some information about the third layer of algorithmic transparency – the consequence level. For instance, it includes tools like the FICO Score Simulator, which acts as an interactive model and digital interface, allowing individuals to enter different scenarios to estimate how various decisions might impact their credit scores. This simulator tool lets credit users run 24 different simulations to see how various actions could impact their FICO Scores, such as applying for a mortgage or requesting a credit limit increase. This tool also informs users about how lenders and mortgage providers might view their score, calculating the likelihood of approval and the terms of the loan. It thereby gives users a clearer “sense of how their future (credit) decisions will affect their evaluation” (Citron & Pasquale, 2014, p. 29).
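To tie the disclosed component weights (Figure 1) and the simulator’s what-if logic together, the sketch below gives a minimal, purely illustrative Python rendering. It is emphatically not FICO’s proprietary model: only the five published percentage weights come from the report described above, while the normalised sub-scores, the mapping onto the 300-850 range, and the simulated actions are invented assumptions used to show the general shape of ‘weighted score plus what-if simulation’.

```python
# Illustrative only: FICO's actual formula is proprietary. The five weights below
# are the publicly disclosed component weights; everything else (normalised
# sub-scores, the 300-850 mapping, the simulated actions) is invented.
WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,
    "length_of_credit_history": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}

def illustrative_score(subscores):
    """Combine sub-scores (assumed normalised to [0, 1]) into a single number
    on the familiar 300-850 scale via a weighted sum."""
    weighted = sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)
    return 300 + weighted * 550

def simulate(subscores, changes):
    """A toy 'score simulator': apply hypothetical changes to the sub-scores
    (clamped to [0, 1]) and return the estimated new score."""
    adjusted = {k: min(1.0, max(0.0, v + changes.get(k, 0.0))) for k, v in subscores.items()}
    return illustrative_score(adjusted)

consumer = {
    "payment_history": 0.70,          # e.g., one recent late repayment
    "amounts_owed": 0.80,
    "length_of_credit_history": 0.50,
    "new_credit": 0.90,
    "credit_mix": 0.60,
}
print(round(illustrative_score(consumer)))                  # current estimate (~691 here)
print(round(simulate(consumer, {"new_credit": -0.30})))     # what if new credit is opened? (~674)
print(round(simulate(consumer, {"amounts_owed": 0.15})))    # what if balances are paid down? (~715)
```

The point of the sketch is not the numbers but the structure: once component weights are disclosed, a what-if tool of this kind is precisely what turns partial transparency into actionable gaming advice.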
Hence, the FICO Score case shows how gaming the system can happen through monetising the three layers of technical transparency. Credit users who buy an informational report on how the algorithm technically works and performs can improve their credit scores and thereby gain more economic and social benefits.
3.2 Tinder Concierge, and other dating apps
Having discussed the FICO Score case, we now turn to a slightly more speculative example, namely Tinder Concierge – a coaching service announced by Tinder in 2020 which promised Tinder users who were willing to pay for it help from Tinder employees to craft better performing profiles (Brown, 2020). The service has not (yet) been released in its originally advertised form, but some of its features have been integrated into some of Tinder’s premium membership packages. Despite its somewhat speculative nature, we find this example especially useful because 1) the announced features are almost an ideal type of the coaching-based GaaS we are interested in, and 2) the fact that Tinder announced this feature publicly clearly shows that the industry is in fact already thinking along these lines. In a fast-paced platform economy, anticipating future developments by scrutinising the experiments platforms are engaged in is a good way to ensure one’s critical analyses will not only be reactive in nature. Moreover, other services in the dating app ecosystem are also moving in similar directions with premium memberships and features that promise to help one perform better in the (algorithmic curation and ranking of the) dating app in question. Where Tinder promised the expertise of real employees to those willing to pay a premium, other dating apps have relied on AI-driven coaching bots rather than human experts. For example, Match.com has developed an AI dating chatbot named “Lara” that serves as a personal love coach by using natural language processing (Li, 2019). Some other dating companies, like eHarmony, Happn, and Loveflutter, have also developed AI-driven love coaches (Tuffley, 2021; Ghosh, 2017; Silva, 2018). These coaches promise to help users navigate dates and optimise profiles to get more dating opportunities (Tuffley, 2021).
Let us now look at Tinder and Tinder Concierge in some detail. Algorithmic ranking and curation obviously play a central role on Tinder. Users create a profile which contains several pictures, personal information (e.g., age, gender, what one is looking for on Tinder) and information on preferences (e.g., hobbies, music). When using the app, one profile at a time is shown to you based on an algorithmic ranking procedure that is largely opaque. It seems obvious that the information one has provided oneself plays a significant role, but one’s history of interactions with the profiles one has been shown is also hypothesised to play a role. We deliberately write ‘hypothesised’ because as a user you can only guess how the algorithmic ranking works. There is a whole ecosystem of dating websites and communities trying to reverse-engineer Tinder’s algorithmic ranking by modelling it on the concept of Elo rating systems used more generally to rank players in chess and other competitive games.7 Academics, in turn, have also noted how Tinder’s algorithmic opaqueness not only poses methodological challenges in terms of research but also introduces uncertainties to which users can respond in a variety of ways (see, e.g., Duguay, 2017; Courtois & Timmermans, 2018; Wang, 2023). A core presumed feature of Tinder’s algorithm – often discussed online – is an indirect attractiveness score assigned to profiles by proxy, based on how others have interacted with that profile (e.g., if many people swipe right on your profile you are presumed to be attractive). Another often discussed assumption8 is that one’s own swiping behaviour also serves as a proxy for one’s attractiveness, where being a more ‘picky’ swiper is assumed to be a proxy for being more attractive and being a very eager swiper is assumed to be a proxy for being a less desirable member of the dating pool.
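As an illustration of the kind of reverse-engineering these communities engage in, the sketch below implements a standard Elo-style update and applies it to swipes. This is a toy rendering of the community’s algorithmic imaginary, not Tinder’s actual (undisclosed) ranking: the K-factor, the starting ratings, and the decision to treat a right-swipe on one’s profile as a ‘win’ are all hypothetical assumptions.

```python
def expected_outcome(rating_a: float, rating_b: float) -> float:
    """Standard Elo expectation: the probability that A 'wins' against B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_after_swipe(swiped: float, swiper: float, liked: bool, k: float = 32) -> float:
    """Treat a right-swipe on one's profile as a 'win' and a left-swipe as a 'loss':
    being liked by a highly rated swiper raises one's score more than being liked
    by a lower-rated one, and vice versa for rejections."""
    outcome = 1.0 if liked else 0.0
    return swiped + k * (outcome - expected_outcome(swiped, swiper))

profile = 1200.0
profile = update_after_swipe(profile, swiper=1600.0, liked=True)   # sizeable boost: liked by a 'desirable' profile
profile = update_after_swipe(profile, swiper=1000.0, liked=False)  # sizeable penalty: rejected by a 'less desirable' one
print(round(profile))  # ~1204 in this toy run
```

Note that the often-discussed assumption that one’s own swiping selectivity also feeds into one’s score is not captured by this basic Elo scheme, which underlines how speculative such imaginaries remain.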
Now, this is not the place to explore these matters in (more) detail. What is interesting though is the fact that the “algorithmic imaginaries” (Bucher, 2017) which are being discussed on dating websites and communities are also, directly or indirectly, constitutive of understanding Tinder as an algorithmic service that can – or maybe even should – be gamed. Here we also see the different proxy layers from Section 2.3 again. One can try to game the matches one will get directly by changing information one puts on one’s profile oneself (e.g., if you change your age on your profile, your profile will be shown to people who have indicated they are looking within a specific age range). But as the Tinder Elo community shows, there are also more indirect ways of trying to influence the relevant proxies by, for instance, experimenting with different types of swiping behaviour to – hopefully – indirectly influence the proxies that determine one’s performance in the algorithm.
Within this context of Tinder as a permanent object of gaming attempts by the community, Tinder Concierge was announced in March 2020 as a premium service that would cost between $20 and $50 a month (different Tinder users got different pop-ups announcing the service at different price points9). The pop-up shown to some users read as follows:
“Our Concierge service may be headed your way. For $20, you’ll get access to our team of experts who will help you craft the perfect profile. Go on, have a taste of the good life.”10
The GaaS offered here clearly does not revolve around offering extensive technical transparency on Tinder’s algorithm directly to the user. Rather, Concierge, in its announced form, could be a great example of embodied transparency. The service promises access to Tinder employees who presumably have a good understanding of how the algorithm works (they are “experts”) and who can, as it were, make the algorithm’s workings indirectly transparent to the user by suggesting changes that should improve the profile’s performance on Tinder. As for the specific suggestions the Tinder experts could make, we can only guess, because at the moment of writing Tinder has not officially launched the service in its advertised form.11
If the service were ever to materialise in its full advertised form, we could imagine several GaaS features. First of all, one could imagine the Tinder experts offering advice on how to build one’s profile: which photos, profile texts, and stated interests tend to work well. Such advice would basically come down to advice on gaming the first input data layer. If swiping behaviour is indeed also an important proxy in the matching algorithm, one could also imagine what could be called ‘behavioural advice’ on swiping being part of the Concierge service.
3.3 The phenomenon of GaaS
These two examples reflect a general phenomenon: algorithmic transparency can be monetised as part of a gaming service. As argued by Danielle Citron and Frank Pasquale, we are increasingly living in a “scored society” where predictive algorithms are applied to rank individuals in important parts of their lives (Citron & Pasquale, 2014). For instance, different algorithms are created to rank individuals according to how likely they are to get a job, commit a crime, default on bills, or find a date. The cases of Tinder (or other dating apps) and the FICO Score are typical examples of this general trend toward a scored society. On the one hand, these algorithm-driven ranking systems are often embedded with a series of reward and punishment mechanisms, which may encourage users to game the system to their best advantage. For example, if Uber drivers are algorithmically ranked with low scores, they are immediately punished by being recommended fewer passengers or even being removed from the platform (Muldoon & Raekstad, 2023). On the other hand, the opaque nature of these ranking systems discourages users from gaming the systems for their own benefit. As we mentioned, the three layers of proxies make it rather difficult to know how algorithms actually work and how to improve one’s performance, especially when it comes to the second and third layers.
This gap between users’ need to improve their ranking and the actual difficulty of knowing how to do so opens room for gaming as a commercial service. Such GaaS can work through some degree of algorithmic transparency, as improving performance often requires some knowledge of how the algorithm works. This is what the FICO Score example shows: the paid report discloses the scoring algorithm’s breakdown into five components, letting credit consumers know how to improve their scores along these five factors. Similarly, coaching services in dating apps can provide users with more embodied knowledge of how their behaviour as a user interacts with the (largely) opaque algorithm(s) of the service they are using. Coaches can use everyday language to explain or indicate how, roughly speaking, factors such as profile information and photo selection influence matches. Besides providing general information about how platform algorithms function, GaaS can also offer users personalised knowledge about how their particular situation, such as not ranking high in the dating market, is influenced by the platform’s algorithms. Based on these personalised insights, coaching-based GaaS could suggest tailored methods to improve performance on the platform.
GaaS can thus be understood as a process of monetising algorithmic transparency in the scored society, where service providers can charge their users a premium for ‘transparency benefits’ which those users can use to make their profiles or content perform better in algorithmic ranking or curation processes.12 In an environment that is controlled by the service provider, however, such a premium service will typically not offer ‘real’, blanket transparency, but rather a more strategic, partial transparency wrapped in an additional service where the same service provider offers ‘tools’ and ‘advice’ to help ‘optimise’ one’s content or profile for algorithmic ranking. Put simply, the alleged risk of transparency – i.e., gaming the system – is actually turned into a monetisation feature. In the following section, we will explore the potential ethical challenges of GaaS.
4. Exploring ethical challenges of GaaS
In this section we discuss some of the possible ethical challenges posed by GaaS. Because GaaS as we describe it is a recent phenomenon, varieties of which we expect to become more frequent in the near future, this section is meant as a first exploration.
4.1 Consumer-unfriendly incentive structure and exploitation
First, GaaS offered by the service itself introduces (or reinforces) an incentive structure that is hostile to consumers. To see why the incentive structure is hostile, it should first be noted that GaaS can only be offered against a background of sufficient opaqueness. If a service or platform relies on algorithmic curation/ranking which is completely transparent and understandable, GaaS will quickly lose its value proposition. Consider the announced (but later abandoned) Tinder Concierge service; this service would only be interesting to the public because the Tinder algorithm continues to be seen as enigmatic. So, if GaaS is to be pursued as a commercial strategy, it introduces an incentive for maintaining (or even introducing new) levels of opaqueness as a necessary precondition for offering GaaS. Such an outcome would be disappointing given the recent push for (even more) transparency obligations in, for instance, the European legislative agenda for the digital economy (DSA, DMA, AI Act). The strategic market reaction of finding ways to monetise a desire for transparency in ways that, if anything, introduce incentives to not practise genuine transparency, could indeed be seen as both ironic and cynical. It remains to be seen, however, whether variants of GaaS will actually be compliant with, for example, the transparency provisions in the DSA.
Besides the hostile incentive structure itself, one could also question what type of relationship between the service provider and the user GaaS results in. GaaS is premised on actively fostering opaqueness to be able to offer a service that has users pay an additional fee to alleviate/overcome negative externalities resulting from the deliberate opaqueness. One possible way of characterising such a relationship is as exploitative. Philosophically, there is no consensus on the precise meaning and definition of exploitation (see Zwolinski et al., 2022 for an extensive overview of different views and debates). A minimal, uncontroversial understanding of exploitation is that exploitation involves taking unfair13 advantage of one’s target by using a vulnerability of the target (either a personal characteristic, or something in the target’s environment) for one’s own benefit. When we consider Tinder Concierge as an example of GaaS, a case can indeed be made for the service having significant exploitative characteristics. Users use Tinder to find something that is very important to them: dates, love, intimacy, companionship. To ‘gain access’ to those ‘goods’, users are made very aware of the fact that they have to perform in a competitive dating market that is structured around the illustrious proprietary Tinder algorithm (of which there exists a lively algorithmic imaginary, as we discussed in Section 3.2). So, what we have is 1) a large group of users with a strong desire for particular outcomes (matches, dates, love, intimacy), and 2) a largely opaque (algorithmic) gate-keeping mechanism. Combined, these two circumstances constitute the fertile soil for deliberately exploitative commercial practices. The strong desires for matches, dates, love, and intimacy serve as an exploitable vulnerability. Tinder’s projected premium service Tinder Concierge would certainly qualify as an exploitative strategy that can be used to take advantage of those vulnerabilities for the benefit of Tinder. And even though Tinder Concierge ultimately did not get implemented, already existing Tinder premium services such as Plus, Gold, and Platinum also bear the marks of a similar exploitative logic. These premium services promise a range of ‘power-ups’ to make you more competitive vis-à-vis your Tinder competitors (Tinder, n.d.-a). For example, for the most expensive subscription called Platinum, Tinder writes: “Increase your match-making potential and enjoy most of Tinder’s premium features with Tinder Platinum™! Dating online just got easier. See someone you’d love to meet and can’t wait to match? As a Platinum subscriber, you can attach a note to every Super Like you send, increasing your match-making potential by up to 25%” (emphasis added, Tinder, n.d.-b). The same analysis of exploitation applies in an even more straightforward manner to FICO Scores. Credit scores clearly determine one’s ability to access a wide range of essential services, as well as the conditions (e.g., interest rates) under which one can access those essential services. Selling premium transparency services which are premised on people’s real fear of bad/worsening credit qualifies as an exploitative practice under most conceptions of exploitation.
One may object that Tinder is ‘just a dating app’ which users do not need to use at all if they don’t want to. Deciding to use the app also means consenting to Tinder attempting to pressure you into purchasing premium services that can help you ‘game’ Tinder’s algorithmic match making. There are three brief answers to this possible objection. First, the idea that, as long as consumers are not actively forced to use a service, anything goes is simply wrong. In the EU there is, for instance, unfair commercial practice law (Directive 2005/29/EC) which forbids many types of misleading and aggressive commercial practices that take unfair advantage of consumers. Second, the sentiment that Tinder is ‘just a dating app’ is misguided. Love, intimacy, and companionship are basic human needs and people are in fact increasingly turning to apps like Tinder to help fulfil those needs. Moreover, being in a relationship with someone also tends to privilege one societally since it allows one to, for instance, apply for a mortgage together. There are socio-economic implications of one’s ‘dating status’, meaning that a popular dating app cannot be dismissed as ‘just a dating app’.
Third and last, even if one were to (still) think that Tinder is not important and the exploitative characteristics of the user-GaaS-provider relationship are therefore not worrisome, we should still think of other contexts in which GaaS could be introduced and where it would be considered (more) problematic. The FICO Score example, with its paid-for services that help users who can afford them to optimise their credit scores, comes to mind. Credit scores can play such a decisive role in people's lives – they can determine whether you can get the right mortgage or not – that GaaS(-like) services in the credit scoring context are a legitimate concern.
4.2 Pay-to-win and equality
If one uses GaaS in the hope of performing better in an algorithmic ranking/matching/curation scenario in order to, in the end, secure better outcomes, one is basically engaging in what in the videogame context is known as pay-to-win. Pay-to-win entails paying for a competitive advantage (e.g., by receiving better weapons or health upgrades), often without the absolute guarantee of actually winning; you still have to actually defeat opponents with your bought advantages helping you. So, technically speaking, pay-to-win means pay-to-be-more-likely-to-win in most cases. Paid-for advantages typically ‘stack’, so the more you pay, the more advantages you can activate and/or the stronger those advantages are. In video games, pay-to-win features are usually seen in a negative light by most players, but such features are still becoming more widely available due to the rise of the lucrative freemium model (Tregel et al., 2020; Sax & Ausloos, 2021).
Pay-to-win in video games is mainly seen as a violation of the integrity of the game itself, which is supposed to be won through skill rather than willingness to pay (Alha et al., 2018; Freeman et al., 2022). It is frowned upon, but there are no direct societal implications of paying to win a videogame. This might change when pay-to-win ventures beyond the confines of video games. When GaaS is offered in contexts that were discussed earlier – dating, credit scores – its pay-to-win character raises more serious questions on equality and distributive justice.
A core question to ask is which burdens and benefits in society we want to be assigned and distributed according to ability and willingness to pay. This article is not the place to develop a full theory of which goods and services should be accessible to which citizens under which conditions. What should be observed, however, is the fact that all democratic theories that have something to say on the proper ordering and functioning of society also have something to say about the extent to which power and money should (not) be allowed to influence democratic and social institutions. For example, what Rawls has called the “primary social goods” (Rawls, 1971, pp. 90-95) should be equally accessible to all, regardless of one’s position in society. Another example is Walzer’s (1983) theory of spheres of justice which argues that different spheres in society should be governed by their own distributive principles appropriate to the sphere they govern. A concrete implication of this argument is that many spheres of society should be organised according to egalitarian principles where ability to pay for a better outcome should not be considered legitimate. Consider education or health care. Most convinced democrats would agree that the proper ‘logic’ of those spheres is such that people should be able to pursue education based on their educational abilities and merit (not spending power), and health care should be widely accessible in an equal manner for all citizens and should be assigned based on medical needs (not spending power).
When GaaS is offered in societal contexts where access to and distribution of goods – e.g., housing, labour, education, relationships – matters to the people involved, offering such a pay-to-win option can upset principles of equality. Put differently, GaaS can be a driver of illegitimate inequality when it appears in contexts where ability/willingness to pay is not considered a legitimate principle of distribution. Again, the FICO Score case does not seem to require much interpretation in this context. Given the fact that one’s credit score plays a pivotal role in one’s life and one cannot escape the disciplinary influences of credit scores, it follows that we can – and should – question whether the privileged ability to pay for better insights in one’s credit score is in line with basic democratic principles of equality. If we then return to the Tinder Concierge example, it is easy enough to contend that a dating app is ‘only about dating’ and that GaaS as pay-to-win therefore does not raise serious ethical concerns. We would like to briefly indicate that even in the dating context one can raise legitimate questions on pay-to-win. With house prices and rent being notoriously expensive in many parts of the world, being able to split the rent or a mortgage with a partner makes a big difference to one’s financial possibilities. Seen from this perspective, online dating is not ‘just’ about dating; it is also partly about one’s ability to build a stable financial future for oneself with a partner. It also follows that pay-to-win GaaS in online dating is – at least partly – intertwined with questions of distributive justice and equality.
4.3 Commodifying transparency as the erosion of a democratic value
In regulations like the GDPR, the AI Act, or the proposed AI Bill of Rights, transparency is construed as a crucial democratic value.14 It fosters democratic participation, empowering citizens to not only understand algorithmic decision-making processes but also to contest unfair practices, question biases, and actively engage in shaping algorithms that influence their lives (Binns, 2018; Citron & Pasquale, 2014). However, if transparency is commodified, it can negatively affect this democratic participation. Below we explore a few potential effects.
First, this commodification of transparency can reduce the fundamental obligation and right of algorithmic transparency to a commodity, making it easier for companies to avoid legal regulation. As we explained in Section 2, algorithmic transparency is supposed to be a legal obligation, where citizens should have the right to know how algorithms generate their results, requiring companies to be as transparent as possible. When transparency is commodified, it gives companies another excuse to dodge regulations on their transparency policies, as complying with them may harm the profits made by monetising transparency – transparency itself becomes more of a business than a legal requirement. This dominant commercial logic can indirectly justify the insufficient transparency of algorithms. If the algorithm is largely obscured, the blame shifts from the company to the users who are unable or unwilling to pay for the gaming service. A related issue is fairness, which was mentioned in the previous section; here, however, it concerns the monetisation of transparency itself rather than the gaming of the service. Financially vulnerable consumers are targeted and influenced more often by biased algorithms (Eubanks, 2018). For instance, poor credit users can be trapped more easily in a debt cycle if the algorithm calculates their credit scores unfairly (Wang, 2022; Citron & Pasquale, 2014). However, when transparency is commodified, it may become a luxury affordable only to the wealthy, leaving financially vulnerable consumers without access. As a result, those who need transparency the most are often the ones who cannot afford it.
A second effect is that commodifying algorithmic transparency shifts the focus to profit incentives, which can lead to the manipulation of transparency to serve commercial interests, while ignoring ethical issues of bias and discrimination. Studies show that transparency involves not just informational disclosure but also power dynamics, where companies might steer user behaviour to their benefit by selectively disclosing information (Wang, 2022; Ananny & Crawford, 2018; Weller, 2017). In the case of monetised transparency, the provided information can be manipulated in a way that only focuses on aspects related to commercial services, deliberately hiding parts of the algorithm linked to potential bias. For instance, dating apps guide users on selecting better photos or crafting an attention-grabbing bio, but they may deliberately withhold information about how their algorithms can selectively show profiles based on racial and sexual biases (Conner, 2023). This type of selective transparency creates a smokescreen or ‘transparency washing,’ where the service appears transparent, but in reality, it avoids revealing critical details that could expose biases or unfair practices (Wang, 2022; Weller, 2017). This lack of comprehensive information makes it challenging for users and regulators to try to assess and address potential unfair issues.
A last point is that the commodification of transparency may foster a passive consumption model, turning active citizens into passive consumers of GaaS. Consumers become more passive because they tend to focus on improving their matchmaking ranking by following the disclosed guides or coaching provided through GaaS, without actively engaging in critical thinking or resisting unfair practices (see Ananny & Crawford, 2018 for a similar argument). In line with Habermas’s critique of instrumental rationality, this commodification of transparency not only reduces democratic values to mere commodities but also erodes users’ willingness to resist this trend (Habermas, 1987). When transparency becomes a commodity for exchange, it assumes an equal relationship between buyers (end users) and sellers (platforms, service providers): users pay premiums, and platforms offer the service, allowing users to game (elements or functionalities of) the platform or service for benefits. However, this framing overlooks the existing power asymmetry between users and service providers (Zuboff, 2019). As mentioned earlier, GaaS may impose consumer-hostile or exploitative incentive structures, turn (partial, strategic) transparency into a privilege for those with sufficient economic means, and manipulate the presentation of transparency to align with commercial interests. This commodification of algorithmic transparency can have an ‘ideological conditioning’ effect, undermining users’ inclination to critically assess the unfair practices of GaaS (Wang, 2022, p. 17).
Conclusion
In this article we have explored the emerging phenomenon of GaaS as well as its normative implications. One way to understand GaaS is as a market response to transparency obligations: the perceived risk of gaming the system that results from mandated transparency is transformed into a premium service, which allows the provider both to profit and to control how transparency is practised, often in a highly restricted yet seemingly empowering manner. Seen from this perspective, GaaS is an ethically dubious practice which (1) (further) commodifies the core democratic value of transparency that is so central to recent legislative initiatives for the digital economy, (2) exhibits exploitative tendencies, and (3) introduces an infrastructure for pay-to-win applications that may lead to inequality and unfairness.
Our first exploration of GaaS raises several questions that deserve further research. In this article we discussed two main cases to aid our explorative analysis. The FICO Score case illustrates how the (more) technical dimension of transparency is already being monetised. Because of the inescapable, central role credit plays in people’s lives, the GaaS practices observed around FICO Scores can be evaluated as exploitative and problematically pay-to-win. The Tinder Concierge example offered a more speculative insight into GaaS applications that exploit the (more) embodied or contextual nature of transparency, with a (potentially) heavier emphasis on coaching services provided by the service itself. Future GaaS(-like) services in other contexts may look substantially different and, as a result, raise different questions and challenges. We have tried to describe the more general features of GaaS on the basis of the FICO Score and Tinder Concierge examples, but it remains possible that other cases will require us to tweak our understanding of GaaS.
The phenomenon of GaaS also clearly underlines the complex nature of transparency obligations and of practising transparency in the digital economy (Leerssen, 2023). As regulatory pressure to comply with transparency obligations grows stronger (as exemplified by the DSA), one can expect service providers in the digital economy to look for creative ways to turn this pressure into commercial opportunities. GaaS is one such example, and because of its potentially exploitative nature it raises questions about manipulative design and choice architectures (Sax, 2021). In this way GaaS also illustrates that unfair commercial practice law will remain of central importance alongside newer legislation (DSA, DMA, AI Act) with a strong focus on transparency obligations (Helberger et al., 2022). The precise, creative ways in which service providers in the platform economy will react to regulatory pressure remain difficult to predict. We hope that with our explorative conceptualisation of GaaS we have added a tool to the analytical toolbox that helps anticipate future developments in the platform economy.
References
Alha, K., Kinnunen, J., Koskinen, E., & Paavilainen, J. (2018). Free-to-play games: Paying players’ perspective. Proceedings of the 22nd International Academic Mindtrek Conference, 49–58. https://doi.org/10.1145/3275116.3275133
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Bailey, B. (2025, January 1). Understanding Tinder ELO: Boost your matches with effective strategies. Roast. https://roast.dating/blog/tinder-elo
Bambauer, J., & Zarsky, T. (2018). The algorithm game. Notre Dame Law Review, 94(1), 1–48.
Barr, K. (2023, April 3). Yes, Tinder is working on a $500 subscription tentatively called ‘Tinder Vault’. Gizmodo. https://gizmodo.com/tinder-dating-app-tinder-vault-hinge-bumble-1850295751
Bayley, K. (2020, July 29). How to calculate and increase your Tinder Elo score. TechJunkie. https://social.techjunkie.com/calculate-increase-tinder-elo-score/
Binns, R. (2018). Algorithmic accountability and public reason. Philosophy & Technology, 31(4), 543–556. https://doi.org/10.1007/s13347-017-0263-5
Birkinshaw, P. J. (2006). Freedom of information and openness: Fundamental human rights. Administrative Law Review, 58(1), 177–218.
Brandeis, L. D. (1913). What publicity can do. Harper’s Weekly. https://www.sechistorical.org/collection/papers/1910/1913_12_20_What_Publicity_Ca.pdf
Brown, A. (2020, March 3). Exclusive: Tinder is launching a concierge service to save your love life. Forbes. https://www.forbes.com/sites/abrambrown/2020/03/03/exclusive-tinder-is-launching-a-concierge-service-to-save-your-love-life/
Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086
Carman, A. (2019, March 19). Tinder says it no longer uses a ‘desirability’ score to rank people. The Verge. https://www.theverge.com/2019/3/15/18267772/tinder-elo-score-desirability-algorithm-how-works
Citron, D. K., & Pasquale, F. A. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1–34.
Cofone, I. N., & Strandberg, K. J. (2019). Strategic games and algorithmic secrecy. McGill Law Journal, 64(4), 623–663. https://doi.org/10.7202/1074151ar
Conner, C. T. (2023). How sexual racism and other discriminatory behaviors are rationalized in online dating apps. Deviant Behavior, 44(1), 126–142. https://doi.org/10.1080/01639625.2021.2019566
Courtois, C., & Timmermans, E. (2018). Cracking the Tinder code: An experience sampling approach to the dynamics and impact of platform governing algorithms. Journal of Computer-Mediated Communication, 23(1), 1–16. https://doi.org/10.1093/jcmc/zmx001
Diakopoulos, N. (2020). Transparency. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI (pp. 196–213). Oxford University Press. https://academic.oup.com/edited-volume/34287/chapter/290661457
Duguay, S. (2017). Dressing up Tinderella: Interrogating authenticity claims on the mobile dating app Tinder. Information, Communication & Society, 20(3), 351–367. https://doi.org/10.1080/1369118X.2016.1168471
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
European Parliament and Council. (2005). Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market. http://data.europa.eu/eli/dir/2005/29/oj
European Parliament and Council. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence. http://data.europa.eu/eli/reg/2024/1689/oj
faffner100. (2021, June 15). Being picky works like a charm for my friend’s account. Reddit. https://www.reddit.com/r/SwipeHelper/comments/o0phzf/being_picky_works_like_a_charm_for_my_friends/
Freeman, G., Wu, K., Nower, N., & Wohn, D. Y. (2022). Pay to win or pay to cheat: How players of competitive online games perceive fairness of in-game purchases. Proceedings of the ACM on Human-Computer Interaction, 6(CHI PLAY), 1–24. https://doi.org/10.1145/3549510
Ghosh, S. (2017, April 14). Dating app Happn is getting paid subscriptions and will use AI to recommend matches. Business Insider. https://www.businessinsider.com/happn-is-getting-paid-subscriptions-and-will-use-ai-to-recommend-matches-2017-4?international=true&r=US&IR=T
Habermas, J. (1987). The theory of communicative action, volume 2: Lifeworld and system: A critique of functionalist reasoning (T. McCarthy, Trans.). Beacon Press.
Haresamudram, K., Larsson, S., & Heintz, F. (2023). Three levels of AI transparency. Computer, 56(2), 93–100. https://doi.org/10.1109/MC.2022.3213181
Helberger, N., Sax, M., Strycharz, J., & Micklitz, H.-W. (2022). Choice architectures in the digital economy: Towards a new understanding of digital vulnerability. Journal of Consumer Policy, 45(2), 175–200. https://doi.org/10.1007/s10603-021-09500-5
Hood, C., & Heald, D. (Eds.). (2006). Transparency: The key to better governance? (1st ed.). British Academy. https://doi.org/10.5871/bacad/9780197263839.001.0001
Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633–705.
Larsson, S., & Heintz, F. (2020). Transparency in artificial intelligence. Internet Policy Review, 9(2). https://doi.org/10.14763/2020.2.1469
Lauer, J. (2017). Creditworthy: A history of consumer surveillance and financial identity in America. Columbia University Press.
Leerssen, P. J. (2023). Seeing what others are seeing. Studies in the regulation of transparency for social media recommender systems [Doctoral dissertation, University of Amsterdam]. https://hdl.handle.net/11245.1/18c6e9a0-1530-4e70-b9a6-35fb37873d13
Li, H. (2019, June 15). The online dating industry loves artificial intelligence. Synced. https://syncedreview.com/2019/06/15/the-online-dating-industry-loves-artificial-intelligence/
m8keup. (2020, March 3). What is tinder concierge? Reddit. https://www.reddit.com/r/Tinder/comments/fd2v5m/what_is_tinder_concierge/
Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. Proceedings of the Conference on Fairness, Accountability, and Transparency, 279–288. https://doi.org/10.1145/3287560.3287574
Morozovaitė, V. (2024). Taming digital influence: Hypernudging and the role for European competition law [Doctoral dissertation, Utrecht University]. https://doi.org/10.33540/2080
Muldoon, J., & Raekstad, P. (2023). Algorithmic domination in the gig economy. European Journal of Political Theory, 22(4), 587–607. https://doi.org/10.1177/14748851221082078
myFICO. (n.d.-a). Choose a credit report. myFICO. https://www.myfico.com/products/fico-score-credit-reports
myFICO. (n.d.-b). How it works. myFICO. https://www.myfico.com/products/fico-score-how-it-works
myFICO. (n.d.-c). What’s the difference between FICO scores and non-FICO credit scores? myFICO. https://www.myfico.com/credit-education/fico-scores-vs-credit-scores
myFICO. (2021, April 19). myFICO 20th anniversary – timeline of our first 20 years [YouTube video]. https://www.youtube.com/watch?v=ATwgHpOQ6bw
Olmeda, F. (2022). Towards a statistical physics of dating apps. Journal of Statistical Mechanics: Theory and Experiment, 2022(11), 113501. https://doi.org/10.1088/1742-5468/ac9bed
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Petre, C., Duffy, B. E., & Hund, E. (2019). “Gaming the system”: Platform paternalism and the politics of algorithmic visibility. Social Media + Society, 5(4). https://doi.org/10.1177/2056305119879995
Rawls, J. (1971). A theory of justice. Harvard University Press.
Sax, M. (2021). Between empowerment and manipulation. Kluwer.
Sax, M., & Ausloos, J. (2021). Getting under your skin(s): A legal-ethical exploration of Fortnite’s transformation into a content delivery platform and its manipulative potential. Interactive Entertainment Law Review, 1–24. https://doi.org/10.4337/ielr.2021.0001
Silva, D. (2018). Dating apps use artificial intelligence to help search for love. Phys.org. https://phys.org/news/2018-11-dating-apps-artificial-intelligence.html
Tinder. (n.d.-a). Subscription tiers. Tinder. https://tinder.com/feature/subscription-tiers
Tinder. (n.d.-b). Tinder platinum. Tinder. https://tinder.com/feature/platinum
Tregel, T., Schwab, M. C., Nguyen, T. T. L., Müller, P. N., & Göbel, S. (2020). Costs to compete – Analyzing pay to win aspects in current games. In M. Ma, B. Fletcher, S. Göbel, J. Baalsrud Hauge, & T. Marsh (Eds.), Serious games (pp. 177–192). Springer International Publishing. https://doi.org/10.1007/978-3-030-61814-8_14
Tuffley, D. (2021, January 17). Love in the time of algorithms: Would you let your artificial intelligence choose your partner? The Conversation. https://theconversation.com/love-in-the-time-of-algorithms-would-you-let-artificial-intelligence-choose-your-partner-152817
Walzer, M. (1983). Spheres of justice: A defense of pluralism and equality. Basic Books.
Wang, H. (2022). Transparency as manipulation? Uncovering the disciplinary power of algorithmic transparency. Philosophy & Technology, 35(3), 69. https://doi.org/10.1007/s13347-022-00564-w
Wang, H. (2023). Algorithmic colonization of love: The ethical challenges of dating app algorithms in the age of AI. Techné: Research in Philosophy and Technology, 27(2), 260–280. https://doi.org/10.5840/techne202381181
Weller, A. (2017). Transparency: Motivations and challenges (Version 2). arXiv. https://doi.org/10.48550/ARXIV.1708.01870
Zuboff, S. (2019). The age of surveillance capitalism. Profile Books.
Zwolinski, M., Ferguson, B., & Wertheimer, A. (2022). Exploitation. In E. N. Zalta & U. Nodelman (Eds.), Stanford encyclopedia of philosophy. https://plato.stanford.edu/archive
Footnotes
1. Special thanks to Samantha Bradshaw, Camille Girard-Chanudet, Frédéric Dubois, and Francesca Musiani for their helpful comments and suggestions.
2. Full or complete transparency is, of course, an (almost) unintelligible notion to begin with. Transparency is always transparency of something, and in the digital economy it is difficult to see how, for instance, an online platform could be completely transparent. Even if a person gains access to all existing internal documentation of said platform, there are still ways in which the platform is not fully transparent. Top-level executives may have made decisions on the basis of informal meetings that are not documented, and intentions stated in documents may not correspond to the real intentions existing only in the heads of senior management.
3. The term ‘successful relationship’ should be read as ‘a type of relationship or interaction that the persons who are dating consider to be satisfying relative to whatever standard they themselves deem relevant’.
4. FICO does show the math behind each data point, but users can also develop a more nuanced, intuitive, and contextual understanding of how its algorithm works on the basis of these disclosed technical explanations. Similarly, a dating app’s dating coach is not a purely embodied form of transparency either, as some coaching experts might provide basic technical explanations of the algorithm before giving more nuanced and contextualised advice on how it influences users’ matches.
5. FICO claims that the FICO Score has a 90 percent market share among top lenders in the US (myFICO, n.d.-a).
6. For a brief explainer on the history of myFICO, see myFICO (2021).
7. Elo rating systems (named after physics professor Arpad Elo) are used in, for instance, chess to assign values to individual players in order to predict their performance in tournaments (see, e.g., Olmeda, 2022). As early as 2019, The Verge reported that Tinder had stopped using ‘desirability scores’ in its algorithmic ranking (Carman, 2019). The fact that a literal desirability score is allegedly no longer used does not imply that certain proxies for desirability are not still important for Tinder’s algorithmic ranking of profiles. At the moment of writing, there are still many dating websites and communities publishing detailed breakdowns of the hypothesised workings of Tinder’s Elo score (see, e.g., Bailey, 2025). There are also websites that promise to help you calculate (or, better, approximate) your own Tinder Elo score (see, e.g., Bayley, 2020).
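For readers unfamiliar with the mechanics, a minimal sketch of the standard chess formulation of Elo (and emphatically not of Tinder’s own, undisclosed ranking method) is:
$$E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad R_A' = R_A + K\,(S_A - E_A),$$
where $R_A$ and $R_B$ are the two players’ current ratings, $E_A$ is player A’s expected score, $S_A \in \{0, \tfrac{1}{2}, 1\}$ is the actual result, and $K$ is a sensitivity constant. How, or whether, a dating app adapts such an update rule to profile ranking is not publicly documented.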
8. See, for instance, this Reddit thread on the subreddit /r/SwipeHelper (faffner100, 2021).
9. On Reddit, this led several Tinder users to compare the prices they were shown for the service and ask whether the price they were shown said something about their attractiveness (m8keup, 2020).
10. The screenshot is included in a Forbes article on Tinder Concierge (Brown, 2020).
11. The Tinder Concierge service did, however, resurface as part of an even more expensive $500-a-month ‘Tinder Vault’ service that was being piloted in the spring of 2023 (Barr, 2023).
12. There are other, more general types of GaaS. For instance, users can be charged a premium to receive more direct ‘gaming benefits’, which serve as de facto pay-to-win features that increase one’s chances of performing better in the (ranking) systems one interacts with. For this paper, however, we are more interested in the particular phenomenon of GaaS related to the monetisation of algorithmic transparency.
13. By using the word ‘unfair’ here, the proposed minimal understanding of exploitation is a normative one: exploitation in this sense is understood as wrong in principle (overriding reasons for deeming the exploitation acceptable all things considered can still exist). A minimal understanding which does not incorporate the word ‘unfair’ is of course also possible, which would lead to a more neutral understanding of exploitation. In that case, a football player making use of a weakness in the opponent’s defensive positioning would also ‘exploit’ that particular vulnerability for their own benefit. But no one would call that type of exploitation wrong; it is part of the game. We would, however, generally consider it unfair if the football player were, for instance, to feign an injury to cause the opponents to huddle around him to check on him. If the football player then exploits the defensive disorganisation he caused by faking an injury, we would say he exploited his opponents’ sportsmanship in an unfair (i.e., ethically problematic) manner.
14. For example, the AI Act highlights this democratic value of algorithmic transparency in Recital 59: “transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress” (Regulation 2024/1689).