Governance by algorithms

Francesca Musiani, MINES ParisTech, France, francesca.musiani@mines-paristech.fr

PUBLISHED ON: 09 Aug 2013 DOI: 10.14763/2013.3.188

Abstract

Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.
Citation & publishing information
Received: July 2, 2013 Reviewed: July 8, 2013 Published: August 9, 2013
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The author has declared that no competing interests exist that have influenced the text.
Keywords: Internet governance, Algorithms, Rules, Search engine, Recommendation
Citation: Musiani, F. (2013). Governance by algorithms. Internet Policy Review, 2(3). https://doi.org/10.14763/2013.3.188

Note

This article is partially a recollection and account of the Governing Algorithms conference held at New York University on May 16-17, 2013.

Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes “algorithms” beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. At the same time, they are “invoked as powerful entities that control, govern, sort, regulate, and shape everything from financial trades to news media” (Governing algorithms, 2013). Recently (May 16-17, 2013), an interdisciplinary event organised at New York University addressed this issue through an interesting lens: that of governance – governance by algorithms in addition to governance of algorithms.

Taking stock of the event, which this author attended, the article seeks to contribute to the discussion of “what algorithms do” and in which ways they are artefacts of governance, providing two illustrative examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. Indeed, the question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

The omnipresence of data, the consequences of their organisation

The role of invisibility in the classification processes that order human interaction, the procedures through which categories are made and kept invisible, the ways in which people can change this invisibility when necessary, and the extent to which systems of classification are crucial to the building of information infrastructures have been core preoccupations of science, technology and society scholars for several years. (Bowker & Star, 1999) Yet, the issue of information classification and organisation has perhaps never been as relevant as in our current times of “information overload” (Flew, 2008) and internet-mediated access to the vast majority of the information surrounding us. (Cardon, 2013) Indeed, digital data seem to proliferate in today’s complex world, building on the variety of platforms and media that allow for dematerialisation and rapid circulation and distribution. They serve different purposes, from trading to surveillance, from evaluation to recommendation; they are listed, regrouped and organised by means of many media and devices, from search engines to e-commerce websites. While companies leverage the traces left by consumers on the web so as to better target, customise (and take advantage of) their next purchases and interactions, some users worry about the portraits that such traces allow others to paint of them, and about the impossibility of modifying or erasing them, left to the perusal of generations to come.1

Several authors argue that we are currently entering the era of big data and algorithms, and that this “is a major breakthrough in the development of digital services (as it) gives decisive importance not only to the owners of data, but also and especially to those who can make them intelligible” (Cardon, 2013: 10). The algorithms underlying the information and communication technologies we use daily, the internet first and foremost, are (also) artefacts of governance, arrangements of power and “politics by other means” (Latour, 1988).

The power of algorithms

By naming a conference held at New York University last May “Governing Algorithms”, its organisers were making a deliberate choice of ambiguity, hinting both at the governance of algorithms – the extent to which political regulation can affect the functioning of the instructions and procedures underlying technology – and at the governing power of algorithms themselves.

The ways in which the pervasiveness of algorithms in human society has political implications appear as a core issue of our times; algorithms are a key feature of both today’s information ecosystem (Anderson, 2011: 529-547) and underlying cultural norms (Striphas, 2009), as they contribute to the shaping of the information we access and of its organisation. In a recent paper, communication scholar Tarleton Gillespie highlights six dimensions of political valence for algorithms that have public relevance, i.e., those algorithms that are used to “select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions” (Gillespie, 2013: 2). These six dimensions are:

  • patterns of inclusion, the choices behind the constitution of an index, what is included and excluded in it, and how data is “prepared” for the algorithm;

  • cycles of anticipation, the consequences of the attempts, by those creating the algorithms, to gather information about their users and make predictions about their future behaviours;

  • the evaluation of relevance, the criteria by which algorithms determine what is not only relevant, but appropriate and legitimate;

  • the promise of objectivity, the way the technical nature of the algorithm is presented as a guarantee of impartiality, particularly in the case of controversy;

  • the entanglement with practice, the processes by which users reshape their practices to suit the algorithms they depend on, and turn algorithms into terrains for political contest;

  • finally, the production of calculated publics, the process of algorithmic presentation of publics back to themselves, and how this shapes a public’s sense of itself. (Gillespie, 2013: 2-3)

These six dimensions bring to the fore two main consequences of the “computation” of our information society. By delegating to algorithms a number of tasks that would be impossible to perform manually, the process of submitting data to analysis is automated; and in turn, the results of these analyses automate decision-making. This double automation poses the question of agency and control (Barocas et al., 2013). Asking who the arbiters of algorithms are, whether algorithm design is an assertion of authority over more than the algorithm itself, and what autonomy, if any, algorithms have, amounts to examining the accountability and responsibility of algorithms as socio-technical artefacts, that of their creators and users, and ultimately, the balance of power that algorithms facilitate or cause.

Algorithmic governance: Part I. Web search

The ways in which the web gives more visibility to some information and content than to others are at the very heart of the recurring debate on the defining features of the digital space as a “public space.” According to Jürgen Habermas, the “father” of the public sphere concept, two conditions are necessary to structure a public space: freedom of expression, and discussion as a force of integration. The architecture of the “network of networks” seems to articulate these two conditions. However, while the first is frequently recognised as one of the widespread virtues of the internet, the second seems more uncertain (Cardon, 2013: 11). In his book The Wealth of Networks, legal scholar Yochai Benkler argues for a global “order” intrinsic to the web, whose core feature is the fact that the selection of information is no longer the monopoly of gatekeepers, journalists, librarians and editors, but is delegated to internet users, now publishers in their own right. By citing and quoting one another in conversational niches, these individuals and groups single out quality information for algorithms, which, in turn, order and classify it and make it available through search engines. (Benkler, 2006: 33-35) Thus, the ordering of web-hosted information appears as a co-production and co-construction of internet users and computational tools.

The integration of conversations and discussions taking place at the micro level is delegated to algorithms. The aggregated arguments that result from this integration are perceived as an “implicit universal consensus”; they have both the strengths and the weaknesses of any information that cannot be traced back to a specific individual and, at the same time, results from a wide assemblage of opinions. (Geiger, 2009) Search engines, and the multiple metrics underlying the internet, hierarchise the visibility of information by placing it at the very top of search result lists, or relegating it to the bottom. By de facto deciding “what must be seen,” they are liable to encourage or discourage controversy and discussion – while constructing, in the process, the public agenda of political and social priorities, as well as selecting the interlocutors that matter. (Cardon, 2013: 11)

In particular, owing to the current quasi-monopoly that Google holds on web search practices, its PageRank algorithm has been widely examined as the new gatekeeper (Smith, 2013) and “benevolent dictator” (Masnick, 2008) of digital public spaces and spheres. The algorithm implements, according to a “recipe” that remains partly an industrial secret, different sets of measurement criteria that assess authority (according to the number of citations), audience (according to the number of visits or clicks), proximity and affinity (according to recommendations) or speed (according to real-time aggregation and relay of “hot” topics). PageRank, as the “master switch” of the internet (Wu, 2010: 279-280), centralises and organises the circulation of information in the network of networks, and for every search query, arbitrates on what is important and relevant.
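
Google’s full ranking “recipe” is proprietary, but the basic principle behind PageRank (authority computed from the link structure of the web, with each page passing part of its score to the pages it cites) is publicly documented. The following minimal Python sketch is therefore a textbook-style illustration of that principle, not Google’s implementation; the function name, damping factor and toy link graph are assumptions made for the example.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank sketch: rank pages by the structure of incoming links.

    `links` maps each page to the list of pages it links to. This is the
    textbook formulation of the idea, not Google's production system.
    """
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start from a uniform score

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its score evenly
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank


# A toy web of four pages citing one another.
toy_web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
print(pagerank(toy_web))  # "c", the most cited page, receives the highest score
```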

Algorithmic governance: Part II. Recommendations in e-commerce

For some years now,2 online seller Amazon has been “a remarkable prescriber”, whose prescriptions are based on the recommendations of its readers/buyers. The vendor’s website makes it possible for each of its subscribed users to know, in a single click, about other purchases made in the past by users who have acquired the same title (Benhamou, 2012). Personalised recommendations are nothing new in the world of book publishing and selling, be they digital or not. It is simply that, as a librarian ironically remarks, they have historically been “the exclusive purview of booksellers, librarians... and friends. Now your best friend for advice on reading is called ‘recommendation Al Gorithm’… and it loves you very much!” (Lemaire, 2011)

Indeed, it is on the systematisation and automation of a very widespread and very social phenomenon – the exchange of advice and guidance among users sharing preferences and affinities – that Amazon and other online sellers base their recommendation systems. Drawing on methods based both on content (considering two books “similar” if they share a large number of words) and on collaborative filtering (the intersection of lists containing particular books and lists based on previous records of books purchased or borrowed by readers), Amazon has developed an algorithm called “item-to-item collaborative filtering”. Its details remain an industrial secret, but the algorithm demonstrates its effectiveness every day in “personalising” recommendations according to the interests of each of its consumers. As its name suggests, rather than match a user with similar users, this algorithm relates each item ordered and purchased by users to similar items, and eventually combines them in a recommendation list. (Linden et al., 2003)
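
Linden et al. (2003) describe the general principle of item-to-item collaborative filtering: compute, offline, a similarity between items based on the customers who bought them together, then recommend the items most similar to those a given user has purchased. Amazon’s production system and data remain proprietary, so the Python sketch below is only a minimal illustration of that principle, with invented purchase histories and a simple cosine similarity over co-purchases.

```python
from collections import defaultdict
from math import sqrt

# Toy purchase histories: customer -> set of purchased book titles.
# Illustrative data only; Amazon's actual data and algorithm are proprietary.
purchases = {
    "alice": {"Sorting Things Out", "The Wealth of Networks"},
    "bob": {"The Wealth of Networks", "The Master Switch"},
    "carol": {"Sorting Things Out", "The Wealth of Networks", "The Master Switch"},
}


def item_similarities(purchases):
    """Cosine similarity between items, computed from co-purchase counts."""
    co_counts = defaultdict(int)    # (item_a, item_b) -> customers who bought both
    item_counts = defaultdict(int)  # item -> customers who bought it
    for basket in purchases.values():
        for item in basket:
            item_counts[item] += 1
        for a in basket:
            for b in basket:
                if a != b:
                    co_counts[(a, b)] += 1
    return {
        pair: count / sqrt(item_counts[pair[0]] * item_counts[pair[1]])
        for pair, count in co_counts.items()
    }


def recommend(item, similarities, top_n=2):
    """Items most similar to `item`, i.e. most often bought by the same customers."""
    candidates = [(b, s) for (a, b), s in similarities.items() if a == item]
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)[:top_n]


sims = item_similarities(purchases)
print(recommend("Sorting Things Out", sims))
```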

Behind this algorithm – and behind readers/buyers’ impression that Amazon knows their tastes very well, perhaps too well – lie years of research and experiments in a recent subfield of computer science whose practical applications are increasingly widespread, albeit discreet: data mining, in particular affinity analysis and market basket analysis. For readers looking for new things to read, suggestions similar to their previously purchased items are constructed by relying on a mix of several sources of information about them, feeding a large database where they are combined with other shopping histories. This information can range from the most obvious demographics about oneself and one’s close relatives, to more complex assessments based on the sites one consults before arriving at Amazon, or one’s clicking habits. The entanglements within this large database about the purchasing behaviour of users, activated in accordance with Amazon’s patented algorithm, are the basis of the suggestions familiar to the user, such as “Recommended as you bought…” or “Recommended because you add X to your wish…”, and influence book purchases on Amazon every day.
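
Affinity analysis and market basket analysis typically rest on measures such as support (how often two items appear in the same basket) and confidence (how often buying one item goes together with buying the other). The sketch below illustrates these two measures on invented transactions; the threshold and the data are assumptions made for the example, not anything specific to Amazon’s database.

```python
from itertools import combinations

# Toy transaction log; real market basket analysis runs over millions of orders.
baskets = [
    {"novel", "bookmark"},
    {"novel", "bookmark", "reading lamp"},
    {"novel", "reading lamp"},
    {"bookmark"},
]


def association_rules(baskets, min_support=0.25):
    """Support and confidence for pairwise rules 'if A is bought, B is bought too'."""
    n = len(baskets)
    items = set().union(*baskets)
    rules = []
    for a, b in combinations(sorted(items), 2):
        for antecedent, consequent in ((a, b), (b, a)):
            both = sum(1 for basket in baskets
                       if antecedent in basket and consequent in basket)
            ante = sum(1 for basket in baskets if antecedent in basket)
            support = both / n
            if ante and support >= min_support:
                rules.append((antecedent, consequent, support, both / ante))
    return rules


for antecedent, consequent, support, confidence in association_rules(baskets):
    print(f"{antecedent} -> {consequent}: support={support:.2f}, confidence={confidence:.2f}")
```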

Algorithms and rules, rule by algorithm

We live in an increasingly algorithmic world. This article has examined, in particular, two cases related to web-based information and communication technologies where the importance of algorithms is high and their presence pervasive. However, the invisible computational structures that guide our search results and our online purchases extend to a number of other contexts, from facial recognition software to financial markets, in which algorithms are deployed and in which, in the face of recent crises, regulatory work has been insistently called for. (Hardt, 2013)

The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, and more generally, of the governance of the complex, automated systems that permeate today’s world.

The academic landscape in the interdisciplinary fields of communication studies, internet studies and science and technology studies reflects a thriving and increasing interest in this question. As an additional path towards answering the key question, “who does the algorithm serve?”, scholars also investigate the historical process from which the algorithm has emerged as a key topic of our times and attempt to situate it in the larger context of political economy. (Berry, 2012: 277–296)

As not only academic research but also current news shows ever more frequently (e.g., BBC News, 2011), two faces of the algorithms/rules relationship are currently under scrutiny, and are likely to be even more so in the near future. On the one hand, there is the issue of institutions’ ruling of algorithms. Should the locus of legal reasoning related to these systems shift to the coding of algorithms? Should regulation, or further regulation, of algorithms be pushed or advocated for in specific contexts? What would this regulation look like, would it even be possible, and what effects would it cause? (Barocas et al., 2013)

On the other hand, the extent to which we live in a world ruled by algorithms has to be assessed. We need to research not only the extent to which, given the ubiquity of algorithms, they regulate us in a sense, but also “what it would mean to resist them”. (Barocas et al., 2013)

Footnotes

1. On the internet’s “persistent memory” and the so-called “right to be forgotten”, championed by the EU in the recent past, see e.g. Beckles, C.-A. (2013). “Will the Right to Be Forgotten Lead to a Society That Was Forgotten?”, Privacy Perspectives, https://www.privacyassociation.org/privacy_perspectives/post/will_the_right_to_be_forgotten_lead_to_a_society_that_was_forgotten or the critical take in Harris (2013).

2. This section is partly based on an article I wrote in French in March 2012: Musiani, Francesca (2012). “‘Bienvenue sur votre Amazon’: les systèmes de recommandation d’ouvrages”, Labs Hadopi. Retrieved from http://labs.hadopi.fr/actualites/bienvenue-sur-votre-amazon-les-systemes-de-recommandation-douvrages

References

Anderson, C. W. (2011). Deliberative, agonistic, and algorithmic audiences: Journalism's vision of its public in an age of audience fragmentation. International Journal of Communication, 5, 529-547.

Barocas, S., Hood, S., & Ziewitz, M. (2013). Governing Algorithms: A Provocation Piece. Discussion Paper for the Governing Algorithms conference, NYU, May 16-17, 2013. doi:10.2139/ssrn.2245322. Retrieved from http://ssrn.com/abstract=2245322

BBC News. (2011). Disappearing tycoon Souter blames Google. Retrieved from http://www.bbc.co.uk/news/technology-14884717

Benhamou, F. (2012). 3e étape de la stratégie verticale d’Amazon. Blog L’Eco(nomie) des Livres, October 24, 2012. Retrieved from http://www.livreshebdo.fr/weblog/l-eco%28nomie%29-des-livres-24/776.aspx

Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.

Berry, D. (2012). The relevance of understanding code to international political economy. International Politics, 49(2), 277-296.

Bowker, G.C., & Star, S.L. (1999). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: The MIT Press.

Cardon, D. (2013). Présentation. Dossier “Politique des algorithmes”, Réseaux, 177, 9-21. Paris: La Découverte.

Flew, T. (2008). New Media: An Introduction (3rd Ed.). Oxford: Oxford University Press.

Geiger, S. (2009). Does Habermas Understand the Internet? The Algorithmic Construction of the Blogo/Public Sphere. Gnovis: A Journal of Communication, Culture and Technology, 10(1).

Gillespie, T. (2013). The Relevance of Algorithms. In Media Technologies: Essays on Communication, Materiality, and Society. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot (Ed.). Cambridge, MA: MIT Press. Retrieved from http://governingalgorithms.org/wp-content/uploads/2013/05/1-paper-gillespie.pdf

Governing Algorithms. (2013). Governing algorithms - A conference on computation, automation, and control. Retrieved from http://governingalgorithms.org

Hardt, M. (2013). Occupy Algorithms: Will Algorithms Serve The 99%? Response Paper for the Governing Algorithms Conference, NYU, May 17, 2013. Retrieved from http://governingalgorithms.org/wp-content/uploads/2013/05/2-response-hardt.pdf

Harris, L. (2013). How to fix the EU’s ‘Right to be Forgotten’. The Huffington Post. Retrieved from http://www.guardian.co.uk/technology/series/internet-privacy-the-right-to-be-forgotten

Latour, B. (1988). The Pasteurization of France. Cambridge, MA: Harvard University Press, p. 229.

Lemaire, A. (2011). Madame Machine, pouvez-vous me conseiller un bon livre? Les nouveaux outils Web de recommandation de lectures. Association des Bibliothécaires de France, June 27, 2011. Retrieved from http://bibliolab.fr/cms/content/les-nouveaux-outils-web-de-recommandation

Linden, G., Smith, B., & York, J. (2003). Amazon.com Recommendations: Item-to-Item Collaborative Filtering. IEEE Internet Computing, 7(1), 76-80. Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1167344&userType=inst

Masnick, M. (2008). Google As Benevolent Dictator: The Gatekeeper and the Data Collector. TechDirt, December 2008. Retrieved from http://www.techdirt.com/articles/20081201/0119292980.shtml

Smith, D. (2013). Google: Gatekeeper of the Internet’s Grey Area. The Telegraph, June 10, 2013. Retrieved from http://www.telegraph.co.uk/sponsored/technology/technology-trends/10103907/google-law-ethics.html

Striphas, T. (2009). The Late Age of Print: Everyday Book Culture from Consumerism to Control. New York, NY: Columbia University Press.

Wu, T. (2010). The Master Switch: The Rise and Fall of Information Empires. Random House Digital, pp. 279-280.
