Beyond “Points of Control”: logics of digital governmentality

Romain Badouard, Université de Cergy-Pontoise, France, Romain.Badouard@u-cergy.fr
Clément Mabi, Université de Technologie de Compiègne, France, clement.mabi@utc.fr
Guillaume Sire, Université Paris II (Panthéon-Assas), France

PUBLISHED ON: 30 Sep 2016 DOI: 10.14763/2016.3.433

Abstract

In this paper, we aim to show the heuristic benefit of Michel Foucault's concept of "governmentality" in order to describe three logics of power and control within digital environments. These three logics – directing, constraining and framing online behaviours – echo Foucault's approach to power, understood as a means to "lead other people's behaviours", here enacted through the mediation of technical resources such as software, algorithms and operating systems. This paper provides three illustrations of these logics of governmentality: the way in which Google tries to direct webmasters' practices with the help of its SEO guidelines and a webmaster ranking system (governmentality by incentives); the way in which developers constrain online behaviours through websites and software (governmentality by design); and the way in which Apple frames the work of app developers in order to institute specific standards for action and interaction within its iPhone operating system (governmentality by framing).
Citation & publishing information
Received: April 29, 2016 Reviewed: June 17, 2016 Published: September 30, 2016
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Governmentality, Framing, User behaviour
Citation: Badouard, R., Mabi, C., & Sire, G. (2016). Beyond "Points of Control": logics of digital governmentality. Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.433

This paper is part of Doing internet governance, a special issue of Internet Policy Review guest-edited by Dmitry Epstein, Christian Katzenbach, and Francesca Musiani.

Research in the internet governance realm has long shown how the management of infrastructures and critical resources allows private companies to regulate online behaviours on a massive scale. On the internet, infrastructures are more powerful than laws: user behaviours are more likely to be limited and directed by what technologies allow (or prevent) them to do than by what laws prescribe as desired behaviours (Lessig, 2000, 2006). During the 2000s, the concentration of uses around a limited number of services led to the dominant position of GAFA (Google, Apple, Facebook, and Amazon) in the digital economy, raising concerns about a "privatisation of internet governance" (Papacharissi, 2010; DeNardis, 2010, 2014a; Musiani, 2013; Sargsyan, 2016). Because these companies started to "govern expressions" (DeNardis, 2014a) by removing content - sometimes under government mandate - we have witnessed a shift from the management of infrastructures and services to the management of content, as these private companies can now, albeit not in an unrestricted manner, decide what should and should not be expressed online (Musiani, 2013).

In the late 2000s, the boom of mobile internet communication cemented the role of these private companies as key actors of internet governance. The success of mobile operating systems such as Apple's iOS or Google's Android allowed them to control both the design of these new "digital environments" and the whole process of app production. These firms thus took hold of strategic "points of control" (Zittrain, 2008; DeNardis, 2014b; Benkler, 2016) by deciding what could or could not be done with mobile internet technologies.

In this sense, science, technology and society (STS)-oriented research has re-evaluated the very meaning of internet governance, not only as the governance of technical infrastructure, but also as the management of online practices through technical resources (Mueller, 2010; Epstein, 2015; Hofmann, Katzenbach, & Gollatz, 2017; Musiani et al., 2016). Infrastructures and technical resources carry political purposes, because they enact models of content production and exchange that constrain behaviour. Designers thus exert power over users by directing their actions and framing their interactions.

In this paper, we would like to contribute to this debate by showing the heuristic benefit of the concept of "governmentality". This will help us understand the way in which online behaviours are directed, constrained and framed by resources such as algorithms, content management systems (CMS) and operating systems (OS). The concept of "governmentality" was first developed by Michel Foucault (2004) to describe "logics of power". It "consists in guiding the possibility of conduct and putting in order the possible outcome" (Foucault, 1982: 789). By studying how governments exerted control over their populations through policy instruments, and how those instruments evolved over time, Foucault showed that the logic of power is less about preventing people from acting than about allowing them to act in a certain way. The rise of a neoliberal governmentality, Foucault argues, results from the combination of the granting of rights and the limitation of these rights through laws and policies that set the conditions of their exercise. According to Foucault, exerting power is about leading other people's behaviours.

In this paper, we argue that the logics of governmentality described by Foucault can be useful to understand how private companies exert power over users through the design of technologies. The sociology of uses and design studies have long shown how social norms become embedded in technologies and infrastructures: they make people act and interact according to the designer's will, and they produce collective behaviours in agreement with specific values and principles. Yet these technologies and infrastructures "have a range of flexibility in the dimensions of their material form [and it is] precisely because they are flexible that their consequences for society must be understood with reference to the social actors able to influence which designs and arrangements are chosen" (Winner, 1986: 38).

The aim of this paper is to discuss three logics of digital governmentality, i.e. three different ways of leading users' behaviours online by designing technical resources: directing, constraining and framing. These three logics are far from being the only ones that could fit the definition of "digital governmentality". They are three examples, based on three different fieldworks, that intend to contribute to the understanding of internet governance as a mundane activity related to the economic strategies of private actors. First, we discuss the logic of "directing" users' behaviours by observing the way in which Google instructs web developers to follow a specific way of building and managing websites through its search engine optimisation (SEO) guidelines. We show how Google promotes a "disciplinary regime" in which developers and users are encouraged to act in a certain way without their behaviours being directly constrained.

Then, we discuss the logic of "constraining" through design. Here we focus on how developers mobilise resources to set up websites or applications in order to make users adopt a specific behaviour while using their technologies. In this context, we discuss the dynamics of normalisation of both writing and content production through the popularisation of content management systems (CMS) such as WordPress.

Finally, we discuss the logic of "framing", whereby the engineers and companies that develop operating systems determine what can and cannot be done while using them. Through the example of the iPhone operating system (iOS), we study how Apple directs the work of application developers in order to make them fit into a general rationale of practice.

These examples are drawn from our respective PhD fieldwork of the past few years (Badouard, 2012; Sire, 2013; Mabi, 2015). In this theoretical paper, we remain at a conceptual level in order to discuss more broadly the implications of importing the concept of "governmentality" into internet governance studies.

Section 1: Directing online behaviour

The first regime of governmentality that we propose to define is characterised by an incentive scheme. The term "incentive" — from the Latin incitare, to push, excite, stimulate — has been extensively used in economics to designate tools that push others to act in ways that do not a priori serve their own interests. An incentive can be, for example, a tax measure intended to reward companies that respect the environment and reduce gas emissions. Michel Foucault used this term when mentioning dynamics of "inciting-regulating" as a way of exercising power without coercing individuals, whose "possibilities of conduct are guided" (Foucault, 2004). Under such a scenario, freedom can be a tool of power. Hence, there is no contradiction here between the idea of exerting power and a liberal society which encourages "people to be self-determining [and] to pursue their own economic goal. That policy would work [...] only if there were a surplus that guaranteed enough to go around" (Winner, 1986: 45).

Here we use the term "incentive" to describe the ways in which actions are oriented in a digital environment. An incentive consists of two main elements: information and interpretation. Incentives are both the result and the originator of the incited individual's perceptions. The incitor takes the wishes of the incited into consideration and translates the order he wants to give, shifting and transforming this order as many times as necessary until the incited act according to the incitor's wish. The incited receive signals whose interpretation leads them to act, sometimes in an unsuitable — or at least unintended — way. As long as it has not been experienced, an incentive and its concrete effects can only be hypothetical. The sender of the signal can only hope that it will actually become an incentive and that this incentive will have the desired effects. Conversely, signals can be interpreted in a totally unexpected way, and thus become incentives that do not have the effect originally intended. Moreover, it is important to consider that even if an incentive encourages action, it does not necessarily provoke it. Individuals can decide not to follow it, either because the incentive is not strong enough, because the incited individual cannot act in accordance with it, or because other incentives are stronger.

An incentive can be material, financial, and/or moral. It is mediated by the agency of discourses, technologies, values and interests. It is always the result of a forecast made by one or several individuals concerning the way one will benefit from a particular action. Hence, an incentive is made of will, knowledge and power. It is the result of what one wants to do, given what one can have, and what one wants to have, given what one can do. The result is a form of power based on both capabilities to act and capabilities to estimate and forecast. This form is consistent with the Foucauldian perspective: "there is no face-to-face confrontation of power and freedom, which are mutually exclusive (freedom disappears everywhere power is exercised), but a much more complicated interplay. In this game, freedom may well appear as the condition for the exercise of power (at the same time its precondition, since freedom must exist for power to be exerted, and also its permanent support, since without the possibility of recalcitrance, power would be equivalent to a physical determination)" (Foucault, 1982: 790).

This governmentality regime is a good way to describe how power is exercised by numerous actors in the digital ecosystem. Indeed, there are plenty of organisations (big and small) which do not force individuals to serve the organisation's interests, but which lead them to act in a way that eventually will. Take Google for example. Introna and Nissenbaum explain,

that search-engine design is not only a technical matter but also a political one. Search engines are important because they provide essential access to the Web both to those with something to say and offer as well as to those wishing to hear and find. (…) [They] give prominence to popular, wealthy, and powerful sites at the expense of others. This they do through the technical mechanisms of crawling, indexing, and ranking algorithms, as well as through human-mediated trading of prominence for a fee. As long as this tendency continues, we expect these political effects will become more acute as the Web expands. (Introna & Nissenbaum, 2000, p. 181)

Therefore, web users are incentivised to access websites that are listed in the search engine results, and will have a harder time accessing less popular, less "wealthy" and less “powerful” websites. 

The incentive-oriented governmentality regime powered by Google does not only impact web users, but also publishers: Google can encourage them to produce their content in a certain way by giving advice on the best way to make and publish content if they want to be visible on the leading search engine. This is done without any promise that these measures will actually generate traffic. Publishers cannot ignore that search engines exist and that, amongst them, Google is in a dominant position. They also cannot ignore that their position within the results depends on their own actions: chosen topics, website structure, loading speed, text, links, code. As a result, some web developers act based on what they know - or on what they think they know - about Google's algorithm. They optimise their content in order to maximise their chances of garnering substantial traffic from Google. The advice provided by Google's Webmaster Tools Center plays a role in this, and shapes publishers' actions. Theo Röhle shows how Google Webmaster Tools allow the company

to establish means of communication with webmasters (…). The webmasters are encouraged to adapt their content in a way that is advantageous for Google. Furthermore, webmasters are asked to report sites that do not comply with the rules set up by Google. For this purpose, the Webmaster Tools provide forms where webmasters can report spam and link selling. (Röhle, 2009)

Referring to Michel Foucault, Röhle concludes that there is a "disciplinary regime" in which Google regularly reminds publishers that they will be rewarded if they follow the advice given in the Webmaster Tools and, furthermore, that they risk being punished or banned if they do not. Other academic papers have looked into SEO and the subtle influence that Google can have on content and on the daily life of content producers (Dick, 2011; Sire, 2015). It is exactly this type of power that we propose to call an "incentive-oriented regime of governmentality", and that we find in numerous situations within the digital ecosystem: a space where everybody is free to act, but where some companies or individuals make others do what they want them to do by using an incentive scheme. In doing so, they make users serve the incitor's interests while serving their own individual interests at the same time.
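To make this incentive logic concrete, consider how a publisher might internalise such advice. The following Python sketch mimics the kind of self-audit script an SEO-conscious webmaster could run before publishing a page; the thresholds, checks and messages are our own illustrative assumptions, not actual Google rules, and real SEO tooling is considerably more sophisticated.

```python
import re

# Illustrative thresholds only: these are our assumptions, not official Google values.
MAX_TITLE_LENGTH = 60
MAX_DESCRIPTION_LENGTH = 160

def audit_page(html: str) -> list:
    """Return a list of warnings, mimicking the self-imposed checklist
    an SEO-conscious publisher might run before publishing a page."""
    warnings = []
    title = re.search(r"<title>(.*?)</title>", html, re.S | re.I)
    if title is None:
        warnings.append("missing <title>: no headline signal for the ranking algorithm")
    elif len(title.group(1).strip()) > MAX_TITLE_LENGTH:
        warnings.append("title too long: it may be truncated on the results page")
    description = re.search(
        r'<meta[^>]*name=["\']description["\'][^>]*content=["\']([^"\']*)', html, re.I
    )
    if description is None:
        warnings.append("missing meta description: the snippet will be auto-generated")
    elif len(description.group(1)) > MAX_DESCRIPTION_LENGTH:
        warnings.append("description too long: it may be cut off in the snippet")
    return warnings

page = "<html><head><title>A short, descriptive title</title></head><body>...</body></html>"
print(audit_page(page))  # -> ['missing meta description: the snippet will be auto-generated']
```

No rule obliges a publisher to run such checks: they run them because visibility on the dominant search engine depends on it. The discipline is self-administered, which is precisely what distinguishes incentive from constraint.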

At another level, Bernhard Rieder and Guillaume Sire show that Google could subtly lead publishers to prefer Google's advertising services to those of its competitors. They demonstrate that publishers have an interest in being on "Google's side" in case its algorithm were to favour Google's partners in its organic ranking (Rieder & Sire, 2014). The authors do not claim that Google succumbs to the temptation to skew the results in favour of its partners in order to maximise its profits. However, Rieder and Sire (2014) argue that Google has all the necessary tools to do so, and that publishers may make some of their decisions according to what they know about Google's opportunities and incentives. Publishers do not necessarily buy Google's intermediation services on the ads market because they believe Google actually tweaks the results of its search engine to favour its partners' organic ranking; the mere possibility that it could do so is enough to weigh on their decisions. These mixed incentives are reinforced by what Google's founders wrote in 1998:

a search engine could add a small factor to search results from "friendly" companies, and subtract a factor from results from competitors. This type of bias is very difficult to detect but could still have a significant effect on the market. [...] [W]e believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm (Brin & Page, 1998)

The case of Google’s influence on the shape of information and the modes of its communication is an obvious example of the power of private intermediaries on the web — which seem stronger than any public institution (DeNardis, 2010; Papacharissi, 2010). Academic work addressing Google’s power on the web and its dominant market position — a consubstantial phenomena — come to similar conclusions about what we call an incentive-oriented regime of governmentality. We believe that this regime is typical of the way power is exercised at many levels and in many places in the digital ecosystem. In this ecosystem individuals and companies are free to do what they want, but are also influenced by incentive schemes. Therefore, the term "incentives", often used by economists, could be used in media studies, as they can influence content and the way it is mediated. Furthermore, it also could be used to analyse the shape of internet infrastructures by both public and private institutions, in particular if we face a situation where “the practices and the records related to civil liberties among many nation-states suggest that incentives behind their infrastructure-based proposals are not to protect privacy and security, but to gain/regain control over data” (Sargsyan, 2016: 198).

Section 2: Constraining online behaviour

The second mode of governmentality we would like to describe is that of "design". We consider the power exerted by designers and developers over users, focusing on the direction of users' behaviours through technical constraints. In a digital environment, constraining behaviour means setting up possibilities for action through the determination of technical abilities. In this sense, constraining also means allowing action, but in a specific way. In Political Machines: Governing a Technological Society, Andrew Barry builds on the work of both Foucault and Latour to show how new logics of government are enacted through technologies. He argues that the "technological society" endorses "interactive" logics of power and control that become embedded in the tools and instruments citizens are provided with. "In an interactive model," Barry says, "subjects are not disciplined, they are allowed" (Barry, 2001: 150).

Therefore, the constraint should not be seen as a prohibition, but as a normative proposition for action. Designers partly determine the normative framework in which users operate. For example, one does not write the same way in a word processor as in a spreadsheet or in presentation slides. Similarly, it is not possible to produce messages over 140 characters on Twitter, or to rearrange the order of comments on a Facebook status. These constraints are not accidental: they are designed according to a model chosen by their designers. Unlike incentive-oriented regimes, which do not diminish the freedom of the incited, constraints diminish the freedom of users by restricting the extent of their possible actions. Thus, websites and software can be considered modes of participation and communication in which injunctions become embedded in technologies and consequently promote specific kinds of practices and behaviours (Wright & Street, 2007; Bennett, 2008; Coleman, 2008; Monnoyer-Smith, 2009). This expression of "governmentality" is not limited to digital environments; it can be considered more generally as a kind of "delegation" of social norms to technical artefacts (Simondon, 2016 [1958]; Latour, 1999).
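A constraint like the 140-character limit just mentioned is literally written into code. The minimal Python sketch below (the function name, message and limit-handling are hypothetical, not Twitter's actual implementation) shows how a designer's model becomes the user's condition of action:

```python
MAX_LENGTH = 140  # the limit Twitter's designers chose at the time

def submit_post(text: str) -> str:
    """Accept a post only if it fits the designed constraint.
    The user is not forbidden from writing; they must write *this way*."""
    if len(text) > MAX_LENGTH:
        raise ValueError(
            f"post is {len(text)} characters; the interface allows {MAX_LENGTH}"
        )
    return text

submit_post("Short enough.")   # accepted
# submit_post("x" * 200)       # raises ValueError: the action is simply unavailable
```

Nothing forbids the user from writing a longer text elsewhere; within this environment, however, the action does not exist.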

Digital environments all have constraints built in. At one level, the digital code forces users to express themselves in a very specific way, which limits the range of possibilities: "The constraint imposed by the fact that any programming language is a formal language is that it does not allow ambiguous statements and accepts only a perfect syntax, implying that the number of solutions to a specific problem is finite" (Rieder, 2006: 243). In this binary logic, to code is to program choices (Rushkoff, 2010). Software or website developers can use different resources to enact a project and plan a vision of its usage. They can use applications that enable actions on content (production, editing, customising, filtering, evaluation) and applications that allow interactions (forums, comments, "pokes"), and then determine the organisation of the sections of the tool that direct users towards certain types of content.

Analyses of website design (Wright & Street, 2007; Coleman & Blumler, 2009; Badouard, 2014) support the notion that values can be embedded in the script of technologies. To what extent do the available technical resources influence a political project as conceived by designers? This perspective has been developed in particular in the study of online deliberation devices, which shows that designers' political ambitions become embedded in the constraints imposed by their tools. Scott Wright (2012) identifies a number of normative criteria that directly affect the behaviour of users and influence the course of their deliberation. He puts a particular emphasis on the role of moderators and on the necessity of organising the confrontation of different points of view. Freelon (2015) recently argued that online discussion dynamics become predictable when one analyses the "discourse architectures" on which they rely. In this sense, evolutions and changes in website design can be seen as attempts to influence discursive behaviours (Wright, forthcoming).

This normative dimension of technical constraints also applies to other types of behaviour in a digital environment. To illustrate this expression of governmentality, one can take the example of the CMS used to publish information online. A CMS allows users to publish information in natural language, without using any programming language. Famously referred to as part of the "web 2.0", CMSs restrict the range of possibilities of HTML code in order to simplify the editor's task. They impose a publication standard on the web that users have to accept if they choose this tool. The web designer can then allow users to customise options, which contributes to making the technical framework invisible.

For example, WordPress helps standardise online publication practices by providing an interface in which the same writing constraints apply to all users. The CMS specifies possible actions and offers all users the same content organisation system, the same filtering tools and the same formatting options. However, even if the constraint is important, some users may still take a degree of ownership of the socio-technical resources of the tool and change its original settings through its code. The use of a formal language enables the modification of the tool and of its logics of appropriation.
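A minimal sketch can illustrate how such a publication standard is enforced in code. The whitelist below is our own invention for illustration; production CMSs rely on full HTML parsers rather than regular expressions, but the logic of restriction is the same:

```python
import re

# Hypothetical whitelist: a CMS narrows the full expressive range of HTML
# down to the formatting options its designers chose to expose.
ALLOWED_TAGS = {"p", "em", "strong", "a", "ul", "li", "blockquote"}

def sanitise(html: str) -> str:
    """Strip every tag that is not part of the publication standard."""
    def keep_or_drop(match: re.Match) -> str:
        tag = match.group(1).lower()
        return match.group(0) if tag in ALLOWED_TAGS else ""
    return re.sub(r"</?\s*([a-zA-Z0-9]+)[^>]*>", keep_or_drop, html)

print(sanitise("<p>Hello <marquee><em>world</em></marquee></p>"))
# -> '<p>Hello <em>world</em></p>'
```

Whatever the editor types, only the formatting options chosen by the designers survive publication.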

What we call a design-oriented regime of governmentality is thus characterised by a certain plasticity of technical constraints, which depend on both the project of the entity trying to exercise power and on the skills and tools that users have at their disposal.

Section 3: Framing online behaviour

The third regime of governmentality we want to describe in this paper is the most "robust" of all: framing. Framing refers to the fixing of technical architectures for action in a digital environment. It is about deciding what can and cannot be done. Operating systems (OS) are good examples of what framing means. An OS is not software that users handle directly; it is an environment that rules the way users and software interact through an interface. Mac OS, Windows and Linux are three different OSs that provide their users with different norms of action and interaction, even if they are based on similar logics of interface (mouse, windows, and folders 1).

In 2007, the launch of Apple's iPhone operating system raised many questions about the new logics of action and interaction embedded in this technology. As a closed system in which all processes for producing and distributing applications are monitored by Apple, the iOS has been seen as a break with the open standards of the web 2 and as a first move towards a more centralised internet - one where only a few companies hold strategic "points of control" (Zittrain, 2008; Benkler, 2016). However, even though Android (a Linux-based system developed by a company that Google bought in 2005) follows similar logics, these criticisms were mainly addressed to Apple, a company which has historically promoted closed operating systems.

The main feature of an operating system like the iOS is to provide a segmentation of users' capacities of action. As Apple presents its iOS on its website, mobile applications ("apps" hereafter) are designed "as islands": independent, isolated, without any bridge that could make them interact. A commonly used metaphor for apps is that of "walled gardens" (Hazlett, Teece & Waverman, 2011; Mehra, 2011; Mac Sithigh, 2012): each application is designed for a specific need or function and for individual uses, which makes apps coexist without interacting. The information produced by users through one app cannot be used through another, which means that users' capacities of action are not cumulative.

One way to understand the logics that underpin the design of the iOS is to look at the process by which apps are produced. While the OS is designed by Apple, the applications available for the iPhone, iPad and iPod Touch are made by independent companies or individual developers who intend to provide users with specific services. In order to make their apps work with the iOS, they have to comply with the many requirements stipulated by Apple. These requirements are displayed through the vast library made available to developers who subscribe to the Apple Developer Program. The company thus provides them with resources such as guides, technical notes and samples of code that show them how to produce apps properly. Then, during the app evaluation process, independent assessors check that apps comply with these requirements.

Amongst these requirements, Apple states that an app must include four elements 3: a navigation bar at the top or bottom of the screen; a content view that allows a limited number of actions, such as scrolling or inserting information; a control bar that enables users to select information; and temporary views that provide users with "one-shot" information ("push"). Through these requirements, Apple establishes a digital environment oriented towards the consultation of content rather than its production. The user is perceived as an active consumer who can act upon content, but through a model of "integration", i.e. the filling of empty boxes within a pre-established form. Moreover, Apple explicitly states in the very same document that users should be prevented, as far as possible, from having to insert content: developers should instead provide them with pick lists. For instance, Apple says that it is better for users to choose from a list of locations or dates than to have to write names and numbers. In other words, Apple promotes particular logics of action based on consultation and on choice among pre-programmed options. Users' capacities of action are also defined by Apple, which expects developers to allow only "simple, short and narrowly targeted actions" related to the small number of interactions permitted by a tactile interface: open, close, zoom, select, move, and validate. Finally, Apple also asks developers to reduce setting parameters to a minimum. In other words, in digital environments such as the iOS, users' capacities are greatly reduced in comparison with an OS running on computers: they are limited in number, pre-defined beforehand and not "customisable".

Apple finally checks compliance with its requirements during the evaluation process. Other developers are paid by the company to assess the compatibility of apps with the overall logic of the iOS, in order to allow these apps to be distributed on the App Store. To help developers in their assessment, Apple provides them with the App Store Review Guidelines, which describe the reasons why an app might be rejected. There are 141 different reasons presented in these guidelines. They can relate to the content of an app (e.g. pornography is forbidden), to the programme itself (e.g. an app is not allowed to launch independent scripts) or to the very function of the application - which must be "original". Once the app is approved, it is made available on the App Store, which is the single point of sale for apps running on the iOS. Apple charges a 30% commission on the selling price of applications.
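The framing logic can thus be read as a set of machine-checkable admission rules: an artefact either conforms to the environment's standards or is excluded from it. The Python sketch below models this as a toy review pipeline; the AppSpec fields, rules and threshold are our own assumptions for illustration and bear no relation to Apple's actual 141 criteria:

```python
from dataclasses import dataclass

@dataclass
class AppSpec:
    """A toy description of a submitted app. The fields are our own invention,
    loosely modelled on the interface elements and rules described above."""
    has_navigation_bar: bool = False
    has_content_view: bool = False
    runs_independent_scripts: bool = False
    settings_count: int = 0

# Each rule returns a rejection reason, or None if the app complies.
RULES = [
    lambda app: None if app.has_navigation_bar else "missing navigation bar",
    lambda app: None if app.has_content_view else "missing content view",
    lambda app: "launches independent scripts" if app.runs_independent_scripts else None,
    lambda app: "too many setting parameters" if app.settings_count > 5 else None,
]

def review(app: AppSpec) -> list:
    """Return every reason the app would be rejected; an empty list means approval."""
    return [reason for rule in RULES if (reason := rule(app)) is not None]

print(review(AppSpec(has_navigation_bar=True, has_content_view=True)))  # -> []
print(review(AppSpec(runs_independent_scripts=True)))
# -> ['missing navigation bar', 'missing content view', 'launches independent scripts']
```

The point is structural: the "governed" never negotiates with the rule at runtime; conformity is settled before the artefact enters the environment at all.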

The regime of governmentality by framing is thus related to the production of a digital environment, with its norms of functioning and its operating standards. There is no direct interaction between the "governor" and the "governed". The "governors" make sure that the tools that will be used by the "governed" meet these standards and respect an overall logic of action. In the case of the iOS, Apple defines how users' capacities of action should be framed by applications, and monitors the fulfilment of these requirements both through technical standards and through control of the apps' production line. What Apple does is define what can and cannot be done, what is allowed and what is forbidden. Developers have to cope with these requirements, otherwise their apps will be rejected. From a user's perspective, all available apps respect these rules, which are enacted through the technical standards of this digital environment.

Conclusion

In this article, we have defined three ways of leading other people's behaviour in digital environments, particularly on the internet: incentive, design and frame. The incentive "pulls": here, "governors" encourage the governed to adopt certain behaviours through a reward/sanction system. The constraint "limits": here, the dominant actor guides behaviour by setting the tools through which actions are performed. Finally, the frame "fixes" the context of action: a range of possible actions is open to the governed, while unwanted actions are banned from the frame. These forms of "conducting the conducts" correspond to power relationships between individuals, or groups of individuals, which are enacted through the mediation of technical resources such as algorithms, content management systems or operating systems. The resources available to dominant actors to "impose" some form of governmentality do not belong only to the technical register; they must be articulated with economic strategies (the purchase of competitors, for example) and legal arrangements (antitrust laws). In the digital ecosystem, some actors seek to place themselves in a position of such dominance that they contribute to devitalising other governance arenas, including institutions, imposing a more diffuse power that can be partly embodied within the architecture and critical resources of communication technologies. In accordance with the intuition of Gilles Deleuze, this assertion reflects the shift from a "disciplinary model" of power, as described by Michel Foucault, to a new logic of control, where exerting power is not about preventing people from acting, but rather about allowing them to act according to a specific scheme.

These logics of power and control fit digital environments particularly well (Barry, 2001), where power is exerted through the use of technologies designed to make people act and interact. In this sense, the concept of "governmentality" can be useful to describe these logics, among other things by highlighting that, within a digital environment, possibilities are constrained. Due to the plasticity of digital technologies, negotiations, detours and re-appropriations are still possible. While not all stakeholders benefit from the same ability to exert a "possibility to do" and a "power to do", they all benefit, at some level, from opportunities to overcome the political projects embedded in digital environments. Thus, analysing the logics of power and control operating behind the uses of technology is in line with the emancipatory project of empowering ordinary users in their daily relationship to technology.

References

Badouard, R. (2014). La mise en technologie des projets politiques. Une approche « orientée design » de la participation en ligne. Participations, 8(1), 31-54. doi:10.3917/parti.008.0031

Badouard, R. (2012). Les “technologies politiques” du web. Une analyse des plateformes des consultation de la Commission Européenne et de leurs publics (Phd Dissertation). Université de Technologie de Compiègne, Compiègne, France.

Barry, A. (2001). Political Machines: Governing a Technological Society. London: Athlone Press.

Benkler, Y. (2016). Degrees of Freedom, Dimensions of Power. Daedalus, 145(1), 18–32. doi:10.1162/DAED_a_00362

Bennett, L. (2008). Changing Citizenship in the Digital Age. In L. Bennett (Ed.), Civic life online: learning how digital media can engage youth (pp. 1–24). Cambridge, Mass: MIT Press.

Brin, S., & Page, L. (1998). The Anatomy of a Large-Scale Hypertextual Web Search Engine. Computer Science Department, Stanford University. Available at http://infolab.stanford.edu/~backrub/google.html

Coleman, S. (2008). Doing IT for Themselves: Management versus Autonomy in Youth E-Citizenship. In L. Bennett (Ed.), Civic life online: learning how digital media can engage youth (pp. 189–206). Cambridge, Mass: MIT Press.

Coleman, S., & Blumler, J. (2009). The Internet and Democratic Citizenship. Cambridge: Cambridge University Press.

DeNardis, L. (2014a). The global war for Internet governance. New Haven: Yale University Press.

DeNardis, L. (2014b). Internet Points of Control as Global Governance. In G. S. Smith & M. Raymond (Eds.), Organized chaos: reimagining the Internet. Waterloo, Ontario: Centre for International Governance Innovation. Available at http://www.cigionline.org/sites/default/files/no2_3.pdf

DeNardis, L. (2010). The Privatization of Internet Governance. Presented at the Fifth Annual Meeting of the Global Internet Governance Academic Network, Vilnius, Lithuania.

Epstein, D. (2015). Duality squared: Technology and governance in the making of the web. In R. A. Lind (Ed.), Produsing theory in a digital world 2.0: the intersection of audiences and production in contemporary theory. (pp. 41–56). New York, NY: Peter Lang.

Foucault, M. (1982). The Subject and Power. Critical Inquiry, 8(4), 777–795. doi:10.1086/448181

Foucault, M. (2004). Sécurité, territoire, population. Cours au collège de France, 1977-1978. Paris: Gallimard.

Freelon, D. (2015). Discourse architecture, ideology, and democratic norms in online political discussion, New Media & Society, 17(5): 772-791. doi:10.1177/1461444813513259

Fuller, M. (Ed.). (2008). Software Studies: A Lexicon. Cambridge, Mass: The MIT Press. doi:10.7551/mitpress/9780262062749.001.0001

Goode, L., McCullough, A., & O'Hare, G. (2011). Unruly publics and the fourth estate on YouTube, Participations, 8(2). Available at: http://www.participations.org/Volume%208/Issue%202/4b%20Goode%20et%20al.pdf

Hazlett, T., Teece, D., & Waverman, L. (2011). Walled garden rivalry: The creation of mobile network ecosystems (George Mason University Law and Economics Research Paper Series No. 11–50). Retrieved from https://www.law.gmu.edu/assets/files/publications/working_papers/1150WalledGardenRivalry.pdf

Hofmann, J., Katzenbach, C., & Gollatz, K. (2017). Between coordination and regulation: Finding the governance in Internet governance. New Media & Society, 19(9), 1406–1423. doi:10.1177/1461444816639975

Introna, L. D., & Nissenbaum, H. (2000). Shaping the Web: Why the Politics of Search Engines Matters. The Information Society, 16(3), 169–185. doi:10.1080/01972240050133634

Latour, B. (2007). Reassembling the social: an introduction to Actor-Network-Theory. Oxford: Oxford University Press.

Lessig, L. (2000). Code and Other Laws of Cyberspace. New York, NY: Basic Books.

Lessig, L. (2006). Code version 2.0. Available at http://codev2.cc/download+remix/

Mabi, C. (2015). Les débats CNDP et leurs publics à l’épreuve du numérique. Entre espoir d’inclusion et contournement de la critique sociale (PhD dissertation). Université de Technologie de Compiègne, Compiègne, France.

Mac Sithigh, D. (2012). App Law Within: rights and regulation in the smartphone age (Edinburgh University School of Law Research Paper No. 2012/22).

Manovich, L. (2013). Software Takes Command. Bloomsbury Academic.

Mehra, S. K. (2011). Paradise is a Walled Garden? Trust, Antitrust and User Dynamism. George Mason Law Review, forthcoming. Available at SSRN: http://ssrn.com/abstract=1813974

Monnoyer-Smith, L. (2008). Deliberation and Inclusion: Framing Online public debate to Enlarge Participation - A Theoretical Proposal. I/S: A Journal of Law and Policy for the Information Society, 5, 87.

Mueller, M. (2010). Networks and states: the global politics of Internet governance. Cambridge, Mass: MIT Press.

Musiani, F. (2012). Caring About the Plumbing: On the Importance of Architectures in Social Studies of (Peer-to-Peer) Technology, Journal of Peer Production, 1. Retrieved from: http://peerproduction.net/issues/issue-1/peer-reviewed-papers/caring-about-the-plumbing/

Musiani, F. (2013). Dangerous Liaisons? Governments, companies and Internet governance. Internet Policy Review. Retrieved from https://policyreview.info/articles/analysis/dangerous-liaisons-governments-companies-and-internet-governance doi:10.14763/2013.1.108

Musiani, F., Cogburn, D. L., DeNardis, L., & Levinson, N. S. (Eds.). (2016). The Turn to Infrastructure in Internet Governance. New York: Palgrave Macmillan US. doi:10.1057/9781137483591

Papacharissi, Z. (2010). A private sphere: Democracy in a digital age. Cambridge, England: Polity Press.

Rieder, B. (2006). Métatechnologies et délégation. Pour un design orienté-société dans l’ère du Web 2.0 (PhD thesis). Université Paris 8 Vincennes-Saint-Denis.

Rieder, B., & Sire, G. (2014). Conflicts of interest and incentives to bias: A microeconomic critique of Google’s tangled position on the Web. New Media & Society, 16(2), 195–211. doi:10.1177/1461444813481195, Retrieved from: https://guillaumesire.files.wordpress.com/2012/05/new-media-society-2013-rieder-sire-google-microeconomics.pdf

Röhle, T. (2010). Dissecting the Gatekeepers: Relational Perspectives on the Power of Search Engines. In K. Becker & F. Stalder (Eds.), Deep search: the politics of search beyond Google (pp. 117–132). Innsbruck; Piscataway, N.J.: Studienverlag.

Rushkoff, D. (2010). Program or be programmed: ten commands for a digital age. New York: OR Books.

Sargsyan, T. (2016). The Turn to Infrastructure in Privacy Governance. In F. Musiani, L. DeNardis, D. L. Cogburn, & L. S. Nanette (Eds.), The Turn to Infrastructure in Internet Governance (pp. 189–201). Palgrave Macmillan, New York. doi:10.1057/9781137483591_10

Simondon, G. (2016). On the mode of existence of technical objects. Minneapolis, MN: Univocal Publishing.

Sire, G. (2015). Google, la presse et les journalistes: analyse interdisciplinaire d’une situation de coopétition. Paris: Institut de droit de la concurrence.

Thierry, B. (2013), Donner à voir, permettre d’agir. L’invention de l’interactivité graphique et du concept d’utilisateur en informatique et en télécommunications en France (1961-1990), PhD thesis, Université Paris-Sorbonne (Paris IV).

Winner, L. (1986). The whale and the reactor: a search for limits in an age of high technology. Chicago: University of Chicago Press.

Wright, S. (2012). Politics as usual? Revolution, normalization and a new agenda for online deliberation. New Media & Society, 14(2), 244–261. doi:10.1177/1461444811410679

Wright, S., & Street, J. (2007). Democracy, deliberation and design: the case of online discussion forums. New Media & Society, 9(5), 849–869. doi:10.1177/1461444807081230

Zittrain, J. (2008). The future of the Internet – and how to stop it. New Haven: Yale University Press.

Footnotes

1. On the logics of interface, see (Fuller, 2008; Manovich, 2013; Thierry, 2013).

2. On this subject, see the debate hosted by Wired in 2010, between Chris Anderson and Michael Wolff: http://www.wired.com/2010/08/ff_webrip/

3. See the the iOS Human Interface Guidelines: https://developer.apple.com/ios/human-interface-guidelines/
