The algorithmic dance: YouTube’s Adpocalypse and the gatekeeping of cultural content on digital platforms

Sangeet Kumar, Denison University, Granville, United States, kumars@denison.edu

PUBLISHED ON: 30 Jun 2019 DOI: 10.14763/2019.2.1417

Abstract

The March 2017 advertiser revolt on YouTube, popularly known as the Adpocalypse, introduced widespread and radical changes to the platform’s policies on the moderation of content, its ‘monetisability’ and the terms of the relationship between creators and the platform. These changes have in turn caused significant discontent within the creator community while also gradually transforming the predominant nature of content on the platform. This essay analyses this controversy, which has yet to receive scholarly investigation, in order to probe the ways in which the algorithmic moderation of content affects its monetisability and, consequently, the viewership patterns of culture. Through a close study of the new regime of content moderation and an analysis of user testimonies in the aftermath of the ‘Adpocalypse’, the essay poses critical questions about the public utility-like role of digital platforms, whose gatekeeping function remains largely outside the purview of public debate and deliberation.
Citation & publishing information
Received: March 6, 2019 Reviewed: May 29, 2019 Published: June 30, 2019
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The author has declared that no competing interests exist that have influenced the text.
Keywords: Algorithms, Gatekeeping, Platforms, YouTube, Monetisation, Adpocalypse
Citation: Kumar, S. (2019). The algorithmic dance: YouTube’s Adpocalypse and the gatekeeping of cultural content on digital platforms. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1417

This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

The consequences of the algorithmic selection and ranking of “culture” (Striphas, 2015; Rieder et al., 2017) for the plurality and diversity of the online cultural realm have been widely critiqued by scholars, who have shown the homogenising effect of the process. This essay adds to these critiques by probing the financial and cultural ramifications of the automated curation of culture through a close analysis of the raging controversy resulting from the advertisers’ revolt against YouTube in 2017, an event widely termed the Adpocalypse (Dunphy, 2017; Hess, 2017). The controversy erupted in March 2017 after news reports about advertisements playing around extremist content, first appearing in the UK press (Mostrous, 2017; Neate, 2017), rippled over into Europe and the USA (Solon, 2017), bringing a wide-ranging list of latent grievances against the platform to a head. As the event unfolded, news reports (Rath, 2017) listed threats from as many as 250 big advertisers across industries that announced plans to freeze advertising on the platform. I focus on the changes YouTube made in the event’s aftermath to its processes of classifying and monetising content, to show how the platform’s reaction has had enduring consequences for its culture of creation. Through an analysis of the specific policy changes brought about on YouTube as well as in-depth conversations with content creators affected by them, this essay advances three claims about the consequences of the algorithmic classification and monetisation of cultural content for its plurality and financial viability.

First, the essay claims that the Adpocalypse controversy shows us how algorithmic decisions about the sorting and categorisation of content on YouTube are simultaneously also decisions about the financial trajectory of that content. This is the case because a new regime of classification has arrived alongside expanded options for advertisers to exclude entire categories of content from carrying their ads. Secondly, given the pre-eminence of the profit-centred ecology of social media platforms (Wasko & Erickson, 2009; Fuchs, 2014; Fuchs & Mosco, 2016), the Adpocalypse allows us to establish that decisions about the categorisation (and hence the monetisation) of cultural content invariably have a bearing on the extent of its viewership on social media platforms. The emerging hierarchy of content, where some videos accrue income for YouTube each time they run and others do not, creates structural “incentives to bias” (Rieder & Sire, 2014, p. 202) on the platform that are bound to have enduring consequences for content plurality. It is reasonable to ask whether the algorithmic architecture of a profit-driven platform will treat a demonetised video (that makes no money for YouTube) as equal to the “valuable and profitable” (Bucher, 2012, p. 1169) content around which it can play advertisements and hence accrue earnings for the platform. If not (as the evidence in subsequent sections seems to suggest), then a demonetised video, by virtue of its inability to make money for the platform, will be suppressed from viewership and caught in a downward spiral of diminishing visibility, thus emphasising the enduring consequences of the gatekeeping function (Granka, 2010; Beer, 2017) of algorithmic sorting and classification for the diversity and plurality of online content. Lastly, and leading from the two prior conclusions, this project helps interrogate the quandary arising from the increasingly utility-like role of social media platforms and their popular perception as being publicly accountable. While content moderation, the “central service platforms offer” (Gillespie, 2018, p. 202), has far-reaching social, cultural and political implications, the process nevertheless remains entirely outside the purview of democratic processes of public debate and deliberation (Lewandowski, 2014). The Adpocalypse, unlike any event before it, allows us to consider the policy implications of a scenario where financial interests compete with ensuring the viability of contrarian, risky, unpopular or non-mainstream ideas that are necessary to enable robust debate and have historically served as an antidote to power by disallowing the pre-emption (Andrejevic, 2017) of critique.

In closely analysing the events around the Adpocalypse, this essay focuses on a raging controversy that has received scant critical scholarly attention (Hill, 2019) but one that helps shine new light on key dimensions of the algorithmic regulation of digital culture. It helps interrogate the relationship between creators and platforms, especially in cases where the popular expectation of public accountability from platforms has meant a dual role: as digital infrastructures on which we must live our cultural, political and social lives, but also as profit-making businesses. Close studies of controversies caused by breakdowns, stalemates and malfunction (Larkin, 2008; Gillespie, 2017; Kumar, 2017; Burgess & Matamoros-Fernández, 2016) have provided scholars with rich case studies from which to draw out broader ramifications. Gillespie’s (2017) analysis of the controversy around the manipulation of Google search results by Dan Savage to advance a particular meaning of Rick Santorum’s name exemplifies this method. His analysis shows how specific incidents within glitches, controversies and conflicts can function as data to be analysed in order to deduce their broader ramifications. Such analyses have shown that public controversies resulting from the breakdown of procedures and systems when confronted with an anomalous variable or incident reveal the limits and outer boundaries of existing norms that may otherwise remain invisible and hidden. Hence, “accidents and breakdowns” represent fertile test cases where the underlying infrastructure “comes out of the woodwork” (Peters, 2015, p. 52) and becomes “more visible” (Larkin, 2008, p. 245). The rich lessons from disruptions such as the Adpocalypse underscore that “Glitches can be as fruitful intellectually as they are frustrating practically” (Peters, 2015, p. 52). In closely analysing the key occurrences within this controversy, the essay replicates this method of studying moments of disruption and breakdown in order to draw out their broader implications. It supplements its analysis with testimonies from creators (both publicly available and through interviews) to build the sequence of events that it seeks to analyse. This multi-method approach is important for a study that seeks both to reconstruct the sequence of an important event and to analyse its implications. The wide-ranging changes introduced in YouTube’s processes of content classification, its terms of partnership with creators and its advertising structure after the Adpocalypse form vital components of this case study, whose ramifications are analysed below.

The Adpocalypse and asymmetrical power

With its advertisement- and profit-driven economic model under threat, YouTube’s response to the Adpocalypse was understandably swift and (some would argue) extreme. What began as short-term quick fixes to the threats of boycott were soon institutionalised into permanent changes on the platform, with long-lasting effects on the relationship between creators, advertisers and YouTube, and arguably on the very future of the global digital ecosystem (of which the platform is an important part). The Adpocalypse led to a slew of policy changes on YouTube that included (but were not limited to): the unprecedented decision (in the digital era) of refunding advertisers for ads that had already played (McGoogan, 2017); a significant expansion of human moderation of content (Levin, 2017); allowing advertisers to exclude broad categories of content from carrying their advertisements 1; a steeper and longer on-ramp for content creators to join the YouTube Partners’ Program (Popper, 2017), the YPP that makes creators eligible for monetising their videos; a stricter regime of demonetising videos found not to be “advertiser friendly” 2; a higher threshold for content creators to qualify for appeal once their videos had been demonetised 3 (Patel, 2017); and allowing creators to self-certify their videos as meeting YouTube’s conditions for monetisation (Peterson, 2018). Taken together, these sweeping changes (which included others in addition to the seven mentioned above) reveal how grave a threat to its existence YouTube perceived the Adpocalypse to be, and they establish the platform’s prioritisation of advertisers’ interests over those of creators and users. The event also forces us to confront the consequences of the inevitable algorithmic moderation of content by showing the convergence of the algorithmic gatekeeping of content (Kitchin, 2017; Yeung, 2017) with the profit motive of digital platforms. When these two come together, decisions about the categorisation, regulation and classification of culture (through machine learning systems) have a direct and significant bearing on the financial viability of cultural producers and the sustainability of independent cultural production.

Perhaps the most consequential decision after the Adpocalypse was the expanded ability given to advertisers to exclude broad categories of videos from their advertising campaigns. These content categories (see Figure 1), five in all, that YouTube made available to advertisers carried broad labels such as “Tragedy and conflict”, “Sensitive social issues”, “Sexually suggestive content”, “Sensational and shocking” and “Profanity and rough language”, thus providing the first evidence of YouTube’s new (but hidden) practice of evaluating all videos and then labeling and categorising those fitting the above classifications. Entirely concealed from the creators who have uploaded them, this process of evaluating and labeling all content now provides wide-ranging powers to advertisers to exclude broad categories of content by simply checking the required box, thus enabling a blunt, context-free and algorithmically moderated method of excluding particular formats, genres and topics. Given the challenge of categorising its total video archive (ranging between 6 and 7 billion videos and growing at the rate of over 400 videos per minute), the five labels can only be described in broad, catch-all terms, each with a descriptive paragraph that casts a wide net. For instance, the category “Sensitive social issues” is described as including “news commentary, documentaries, and educational or historical content related to wars, conflicts, or tragic events”, thus allowing advertisers to exclude their ads from a key content vertical of news commentary on the platform with a single click. For the creator, the signal is that such content will invariably make no money. In addition to the excludable content areas, YouTube now also allows advertisers the alternative of letting the platform do the work of matching advertisements to content, by choosing from three broad groupings, Expanded, Standard and Limited Inventories, on which to run their ads. These groupings, which can only be accessed after signing into one’s Google Ads account (see Figure 2), represent a sequence of increasingly exclusive categories and come with YouTube’s recommendation that advertisers choose the middle one, “Standard Inventory”. Notably, this recommended middle category omits a significant amount of content on the platform, including content covered by generic descriptions such as “focus on sex as a topic”, “blood shown in body modifications or medical procedures” and “News, documentary, or education content with words that could be considered biased”. Allowing the exclusion of these meta-groupings of content further delegates the task of matching content with audiences to automated decision-making systems, thus reifying their power “to decide what matters and to decide what should be most visible” (Kitchin, 2017, p. 6). As opposed to the finer levels of control provided by the five categories, this global option pushes more of the process of selection, evaluation and categorisation behind the black box of machine learning. As the experience of creators on the platform shows, this process is far more likely to punish the riskier and more diverse types of content that push the boundaries of mainstream discourses, thus disincentivising their production and sharing and functioning to “suppress content creators’ freedom of speech” (Cantz, 2018).

Figure 1: The categories of content (accessible through a Google Ads account) that potential advertisers can now exclude on YouTube.
Figure 2: The three groupings of content offered to advertisers to run their advertisements on.
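The mechanics of this one-click exclusion can be illustrated with a minimal sketch. Only the five label names come from YouTube’s published categories; the function, variable names and example video are hypothetical illustrations, since the platform’s actual implementation remains a black box.

```python
# A minimal sketch, assuming videos carry platform-assigned labels and an
# advertiser's campaign simply drops any video whose labels intersect the
# excluded set. Illustrative only; not YouTube's actual system.

EXCLUDABLE_LABELS = {
    "Tragedy and conflict",
    "Sensitive social issues",
    "Sexually suggestive content",
    "Sensational and shocking",
    "Profanity and rough language",
}

def can_carry_ads(video_labels: set, campaign_exclusions: set) -> bool:
    """A video stays monetisable for a campaign only if none of its
    labels fall within the advertiser's excluded set."""
    return not (video_labels & campaign_exclusions)

# A hypothetical news-commentary video labelled under one category:
video = {"Sensitive social issues"}
print(can_carry_ads(video, {"Sensitive social issues"}))  # False: excluded with one click
print(can_carry_ads(video, set()))                        # True: no boxes checked
```

The simplicity of the set intersection is the point: an entire genre disappears from a campaign through a single checkbox, with no regard for the context of any individual video.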

In addition to the expanded categories for exclusion, the second decisive change introduced by YouTube after the controversy has been a gradually steepening on-ramp for new creators before they can join the YPP (the YouTube Partners Program that makes channels eligible for monetisation). This occurred in multiple stages, with the first step coming immediately after the events of the Adpocalypse (in April 2017), when YouTube raised the benchmark for the YPP to 10,000 lifetime views 4, a move clearly aimed at changing the fundamental character of the platform by introducing a gestation period before creators could begin to earn money from their content. While a drastic enough change by itself, it was followed up (at the beginning of 2018) by a further raising of the bar, with the rationale that “it’s been clear over the last few months that we need a higher standard” (16 January 2018). The new criteria for joining the YPP were raised to 4,000 hours of watch time during the previous 12 months and 1,000 subscribers (Synek, 2018), which, taken together, virtually eliminated the chances of an amateur YouTuber making any money on the platform for a significant period of time. The motivation, persistence, financial support and resources (for producing and uploading content) now needed to continue without the hope of any returns for a substantial period shift the balance on the platform towards the professional, financially secure and determined creator/producer rather than the unsure and tentative amateur looking to gauge the value and popularity of an idea but without the necessary resources or the motivation to sustain themselves for long without financial returns. When juxtaposed with the fact that most native YouTube stars started out as amateurs without necessarily having such support and motivation to continue in the absence of any financial returns, this raised bar underscores the blurring of the radical edges and the mainstreaming of a platform whose early rise cannot be divorced from its peer-to-peer networked architecture and its ruthless disregard for the restrictions and prescribed formats of television.
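The successive thresholds can be condensed into a brief sketch. The numbers (10,000 lifetime views; 4,000 watch hours over 12 months; 1,000 subscribers) come from the policy changes described above; the function names and the example channel are hypothetical.

```python
# A minimal sketch of the successive YPP eligibility rules described above.
# Thresholds are from the reported policy; all names are illustrative only.

def eligible_for_ypp_april_2017(lifetime_views: int) -> bool:
    """April 2017 rule: 10,000 lifetime channel views."""
    return lifetime_views >= 10_000

def eligible_for_ypp_jan_2018(watch_hours_past_year: float,
                              subscribers: int) -> bool:
    """January 2018 rule: 4,000 watch hours in the previous 12 months
    AND 1,000 subscribers."""
    return watch_hours_past_year >= 4_000 and subscribers >= 1_000

# A hypothetical amateur channel with modest but real traction:
print(eligible_for_ypp_april_2017(lifetime_views=12_000))    # True under the 2017 rule
print(eligible_for_ypp_jan_2018(watch_hours_past_year=900,
                                subscribers=450))            # False under the 2018 rule
```

The same hypothetical channel that cleared the 2017 bar fails the 2018 one, which is precisely the lengthened gestation period the essay describes.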

This mainstreaming was further underscored by another rule, the third key change post-Adpocalypse, which prescribed a higher threshold of appeal after a creator’s video has been deemed unfit for monetisation by YouTube’s algorithm. Instituted a couple of months after the Adpocalypse, the policy now requires a minimum count of a thousand views within a week for a demonetised video to be eligible for human re-evaluation (Patel, 2017). There was, however, an exception built into this rule for channels with over 10,000 subscribers, whose appeals would be considered irrespective of the view count. YouTube explained this exception stating, “We do this because we want to make sure that videos from channels that could have early traffic to earn money are not caught in a long queue behind videos that get little to no traffic and have nominal earnings” (Kain, 2017). Presented as a plausible arrangement to deal with the paucity of human moderators who could re-evaluate every appealed video, this caveat nevertheless underscores a prioritisation whereby the interests of bigger, well-established channels supersede those of new ones still finding their feet. When combined with the higher requirements for the YPP, such exceptions instantiate a dynamic wherein the big channels get bigger while newcomers find it far tougher to gain a sustainable foothold on the platform. The prioritisation of well-established channels in the appeals process legitimises and makes public the unacknowledged phenomenon of hierarchical tiers on the platform. While stratification along the lines of popularity, view count or subscriber numbers is inevitable on any platform, the application of different yardsticks and rules by the platform depending on the popularity of channels makes permanent and institutionalises barriers that are insurmountable for new and upcoming channels and creators. Eventually, these institutionalised hierarchies are bound to be reflected in the platform’s content, making it a different entity than one animated by the “youthful exuberance of its early years” (Lobato, 2016, p. 348) and encapsulating the web’s early ethos of innovation, expression and creation without permission or fear.
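Expressed as a rule, the tiered appeals process reads as follows. The thresholds come from the policy described above, while the function and its names are a hypothetical sketch.

```python
# A minimal sketch of the appeal rule described above: a demonetised video
# qualifies for human re-review only after 1,000 views within a week,
# unless the channel has over 10,000 subscribers. Names are illustrative.

def eligible_for_human_review(views_in_first_week: int,
                              subscribers: int) -> bool:
    if subscribers > 10_000:
        # Established channels bypass the view threshold entirely.
        return True
    return views_in_first_week >= 1_000

print(eligible_for_human_review(views_in_first_week=200,
                                subscribers=50_000))  # True: big channel
print(eligible_for_human_review(views_in_first_week=200,
                                subscribers=800))     # False: newcomer waits
```

The asymmetry is immediately visible: identical videos receive different procedural treatment depending solely on the size of the channel behind them.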

Perhaps the death knell of that stated ethos of giving everyone a voice and ensuring equality of expression 5 comes in the fourth decisive change: the guidelines for “advertising friendly” content 6 introduced after the Adpocalypse. These criteria (comprising nine descriptive categories) are now used to make an early determination about the monetisability of content and are applied prior to the finer distinctions of content classification within categories. They are written in broad, catch-all terms, with the face-saving caveat that “We aren’t telling you what to create - each and every creator on YouTube is unique and contributes to the vibrancy of YouTube”, before adding that, “However, advertisers also have a choice about where to show their ads.” The very first of the nine content types listed as “not suitable for most advertisers” is “Controversial issues and sensitive events”, whose description includes,

Video content that features or focuses on sensitive topics or events including, but not limited to, war, political conflicts, terrorism or extremism, death and tragedies, sexual abuse, even if graphic imagery is not shown, is generally not suitable for ads. For example, videos about recent tragedies, even if presented for news or documentary purposes, may not be suitable for advertising given the subject matter. 6

Notable in this description is the denial of the “advertising friendly” label to videos that even discuss potentially pressing social issues such as sexual abuse and political conflicts. This broad description ends up excluding not only a common and popular genre of channels focused on news, analysis and critique but also content such as political and dissenting speech, whose sharing through digital platforms in the years prior to the Adpocalypse was instrumental in several political and social movements (Wall & El Zahed, 2011; Meek, 2012). Other items in the guidelines for “advertising friendly” videos similarly discourage the mention of certain topics even if done in a satirical or comedic context (e.g., in the guideline “Inappropriate use of family entertainment characters”). To be sure, critiques of the tightening guidelines for advertisement-friendly content must be presented with the caveat that they only matter if creators seek revenue from videos, and hence do not apply to videos uploaded for a myriad of other motivations. These could include merely spreading the word through organised campaigns as well as uploading for educational or archival/storage purposes. Acknowledging these varied motivations, however, does not take away from the discriminatory effect of the newly instituted policies, both because the algorithmic architecture of a profit-driven platform is unlikely to treat unmonetisable (and non-profitable) videos as equal to monetisable (and profitable) ones (as shown below) irrespective of the motive of the creator, and because such an ecosystem skews the incentives towards creating particular genres and types of content and away from others. Taken together, the combined effects of these changes incentivise particular kinds of content creation while disincentivising others, thus raising worrying questions about the consequences of the algorithmic “nudging” (Yeung, 2017; Thaler & Sunstein, 2008) of the creative process towards mainstream, conformist subjects and away from other genres and topics.

The algorithmic dance

Changes such as the newly introduced criteria for advertiser-friendly content and a new regime that evaluates all videos to ascertain whether they fall under the five labels instituted after the Adpocalypse have initiated an era of content regulation that radically alters the creator ecosystem and engenders an anxiety-laden environment of second-guessing, self-surveillance and continuous tweaking (Bishop, 2018; Nieborg & Poell, 2018) of the content being produced. The delegation to algorithms of tasks that were earlier managed by the community on YouTube has strengthened the perception that not only the evaluation of content but (as we will note in the subsequent section) even the earnings (which in some cases are the livelihood) of creators are at the mercy of “YouTube’s unfeeling, opaque and shifting algorithms” (Hess, 2017). Besides sending an irrefutable message of conformity with the guidelines, this enhanced algorithmic authority induces a perpetual sense of precarity among creators, given how often they bear the brunt of minor tweaks within the architectural code of the system. The famed “objectivity” of algorithmic systems (Gillespie, 2016), a boon when sorting through homogeneous data at a computational scale, is far less so when discrete labels need to be applied to heterogeneous content varying across the dimensions of context, culture and the umpteen linguistic cues that shape the meaning of words and texts. The limitations exposed during the algorithmic scrutiny of uploaded videos starkly elevate the importance of the “warmly human” (Gillespie, 2016, p. 26) qualities of deciphering nuance, reading context and subjectively differentiating between the varied intentionalities, cues and frames within cultural content. Stark instances of these limitations repeatedly confront creators, as in the experience of Nick Schade, whose eponymous YouTube channel about “Boat building and Sea Kayaking clips” frequently features a specialised technique called “strip built” that requires the bending and shaping of thin strips of wood. Under the new system, Schade’s videos have repeatedly been flagged and demonetised by YouTube’s algorithm, a phenomenon he ascribes to the machine’s singular understanding of the word “strip” as the removal of one’s clothes. While his high subscriber count has allowed him immediate manual review that has reversed the algorithmic decision each time, his sobering lesson from the ongoing cycle of demonetisation and reversal, which has continued for months, is that,

So far, I have no evidence that the algorithm is learning that there may be multiple definitions of the word “strip”. If I include the word in the title, it is flagged immediately, if I change the title it is unflagged immediately. This has held true for months. 7 (March 2018)

That a simple equation of the word “strip” with a fixed, immutable meaning can throw a wrench in the system of algorithmic sorting of content, leading to the flagging and demonetisation of videos and causing immense inconvenience to creators, goes to the very heart of the limitations of machine learning in handling nuance, context and cues within human language. Such frustrating encounters abound in the post-Adpocalypse era, as in the case of the creators of Faerie Rings Crochet Things, a channel focused on crochet videos and tutorials, who sought to investigate the constant demonetisation of their videos by experimentally uploading two versions of a video with a few scenes changed between them. Their experiment led them to conjecture that one version of the wig-tutorial video was continuously getting flagged by the automated system because it contained the phrase “was just bugging me” 8, the only slang-like expression in the entire video. Such stories about the erasure of context and the misreading of meaning within the automated process of content moderation entrench a culture of speculative guessing about the reasons behind a video’s demonetisation. With the usual offline channels of feedback about individual decisions missing, the comment sections of channels and official YouTube blogs on policy are replete with conversations that resemble a process akin to reading tea leaves. While rampant disaffection stands out as a prominent thread within creators’ responses to the policy changes, so does a sense of communal solidarity in helping each other decode the “mind” of the algorithm, thus pointing to the strange new relationship between creators and the machine learning systems that moderate their content. A case in point is creator Drina Dayz’s dismay at the demonetisation of her video about unboxing a crate of snacks. 9 “This makes no sense, what about this video is not suitable to all advertisers,” she asks, along with a link to the said video on the YouTube help forum. 10 A good digital Samaritan by the name of Shaun Joy replies below her comment, offering to help her out since “YouTube refuses to do anything to help their creators,” and after closely analysing the video and its metadata, he concludes that it is perhaps the word “gross” in the demonetised video’s description that is triggering a flag. He goes on to explain,

I’m guessing that advertisers don't want to have their product associated with something "gross", which makes sense on a 10000 ft level. Except you know, CONTEXT., which Youtube's automatic portion of the algorithm seems to be unable to figure out. (19 September 2017)

Contextual differentiation between meanings, key to the human experience of language, is germane to the rising discontent against the algorithmic gatekeeping introduced after the Adpocalypse. The pervasiveness of algorithmic power in cultural curation can be gauged by the fact (shared by YouTube) that, despite its expansion of human moderation, almost ninety-eight percent of decisions to remove videos for “violent extremism” are now taken by algorithms (Levin, 2017). While the official help pages of both YouTube 11 and Google 12 advise creators to contextualise their videos to help the platform “understand background and intent”, it is just as clear that delegating the task to creators can barely begin to anticipate the complex ways in which language, culture and meaning can be related. As YouTube expands globally, it needs this sort of contextual intelligence not only across languages but also across different national, local and cultural versions of the same language.
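The context-blindness that creators describe can be illustrated with a deliberately naive classifier. This is not YouTube’s system, whose workings are undisclosed; it is a hypothetical sketch of the keyword-triggered behaviour that the testimonies above (the words “strip” and “gross”) suggest creators are up against.

```python
# A hypothetical, deliberately naive flagger: any listed word in a title
# triggers a flag, regardless of context. The two trigger words are those
# reported by creators above; everything else is invented for illustration.

TRIGGER_WORDS = {"strip", "gross"}

def title_gets_flagged(title: str) -> bool:
    # Tokenise crudely, drop surrounding punctuation, and test for overlap.
    words = {word.strip('.,!?"\'').lower() for word in title.split()}
    return bool(words & TRIGGER_WORDS)

print(title_gets_flagged("Strip built sea kayak, part 3"))  # True: flagged
print(title_gets_flagged("Cedar kayak build, part 3"))      # False: the same
                                                            # video, reworded, passes
```

Schade’s experience (flagged when “strip” appears in the title, unflagged the moment it is removed) behaves exactly like such a lookup, which is what makes the opacity of the real system so frustrating: creators can only infer its logic from the outside.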

Repeated unpleasant encounters with this irreducible and insurmountable gap between human and machinic understandings of language and culture have ensured that creators have begun to take a far more cautious approach, dancing around the algorithmic blindspots to avoid the frustrating cycle of demonetisation, appeal and restoration that leads to the loss of precious time and revenue. This algorithmic dance is akin to what Battelle (2005) described as the “Google Dance”, “the moniker given to Google’s periodic update of its algorithms” (Battelle, 2005, p. 157), which could wildly swing the fortunes of small businesses dependent on Google to be discovered. Evading its capricious power necessitates that, in addition to avoiding topics and issues likely to be deemed unfriendly to advertisers or that could possibly be slotted under the five labels, creators often find themselves paying close attention to the choice of words, phrases and metaphors that, despite being commonplace in day-to-day language, could trigger the system’s alarms. In describing the new regime, David Pakman (who runs the channel The David Pakman Show) explains how they “are more careful about what words we include in video titles and description to lower the chances of the video being automatically flagged for demonetization” 13. If Pakman’s established channel with over half a million subscribers needs to be cautious, it is safe to conjecture that new and emerging channels seeking to grow and develop a good relationship with the algorithm would go further and conform voluntarily.

Since most of the policy decisions to sanitise and mainstream content on YouTube can only be exercised and enforced algorithmically, the algorithmic dance to avoid being mislabeled by the automated systems adds an entirely new reason for the “becoming contingent of cultural commodities” (Nieborg & Poell, 2018, p. 2) that is a hallmark of the platformisation of culture. The fear of the all-powerful and indecipherable algorithm begins to function as a deterrent to creating risky, edgy or experimental content, and yet this linguistic self-policing, while consequential, is among the last stages of self-regulation in the process. Before it lie the challenges of ensuring that a video has already met the criteria of being advertising friendly and has avoided being labelled under one of the five categories that advertisers can exclude. Those prior levels of exclusion in the post-Adpocalypse regime already omit a vast quantity of content and move the platform in a direction that it had positioned itself against from the start. The now-famous words of YouTube CEO Susan Wojcicki that “YouTube is not TV, and we never will be” (McCracken, 2017) are often contrasted by disgruntled creators with the ongoing moves towards sanitising the platform’s content to make it a more desirable place for advertisers. This regime of sanitisation ensures that creators even tangentially referring to one of the categories of exclusion must now continuously choose between their revenue and their ability to speak and create content freely. Genres such as news, commentary, political shows and comedy seem particularly vulnerable to the loss of revenue through the risk of partial or full demonetisation. In explaining the predicament, Ethan Klein of the comedy channel h3h3 says,

It’s getting so bad that you can’t even speak your mind or be honest without fear of losing money and being not ‘brand-friendly’. YouTube is on the fast track to becoming Disney vloggers: beautiful young people that wouldn’t say anything controversial and are always happy. (Hess, 2017)

This chilling effect of the algorithmic changes is elaborated upon by Jörg Sprave, the owner of The Sling Shot Channel, who in July 2018 quit being a full-time YouTuber to go back to a regular job. He claims that the episode did not just have losers but also winners. “I know a lot of channels who are on the winning side of this too – if you are into cooking recipe videos or if you make songs to make babies fall asleep – these people now make a lot of money – many more times the money they used to make,” he explains (Sprave, personal conversation). The disincentive to produce particular types of content comes alongside the implicit nudge and the explicit incentive to produce other types, pointing to a turn on the platform whose effects are likely to be far more widespread in the coming years.

The contrast in this visible turn would not be as notable were it not for YouTube’s positioning of itself as a subversive medium “to give everyone a voice” 14 that imbibed the web’s culture of freedom, rebelliousness and disruption (as captured in Wojcicki’s quote above). In their authoritative text on YouTube, Burgess and Green (2018) discuss the mainstream media’s worried recognition of the platform’s disruptive effect. Its promise to empower the common creator was a wager on a cultural ecology independent of the pressures of institutions such as states or corporations that have historically sought to exercise editorial control over channels of communication. Events following the Adpocalypse show a reversal in that culture on the platform, a turn that is explained by the historically symbiotic relationship between advertisers and the media, which has brought attendant complications alongside. Historically, corporations have not only sought to exercise editorial control over media but have openly deployed the power of their advertising dollars to create a sanitised environment for their commercials through “economic censorship” (Richards & Murphy, 1996; Baker, 1992). In a now infamous memo to broadcasters, Procter & Gamble, the largest advertiser in the US (and incidentally also a leader of the post-Adpocalypse boycott of YouTube 15), gave very specific instructions to facilitate a “buying mood” that included instructions to avoid depicting the horrors of war, portraying criminal activities on screen, showing business as “cold, ruthless and lacking all sentiment” and any attacks on “some basic conception of the American way of life” (Bagdikian, 2009). The consumer giant’s diktat, however, was just one instance of a normalised culture (elaborately documented by Richards & Murphy, 1996) wherein corporations routinely sought to shape and regulate the content of media. So explicit was this practice that there existed job titles such as “screeners” for trained experts who worked at agencies specialising in screening television content to vet shows as appropriate for advertisements to run on. In an eerie similarity with YouTube’s sensitive categories that advertisers can exclude today, one such screener, Tami Engelhardt, described the method thus: “Basically, we look for what we call the Big Six: sex, violence, profanity, drugs, alcohol and religion” (Carter, 1990). The migration of this historical phenomenon onto the digital platform is not surprising but is evidence of its gradual emulation of TV, a point further underscored by YouTube’s decision to hire more than 10,000 human moderators for “reviewing content that could violate its policies” (Levin, 2017) after the Adpocalypse.

Explicit diktats by advertisers to shape media content perhaps conceal the more pernicious consequence of the process: the invisible self-censorship by individuals and media institutions whose reverberating chilling effect proscribes any allusion to controversial issues and topics. When creators on YouTube discuss the process of “self-optimization” (Bishop, 2018) to make their content algorithm-friendly (see Gillespie, 2014), they are only acting according to “a pervasive awareness that deviation can be costly” (Baker, 1992, p. 2142) for advertising-supported media. In describing how this fear can ripple through the media ecosystem, Baker (1992) explains that, “Eventually, this system of predicted disapproval dissuades reporters or producers from even thinking about a problematic story” (p. 2143). The uncanny similarity between the historical experience of media personnel and the second-guessing and cautious playing-safe of YouTube creators today irrefutably signals a new era on the platform. The privileging of advertisers’ interests to the detriment of creators after the Adpocalypse fits well within this altered scenario and is only the latest such move by a platform that is now in serious competition (with television networks) for the advertising spend of the big television advertisers. An important prior link in that chain was the launch of the Google Preferred programme, which created an elite list of channels (comprising the top 5% of YouTube videos), determined by “a proprietary algorithm involving total audience and passion level among viewers” (McCracken, 2017), to be given special treatment and offered as a package to advertisers. The creation of an elite tier only further establishes a truth that is by now a well-known dictum - that everyone is not equal on YouTube - thus institutionalising the very hierarchical structuration (Mosco, 1996) that was the hallmark of older, pre-digital media entities.

Public infrastructures, private interests

Creators’ frustration and YouTube’s gradual pivot towards a more mainstream media outlet encapsulate the emerging quandary wherein private digital entities “inhabit a new position of responsibility” akin to essential public utilities and are “entwined with the institutions of public discourse” (Gillespie, 2018, p. 203) yet continue to remain largely outside the purview of democratic deliberation and public accountability. Creators’ exasperation arises from unmet expectations that have naturally risen as users have got used to building their creative and commercial lives around digital infrastructures such as YouTube. The increasingly common perception of private digital platforms as public infrastructures becomes acutely real during conflicts, breakdowns or other moments of dissatisfaction, with users instinctively calling up public institutions such as the police to restore service during disruptions of access. The most recent instance of this fascinating phenomenon occurred on 16 October 2018, when YouTube faced a rare hour-long global outage (Almasy, 2018), leading to a social media eruption of accounts and testimonies of what it was like to live without access to the platform even for a few minutes. As the Twitter hashtag #YouTubeDown began to trend and a sense of purposeless ennui gripped social media users, police departments as far apart as Philadelphia and New Zealand reported calls from desperate users seeking their help in restoring service. In tweets whose hilarity cannot conceal their significance, the Philadelphia police department (@PhillyPolice) said, “While it is extremely annoying, @YouTube being down is not a police matter #YouTubeDOWN”, and the New Zealand police (@nzpolice) said, “Yes, our @YouTube is down, too. No, please don't call 911 - we can't fix it”. The phenomenon of users calling the police during interruptions of service is not limited to YouTube and has occurred just as well with other digital media platforms. When Facebook went down briefly due to technical issues, users from a small town such as Cheshire in the UK to a big city such as Houston in the US called their local police departments to complain (Griffin, 2015; Hooper, 2017), thus underscoring the platform’s spontaneous equation with a public institution with service obligations and accountability.

Reactions to breakdowns such as the above reveal both the creeping role of digital platforms as social, cultural and material infrastructures and their “taken-for-grantedness” in the public mind, akin to public services that remain inconspicuous “until they break down or something goes awry” (Larkin, 2008, p. 245). Users’ spontaneous calls to the police show a subconscious and illusory identification of platforms as publicly accountable entities, a perception arising no doubt from their central role in digital life. While this perception would not hold upon thoughtful scrutiny, the basis for its popular prevalence must nevertheless be acknowledged, and it has led to interrogations of platforms’ role in ensuring the rights of citizens (Belli et al., 2017) in areas such as freedom of speech, access to information and data protection. It is important to acknowledge that this perception has been a long time in the making: expectations of fairness and accountability amidst the asymmetrical relationship between platforms and users were already visible in the early years of the web, when Google emerged as the giant of online advertising it is today. John Battelle’s (2005) exhaustive account of Google’s birth and expansion contains a prescient nugget about Neil Moncrief, a small business owner whose niche trade of making shoes for people with larger-sized feet witnessed a precipitous rise and an equally dramatic fall due to tweaks in Google’s search algorithm. In describing the sharp drop, Moncrief claimed that it was “…as if the Georgia Department of Transportation had taken all the road signs away in the dead of night and his customers could no longer figure out how to drive to his store” (Battelle, 2005, p. 156). Moncrief’s metaphorical comparison of Google’s services with the public highway system was prophetic, as it anticipated by more than a decade the experience of creators on YouTube after the Adpocalypse. These instances, though a decade apart, show a continuing re-orientation of the boundaries between our understanding of the public and the private in a scenario where our social, political, commercial and cultural world is increasingly mediated by digital platforms.

Platforms’ function as public utilities that form the bedrock of digital life is made all the more salient by the acute material and economic repercussions of their decisions. Creators lament that a vital dimension of the Adpocalypse controversy went relatively un-interrogated: in addition to the financial consequences of algorithmic decisions about content categorisation and curation, those decisions, far more worryingly, affected the visibility of their content. To parse out the significance of this point, we have to navigate the thicket of the controversy and the policy statements made by YouTube with a thought experiment 16. Consider two categories of videos: those that are advertiser friendly and not excludable under the five labels, on which all ads can run; and those that are excludable either entirely (for not being advertiser friendly) or partially (because advertisers can check off a label under which the video has been categorised). Which of these two categories of videos would the YouTube algorithm give preference to? This question is akin to invoking the sanctity of the historical divide between the editorial and the commercial sides of traditional media such as newspapers and television. YouTube’s answer to this question (given through its unofficial channel Creator Insider) is an emphatic denial that the algorithm for search and discovery has any knowledge of the monetisation status of a video, thus implying that whether or not a video is fit for running ads has no bearing on its viewership numbers. In emphasising this point, a YouTube engineer, Todd, explains in a video 17,

The search and discovery systems that decide which videos to recommend - they don’t have any knowledge about what is going on in the advertising system - so if you get that yellow icon that you see that says it may not be suitable for all advertisers, the information about that does not even flow into our system. So they are completely different systems; different teams manage them.

The implication of this answer, that YouTube’s worries about its bottom line and its established affinity to its advertisers (as evident in the policy changes instituted post-Adpocalypse) do not influence its decisions about the discoverability of videos and the recommendations made to viewers, would be laudable if true. However, given the thrust of YouTube’s decisions aimed at protecting its bottom line and advertisers’ interests, it would seem an aberration for the platform to treat videos that cannot make it money on par with those that can. Its denial instantiates an enduring quandary for critical scholars faced with seemingly implausible claims by platforms that are difficult to disprove unless the algorithmic black box (Pasquale, 2015) can be circumvented. Thanks to the angst arising from the Adpocalypse, evidence to counter YouTube’s claims emerged from the community of creators and users who were desperately seeking answers to the unfolding controversy. By delving deeper into his videos’ analytics, one of them was able to show a clear correlation between the monetisation status of videos and their viewership patterns. Discussing the 90-day overview of his videos 18, the German YouTuber Jörg Sprave not only showed that viewership numbers fell exactly at the time his videos were demonetised but also that they were back to normal levels the precise moment the demonetisation decision was reversed (see Figures 3 and 4). Claiming that there was a “100% correlation” between monetisation status and viewership patterns, Sprave argues that “they are only supporting videos that make money for them and if they don’t make any money they are filtering them out.” Refuting this conclusion, representatives of YouTube (writing below Sprave’s video and using the Creator Insider handle) 18 deny any correlation and instead ascribe the drop in viewership numbers of his demonetised videos to the fact that “the underlying reason for less ads or no ads is also used by search and discovery”.

Figure 3: Screenshot from Sprave’s video analysing how viewership numbers on his video (two graphs on the left) dropped at the same time his video was demonetised (bottom right graph).
Figure 4: Screenshot from Sprave’s video showing the perfect synchronisation between viewership (two graphs on the left) and the monetisation status of his video (bottom right graph).

While plausible, this explanation is unable to account for the perfect synchronicity in timing between monetisation status and viewership numbers, which can only be explained by a coordination between the search and discovery algorithms and those classifying and categorising content. That the decisions about demonetisation and suppressed viewership, arrived at through two separate systems (i.e., content classification and search and discovery), had perfect synchronicity is too striking a coincidence to be explained by chance alone, as representatives of YouTube would have us believe. Moreover, if algorithmic decisions about monetisation and visibility are informed by the same underlying reason, they raise worrying questions about the types of content that will gain visibility on the platform.
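The kind of check Sprave performed informally can be reproduced in a few lines of analysis. The data below is invented purely for illustration; only the claimed pattern (views dropping while demonetised and recovering upon reversal) comes from his testimony.

```python
# A sketch of Sprave's informal check: compare a video's daily views against
# its monetisation status over the same days. Data is invented; the pattern
# mimics the one he reported. Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

monetised   = [1, 1, 1, 0, 0, 0, 1, 1]   # 1 = ads enabled on that day
daily_views = [980, 1010, 995, 310, 290, 305, 970, 1005]

# Pearson's r between the binary monetisation flag and daily views.
print(round(correlation(monetised, daily_views), 3))  # ≈ 0.999: near-perfect co-movement
```

Correlation of course does not identify which system causes which; the point of the exercise is only that the synchrony is measurable from a creator’s own analytics, which is precisely what makes YouTube’s denial difficult to square with the evidence.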

If true, this similarity between the two sides of the platform fits well with YouTube’s aggressive attempts to increase its bottom line, which include proactively courting global corporate advertising spend by cashing in on its latent potential as a hub of online cultural creativity. The rising graph of its share of the global advertising pie, aided by the increasing migration of advertising dollars from television to the digital domain (McCracken, 2017), was estimated to make YouTube a US$15 billion business in 2018 (Jhonsa, 2018), thus entrenching its role as a profitable cash cow for its parent company Alphabet. That YouTube’s search and discovery algorithms would prioritise profitable content over content unlikely to bring in any advertising dollars (as suggested by Sprave’s experience) seems a natural fit with the platform’s financial ambitions, and yet it raises troubling questions about its role in shaping the global cultural ecology. When financial considerations prevail over other factors in decisions about what content gets promoted and what does not, they begin to skew content moderation towards particular types of discourses, truths and versions of reality and away from others.

In contrast to its “implicit contract” (Gillespie, 2018, p. 203) of having no stakes in the ongoing duel of ideas that defines liberal democracies, YouTube’s financial priorities begin to create invisible pathways for users, thus “shaping what they know, who they know, what they discover, and what they experience” (Kitchin, 2017, p. 6). There are precedents for such power in prior media institutions (McChesney, 2015), and yet its consequence in the digital domain, given the kind of monopoly that YouTube enjoys in the field of video content globally, is far more enduring. And while those earlier monopolies were critiqued, resisted and regulated, digital platforms such as YouTube function with wide protection wherein they are trusted to self-regulate for the common good. As the aftermath of the Adpocalypse reveals, trusting digital platforms to foreground plurality, contrarianness and heterogeneity (Mill, 2014) while also remaining impartial adjudicators between competing ideas and truth claims leaves decisions far too consequential at the mercy of their profit-driven interests.

Conclusions

The scholarly history of media industries is replete with laments about the consequences of unrequited trust in profit-seeking corporations to take decisions for the larger common good. The public perception of globally dominant digital platforms as public infrastructures, emboldened no doubt by their necessity for living our digital lives, makes their inclusion within the purview of democratic deliberation and accountability an imperative for our age. The chilling effect created by the looming fear of demonetisation and the loss of viewership resulting from algorithmic categorisation has consequences far beyond the immediate discontent generated by YouTube’s decisions post-Adpocalypse. The Adpocalypse signals a decisive shift in the incentive structures of content creation on YouTube, one likely to deter creators from particular topics, genres and categories of content, charting a path away from a plural, free and heterogeneous ecosystem towards a more sanitised, family-friendly and mainstream platform.

While YouTube continuously tweaks its rules in response to emerging crises, what remains entirely missing is any formalised process of stakeholder participation in the decisions it makes. In a system where it is not answerable to any regime other than concerns about its own survival and bottom line, it is free to arbitrarily take cognisance of the concerns that it considers worthy of attention and to ignore the rest. Its expanding global footprint and the millions of users who come to rely on it for the wide range of activities that make up our cultural life make such an asymmetrical relationship untenable in the long run. More than two years after the controversy, the lack of a formalised redressal mechanism has ensured that the rumblings among the creator community have not diminished, raising critical questions about the precarity of creator labour and the exploitative nature of the relationship between platforms and ‘produsers’. Instituting formalised mechanisms for stakeholder participation that go beyond mere gestures to recognise how the platform profits from uncompensated labour is key to redressing the grievances arising from the controversy. Such mechanisms must seek to live up to the original promise of the platform by recognising the precarious position of creators, who remain the most vulnerable and least powerful voice among the stakeholders affected by the Adpocalypse.

References

Almasy, S. (2018, October 17). YouTube back online after brief outage. CNN. Retrieved November 18, 2018 from https://www.cnn.com/2018/10/16/tech/youtube-outage/index.html

Anderson, C. (2008, June 23). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired. Retrieved from https://www.wired.com/2008/06/pb-theory/

Andrejevic, M. (2017). To Pre-Empt A Thief. International Journal of Communication, 11, 879–896. Retrieved from https://ijoc.org/index.php/ijoc/article/view/6308

Bagdikian, B. H. (2009). Dr. Brandreth Has Gone to Harvard. In J. Turow & M. P. McAllister (Eds.), The Advertising and Consumer Culture Reader (pp. 76–90). New York: Routledge.

Baker, C. E. (1992). Advertising and a Democratic Press. University of Pennsylvania Law Review, 140(6), 2097–2243. Retrieved from http://scholarship.law.upenn.edu/penn_law_review/vol140/iss6/1

Battelle, J. (2005). The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture. New York: Portfolio.

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. doi:10.1080/1369118X.2016.1216147

Belli, L., Erdos, D., Fernández Pérez, M., Francisco, P. A. P., Garstka, K., Herzog, J., … Zingales, N. (2017). Platform regulations: How platforms are regulated and how they regulate us. Retrieved from http://bibliotecadigital.fgv.br/dspace/handle/10438/19402

Bishop, S. (2018). Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm. Convergence, 24(1), 69–84. doi:10.1177/1354856517736978

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. doi:10.1177/1461444812440159

Burgess, J., & Green, J. (2018). YouTube: Online Video and Participatory Culture (2nd ed.). Cambridge; Malden, MA: Polity.

Burgess, J., & Matamoros-Fernández, A. (2016). Mapping sociocultural controversies across digital media platforms: One week of #gamergate on Twitter, YouTube, and Tumblr. Communication Research and Practice, 2(1), 79–96. doi:10.1080/22041451.2016.1155338

Carter, B. (1990, January 29). THE MEDIA BUSINESS; Screeners Help Advertisers Avoid Prime-Time Trouble. The New York Times. Retrieved from https://www.nytimes.com/1990/01/29/business/the-media-business-screeners-help-advertisers-avoid-prime-time-trouble.html

Craig, D., & Cunningham, S. (2019). Social Media Entertainment: The New Intersection of Hollywood and Silicon Valley. New York: NYU Press.

Dunphy, R. (2017, December 28). Can YouTube Survive the Adpocalypse? Intelligencer. Retrieved November 18, 2018, from http://nymag.com/intelligencer/2017/12/can-youtube-survive-the-adpocalypse.html

Fuchs, C. (2017). Social Media: A Critical Introduction (Second edition). Thousand Oaks, CA: SAGE Publications.

Fuchs, C., & Mosco, V. (Eds.). (2017). Marx in the Age of Digital Capitalism. Chicago: Haymarket Books.

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press.

Gillespie, T. (2016). Algorithm. In B. Peters (Ed.), Digital Keywords (pp. 18–30). Princeton; Oxford: Princeton University Press.

Gillespie, T. (2017). Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem. Information, Communication & Society, 20(1), 63–80. doi:10.1080/1369118X.2016.1199721

Gillespie, T. (2018). Platforms are not intermediaries. Georgetown Law Technology Review, 2, 198–216. Retrieved from https://georgetownlawtechreview.org/platforms-are-not-intermediaries/GLTR-07-2018/

Granka, L. A. (2010). The politics of search: A decade retrospective. The Information Society, 26(5), 364–374. doi:10.1080/01972243.2010.511560

Griffin, A. (2015, September 29). Facebook Down: Don’t Ring Us When Site Stops Working, Say Police. Retrieved November 18, 2018, from The Independent website: http://www.independent.co.uk/life-style/gadgets-and-tech/facebook-down-don-t-ring-us-when-site-stops-working-say-police-a6672081.html

Hallinan, B., & Striphas, T. (2016). Recommended for you: The Netflix Prize and the production of algorithmic culture. New Media & Society, 18(1), 117–137. doi:10.1177/1461444814538646

Hess, A. (2017, April 17). How YouTube’s Shifting Algorithms Hurt Independent Media. The New York Times. Retrieved from https://www.nytimes.com/2017/04/17/arts/youtube-broadcasters-algorithm-ads.html

Hillis, K., Petit, M., & Jarrett, K. (2013). Google and the culture of search. Abingdon: Routledge. doi:10.4324/9780203846261

Hooper, B. (2017, October 12). Police: Don’t call 911 to report Facebook is down. UPI. Retrieved from https://www.upi.com/Odd_News/2017/10/12/Police-Dont-call-911-to-report-Facebook-is-down/5381507814083/

Jhonsa, E. (2018, May 12). How Much Could Google’s YouTube Be Worth? Try More Than $100 Billion. TheStreet. Retrieved from https://www.thestreet.com/investing/youtube-might-be-worth-over-100-billion-14586599

Kain, E. (2017, September 18). YouTube Wants Content Creators To Appeal Demonetization, But It’s Not Always That Easy. Forbes. Retrieved from https://www.forbes.com/sites/erikkain/2017/09/18/adpocalypse-2017-heres-what-you-need-to-know-about-youtubes-demonetization-troubles/#33a23c4c6c26

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. doi:10.1080/1369118X.2016.1154087

Kumar, S. (2017). A river by any other name: Ganga/Ganges and the postcolonial politics of knowledge on Wikipedia. Information, Communication & Society, 20(6), 809–824. doi:10.1080/1369118X.2017.1293709

Larkin, B. (2008). Signal and Noise: Media, Infrastructure, and Urban Culture in Nigeria. Durham, NC: Duke University Press.

Levin, S. (2017, December 5). Google to hire thousands of moderators after outcry over YouTube abuse videos. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/dec/04/google-youtube-hire-moderators-child-abuse-videos

Lewandowski, D. (2014). Why We Need an Independent Index of the Web. In R. König & M. Rasch (Eds.), Society of the Query Reader: Reflections on Web Search (pp. 49–58). Amsterdam: Institute of Network Cultures. Retrieved from http://networkcultures.org/query/reader/articles-for-download-society-of-the-query-reader/

Lobato, R. (2016). The cultural logic of digital intermediaries: YouTube multichannel networks. Convergence, 22(4), 348–360. doi:10.1177/1354856516641628

McChesney, R. W. (2015). Rich Media, Poor Democracy: Communication Politics in Dubious Times. New York: The New Press.

McCracken, H. (2017, June 18). Susan Wojcicki Has Transformed YouTube—But She Isn’t Done Yet. Fast Company. Retrieved from https://www.fastcompany.com/40427026/susan-wojcickis-youtube-isnt-tv-but-its-tvs-biggest-rival

McGoogan, C. (2017, July 3). YouTube refunds advertisers after terror content scandal. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/2017/07/03/youtube-refunds-advertisers-terror-content-scandal/

Meek, D. (2012). YouTube and Social Movements: A Phenomenological Analysis of Participation, Events and Cyberplace. Antipode, 44(4), 1429–1448. doi:10.1111/j.1467-8330.2011.00942.x

Mosco, V. (1996). The Political Economy of Communication. London: Sage Publications.

Mostrous, A. (2017, February 9). Google faces questions over videos on YouTube. The Times. Retrieved from https://www.thetimes.co.uk/article/google-faces-questions-over-videos-on-youtube-3km257v8d

Neate, R. (2017, March 17). Extremists made £250,000 from ads for UK brands on Google, say experts. The Guardian.

Nieborg, D. B., & Poell, T. (2018). The platformization of cultural production: Theorizing the contingent cultural commodity. New Media & Society, 20(11), 4275–4292. doi:10.1177/1461444818769694

Pasquale, F. (2016). The Black Box Society: The Secret Algorithms That Control Money and Information (Reprint edition). Cambridge, MA; London: Harvard University Press.

Patel, S. (2017, September 6). The “demonetized”: YouTube’s brand-safety crackdown has collateral damage. Digiday. Retrieved from https://digiday.com/media/advertisers-may-have-returned-to-youtube-but-creators-are-still-losing-out-on-revenue/

Peters, J. D. (2015). The Marvelous Clouds: Toward a Philosophy of Elemental Media. Chicago: University of Chicago Press.

Peterson, T. (2018, April 24). To identify unsafe content, YouTube tries asking creators to rate their own videos. Digiday. Retrieved from https://digiday.com/media/identify-unsafe-content-youtube-tries-asking-creators-rate-videos/

Popper, B. (2017, April 6). YouTube will no longer allow creators to make money until they reach 10,000 views. The Verge. Retrieved July 8, 2019, from https://www.theverge.com/2017/4/6/15209220/youtube-partner-program-rule-change-monetize-ads-10000-views

Cantz, R. (2018, May 1). Adpocalypse: How YouTube Demonetization Imperils the Future of Free Speech. Berkeley Political Review. Retrieved July 8, 2019, from https://bpr.berkeley.edu/2018/05/01/adpocalypse-how-youtube-demonetization-imperils-the-future-of-free-speech/

Rath, J. (2017, March 23). Here are the biggest brands that have pulled their advertising from YouTube over extremist videos. Business Insider India. Retrieved November 18, 2018, from https://www.businessinsider.in/here-are-the-biggest-brands-that-have-pulled-their-advertising-from-youtube-over-extremist-videos/articleshow/57793895.cms

Richards, J. I., & Murphy, J. H. (1996). Economic Censorship and Free Speech: The Circle of Communication between Advertisers, Media, and Consumers. Journal of Current Issues & Research in Advertising, 18(1), 21–34. doi:10.1080/10641734.1996.10505037

Rieder, B., Matamoros-Fernández, A., & Coromina, Ò. (2018). From ranking algorithms to ‘ranking cultures’: Investigating the modulation of visibility in YouTube search results. Convergence, 24(1), 50–68. doi:10.1177/1354856517736982

Rieder, B., & Sire, G. (2013). Conflicts of interest and incentives to bias: A microeconomic critique of Google’s tangled position on the Web. New Media & Society, 16(2), 195–211. doi:10.1177/1461444813481195

Roio, D. (2018). Algorithmic Sovereignty (PhD Thesis, University of Plymouth). Retrieved from http://hdl.handle.net/10026.1/11101

Solon, O. (2017, March 25). Google’s bad week: YouTube loses millions as advertising row reaches US. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/mar/25/google-youtube-advertising-extremist-content-att-verizon

Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4–5), 395–412. doi:10.1177/1367549415577392

Synek, G. (2018, January 17). YouTube raises activity requirements for partner program monetization. TechSpot. Retrieved July 9, 2019, from https://www.techspot.com/news/72792-youtube-raises-activity-requirements-partner-program-monetization.html

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven: Yale University Press.

Wall, M., & El Zahed, S. (2011). “I’ll Be Waiting for You Guys”: A YouTube Call to Action in the Egyptian Revolution. International Journal of Communication, 5, 1333–1343. Retrieved from https://ijoc.org/index.php/ijoc/article/view/1241

Wasko, J., & Erickson, M. (2009). The Political Economy of YouTube. In P. Snickars & P. Vonderau (Eds.), The YouTube Reader (pp. 372–386). Stockholm: National Library of Sweden.

Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. doi:10.1080/1369118X.2016.1186713
