Online advertising, content moderation, and corporate accountability: A civil society perspective

Alex Rochefort, Boston University, College of Communication, United States of America

PUBLISHED ON: 31 Mar 2025

This paper is part of Content moderation on digital platforms: beyond states and firms, a special issue of Internet Policy Review guest-edited by Romain Badouard and Anne Bellon.

Disclaimer

From 2021 to 2022, Alex Rochefort was a policy fellow with Ranking Digital Rights, a non-profit research programme formerly housed at New America, where he supported the organisation's policy research and advocacy.

In recent years, many civil society organisations have increasingly focused on identifying the underlying structural basis of digital platform dysfunction. From this critical scrutiny, one culprit has surfaced repeatedly: “It’s the business model” (Amnesty International, 2019; Maréchal & Biddle, 2020; Lemoine et al., 2021). The “business model” diagnosis recognises online behavioural advertising (OBA) as the beating heart, economically and strategically, of organisations like Google (Alphabet) and Meta, among other digital platform companies.1 However, while a thriving OBA portfolio may nicely meet the needs of platform operators and their business customers, for users it can exacerbate a variety of ills (Hoofnagle et al., 2012; Vaidhyanathan, 2018; Ali et al., 2019; Crain & Nadler, 2019; Access Now, 2020; Maréchal et al., 2020; Milano et al., 2021).

This op-ed centres on the interface of OBA, content moderation, and the corporate accountability work of the civil society group Ranking Digital Rights (RDR). Highlighting RDR’s use of benchmarking tools to inform and engage investor communities, the argument is that a focus on platform business models not only represents a logical response to the opaque management controls and incentives that configure users’ online experiences, but also helps set the agenda for future governance reforms.

Content moderation as risk mitigation for platform business models

Often discussed in regard to the (flawed) attempts by platforms to protect users from hate speech, inappropriate content, and other online hazards, the practice of content moderation is also employed by platforms to make their services “safe” for business clients by ensuring that advertisements do not appear alongside material damaging to a company’s image or reputation (Griffin, 2023). As Roberts (2018) summarises, “content moderation is fundamentally a matter of brand protection for the firm” (see generally, Gillespie, 2018; Roberts, 2019). The necessity of this function surfaced dramatically in 2020 with the international “Stop Hate for Profit” campaign, a movement in which more than 1,200 businesses and non-profits halted their advertising on Facebook due to the presence of violent and other objectionable content on the platform (Stop Hate for Profit, 2020). One platform company, Reddit, disclosed in a recent pre-IPO filing with the Securities and Exchange Commission (SEC) that its volunteer-based, “layered” content moderation strategy “inherently subjects [them] to numerous risks” that, if not dealt with effectively, jeopardise the company’s advertising operations (Reddit Inc., 2024). Many advocates for online users and their rights also complain about, first, the link between OBA and privacy infringements due to large-scale collection of users’ personal data for ad targeting (e.g., Cyphers & Schwartz, 2022), and, second, the perpetuation of a platform environment calculated to maximise user engagement for the sake of increased exposure to ads (e.g., Ghussain, 2021).

Greater transparency about the techniques and impacts of OBA is indispensable to any concept of platform governance featuring a role for outside authorities and experts, yet this expectation conflicts with what platforms have been willing to disclose about their advertising operations so far (Leerssen et al., 2019; Edelson et al., 2021; Hawker et al., 2022; Mozilla & CheckFirst, 2024). Responding, in part, to this information gap, the civil society group Ranking Digital Rights has put “the business model” at the core of its corporate accountability work.

Ranking Digital Rights and corporate accountability

Ranking Digital Rights (RDR)2 is a digital rights-focused research programme known for its Corporate Accountability Index, an annual evaluation of the freedom-of-expression and privacy commitments, policies, and practices of information and communication technology (ICT) companies.3 The index ranks 26 companies on their performance across 58 indicators comprising more than 300 individual standards, based on publicly disclosed policies and practices.4 As it relates to online advertising and content moderation, RDR’s indicators evaluate whether companies clearly disclose human rights impact assessments of targeted advertising systems, rules for prohibited advertising content, descriptions of ad targeting rules and targeting parameters, processes for detecting and responding to violations of ad content/targeting policies, and evidence of policy enforcement in the form of data about content removals for policy violations (RDR, n.d.-a).
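To make the index’s layered structure concrete, the minimal Python sketch below illustrates how element-level assessments of company disclosures might be aggregated into indicator scores and an overall company score. It is purely illustrative, not RDR’s documented methodology: the element names, scores, and equal weighting are all assumptions.

```python
# Illustrative sketch of hierarchical benchmark scoring, loosely modelled on
# the structure described above (company -> indicators -> elements).
# Equal weighting and the 0-100 scale are assumptions for illustration only.

from statistics import mean

# Hypothetical element-level scores for one company: each indicator groups
# several assessments of publicly disclosed policies, mapped here to
# 100 (full disclosure), 50 (partial), or 0 (none).
element_scores = {
    "ad_content_rules": [100, 50, 0],       # rules for prohibited ad content
    "ad_targeting_rules": [50, 50, 100],    # disclosed targeting parameters
    "policy_enforcement_data": [0, 50],     # data about ad removals
}

def indicator_score(elements: list[float]) -> float:
    """Average the element scores within one indicator."""
    return mean(elements)

def company_score(indicators: dict[str, list[float]]) -> float:
    """Average indicator scores into a single 0-100 company score."""
    return mean(indicator_score(e) for e in indicators.values())

if __name__ == "__main__":
    for name, elements in element_scores.items():
        print(f"{name}: {indicator_score(elements):.1f}")
    print(f"overall: {company_score(element_scores):.1f}")
```

RDR’s actual indicator definitions, weighting, and scoring procedures are documented on its methods page (see footnote 4).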

To date, research on the RDR index has highlighted its methodological construction (Lokot & Wijermars, 2023), as well as the challenge of utilising the tool for policy ends (Kogen, 2024). With some exceptions (Radsch, 2022), use of the index within the investor community for purposes of corporate governance has received less notice. In fact, the index was originally “designed” for shareholder needs (RDR, n.d.-b), with the goal of informing investor proposals on human rights due diligence and achieving greater corporate transparency for issues that now include OBA-related activities (Nabors, 2023).

A significant development is the growing relationship between benchmarking and environmental, social, and governance (ESG) investing, the notion that a firm’s financial performance can be boosted by operating in a socially responsible manner. ESG ratings, which are maintained by commercial and non-profit organisations, score businesses on issues ranging from environmental policies to labour practices and beyond. Investors, from large institutional entities to retail (non-professional) investors, often rely on these ratings to allocate funds. Although a detailed critique of ESG rating and benchmarking lies beyond the scope of this op-ed, it bears noting that, as RDR has argued, many human rights considerations are being “neglected by existing ESG frameworks” (Rydzak, 2023). The RDR Index addresses this limitation by providing a tool that highlights human rights concerns, specifically in relation to freedom of expression and privacy, as well as the impact of a company’s overarching corporate governance structures.

In tandem with its index work, RDR’s investor engagement activities include working with standard-setting bodies to formulate industry-specific indicators as part of corporate disclosures;5 disseminating findings from the RDR Index to ESG data providers for incorporation into benchmarking reports; and collaborating with investor groups in the drafting of shareholder proposals for company management improvements. In this last case, content moderation and behavioural advertising have become pivotal concerns within the investor ecosystem over the past several years (SASB, 2020, 2023; RDR Staff, 2022; Rydzak, 2023). Based on the 2024 shareholder voting period,6 it is apparent that these issues are not going away (Investor Alliance for Human Rights, 2024a). For example, citing the material risk to investors from targeted advertising, one proposal filed for Meta’s April meeting calls for a shareholder vote to require that the company complete a human rights impact assessment of its business model (Investor Alliance for Human Rights, 2024b).

Expanding the governance agenda for content moderation

The claim that digital platforms are infringing upon human rights and freedoms is at once empirical and normative in nature. It succeeds only to the extent, first, that data exist to confirm the factual details of the statement and, second, that relevant moral arguments are made to affirm the judgment of an unacceptable transgression. The RDR Index concretises the ways in which surveyed organisations’ policies and practices relate to specified rights and freedoms. Further, in characterising this problem as the collision between universal concepts of human dignity and a corrupted “business model,” RDR’s position explains who is to blame (platforms), who has been harmed (vulnerable users), and the ethical principle that has been sacrificed (established moral sentiment and protections under law).

No small group of actors can control the trajectory of a public issue like platform governance. It will take patience, and a constellation of stakeholders, to establish an accountability regime that meets the needs of both platforms and users (Papaevangelou, 2021; Gorwa, 2022). For RDR, political agenda-setting on this matter commenced with the publication of its index, but this was only the jumping-off point for a larger campaign of action centred on “surveillance advertising” and its effects (Maréchal et al., 2020). Through its work within the investor community, RDR has expanded the debate surrounding content moderation, turning to new avenues of politicisation while building a broader coalition of allies for long-term reform. Amid ongoing regulatory developments, this effort, and related shareholder advocacy, represents an alternative means of content governance “beyond states and firms.” These endeavours demand widespread attention, not only from advocates and their partners, but also from researchers who can clarify the strengths and weaknesses of emerging policy alternatives.

Acknowledgements

I would like to thank the editors of this special issue and Frédéric Dubois for their thoughtful feedback on this op-ed. Thanks also to Nathalie Maréchal for helpful comments on an earlier draft of this piece.

References

Access Now. (2020). Raising the alarm: Online tracking harms human rights. Access Now. https://www.accessnow.org/online-tracking-harms-human-rights/

Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., & Rieke, A. (2019). Discrimination through optimization: How Facebook’s ad delivery can lead to biased outcomes. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–30. https://doi.org/10.1145/3359301

Amnesty International. (2019). Surveillance giants: How the business model of Google and Facebook threatens human rights. Amnesty International. https://www.amnesty.org/en/documents/pol30/1404/2019/en/

Crain, M., & Nadler, A. (2019). Political manipulation and internet advertising infrastructure. Journal of Information Policy, 9, 370–410. https://doi.org/10.5325/jinfopoli.9.2019.0370

Cyphers, B., & Schwartz, A. (2022, March 21). Ban online behavioral advertising. Electronic Frontier Foundation. https://www.eff.org/deeplinks/2022/03/ban-online-behavioral-advertising

Edelson, L., Chuang, J., Franklin Fowler, E., Franz, M., & Ridout, T. N. (2021, August 3). Universal digital ad transparency. TPRC49: The 49th Research Conference on Communication, Information and Internet Policy. https://ssrn.com/abstract=3898214

Ghussain, A. A. (2021, October 15). Facebook files: How a ban on surveillance advertising can fix Facebook. Amnesty International. https://www.amnesty.org/en/latest/campaigns/2021/10/facebook-files-how-a-ban-on-surveillance-advertising-can-fix-facebook/

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Gorwa, R. (2022). Who are the stakeholders in platform governance? Yale Journal of Law and Technology, 24, 493–509.

Griffin, R. (2023). From brand safety to suitability: Advertisers in platform governance. Internet Policy Review, 12(3). https://doi.org/10.14763/2023.3.1716

Hawker, K., Carah, N., Angus, D., Brownbill, A., Tan, X., Dobson, A., & Robards, B. (2022, September 2). Advertisements on digital platforms: How transparent and observable are they? FARE. https://fare.org.au/transparency-report/

Hoofnagle, C. J., Soltani, A., Good, N., Wambach, D. J., & Ayenson, M. (2012). Behavioral advertising: The offer you can’t refuse. Harvard Law & Policy Review, 6, 273–296.

Investor Alliance for Human Rights. (2024a, February 13). Investors say tech companies are failing to address systemic human rights risks inherent in business models and exacerbated by AI. Investor Alliance for Human Rights. https://investorsforhumanrights.org/news/investors-say-tech-companies-are-failing-address-systemic-human-rights-risks-inherent-business

Investor Alliance for Human Rights. (2024b). Mercy investments shareholder resolution. Investor Alliance for Human Rights. https://investorsforhumanrights.org/sites/default/files/attachments/2024-02/meta%20-%20Human%20Rights%20Impact%20Assessment.pdf

Kogen, L. (2024). From statistics to stories: Indices and indicators as communication tools for social change. The International Journal of Press/Politics, 29(4), 1090–1108. https://doi.org/10.1177/19401612221094246

Leerssen, P., Ausloos, J., Zarouali, B., Helberger, N., & De Vreese, C. H. (2019). Platform ad archives: Promises and pitfalls. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1421

Lemoine, L., Jakubowska, E., Belu, A., & Naranjo, D. (2021). Targeted online: An industry broken by design and default. EDRi. https://edri.org/wp-content/uploads/2021/03/Targeted-online-An-industry-broken-by-design-and-by-default.pdf

Lokot, T., & Wijermars, M. (2023). The politics of internet freedom rankings. Internet Policy Review, 12(2). https://doi.org/10.14763/2023.2.1710

Maréchal, N., & Biddle, E. (2020). It’s not just the content, it’s the business model: Democracy’s online speech challenge. New America. https://www.newamerica.org/oti/reports/its-not-just-content-its-business-model/

Maréchal, N., MacKinnon, R., & Dheere, J. (2020). Getting to the source of infodemics: It’s the business model. New America. https://www.newamerica.org/oti/reports/getting-to-the-source-of-infodemics-its-the-business-model/

Milano, S., Mittelstadt, B., & Wachter, S. (2021). Targeted ads isolate and divide us even when they’re not political – new research. The Conversation. https://theconversation.com/targeted-ads-isolate-and-divide-us-even-when-theyre-not-political-new-research-163669

Mozilla & CheckFirst. (2024). Full disclosure: Stress testing tech platforms’ ad repositories. https://assets.mofoprod.net/network/documents/Full_Disclosure_Stress_Testing_Tech_Platforms_Ad_Repositories_3FepU2u.pdf

Nabors, A. L. (2023). Is momentum on tech shareholder activism stalling? How to reinvigorate it in 2024. Ranking Digital Rights. https://rankingdigitalrights.org/2023/07/05/is-momentum-on-tech-shareholder-activism-stalling-how-to-reinvigorate-it-in-2024/

Papaevangelou, C. (2021, July 1). The existential stakes of platform governance: A critical literature review. Open Research Europe. https://open-research-europe.ec.europa.eu/articles/1-31/v2

Radsch, C. (2022). Transparency reporting: Good practices and lessons from global assessment frameworks (Global Internet Forum to Counter Terrorism (GIFCT) Working Group Paper Series 2022). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4416400

Ranking Digital Rights. (n.d.-a). Investor guidance. Ranking Digital Rights. https://rankingdigitalrights.org/investor-guidance/

Ranking Digital Rights. (n.d.-b). Methods & standards. Ranking Digital Rights. https://rankingdigitalrights.org/methods-and-standards/

RDR Staff. (2022, January 11). Alphabet shareholders move to tackle privacy-invasive ad-targeting technologies. Ranking Digital Rights. https://rankingdigitalrights.org/2022/01/11/alphabet-shareholders-resolutions-human-rights/

Reddit Inc. (2024). Form S-1 registration statement. https://www.sec.gov/Archives/edgar/data/1713445/000162828024006294/reddits-1q423.htm

Roberts, S. T. (2018). Digital detritus: ‘Error’ and the logic of opacity in social media content moderation. First Monday, 23(3). https://doi.org/10.5210/fm.v23i3.8283

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

Rydzak, J. (2022). Meta shareholders push for better governance of human rights risk ahead of May AGM. Ranking Digital Rights. https://rankingdigitalrights.org/2022/04/11/meta-shareholders-push-for-better-governance-of-human-rights-risks-ahead-of-may-agm/

Rydzak, J. (2023). ESG data needs a human rights upgrade. Ranking Digital Rights. https://rankingdigitalrights.org/mini-report/esg-data-needs-a-human-rights-upgrade/

Stop Hate for Profit. (2020). Stop hate for profit campaign. https://www.stophateforprofit.org

Sustainability Accounting Standards Board. (2020). Content moderation taxonomy: A foundation for standard setting on the issue of content moderation. SASB. https://sasb.org/wp-content/uploads/2020/12/Content-ModerationTaxonomy-v7b.pdf

Sustainability Accounting Standards Board. (2023). Content governance in the internet & media services industry. SASB. https://sasb.ifrs.org/standards/process/projects/content-governance-in-the-internet-media-and-services-industry/

Vaidhyanathan, S. (2018). The three major forms of surveillance on Facebook. Slate. https://slate.com/technology/2018/06/antisocial-media-excerpt-there-are-three-major-forms-of-facebook-surveillance.html

Waters, G. (2020). Building a foundation to explore online content moderation. SASB. https://sasb.ifrs.org/blog/building-a-foundation-to-explore-online-content-moderation/

Footnotes

1. OBA is characterised by the collection of information about users and the delivery of personalised advertisements based on this collected (and inferred) data.

2. Until 2023, the RDR Index was administered as an independent project of the New America Foundation. In 2024, RDR announced its departure from its previous parent organisation and its transition to the Netherlands-based World Benchmarking Alliance. See: https://rankingdigitalrights.org/2024/03/21/joint-statement-on-the-transition/

3. In 2022, the Corporate Accountability Index was released as two reports titled “Big Tech Scorecard” and “Telco Giants Scorecard.” Here I refer to RDR’s major research outputs as “scorecards” or “the index” interchangeably, for the sake of simplicity.

4. For more on the production of the RDR Index see: https://rankingdigitalrights.org/methods-and-standards/.

5. Many of the ESG and sustainability reports published annually by publicly traded companies draw upon frameworks and benchmarks developed by organisations such as the Sustainability Accounting Standards Board (SASB) and the Global Reporting Initiative (GRI). RDR’s involvement in this area has included, for example, providing guidance to SASB’s research on standards for content moderation (see Waters, 2020).

6. Owners of stock in a publicly traded company are referred to as shareholders and have the ability to participate in annual corporate governance meetings. Decisions about the future of the firm are voted upon at these events.