News and research articles on content moderation

Framing the role of experts in platform governance: Negotiating the code of practice on disinformation as a case study

Kateryna Chystoforova, European University Institute
Urbano Reviglio, European University Institute
PUBLISHED ON: 31 Mar 2025 DOI: 10.14763/2025.1.1823

This study examines experts' role within the EU's Code of Practice on Disinformation, highlighting challenges in co-regulatory processes and platform governance.

Introduction to the special issue on content moderation on digital platforms

Romain Badouard, Paris-Panthéon-Assas University
Anne Bellon, University of Technology of Compiègne
PUBLISHED ON: 31 Mar 2025 DOI: 10.14763/2025.1.2005

Content moderation encompasses a great diversity of actors who develop specific practices. Their precise contribution to the democratisation of content regulation, and to the balance between public and private interests in platform governance, remains little studied. This special issue attempts to remedy that gap.

Labour pains: Content moderation challenges in Mastodon growth

Charlotte Spencer-Smith, University of Klagenfurt
Tales Tomaz, University of Salzburg
PUBLISHED ON: 31 Mar 2025 DOI: 10.14763/2025.1.1831

Mastodon's growth after the mass migration of users from Twitter posed many challenges for content moderation, but responses have varied across Mastodon communities, departing from the top-down approach of corporate social media.

Aspirational platform governance: How creators legitimise content moderation through accusations of bias

Blake Hallinan, Hebrew University of Jerusalem
CJ Reynolds, Hebrew University of Jerusalem
Yehonatan Kuperberg, Hebrew University of Jerusalem
Omer Rothenstein, Hebrew University of Jerusalem
PUBLISHED ON: 31 Mar 2025 DOI: 10.14763/2025.1.1829

Although commercial social media platforms provide few formal channels for participation in platform governance, creators aspire to influence decisions and policies through expressive forms of civic engagement that ultimately legitimise platforms as arbiters of public discourse.

Safer spaces by design? Federated socio-technical architectures in content moderation

Ksenia Ermoshina, National Centre for Scientific Research (CNRS)
Francesca Musiani, National Centre for Scientific Research (CNRS)
PUBLISHED ON: 31 Mar 2025 DOI: 10.14763/2025.1.1827

This article analyses the role that informational architectures and infrastructures in federated social media platforms play in content moderation processes.

This article compares the Stop Hate for Profit campaign and the Global Alliance for Responsible Media to evaluate efforts that leverage advertisers’ financial power to challenge platform content moderation.

Decentralised content moderation

Paul Friedl, Karlsruhe Institute of Technology
Julian Morgan, Humboldt Universität Berlin
PUBLISHED ON: 4 Apr 2024 DOI: 10.14763/2024.2.1754

Decentralised content moderation describes and potentially advocates for moderation infrastructures in which both the authority and the responsibility to moderate are distributed over a plurality of actors or institutions.

A platform policy implementation audit of actions against Russia’s state-controlled media

Sofya Glazunova, Queensland University of Technology
Anna Ryzhova, University of Passau
Axel Bruns, Queensland University of Technology
Silvia Ximena Montaña-Niño, Queensland University of Technology
Arista Beseler, University of Passau
Ehsan Dehghan, Queensland University of Technology
PUBLISHED ON: 14 Jun 2023 DOI: 10.14763/2023.2.1711

This platform policy implementation audit examines how major digital platforms implemented their content moderation policies towards RT and Sputnik accounts at the beginning of Russia’s full-scale invasion of Ukraine in February 2022. It shows a wide yet inconsistent range of measures taken by tech giants.

Humour as an online safety issue: Exploring solutions to help platforms better address this form of expression

Ariadna Matamoros-Fernández, Queensland University of Technology
Louisa Bartolo, Queensland University of Technology
Luke Troynar, Queensland University of Technology
PUBLISHED ON: 25 Jan 2023 DOI: 10.14763/2023.1.1677

The policies and content moderation practices of social media companies are not well equipped to recognise how and when humour harms. All too often, therefore, platforms take down harmless humour while failing to effectively moderate humour that sows division and hate.

Information interventions and social media

Giovanni De Gregorio, University of Oxford
Nicole Stremlau, University of Oxford; University of Johannesburg
PUBLISHED ON: 30 Jun 2021 DOI: 10.14763/2021.2.1567

The spread of hate speech and disinformation on social media has contributed to inflaming conflicts and mass atrocities as seen in Myanmar. Is the doctrine of information intervention a solution to escalations of violence?

Expanding the debate about content moderation: scholarly research agendas for the coming policy debates

Tarleton Gillespie, Microsoft Research
Patricia Aufderheide, American University
Elinor Carmi, University of Liverpool
Ysabel Gerrard, University of Sheffield
Robert Gorwa, University of Oxford
Ariadna Matamoros-Fernández, Queensland University of Technology
Sarah T. Roberts, University of California, Los Angeles
Aram Sinnreich, American University
Sarah Myers West, New York University
PUBLISHED ON: 21 Oct 2020 DOI: 10.14763/2020.4.1512

Content moderation has exploded as a public and a policy concern, but the debate remains too narrow. Nine experts suggest ways to expand it.

To ban content that might possibly violate their own content policies, social media platforms use the term 'borderline'. This means categorising content as potentially unwanted (e.g. harmful or inappropriate) and sanctioning legitimate expressions of opinion, thereby putting lawful speech in a twilight zone.

Algorithmic governance

Christian Katzenbach, Alexander von Humboldt Institute for Internet and Society
Lena Ulbricht, Berlin Social Science Center (WZB)
PUBLISHED ON: 29 Nov 2019 DOI: 10.14763/2019.4.1424

Algorithmic governance, a key concept in controversies around the emerging digital society, takes up the idea that digital technologies produce social ordering in specific ways.

Reading between the lines and the numbers: an analysis of the first NetzDG reports

Amélie Heldt, Hans-Bredow-Institut
PUBLISHED ON: 12 Jun 2019 DOI: 10.14763/2019.2.1398

The German Network Enforcement Act is an attempt to counteract the effects of hate speech on social media platforms. This paper analyses and evaluates the first reports on the handling of complaints about unlawful content published after the act came into force.