News and Research articles on Content moderation

A platform policy implementation audit of actions against Russia’s state-controlled media

Sofya Glazunova, Queensland University of Technology
Anna Ryzhova, University of Passau
Axel Bruns, Queensland University of Technology
Silvia Ximena Montaña-Niño, Queensland University of Technology
Arista Beseler, University of Passau
Ehsan Dehghan, Queensland University of Technology
PUBLISHED ON: 14 Jun 2023 DOI: 10.14763/2023.2.1711

A platform policy implementation audit of how major digital platforms enforced their content moderation policies towards RT and Sputnik accounts at the beginning of Russia’s full-scale invasion of Ukraine in February 2022. It finds a wide yet inconsistent range of measures taken by tech giants.

Humour as an online safety issue: Exploring solutions to help platforms better address this form of expression

Ariadna Matamoros-Fernández, Queensland University of Technology
Louisa Bartolo, Queensland University of Technology
Luke Troynar, Queensland University of Technology
PUBLISHED ON: 25 Jan 2023 DOI: 10.14763/2023.1.1677

The policies and content moderation practices of social media companies are not well equipped to recognise how and when humour harms. All too often, therefore, platforms take down important, harmless humour while failing to effectively moderate humour that sows division and hate.

Information interventions and social media

Giovanni De Gregorio, University of Oxford
Nicole Stremlau, University of Oxford; University of Johannesburg
PUBLISHED ON: 30 Jun 2021 DOI: 10.14763/2021.2.1567

The spread of hate speech and disinformation on social media has contributed to inflaming conflicts and mass atrocities as seen in Myanmar. Is the doctrine of information intervention a solution to escalations of violence?

Expanding the debate about content moderation: scholarly research agendas for the coming policy debates

Tarleton Gillespie, Microsoft Research
Patricia Aufderheide, American University
Elinor Carmi, University of Liverpool
Ysabel Gerrard, University of Sheffield
Robert Gorwa, University of Oxford
Ariadna Matamoros-Fernández, Queensland University of Technology
Sarah T. Roberts, University of California, Los Angeles
Aram Sinnreich, American University
Sarah Myers West, New York University
PUBLISHED ON: 21 Oct 2020 DOI: 10.14763/2020.4.1512

Content moderation has exploded as a public and a policy concern, but the debate remains too narrow. Nine experts suggest ways to expand it.

Borderline speech: caught in a free speech limbo?

Amélie Heldt, Hans-Bredow-Institut
PUBLISHED ON: 15 Oct 2020

To ban content that might violate their own content policies, social media platforms use the term 'borderline'. This means categorising content as potentially unwanted (e.g. harmful or inappropriate) and sanctioning legitimate expressions of opinion - hence putting lawful speech in a twilight zone.