Over the past fifty years, surveillance practices once considered untenable due to their incompatibility with democratic rights and values have been rebranded as tolerable, neutral, or even desirable.
We apply machine learning methods to patent data from China to track the pace of development of potentially human rights-sensitive smart city technologies.
Advertisers’ concerns about “brand safety” and “brand suitability” are an underappreciated influence on social media platforms’ content governance, with concerning implications for social equality and the freedom of public debate online.
Digital access surveys do not cover the barriers experienced by limited users, making them ineffective at capturing and responding to these users’ needs.
This international comparative research explores the socio-political impact of voting in online surveys on voters, civil society organisations, government authorities and open government overall in Moldova and Ukraine.
This platform policy implementation audit examines how major digital platforms applied their content moderation policies to RT and Sputnik accounts at the beginning of Russia’s full-scale invasion of Ukraine in February 2022. It reveals a wide yet inconsistent range of measures taken by the tech giants.
Internet freedom rankings are a comparative tool that serves as an evaluative shorthand in decision-making contexts internationally. Understanding their aims and how they define internet freedom, as well as the power relationships within the ranking ecosystem, can reveal a lot about their politics – and their limits.
Introduction

Globally, there are now over 800 AI policy initiatives from the governments of at least 60 countries, most of them introduced after 2016. The United Kingdom (UK) is at the forefront of AI governance efforts, at least quantitatively, being second only to the United States (US) in the number of national-level AI policies.
News media discourses on datafication and automation have become more sensitive to data risks, but the complexity of these discourses makes it difficult to inform lay audiences about root causes and solutions.
This paper critically engages with key responses to algorithmic governance, including access and inclusion, transparency, and refusal, asking how effectively these responses can address the harms it produces.