Future-proofing the city: A human rights-based approach to governing algorithmic, biometric and smart city technologies

Alina Wernick, Legal Tech Lab, University of Helsinki, Helsinki, Finland, alina.wernick@helsinki.fi
Anna Artyushina, City Institute, York University, Toronto, Canada, artanna@yorku.ca


Abstract

While the GDPR and other EU laws seek to mitigate a range of potential harms associated with smart cities, the compliance with and enforceability of these regulations remain an issue. In addition, these proposed regulations do not sufficiently address the collective harms associated with the deployment of biometric technologies and artificial intelligence. Another relevant question is whether the initiatives put forward to secure fundamental human rights in the digital realm account for the issues brought on by the deployment of technologies in city spaces. In this special issue, we employ the smart city notion as a point of connection for interdisciplinary research on the human rights implications of the algorithmic, biometric and smart city technologies and the policy responses to them. The articles included in the special issue analyse the latest European regulations as well as soft law, and the policy frameworks that are currently at work in the regions where the GDPR does not apply.
Citation & publishing information
Published: March 31, 2023
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Smart cities, Governance, Biometric, Algorithmic governance, Human rights
Citation: Wernick, A. & Artyushina, A. (2023). Future-proofing the city: A human rights-based approach to governing algorithmic, biometric and smart city technologies. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1695

Introduction

Since the early 2000s, smart city policies have aimed to make urban spaces safer, more sustainable and more innovative with the help of big data, biometric technologies and, more recently, artificial intelligence (AI). This special issue scrutinises the human rights implications of these initiatives. As guest editors, we necessarily draw on our own experiences. For one of us, the intergenerational memory of the circumstances in which protections of privacy, data, and freedom of speech were matters of life or death served as an impetus to research the long-term human rights risks of smart city technologies. For the other, the smart city notion came to life in the form of a dystopian urban development that Alphabet proposed to build in Toronto, the editor’s hometown. We believe there is a practical necessity and immense theoretical importance in creating an interdisciplinary platform for research on human rights in smart cities. This special issue is a rich collection of studies that analyse, from the perspectives of law, science and technology studies (STS), AI ethics and surveillance studies, the technologies that have been quietly invading our cities. In this editorial, we propose a novel analytical framework for creating and governing smart city technologies that protect fundamental human rights.

The discursive career of the “smart city” notion is no different from that of other overused terms such as “artificial intelligence” or “nanotechnology”. Coined in the early 2000s, the notion of the smart city never became a fully developed academic concept, yet its presence is pervasive across academic research and policy frameworks (Government of India, 2015; European Commission, 2020; Sadowski & Bendor, 2019; Lorinc, 2022; Micheli, 2022; Sengupta & Sengupta, 2022). While some researchers expect that the study of digital platforms will render smart city scholarship obsolete (Wood & Monahan, 2019; Sadowski, 2020; Zwick & Spicer, 2021), others see heuristic value in the smart city concept, with all its versatility and contextual richness (Shelton et al., 2015; Kitchin, 2015, 2022; Voorwinden, 2021; Calzada, 2021; Frischmann et al., 2023). This special issue shows that the smart city notion can be productively employed as a point of connection for interdisciplinary research on the data-driven solutions deployed in cities, the harms brought on by these technologies and the policy responses to them. We subsequently use the term smart city technologies to refer to “computational models of urbanism and data-driven and algorithmically intermediated technologies” (Botero Arcila, 2022), including biometric surveillance technologies adopted in an urban context.

The public sector has been rapidly adopting smart city technologies in areas ranging from law enforcement to transportation to healthcare. Smart city technologies have implications for a wide range of fundamental human rights recognised by international and European sources of human rights protection.1 These systems target individuals and communities with surveillance, nudging and automated decision-making, which may threaten the rights to privacy, self-determination and freedom of expression (Galdon-Clavell, 2013; Jewell, 2018; Joh, 2019; Ranchordas, 2020; Monahan, 2022). Some technological systems branded as smart cities have been found to advance gender discrimination and land dispossession practices (Greenfield, 2013; Datta, 2015, 2020). Employed in the urban realm, these technologies may facilitate surveillance creep (Edwards, 2016; Frischmann & Selinger, 2018; Wood & Steeves, 2021; Botero Arcila, 2022) and risk chilling effects on the freedoms of movement, association and thought (Solove, 2005; Penney, 2021; Ahmad & Dethy, 2019). Their use for policing (Joh, 2019) may also undermine the rights to a fair trial and the presumption of innocence. In addition, it can violate the right to non-discrimination, where policing targets minorities and marginalised groups with racialised surveillance (Jefferson, 2018; Monahan, 2022). Smart city technologies may also restrict citizens’ access to services and space in other ways. For example, when states and municipalities adopt algorithmic systems to distribute social welfare benefits or detect welfare fraud, the rights to privacy, data protection, non-discrimination, social security, health and education may be at stake (Heikkilä, 2022; Rachovitsa & Johann, 2022). The implementation of emotion recognition devices in the streets further undermines our rights to human dignity and autonomy (Valcke et al., 2021). In the worst-case scenario, smart city technologies, such as biometric recognition systems, may enable digital repression (Feldstein, 2021; Williams, 2021; Akbari, 2022).

Currently, the deployment and regulation of smart city technologies are studied across different disciplines and from a variety of perspectives. This epistemic fragmentation may prevent us from gaining a full understanding of these technologies’ impacts on human rights and from carefully evaluating human rights-informed governance approaches to them.

Our goal in this special issue is twofold: to offer a human rights-based framework as an anchor for the study and design of smart cities, and to initiate an interdisciplinary dialogue on human rights between the fields of socio-legal research on technology (Brauneis & Goodman, 2017; Goodman, 2020; Ranchordas, 2020; Ranchordás & Goanta, 2020; Smuha, 2021b; Kempin Reuter, 2021; Botero Arcila, 2022; Lane, 2023b), AI ethics (Floridi, 2020; Sloane et al., 2021), critical data studies (Kitchin & Lauriault, 2014; Taylor, 2017; boyd & Crawford, 2012; Viljoen, 2020; Dencik et al., 2022), science and technology studies (Sadowski & Pasquale, 2015; Sadowski, 2020; Birch et al., 2020; Artyushina, 2023), surveillance studies (Lyon, 2006, 2015; Monahan & Wood, 2018; Lyon & Wood, 2020), smart city scholarship (Hollands, 2008, 2015; Kitchin, 2015, 2021; Kitchin et al., 2019; Cardullo et al., 2021; Wiig, 2015, 2016; Green, 2020) and other adjacent fields.

The necessity and urgency of using human rights as a framework for governing technologies’ risks have been brought up in connection with smart cities (Galdon-Clavell, 2013; Brown, 2019; Kempin Reuter, 2021), data processing (Hildebrandt, 2013; Scassa, 2020) and, more recently, artificial intelligence (Latonero, 2018; Donahoe & Metzger, 2019; Yeung et al., 2020; Smuha, 2021b). The appeal of the human rights-based approach (HRBA) stems from the perception of universal acceptance of human rights as a normative standard to govern technology (Donahoe & Metzger, 2019; Karppinen & Puukko, 2020; Smuha, 2021b; Mantelero, 2022; Prabhakaran et al., 2022). The 1948 Universal Declaration of Human Rights is the most translated document in history (UN, n.d.), whereas the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR) each have over 170 states parties (UN, 2023a, 2023b). Furthermore, unlike AI ethics frameworks, human rights are enforceable and, therefore, more fitting to govern AI throughout its life cycle (Donahoe & Metzger, 2019; McGregor et al., 2019; Yeung et al., 2020; Smuha, 2020; Cobbe et al., 2020). The HRBA is of high relevance in Europe, where citizens are afforded protections both under the European Convention on Human Rights (ECHR; CoE, 1950) and the EU Charter of Fundamental Rights (CFREU). Lately, the EU has also adopted a fundamental rights-based, risk-driven approach to technology regulation (see the GDPR and the AI Act proposal). Finally, the growing relevance of the HRBA in technology governance relates to the gradual expansion of human rights obligations from states to businesses (UN, 2011; OECD, 2018) and other actors, such as cities (Oomen & Baumgärtel, 2018) and operators of digital platforms (Digital Services Act, 2022).

To understand the role of and opportunities for the HRBA in the governance of smart city technologies, it is crucial to recognise that the emerging literature on the HRBA and smart cities falls into two distinct streams, which do not always acknowledge or interact with one another. To bridge this disconnect, we introduce the contributions to this special issue in the context of two distinct human rights-based approaches present in smart cities: HRBA by design and HRBA in cities. The former is informed by the European risk-based, fundamental rights-driven technology regulation pioneered by the GDPR (see Hildebrandt, 2015; Gellert, 2017, 2021) and by STS and human-computer interaction research on value-sensitive design (Winner, 1980; Nissenbaum, 1998; Hildebrandt & Koops, 2010; Hildebrandt, 2011, 2015; Friedman & Nissenbaum, 2021), whereas the city-centred HRBA draws on the discourses on human rights cities (Oomen & van den Berg, 2014; Oomen, Davis & Grigolo, 2016; Oomen & Baumgärtel, 2018), the right to the city (Lefebvre, 2009; Harvey, 2008; Shaw & Graham, 2017; Cardullo et al., 2019) and digital rights (Morozov & Bria, 2018; Cities Coalition, 2018; D'Ignazio & Klein, 2020; Mattern, 2021).

Human rights-based approach by design

In the by-design HRBA, human rights protections are built into the technology design. This approach rests on the assumption that technology can embody values (Winner, 1980; Friedman & Nissenbaum, 1996; Lessig, 1999; Hildebrandt, 2015; Koulu, 2021) and can be deliberately designed to reflect the values of choice (Friedman et al., 2013). The approach was pioneered by the privacy-by-design framework created by the Canadian privacy expert Ann Cavoukian (2009) and by the General Data Protection Regulation (GDPR), which introduced the principle of data protection by design. The central idea behind this approach is that both the organisational and the technical context for processing personal data need to uphold data protection principles and fundamental rights (GDPR, Art. 25). More recently, the by-design HRBA has been used for the governance of dual-use technologies and AI (Penney et al., 2018; Donahoe & Metzger, 2019), representing an approach to regulating algorithms (Ulbricht & Yeung, 2022).

By-design approaches should recognise and seek to impact the “social processes that shape design choices and the social consequences that follow the development and deployment of technological systems” (Koulu, 2021, p. 87). In practical terms, the by-design HRBA can be executed through instruments such as human rights-based designs in AI (Aizenberg & van den Hoven, 2020), impact assessment methods (Edwards, 2016; Mantelero & Esposito, 2021; Selbst, 2021; Castets-Renard, 2021; NIST, 2023), algorithmic audits (CoE, 2017; Digital Services Act, Art. 37), as well as evaluation and procurement measures (Donahoe & Metzger, 2019).

The by-design HRBA, in its risk-based form, is increasingly adopted in European policy-making on AI (AI HLEG, 2019; CoE, 2021), and is reflected in the recent legislative initiatives for the EU AI Act proposal (2021) and the Digital Services Act (2022), as well as in the UK Online Safety Bill (2022). The by-design HRBA is also evident in the Council of Europe’s recent draft Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (CoE, 2023), which aims to introduce human rights safeguards where the AI Act proposal is likely to fall short.

While the by-design HRBA marks an important step towards technologies and policies that prioritise human rights, it is not clear how these emerging, partially overlapping legislative frameworks apply in the context of smart cities. Several articles in this special issue address this problem. Hacker and Neyer (2023) explore the cumulative impact of the GDPR, non-discrimination law and the AI Act proposal through the analytical lens of “substantive smartness”, which emphasises the intertemporal relationship between citizen participation and the protection of fundamental rights in the smart city. The article by Lane (2023a) is the first to investigate the impact of the recent EU proposal for a due diligence directive (2022) on smart city technologies. The directive would introduce binding preventive corporate responsibility obligations on European businesses; Lane (2023a) analyses the relevance of the directive in the context of smart city AI systems and its alignment with the AI Act proposal. Both Hacker and Neyer’s (2023) and Lane’s (2023a) articles focus on the temporal dimension of law and technology, contributing to the emerging research on temporality in AI and smart cities (Teo, 2022; Kitchin, 2023a).

Wernick et al. (2023) conducted a case study of the practices of human rights protection by design and compliance in publicly funded smart city R&D projects in Finland, with a focus on data protection. The study provides compelling evidence of the knowledge, design and localisation costs that drive European companies towards participating in smart city projects in jurisdictions with fewer human rights safeguards. The authors provide policy recommendations for European governments; specifically, they call for the alignment of EU policies on smart cities, fundamental rights-driven technology regulation and technology exports.

The EU HRBA by design, even when mandated by law, is framed, on the one hand, by legal remedies to enforce it and, on the other hand, by constitutional protections for fundamental rights, such as the conditions for limitations of rights (CFREU, Art. 52(1)). The scope of possible interferences with the rights to privacy or data protection on the grounds of national security or law enforcement is determined by the proportionality principle (Brown & Korff, 2009; Dalla Corte, 2022). In his article, Mobilio (2023) evaluates, through the lens of proportionality, the guardrails needed to allow for the use of facial recognition technology by law enforcement authorities. He concludes that data protection law and the AI Act proposal do not offer sufficient safeguards against the fundamental rights risks associated with facial recognition and its disproportionate use in policing. The deployment of facial recognition technologies in policing is further addressed in the article by Ramiro and Cruz (2023), who focus on the role of public-private partnerships in technology governance. In technology regulation, not all jurisdictions follow the European fundamental rights-driven route. Analysing the “grey zones of surveillance” executed by private companies, Ramiro and Cruz (2023) show how consumer protection law, together with methods more closely connected to the HRBA in cities, such as strategic litigation and activism, has helped to limit the adoption of facial recognition technology in Brazil.

Whereas the European regulatory HRBA is geared towards mitigating the risks associated with technologies, an important tool for future-proofing the city is the proactive HRBA-by-design approach (e.g., UN-Habitat, 2022), in which technology is deliberately designed not only to avoid breaches of fundamental and human rights, but also for their positive fulfilment. In this special issue, a study by Tupasela et al. (2023) explores the challenges and means of developing data-driven decision-making systems for urban planning. The study draws on an existing smart city project in Finland that addresses the needs of elderly citizens, to illustrate how the developers of smart city technologies in Europe align the processes of protective HRBA by design (e.g., data protection and privacy compliance) with the goals of proactive HRBA by design. The authors argue that big data may become a useful tool in urban planning when vulnerable groups are included in the design processes and when planners aim to narrow the digital age gap.

Human rights-based approach in cities

The European risk-based HRBA by design is geared towards mitigating technologies’ threats to fundamental rights as they are integrated into the fabric of the city. For example, under the GDPR, it would be difficult to legitimise systematic monitoring of a publicly accessible area on a large scale (Art. 35 GDPR). Normatively speaking, however, this form of HRBA is geared towards limiting technology’s negative impact on human rights. It does not offer guidance with respect to the proactive fulfilment of human and fundamental rights through technology in an urban context. For example, it does not guide the development of technologies that narrow digital divides or facilitate the enjoyment of economic, social and cultural rights, such as the right to social security (Art. 9 ICESCR) or the right to take part in cultural life (Art. 15(1)(a) ICESCR), which require positive action from the state to fulfil.

In contrast, the HRBA in cities approach proposes a more aspirational, human rights-informed city governance that often goes beyond cities’ constitutionally mandated legal obligations to commit to human and fundamental rights. This approach is informed by three different discourses: human rights cities, human rights in smart cities and digital rights in smart cities. First, the human rights cities movement, which emerged in the 1990s, focuses on cities’ role in promoting and upholding human rights as actors distinct from nation states, which, under international law, are traditionally vested with human rights obligations. Oomen and Baumgärtel (2014, p. 1) define a human rights city as “an urban entity or local government that explicitly bases its policies, or some of them, on human rights as laid down in international treaties, and thus distinguishes itself from other local authorities”. Originally, human rights cities did not deal with technology, but with questions such as migrants’ rights (Grigolo, 2010; Oomen, 2019; Baumgärtel & Oomen, 2019). This stream of literature has received little attention in smart city research (cf. Kempin Reuter, 2019), with the exception of Voorwinden and Ranchordas (2022), who have drawn attention to the role of municipalities in shaping the development of smart city technologies by exerting international influence through transnational networks and developing soft-law instruments for technology governance. Human rights cities often have “more social and political than purely legal” motivations to champion human rights (Oomen & Baumgärtel, 2014, p. 2), often relying on soft-law instruments (Oomen & Baumgärtel, 2018). However, cities are emerging as novel and remarkably active actors in international law: Dutch municipalities have adopted policies that were more protective of undocumented migrants than those of the state and have successfully defended them in court (Oomen et al., 2021). This suggests that, in the future, cities could adopt technology policies that are more human rights-friendly and enforceable than those of national governments.

The fulfilment of human rights in smart cities has drawn attention from both academics and international organisations, most importantly the UN. Galdon-Clavell (2013) and Kempin Reuter (2019, 2020) have drawn attention to the human rights implications of smart cities and to cities’ proactive attempts to empower their citizens to participate in the selection and governance of the technologies adopted by the municipality. However, the norms, values and tools proposed by city-centred digital rights initiatives, such as those of UN-Habitat and the Cities Coalition for Digital Rights, appear to be rather eclectic and much more ambitious than human rights frameworks would normally presuppose (see Cities Coalition for Digital Rights & UN-Habitat, n.d.; Calzada, 2021; Calzada et al., 2021). The reason for this eclecticism lies in the roots of the approach: these HRBAs are inspired by the academic, policy and planning discourses on the right to the city (Lefebvre, 1968) and digital rights (Cardullo et al., 2019; Kitchin, 2022, 2023; Calzada, 2018, 2020). For example, the Digital Rights Governance Framework puts forward the notion of “people-centred smart cities” to “ensure that considerations over human rights in the digital space become as evident as human rights in the near future” (Cities Coalition for Digital Rights & UN-Habitat, n.d., p. 4). While such frameworks contain references to rights that are legally protected in some jurisdictions, their aim is to codify the emerging digital human rights connected with smart cities and to offer non-legal governance tools to municipalities and citizens (see Cities Coalition for Digital Rights & UN-Habitat, n.d., pp. 4-5).

As reaffirmed by a number of international frameworks, human rights should receive the same protection online and offline (CoE, 2015; UN Secretary General, 2020; European Declaration on Digital Rights, 2022; UN-Habitat, 2022). The city-centred HRBA intersects with two discourses surrounding digital rights. The first stems from research on internet governance and focuses on the role of technological infrastructures, such as centralised, multinational platforms, in regulating the online environment (Karppinen & Puukko, 2020), and on upholding constitutional values against them to contain their power (Suzor et al., 2018; Suzor, 2020; Celeste, 2019; De Gregorio, 2022; Kettemann, 2022; Celeste et al., 2022). Technology corporations have been active in the smart city market: for example, Alphabet was behind the infamous and failed Sidewalk Toronto/Quayside smart city (Goodman & Powles, 2019; Artyushina, 2020; Carr & Hesse, 2020). However, the governance of digital rights in smart cities must differ from that on media platforms, because many smart city technologies exist at the intersection of the digital and the tangible world and are installed in an ad hoc and decentralised manner. The work of civic organisations, such as the Cities Coalition for Digital Rights (n.d.) and Open North (Pembleton et al., 2022; Qarri & Gill, 2022), demonstrates that the concept of digital rights is fragmentary: different cities and communities have proposed different sets of digital rights. In this special issue, Sanfilippo and Frischmann (2023) draw attention to the problem of polycentricity in the governance of US smart city initiatives. Their case studies of American university cities that implemented smart city solutions provide rich data on value-driven and community-based approaches to the governance of smart city infrastructure. Christofi (2023) highlights the problem of the accumulation of human rights effects from legally compliant smart city projects and offers a useful comparison with environmental law, where cumulative harms are being addressed. On this basis, she proposes an impact assessment model to review projects’ cumulative effects on citizens’ rights. Both Sanfilippo and Frischmann’s (2023) and Christofi’s (2023) studies call for the greater involvement of communities in the design, decision-making and governance of smart cities; Sanfilippo and Frischmann (2023) offer examples of technologies that have been dismantled or redesigned because they did not address the needs of residents.

The second digital rights discourse connected to the HRBA, digital rights in smart cities, is framed by Henri Lefebvre’s (1968) concept of the right to the city. Applied in the context of smart cities, the right to the city and digital rights discourses represent a growing political movement that aims at countering the neoliberal and technocratic dynamics in city governance (Shaw & Graham, 2017; Morozov & Bria, 2018; Kitchin et al., 2019; Galič & Schuilenburg, 2020): “The right to the city is a rallying cry for transformative political mobilization to create such a humanizing urbanism, a more emancipatory and empowering city” (Kitchin et al., 2019, p. 16). Researchers point out that city infrastructure retrofitted with smart city devices has become a new asset class (Morozov & Bria, 2018; Artyushina, 2023), and that reversing the smart city paradigm would require public ownership of digital and physical infrastructure (Green, 2020). Kitchin (2023b, p. 261) further notes:

The transformation in the organisation and ethos of government by neo-liberalism and the use of smart city technologies alters the social contract between the state and citizens. Neo-liberal citizenship moves away from inalienable rights and the common good towards individual autonomy, freedom of choice and personal responsibilities and obligations defined largely by market principles, with checks and balances that seek to limit excessive discrimination and exploitation.

If smart cities represent a form of post-political governance (Vanolo, 2016; Morozov, 2017; Carr & Hesse, 2020), then, digital rights activists argue, smart cities need to be politicised. Considerations of individual freedom and democracy must be brought to the centre to counter the risks posed by “platformisation, domination and privatisation” in smart cities (Goodman, 2020). The data feminism approach also informs the digital rights movement, as it calls for activists to use digital data proactively in establishing and protecting the rights of marginalised groups online and offline (D'Ignazio & Klein, 2020).

It must be acknowledged that the rights-based approach, which the HRBA represents, is not the only means of mitigating the harms associated with smart cities, nor the only foundation for developing imaginaries of urban futures. HRBAs can be criticised, for example, for neglecting collective dimensions and structural inequalities (Karppinen & Puukko, 2020; Yeung, 2019; Smuha, 2021a), the atmospheric impacts of compliant smart city technologies (Galič & Gellert, 2021) and the ethical and societal implications of technology (Mantelero, 2018); for being too western, narrow or abstract (Smuha, 2021b); for the shortcomings of available remedies (Hacker, 2018; Hakkarainen, 2021; Kosta, 2022); and for typologies of harm that are misaligned with the risks posed by AI (Teo, 2022). Smart city technologies can also be governed by relying on alternative normative standards, such as welfare and democracy (Karppinen & Puukko, 2020), justice-based approaches (Karppinen & Puukko, 2020; Taylor, 2017) and consumer privacy governance in US law (Jones, 2017; Guay & Birch, 2022; Solove Khan, 2019), along with the governance of data as a right of speech in the US (Balkin, 2016) and the capabilities-based approach (Sen, 1993; Nussbaum, 1997; Alexander, 2004).

In this special issue, Sanfilippo and Frischmann (2023) propose a “slow governance” framework for smart cities, informed by the governance of knowledge commons framework and the capabilities approach. The slow governance framework aims to counter the techno-solutionism associated with smart cities and to foster human flourishing, a positive value beyond the protections awarded by human rights. While this framework departs from the perceived legalism of the HRBA, it has significant potential in jurisdictions that have not passed laws in support of HRBA by design, or where the implementation of explicit HRBAs would face resistance or even risk.

Policy lessons: Protecting human rights in smart cities

From a legal perspective, the adoption of biometric, algorithmic and smart city technologies is a global phenomenon with implications for a very wide spectrum of fundamental human rights. Often, smart city initiatives are justified by policy goals such as efficiency, sustainability and the green transition (see European Commission, 2020; Moving FIRST Act, 2021). Yet, prioritising these values or placing all human rights under the umbrella of sustainability would be myopic, as the deployment of smart city technologies may have drastic consequences for civil and political liberties. They may deprive entire communities of their rights to realise their social, economic and cultural goals. Furthermore, smart cities are also the terrain for the identification and fulfilment of digital rights.

This special issue underscores the relevance of human rights-based approaches to governing the human rights impacts of smart city technologies. For analytical depth, it is crucial to distinguish HRBA by design, where human rights compliance is to be embedded in the technology design, from HRBA in cities, which focuses on the role of cities as loci for respecting and fulfilling human rights. The articles in this special issue contain several important policy lessons. First, the implementation of HRBA by design in policy frameworks is a working tool that helps protect individual, collective and public interests in our rapidly digitised and automated cities (see Hacker & Neyer, 2023; Lane, 2023a). Second, the HRBA in cities approach may serve as a means to protect fundamental rights in jurisdictions that do not offer HRBA-by-design protections similar to the GDPR (see Sanfilippo & Frischmann, 2023; Ramiro & Cruz, 2023). Third, implementing the HRBA at the design stage can help develop technological tools that actually serve the needs of citizens (see Wernick et al., 2023; Tupasela et al., 2023). Fourth, it is immensely important to address the long-term, cumulative effects of smart city technologies, and there may be a practical necessity in adopting regulatory approaches that have been successfully implemented in environmental law (see Christofi, 2023).

The transition of technologies from cyberspace into city spaces has been far from seamless, and this is reflected in the still-developing HRBA-informed regulatory approaches. As we seek to future-proof our cities, we may accept as a fact that there will be no one-size-fits-all approach to the governance of smart city technologies. It is, however, important to keep implementing national and supranational regulations of smart city technologies (e.g., the EU AI Act and the Corporate Sustainability Due Diligence Directive), as well as international guidelines (e.g., UN-Habitat) that help promote awareness of the HRBA. On the municipal level, communities should take a proactive approach to smart city governance and help formulate and realise their collective rights.

Future-proofing the city in alignment with HRBAs requires law, governance and political action; the tools available depend on the context and jurisdiction. By-design HRBA is dominant in the EU and, albeit imperfectly, enforceable there. The availability of legal remedies is critical for providing incentives to execute HRBAs and for ensuring a sufficient level of protection (see Hildebrandt, 2015). Yet, even without legal remedies to uphold HRBAs or their alternatives in a particular jurisdiction, we expect these approaches to allow for a more ethical and future-proof adoption of smart city technologies than implementations drawing on neoliberal and techno-solutionist approaches (Morozov, 2013; Carr & Hesse, 2020).

As showcased by the articles by Tupasela et al. (2023), Ramiro and Cruz (2023) and Christofi (2023), HRBA by design and HRBA in cities are not mutually exclusive, but complementary. The HRBA-by-design approach, in both its risk-based and proactive forms, belongs to the toolbox of HRBA in cities (UN-Habitat, 2022). Additionally, both approaches would benefit from further research on value-sensitive design concerning their execution in practice (Scott, 2008; Safransky, 2020; Koulu, 2021). As demonstrated by the articles by Hacker and Neyer (2023), Lane (2023a), Wernick et al. (2023) and Mobilio (2023), there is a need for further legal and socio-legal research on the systematisation and coherence of European law- and policy-making concerning smart cities. It is also of value to follow developments in HRBA in cities relating to cities’ sovereignty to govern technology and to emerging digital rights. Although the impact of HRBA in cities often rests on policy-making rather than on enforceable rights, it also represents the mobilisation of human rights in the local context (Nijman et al., 2022). Finally, it is valuable for the research community studying HRBAs and smart cities to acknowledge and explore the limitations of human rights as a means of governing technology. When controlling the risks a technology poses to our society is impossible, whether due to the imperfections of the human rights framework, the technology’s impact on core societal values such as democracy (Smuha, 2021b), or the shortcomings of known regulatory and governance tools, we maintain the option of exercising the precautionary principle (Clarke, 2005) and keeping that technology away from our streets.

Acknowledgements

The production of this special issue was made possible by the generous support of the Kone Foundation, Finland, in connection with the project “Long-term risks of the smart city technologies”, and by the Catalyst Grant from the Helsinki Institute for Social Sciences and Humanities. First, we would like to thank the former managing editor Thomas Christian Bächle and the deputy editor Lena Marie Henkes for their consistent support in the editorial process of this special issue. We are also very grateful to the peer reviewers who dedicated their time and expertise to the publication. This special issue would not have materialised without Frédéric Dubois’ and the editorial and managing board’s support for our call for papers. We also thank Marcel Wrzesinski, Derek JG Williams and Alexander Mörelius Wulff for their help during the editorial process. We would also like to extend our thanks to our colleagues at the Legal Tech Lab of the University of Helsinki, as well as Bruna De Castro e Silva, for their valuable feedback on the draft call for papers, and to the team supporting the organisation of the preparatory workshop on the special issue in August 2022 at the University of Helsinki.

References

Ahmad, W., & Dethy, E. (2019). Preventing surveillance cities: Developing a set of fundamental privacy provisions. Journal of Science Policy & Governance, 15(1), 1–11.

Aizenberg, E., & van den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720949566

Akbari, A. (2022). Authoritarian smart city: A research agenda. Surveillance & Society, 20(4), 441–449. https://doi.org/10.24908/ss.v20i4.15964

Alexander, J. M. (2004). Capabilities, human rights and moral pluralism. The International Journal of Human Rights, 8(4), 451–469. https://doi.org/10.1080/1364298042000283585

Artyushina, A. (2020). Is civic data governance the key to democratic smart cities? The role of the urban data trust in Sidewalk Toronto. Telematics and Informatics, 55, 101456. https://doi.org/10.1016/j.tele.2020.101456

Artyushina, A. (2023). Can a smart city exist as commons? In B. Frischmann, M. Madison, & M. Sanfilippo (Eds.), Governing smart cities as knowledge commons (pp. 248–266). Cambridge University Press. https://www.cambridge.org/core/books/governing-smart-cities-as-knowledge-commons/can-a-smart-city-exist-as-commons/D12C9A4D0C152EE0DF8CCFAF7CAD5956

Balkin, J. M. (2016). Information fiduciaries and the first amendment. UC Davis Law Review, 49(4), 1183–1234.

Baumgärtel, M., & Oomen, B. (2019). Pulling human rights back in? Local authorities, international law and the reception of undocumented migrants. The Journal of Legal Pluralism and Unofficial Law, 51(2), 172–191. https://doi.org/10.1080/07329113.2019.1624942

Birch, K., Chiappetta, M., & Artyushina, A. (2020). The problem of innovation in technoscientific capitalism: Data rentiership and the policy implications of turning personal digital data into a private asset. Policy Studies, 41(5), 468–487. https://doi.org/10.1080/01442872.2020.1748264

Botero Arcila, B. (2022). Smart city technologies: A political economy introduction to their governance challenges. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford handbook of AI governance (1st ed., p. C48.S1-C48.S14). Oxford University Press. https://academic.oup.com/edited-volume/41989/chapter/386779620

boyd, danah, & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Brauneis, R., & Goodman, E. (2017). Algorithmic transparency for the smart city (pp. 1–58) [Preprint]. LawArXiv. https://osf.io/fjhw8

Brown, I., & Korff, D. (2009). Terrorism and the proportionality of internet surveillance. European Journal of Criminology, 6(2), 119–134. https://doi.org/10.1177/1477370808100541

Brown, T. E. (2019). Human rights in the smart city: Regulating emerging technologies in city places. In L. Reins (Ed.), Regulating new technologies in uncertain times (Vol. 32, pp. 47–65). T.M.C. Asser Press. http://link.springer.com/10.1007/978-94-6265-279-8_4

Calzada, I. (2018). (Smart) citizens from data providers to decision-makers? The case study of Barcelona. Sustainability, 10(9), 1–25. https://doi.org/10.3390/su10093252

Calzada, I. (2020). Smart city citizenship. Elsevier.

Calzada, I. (2021). The right to have digital rights in smart cities. Sustainability, 13(20), 1–28. https://doi.org/10.3390/su132011438

Calzada, I., Pérez-Batlle, M., & Batlle-Montserrat, J. (2021). People-centered smart cities: An exploratory action research on the cities’ coalition for digital rights. Journal of Urban Affairs, 1–26. https://doi.org/10.1080/07352166.2021.1994861

Cardullo, P., Di Feliciantonio, C., & Kitchin, R. (Eds.). (2019). The right to the smart city. Emerald Publishing Limited. https://www.emerald.com/insight/publication/doi/10.1108/9781787691391

Carr, C., & Hesse, M. (2020). When Alphabet Inc. plans Toronto’s waterfront: New post-political modes of urban governance. Urban Planning, 5(1), 69–83. https://doi.org/10.17645/up.v5i1.2519

Castets-Renard, C. (2021). Human rights and algorithmic impact assessment for predictive policing. In H.-W. Micklitz, O. Pollicino, A. Reichman, A. Simoncini, G. Sartor, & G. De Gregorio (Eds.), Constitutional challenges in the algorithmic society (1st ed., pp. 93–110). Cambridge University Press. https://doi.org/10.1017/9781108914857

Cavoukian, A. (2009). Privacy by design: The 7 foundational principles. https://www.ipc.on.ca/wp-content/uploads/resources/7foundationalprinciples.pdf

Celeste, E. (2019). Digital constitutionalism: A new systematic theorisation. International Review of Law, Computers & Technology, 33(1), 76–99. https://doi.org/10.1080/13600869.2019.1562604

Celeste, E., Heldt, A., & Inglesias Keller, C. (Eds.). (2022). Constitutionalising social media. Hart Publishing, an imprint of Bloomsbury Publishing.

Christofi, A. (2023). Smart cities and cumulative effects on fundamental rights. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1701

Cities Coalition for Digital Rights. (n.d.). Declaration of Cities Coalition for Digital Rights [Declaration]. https://citiesfordigitalrights.org/assets/Declaration_Cities_for_Digital_Rights.pdf

Cities Coalition for Digital Rights & United Nations Human Settlements Programme (UN-Habitat). (n.d.). Digital Rights Governance Framework. Concept draft open for feedback (pp. 1–26) [Concept]. https://citiesfordigitalrights.org/sites/default/files/DIGITAL%20RIGHTS%20FRAMEWORK_CONCEPT%20FOR%20FEEDBACK.pdf

Clarke, S. (2005). Future technologies, dystopic futures and the precautionary principle. Ethics and Information Technology, 7(3), 121–126. https://doi.org/10.1007/s10676-006-0007-1

Cobbe, J., Lee, M. S. A., Janssen, H., & Singh, J. (2019). Centering the rule of law in the digital state. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3673843

Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR), (1950). https://www.echr.coe.int/documents/convention_eng.pdf

Council of Europe. (2015). Your digital rights in brief [Guide to human rights for internet users]. https://edoc.coe.int/en/internet/6545-your-digital-rights-in-brief.html

Council of Europe. (2017). Algorithms and human rights: Study on the human rights dimensions of automated data processing techniques and possible regulatory implications (Study DGI(2017)12; pp. 1–54). https://rm.coe.int/algorithms-and-human-rights-en-rev/16807956b5

Council of Europe. (2021). Ad hoc committee on artificial intelligence: Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law (CAHAI(2021)09rev; pp. 1–13). https://rm.coe.int/cahai-2021-09rev-elements/1680a6d90d

Council of Europe. (2023). Committee on artificial intelligence: Revised zero draft [framework] convention on artificial intelligence, human rights, democracy and the rule of law (CAI(2023)01; pp. 1–13). https://rm.coe.int/cai-2023-01-revised-zero-draft-framework-convention-public/1680aa193f

Dalla Corte, L. (2022). On proportionality in the data protection jurisprudence of the CJEU. International Data Privacy Law, 12(4), 259–275. https://doi.org/10.1093/idpl/ipac014

Datta, A. (2015). New urban utopias of postcolonial India: ‘Entrepreneurial urbanization’ in Dholera smart city, Gujarat. Dialogues in Human Geography, 5(1), 3–22. https://doi.org/10.1177/2043820614565748

Datta, A. (2020). The “smart safe city”: Gendered time, speed, and violence in the margins of India’s urban age. Annals of the American Association of Geographers, 110(5), 1318–1334. https://doi.org/10.1080/24694452.2019.1687279

De Gregorio, G. (2022). Digital constitutionalism in Europe: Reframing rights and powers in the algorithmic society (1st ed.). Cambridge University Press. https://doi.org/10.1017/9781009071215

Dencik, L., Hintz, A., Redden, J., & Treré, E. (2022). Data justice. SAGE Publications Ltd. https://doi.org/10.4135/9781529770131

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. The MIT Press.

Donahoe, E., & Metzger, M. M. (2019). Artificial intelligence and human rights. Journal of Democracy, 30(2), 115–126. https://doi.org/10.1353/jod.2019.0029

Edwards, L. (2016). Privacy, security and data protection in smart cities: A critical EU law perspective. European Data Protection Law Review (Lexxion). https://doi.org/10.2139/ssrn.2711290

European Commission. (2020). European missions. 100 climate-neutral and smart cities by 2030. Implementation plan (pp. 1–62) [Working document]. https://research-and-innovation.ec.europa.eu/system/files/2021-09/cities_mission_implementation_plan.pdf

European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council Laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative Acts (COM/2021/206 final). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

European Commission. (2022). Proposal for a Directive of the European Parliament and of the Council on Corporate Sustainability Due Diligence and amending Directive (EU) 2019/1937 (COM/2022/71 final). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0071

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016/679 (2016). https://eur-lex.europa.eu/eli/reg/2016/679/oj

Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act), 2022/2065 (2022). https://eur-lex.europa.eu/eli/reg/2022/2065/oj

Charter of Fundamental Rights of the European Union (CFREU), Pub. L. No. C326/391 (2009). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:C2012/326/02

European Declaration on Digital Rights and Principles for the Digital Decade, (2022). https://ec.europa.eu/newsroom/dae/redirection/document/92399

Feldstein, S. (2021). The rise of digital repression: How technology is reshaping power, politics, and resistance (1st ed.). Oxford University Press. https://doi.org/10.1093/oso/9780190057497.001.0001

Floridi, L. (2020). AI and its new winter: From myths to realities. Philosophy & Technology, 33(1), 1–3. https://doi.org/10.1007/s13347-020-00396-6

Friedman, B., Kahn, P. H., Borning, A., & Huldtgren, A. (2013). Value sensitive design and information systems. In N. Doorn, D. Schuurbiers, I. van de Poel, & M. E. Gorman (Eds.), Early engagement and new technologies: Opening up the laboratory (Vol. 16, pp. 55–95). Springer Netherlands. http://link.springer.com/10.1007/978-94-007-7844-3_4

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347. https://doi.org/10.1145/230538.230561

Frischmann, B. M., Madison, M. J., & Sanfilippo, M. R. (Eds.). (2023). Governing smart cities as knowledge commons (1st ed.). Cambridge University Press. https://doi.org/10.1017/9781108938532

Frischmann, B. M., & Selinger, E. (2018). Re-engineering humanity. Cambridge University Press.

Galdon-Clavell, G. (2013). (Not so) smart cities?: The drivers, impact and risks of surveillance-enabled smart environments. Science and Public Policy, 40(6), 717–723. https://doi.org/10.1093/scipol/sct070

Galič, M., & Gellert, R. (2021). Data protection law beyond identifiability? Atmospheric profiles, nudging and the Stratumseind Living Lab. Computer Law & Security Review, 40, 105486. https://doi.org/10.1016/j.clsr.2020.105486

Galič, M., & Schuilenburg, M. (2020). Reclaiming the smart city: Toward a new right to the city. In J. C. Augusto (Ed.), Handbook of smart cities (pp. 1–18). Springer International Publishing. https://doi.org/10.1007/978-3-030-15145-4_59-1

Gellert, R. (2021). The role of the risk-based approach in the General Data Protection Regulation and in the European Commission’s proposed Artificial Intelligence Act: Business as usual? Journal of Ethics and Legal Technologies, 3(11/2021), 15–33. https://doi.org/10.14658/pupj-JELT-2021-2-2

Gellert, R. M. (2017). Why the GDPR risk-based approach is about compliance risk, and why it’s not a bad thing. Trends and Communities of Legal Informatics: IRIS 2017 - Proceedings of the 20th International Legal Informatics Symposium. Schweighofer, E., Kummer, F. & Sorge, C. (Eds.), 527–532.

Goodman, E. P. (2020). Smart city ethics: How “smart” challenges democratic governance. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI (pp. 822–839). Oxford University Press. https://academic.oup.com/edited-volume/34287/chapter/290681705

Goodman, E. P., & Powles, J. (2019). Urbanism under Google: Lessons from Sidewalk Toronto. Fordham Law Review, 88(2), 457–498.

Green, B. (2020). The smart enough city: Putting technology in its place to reclaim our urban future (First MIT Press paperback edition). The MIT Press.

Greenfield, A. (2013). Against the smart city (Kindle edition). Do projects.

Grigolo, M. (2010). Human rights and cities: The Barcelona Office for Non-Discrimination and its work for migrants. The International Journal of Human Rights, 14(6), 896–914. https://doi.org/10.1080/13642987.2010.512134

Guay, R., & Birch, K. (2022). A comparative analysis of data governance: Socio-technical imaginaries of digital personal data in the USA and EU (2008–2016). Big Data & Society, 9(2). https://doi.org/10.1177/20539517221112925

Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(4), 1143–1185. https://doi.org/10.54648/COLA2018095

Hacker, P., & Neyer, J. (2023). Substantively smart cities – Participation, fundamental rights and temporality. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1696

Hakkarainen, J. (2021). Naming something collective does not make it so: Algorithmic discrimination and access to justice. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1600

Harvey, D. (2008). The right to the city. The City Reader, 6(1), 23–40.

Heikkilä, M. (2022, March 30). AI: Decoded: A Dutch algorithm scandal serves a warning to Europe—The AI Act won’t save us. Politico. https://www.politico.eu/newsletter/ai-decoded/a-dutch-algorithm-scandal-serves-a-warning-to-europe-the-ai-act-wont-save-us-2/

High-Level Expert Group on Artificial Intelligence (AI HLEG). (2019). Ethics guidelines for trustworthy AI (pp. 1–39) [Guidelines]. European Commission. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419

Hildebrandt, M. (2011). Legal protection by design: Objections and refutations. Legisprudence, 5(2), 223–248. https://doi.org/10.5235/175214611797885693

Hildebrandt, M. (2013). Balance or trade-off? Online security technologies and fundamental rights. Philosophy & Technology, 26(4), 357–379. https://doi.org/10.1007/s13347-013-0104-0

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Hildebrandt, M., & Koops, B.-J. (2010). The challenges of ambient law and legal protection in the profiling era: The challenges of ambient law and legal protection in the profiling era. Modern Law Review, 73(3), 428–460. https://doi.org/10.1111/j.1468-2230.2010.00806.x

Hollands, R. G. (2008). Will the real smart city please stand up?: Intelligent, progressive or entrepreneurial? City, 12(3), 303–320. https://doi.org/10.1080/13604810802479126

Hollands, R. G. (2015). Critical interventions into the corporate smart city. Cambridge Journal of Regions, Economy and Society, 8(1), 61–77. https://doi.org/10.1093/cjres/rsu011

Jefferson, B. J. (2018). Predictable policing: Predictive crime mapping and geographies of policing and race. Annals of the American Association of Geographers, 108(1), 1–16. https://doi.org/10.1080/24694452.2017.1293500

Jewell, M. (2018). Contesting the decision: Living in (and living with) the smart city. International Review of Law, Computers & Technology, 32(2–3), 210–229. https://doi.org/10.1080/13600869.2018.1457000

Joh, E. E. (2019). Policing the smart city. International Journal of Law in Context, 15(2), 177–182. https://doi.org/10.1017/S1744552319000107

Jones, M. L. (2017). The right to a human in the loop: Political constructions of computer automation and personhood. Social Studies of Science, 47(2), 216–239. https://doi.org/10.1177/0306312717699716

Kälin, W., & Künzli, J. (2019). The law of international human rights protection (Second edition). Oxford University Press.

Karppinen, K., & Puukko, O. (2020). Four discourses of digital rights: Promises and problems of rights-based politics. Journal of Information Policy, 10, 304–328. https://doi.org/10.5325/jinfopoli.10.2020.0304

Kempin Reuter, T. (2019). Human rights and the city: Including marginalized communities in urban development and smart cities. Journal of Human Rights, 18(4), 382–402. https://doi.org/10.1080/14754835.2019.1629887

Kempin Reuter, T. (2020). Smart city visions and human rights: Do they go together? Understanding the impact of technology on urban life (pp. 1–20) [Carr Center Discussion Paper Series]. Harvard Kennedy School. https://carrcenter.hks.harvard.edu/publications/smart-city-visions-and-human-rights-do-they-go-together

Kettemann, M. C. (2022). How platforms respond to human rights conflicts online: Best practices in weighing rights and obligations in hybrid online orders. Verlag Hans-Bredow-Institut. https://doi.org/10.21241/ssoar.81873

Khan, L. (2019). The separation of platforms and commerce. Columbia Law Review, 119(4), 973–1098.

Kitchin, R. (2015). Making sense of smart cities: Addressing present shortcomings. Cambridge Journal of Regions, Economy and Society, 8(1), 131–136. https://doi.org/10.1093/cjres/rsu027

Kitchin, R. (2022). Conceptualising smart cities. Urban Research & Practice, 15(1), 155–159. https://doi.org/10.1080/17535069.2022.2031143

Kitchin, R. (2023a). Afterword: Decentering the smart city. In S. Flynn (Ed.), Equality in the city. Imaginaries of the smart future (pp. 260–267). Intellect Books.

Kitchin, R. (2023b). Digital timescapes: Technology, temporality and society. John Wiley & Sons.

Kitchin, R., Cardullo, P., & Di Feliciantonio, C. (2019). Citizenship, Justice, and the Right to the Smart City. In P. Cardullo, C. Di Feliciantonio, & R. Kitchin (Eds.), The Right to the Smart City (pp. 1–24). Emerald Publishing Limited. https://doi.org/10.1108/978-1-78769-139-120191001

Kitchin, R., & Lauriault, T. P. (2014). Towards critical data studies: Charting and unpacking data assemblages and their work (Working Paper No. 2; The Programmable City). Maynooth University.

Kosta, E. (2022). Algorithmic state surveillance: Challenging the notion of agency in human rights. Regulation & Governance, 16(1), 212–224. https://doi.org/10.1111/rego.12331

Koulu, R. (2021). Crafting digital transparency: Implementing legal values into algorithmic design. Critical Analysis of Law, 8(1), 81–100.

Lane, L. (2023a). Preventing long-term risks to human rights in smart cities: A critical review of responsibilities for private AI developers. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1697

Lane, L. (2023b). Artificial intelligence and human rights: Corporate responsibility in AI governance initiatives. Nordic Journal of Human Rights, 1–22. https://doi.org/10.1080/18918131.2022.2137288

Latonero, M. (2018). Governing artificial intelligence: Upholding human rights & dignity (pp. 1–37) [Report]. Data & Society. https://apo.org.au/sites/default/files/resource-files/2018-10/apo-nid196716.pdf

Lefebvre, H. (2009). Le droit à la ville (originally published in 1968). Economica.

Lessig, L. (1999). Code and other laws of cyberspace. Basic Books.

Lorinc, J. (2022). Dream states: Smart cities, technology, and the pursuit of urban utopias. Coach House Books.

Lyon, D. (Ed.). (2006). Theorizing surveillance: The panopticon and beyond. Willan. https://doi.org/10.4324/9781843926818

Lyon, D. (2015). Surveillance after Snowden. John Wiley & Sons.

Lyon, D., & Wood, D. M. (2020). Big data surveillance and security intelligence: The Canadian case. University of British Columbia Press.

Mantelero, A. (2018). AI and big data: A blueprint for a human rights, social and ethical impact assessment. Computer Law & Security Review, 34(4), 754–772. https://doi.org/10.1016/j.clsr.2018.05.017

Mantelero, A. (2022). Beyond data: Human rights, ethical and social impact assessment in AI. Asser Press. https://link.springer.com/book/10.1007/978-94-6265-531-7

Mantelero, A., & Esposito, M. S. (2021). An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems. Computer Law & Security Review, 41, 105561. https://doi.org/10.1016/j.clsr.2021.105561

Mattern, S. (2021). A city is not a computer: Other urban intelligences. Princeton University Press.

McGregor, L., Murray, D., & Ng, V. (2019). International human rights law as a framework for algorithmic accountability. International and Comparative Law Quarterly, 68(2), 309–343. https://doi.org/10.1017/S0020589319000046

Micheli, M. (2022). Public bodies’ access to private sector data: The perspectives of twelve European local administrations. First Monday. https://doi.org/10.5210/fm.v27i2.11720

Mihr, A., & Gibney, M. (2014). The SAGE handbook of human rights. SAGE Publications Ltd. https://doi.org/10.4135/9781473909335

Mobilio, G. (2023). Your face is not new to me – Regulating the surveillance power of facial recognition technologies. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1699

Monahan, T. (2022). Crisis vision: Race and the cultural production of surveillance. Duke University Press.

Monahan, T., & Wood, D. M. (2018). Surveillance studies: A reader. Oxford University Press.

Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.

Morozov, E. (2017, October 22). Google’s plan to revolutionise cities is a takeover in all but name. The Guardian. https://www.theguardian.com/technology/2017/oct/21/google-urban-cities-planning-data

Morozov, E., & Bria, F. (2018). Rethinking the smart city: Democratizing urban technology (Report No. 5; City Series, pp. 1–54). Rosa Luxemburg Stiftung, New York Office. https://www.rosalux.de/fileadmin/rls_uploads/pdfs/sonst_publikationen/rethinking_the_smart_city.pdf

Moving and Fostering Innovation to Revolutionize Smarter Transportation Act (Moving FIRST Act), 117th Congress (2021). https://www.congress.gov/bill/117th-congress/senate-bill/652/text

Nijman, J., Oomen, B., Durmuş, E., Miellet, S., & Roodenburg, L. (2022). Urban politics of human rights (1st ed.). Routledge. https://doi.org/10.4324/9781003315544

Nussbaum, M. C. (1997). Capabilities and human rights. Fordham Law Review, 66(2), 273–300.

OECD. (2018). Due diligence guidance on responsible business conduct (pp. 1–96). https://www.oecd.org/investment/due-diligence-guidance-for-responsible-business-conduct.htm

Oomen, B., & Baumgärtel, M. (2014). Human rights cities. In A. Mihr & M. Gibney (Eds.), The SAGE handbook of human rights.

Oomen, B., Baumgärtel, M., & Durmuş, E. (2021). Accelerating cities, constitutional brakes? Local authorities between global challenges and domestic law. In E. Hirsch Ballin, G. van der Schyff, M. Stremler, & M. De Visser (Eds.), European yearbook of constitutional law 2020 (Vol. 2, pp. 249–272). T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-431-0_12

Oomen, B. M. (2019). Cities of refuge: Rights, culture and the creation of cosmopolitan cityzenship. In R. Buikema, A. Buyse, & A. C. G. M. Robben (Eds.), Cultures, citizenship and human rights (pp. 121–136). Routledge.

Oomen, B. M., & Baumgärtel, M. (2018). Frontier cities: The rise of local authorities as an opportunity for international human rights law. European Journal of International Law, 29(2), 607–630. https://doi.org/10.1093/ejil/chy021

Oomen, B. M., Davis, M. F., & Grigolo, M. (Eds.). (2016). Global urban justice: The rise of human rights cities (1st ed.). Cambridge University Press. https://doi.org/10.1017/CBO9781316544792

Oomen, B. M., & van den Berg, E. (2014). Human rights cities: Urban actors as pragmatic idealistic human rights users. Human Rights & International Legal Discourse, 8(2), 160–185.

Pembleton, C., Ahmed, N., Lauriault, T., Landry, J.-N., & Planchenault, M. (2022). State of open smart communities in Canada (pp. 1–49) [Report]. Open North. https://opennorth.ca/wp-content/uploads/2022/06/open-smart-cities-v6-digital.pdf

Penney, J., McKune, S., Gill, L., & Deibert, R. J. (2018). Advancing human-rights-by-design in the dual-use technology industry. Journal of International Affairs, 71(2), 103–110.

Penney, J. W. (2022). Understanding chilling effects. Minnesota Law Review, 106, 1451–1530.

Prabhakaran, V., Mitchell, M., Gebru, T., & Gabriel, I. (2022). A human rights-based approach to responsible AI (arXiv:2210.02667). arXiv. http://arxiv.org/abs/2210.02667

Qarri, A., & Gill, L. (2022). Smart cities and human rights (Community Solutions Network) [Research Brief]. Future Cities Canada. https://opennorth.ca/wp-content/uploads/legacy/RB_-_Human_Rights.pdf

Rachovitsa, A., & Johann, N. (2022). The human rights implications of the use of AI in the digital welfare state: Lessons learned from the Dutch SyRI case. Human Rights Law Review, 22(2). https://doi.org/10.1093/hrlr/ngac010

Ramiro, A., & Cruz, L. (2023). The grey-zones of public-private surveillance: Policy tendencies of facial recognition for public security in Brazilian cities. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1705

Ranchordás, S. (2020). Nudging citizens through technology in smart cities. International Review of Law, Computers & Technology, 34(3), 254–276. https://doi.org/10.1080/13600869.2019.1590928

Ranchordás, S., & Goanta, C. (2020). The new city regulators: Platform and public values in smart and sharing cities. Computer Law & Security Review, 36, 105375. https://doi.org/10.1016/j.clsr.2019.105375

Sadowski, J. (2020). Too smart: How digital capitalism is extracting data, controlling our lives, and taking over the world. The MIT Press.

Sadowski, J., & Bendor, R. (2019). Selling smartness: Corporate narratives and the smart city as a sociotechnical imaginary. Science, Technology, & Human Values, 44(3), 540–563. https://doi.org/10.1177/0162243918806061

Safransky, S. (2020). Geographies of algorithmic violence: Redlining the smart city. International Journal of Urban and Regional Research, 44(2), 200–218. https://doi.org/10.1111/1468-2427.12833

Sanfilippo, M. R., & Frischmann, B. (2023). Slow-governance in smart cities: An empirical study of smart intersection implementation in four US college towns. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1703

Scassa, T. (2020). A human rights-based approach to data protection in Canada. In E. Dubois & F. Martin-Bariteau (Eds.), Citizenship in a connected Canada: A research and policy agenda (pp. 173–188). University of Ottawa Press.

Scott, J. C. (2008). Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press.

Selbst, A. D. (2021). An institutional view of algorithmic impact assessments. Harvard Journal of Law & Technology, 35(1), 117–191.

Sen, A. (1993). Capability and well‐being. In M. Nussbaum & A. Sen (Eds.), The quality of life (pp. 30–53). Oxford University Press. https://doi.org/10.1093/0198287976.003.0003

Sengupta, U., & Sengupta, U. (2022). Why government supported smart city initiatives fail: Examining community risk and benefit agreements as a missing link to accountability for equity-seeking groups. Frontiers in Sustainable Cities, 4, 960400. https://doi.org/10.3389/frsc.2022.960400

Shaw, J., & Graham, M. (2017). An informational right to the city? Code, content, control, and the urbanization of information: An informational right to the city? Antipode, 49(4), 907–927. https://doi.org/10.1111/anti.12312

Shelton, T., Zook, M., & Wiig, A. (2015). The ‘actually existing smart city’. Cambridge Journal of Regions, Economy and Society, 8(1), 13–25. https://doi.org/10.1093/cjres/rsu026

Sloane, M., Chowdhury, R., Havens, J. C., Lazovich, T., & Rincon Alba, L. (2021). AI and procurement: A primer (pp. 1–51). New York University. https://archive.nyu.edu/handle/2451/62255

Smuha, N. A. (2021a). Beyond the individual: Governing AI’s societal harm. Internet Policy Review, 10(3). https://doi.org/10.14763/2021.3.1574

Smuha, N. A. (2021b). Beyond a human rights-based approach to AI governance: Promise, pitfalls, plea. Philosophy & Technology, 34(S1), 91–104. https://doi.org/10.1007/s13347-020-00403-w

Solove, D. (2006). A taxonomy of privacy. University of Pennsylvania Law Review, 154(3), 477–564. https://doi.org/10.2307/40041279

Suzor, N. (2020). A constitutional moment: How we might reimagine platform governance. Computer Law & Security Review, 36, 105381. https://doi.org/10.1016/j.clsr.2019.105381

Suzor, N., Van Geelen, T., & Myers West, S. (2018). Evaluating the legitimacy of platform governance: A review of research and a shared research agenda. International Communication Gazette, 80(4), 385–400. https://doi.org/10.1177/1748048518757142

Tabassi, E. (2023). AI risk management framework: AI RMF (1.0) (NIST AI 100-1; pp. 1–42). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.AI.100-1

Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717736335

Teo, S. A. (2022). How artificial intelligence systems challenge the conceptual foundations of the human rights legal framework. Nordic Journal of Human Rights, 40(1), 216–234. https://doi.org/10.1080/18918131.2022.2073078

Government of India. (2015). Smart Cities Mission: 100 smart cities by 2020. Competition and grant program. https://smartcities.gov.in/

Tupasela, A., Devis Clavijo, J., Salokannel, M., & Fink, C. (2023). Older people and the smart city – Developing inclusive practices to protect and serve a vulnerable population. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1700

UK Government. Department for Digital, Culture, Media and Sport. (2022). The UK Online Safety Bill (Impact assessment RPC-DCMS-4347(4)). https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1061265/Online_Safety_Bill_impact_assessment.pdf

Ulbricht, L., & Yeung, K. (2022). Algorithmic regulation: A maturing concept for investigating regulation of and through algorithms. Regulation & Governance, 16(1), 3–22. https://doi.org/10.1111/rego.12437

United Nations. (n.d.). About the Universal Declaration of Human Rights Translation Project. https://www.ohchr.org/en/human-rights/universal-declaration/universal-declaration-human-rights/about-universal-declaration-human-rights-translation-project

International Covenant on Civil and Political Rights (1966). https://treaties.un.org/Pages/ViewDetails.aspx?chapter=4&clang=_en&mtdsg_no=IV-4&src=IND

International Covenant on Economic, Social and Cultural Rights (1966). https://treaties.un.org/pages/ViewDetails.aspx?src=IND&mtdsg_no=IV-3&chapter=4&clang=_en

United Nations. (2011). Guiding principles on business and human rights. Implementing the United Nations “Protect, Respect and Remedy” Framework (HR/PUB/11/04; pp. 1–35). https://www.ohchr.org/sites/default/files/documents/publications/guidingprinciplesbusinesshr_en.pdf

United Nations Human Settlements Programme (UN-Habitat). (2022). Mainstreaming digital rights in the digital transformation of cities. A guide for local governments (HS/033/22E; pp. 1–40). https://unhabitat.org/sites/default/files/2022/11/digital_rights_guide_web_version_14112022.pdf

United Nations Secretary General. (2020). The United Nations Secretary-General’s roadmap for digital cooperation: Ensuring the protection of human rights. https://www.un.org/techenvoy/sites/www.un.org.techenvoy/files/general/Digital_Human_Rights_Summary_PDF.pdf

Valcke, P., Clifford, D., & Dessers, V. K. (2021). Constitutional challenges in the emotional AI era. In H.-W. Micklitz, O. Pollicino, A. Reichman, A. Simoncini, G. Sartor, & G. De Gregorio (Eds.), Constitutional challenges in the algorithmic society (1st ed., pp. 57–77). Cambridge University Press. https://doi.org/10.1017/9781108914857.005

Vanolo, A. (2016). Is there anybody out there? The place and role of citizens in tomorrow’s smart cities. Futures, 82, 26–36. https://doi.org/10.1016/j.futures.2016.05.010

Viljoen, S. (2020). Democratic data: A relational theory for data governance. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3727562

Voorwinden, A. (2021). The privatised city: Technology and public-private partnerships in the smart city. Law, Innovation and Technology, 13(2), 439–463. https://doi.org/10.1080/17579961.2021.1977213

Voorwinden, A., & Ranchordás, S. (2022). Soft law in city regulation and governance. In U. Morth, E. Korea-aho, & M. Eliantonio (Eds.), Research handbook on soft law. Edward Elgar Publishing.

Wernick, A., Banzuzi, E., & Mörelius-Wulff, A. (2023). Do European smart city developers dream of GDPR-free countries? The pull of global megaprojects in the face of EU smart city compliance and localisation costs. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1698

Wiig, A. (2015). IBM’s smart city as techno-utopian policy mobility. City, 19(2–3), 258–273. https://doi.org/10.1080/13604813.2015.1016275

Wiig, A. (2016). The empty rhetoric of the smart city: From digital inclusion to economic promotion in Philadelphia. Urban Geography, 37(4), 535–553. https://doi.org/10.1080/02723638.2015.1065686

Williams, R. (2021). Whose streets? Our streets! (Tech edition): 2020–21 “smart city” cautionary trends & 10 calls to action to protect and promote democracy (pp. 1–69). Belfer Center for Science and International Affairs, Harvard Kennedy School.

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136.

Wood, D. M., & Monahan, T. (2019). Platform surveillance. Surveillance & Society, 17(1/2), 1–6. https://doi.org/10.24908/ss.v17i1/2.13237

Wood, D. M., & Steeves, V. (2021). Smart surveillance. Surveillance & Society, 19(2), 150–153. https://doi.org/10.24908/ss.v19i2.14916

Yeung, K. (2019). Responsibility and AI (Council of Europe Study DGI(2019)05; pp. 1–96). The Council of Europe.

Yeung, K., Howes, A., & Pogrebna, G. (2020). AI governance by human rights–centered design, deliberation, and oversight: An end to ethics washing. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI (pp. 76–106). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.5

Zwick, A., & Spicer, Z. (Eds.). (2021). The platform economy and the smart city: Technology and the transformation of urban policy. McGill-Queen’s University Press.

Footnotes

1. The most widely accepted international sources of human rights protection are the Universal Declaration of Human Rights (1948), which is not legally binding, the International Covenant on Civil and Political Rights (ICCPR, 1966) and the International Covenant on Economic, Social and Cultural Rights (ICESCR, 1966). In Europe, the Council of Europe (CoE), a human rights organisation distinct from the EU, has adopted the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR, 1950) with its additional Protocols, as well as other instruments such as the European Social Charter (1961) (see Kälin & Künzli, 2019). The interpretation of EU legislation, such as the GDPR, is guided by the Charter of Fundamental Rights of the European Union (CFREU), which has the status of EU primary law and protects the right to the protection of personal data as a distinct fundamental right (Art. 8), alongside the right to respect for private and family life (Art. 7).
