Do European smart city developers dream of GDPR-free countries? The pull of global megaprojects in the face of EU smart city compliance and localisation costs
Abstract
Smart city technologies can have detrimental effects on human rights, making it crucial to mitigate them in the R&D phase. This qualitative socio-legal study of the Helsinki metropolitan area (HMA) explores how public funding for smart city research and development (R&D), and the data protection by design principle (DPbD) of the General Data Protection Regulation (GDPR), facilitate the development of human rights compliant technology. Our study shows that the tension between the neoliberal logic of smart cities and human rights compliance extends from the local to the global level. High compliance and localisation costs, one-sided inputs and a push for scalability in smart city technology development in Finland and other EU states may lead companies to overlook human rights risks and to pursue markets outside the EU with lower standards of respect for human rights and the rule of law. We propose policy measures to facilitate human rights compliant smart city R&D, localisation and procurement, and discuss human rights due diligence and export control measures as means to mitigate the potential adverse effects of smart city technology exported from the EU. The study contributes to research on human rights-based approaches to smart city technology development and European innovation and export policy, with attention given to the role of public R&D funding agencies.

This paper is part of Future-proofing the city: A human rights-based approach to governing algorithmic, biometric and smart city technologies, a special issue of Internet Policy Review guest-edited by Alina Wernick and Anna Artyushina.
Introduction
Smart city technologies can have detrimental effects on human rights due to their capacity to be used for surveillance and policing purposes. Here, we presume that research and development (R&D) funding is one of the policy tools shaping the development of technologies associated with smart cities (Kitchin, 2022), and that it is crucial to mitigate human rights risks in the R&D phase. This socio-legal study of the Helsinki metropolitan area (HMA) reviews whether and how the smart city technology R&D funding instruments present in the area, and the data protection by design principle (DPbD) of the General Data Protection Regulation (GDPR), facilitate the development of human rights compliant technology.
In our analysis, we discovered four features of the Finnish and EU landscape for smart city technology development that may attract companies to pursue markets outside the EU with lower standards of respect for human rights and the rule of law. The smart city R&D funding landscape in the HMA displays the following features:
- GDPR compliance is perceived as a burden.
- The funding instruments do not push participants to seek sufficiently deep citizen, multidisciplinary and international feedback on the technology. Consequently, participants may lose valuable insight into the impact of the technology on human rights at the design stage.
- The potential for replicability, scalability and exports is important in smart city R&D initiatives and piloting.
- Integrating the technology into local contexts entails the burden of compliance with multiple sets of norms and (often) long-term coordination with a network of city officials. As a result, large ground-up smart city initiatives outside Europe attract attention.
These features may lead technology developers to overlook human rights risks in the R&D phase due to one-sided inputs and myopia in identifying human rights risks in technology exports. The neoliberal, technological solutionism of smart cities conflicts with human rights at the local, national, EU and global levels. Governance measures should therefore include both the facilitation of the development and adoption of local, human rights-compliant smart city technology and oversight of the human rights effects of exported technology.
This section introduces the technosolutionist and neoliberal characteristics of the smart city concept, and the contradictory policy goals of smart city R&D in the EU. We will then explain the human rights risks associated with smart city technology, the importance of their mitigation in the R&D phase and the role the GDPR plays in addressing them. While human rights are universally recognised (Universal Declaration of Human Rights), in this paper we will refer to the protections as transposed into the Charter of Fundamental Rights of the European Union (CFREU).
Technological solutionism with neoliberal characteristics: On smart city discourse
The discourse on smart cities has been described as a “language game”, involving corporations, universities, city officials and other actors over how cities are to be understood, conceptualised and planned (Söderström et al., 2014). As a result, there is no set definition of smart cities. However, according to the bibliometric work of Mora, Bolici and Deakin (2017), two dominant interpretative models can be discerned: a technocentric view and a human-centred perspective. The former is mainly corporate-driven, whereas the latter is led by (in particular European) universities and connected to the needs of governments. The technocentric view tends to understand the “smart city” as a depoliticised matter of merely applying ICT technologies to urban challenges to create datafied conditions for decision making, efficiency and innovation (e.g. Dirks & Keeling, 2009; Washburn & Sidhu, 2010; Mitchell et al., 2013). In contrast, the human-centred perspective views technology as a tool to facilitate broader goals: the empowerment of citizens through participation and investment in their human and social capital become crucial to facilitating their use of the city as a laboratory for urban change (e.g. Giffinger et al., 2007; Schaffers et al., 2012; Manville et al., 2014).
In practice, however, the two paradigms might have more in common than not, as both tend to promote technological solutionism with neoliberal characteristics (Cardullo & Kitchin, 2019). Technosolutionism refers to the assumption that societal and social problems can be solved with “computable solutions”, or through optimised processes (Morozov, 2013, p. 5). It relies on the algorithmic processing of data that does not account for local, urban knowledge that defies computation (Mattern, 2020). The datafication of city processes reduces urban problems into technical problems that can be solved by applying ICT (Coletta & Kitchin, 2017; Vanolo, 2013; Green, 2020), in addition to paving the way for “neoliberal governance” by quantifying the social into something “nudgeable” (Krivy, 2016; Gandy & Nemorin, 2019). Ultimately, even human-centred governments rely on data capture, analysis and open transparent information for evidence-informed policy development (Kitchin, 2014). The smart city has therefore been viewed as a site for capital accumulation, profit-making and tech-led urban entrepreneurialism, through which a neoliberal political economy is deepened, often through strategies of “accumulation by dispossession”. These strategies include: (a) capturing public assets and services through which city administrations are pressured to draw on the competencies and technological solutions of the industry through public-private partnerships, deregulation, privatisation and market competition; (b) fostering local economic development conducive to attracting foreign direct investment, the establishment of innovative start-up sectors or digital hubs and human capital; (c) encouraging property-led development and attracting investment in real-estate projects where smart technologies feature; and (d) putting in place architecture that enables neoliberal governmentality and governance (Kitchin et al., 2019, pp. 5-7).
In Finland, tension emerges within the neoliberal values of the smart city discourse. The country’s technology adoption is facilitated by its high level of the rule of law (WJP, 2020), popular trust in the government and science (OECD, 2022) and low levels of perceived corruption (Transparency International, 2021). Using citizen data for democratic purposes has traditionally defined the Nordic social-democratic welfare state. However, more contemporary national initiatives, such as the Act on the Secondary Use of Health and Social Data (26.4.2019/552), are characterised by the facilitation of commodification and the accumulation of data with the aim of participating in the global data markets (Tupasela et al., 2020), a logic that extends to smart city projects (Ylipulli & Luusua, 2020). This tension is also expressed in the growing data activism in Finland, with organisations such as MyData striving for citizen and consumer agency and for ensuring human dignity, while promoting the monetisation of personal data, where each individual would be the commercial owner of their data, as a means to these ends (Lehtiniemi & Ruckenstein, 2019).
Contradictory policy goals behind EU Smart City R&D and exports
The European Union (EU) and European states continuously invest in programmes that support the development of smart city technologies, such as applications of big data and artificial intelligence (AI), to facilitate the provision of public services or enable business opportunities in an urban context. Supporting the development of smart cities has been on the EU agenda, through a number of programmes, since the early 2010s (Paskaleva, 2011; Vanolo, 2013), predated by EU initiatives for the promotion of intelligent cities (Komninos, 2002; Hollands, 2008). More recently, the public R&D funding of smart city technology is aimed at goals such as innovation and regional competitiveness, ecosystem building and sustainability (EC, 2022a; Business Finland, n.d.-a), often emphasising scalability and replication (Cardullo & Kitchin, 2019) as well as exportation (Business Finland, n.d.-b; Manville et al., 2014, p. 18).
Hence, competition emerges as another feature of the neoliberal technosolutionism of the smart city discourse in the EU. Various parameters of smartness are developed to help cities sharpen their locational profile and improve their position in competition with other cities and rival global economies (Giffinger et al., 2007). The EU approaches competitiveness by creating conditions to attract “resources (human or otherwise) throughout Europe and the globe and returning ideas, income and other benefits” (Manville et al., 2014, p. 18), or more concretely, by developing a competitive advantage in smart city-related goods and services that could “help Europe to assist developing countries in managing mega-city development in ways that improve their welfare, reduce the risk of exported problems and help them to become better trading partners for Europe” (p. 23).
How does human rights compliance align with the goal of competitive exports? The GDPR grants EU citizens protection for their personal data and other fundamental rights and freedoms within and, to some extent, outside the EU (Arts. 1, 3 & Chapter V). The protection enjoyed by EU citizens has impacted global markets: non-EU states have adopted privacy laws following the GDPR standard (de jure Brussels effect). To ensure access to the EU market, companies are incentivised to standardise their technology production across the world, rendering the EU standard a global standard (de facto Brussels effect) (Bradford, 2012). However, EU companies are not obliged to comply with the GDPR in exports outside the EU; instead, they must comply with the laws of importing countries, which offer varying levels of protection for privacy, data and fundamental rights.
Legal and ethical concerns regarding EU states’ technology exports have been raised before (e.g. Amnesty International, 2020), relating also to data colonialism, i.e. companies’ practices of extracting, appropriating and privatising user data while testing data-driven systems in developing countries (Couldry & Mejias, 2019; Hao & Swart, 2022). The EU’s attempts to expand export controls for surveillance technology have been watered down (Kanetake, 2019; Seoane, 2020). The EU contends that it promotes human rights in its foreign policy (Art. 21(1) TEU) and human-centric digital transformation in its relations with third countries (European Declaration (2022), para 11). In 2020, the European Commission proposed an action plan on democracy and human rights aimed at harnessing opportunities and addressing challenges associated with new technologies, promoting a global system for human rights and democracy, and building resilient, inclusive and democratic societies (EP, 2022). However, the plan is currently on hold, although the proposal for the EU Due Diligence Directive (DDD) stems from the initiative.
Human rights risks of smart city technologies
In smart cities, technologies such as big data, Internet-of-Things, CCTV, cloud computing (Edwards, 2016), facial recognition technology, platforms, digital apps (Goodman, 2020) and more recently, AI, are embedded into the urban context (Goodman, 2020; Brauneis & Goodman, 2018; Luusua & Ylipulli, 2022; Botero Arcila, 2022).1 Many of the technologies associated with smart cities also allow for the surveillance and identification of individuals and pose threats to a range of human rights, potentially undermining democratic development (Williams, 2020, pp. 1-3). Due to the growing digital divides in the city (Graham & Marvin, 2001, pp. 291-292), citizens may also be excluded from enjoying their rights, such as those relating to the integration of persons with disabilities (Art. 26 CFREU; Kempin Reuter, 2020), good administration (Art. 41 CFREU) and social security (Art. 34 CFREU).
Smart city technologies are also associated with dual use (Galdon-Clavell, 2013, p. 720). The idea behind this concept is that technology, research or products which were intended to be used for something positive also have the capability to be used for (morally negative) purposes that are not aligned with the intentions of their developers (Forge, 2010). In EU law, “dual-use items” refer to items “which can be used for both civil and military purposes” (Art. 1 Dual-Use Regulation). It is not uncommon for military technology to later be adopted for civilian use in a smart city (Sadowski, 2020, p. 139). The surveillance capacity may also expand over time due to function creep (Monahan, 2007, p. 378; Wisman, 2012), which can be defined as “an imperceptibly transformative and therewith contestable change in a data-processing system’s proper activity” (Koops, 2021, p. 53).
Constant and ubiquitous surveillance by smart infrastructures (Sadowski, 2020) endangers citizens’ rights to privacy (Art. 7 CFREU) and the protection of personal data (Art. 8 CFREU) (Edwards, 2016; von Grafenstein, 2020; Botero Arcila, 2022). The protection of these two fundamental rights is a prerequisite for enjoying other fundamental rights and freedoms, such as freedom of speech, non-discrimination and political and religious beliefs (Buttarelli, 2017). For instance, smart city technology, especially when used for safety purposes, also facilitates expansive, data-driven and predictive policing (Joh, 2019; Sadowski, 2020), which can reinforce exclusion and the policing of marginalised groups (Sadowski, 2020; Williams, 2020; Ranchordas, 2021a; Botero Arcila, 2022). It may also undermine the presumption of innocence (Art. 48 CFREU); the right to liberty and security (Art. 6 CFREU); the right to non-discrimination (Art. 21 CFREU; Schlehahn et al., 2015) and fair trial (Art. 47 CFREU; Williams, 2020; von Grafenstein, 2020). Furthermore, citizens' enjoyment of rights may be chilled (Penney, 2021) due to socio-technical processes governing smart policing – sensors, algorithms, data-flows and control rooms (Sadowski, 2020, pp. 137, 148, 155). Indeed, the use of smart city technologies in public spaces may curtail the freedom of expression and information (Art. 11 CFREU), movement (Art. 45 CFREU), religion (Art. 10 CFREU) and assembly (Art. 12 CFREU) (Williams, 2020). Ultimately, smart city technologies can serve as a means of digital repression (Feldstein, 2021, pp. 25-26) and empower and maintain authoritarian regimes (Akbari, 2022).
GDPR as an example of fundamental rights-driven technology regulation
Smart cities may amplify digital divides, reinforce inequalities and support state or city-level surveillance while minimising citizens’ privacy (Kempin Reuter, 2020). However, the EU is well-positioned to develop human rights-compliant smart city technology. It has been a forerunner in the risk-based and fundamental-rights-driven approach to technology regulation, which the GDPR, adopted in 2016 and applicable since 2018, represents (Gellert, 2017). The approach was recently followed by the Digital Services Act, which regulates platforms, and the proposed AI Act (AIA), which would regulate algorithmic systems. In this article, we concentrate on the role of the GDPR in mitigating the risks associated with smart city technologies at their design phase.
The GDPR establishes a unified approach to the protection of personal data, i.e. “any information relating to an identified or identifiable natural person” (Art. 4(1)), amongst all EU Member States (Recitals 10, 13). Its purpose is to lay down rules relating to the processing of personal data, to facilitate the free movement of personal data within the EU internal market and to protect the fundamental rights and freedoms of natural persons, in particular their right to data protection (Art. 1). The Regulation codifies general data protection principles, i.e. lawfulness, purpose limitation, data minimisation, accuracy, storage limitation, confidentiality, security and accountability (Art. 5); grants rights to data subjects (the individuals concerned); and imposes obligations on data controllers (the actors who determine the purposes and means of personal data processing) (Art. 4(7)).
Our focus will be on controller obligations (Art. 24-43), in particular, the data protection by design (DPbD) requirement enshrined in Article 25 GDPR, which demands consideration of data protection and other “rights and freedoms”, including the fundamental rights (EDPB, 2020) of individuals throughout the technological design process. The core idea behind the DPbD principle is to mandate controllers to adopt technical and organisational measures that are designed to implement the general data protection principles effectively. It also requires them to integrate the necessary safeguards into personal data processing in order to meet the requirements of the GDPR, while ensuring the protection of data subject rights. Article 25 aims to reduce the commonly voiced “catch-up-with-technology” issue in regulatory progress by imposing data protection interests in the architecture of ICT systems, and enforcing these interests throughout the lifecycle of systems’ development (Bygrave, 2017).
R&D phase matters for human rights compliance
The human rights risks associated with smart city technologies, and the contradictory policy goals of EU smart city technology development, raise the question of whether the R&D instruments and the DPbD principle may contribute to the development of human rights compliant smart city technologies inside and outside the EU. Research on the ethical and legal aspects of smart city technologies further emphasises the importance of the R&D phase to this end.
In this study, we recognise that technology is normative (Hildebrandt & Tielemans, 2013; Larsson, 2019) and embodies values upon adoption (Winner, 1980) and design (Nissenbaum, 1998, pp. 38-39; Koulu, 2021). Technology may reflect biases present in the social realm, such as the values of its developers (Friedman & Nissenbaum, 1996, pp. 332-336). For instance, algorithmically-driven search engines such as Google, by the very way they render web content findable, tend to perpetuate the racial and gender biases present in society (Umoja Noble, 2018). Facial recognition technology was found to have discriminatory performance concerning race and gender due to the biased datasets used for training the algorithms (Buolamwini & Gebru, 2018; Najibi, 2020). Discriminatory effects may also emerge from the context of use or the features and limitations of technology (Friedman & Nissenbaum, 1996, pp. 332-336). Respectively, we adopt the view that technology can be proactively designed to reflect desired values (Friedman et al., 2013), such as human and fundamental rights.
In contrast to ethical values, enforceable fundamental rights are passed through a democratic process, and their protection should be considered when the technology is designed, for example, through stakeholder participation (Hildebrandt, 2021, pp. 235-237, 247). This approach represents the goal of ex-ante oversight (Hakkarainen, 2021). It was codified in law by the GDPR, through obligations to implement DPbD (Arts. 24(1), 25(1)) and to carry out data protection impact assessments meant to address risks, not only to data protection, but to all fundamental rights of data subjects (Art. 35(1)). While the GDPR permits derogations from data subjects’ rights for data processing for scientific research purposes, it does not free the controllers from DPbD obligations (Art. 89) during the R&D phase. In the EU, the GDPR is generally used as a “proxy for the protection of other individual fundamental rights and freedoms” (Oostveen & Irion, 2016, p. 17) in connection with data-driven technologies.
However, the existing human rights framework is not always well-suited to address the risks associated with AI applications (Mantelero, 2018; Yeung et al., 2019; Smuha, 2021), which are increasingly common in smart city projects. For example, the protections afforded by the GDPR are insufficient with respect to profiling, algorithmic decision-making and solutions that do not rely on processing personal data (Oostveen & Irion, 2016). AI systems may fall outside the scope of the GDPR’s protection due to human involvement in their decision-making mechanism; the ambiguity of legal effects produced by automated decision-making (Bygrave, 2020); as well as multi-stage (Binns & Veale, 2021) and group profiling (Schreurs et al., 2008; Galič & Gellert, 2021).
The EU has taken action to address these shortcomings: it has published ethical guidelines for AI (AI HLEG, 2019) and is in the process of adopting the AIA, which aims to prevent AI applications’ adverse impact on fundamental rights (para 13) by means such as conformity assessments (Art. 43) and other obligations that cover the life-cycle of the system (p. 3). These are set to direct collaborative work in technology development. Meanwhile, the Council of Europe (CoE, 2021; Breuer, 2022) is working on a proposal for a “legally binding transversal instrument”, the scope of which covers “the development, design and application of AI systems” (para 12), to mitigate the risks AI can pose to human rights, democracy and the rule of law (paras 3-4) by means such as human rights, democracy and rule of law impact assessment (para 19, XII). Europe can thus be seen as moving towards a human rights-based approach, not only in the governance of technologies processing personal data, but also of AI (Yeung et al., 2019; Smuha, 2021).
The wider discussion surrounding ethical AI and data, which also informs the legislative reforms discussed above, stresses the importance of multidisciplinary and multi-stakeholder collaboration in the design phase. Compliance with the DPbD principle of Article 25 requires adopting interdisciplinary methods (von Grafenstein et al., 2022). Furthermore, socio-legal scholars also emphasise the importance of studying the inherently political social processes and consequences of technology design (Koulu, 2021), and call for interdisciplinary socio-legal research to ensure that AI-driven technologies, which have long-term impacts, are ethical (Larsson, 2019). Developers should engage in multidisciplinary exchange to avoid digital divides and the reinforcement of inequalities in urban spaces (Kempin Reuter, 2019, 2020; Graham, 2002; Sloane, 2022). Multi-stakeholder consultation is also highlighted in connection with intersectional data feminism as a means to foster data justice and combat oppression from data science designed by dominant societal groups (D’Ignazio & Klein, 2020a, 2020b, 2020c). The role of public R&D funding and tendering in building human rights-compliant smart city technology is understudied (the sole example being Brown, 2019, pp. 55-59, 61), and its role in the development of ethical AI has only recently been researched (Gardner et al., 2021).
Methodology
We employed a multi-method approach (Nielsen, 2012) by conducting Case Study Research (CSR) (Yin, 2014) of the funding landscape of smart city projects in the Helsinki metropolitan area (HMA) of Finland, interviewing experts representing various funding instruments. We analysed the material utilising a Grounded Theory (GT) approach (Glaser & Strauss, 2006 [1967]) in combination with legal doctrinal analysis.
Our study aimed to explore the potential human rights risks and legislative gaps of smart city projects during their R&D phase. We chose Finland and the HMA due to their internationally recognised favourable attitude toward smart city projects. We identify Finland as a “critical case” for observing the tendency of smart city technology developers to pursue markets with less restrictive human rights and data protection regimes than the EU, in the sense of providing a clear set of circumstances from which one can draw analytical generalisations (Yin, 2014, p. 51).
The GT approach is here understood as a form of qualitative content analysis where the analysis is carried out in a reflexive and comparative manner for the purpose of developing concepts and theories (Glaser & Strauss, 2006 [1967]). The Straussian variant of GT (as opposed to the Glaserian approach (Glaser, 2012)) sees it as desirable to acquaint oneself with the relevant literature to derive relevant questions (Strauss & Corbin, 2015, pp. 52ff), thus making it compatible with CSR (Halaweh et al., 2008). Our legal knowledge and assumptions about potential human rights risks involved in smart city projects were a constant backdrop to our process. As a result, the content analysis was “directed”, with human rights and EU law representing sensitising concepts that guided the data analysis of the phenomenon (Bowen, 2006; Hsieh & Shannon, 2005). Moreover, conducting expert interviews demands the researcher’s familiarity with the discourse and field within which the interviewees are active, as their readiness to share knowledge might depend on the level of competency with which the interviewer presents themselves (Meuser & Nagel, 2009). Thus, previous knowledge helped us formulate and focus our research propositions, define relevant samples and direct our analysis.
Helsinki smart metropolitan area as a “critical case”
As an object of study, we chose the role of public funding instruments in smart city technology development in the HMA, which includes the cities of Helsinki, Espoo, Vantaa and Kauniainen.
Finland is highly digitalised, being the EU state that ranked highest in the 2022 Digital Economy and Society Index (EC, 2022b) with regard to the human capital (digital skills) of the population, connectivity, the integration of digital technology among firms and digital public services. The share of ICT-related activities in GDP, as well as the share of ICT employees in total employment, are at the higher end in comparison with other EU countries (figure 1 and figure 2).


Finland ranks as one of the most innovative countries in the world (WEF, 2015, p. 14; EC, 2020, p. 16). The country has high levels of R&D investment (figure 3) and was also the world leader in patent filings (per million inhabitants) for technologies related to the “Fourth Industrial Revolution” in the period 2000-2018 (figure 4).


Within Finland, the HMA is considered the country's leading innovation cluster, being the most developed region in terms of innovation activity such as R&D spending, patenting, human-capital concentration and knowledge-intensive firms (Makkonen & Inkinen, 2015; Kiuru & Inkinen, 2017). The Finnish economy is open, with exports accounting for over 40% of GDP, and the HMA distinguishes itself as the leading region in exports of services embodying intangible capital, consisting of R&D, organisational capital and ICT (Piekkola, 2018, p. 9).
These conditions make Finland fertile ground for smart city projects, with the HMA being particularly relevant as it features the country's largest and wealthiest cities, hosting numerous smart city initiatives. Helsinki ranked 6th in the Smart City Index (IMD, 2021), and both Helsinki and Espoo were selected for the EU’s Mission to build 100 European carbon-neutral and smart cities by 2030 (Espoo, 2022). Finland supported smart city development with public R&D funding through the Witty City programme in 2013-2017 (Mustonen et al., 2014, p. 26; Tekes, 2018) and the INKA programme in 2014-2017 (TEM, 2017). During this period, smart city initiatives such as living labs (Hielkema & Hongisto, 2013), open data sharing practices (Jaakola, 2013) and initiatives for smart districts also emerged in the HMA (Tekes, 2015).
Sampling and sample
We based our sampling on a definition of an expert as someone who holds particular technical, procedural and interpretative knowledge relevant to smart city projects, and who can affect the practice and actions of others in the field (Bogner & Menz, 2009, pp. 54-55; Meuser & Nagel, 2009, pp. 26-30). Following the GT approach, our sampling procedure was “theoretical”, meaning purposely “biased” towards the most relevant participants who could provide data conducive to theory development. As such, the sampling was likewise guided by the concepts derived from the evolving analysis of previous data, in order to refine the concepts further (Glaser & Strauss, 2006, pp. 45ff).
Our sample consists of 19 semi-structured, thematic interviews with a total of 21 representatives (table 1, appendix), complemented by grey literature on the initiatives. The interviews were conducted between November 2021 and December 2022 via video call platforms and recorded with consent. The interviews lasted between 50 and 90 minutes. Our sampling started with the R&D initiatives in the HMA region that focused explicitly on smart cities, or on the integration of AI and IoT in urban environments or city functions, and that appeared, on the basis of a grey literature review, to receive the largest amount of public funding. Additional interviewees were often contacted on the basis of snowballing from previous interviews. This allowed us to verify the expert status of the interviewees through other sources of data and experts (Bogner & Menz, 2009, p. 55). The sampling emphasised the diversity of public funding instruments and pre-competitive public-private R&D initiatives until reaching saturation (Strauss & Corbin, 2015, pp. 149-150, 203). To verify initial findings, we also interviewed four non-HMA-based experts.
Since the study deals with potentially sensitive topics, interviewees were provided with a higher level of anonymity at the expense of more detailed descriptions of the R&D initiatives researched. The types of initiatives as sources are described in the appendix. The majority of the interviews concerned ongoing funding programmes. The governance of many of the funding instruments reviewed featured “nestedness” (Madison et al., 2009). The distribution of certain types of EU funding was governed on the national, regional and municipal levels. Some instruments required matching funding from companies or municipalities. An individual programme or a project may have received funding from numerous funding sources or have interdependencies with other initiatives.
Expert interviews
When interviewing experts, it is generally advised to make use of a semi-structured interview model, which is topic-guided (Meuser & Nagel, 2009) and problem-centred (Döringer, 2021), asking open-ended questions to stimulate narration structured by the individual’s concerns (revealing tacit knowledge), while also allowing for more specific questions to be asked. The benefit of this interview approach is that it highlights the individual perspective while enabling comparability of the gathered data. Our themes revolved around (a) smart cities and smart city technology, (b) governance of procuring and getting involved with smart cities and (c) managing risks and opportunities.
We acknowledged that interviewees may have identified important risks that were not legal in nature. Therefore, we posed general questions on the topic before bringing up the topic of legal risks, such as threats to data protection. We held assumptions regarding the human rights risks of smart city technology while also realising that most people – expert or not – are generally not used to reflecting on risks in legal terms. Therefore, we asked open-ended, exploratory questions regarding the ‘how’ of decision-making and procedures to elicit answers that might reveal such risks implicit in the decision-making processes (see appendix for questions).
Interview analysis
We conducted directed (qualitative) content analysis. “Directed” means the analysis process was informed by previous research, human rights and EU law, from which we deductively developed an initial rough scheme of relevant concepts and codes. The following coding procedure remained “open”, or inductive, towards new emerging concepts and modifications (Hsieh & Shannon, 2005, p. 1281).
Our coding procedure was initially “open” (breaking the data into discrete parts with labels), but gradually we started drawing connections between the concepts and categories, thus “axially” coding them (Strauss & Corbin, 2015, pp. 222ff, 240ff). We also kept separate analytical notes of the material, as comparing these allowed us to notice varying connections and interpretations within the team, acting as a corroborative mechanism.
Qualitative research findings
In our analysis, we found that Finnish and EU smart city technology development, and the smart city R&D funding landscape in the HMA, display four features, as discussed below. For the full quotes referred to throughout the text, see our quote repository (table 2, appendix).
GDPR as a perceived burden
The relevance of data protection was generally acknowledged by the interviewees. Applications, such as video recordings and cameras, data collection from public spaces, user interfaces with citizens, healthcare and social service applications, were perceived as riskier than projects focused, for example, on infrastructural connectivity and analytics.
Legal-wise, the GDPR is a huge risk, with the dozens and dozens of cameras that we have (E1: Q1).
Generally, data protection was perceived to involve a trade-off between diligent compliance that minimises the risk of re-identification and the targeting of individuals, and having the possibility to utilise the data more broadly, for instance for the purposes of service development.
If we make it completely anonymous, it’s completely useless, there is no data left. But if we keep certain aspects of data, then it becomes [increasingly] likely that someone with access to anonymized data will identify people. And erring on the side of caution, what we do, leads to risks of underutilizing data that we have (E8: Q1).
Data protection compliance is a part of, or an indication of, professionalism for actors involved in smart city R&D. Especially in collaborations involving cities, ensuring GDPR compliance was a necessary part of setting up the R&D initiative. Companies that have not thoroughly considered data protection in their applications are perceived as “not qualifying” for funding (E3: Q1). Especially for companies, the risk of GDPR enforcement and potential fines motivated compliance efforts.
GDPR compliance was perceived as a novelty in both the private and public sectors, requiring considerable effort to interpret the law and resources to establish the required risk management processes. An interviewee mentioned that employees might have learned to work in a certain way, and using data or requirements for data protection impact assessments required efforts to re-learn ways of working (E9: Q1). Another interviewee expressed challenges with the ambiguity of GDPR requirements: “[The GDPR was] just recently set up in Europe [...] and [the EU regulators] are making the situation very unclear because people are not really understanding what it means on many practical levels” (E5: Q1).
For successful compliance in municipality-driven projects, key factors echoed the DPbD criteria (Art. 25 GDPR). Firstly, organisational processes were viewed as key to compliance: “All risks are subordinate to lack of proper process, [...], if you don’t have proper processes, for example, legal compliance cannot be fulfilled. You will always miss something” (E12: Q1). The establishment of organisational processes also enabled multistakeholder learning on technologies and risks, requiring collaboration between the “technology side” and the “legal side” (E20: Q1). Secondly, balancing the limitations against making use of the flexibilities within the GDPR was deemed a best practice for successful compliance (E8: Q1). Lastly, GDPR compliance was ensured through technical means, such as automated pseudonymisation or focusing surveillance on the masses rather than individuals. For instance, an interviewee explained that the technology used was “designed to not detect any biometric information” (E17: Q1).
At its worst, compliance was perceived as complicated, costly and time-consuming by both the private and public sectors, leading to project delays. Difficulties arose from three factors, one of them being the timing of compliance measures for technology that is under development. Some R&D projects were reported to be at too early a stage to initiate plans for data protection compliance: “We need to tackle legal aspects such as the GDPR at some point, before reaching market availability of [our solutions]. However, we are in such an early phase that we do not know what the solutions might be” (E4: Q1). Additionally, risks could become evident only through testing a solution: “Development of AI and data-driven tech led to the awareness about their risks. Also, the maturing of the tech helps us to understand the possible risks. One needs also to experiment to become aware of the materialisation of such risks” (E21: Q1).
Other challenges related to the disproportionate compliance costs for small, short-term R&D projects with an uncertain life-cycle. Compliance was also a challenge within multiparty projects, where compliance policies must be successfully introduced to each new participant.
I wish that there would be some kind of leniency in the policies or laws regarding piloting, testing and experimentation. So for example, that we could be doing something for six months, that would not be okay to do in the long run, in the production version, but [be allowed] while we're testing (E5: Q2).
We spend time solving these GDPR issues and kind of know how we should do it, comply with the legislation. Fine, we do it now. But we know that it will be a risk for new partners to come and try to get it because it is such a complicated and time-consuming procedure (E1: Q2).
Against this background, the private sector perceived the limitations and costs set by the GDPR to put European smart city technology developers at a disadvantage vis-a-vis countries that allow for freer re-use and recombination of data for R&D and commercialisation purposes: “[...] it’s much easier to develop and test these types of solutions in an environment like, for example, China, where the data is not controlled like the GDPR regulations now” (E5: Q3). A company project director (E1: Q3) emphasised that “the best innovations go elsewhere” if the EU becomes too strict with GDPR enforcement.
One-sided development inputs
The governance of the instruments funding the R&D initiatives rarely featured norms that would mandate diversity among developers, or enhance feedback loops during the development processes. Both private and public actors reported a lack of incentives to gain more interdisciplinary or diverse feedback on the initiative (or its potential risks). While instruments supporting company-led R&D and ecosystem building emphasised the diversity of participating companies, multidisciplinarity and citizen engagement was left up to the initiative of the companies and research institutions proposing the research projects. There were no norms on the inclusion of researchers from social sciences in the R&D projects. Companies' path-dependencies may lead to certain collaboration opportunities and risks remaining unidentified: “we are too busy to, you know, get the cash flow starting, that we don't get stuck in the minor details like legal aspects which might end up backfiring” (E6: Q1).
The lack of diversity also stemmed from conditions on who can be funded. Some of the national instruments could not fund cities’ participation in projects, offering them only a more passive advisory role on a volunteer basis (E2: Q1). The fact that some of the nationally funded projects were confidential apart from designated outputs makes it difficult to evaluate whether the initiative lacked input from an important stakeholder or a group. Since the funding instruments, more generally, are geared towards enhancing the competencies of Finnish companies or regional development, their conditions typically excluded collaboration with foreign companies, which reduced input on international development and the application of smart city technologies.
Most of the projects are driven by Finnish companies, universities and research institutes - we don't have that much of an understanding or even opportunities to understand what is happening in the smart city domain in China, or the US or South America or Africa, which might lead us to not understanding the global definition or a smart city (E4: Q3).
In contrast, the funding instruments involving cities and their own initiatives sought greater transparency and engagement, which was reinforced by the initiatives’ alignment with municipalities’ strategic goals, such as digitalisation, service delivery and sustainability (E15: Q1; E20: Q2), as well as the Nordic culture’s openness and trust in the public sector (E3: Q2). However, citizen engagement and feedback were reportedly also hindered by COVID-19 measures (E16: Q1), and vulnerable to the overrepresentation of privileged digital natives and to ethics washing, where the legitimacy of a project is “reducible to consulting the residents” (E8: Q2). Similarly, technosolutionist funding instruments diverted attention from the implications of the technology. For instance, when asked about AI ethics, a respondent discussed the integrity of the technical features of the solution, i.e. assessing whether AI is part of the solution (E2: Q2).
Scale-up as an aim
Smart city projects are more likely to receive funding if they are expected to be replicable and scalable, traits that ultimately also make the projects exportable: “[W]e do not do anything just once, or tailor it for a specific purpose [...] The goal is always scalability” (E8: Q3). This is demonstrated by the fact that cities are viewed as innovation platforms, where companies may test their smart city solutions before replicating them in another district, city or country (E5: Q4). Besides companies, municipalities also discovered scalable governance and business solutions through smart city R&D initiatives (E20: Q3; E17: Q2 & E15: Q2).
A solution’s exportability is also perceived as a reflection of its monetary value. Some of the national funding instruments spearheaded export potential as an explicit funding criterion. Exportable products bring tax money to Finland. In this context, smart city megaprojects, for example in the Middle East, are viewed as lucrative export opportunities for dozens of Finnish companies (E7: Q1).
Quite many companies have given us very good feedback that the actual pilots have helped them in product development and to commercialise products, which of course means that hopefully they will be a successful business and then try to bring tax euros to Helsinki (E3: Q3).
With respect to smart city exports, Finland was perceived to have a favourable brand, in terms of sustainability and trustworthiness, especially in data protection and governance, where human rights compliance contributes to upholding and reinforcing this image. Data protection compliance was thus perceived as part of a marketing strategy to advance scale-up.
Finland has been a pioneer in open smart city data and also a forerunner in setting up the governance and legislation for data, including requirements of consent from individuals, which is relevant for export (E7: Q2).
Discouraging costs of localisation
Both the private and public sectors viewed seamlessness in the provision of services to citizens as a relevant goal for the development of smart city technologies. The development requires the integration of input from many stakeholders, such as city departments. Where the services in question were not purely digital, their provision was presumed to be supported by IoT solutions.
Many things in smart cities are under the hood; citizens benefit from them without anyone actually seeing them (E9: Q2; see also E15: Q3).
One of the biggest challenges, both in the development of such technologies and their introduction as finished, legally compliant solutions to municipalities, is the complexity of cities as organisational units and the applicable laws, not only data protection law, but also public law applying to municipalities and norms specific to the given city.
The hardest part [of GDPR compliance] was to communicate with different city units, because they have their own data management things. It was not centralised. So you had to have good contacts with the relevant department (E13: Q1).
Pilots run by companies with the city do not guarantee public procurement of the solution and thus, scale-up. From the outside, the procurement processes were perceived as complicated and opaque, and the regulations as rigid. Indeed, in a large city, individual procurement processes involve a large number of people, with the aim of ensuring both legal compliance (E12: Q2) and that the procured solutions suit the needs of the relevant city department.
The problems are exacerbated when the technology aims to offer a more “seamless” experience, when the introduction of it does not align with sector-specific planning and procurement practices. For this reason, offering smart city solutions to new neighbourhoods was generally seen as more attractive and less administratively burdensome than retrofitting them into established districts. A company project director (national funding) described how his team would have needed to get several permits to change the smart city infrastructure they were developing, adding that their solution’s complexity “is a challenge because no one person can handle this. And in city organisations there are usually many departments. So there is a risk of missing out something and blocking some things we are able to do” (E1: Q4).
The city funding of holistic solutions also poses a challenge, as the benefits may not be directly visible to citizens, or may accrue in a cross-departmental fashion. As a consequence, within a municipality it may be difficult to make a strong enough case for including them in the city budget. Where smart city technology requires wide-scale hardware installations in the urban environment, technology developers must either market the solutions to the municipalities that can afford such investments, or devise a business case, such as one reliant on the re-use of data, that will attract private investors.
Cities are usually not the investors to infrastructure in Finland, [...] if we go to Dubai or other kind of countries where the cities are a bit maybe more rich in finance, they can do the investments (E1: Q5).
R&D funding instruments that empower municipalities and emphasise cross-departmental, multi-stakeholder integration appear to mitigate the obstacles by devising local, integrative solutions.
Ad hoc nature of R&D projects and human rights risks
Data protection by design in R&D
The interdependencies between the four features of the Finnish and EU smart city R&D funding landscape discussed above may lead to situations where the technology developed by European companies supports the fulfilment of fundamental rights neither in Europe nor outside the EU. We identify issues relating to the ad hoc nature of R&D projects and to international myopia, discussing them and their solutions in the following two sections.
Our qualitative analysis reveals that compliance with the DPbD principle (Art. 25 GDPR) presupposes a relatively stable and continuous organisational context, in which, once the principle is implemented technically and organisationally, it can be maintained at lower marginal costs than its initial set-up required. This contrasts with the often ad hoc nature of R&D projects and pilots, which may involve temporary, open collaboration among heterogeneous stakeholders. The risk of fines is an important motivation behind GDPR compliance in the R&D phase, but the upfront costs of establishing heavy organisational measures were perceived as burdensome, especially when the available budget was limited.
Both the assessment of technology’s impact on data protection and other fundamental rights, as well as the protection of these rights through technology design, can be challenging given the relative and context-specific qualifications of the controller’s obligations under Art. 25 GDPR. The process of translating abstract values into design specifications is not straightforward (Nissenbaum, 2005; Flanagan et al., 2008; van de Poel, 2013; Koulu, 2021). For example, the CFREU contains 50 articles on citizens’ rights and freedoms and, as discussed in the introductory section, smart city technologies may affect a number of them. Evaluation of technology’s impact on an individual right can be very complex: an assessment tool for the non-discriminatory impact of AI in its development phase consists of a list of 70 factors (Ojanen et al., 2022). Even without going to such lengths, GDPR compliance might have unintended effects on innovation and market structure. Recent studies have found the GDPR obligations to have had a significant inhibitory effect on innovation in mobile app development (Janßen et al., 2022), as well as shrinking the market for third-party web technologies while also increasing the concentration of market share by dominant firms (Peukert et al., 2021).
Thorough compliance with the DPbD obligations in the R&D context is also challenging due to the emergent and experimental nature of the technical solutions developed in a project. DPbD must be implemented at the time when the means of processing are determined (Art. 25 GDPR). Yet, the possible impact may not be evident at the initiation phase and may only emerge over time. Many of the public R&D funding instruments studied lacked norms, nudges and resources to diversify inputs in the R&D phase in a manner that would give a broader overview of the technology’s impact on human rights. We assume that such incentives are even fewer in private sector R&D. As prototypes may define the ultimate design of the technology (Vakkuri et al., 2020, p. 207), one-sided inputs and a lack of assessment measures may lead to smart city technology being placed on the market and having an undesirable impact on fundamental rights when scaled up. In both situations – prohibitive costs of compliance and undercompliance – opportunities to develop smart city technology that supports the fulfilment of human rights may be jeopardised.
Legal measures to support compliance
Interest in lowering the compliance standard for companies in the R&D phase was voiced in our data. The AIA proposal may respond to company interests by introducing regulatory sandboxes where innovative AI solutions can be developed and tested, including in “real world conditions”, under lower GDPR compliance requirements and regulatory oversight, before they are placed on the market or put into service (Recital 71, Art. 53(-1a); Art. 54(1)(7) (as per Council revisions)). Real-world testing of high-risk AI systems would be subject to more defined criteria, for example, regarding testing subjects’ consent (Recital 72a; Art. 54a). Regulatory sandboxes are expected to provide regulators with an improved understanding of novel technologies while fostering innovation, allowing businesses to explore and experiment with new products and reduce the time-to-market cycle for products (Madiega & Van De Pol, 2022). Despite the potential benefits, regulatory sandboxes have been criticised: they are often politicised, subject to methodological deficiencies and offer limited validity of their results (Ranchordas, 2021b). Furthermore, the sandboxes do not free the actors from the challenges of cross-disciplinary interaction with regulatory oversight.
The extent of the obligations the AIA will pose for smart city technology developers at the R&D stage is unclear. The most extreme proposal to date excludes the applicability of the AIA to R&D activity (Recital 12b), but nevertheless maintains DPbD obligations when testing technology in a sandbox (Recital 72-a). Whether and how the AIA’s R&D exemptions and sandboxes will foster organisational learning to identify the fundamental rights risks posed by AI requires more research.
Apart from the AIA, R&D funding agencies and municipalities should play a more active role in ascertaining the human rights compliance of smart city technologies. Indeed, the funding organisations and municipalities studied acknowledged that emergent data-driven technologies posed novel risks, and many were developing guidelines for data and AI ethics. However, the funding criteria of the initiatives studied rarely emphasised fundamental rights at large.
Recent research on AI ethics suggests that R&D institutions can do more to ensure that they are funding accountable AI, such as demanding that developers file a “trustworthy AI statement”, or establishing boards to evaluate the trustworthiness of AI (Gardner et al., 2021). Yet, ethics-based approaches to technology are criticised for being prone to ethics washing (Floridi, 2019, pp. 187-188; Metzinger, 2019) and vulnerable to a proliferation of overlapping ethics frameworks (Sætra & Danaher, 2022). Relevant principles, even if based on the content of fundamental rights (Smuha, 2019, p. 101; AI HLEG, 2019), are unenforceable (Hagendorff, 2020, p. 99). Where the AIA’s sandboxes do not reach the same aims or will not apply, public funding for smart city technology should support interdisciplinarity, feedback loops and organisational learning on human rights impacts throughout the project life-cycle. This may take the form of a “soft law” sandbox.
More generally, it would be advisable for public funding agencies to set human and fundamental rights as a normative goal of smart city projects and tie the respect of such conditions to incentives or sanctions. On the macro level, the EU appears to be moving in this direction. For example, the European Regional Development Fund, which also funds programmes on smart and sustainable cities, lists the programme’s respect and implementation of fundamental rights as one of the funding conditions (Arts. 9 & 15, Annexes III & IV, Regulation (EU) 2021/1060). Similarly, actions supported by the Horizon Europe framework programme should respect fundamental and human rights, and funded legal entities are mandated to carry out ethics self-assessments and to establish governance for ethics screening and checks. In addition, the Horizon framework specifically funds research and innovation initiatives geared towards strengthening democratic values and fundamental rights (rec. 71; Art. 19, Annex I, Regulation (EU) 2021/695). Some of its proposals for the 2023-2024 Work Package Programme explicitly mention fundamental rights as evaluation criteria for projects on AI (EC, 2022c). How the emerging governance instruments for publicly funded R&D succeed at translating the normative values of fundamental rights into technology deserves further research.
International myopia
Human-rights risks of exported technologies
From our data, we also observe a tendency toward international myopia: deterring compliance and coordination costs, and the unprofitability of integrating smart city technology locally, attract companies to pursue top-down megaprojects in places where the GDPR does not apply, such as Gulf countries. This may lead to technology exports that harm the human rights of citizens in the importing country, owing to insufficient screening of technology impacts and international inputs in the R&D phase, and to the fact that many smart city technologies can be subject to dual use:
When you are using solutions that collect all your information, it can be used either for you or against you, depending on the use case. So, in that sense, there is a very thin line between these technological solutions. Traditionally, the dual-use technologies have been referring to technologies that can be used also for military purposes, but in this new world, I think that the dual-use technologies are covering a much wider area, and it’s not only kind of military use, but let’s say, one could use for good and for bad (E21: Q2; see also E16: Q2).
As not all countries have similar levels of fundamental rights protection, what is the responsibility EU companies should assume when exporting technology outside the EU? Exporting technology into a country with an alarming record of human rights violations may represent conscious profit-seeking irresponsibility, a purposeful malpractice of “digital ethics dumping” (Floridi, 2019, p. 190) or, alternatively, naivety on behalf of European smart city developers.
What protection is afforded by technology that was initially designed to be GDPR-compliant and is then exported outside the EU, under the exporting country’s brand of trustworthiness? According to Hildebrandt, “legal protection by design” is afforded by the contestability of rights in a court of law. When such technology is exported to a country that does not afford its citizens rights similar to the GDPR, or strong protection of the rule of law, it may be deemed merely “legal by design” (Hildebrandt, 2020a). As a consequence, the fundamental rights of citizens may be protected only precariously, relying on data or AI ethics (see Mittelstadt, 2019; Hildebrandt, 2020b). Nevertheless, such technology exports may serve EU companies’ interests on the global market for smart cities, with the possibility of reinforcing welfare-enhancing practices in developing countries (Manville, 2014, p. 19) and possibly extending the de facto Brussels effect (Bradford, 2012).
When the exported technology is not GDPR compliant by design, or complies only with the legislation of the country of import, European smart city technology developers may, in the worst case, offer tools to support authoritarian rule in other countries, as has already occurred with European ICT suppliers (Wagner, 2012; Kanetake, 2019). Such concerns are also raised when EU developers seek states with lower legal constraints and supervision for testing the systems, which might then be imported back into the EU (see Floridi, 2019, p. 190). In this light, we observe a contradiction between the European policies and legal standards for human rights compliance of smart city technology set for the internal market, and those set for exports.
Measures to support respect for human rights
Two EU law instruments may mitigate the potential adverse effects of exported technology on human rights abroad: technology manufacturers’ human rights due diligence obligations (Donahoe & MacDuffee Metzger, 2019; DDD proposal), as well as export control for dual-use technologies (Whang, 2021; Dual-Use Regulation).
The EU has caught up with a development in which member states make corporate due diligence obligations (OHCHR, 2011; OECD, 2018) legally binding (Kitukwe, 2022). The DDD proposal aims to promote respect for human rights and sustainability by imposing due diligence practices (e.g. identification and prevention of adverse impacts on human rights or the environment) on companies’ global supply chains (Art. 4). In this way, the proposed rules aim to “advance the green transition and protect human rights in Europe and beyond” (EC, 2022d). Although the proposed act does not explicitly deal with data, AI or smart cities, it can be interpreted to also impose obligations to prevent human rights abuses caused by AI (Lane, 2022) and other technology in companies’ value chains outside the EU.
The Dual-Use Regulation seeks to promote responsible technology export from the EU, and to prevent the use of EU-exported technology for the development of weapons. The Regulation “establishes a Union regime for the control of exports, brokering, technical assistance, transit and transfer of dual-use items” (Art. 1), meaning items which can be used for both civil and military purposes. It requires companies to acquire authorisation from a national competent authority when exporting items belonging to the ‘dual-use’ category (Art. 3). Notably, it also demands authorisation for the provision of technical assistance related to dual-use items (Art. 8). Our interviews suggest that smart city technologies have a low and nebulous threshold for creating negative effects. The Dual-Use Regulation was recast in 2021 to give a wider account of human rights risks and to cover more surveillance technology (rec. 2, Art. 5). Further research is needed on the applicability of the Dual-Use Regulation to smart city technology exports.
Policy measures to address the lack of risk mitigation, stemming from insufficient knowledge of human rights and their protection in the country of export, are also necessary. This responsibility could be taken on by the EU embassies, delegated to supervisory authorities under the DDD (Art. 18), or undertaken at member states’ own initiative. Within the EU, it is important to make the European market for smart city technology more attractive vis-a-vis global megaprojects. Measures should be developed to facilitate effective and human rights-compliant technology localisation and procurement by European cities. Support for the development of replicable compliance processes and governance solutions for piloting and adopting technology across a range of national and European municipalities is also of value, as are cooperative measures between cities (Ringel, 2021). European cities may also have an indirect impact on exports by requiring a higher level of corporate human rights due diligence from their contractors, whereas companies sanctioned under the DDD could be excluded from public R&D funding (Art. 24).
Conclusion
Publicly funded smart city technology developers in the HMA may pursue exports to countries with lower standards of human rights protection due to one-sided development inputs, high costs of compliance and localisation, as well as international myopia. We believe that the case of the HMA is relevant beyond the narrow confines of Finland: further research could focus on generalising this claim with comparative case studies in other countries.
The problems identified should be addressed at different governance levels. European and national R&D funding agencies should be conscious that the technologies associated with smart cities can have a much stronger impact on human rights, society and power relations than the R&D they have funded in the past. The agencies, with the support of the institutions directing them, must adopt a critical lens towards the societal objectives and impacts of their agendas. The power relations reinforced by the funded solutions should not be ignored (Powles & Nissenbaum, 2018; Munn, 2022; Sloane, 2022). Norms that enhance the diversity and interdisciplinarity of funded projects, including the presence of the social sciences (see Sloane & Moss, 2019), and that mandate stakeholder consultation and involvement, are necessary. The agencies should also require the development of legal compliance measures, as well as assessments and audits of technology impacts, while supporting these actions with expertise, governance and financing.
For smart European cities and, ultimately, the European project to be more than a neoliberal dream (Cardullo & Kitchin, 2018; Hermann, 2007), the EU must align its policy goals for smart cities, fundamental rights-driven technology regulation, and technology exports. Otherwise, it maintains the practices of digital colonialism, strengthens non-democratic developments abroad that may lead to globally destabilising effects and makes its own citizens vulnerable to imported technologies that are developed elsewhere with less oversight.
Acknowledgements
The authors are very grateful for the feedback that Astrid Voorwinden, Riikka Koulu, Suvi Sankari and other members of the Legal Tech Lab gave during the research process. The authors also thank the Internet Policy Review editors and peer reviewers for their constructive feedback. All mistakes are the authors’ own.
References
Akbari, A. (2022). Authoritarian smart city: A research agenda. Surveillance & Society, 20(4), 441–449. https://doi.org/10.24908/ss.v20i4.15964
Amnesty International. (2020). Out of control: Failing EU laws for digital surveillance export [Report]. Amnesty International. https://www.amnesty.org/en/documents/eur01/2556/2020/en
Binns, R., & Veale, M. (2021). Is that your final decision? Multi-stage profiling, selective effects, and article 22 of the GDPR [Preprint]. SocArXiv. https://doi.org/10.31235/osf.io/7mq6z
Botero Arcila, B. (2022). Smart city technologies: A political economy introduction to their governance challenges. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford handbook of AI governance (1st ed.). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.013.48
Bowen, G. A. (2006). Grounded theory and sensitizing concepts. International Journal of Qualitative Methods, 5(3), 12–23. https://doi.org/10.1177/160940690600500304
Bradford, A. (2012). The Brussels effect. Northwestern University Law Review, 107(1), 1–68. https://scholarlycommons.law.northwestern.edu/nulr/vol107/iss1/1/
Brauneis, R., & Goodman, E. P. (2017). Algorithmic transparency for the smart city. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3012499
Brown, T. E. (2019). Human rights in the smart city: Regulating emerging technologies in city places. In Regulating new technologies in uncertain times (pp. 47–65). https://doi.org/10.1007/978-94-6265-279-8_4
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html
Business Finland. (n.d.-a). About us. Business Finland. https://www.businessfinland.fi/en/for-finnish-customers/about-us/service-model
Business Finland. (n.d.-b). Strategy. Business Finland. https://www.businessfinland.fi/en/for-finnish-customers/strategy
Buttarelli, G. (2017). Privacy matters: Updating human rights for the digital society. Health and Technology, 7(4), 325–328. https://doi.org/10.1007/s12553-017-0198-y
Bygrave, L. A. (2017). Data protection by design and by default: Deciphering the EU’s legislative requirements. Oslo Law Review, 4(2), 105–120. https://doi.org/10.18261/issn.2387-3299-2017-02-03
Bygrave, L. A. (2020). Article 22 Automated individual decision-making, including profiling. In L. A. Bygrave, The EU General Data Protection Regulation (GDPR). Oxford University Press. https://doi.org/10.1093/oso/9780198826491.003.0055
Cardullo, P., Di Feliciantonio, C., & Kitchin, R. (Eds.). (2019). The right to the smart city. Emerald Publishing Limited. https://doi.org/10.1108/9781787691391
Cardullo, P., & Kitchin, R. (2019). Smart urbanism and smart citizenship: The neoliberal logic of ‘citizen-focused’ smart cities in Europe. Environment and Planning C: Politics and Space, 37(5), 813–830. https://doi.org/10.1177/0263774X18806508
Charter of Fundamental Rights of the European Union, Pub. L. No. C 326/391 (2012).
Coletta, C., & Kitchin, R. (2017). Algorhythmic governance: Regulating the ‘heartbeat’ of a city using the Internet of Things. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717742418
Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336–349. https://doi.org/10.1177/1527476418796632
Council of Europe. (2021). Ministers’ Deputies 1425th meeting. 10.1 Ad hoc Committee on Artificial Intelligence (CAHAI) (CM(2021)173-add; CM Documents). Council of Europe Committee of Ministers. https://rm.coe.int/possible-elements-of-a-legal-framework-on-artificial-intelligence/1680a5ae6b
D’Ignazio, C., & Klein, L. (2020a). The numbers don’t speak for themselves. In Data feminism. https://data-feminism.mitpress.mit.edu/pub/czq9dfs5
D’Ignazio, C., & Klein, L. (2020b). The power chapter. In Data feminism. https://data-feminism.mitpress.mit.edu/pub/vi8obxh7
D’Ignazio, C., & Klein, L. (2020c). Unicorns, Janitors, Ninjas, Wizards, and Rock Stars. In Data feminism. https://data-feminism.mitpress.mit.edu/pub/2wu7aft8
Dirks, S., & Keeling, M. (2009). A vision of smarter cities: How cities can lead the way into a prosperous and sustainable future [Executive Report]. IBM Global Business Services, IBM Institute for Business Value. https://www.ibm.com/downloads/cas/2JYLM4ZA
Doorn, N., Schuurbiers, D., Van De Poel, I., & Gorman, M. E. (Eds.). (2013). Early engagement and new technologies: Opening up the laboratory (Vol. 16). Springer Netherlands. https://doi.org/10.1007/978-94-007-7844-3
Edwards, L. (2016). Privacy, security and data protection in smart cities: A critical EU law perspective. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2711290
Espoo. (2022, April 28). Espoo selected to implement EU Mission on climate-neutral and smart cities together with other pioneers [Press release]. https://www.espoo.fi/en/news/2022/04/espoo-selected-implement-eu-mission-on-climate-neutral-and-smart-cities-together-other-pioneers
Establishing Horizon Europe – the Framework Programme for Research and Innovation, laying down its rules for participation and dissemination, Pub. L. No. Regulation (EU) 2021/695, 32021R0695 (2021). https://eur-lex.europa.eu/eli/reg/2021/695/oj
European Commission. (2021). European Innovation Scoreboard 2021. https://ec.europa.eu/docsroom/documents/45913/attachments/1/translations/en/renditions/native
European Commission. (2022a). Digital Economy and Society Index (DESI) 2022 Finland. https://ec.europa.eu/newsroom/dae/redirection/document/88700
European Commission. (2022b). Horizon Europe Programme Guide Version 2.0. https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/programme-guide_horizon_en.pdf
European Commission. (2022c). Horizon Europe Work Programme 2023-2024. 7. Digital, Industry and Space. European Commission. https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/wp-call/2023-2024/wp-7-digital-industry-and-space_horizon-2023-2024_en.pdf
European Commission. (2022d, February 23). Just and sustainable economy: Commission lays down rules for companies to respect human rights and environment in global value chains [Press release]. European Commission. Press Corner. https://ec.europa.eu/commission/presscorner/detail/en/ip_22_1145
European Data Protection Board. (2019). Data protection by design and by default version 2.0 adopted on (4/2019 Version 2.0; Guidelines on Article 25). European Data Protection Board. https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_201904_dataprotection_by_design_and_by_default_v2.0_en.pdf
European Declaration on Digital Rights and Principles for the Digital Decade. (2022). https://ec.europa.eu/newsroom/dae/redirection/document/92399
European Parliament. (2022, August). EU action plan on democracy and human rights. Legislative Train. https://www.europarl.europa.eu/legislative-train/carriage/eu-action-plan-on-democracy-and-human-rights/report?sid=6101
European Patent Office. (2020). Patents and the Fourth Industrial Revolution: The global technology trends enabling the data-driven economy [Study]. European Patent Office. https://www.epo.org/service-support/publications.html?pubid=222#tab3
Eurostat. (2021a). Annual national accounts [Database]. https://ec.europa.eu/eurostat/data/database
Eurostat. (2021b). Science, technology, digital society [Database]. https://ec.europa.eu/eurostat/data/database
Feldstein, S. (2021). The rise of digital repression: How technology is reshaping power, politics, and resistance. Oxford University Press. https://doi.org/10.1093/oso/9780190057497.001.0001
Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x
Forge, J. (2010). A note on the definition of “dual use”. Science and Engineering Ethics, 16(1), 111–118. https://doi.org/10.1007/s11948-009-9159-9
Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347. https://doi.org/10.1145/230538.230561
Galdon-Clavell, G. (2013). (Not so) smart cities?: The drivers, impact and risks of surveillance-enabled smart environments. Science and Public Policy, 40(6), 717–723. https://doi.org/10.1093/scipol/sct070
Galič, M., & Gellert, R. (2021). Data protection law beyond identifiability? Atmospheric profiles, nudging and the Stratumseind Living Lab. Computer Law & Security Review, 40, 105486. https://doi.org/10.1016/j.clsr.2020.105486
Gandy, O. H., & Nemorin, S. (2019). Toward a political economy of nudge: Smart city variations. Information, Communication & Society, 22(14), 2112–2126. https://doi.org/10.1080/1369118X.2018.1477969
Gellert, R. (2017). Why the GDPR risk-based approach is about compliance risk, and why it’s not a bad thing. Proceedings of the 20th International Legal Informatics Symposium, 527–532.
Giffinger, R., Fertner, C., Kalasek, R., & Pichler Milanović, N. (2007). City-ranking of European medium-sized cities. europeansmartcities. Technische Universität Wien. https://www.smart-cities.eu/download/city_ranking_final.pdf
Glaser, B. (2012). No preconception: The dictum. Grounded Theory Review, 11(2). https://groundedtheoryreview.com/2012/11/28/no-preconception-the-dictum/
Glaser, B. G., & Strauss, A. L. (2017). The discovery of grounded theory: Strategies for qualitative research (1st ed.). Routledge. https://doi.org/10.4324/9780203793206
Goodman, E. P. (2020). Smart city ethics: How “smart” challenges democratic governance. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford Handbook of Ethics of AI (pp. 822–839). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.53
Graham, S. (2002). Bridging urban digital divides? Urban polarisation and information and communications technologies (ICTs). Urban Studies, 39(1), 33–56. https://doi.org/10.1080/00420980220099050
Graham, S., & Marvin, S. (2001). Splintering urbanism: Networked infrastructures, technological mobilities and the urban condition. Routledge. https://doi.org/10.4324/9780203452202
Green, B. (2019). The smart enough city: Putting technology in its place to reclaim our urban future. The MIT Press. https://doi.org/10.7551/mitpress/11555.001.0001
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
Hakkarainen, J. (2021). Naming something collective does not make it so: Algorithmic discrimination and access to justice. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1600
Halaweh, M., Fidler, C., & McRobb, S. (2008). Integrating the grounded theory method and case study research methodology within IS research: A possible ‘road map’. ICIS 2008 Proceedings, 165. http://aisel.aisnet.org/icis2008/165
Hao, K., & Swart, H. (2022). South Africa’s private surveillance machine is fueling a digital apartheid. MIT Technology Review. https://www.technologyreview.com/2022/04/19/1049996/south-africa-ai-surveillance-digital-apartheid
Hermann, C. (2007). Neoliberalism in the European Union. Studies in Political Economy, 79(1), 61–90. https://doi.org/10.1080/19187033.2007.11675092
Hielkema, H., & Hongisto, P. (2013). Developing the Helsinki smart city: The role of competitions for open data applications. Journal of the Knowledge Economy, 4(2), 190–204. https://doi.org/10.1007/s13132-012-0087-6
High-Level Expert Group Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission. https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html
Hildebrandt, M. (2020a). Closure: On ethics, code, and law. In M. Hildebrandt, Law for computer scientists and other folk (1st ed.). Oxford University Press. https://doi.org/10.1093/oso/9780198860877.003.0011
Hildebrandt, M. (2020b). ‘Legal by design’ or ‘Legal protection by design’? In M. Hildebrandt, Law for computer scientists and other folk (1st ed.). Oxford University Press. https://doi.org/10.1093/oso/9780198860877.003.0010
Hildebrandt, M., & Gutwirth, S. (Eds.). (2008). Profiling the European citizen. Springer Netherlands. https://doi.org/10.1007/978-1-4020-6914-7
Hildebrandt, M., & Tielemans, L. (2013). Data protection by design and technology neutral law. Computer Law & Security Review, 29(5), 509–521. https://doi.org/10.1016/j.clsr.2013.07.004
Hollands, R. G. (2008). Will the real smart city please stand up?: Intelligent, progressive or entrepreneurial? City, 12(3), 303–320. https://doi.org/10.1080/13604810802479126
Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687
Institute for Management Development. (2021). Helsinki. Smart city observatory. https://www.imd.org/smart-city-profile/Helsinki/2021
Jaakola, A. (2013). The power of open statistics for advancing the smart city and citizens participation. Proceedings 59th ISI World Statistics Congress, 2711–2716. https://2013.isiproceedings.org/Files/STS078-P5-S.pdf
Janßen, R., Kesler, R., Kummer, M. E., & Waldfogel, J. (2022). GDPR and the lost generation of innovative apps (Working Paper No. 30028). National Bureau of Economic Research. http://www.nber.org/papers/w30028
Joh, E. E. (2019). Policing the smart city. International Journal of Law in Context, 15(2), 177–182. https://doi.org/10.1017/S1744552319000107
Kanetake, M. (2019). The EU’s dual-use export control and human rights risks: The case of cyber surveillance technology. Europe and the World: A Law Review. https://doi.org/10.14324/111.444.ewlj.2019.14
Kempin Reuter, T. (2019). Human rights and the city: Including marginalized communities in urban development and smart cities. Journal of Human Rights, 18(4), 382–402. https://doi.org/10.1080/14754835.2019.1629887
Kempin Reuter, T. (2020). Smart city visions and human rights: Do they go together? (No. 2020–006; Carr Center Discussion Paper Series). Harvard Kennedy School. https://carrcenter.hks.harvard.edu/publications/smart-city-visions-and-human-rights-do-they-go-together
Kitchin, R. (2014). The real-time city? Big data and smart urbanism. GeoJournal, 79(1), 1–14. https://doi.org/10.1007/s10708-013-9516-8
Kitchin, R. (2022). Conceptualising smart cities. Urban Research & Practice, 15(1), 155–159. https://doi.org/10.1080/17535069.2022.2031143
Kiuru, J., & Inkinen, T. (2017). Predicting innovative growth and demand with proximate human capital: A case study of the Helsinki metropolitan area. Cities, 64, 9–17. https://doi.org/10.1016/j.cities.2017.01.005
Komninos, N. (2013). Intelligent cities. Routledge. https://doi.org/10.4324/9780203857748
Koops, B.-J. (2021). The concept of function creep. Law, Innovation and Technology, 13(1), 29–56. https://doi.org/10.1080/17579961.2021.1898299
Koulu, R. (2021). Crafting digital transparency: Implementing legal values into algorithmic design. Critical Analysis of Law, 8(1), 81–100. http://hdl.handle.net/10138/331921
Krivý, M. (2018). Towards a critique of cybernetic urbanism: The smart city and the society of control. Planning Theory, 17(1), 8–30. https://doi.org/10.1177/1473095216645631
Lane, L. (2022). Clarifying human rights standards through artificial intelligence initiatives. International and Comparative Law Quarterly, 71(4), 915–944. https://doi.org/10.1017/S0020589322000380
Larsson, S. (2019). The socio-legal relevance of artificial intelligence. Droit et Société, 103(3), 573–593. https://doi.org/10.3917/drs1.103.0573
Laying down common provisions on the European Regional Development Fund, the European Social Fund Plus, the Cohesion Fund, the Just Transition Fund and the European Maritime, Fisheries and Aquaculture Fund and financial rules for those and for the Asylum, Migration and Integration Fund, the Internal Security Fund and the Instrument for Financial Support for Border Management and Visa Policy, Pub. L. No. Regulation (EU) 2021/1060, 32021R1060 (2021). http://data.europa.eu/eli/reg/2021/1060/oj
Lehtiniemi, T., & Ruckenstein, M. (2019). The social imaginaries of data activism. Big Data & Society, 6(1), 205395171882114. https://doi.org/10.1177/2053951718821146
Luusua, A., Ylipulli, J., Foth, M., & Aurigi, A. (2022). Urban AI: Understanding the emerging role of artificial intelligence in smart cities. AI & SOCIETY. https://doi.org/10.1007/s00146-022-01537-5
Madiega, T., & Pol, A. (2022). Artificial intelligence act and regulatory sandboxes (Briefing PE 733.544). European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733544/EPRS_BRI(2022)733544_EN.pdf
Madison, M. J., Frischmann, B. M., & Strandburg, K. J. (2009). Constructing commons in the cultural environment. Cornell Law Review, 95, 657–710. https://scholarship.law.cornell.edu/clr/vol95/iss4/10/
Makkonen, T., & Inkinen, T. (2015). Geographical and temporal variation of regional development and innovation in Finland. Fennia – International Journal of Geography, 193(1), 134–147. https://doi.org/10.11143/46476
Mantelero, A. (2018). AI and big data: A blueprint for a human rights, social and ethical impact assessment. Computer Law & Security Review, 34(4), 754–772. https://doi.org/10.1016/j.clsr.2018.05.017
Manville, C., Cochrane, G., Cave, J., Millard, J., Kevin, J., Kåre, R., Liebe, A., Wissner, M., Massink, R., & Kotterink, B. (2014). Mapping smart cities in the EU [Study]. European Parliament. https://www.europarl.europa.eu/thinktank/en/document/IPOL-ITRE_ET(2014)507480
Metzinger, T. (2019, April 8). EU guidelines: Ethics washing made in Europe. https://www.tagesspiegel.de/politik/ethics-washing-made-in-europe-5937028.html
Mitchell, S., Villa, N., Stewart-Weeks, M., & Lange, A. (2013). Connecting people, process, data, and things to improve the ‘livability’ of cities and communities (Point of View). Cisco. https://www.cisco.com/c/dam/en_us/about/ac79/docs/ps/motm/IoE-Smart-City_PoV.pdf
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4
Monahan, T. (2007). “War rooms” of the street: Surveillance practices in transportation control centers. The Communication Review, 10(4), 367–389. https://doi.org/10.1080/10714420701715456
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism (First edition). PublicAffairs.
Munn, L. (2022). The uselessness of AI ethics. AI and Ethics. https://doi.org/10.1007/s43681-022-00209-w
Mustonen, V., Koponen, J., & Spilling, K. (2014). Älykäs kaupunki: Översikt av smarta tjänster och möjligheter [Smart City: Overview of smart services and possibilities] (No. 12/2014). Ministry of Transport and Communications. http://urn.fi/URN:ISBN:978-952-243-397-8
Najibi, A. (2020, October 24). Racial discrimination in face recognition technology. Science in the News. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
Nielsen, L. B. (2010). The need for multi-method approaches in empirical legal research. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199542475.013.0040
Nissenbaum, H. (1998). Values in the design of computer systems. Computers in Society, 38–39. https://doi.org/10.1145/277351.277359
Nissenbaum, H. (2005). Values in technical design. In C. Mitcham (Ed.), Encyclopedia of Science, Technology and Ethics. Macmillan. https://nissenbaum.tech.cornell.edu/papers/valuesintechnicaldesign.pdf
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Office of the United Nations High Commissioner for Human Rights (OHCHR). (2011). Guiding principles on business and human rights. Implementing the United Nations “Protect, Respect and Remedy” framework (Report HR/PUB/11/04). Office of the United Nations High Commissioner for Human Rights. https://www.ohchr.org/sites/default/files/documents/publications/guidingprinciplesbusinesshr_en.pdf
Ojanen, A., Sahlgren, O., Vaiste, J., Björk, A., Mikkonen, J., Kimppa, K., Laitinen, A., & Oljakka, N. (2022). Algoritminen syrjintä ja yhdenvertaisuuden edistäminen: Arviointikehikko syrjimättömälle tekoälylle (2022:54; Council of State Report; Research Release Series). Finnish Government. https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/164290/2022_VNTEAS_54.pdf?sequence=4
On the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), Pub. L. No. Regulation (EU) 2016/679, 32016R0679 (2016). https://eur-lex.europa.eu/eli/reg/2016/679/oj
Oostveen, M., & Irion, K. (2016). The golden age of personal data: How to regulate an enabling fundamental right? (Research Paper No. 2016–68). Amsterdam Law School. https://ssrn.com/abstract=2885701
Organisation for Economic Co-operation and Development. (n.d.). Trust in government [Data]. Organisation for Economic Co-operation and Development. https://doi.org/10.1787/1de9675e-en
Organisation for Economic Co-operation and Development. (2018). OECD due diligence guidance for responsible business conduct [Report]. Organisation for Economic Co-operation and Development. https://www.oecd.org/investment/due-diligence-guidance-for-responsible-business-conduct.htm
Paskaleva, K. A. (2011). The smart city: A nexus for open innovation? Intelligent Buildings International, 3(3), 153–171. https://doi.org/10.1080/17508975.2011.586672
Penney, J. (2021). Understanding chilling effects. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3855619
Peukert, C., Bechtold, S., Batikas, M., & Kretschmer, T. (2020). European privacy law and global markets for data. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3560392
Piekkola, H. (2018). Internationalization via export growth and specialization in Finnish regions. Cogent Economics & Finance, 6(1), 1514574. https://doi.org/10.1080/23322039.2018.1514574
Powles, J., & Nissenbaum, H. (2018, December 7). The seductive diversion of ‘solving’ bias in artificial intelligence [Medium post]. OneZero. https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53
Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on Corporate Sustainability Due Diligence and amending Directive (EU) 2019/1937, COM(2022) 71 final 2022/0051(COD), European Commission, 52022PC0071 (2022). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0071
Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL Laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative Acts, COM(2021) 206 final 2021/0106(COD), European Commission, 52021PC0206 (2021). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
Ranchordas, S. (2021a). Experimental regulations and regulatory sandboxes: Law without order? SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3934075
Ranchordas, S. (2021b). Cities of god. Verfassungsblog: On matters constitutional. https://doi.org/10.17176/20211217-172800-0
Ringel, M. (2021). Smart city design differences: Insights from decision-makers in Germany and the Middle East/North-Africa region. Sustainability, 13(4), 2143. https://doi.org/10.3390/su13042143
Sadowski, J. (2020). Too smart: How digital capitalism is extracting data, controlling our lives, and taking over the world. The MIT Press.
Sætra, H. S., & Danaher, J. (2022). To each technology its own ethics: The problem of ethical proliferation. Philosophy & Technology, 35(4), 93. https://doi.org/10.1007/s13347-022-00591-7
Schaffers, H., Komninos, N., & Pallot, M. (2012). Smart cities as innovation ecosystems sustained by the future Internet [White Paper]. Fireball project. https://www.komninos.eu/wp-content/uploads/2014/01/2012-Smart-Cities-FIREBALL-White-Paper.pdf
Schlehahn, E., Aichroth, P., Mann, S., Schreiner, R., Lang, U., Shepherd, I. D. H., & Wong, B. L. W. (2015). Benefits and pitfalls of predictive policing. 2015 European Intelligence and Security Informatics Conference, 145–148. https://doi.org/10.1109/EISIC.2015.29
Seoane, M. V. (2020). Normative market Europe?: The contested governance of cyber-surveillance technologies. In A. Calcara, R. Csernatoni, & C. Lavallée (Eds.), Emerging security technologies and EU governance (pp. 88–101). https://doi.org/10.4324/9780429351846-6
Setting up a Union regime for the control of exports, brokering, technical assistance, transit and transfer of dual-use items (recast) (Dual-Use Regulation), Pub. L. No. Regulation (EU) 2021/821, 32021R0821 (2021). http://data.europa.eu/eli/reg/2021/821/oj
Sloane, M. (2022). To make AI fair, here’s what we must learn to do. Nature, 605(7908), 9–9. https://doi.org/10.1038/d41586-022-01202-3
Sloane, M., & Moss, E. (2019). AI’s social sciences deficit. Nature Machine Intelligence, 1(8), 330–331. https://doi.org/10.1038/s42256-019-0084-6
Smuha, N. A. (2019). The EU approach to ethics guidelines for trustworthy artificial intelligence. Computer Law Review International, 20(4), 97–106. https://doi.org/10.9785/cri-2019-200402
Smuha, N. A. (2021). Beyond a human rights-based approach to AI governance: Promise, pitfalls, plea. Philosophy & Technology, 34(S1), 91–104. https://doi.org/10.1007/s13347-020-00403-w
Söderström, O., Paasche, T., & Klauser, F. (2014). Smart cities as corporate storytelling. City, 18(3), 307–320. https://doi.org/10.1080/13604813.2014.906716
Strauss, A., & Corbin, J. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory. Sage.
Tekes. (2015). Fiksu Kalasatama -hankevalmistelu [Smart Kalasatama project preparation] (Report Final, Annex 1). Tekes. https://docplayer.fi/3757182-Tekes-loppuraportti-fiksu-kalasatama-hankevalmistelu-1-9-2013-31-12-2014-liite-1-15-4-2015.html
Tekes. (2018). Fiksut askeleet kohti älykkäiden kaupunkien globaaleja ratkaisuja [Smart steps towards global solutions for smart cities] (Draft Final; Smart cities programme). Tekes. https://www.businessfinland.fi/globalassets/vanha-old-tekes-global/ohjelmat-ja-palvelut/ohjelmat/fiksu-kaupunki/fiksu-kaupunki
Transparency International. (2021). Corruption Perceptions Index 2021. https://www.transparency.org/en/cpi/2021
Tupasela, A., Snell, K., & Tarkkala, H. (2020). The Nordic data imaginary. Big Data & Society, 7(1). https://doi.org/10.1177/2053951720907107
Työ- ja elinkeinoministeriö / Arbets- och näringsministeriet. (2017). Kaupunkien uusi rooli innovaatioiden edistämisessä [The new role of cities in promoting innovation] (Publication No. 40/2017). Työ- ja elinkeinoministeriö [Ministry of Economic Affairs and Employment, Finland]. https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/160339/TEMjul_40_2017_verkkojulkaisu.pdf
Vakkuri, V., Kemell, K.-K., Jantunen, M., & Abrahamsson, P. (2020). “This is just a prototype”: How ethics are ignored in software startup-like environments. In V. Stray, R. Hoda, M. Paasivaara, & P. Kruchten (Eds.), Agile processes in software engineering and extreme programming (Vol. 383, pp. 195–210). Springer International Publishing. https://doi.org/10.1007/978-3-030-49392-9_13
Van De Poel, I. (2013). Translating values into design requirements. In D. P. Michelfelder, N. McCarthy, & D. E. Goldberg (Eds.), Philosophy and engineering: Reflections on practice, principles and process (Vol. 15, pp. 253–266). Springer Netherlands. https://doi.org/10.1007/978-94-007-7762-0_20
Van Den Hoven, J., & Weckert, J. (Eds.). (2008). Information technology and moral philosophy (1st ed.). Cambridge University Press. https://doi.org/10.1017/CBO9780511498725
Vanolo, A. (2014). Smartmentality: The smart city as disciplinary strategy. Urban Studies, 51(5), 883–898. https://doi.org/10.1177/0042098013494427
von Grafenstein, M. (2020). How to build data-driven innovation projects at large with data protection by design: A scientific-legal data protection impact assessment with respect to a hypothetical smart city scenario in Berlin. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3606140
von Grafenstein, M., Jakobi, T., & Stevens, G. (2022). Effective data protection by design through interdisciplinary research methods: The example of effective purpose specification by applying user-Centred UX-design methods. Computer Law & Security Review, 46, 105722. https://doi.org/10.1016/j.clsr.2022.105722
Wagner, B. (2012). Exporting censorship and surveillance technology [Report]. Humanist Institute for Co-operation with Developing Countries (Hivos). https://www.academia.edu/2133607/Exporting_Censorship_and_Surveillance_Technology
Washburn, D., & Sindhu, U. (2010). Helping CIOs understand “smart city” initiatives [Report]. Forrester Research, Inc. https://www.forrester.com/report/Helping-CIOs-Understand-SmartCity-Initiatives/RES55590
Whang, C. (2021). Trade and emerging technologies: A comparative analysis of the United States and the European Union dual-use export control regulations. Security and Human Rights, 31(1–4), 11–34. https://doi.org/10.1163/18750230-31010007
Williams, R. (2021). Whose streets? Our streets? (Technology and Public Purpose Project) [Report]. Harvard Kennedy School Belfer Center for Science and International Affairs. https://www.belfercenter.org/sites/default/files/2021-08/WhoseStreets.pdf
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652
Wisman, T. (2012). Purpose and function creep by design: Transforming the face of surveillance through the internet of things (SSRN Scholarly Paper No. 2486441). https://papers.ssrn.com/abstract=2486441
World Economic Forum. (2015). The global competitiveness report 2015-2016: Full data edition [Report]. https://www.weforum.org/reports/global-competitiveness-report-2015
World Economic Forum. (2020). The global competitiveness report: How countries are performing on the road to recovery [Report]. https://www.weforum.org/reports/the-global-competitiveness-report-2020/competitiveness-rankings/
World Justice Project. (2020). World justice rule of law index. https://worldjusticeproject.org/our-work/research-and-data/wjp-rule-law-index-2020
Yeung, K., Howes, A., & Pogrebna, G. (2019). AI governance by human rights-centred design, deliberation and oversight: An end to ethics washing. SSRN. https://doi.org/10.2139/ssrn.3435011
Yin, R. (2014). Case study research: Design and methods (5th ed.). Sage.
Appendix: Interview themes and questions
1. Smart Cities and smart city tech
- What does Smart City mean to you?
- What is your exposure to Smart Cities through your work?
2. Governance of procuring, piloting or getting involved with a smart city
- What kind of principles and rules guide you in selecting whether and how you support a smart city project?
3. Managing risks and opportunities
- Do you see any problems or risks with the smart city project(s) you are involved with or more generally?
- What means of managing the possible risks are you aware of?
Please feel free to inform us, either in advance or during the interview, if you are uncomfortable answering any of the questions.
Interview reference (Expert number; E#) | Role | Funding |
---|---|---|
E1 | Company project director | National funding |
E2 | Company project manager #1 | EU funding |
E3 | Company project manager #2 | City funding |
E4 | Company project manager #3 | National funding |
E5 | Company project manager #4 | EU funding |
E6 | National funding agency manager #1 | National funding |
E7 | National funding agency manager #2 | National funding |
E8 | City data manager #1 | City funding |
E9 | City data manager #2 | City funding |
E10 | City project manager #1 | EU funding |
E11 | City project manager #2 | City funding |
E12 | City procurement expert | City funding |
E13 | City project coordinator | EU funding |
E14 | Regional funding agency specialist | EU funding |
E15 | Project researcher (non-HMA City; group interview) | EU funding |
E16 | Project specialist #1 (non-HMA City; group interview) | EU funding |
E17 | Project specialist #2 (non-HMA City; group interview) | EU funding |
E18 | Company director | Diverse instruments |
E19 | City project manager #3 | Diverse instruments |
E20 | Non-HMA city, director | Diverse instruments |
E21 | Public agency, AI expert | National funding and EU funding |
The sample discusses initiatives funded by EU sources, including the European Social Fund; the European Regional Development Fund, including its Urban Innovative Actions sub-fund; Horizon 2020 and Horizon Europe; the European Commission’s Digital Europe Programme; and the Recovery and Resilience Facility. We also covered national funding governed by the Finnish Ministry of Economic Affairs and Employment or other ministries, including funding distributed through Business Finland, the government organisation for innovation funding and for trade, travel and investment promotion. Interviewees also discussed R&D financed through municipalities’ own budgets, sometimes governed or executed by city-owned companies.
We conducted interviews on large initiatives, such as multistakeholder R&D collaborations and ecosystem-building initiatives, including multi-project programmes, as well as on smaller projects, such as testbed and piloting initiatives and cities’ in-house development, complemented with interviews on procurement and on projects facilitating it.
Expert ↓ \ Quote → | Q1 | Q2 | Q3 | Q4 | Q5 |
---|---|---|---|---|---|
E1 | Legal-wise, the GDPR is a huge risk, with the dozens and dozens of cameras that we have. | We spend time solving these GDPR issues and kind of know how we should do it, comply with the legislation. Fine, we do it now. But we know that it will be a risk for new partners to come and try to get it because it is such a complicated and time-consuming procedure. | It is a huge risk and if the EU is this strict, the best innovations go elsewhere, because you cannot give easy access for companies to pilot these things. Whereas if you go to South America, Asia no this kind of GDPR legislation exists. So it’s kind of we are kind of hindered a lot. | We would have needed to have several permits to change [the smart city infrastructure] we are developing [...] we need to take so many aspects into account with respect to future planning, because of the complexity of the solution [...] this is a challenge because no one person can handle this. And in city organisations there are usually many departments. So there is a risk of missing out something and block some things we are able to do. | Cities are usually not the investors to infrastructure in Finland, [...] if we go to Dubai or other kind of countries where the cities are a bit maybe more rich in finance, they can do the investments, but here there needs to be an external investor and then we can sell it as a service that the city can buy [...] |
E2 | City civil servants end up volunteering to give feedback on the pilots because they cannot be hired or they cannot be remunerated for it. However, those who were involved were committed until the end of the project, although it required extra work from them. | It's the business of companies, but we definitely are trying to confirm that AI is part of the solution. We have experts in AI to evaluate [...] to be sure that they are really using AI in the solution [...] We also have city experts who were considering the city’s perspective on how it would suit what they need. | |||
E3 | In my experience, those companies that have not thought through the data protection questions, are generally not qualifying in other funding criteria as well. | I think in Helsinki we are quite advanced in this living lab approach, actually involving residents in an open manner [in our pilots]. That’s of course a question of Nordic societies in general, that we have quite open democracies and quite a lot of trust towards city officials etc. | Quite many companies have given us very good feedback that the actual pilots have helped them in product development and have helped them to commercialise products, which of course means that hopefully they will be a successful business and then try to bring tax euros to Helsinki. | ||
E4 | We need to tackle legal aspects such as the GDPR at some point, before reaching market availability [our solutions]. However, we are in such an early phase that we do not know what the solutions might be. | I would say that the companies who are providing surveillance technologies, they are quite aware of the [legal] limitations of what’s possible and what’s not. | Most of the projects are driven by Finnish companies, universities and research institutes - we don't have that much of an understanding or even opportunities to understand what is happening on the smart city domain in China, or US or South America or Africa, which might lead us to not understanding the global definition or a smart city. | ||
E5 | [The GDPR was] just recently set up in Europe [...] and [the EU regulators] are making the situation very unclear because people are not really understanding what it means on many practical levels. | I wish that there would be some kind of leniency in the policies or laws regarding piloting, testing and experimentation. So for example, that we could be doing something for six months, that would not be okay to do in the long run, in the production version, but while we're testing and then be allowed. | When it comes to development of services that are based on personal data, it’s much easier to develop and test these types of solutions in an environment like, for example, China, where the data is not controlled like the GDPR regulations now. | If they are collaborating with City A [...], they should be for example, prepared to offer this service free of charge for a couple of years to City A, if they are their partners developing it, which means that they should then aim for Stockholm or Berlin, or someone else as their first paying customer, rather than switch their partners to customers. | |
E6 | [Social sciences] are not that considered in the research projects, which is a shame. But I think it's more kind of the companies’ fault and not academia's fault that it's not there. We are too busy to, you know, get the cash flow starting, that we don't get stuck in the minor details like legal aspects which might end up backfiring. | ||||
E7 | For example in the case of Saudi Arabia, we have currently around 70 Finnish companies involved. | Finland has been a pioneer in open smart city data and also a forerunner in setting up the governance and legislation for data, including requirements of consent from individuals, which is relevant for export. The exchange of experiences with the authorities of potential countries of export is also of relevance, also for the Finnish brand of trustworthiness. | |||
E8 | If we make it completely anonymous, it’s completely useless, there is no data left. But if we keep certain aspects of data, then it becomes [increasingly] likely that someone with access to anonymized data will identify people. And erring on the side of caution, what we do, leads to risks of underutilizing data that we have. | Personally, [participatory development] feels a bit frustrating as often it feels like doing the right thing, or doing the ethically sound thing, is reducible to consulting the residents. | Replicability is something that is kind of a core principle, we do not do anything just once, or tailor it for a specific purpose, also by using open source and making it available to others to reuse and replicate. [...] The goal is always scalability. | ||
E9 | The city employees have learned to do things in a certain way [...] and then there is a new thing like the use of data and you need to do a DPIA, and this is yet another tool that requires you to sit down and think about the situations. | A citizen may be benefiting from the smart city technologies, but it can be that they do not notice it, as many things in smart cities are under the hood, citizens benefit from them without anyone actually seeing them. | The city has a huge amount of different rules and sometimes I feel that we know them better and sometimes worse. | ||
E10 | |||||
E11 | |||||
E12 | All risks are subordinate to lack of proper process, [...], if you don’t have proper processes, for example, legal compliance cannot be fulfilled. You will always miss something. | There are like over 10 relevant persons participating [in the ICT procurement process] [...] we have three lawyers involved in the procurement subprocesses to ensure compliance. | |||
E13 | The hardest part [of GDPR compliance] was to communicate with different city units, because they have their own data management things. It was not centralised. So you had to have good contacts with the relevant department. | ||||
E14 | |||||
E15 | [...] the logic is that the city organisation is the main partner. So with the money with the EU funding, and with the project plans, and all that, those are incorporated in the city, already. And so it's not just like, let's do some research here, and then nothing changes after three years. So we've had the chance to really do something within the city and maybe bring something new to the existing practices. | I think we have presented [our] solution to other Finnish cities that might have some kind of similar structure and cooperation between [governmental unit] and the city. So the legal framework would be the same. So it’s maybe easier to bring [our] solution first to another Finnish city and then from there, somewhere else. | Like how to make the event experience more smooth and safe at the same time. Because we've had the idea that we don't want to bring too many security measures visible so that people would feel scared, like, is there something to be afraid of now, when I'm at the event, so we want everything to be as smooth as possible, maybe kind of invisible too. | ||
E16 | During the project, we are really focused on minimising risks, not to be overly invasive. But [we are] in the middle of the COVID situation. And as an event project without events, we don't have a lot of data to understand [how people feel] during the event. | The technology can be used for discrimination or to help the people, and this completely depends on the power; who is using it and who is selling it and what they will do with this technology. | |||
E17 | The technology we use is designed to not detect any biometric information. | [...] we've gotten much interest, we went to quite some different expos with the project, but we haven't really done anything to scale it internationally, [...] | |||
E18 | |||||
E19 | |||||
E20 | [...] maybe that was the one risk or challenge that when we talked about the smart city things, and things were kind of on a technology side but also on the competence of the legal side, they had to also kind of do the investigation and learn how to handle different things. So it was sometimes time-consuming that we were waiting for somebody to do their research to understand how to handle different situations. | So strategy is kind of giving the direction that then we had the shorter term execution plan | Here we have the kind of idea that the city could already offer platforms for certain purposes. First of all, we have the open data so that the people can utilise that as much as possible [...] and then [...] companies can utilise that data for their purposes, but then also to have the common platform where people can kind of have the different services. [...] And that’s something which lead to the city establishing the company [...]. And now they are selling these solutions to the other cities also. | ||
E21 | Development of AI and data-driven tech led to the awareness about their risks. Also, the maturing of the tech helps us to understand the possible risks. One needs also to experiment to become aware of the materialization of such risks. | When you are using solutions that collect all your information, it can be used either for you or against you, depending on the use case. So, in that sense, there is a very thin line between these technological solutions. Traditionally, the dual-use technologies have been referring to technologies that can be used also for military purposes, but in this new world, I think that the dual-use technologies are covering a much wider area, and it’s not only kind of military use, but let’s say, one could use for good and for bad. |
Footnotes
1. In this section, we refer to the corresponding fundamental rights of the CFREU, also with respect to Anglo-American authors.