Governing phygital spaces: Human rights by design meets speculative design
Abstract
Smart glasses and AI-powered “phygital spaces” are transforming how people perceive, navigate, and interact with the world. Yet existing regulatory frameworks, such as Privacy by design and Human Rights by design, focus narrowly on data protection and overlook the relational and collective dynamics these technologies disrupt. This article introduces the ‘ethics of interactions’ as a complementary framework that situates regulation within the lived realities of AI-mediated environments. Our interdisciplinary team, combining law, speculative design, and human–computer interaction, developed a year-long methodology spanning speculative storytelling, legal mapping, field diaries, and role-play-based workshops in Germany and Israel. These exercises revealed how smart glasses might reshape trust, consent, and perception through five recurring modes of interaction: person-to-person, person-to-space, person-to-reality, person-to-machine, and person-to-platform. Workshop role-play surfaced vulnerabilities that conventional legal analysis misses, from peer-to-peer surveillance and emotional inference to corporate, platform, and government control of augmented realities. Building on these findings, we propose regulatory recommendations that combine dual governance models, interaction-sensitive safeguards, and investment in digital literacy for policymakers. By embedding speculative inquiry into policy design, this study closes the loop between abstract principles and lived dilemmas, offering both conceptual and practical pathways for governing phygital spaces.
This paper is part of The craft of interdisciplinary research and methods in public interest cybersecurity, privacy, and digital rights governance, a special issue of Internet Policy Review, guest-edited by Adam Molnar, Diarmaid Harkin, and Urs Hengartner.
Introduction
The rise of smart glasses and AI-powered spatial computing marks more than a technological upgrade. It represents a fundamental shift in how people experience and interact with their surroundings. Unlike traditional screen-based interfaces, these devices embed digital layers into everyday environments, blurring the boundaries between physical and digital space. We prefer to define this shift as phygital spaces, a term that highlights the socio-cultural and ethical dimensions of this technology rather than its computational infrastructure.
As devices like Meta Ray-Ban smart glasses become more prevalent, critical questions arise regarding privacy, autonomy, dignity, and social norms. These technologies collect vast amounts of biometric and behavioural data and allow users to filter, manipulate, or erase elements of their physical environment. A striking example occurred when two Harvard students used smart glasses with facial recognition to retrieve personal information about metro passengers, inducing a false sense of familiarity among strangers (Binder, 2024; Choo, 2024). Such incidents underscore the urgent need for ethical and regulatory frameworks to govern peer-to-peer surveillance, data privacy, and augmented social interactions.
This paper explores how interdisciplinary collaboration between law, design, and human-computer interaction can advance new frameworks for governing phygital technologies. Our central concern was not only the gap between abstract “by design” principles and their implementation, but also how scholars from diverse disciplines, specifically policy experts and designers, can develop shared methods that transform these principles into actionable tools. Over the course of a year, our team experimented with speculative scenarios, legal mapping, personal field diaries, and participatory workshops. These stages produced insights into smart glasses as mediators of social life, while also serving as sites of negotiation and friction across different epistemic traditions.
What follows is therefore not only an outward-looking policy analysis, but also an inward-looking account of how the integration of speculative design and legal analysis produced the framework we call the ethics of interactions. The article proceeds as follows: Section 2 traces the evolution of the “by design” framework and its limitations for phygital technologies. Section 3 details our interdisciplinary methodology. Section 4 develops the concept of the ethics of interactions and presents key observations from the workshops, while Section 5 translates these insights into regulatory recommendations. We conclude by reflecting on the methodological contribution itself and its implications for future research and policy design.
This contribution is situated within the broader theme of the special issue on interdisciplinary approaches to digital policy. By combining legal analysis, speculative design, and participatory inquiry, it illustrates how crossing disciplinary boundaries can generate not only new regulatory insights but also new conceptual vocabularies for governing and assessing possible societal impact of emerging technologies.
Section 1. Smart glasses: A gateway to phygital interaction
Smart glasses represent a paradigm shift in human-computer interaction, moving from flat, two-dimensional interfaces toward spatially intelligent computing that integrates AI and augmented reality (AR) into a wearable format. Unlike VR headsets, which immerse users in entirely artificial environments, or robots and tactile systems that rely on minimal digital overlay, smart glasses uniquely bridge the digital and physical realms, in what can be described as a "phygital revolution." Equipped with cameras, microphones, sensors, and heads-up displays, they enable hands-free, real-time integration of digital content into physical environments (Stein, 2024).
As these technologies advance, the convergence of AI, AR, and wearable computing is likely to redefine how individuals interact with both digital services and public spaces and possibly with both human and non-human entities. Already, gesture recognition and eye-tracking are replacing touch and voice commands, creating more intuitive multimodal systems (Cronin & Scoble, 2020). Continuous AI assistance is becoming a defining feature of second-generation devices: by autonomously interpreting surroundings and providing contextual insights, they anticipate user needs, synchronise preferences across platforms, and personalise experiences with growing precision (Okeke, 2024). This evolution marks a transition from traditional voice or text-based interfaces toward visual processing and spatial awareness, where AI interprets and responds to stimuli in real time (Williams, 2025).
The integration of these capabilities carries profound implications. Persistent AI agents, biometric sensing, and real-time contextual analysis raise pressing questions not only about autonomy and cognitive augmentation but also about privacy in public spaces, a long-standing challenge in debates around CCTV, live facial recognition, and geolocation tracking. Smart glasses are not merely an extension of AR; they restructure how digital and physical realities converge, creating unprecedented opportunities for surveillance, manipulation, and control. Early incidents, such as the use of facial recognition-equipped glasses in public transit, underscore the urgency of proactive governance to ensure alignment with fundamental rights and societal values (Hackl, 2023).
Section 2. The “by design” legal concept: From Privacy by design to speculative design and back
The “by design” legal framework was developed to embed fundamental rights such as privacy, autonomy, dignity, and fairness into technology architecture. Initially conceptualised as Privacy by design (PbD), this principle gradually expanded into broader paradigms such as Human Rights by design (HRbD), Ethics by design (EbD), and Responsible AI, each emphasising proactive governance through safeguards built into technologies rather than relying on post-deployment enforcement.
2.1. Evolution of the “by design” concept
The origins of PbD can be traced to the Fair Information Practice Principles (FIPPs) introduced in 1973, which emphasised transparency, access, consent, data minimisation, and security. Building on these principles, Ann Cavoukian formalised PbD in the 1990s, shifting the focus from compliance to proactive imperatives such as privacy by default and end-to-end security (Waldman, 2020, pp. 148–155). What began as soft law evolved into binding regulation: the Federal Trade Commission adopted PbD (FTC, 2012), and Article 25 of the GDPR codified it as a preventative governance model (Hartzog, 2018; Waldman, 2018, pp. 662–663; Dougherty, 2020, pp. 644–645; Suzor et al., 2018).
As AI technologies spread, policymakers recognised that privacy alone was insufficient. HRbD added principles of participation, accountability, and non-discrimination (Yeung et al., 2020); EbD emphasised fairness and transparency (Iphofen & Kritikos, 2021; European Commission, 2021); and Responsible AI underscored human oversight and algorithmic fairness (Trocin et al., 2023, pp. 2139–2157; Cheng et al., 2021; Aizenberg & van den Hoven, 2022; Fjeld et al., 2020; Teo, 2023). Scholars have also proposed integrating “Friction by design” as a complementary principle, introducing deliberate ‘speed bumps’ – such as waiting periods, multi-step confirmations, or consent prompts – into AI systems to slow down high-risk decisions and create space for reflection, accountability, and informed user participation (Frischmann & Benesch, 2023; Ohm & Frankle, 2018). These approaches are reflected in formal measures: the EU’s AI Act, part of the broader Digital Legislation Package, mandates human oversight and transparency in high-risk AI systems, including explicit requirements for documentation, “quality management systems,” and oversight mechanisms, while in the United States similar commitments appear in the Executive Order on the “Safe, Secure, and Trustworthy Development and Use of AI” (Office of Science and Technology Policy, 2021; EO 14110, 2023; Cahane & Shwartz Altshuler, 2023).
2.2. Challenges in implementation
Despite their prominence, these frameworks often lack specificity. Compliance is difficult to standardise, leaving companies uncertain about obligations and individuals without clear remedies (Waldman, 2020, pp. 148–155). Developers frequently reduce “by design” to technical safeguards – encryption, anonymisation, or consent prompts – while ignoring broader ethical and social dimensions, producing fragmented or superficial protections (Waldman, 2018, pp. 662–663, 681–685). Proposed improvements include automated data expiration (Rubinstein, 2011), data minimisation by default (Gürses et al., 2011), and embedding privacy governance into corporate structures (Bamberger & Mulligan, 2015, pp. 76–85; Hartzog, 2018, p. 54). Yet these remain reactive and ill-suited for the socio-cultural risks posed by spatial computing and smart glasses.
2.3. Regulatory gaps for smart glasses
The EU’s framework is comprehensive yet fragmented. GDPR strictly regulates biometric data, the DSA addresses platform accountability, and the AI Act limits biometric identification systems. Still, blind spots persist: smart glasses not only process data but mediate relationships and reshape environments. Real-time biometric and emotional profiling – predicting behaviours or influencing decisions – remains largely unregulated (World Health Organization, 2024; Susser et al., 2019; Cohen, 2023).
2.4. Beyond privacy: Social and ethical impacts
Existing frameworks emphasise individual rights but overlook the interpersonal and collective implications of phygital interactions. The longstanding “privacy in public” dilemma (CCTV, live facial recognition) resurfaces here: not only states and corporations, but also individuals can now intrude upon one another. The widely discussed case of Harvard students using facial-recognition-equipped smart glasses to identify metro passengers illustrates how these devices challenge norms of consent and perception (Binder, 2024; Choo, 2024). Beyond privacy, smart glasses allow users to selectively filter or cover aspects of their surroundings, destabilising shared expectations of public space and raising new vulnerabilities regarding dignity and the right to be seen.
Taken together, the evolution and limitations of “by design” frameworks reveal both their promise and their insufficiency for governing phygital spaces. To move from abstract principle to actionable guidance, we turned to an interdisciplinary methodology, combining law, speculative design, and human-computer interaction, which we outline in the next section.
Section 3. Methodological approach: Bridging law and design through speculative inquiry
This research aims to translate the abstract legal concept of “by design” into a practical framework for policymakers and designers, ensuring that fundamental human rights and core societal values are embedded in technological development. Rather than treating legal principles as static regulatory requirements, our approach establishes a feedback loop between legal theory and design practice, enabling a dynamic interaction between regulatory imperatives and tangible design applications.
To achieve this, our interdisciplinary team —spanning Germany and Israel— developed a methodology that integrates insights from EU policy, law, and immersive technology design with speculative and scenario-based inquiry (Bendor, 2021; Auger, 2013; Dunne & Raby, 2013). The aim was not only to analyse existing legal frameworks but to challenge their assumptions by engaging stakeholders in future-oriented narratives and exercises.
The methodology unfolded over a year-long research process, structured into distinct phases that systematically explored the regulatory challenges posed by smart glasses and phygital public spaces.
3.1. Setting up the collaboration: Negotiating languages and disciplines
Our methodological process began not with a shared framework, but with the recognition of significant cultural and disciplinary gaps. Working across Germany and Israel, and across law, policy, design, and media studies, the team quickly realised that even seemingly basic concepts such as “autonomy” or “accountability” carried different connotations in different epistemic traditions. Early conversations revealed both tension and opportunity: what appeared as ambiguity from one perspective often functioned as generative openness from another.
A pivotal moment occurred when one of the design collaborators insisted: “give me a table of examples.” This moment underscored the asymmetry between legal abstraction and design practice, but it also created a new medium for translation. The designers asked the legal scholars for concrete illustrations of how abstract provisions intersect with everyday life, which led to the creation of info-vis examples bridging law and practice. In parallel, the legal team engaged enthusiastically with the near-future speculative narratives that dramatised regulatory dilemmas, illustrating how values like dignity or agency might be tested in day-to-day phygital encounters (image available at https://uclab.fh-potsdam.de/ippso/_astro/translationprocess_Z2hBKG0.avif).
This reciprocal translation gradually evolved into a shared workflow that unfolded in three steps. First, the legal and design perspectives met in a speculative vignette. Tehilla drafted a short story set in a shopping mall where smart glasses are in daily use (see Appendix 1). On its basis, Rachel created a structured table mapping the story’s events to relevant current legal and regulatory principles (see Appendix 2). This exercise showed that law could be made legible for designers and that speculative design tools could, in turn, expose hidden tensions in legal frameworks. What surprised us most at this stage was how quickly legal principles lost their abstraction once placed within speculative narratives: concepts like dignity or fairness became visceral once attached to faces, voices, and gestures in near-future public spaces.
Second, the process was inverted: Michaela designed a fieldwork exercise that required each team member to step into public spaces, be they streets, malls, playgrounds, or restaurants, and document them ethnographically (through photos and written inscriptions). Guided by speculative prompts, we reflected on what dilemmas would arise if these everyday scenes were layered with digital overlays. This outward-looking exercise not only revealed latent assumptions about privacy, dignity, and trust but also prompted the legal researchers to reflect through embedded, situated experience rather than doctrine.
Third, based on a typological mapping of public spaces, four arenas were chosen and setting maps were created. The discussions about what these arenas should be, what kinds of frictions could arise in them, which participants and audiences could be present in those imaginary public spaces, and how we envisioned future communication and interaction between these groups and individuals formed the foundation for the gamified workshops, where role-play was the crux of the imaginative process. For each arena – a transportation hub, a dining space in a mall, an elderly daycare by the beach, and a piazza – short scenarios were recorded to draw participants into the future setting, each presenting new opportunities for co-imagining future public spaces.
Rather than smoothing over disciplinary differences, the team learned to embrace them as productive frictions. Divergent vocabularies, methods, and professional instincts became fertile ground for inquiry. This collaborative “setup” went beyond a preparatory stage; it was in itself an integral phase of the methodology, shaping the scenarios, workshops, and analytical tools that followed.
3.2. Constructing speculative worlds: From narratives to theoretical grounding
The collaborative exercises described above were not improvised; they were anchored in the broader field of speculative design, which seeks to unsettle deterministic narratives of technological progress and open space for imagining alternative futures. Rather than assuming a singular trajectory for smart glasses, speculative methods invite participants to engage in counterfactual scenarios, “what if” provocations, and narrative experimentation that expose hidden values and power relations (Dunne & Raby, 2013; Auger, 2013; Malpass, 2017; Bleecker, 2009; Bleecker et al., 2022; Meskus & Tikka, 2024).
Our approach drew also on adjacent literatures: van Rijshouwer and van Zoonen’s work on citizen engagement in smart cities (2021), Wimmer and Bicking’s research on collaborative scenario-building for policy (2011), and Cordova-Pozo and Rouwette’s studies of scenario planning (2023). These perspectives informed our decision to construct near-future narratives not as predictions, but as probes for exploring how phygital technologies might reshape social norms, public space, and governance.
By embedding our mall vignette, the insights from the public-space exercise, and the shared discussions about the setting in which the scenes of interaction take place into our methodological framework, we transformed these from isolated exercises into systematic tools of inquiry. Speculative world-building thus became a way to test how legal principles and cultural assumptions converge in practice, while simultaneously producing a shared imaginative ground for interdisciplinary collaboration.
At the same time, these artefacts were consistently cross-referenced with the legal backbone of the EU digital legislation package, the GDPR, DSA, DMA, and the AI Act. Rather than treating these instruments as a checklist, we used them as a diagnostic lens: mapping scenes from the mall story or field diaries against existing provisions revealed blind spots and value conflicts that a purely legal analysis would not have captured. Issues such as peer-to-peer surveillance, emotional inference, or the reshaping of social consent in public spaces surfaced only when legal principles were tested within a speculative design setting. This integration of scenario-based role-play with regulatory mapping both exposed the limitations of current frameworks and generated the conceptual ground for the ethics of interactions. The blind spots became visible not in the legal texts themselves, but in the moments when our speculative narratives forced us to confront dynamics that felt awkward, even unsettling, such as the possibility of peers constantly controlling and changing each other’s field of vision.
These insights, however, emerged largely from within the research team itself. To test and refine them, we needed to extend the process outward, bringing external participants into workshops where speculative scenarios could be enacted, challenged, and debated.
3.3. Collaborative scenario workshops: Testing and refining speculative scenarios
The final phase of our research extended the internally developed narratives, legal mappings, and field inscriptions into a collective and participatory setting. We organised three interdisciplinary workshops in Berlin, Jerusalem, and Potsdam, each bringing together designers, legal scholars, policymakers, technologists, and students. The purpose was to co-imagine phygital futures with these interdisciplinary participants and to experiment with “rehearsing the future”, exposing them to diverse perspectives and dramatising ethical dilemmas through participatory methods.
Each workshop unfolded around a carefully designed dramaturgy. Participants were first introduced to pre-designed near-future environments located in familiar public spaces where smart glasses were assumed to be ubiquitous. To anchor their imagination, we employed setting maps that invited the participants to locate themselves in the scene and to describe how digital layers might transform these spaces. Short pre-recorded narrative introductions, told from a first-person perspective, dramatised ethical and social dilemmas such as biometric surveillance in a playground or reputation scores displayed in a restaurant queue.
Participants were then invited to “onboard” and asked to develop a persona, ranging from AI developers and corporate executives to government regulators, civil society activists, and ordinary citizens. Through role-playing, they first described what they saw and their goal in that space, and then enacted the unfolding scenarios, negotiating conflicts and alliances while experiencing the tensions of phygital public life. The design team also introduced a deck of 18 “future functionalities” cards, containing amalgams of fictional word pairs that, when put together, sounded like potential interactive features. Examples include “object filtering”, “imagination bridge”, and “social blocking”. The dramatisation of these futures enabled abstract legal principles to become concrete and contested. Questions of agency, decision-making, and autonomy ceased to be theoretical and were experienced instead as lived dilemmas that participants had to address in real time.
The workshops concluded with group debriefs and legal reflection sessions. These discussions revealed not only regulatory blind spots but also the difficulty of maintaining stable categories when social norms themselves are unsettled by new technologies. We saw how quickly participants generated new vocabularies to describe these situations: terms such as “ambient profiling,” “consent fatigue,” or “layered visibility” emerged spontaneously, providing a richer lexicon for thinking about phygital governance.
In this way, the workshops served as the hinge between our methodological exercises and our analytical framework. They helped us expand our speculative scenarios’ initial framing of four categories (person-to-person, person-to-self, person-to-space, person-to-reality) with two further categories that structure our findings, person-to-machine and person-to-platform, and grounded all of these categories in the lived, shared experiences of diverse stakeholders.
Section 4. Key observations
4.1. ‘Ethics of interactions’
The categories and insights that emerged from our workshops —spanning person-to-person, person-to-self, person-to-space, person-to-reality, person-to-machine, and person-to-platform— made clear that existing “by design” principles were insufficient for addressing the complex, interpersonal challenges of phygital spaces. What participants dramatised in role-play and scenario exercises were not abstract legal puzzles but lived dilemmas: a waitress confronting a deepfake version of herself, parents negotiating digital overlays in a playground, or citizens debating reputation scores projected in a shopping mall. These embodied experiences exposed vulnerabilities and conflicts that cannot be captured by traditional privacy- or rights-based frameworks alone.
Building on these observations, we propose the adoption of the ethics of interactions as a complementary layer to existing regulation. Drawing inspiration from Carol Gilligan’s Ethics of Care and its adaptations by Tronto, Wellner, Mykhailov and others (Gilligan, 1993; Tronto, 1993; Tronto, 2013; Wellner & Mykhailov, 2023; Cohn, 2020; Villegas-Galaviz, 2022), this framework acknowledges the relational and collective dimensions of technologically mediated life. Unlike traditional regulatory approaches, which assume individual autonomy and rational decision-making, the ethics of interactions recognises interdependence and asymmetries in vulnerability, calling for context-sensitive protections rather than one-size-fits-all compliance measures.
This approach does not replace P/HR/EbD and Responsible AI principles. Instead, it serves as a corrective mechanism, broadening developer responsibilities, particularly in scenarios that may impact mental health, emotional well-being, or social cohesion (Tavory, 2024). Over time, these interaction-based principles may evolve into structured regulatory requirements that ensure AI-driven phygital spaces remain inclusive, ethical, and socially sustainable.
The ethics of interactions specifically addresses the socio-cultural implications of phygital reality, considering both users and non-users of smart glasses. By shifting the regulatory focus from isolated AI risks to the broader social consequences of immersive technologies, this framework ensures that emerging digital norms do not undermine human dignity, trust, or collective well-being (Yew, 2021; Fineman, 2017). In this sense, the ethics of interactions is not only a theoretical proposal but a direct outcome of our methodological process: it crystallises what became visible only when abstract principles were stress-tested through speculative design, role-play, and stakeholder dialogue.
4.2. The ethics of interactions approach and human dynamics in phygital public spaces
Smart glasses are active mediators of human interaction, shaping social dynamics in ways that existing regulatory frameworks fail to anticipate. Through the combination of speculative narratives, legal mappings, and stakeholder workshops, we identified five primary modes of interaction through which smart glasses redefine human relations in phygital spaces:
- Person-to-Person (P2P): How smart glasses mediate social interactions, privacy, and trust between individuals.
- Person-to-Space (P2S): How users navigate, interpret, and modify physical environments through augmented overlays.
- Person-to-Reality (P2R): How digital enhancements reshape perceptions of truth, memory, and emotional responses.
- Person-to-Machine (P2M): How users interact with AI systems, personal assistants, and neural interfaces embedded in smart glasses.
- Person-to-Platform (P2PL): How corporate control, monetization, and governance impact autonomy in augmented spaces.
These categories emerged not as abstract theoretical distinctions but as recurring patterns that participants enacted and debated in our workshops. They reveal gaps in existing regulatory paradigms, demanding a new conceptual framework beyond privacy and AI ethics to address the social and psychological dimensions of human-technology relationships.
4.2.1. P2P interaction: Rethinking privacy, trust, and social boundaries
Smart glasses process interpersonal encounters in ways that blur the boundaries of perception, turning fleeting glances and casual exchanges into digitally mediated events.
Workshop participants explored real-world examples of this disruption. One participant described how a café patron wearing smart glasses unintentionally recorded nearby conversations, raising concerns about consent in public spaces. Another participant imagined a scenario in which a worker used smart glasses to replay a past argument with a coworker; the coworker denied the accuracy of the AI-assisted memory and argued that the conversation had been misinterpreted by the glasses’ algorithm. This demonstrated the emotional and psychological impact of smart glasses and sparked a debate over trust, context, and the fallibility of AI in interpersonal relationships.
Another example illustrated the unintended consequences of facial recognition in social settings. A tourist using AR-assisted navigation misidentified a police officer as a “tour guide” and approached her with questions. The officer’s own smart glasses, programmed to flag unusual behaviours in public spaces, marked the tourist as suspicious, escalating what should have been an innocuous interaction into a moment of tension.
One of the most striking cases involved a waitress participating in the workshop, who reacted emotionally upon learning about an AR app capable of creating deepfake-like renderings of her appearance. She described this experience not just as deeply invasive but as profoundly disorienting, highlighting how the inability to control one's digital presence can magnify feelings of exploitation and powerlessness.
Together, these examples underscored the uncertainty people feel about what others might know about them through these devices. The participants’ discussions ultimately pointed to a fundamental gap in existing privacy frameworks: while governments have long regulated corporate data collection and state surveillance (Gilliom & Monahan, 2012; Cohen, 2013; Lyon, 2017; van Brakel, 2021), they have yet to address how individual citizens can wield mass surveillance tools against one another.
4.2.2. P2S interaction: The transformation of physical environments
Beyond interpersonal relationships, smart glasses are also reshaping how people interact with their physical surroundings. The workshops revealed concerns about augmented overlays’ potential to alter the very fabric of public spaces, with implications for accessibility, inclusivity, and digital overload.
A recurring discussion revolved around whether traditional physical markers, such as traffic signs, storefront displays, and public information boards, will still be necessary once smart glasses provide real-time overlays for navigation and information retrieval. Some participants feared a future in which cultural landmarks and public signage are erased, replaced by personalised AR-based translations that gradually eliminate local languages and cultural identities from public view.
Others debated the impact of AR on free expression and digital ownership. One participant envisioned a train station where an activist projected AR protest messages, only to have security officers’ smart glasses flag the display as a “disruptive element”. This led to a deeper discussion about whether digital graffiti is a form of vandalism or a legitimate form of speech. Moreover, if major tech companies control the AR layers over public spaces, who decides what information is visible or concealed?
Another significant concern involved the oversaturation of augmented content. As AR becomes more commercially viable, participants worried that public spaces could become inundated with digital advertisements, obscuring essential real-world information. This raised the broader question: how will we distinguish essential public information from corporate-driven overlays?
These discussions revealed that the governance of AR-mediated spaces is not merely a question of technology but a fundamental issue of human rights, ownership, and control. Smart glasses will do more than “augment” reality; they will determine whose version of reality prevails.
4.2.3. P2R interaction: The blurring line between the physical and the augmented
The ability to seamlessly blend photorealistic virtual content into the physical world opens up exciting possibilities but also raises deep ethical dilemmas that extend beyond traditional regulatory concerns.
Participants in the workshops explored both beneficial and unsettling applications of AR’s growing influence on human perception. In one scenario, a mother used an AR filter to detect traces of allergens on playground equipment, preventing potential exposure for her highly allergic child. However, when the application malfunctioned, she was forced to rely on manual inspection, leading to a dispute with another parent who blindly trusted their own AR-enhanced perceptions. The discussion that followed highlighted a deeper problem: as reliance on AR grows, will human intuition and sensory judgment diminish?
Another example focused on spiritual and cultural beliefs. A participant imagined a senior care home where caregivers used an AR overlay to track “soul energy” after a resident passed away. In one case, a staff member followed a system-generated recommendation to open a window to allow the soul’s departure, while a grieving family insisted that the soul remained in the room. The clash between technology-mediated interpretations of reality and personal belief systems raised profound questions: when AR dictates aspects of human experience —whether religious, emotional, or social— who decides what is real?
Beyond these personal and emotional conflicts, manipulative applications of AR emerged as a major concern. One discussion explored the weaponisation of AR for disinformation. Imagine a pedestrian walking down the street wearing smart glasses, only to be targeted by an AR misinformation campaign. Through hyper-targeted overlays, their surroundings might be distorted —fake news banners, misleading road signs, or even deepfake avatars of familiar figures leading them into a deceptive interaction.
Even without deliberate intent to mislead, conflicting versions of reality presented by AR may cause interpersonal disputes. One group’s scenario featured a couple walking through a bustling urban market, each using different AR settings. One partner had configured their glasses to filter out advertisements, while the other had enabled a historical overlay providing cultural insights. The incongruent views of the same shared space led to frustration and miscommunication, raising an essential question: when every individual interacts with a personally curated version of reality, how do we maintain a shared understanding of the world?
At its core, the P2R interaction dilemma highlights a fundamental regulatory gap: while current AI laws regulate data collection and algorithmic bias, they do not address the growing impact of AI-generated overlays on individual perception, decision-making, and trust itself.
4.2.4. P2M interaction: Trust, accessibility, and the rise of “liquid agency”
Smart glasses are fluid, context-aware companions, capable of anticipating needs, responding to emotions, and even making autonomous decisions. As such, the interaction between device and user introduces both immense opportunities and serious vulnerabilities.
Participants in the workshops raised concerns about accessibility and the potential for economic segregation in a world dominated by AI-driven interfaces and AR. While gesture-based interfaces and voice commands were seen as potential enhancements for individuals with disabilities, concerns surfaced about advanced AI-powered smart glasses features becoming paywalled, leaving lower-income users with a less enriched digital experience. The fear was that wealthier individuals would enjoy a fully enhanced, interactive reality, while others would be left navigating a fragmented, lower-tier version of the same spaces.
Another key theme was trust in machine intelligence. Could people rely on smart glasses to interpret reality accurately, and to what degree would these devices introduce flaws, biases, and potential harms?
The discussion extended to how much decision-making authority should be granted to AI. A concept that emerged frequently was “liquid agency”: the idea that control over a situation should dynamically shift between the human and the machine depending on context. For instance, smart glasses might take over in life-threatening scenarios, issuing immediate safety alerts (e.g., “stop walking: oncoming vehicle detected”), while in less critical contexts, they would defer to human discretion. Finding the right balance between proactive assistance and user autonomy remains a major challenge for future regulation.
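To make the “liquid agency” idea concrete, the following minimal sketch shows how control might shift between human and machine depending on context. Everything here is illustrative: the hazard score, the thresholds, and the function names are assumptions for the sake of the example, not part of any real smart-glasses API or of a proposed standard.

```python
from dataclasses import dataclass


@dataclass
class Context:
    """A simplified snapshot of the wearer's situation."""
    hazard_level: float            # 0.0 (safe) to 1.0 (life-threatening)
    user_opted_in_to_alerts: bool  # has the wearer enabled advisory alerts?


def decide_agent(ctx: Context, takeover_threshold: float = 0.8) -> str:
    """Decide which party holds control for the next action.

    Above the takeover threshold the device acts autonomously
    (e.g., issuing an immediate safety alert); in an intermediate band
    it may only suggest; otherwise control stays with the human.
    """
    if ctx.hazard_level >= takeover_threshold:
        return "machine"           # immediate safety alert, no confirmation
    if ctx.user_opted_in_to_alerts and ctx.hazard_level >= 0.4:
        return "machine-suggests"  # advisory only; the human decides
    return "human"                 # full user discretion
```

A regulator-facing version of such a rule would, of course, need auditable thresholds and appeal mechanisms; the sketch merely shows that the hand-off logic itself can be made explicit and inspectable.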
The participants’ discussions revealed a paradox: while AI-driven interfaces have the potential to make technology more intuitive, they also increase dependency, raising concerns about whether humans will retain meaningful agency over their digital interactions.
4.2.5. P2PL interaction: Platform control, monetisation, and governance
Smart glasses do not operate in isolation —they exist within platform ecosystems controlled by major tech companies. Participants raised concerns about how profit-driven platforms will shape user experiences, interactions, and choices in phygital spaces.
One group imagined a parent-child conflict over food choices. The child, wearing smart glasses, received targeted AR recommendations nudging them toward unhealthy foods, despite their parent’s objections. In this scenario, the platform’s advertising incentives directly undermined parental authority, revealing a larger issue of corporate influence over personal decision-making.
Another case study explored AR-powered urban navigation, where advertisers paid platforms to prioritise their businesses in augmented overlays. A tourist walking through a historic district was misled by AI-enhanced overlays, which presented a revised version of history tailored to match the local government’s preferred narrative. This raised a critical concern: should platforms or governments have the power to dictate historical truth, cultural identity, and public knowledge in augmented spaces?
Governance challenges also emerged regarding who controls digital layers over public spaces. If train stations, shopping malls, and historic sites host AR-driven digital overlays, who decides on the content displayed or censored? This question remains largely unresolved in existing regulatory frameworks.
At the heart of these discussions was the realisation that platform control over AR is not just about technology —it is about power. The ability to shape what people see, interact with, and believe in AR-mediated environments represents a new frontier of digital governance that policymakers have yet to fully address.
Section 5. Recommendations: Toward a relational approach to phygital regulation
The following recommendations close the loop of our research by returning to the world of principles and policy. Yet they look different now: shaped not by abstract theorising alone, but by the lived dilemmas, speculative narratives, and workshop debates we described above. While existing frameworks such as PbD, HRbD, EbD, and Responsible AI provide important baselines, they do not sufficiently address the relational dimensions revealed through our methodology. The following recommendations are therefore structured around both general principles and specific interaction types.
5.1 General recommendations
5.1.1 Reframing the discourse
We recommend adopting the term “phygital spaces” instead of “spatial computing”. This shift in terminology could foster a deeper epistemological and regulatory understanding of the phenomenon, ensuring that legal and policy discussions encompass not only technical risks but also the socio-cultural and ethical dimensions of the complex, multidimensional reality that immersive technologies are creating.
5.1.2. Refining and expanding the interaction typology
To ensure regulatory clarity, we propose a continued refinement of the typology of phygital interactions, including an exploration of new interaction subtypes that may emerge as technology evolves. Clearly defining these interactions is essential for anticipating their societal impact and guiding legal and design interventions.
5.1.3 Mapping interactions across contexts
Smart glasses function differently depending on the environment in which they are used. We recommend systematically mapping interactions within specific physical environments, such as indoor vs. outdoor settings, commercial vs. residential areas, and high-risk vs. low-risk public spaces. Regulatory frameworks should consider how age, gender, locality, disabilities, and emotional states affect individuals' experiences with augmented environments.
5.2. Interaction-based recommendations
5.2.1. Regulating P2P interactions
The ability of smart glasses to collect, analyse, and share personal data in real time introduces a new and decentralized form of surveillance —not from governments or corporations, but from ordinary individuals in everyday interactions. Unlike traditional privacy concerns centred around state oversight or platform-driven data collection, smart glasses enable peer-to-peer monitoring, where any person in a shared space can record, analyse, and retrieve personal information about others without their knowledge or consent. This shift from a single "Big Brother" to a world of many "Peeping Toms" fundamentally alters privacy expectations, requiring new safeguards to protect individuals from invasive augmented interactions in public spaces.
To address these concerns, we propose the following regulatory measures:
- Mutual consent mechanisms – Implement shared agreement protocols for augmented interactions, ensuring that both parties explicitly consent before AR overlays can display personal data, facial recognition tags, or other identifiable information.
- Privacy opt-out features – Allow individuals to opt out of being recorded, analysed, or tagged by others' smart glasses while in public spaces, preventing unwanted data collection or AI-driven profiling.
- Guardrails for AI-powered memory features – Introduce restrictions on persistent AI-generated memories, preventing their misuse for unauthorised surveillance, personal disputes, or reputation-based profiling.
- Regulating personalised filtering – Ensure that one user's ability to modify their augmented environment (e.g., erasing ads, altering real-world visuals) does not interfere with another’s overlays, preventing conflicts in shared phygital spaces.
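The mutual consent and opt-out measures above can be illustrated with a minimal sketch of an opt-in consent registry checked before any overlay reveals personal data. The registry structure, identifiers, and function names are hypothetical, invented for this example rather than drawn from any existing protocol:

```python
def overlay_permitted(consent_registry: dict[str, set[str]],
                      subject_id: str,
                      overlay_type: str) -> bool:
    """Return True only if the person being viewed (the subject) has
    explicitly granted this overlay type (e.g., 'name_badge', 'face_tag').

    Absence of a record defaults to refusal: the scheme is opt-in,
    never opt-out, matching the mutual-consent recommendation.
    """
    return overlay_type in consent_registry.get(subject_id, set())


def mutual_overlay_permitted(consent_registry: dict[str, set[str]],
                             viewer_id: str,
                             subject_id: str,
                             overlay_type: str) -> bool:
    """Both parties must have opted in before the overlay is rendered."""
    return (overlay_permitted(consent_registry, viewer_id, overlay_type)
            and overlay_permitted(consent_registry, subject_id, overlay_type))
```

For example, a subject who has granted only name badges (`{"alice": {"name_badge"}}`) would be protected from facial-recognition tags by default, and anyone absent from the registry would be shielded from all identifying overlays. The design choice worth noting is the default-deny rule: it encodes the opt-in principle directly in the data structure rather than relying on users to discover and exercise an opt-out.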
5.2.2. Governance of P2S interactions
Phygital spaces operate under two overlapping governance systems: the physical authorities that manage urban infrastructure, public spaces, and private properties (e.g., municipalities, transit agencies, building owners), and the digital platforms that control the augmented layers of these spaces through AR overlays, AI-powered wayfinding, and commercial content integration. This duality introduces new regulatory challenges—while physical authorities set rules for public behaviour, digital platforms dictate what individuals see, interact with, and prioritize in augmented environments.
The lack of coordination between these two governance systems could lead to conflicts where digital content overrides physical regulations, raising concerns about public safety, misinformation, and the commercialization of shared spaces. To mitigate these risks, we propose the following regulatory measures:
- Dual governance models – Establish clear coordination mechanisms between local governments, regulatory bodies, and AR platform operators to harmonise physical and digital governance in public spaces. Policies should ensure augmented interactions respect real-world regulations, user safety, and spatial equity.
- Contextual design of AR functionalities – Customise AR policies based on location-specific needs, ensuring that augmented content aligns with the legal, cultural, and ethical considerations of different environments. For example:
  - Schoolyards – Limit data collection and AI-driven profiling in order to protect minors from privacy violations and behavioural tracking.
  - Public transportation hubs – Ensure navigation and safety alerts remain clear, reliable, and free from commercial distortion.
  - Historic and cultural sites – Mandate fact-based AR overlays, preventing the manipulation of historical narratives or cultural heritage for political or commercial interests.
  - Hospitals and religious sites – Restrict AR-based advertising and behavioural tracking in spaces where privacy, sensitivity, and ethical concerns take precedence.
- Prioritising safety and accessibility over commercial interests – Implement strict regulations on AR-based advertising, ensuring that commercial overlays do not interfere with critical information, mislead users, or create sensory overload in high-risk environments.
5.2.3. Safeguarding P2R interactions
The blurring of physical and digital realities raises ethical concerns about manipulation, misinformation, and decision-making autonomy.
- Mitigating misinformation in AR overlays – Require verification and transparency mechanisms for AR content, particularly in political, historical, and scientific contexts.
- Labelling and transparency requirements – Platforms must clearly distinguish between sponsored AR content, user-generated overlays, and organic content to prevent covert influence and manipulation.
- Regulating phygital modifications – Prevent misleading digital alterations of public infrastructure, such as hiding hazards or manipulating public safety information.
Taken together, these recommendations return us to the central theme of this article: that regulation of phygital spaces must move beyond abstract principles to embrace the lived realities of mediated interactions. By grounding policy guidance in the ethics of interactions, they strengthen existing “by design” frameworks and provide a pathway for adaptive governance. This closing of the loop, from speculative narratives to normative proposals, underscores the value of the dialogue between speculative design and policymaking in shaping the future of digital policy.
Section 6. Reflections and future directions
Our research set out to explore how legal principles might be translated into phygital realities, but one of its most significant contributions lies in the methodology itself. Speculative storytelling, legal mapping, and gamified role-play workshops proved to be more than tools of analysis —they became instruments of discovery. They enabled participants, and us as researchers, to experience firsthand the dilemmas that abstract regulation often struggles to anticipate.
The workshops highlighted the value of gamification and role-play. By placing participants in realistic yet speculative phygital scenarios, the narratives collapsed the distance between abstract policy debates and lived experiences. Participants confronted not only hypothetical risks but also emotional, ethical, and social tensions that reframed our understanding of regulation. What emerged was not merely a set of policy gaps but new vocabularies and conceptual frames, most notably the ethics of interactions framework.
6.1. Leveraging role-playing for policy insights
The workshops demonstrated that role-playing and speculative design are effective tools for identifying legal and social blind spots in current regulatory frameworks. Immersion in realistic phygital scenarios created a bridge between legal abstraction and social reality, allowing participants to surface vulnerabilities that traditional consultation processes rarely expose.
6.2. Expanding the workshop model
To further advance stakeholder engagement and future-proof policymaking, we propose the following steps:
- Applying embodied, situated role-playing methodologies to other domains —such as smart city governance and AI-driven decision-making systems, where similar relational dilemmas are emerging.
- Encouraging cross-cultural collaboration —integrating diverse societal perspectives into regulatory frameworks, ensuring that phygital regulation does not reproduce digital colonialism or cultural bias.
- Investing in digital literacy workshops for policymakers —equipping regulators not only to understand phygital technologies but to anticipate “unknown unknowns” through experiential learning.
In this sense, the methodological dimension of our project points toward a broader transformation: policymaking for phygital technologies must become interactive, participatory, and experimental. By embracing methods that foreground lived experience and collective imagination, regulators can better anticipate the societal impacts of immersive technologies and ensure that future frameworks remain inclusive, adaptive, and ethically grounded.
Conclusion
This study has shown that interdisciplinary collaboration is not only possible but essential when addressing the governance challenges of phygital technologies. By combining legal analysis, speculative design, and human-computer interaction, we developed the ethics of interactions framework, an approach that extends beyond individual rights to consider the relational and collective dimensions of AI-mediated environments.
Equally important, the project highlighted the process through which these insights emerged: the negotiation of disciplinary languages, the creative friction of role-play and storytelling, and the translation of experiential observations into normative principles. These methodological lessons are as significant as the substantive findings, offering a roadmap for others who wish to apply interdisciplinary practices in the field of digital rights and governance.
Our speculative design workshops revealed critical regulatory gaps, showing how smart glasses challenge privacy norms, reshape interpersonal relationships, and blur the boundaries of social consent. Through role-playing and speculative scenarios, participants uncovered friction points that remain unaddressed by existing legal frameworks, reinforcing the necessity of incorporating context-sensitive, interdisciplinary approaches into policymaking. The workshops also emphasised the importance of digital literacy, demonstrating how participatory speculative methods can help decision-makers anticipate emerging AI risks and navigate “unknown unknowns.”
As smart glasses and AI-enhanced phygital reality become more widespread, we must rethink governance frameworks to ensure these technologies empower, rather than erode, human agency and social cohesion. In the words of British statesman John Lubbock: “What we see depends mainly on what we look for” (Hodgins, 2008). This study challenges policymakers, designers, technologists and researchers to look beyond technological capabilities and focus on how AI-mediated interactions shape the realities we inhabit. The challenge now is to institutionalise these insights, embedding the ethics of interactions into future regulatory frameworks that safeguard human dignity, equity, and the shared spaces of the future.
References
Aizenberg, E., & Van Den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society, 7(2), 2053951720949566. https://doi.org/10.1177/2053951720949566
Andrejevic, M. (2002). The work of watching one another: Lateral surveillance, risk, and governance. Surveillance & Society, 2(4). https://doi.org/10.24908/ss.v2i4.3359
Andrejevic, M. (2007). Surveillance in the digital enclosure. The Communication Review, 10(4), 295–317. https://doi.org/10.1080/10714420701715365
Andrejevic, M., & Gates, K. (2014). Big Data surveillance: Introduction. Surveillance & Society, 12(2), 185–196. https://doi.org/10.24908/ss.v12i2.5242
Auger, J. (2013). Speculative design: Crafting the speculation. Digital Creativity, 24(1), 11–35. https://doi.org/10.1080/14626268.2013.767276
Bamberger, K. A., & Mulligan, D. K. (2015). Privacy on the ground: Driving corporate behavior in the United States and Europe. The MIT Press.
Bendor, R. (2021). Value replacement therapy: Imagining urban technologies otherwise. In E. Rijshouwer & L. Zoonen (Eds), Speculative design methods for citizen engagement in smart cities research (pp. 63–73). Centre for BOLD Cities.
Bicking, M., & Wimmer, M. A. (2011). Concept to integrate open collaboration in technology roadmapping: Stakeholder involvement in strategic e-government planning. 2011 44th Hawaii International Conference on System Sciences, 1–12. https://doi.org/10.1109/HICSS.2011.124
Bleecker, J. (2009). Design fiction: A short essay on design, science, fact, and fiction. Near Future Laboratory. https://blog.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/.
Bleecker, J., Foster, N., Girardin, F., & Nova, N. (2022). The manual of design fiction: A practical guide to exploring the near future. Near Future Laboratory. https://nearfuturelaboratory.com/library/2023/06/the-manual-of-design-fiction-softcover/.
Cahane, A., & Shwartz Altshuler, T. (2023). Human, machine, state: Toward the regulation of artificial intelligence. The Israel Democracy Institute.
Cheng, L., Varshney, K. R., & Liu, H. (2021). Socially responsible AI algorithms: Issues, purposes, and challenges. Journal of Artificial Intelligence Research, 71, 1137–1181. https://doi.org/10.1613/jair.1.12814
Choo, L. (2024, October 4). How 2 students used the Meta Ray-Bans to access personal information. Forbes. https://www.forbes.com/sites/lindseychoo/2024/10/04/meta-ray-bans-ai-privacy-surveillance/.
Cohen, J. E. (2013). What privacy is for. Harvard Law Review, 126(7), 1904–1933.
Cohen, T. (2023). Regulating manipulative artificial intelligence. SCRIPTed: A Journal of Law, Technology & Society, 20(1), 203–242.
Cohn, J. (2020). In a different code: Artificial intelligence and the ethics of care. The International Review of Information Ethics, 28. https://doi.org/10.29173/irie383
Cordova-Pozo, K., & Rouwette, E. A. J. A. (2023). Types of scenario planning and their effectiveness: A review of reviews. Futures, 149, 103153. https://doi.org/10.1016/j.futures.2023.103153
Cronin, I., & Scoble, R. (2020). The infinite retina: Spatial computing, augmented reality, and how a collision of new technologies are bringing about the next tech revolution. Packt Publishing.
Dougherty, C. (2020). Every breath you take, every move you make, Facebook’s watching you: A behavioral economic analysis of the US California Consumer Privacy Act and EU ePrivacy Regulation. Northeastern University Law Review, 12(2), 629–659.
Dunne, A., & Raby, F. (2013). Speculative everything: Design, fiction, and social dreaming. The MIT Press.
European Commission DG Research & Innovation. (2021). Ethics by design and ethics of use approaches for artificial intelligence. European Commission. https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-by-design-and-ethics-of-use-approaches-for-artificial-intelligence_he_en.pdf.
European Union. (2010). Charter of fundamental rights of the European Union. Official Journal of the European Union, C 83, 380.
European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal, L 119, 1–88.
European Union. (2022a). Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector (Digital Markets Act). https://op.europa.eu/publication-detail/-/publication/2c2bf2fb-3f85-11eb-b27b-01aa75ed71a1.
European Union. (2022b). Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act). https://eur-lex.europa.eu/eli/reg/2022/2065/oj.
European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng.
Executive Order No. 14110, 3 C.F.R. (2023). Safe, secure, and trustworthy development and use of artificial intelligence. The White House.
Federal Trade Commission. (2012). Protecting consumer privacy in an era of rapid change: Recommendations for businesses and policymakers. Federal Trade Commission.
Fineman, M. A. (2017). Vulnerability and inevitable inequality. Oslo Law Review, 4(3), 133–149. https://doi.org/10.18261/issn.2387-3299-2017-03-02
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3518482
Frischmann, B. M., & Benesch, S. (2022). Friction-in-design regulation as 21st century time, place and manner restriction. SSRN Electronic Journal, 25, 376–447. https://doi.org/10.2139/ssrn.4178647
Gilligan, C. (1993). In a different voice: Psychological theory and women’s development. Harvard University Press.
Gilliom, J., & Monahan, T. (2012). SuperVision: An introduction to the surveillance society. University of Chicago Press.
Greenwold, S. (1995). Spatial computing [MSc Thesis, Massachusetts Institute of Technology]. https://acg.media.mit.edu/people/simong/thesis/SpatialComputing.pdf.
Gürses, S., Troncoso, C., & Diaz, C. (2011). Engineering privacy by design. Privacy & Data Protection, 25, 1–25.
Hackl, C. (2023). What leaders need to know about spatial computing. Harvard Business Review.
Hartzog, W. (2018). Privacy’s blueprint: The battle to control the design of new technologies. Harvard University Press.
Hodgins, D. C. (2008). What we see depends mainly on what we look for (John Lubbock, British anthropologist, 1834-1913). Addiction, 103(7), 1118–1119. https://doi.org/10.1111/j.1360-0443.2008.02282.x
Iphofen, R., & Kritikos, M. (2021). Regulating artificial intelligence and robotics: Ethics by design in a digital society. Contemporary Social Science, 16(2), 170–184. https://doi.org/10.1080/21582041.2018.1563803
Koetsier, J. (2024, October 3). Meta’s Ray-Ban smart glasses used to instantly dox strangers in public thanks to AI and facial recognition. Forbes. https://www.forbes.com/sites/johnkoetsier/2024/10/03/metas-ray-ban-smart-glasses-used-to-instantly-dox-strangers-in-public-thanks-to-ai-and-facial-recognition/.
Lyon, D. (2017). Surveillance culture: Engagement, exposure, and ethics in digital modernity. International Journal of Communication, 11, 824–842.
Moralioglu, B., & Gül, L. F. (2023). [des-Fi]XR: Envisioning future spaces with XR technologies by using design fiction. In Proceedings of eCAADe 2023 (pp. 873–882). https://doi.org/10.52842/conf.ecaade.2023.2.873
Office of Science and Technology Policy. (2021). Blueprint for an AI Bill of Rights: Making automated systems work for the American people. Biden White House Archives. https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/.
Ohm, P., & Frankle, J. (2018). Desirable inefficiency. Florida Law Review, 70(4), 777–832.
Okeke, F. (2024, February 22). The challenges Apple faces to make spatial computing mainstream. Techopedia. https://www.techopedia.com/the-challenges-apple-face-to-make-spatial-computing-mainstream.
Picard, R. (1997). Affective computing. The MIT Press.
Rijshouwer, E., & Zoonen, L. (Eds). (2021). Speculative design methods for citizen engagement in smart cities research. Leiden-Delft-Erasmus Centre for BOLD Cities. https://books.ipskampprinting.nl/thesis/BOLD-DesignMethods/files/assets/common/downloads/Thesis.pdf.
Rubinstein, I. S. (2011). Regulating privacy by design. Berkeley Technology Law Journal, 26, 1409–1456.
Stein, S. (2024, September 25). I wore Meta’s Orion AR glasses: A wireless taste of a neural future. Cnet. https://www.cnet.com/tech/computing/i-wore-metas-orion-ar-glasses-a-wireless-taste-of-a-neural-future/.
Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review, 4(1). https://georgetownlawtechreview.org/online-manipulation-hidden-influences-in-a-digital-world/GLTR-01-2020/.
Suzor, N., Dragiewicz, M., Harris, B., Gillett, R., Burgess, J., & Van Geelen, T. (2019). Human rights by design: The responsibilities of social media platforms to address gender‐based violence online. Policy & Internet, 11(1), 84–103. https://doi.org/10.1002/poi3.185
Tavory, T. (2024). Regulating AI in mental health: Ethics of care perspective. JMIR Mental Health, 11, e58493. https://doi.org/10.2196/58493
Teo, S. A. (2023). Human dignity and AI: Mapping the contours and utility of human dignity in addressing challenges presented by AI. Law, Innovation and Technology, 15(1), 241–279. https://doi.org/10.1080/17579961.2023.2184132
Trocin, C., Mikalef, P., Papamitsiou, Z., & Conboy, K. (2023). Responsible AI for digital health: A synthesis and a research agenda. Information Systems Frontiers, 25(6), 2139–2157. https://doi.org/10.1007/s10796-021-10146-4
Tronto, J. C. (2013). Caring democracy: Markets, equality, and justice. NYU Press. http://www.jstor.org/stable/j.ctt9qgfvp.
Tronto, J. C. (2020). Moral boundaries: A political argument for an ethic of care (1st edn). Routledge. https://doi.org/10.4324/9781003070672
Van Brakel, R. (2021). How to watch the watchers? Democratic oversight of algorithmic police surveillance in Belgium. Surveillance & Society, 19(2), 228–240. https://doi.org/10.24908/ss.v19i2.14325
Villegas-Galaviz, C. (2022). Ethics of care as moral grounding for AI. In K. Martin & Villegas, C. (Eds), Ethics of data and analytics: Concepts and cases (1st edn). Auerbach Publications. https://doi.org/10.1201/9781003278290
Waldman, A. E. (2018). Designing without privacy. Houston Law Review, 55(3), 659–725.
Waldman, A. E. (2020). Data protection by design? A critique of Article 25 of the GDPR. Cornell International Law Journal, 53, 148–167.
Wellner, G., & Mykhailov, D. (2023). Caring in an algorithmic world: Ethical perspectives for designers and developers in building AI algorithms to fight fake news. Science and Engineering Ethics, 29(4), 30. https://doi.org/10.1007/s11948-023-00450-4
Williams, R. (2025). What’s next for smart glasses. MIT Technology Review. https://www.technologyreview.com/2025/02/05/1110983/whats-next-for-smart-glasses/.
World Health Organization. (2024). Ethics and governance of artificial intelligence for health. Guidance on large multi-modal models (1st ed). World Health Organization. https://iris.who.int/server/api/core/bitstreams/e9e62c65-6045-481e-bd04-20e206bc5039/content
Yeung, K., Howes, A., & Pogrebna, G. (2019). AI governance by human rights-centred design, deliberation and oversight: An end to ethics washing. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3435011
Yew, G. C. K. (2021). Trust in and ethical design of carebots: The case for ethics of care. International Journal of Social Robotics, 13(4), 629–645. https://doi.org/10.1007/s12369-020-00653-w