Solid, Mastodon, and the risk of overburdening the user

Ana Pop Stefanija, imec-SMIT, Vrije Universiteit Brussel, Brussels, Belgium

PUBLISHED ON: 21 Feb 2023

FUNDING

The op-ed is a result of research done as part of the project DELICIOS “An integrated approach to study the delegation of conflict-of-interest decisions to autonomous agents” (G054919N), funded by the Fonds voor Wetenschappelijk Onderzoek – Vlaanderen (FWO).

When we look at the recent Twitter-Mastodon exodus, we see a new trend of responsibilisation, of placing the burden elsewhere: from platform to user. This tendency comes in three distinct forms: the burden to familiarise oneself, the burden of trust, and the offloading of responsibility to regulation. Why does this matter? Each paradigm or technology shift produces optimism, and this optimism stems from technology-savvy people who underestimate the new skills and responsibilities they demand of less technology-savvy people.

This discrepancy introduces novel risks and challenges that must be accounted for and mitigated before deployment. If we want to design technology for the people, as these decentralised technologies claim to, we need to design it with people in mind. The events surrounding Mastodon, and the research I have done on Solid, inform this opinion piece.

THE PARADIGM SHIFT

The shift from a de facto centralised web to re-decentralised solutions has, for a while now, been seen as a possible remedy to the established and dominant paradigm of doing data- and AI-based business. This model for building personalised and predictive products rests on (inferential) profiling grounded in excessive and ubiquitous datafication, and has been shown to be detrimental to individuals. It leaves individuals with nothing more than pseudo-autonomy and agency, with no ability to decide where, when, how, by whom and for what purposes data is collected and used. This extends to how algorithmic outputs are produced and how they affect individuals, online and offline.

Two prominent attempts at this shift towards decentralisation are currently gaining momentum. The first is Mastodon, which re-emerged immediately after Elon Musk’s takeover of Twitter; the other is Sir Tim Berners-Lee’s Solid specification. Although different (one a “radically different social media” in the so-called fediverse, the other a data storage solution), they share more than a few similarities: both are decentralised, open source and interoperable. As a counterbalance to Big Tech, they aim to hand decision-making (Mastodon) and control over data (Solid) back to individuals, focusing on individuals’ control, autonomy and self-directedness when it comes to their data and its derivatives. However, while these attempts might be much needed, the implementation of technological solutions often makes more sense from a technical than from a user perspective. It is a sort of techbro bias: a tendency to rarely consider the characteristics, needs and skills of the rest of society.

Breaking things isn’t enough. We need to move slowly, taking into account the various contexts and areas of use, and acknowledging the situatedness and particularities of different users.

THE BURDEN TO FAMILIARISE ONESELF

Shifting paradigms and technologies is laborious for users; it requires mental capacity and skills. Solid and Mastodon introduce a back-to-the-roots way of working and communicating. This way is foreign to the majority of people and only makes sense to the in-crowd.

Without help to navigate the unknown technology, the burden to familiarise oneself often falls entirely on the individual. This familiarisation concerns not only the use itself but extends to the consequences of using the technology too. We see this happening on two “fronts”. The first is what we can call the surface, the technological level: understanding the concept of decentralisation, what it means, how it works, who is “behind it”, but also what its affordances are and how to navigate them, and what you can and cannot do with it and how.

Beyond the surface, we have the socio-technological level: familiarisation with the potential risks and harms, and the ability to make decisions and act upon that knowledge. This can take many forms: from which Mastodon instance to sign up on, or which Solid pod provider to store your data with, to navigating and understanding privacy policies and terms of service. It will also require assessing the trustworthiness of the provider, accounting for possible failure scenarios, risks and threat models, and understanding and envisioning potential consequences.

This amounts to “forcing” control back onto individuals without first familiarising them with the technology and its consequences. Techno-optimists are already implicitly familiar with all the positive consequences the technology can create; blindly believing that new users will feel the same way is dangerous. Burdening individuals with making the right decisions and being responsible for their data and what happens to it, without proper guidance, support and “protection-by-design” (protection embedded in the technology’s design and its lifecycle), could lead to the negative impacts outweighing the positive ones.

THE BURDEN OF TRUST

The burden to familiarise oneself is complemented by the burden of trust, which places individuals in complete reliance on, and dependency upon, individual actors. The open source nature of Mastodon and Solid, which allows just about anyone to build and modify them, leads not only to non-standardised services across pods and instances; it also creates epistemic and power imbalances. It positions the owner (of the instance or the pod) as the dominant decision-maker: they are the ones who decide on the rules and enforce them, who decide whether to keep the instance or pod online, whether to protect the data, whether to share it with other parties, and so on. Coupled with the lack of regulation and standardisation, this often demands blind trust from (potential) users. That leaves individuals vulnerable to the volatility and goodwill of the owners.

New users might not be able to fully grasp what they are signing (up) for, or might not be able to properly make sense of and envision the potential consequences. As Doctorow (2022) says: “I feel like I'm on the precipice of a great, epistemological void. I can't ‘do my own research’ for everything. I have to delegate my trust”. But to whom is the question.

REGULATION WILL SOLVE IT?

During interviews for my research, Solid designers often stated that users themselves will have to signal to the authorities if their data or the system is being abused. In practice, however, that would mean that users have to be knowledgeable and have the capacity to inspect a system (which, as pointed out above, is not feasible). Emphasising that using decentralised systems implies distributed responsibility and, with it, accountability, a number of actors, from academics to the platforms themselves, underline that it is the responsibility of users to (learn to) self-govern.

But how attainable is this, and does everyone carry the same burden? And should this even be the case? On the one hand, technology designers are offloading the burden of protection to regulation and policy as well as to users; on the other hand, is policy even ready for the decentralised web?

This is additionally complicated by the question of data policies, terms of service and codes of conduct: as envisioned, decentralised instances have the freedom to make their own rules. As Mastodon (2022) says in a reply to a tweet: “Each host can set their own rules. You should familiarise yourself with them before joining”. In the absence of clear guiding regulation, principles or a sanctioning authority, the questions of responsibility and accountability remain open and unregulated, and the burden falls on individual users.

BUILDING ON SOLID GROUNDS

As things stand, the burden of responsibility is being placed on the user. Since this is an unrealistic position, we risk the only solution becoming centralisation, placing power and control back in the hands of corporate interests. If we want to start thinking in a truly decentralised manner, possible solutions need to be worked out in collaboration between all the parties involved. This means that technology should be designed to carry a burden of its own. How do we move forward?

The burden of familiarisation can be tackled with simple design solutions: for example, interface and design cues that provide technical familiarisation. Designing seamfully will introduce stop-and-think interruptions for individuals. External documents, tutorials and guidelines should inform and guide familiarisation with the consequences of different actions. However, these solutions will only have a real positive effect if the technology is designed with protection by default and by design.

The burden of trust and the overreliance on individuals and/or private interests can be remedied by establishing, standardising and applying a set of agreed-upon protocols. This would not only enable a similar (if not the same) experience, functionalities and affordances for everyone, but would also minimise potential risks. The question of who will take on the role of setting these standards, and how this can be envisioned, is important but still unanswered.

Even if policy is ready for decentralised applications, it cannot be a silver bullet. While regulatory bodies should start thinking about decentralisation, technology should be designed with embedded safeguards. That should include the ability to pinpoint exactly where responsibility, accountability and liability towards the user lie.

We need to account for and mitigate these issues if we want technology that lives up to its potential outside the ivory tower of technology labs and in the wild, used by ordinary people.

ACKNOWLEDGEMENTS

I would like to thank Rob Heyman, Nathalie Van Raemdonck and Jo Pierson, as well as the editors of Internet Policy Review for their valuable feedback and suggestions.
