Germany: A change of (dis)course in digital policy

Matthias Spielkamp, AlgorithmWatch, Germany
Malte Spitz, Gesellschaft für Freiheitsrechte, Germany
Henriette Litta, Open Knowledge Foundation Deutschland, Germany
Christian Mihr, Reporters Without Borders Germany, Germany
Christian Humborg, Wikimedia Deutschland, Germany

PUBLISHED ON: 20 Sep 2021

Fax machines in health departments, schools without email addresses, millions spent on screwed-up apps—the pandemic has unforgivingly revealed that "business as usual" in digital policy-making endangers our future. The focus must no longer be on security interests or the profit margins of tech companies, but on the common good. This is what F5, named after the reload key, stands for—a new coalition calling for a new perspective on digital politics.

A closer look at recent debates in Germany shows how crucial civil society voices have been in preventing detrimental developments: among the most notable examples are the resistance to upload filters in the EU Copyright Directive, the (il)legal framework of Germany's foreign intelligence service and data retention policies—all of which were overhauled thanks to strong resistance from civil society in Germany and Europe. All too often, political decision-making has lagged behind the rapid change of the digital market. Only now are political decision-makers beginning to take steps to oblige platforms like Facebook and YouTube to become more transparent and to implement a common European framework for platforms' response to hate speech. They realised equally late that it's an aberration to leave cloud infrastructure entirely to private companies. Invariably, it has been organisations like ours that have ensured that policies were set back on track for the benefit of users and citizens.

Change of perspective on digital policy

We—F5, a new coalition pulling together AlgorithmWatch, the Society for Civil Liberties, the Open Knowledge Foundation Germany, Reporters Without Borders Germany and Wikimedia Deutschland—are calling for a change of perspective: digital policy must finally centre on promoting the common good. It is an appalling waste of scarce resources when government and public authorities create precedents in the form of policies whose detrimental effects on society later have to be mitigated. Much better policy-making would be possible if rules for the digital age were not conceived in non-transparent procedures, all too often driven by lobbyists, business consultants and the sales departments of tech giants—leaving watchdog organisations to build up enormous pressure afterwards to rein policies back in for the benefit of all.

Instead, a democratic, open, inclusive and transparent digital policy process must focus on the common good. But this can only succeed if more voices are heard and involved. Our organisations understand and connect the diversity of technological and societal change. Strengthening and institutionalising these diverse voices is one of the goals of our alliance.

We are committed to protecting the right to secure and confidential communications. This fundamental right, taken for granted in analogue life, is increasingly being eroded in the digital realm. Police and intelligence services across Europe have been granted ever-increasing powers to collect and access data and to intercept private messages and calls, often without appropriate reforms of the oversight structures. Now member states are going even further in calling for the development of additional technical means to circumvent encryption, a step that would undermine the rights and security of millions of Europeans and send a catastrophic message to repressive states worldwide.

Effective control of platforms and algorithms

We advocate for European platform regulation that promotes freedom of expression, freedom of information and civic discourse on the internet. For this to succeed, meaningful transparency obligations need to be imposed on platforms that shape public interaction at scale, such as YouTube, Facebook and Twitter. Platforms must be legally required to provide users with easy-to-use ways of submitting notices and of appealing content moderation decisions. Safeguards must be put in place against the risk of platforms overblocking legitimate content; at the very least, meaningful human oversight and review processes must counteract platforms' one-sided incentive to delete legitimate content rather than risk fines for inaction. Only such measures will ensure that users can enjoy digital self-determination. At the same time, the state must not abandon its responsibility: we need new ways to bring the judiciary to bear in sanctioning digital wrongdoing, instead of in effect delegating both the policing and the enforcement of democratically agreed rules almost entirely to private companies.

Automated decision-making (ADM) systems, often labelled artificial intelligence (AI) systems, are increasingly being used to select job applicants, make medical diagnoses, detect welfare fraud, assess creditworthiness, and the like. The Artificial Intelligence Act, currently being negotiated within the European Union, acknowledges that so-called AI systems can carry high, or in some scenarios even unacceptable, risks for individuals, communities and societies. Among other things, it places the use of AI in credit scoring and in the field of employment and labour in the high-risk category, because these usage scenarios affect essential aspects of a person's life plans and autonomy. But self-assessments of risks by developers—often corporate actors—and deployers, as currently foreseen by the AI Act, will not suffice to ensure that the use of these systems is guided by individual autonomy and the common good. If not combined with reliable enforcement mechanisms and accountability frameworks, the new rules risk becoming toothless. In the public sector, authorities must be obliged to systematically evaluate the risks of a system through an impact assessment and to disclose all ADM systems in use in public registers. The risks that come with the use of an ADM system can only be assessed case by case, not via pre-determined risk categories.

Digitisation and transparency for the public good

In order for our society to benefit as much as possible from the ideas, skills and diversity of civic engagement, however, there also needs to be greater support for digital projects oriented towards the common good. Liability risks for volunteer-run platforms must be lowered, and self-governance must be promoted. Freedom of information should be strengthened through transparency laws at all levels, and free access to knowledge for all must become a benchmark for any political reform. Where public money is spent, such investments should by default favour freely usable outputs, whether software, educational materials, data or other content. And public investments must favour broadly accessible, sustainable structures that are strongly committed to democratic values and human rights.

Our free and open digital society thrives on preconditions that neither the state nor companies alone can guarantee. F5 aims to be a major civil society counterweight to dominant business interests and to policy-making that tends to lose sight of the common good in digitisation. We will establish an additional dedicated line of communication between civil society groups and policymakers at the federal level, in the form of regular high-level expert events, with a pilot planned for 29 September, right after the German general election. In launching this event series, the future viability of a democratic digital society is our main goal. This is the measure by which we will evaluate the actions of politicians and companies—and, of course, our own.
