Interview with Ulrike Klinger and Philipp Hacker: Why the public interest gets lost in the AI gold rush
This interview is part of AI systems for the public interest, a special issue of Internet Policy Review guest-edited by Theresa Züger and Hadi Asghari.
This is an interview with Ulrike Klinger and Philipp Hacker, both professors at the European New School of Digital Studies. Klinger and Hacker highlight the risk that “public interest AI”, despite its potential, becomes little more than a marketing label, owing to the inherent misalignment between for-profit goals and public interest aspirations. They are optimistic that the EU's AI Act is a step in the right direction. Additional steps would include enforcing the law, increasing public funding for public-interest AI systems, and making it bureaucratically easier for academic researchers to access and handle large data sets. They also recommend paying attention to the purpose and incentives of those designing AI systems.
Ulrike Klinger is Professor of Political Theory and Digital Democracy at the European New School of Digital Studies (ENS) in Frankfurt (Oder). There, she heads the Digital Campaigns and Elections Lab (DiCE). She is also an associate researcher at the Weizenbaum Institute for the Networked Society in Berlin. Her research focuses, among other things, on the role of technology in democratic societies. She received the dissertation prize of the German Political Science Association in 2012. Between 2018 and 2020, she was Assistant Professor of Digital Communication at the Institute for Journalism and Communication Studies at the Free University of Berlin, while simultaneously heading the research group “News, Campaigns and the Rationality of Public Discourse” at the Weizenbaum Institute.
Philipp Hacker is Professor of Law and Ethics of the Digital Society at the European University Viadrina in Frankfurt (Oder), where he joined the European New School of Digital Studies (ENS). He co-leads the International Expert Consortium for the Regulation, Economics and Computer Science of AI (RECSAI). After studying law in Munich, he received his Master of Laws at Yale. As a postdoctoral fellow, he led a project on fairness in machine learning and EU law at Humboldt University of Berlin. His academic work focuses on the regulation of emerging technologies, in particular AI.
1. What do you think is required for AI systems to serve the public interest?
Ulrike Klinger: The current systems that we refer to as “artificial intelligence” are predominantly commercial products developed by companies seeking profits. Elon Musk has sued OpenAI, saying that he invested because the company's original idea was to build open-source, not-for-profit AI with the perhaps grand ambition of serving humanity. But the reality is that the most advanced systems right now, such as ChatGPT, Sora, Gemini, Llama, and Watson, are all owned by tech behemoths like Google, Microsoft, IBM, or Meta. One reason is their sheer market power; another is the fact that running complex systems like large language models (LLMs) requires enormous resources in terms of data and energy. Long story short: almost all foundation models are currently developed and owned by large commercial, for-profit companies that seek to secure market domination for their AI technologies. There is an AI gold rush. And while, in principle, it is possible to build technologies that both make money and serve a public interest, that is not the first and foremost incentive these companies have.
Philipp Hacker: This is precisely why we need regulation like the AI Act to further align economic and societal incentives.
Ulrike Klinger: The discourses around technologies, not only AI, are often shaped by techno-determinism. This is the assumption that technology somehow determines the future of society, or that technological necessities determine how technology evolves and what it looks like. We can find such discourses in both dystopian and utopian views. For instance, it is claimed that AI will save democracy, unless it destroys it first. Technologies do not have this power, because societies will mediate their effects. They are not external forces hitting us from outer space, but human-made, with thousands of hands reaching into these complex systems, tweaking and modifying them all the time. Again, AI systems are just industry products.
So, what does it take to make AI serve the public interest? In an ideal world, we would develop open-source AI systems that are publicly funded and publicly controlled. Legal experts, ethics teams, and civil society would be involved in the process. The tech teams building these systems would be as diverse as possible, not predominantly white and male. Code review and algorithmic auditing would keep an eye on things like bias, discrimination, and the impact of the systems on minority groups in society. Also, ideally one would not just steal training data from media organisations, artists, and social media. But so far this is just a fantasy; what we see in reality is quite the opposite.
Philipp Hacker: To make AI work for the public good, we need the right incentives, and then the right people with the right mindset in the right places. Where the market does not provide these incentives, regulation has to step in. This is the basic justification for rules like the GDPR, non-discrimination law, product liability, and the AI Act.
Personally, I see a lot of potentially beneficial applications in the realms of education and medicine, for instance. Take AI applied to genetic data. There is significant potential for detecting and potentially treating rare diseases, as evidenced, for example, in the research of Yves Moreau and colleagues. Simultaneously, as Moreau has been very vocal in pointing out, the combination of genetic data and machine learning is every authoritarian state's dream. Currently, a lot of research in this field is being conducted on Uyghur and Tibetan minorities in China, where the connections to very severe human rights abuses are obvious (see, e.g., misuses of genetic tests and biobanks, genomic surveillance). But even in Western societies, the crux lies in enabling advanced medical research, which could particularly benefit patients struck by diseases currently outside the focus of mainstream research, while simultaneously implementing the strict rules in the GDPR on purpose limitation, free and informed consent, and IT security. With the shifting political climate, we cannot take the rule of law for granted anymore, whether in Germany, the EU, or the US. Hence, some things – such as installing the technological infrastructure for AI-enabled public mass surveillance, or genetic testing for cultural, ethnic and “racial” types – simply have to be off limits, so that we do not build a surveillance infrastructure that can easily be abused after a regime change.
Similarly, AI is being used for fraud and hate speech detection or the flagging of child sexual abuse material – all things society has a significant interest in. But again, we need to walk a fine line between fostering socially beneficial use cases and unnecessary infringements on privacy and data protection. In many cases, the regulatory instruments are there. The difficulty lies in enforcing and operationalising them in everyday applications.
2. We regularly hear mentions of “AI for good” and “AI for public interest”. How relevant is this in the overall landscape of AI development?
Ulrike Klinger: As with all technologies, AI systems are neither good nor bad, but they are not neutral either. One can use LLMs for many creative and educational purposes. And they can be used to create propaganda, disinformation, and deep fakes with the intent to undermine trust in institutions and polarise societies. Technologies are never neutral; they are purposefully designed. Business models, norms, values, ideologies – these are not imposed on technologies afterwards, but inscribed into them in the production process itself. As Herbert Marcuse put it as early as the 1960s: what society and the interests dominating it intend to do with people and with things is projected into technologies by those who make and control them. Of course, framing AI as “good” and pointing to potential benefits for the public is an important part of marketing these systems. I think there is good reason to be sceptical of such claims when there are hardly any incentives for for-profit technology companies to build AI for the public interest.
Philipp Hacker: The United Nations, for example, has an AI for Good initiative that seeks to foster and highlight AI applications in socially beneficial use cases, particularly in the Global South; but defining what “good” actually means, and how it can be disentangled from idiosyncratic corporate interests, remains a challenge. Similarly, greater access to AI, for example through open-source models, is often labelled the “democratisation” of AI. While these initiatives are certainly laudable and often pursued with the best intent, we should remain wary of such labels. In the Global South, AI technologies may indeed facilitate access to educational materials, healthcare, or more balanced hiring systems. However, they may just as well bring about new dependencies on large technology providers, models not fit for use on populations that are not predominantly white or male, and data scraping practices that exploit vulnerable parts of the population. So, I would say: yes, applications in the “AI for good” realm and that part of the AI discourse are important, but the risk of “ethics washing” is quite serious.
3. Regarding currently proposed regulation in the EU, such as the AI Act: what type of AI development does it encourage? Do you see the AI Act supporting the developing ecosystem of AI in the public interest?
Ulrike Klinger: The problem with tech regulation is always that it needs to keep up with fast-paced development. It is really hard to regulate things that you don’t fully understand (yet). So the approach the AI Act has taken is promising: assessing the level of risk an AI system poses to society and obliging companies to mitigate such risks. The core idea is holding the creators of technology accountable for the collateral effects their tools may have. This way, they are incentivised to at least consider potential harms and the public interest while creating their tools, rather than optimising only with their business models in mind. I find this promising because it does not stand in the way of developing AI per se, and it also does not fall for the myth of “unregulatability” the industry has been telling, claiming that they themselves wouldn’t know what their algorithms were doing. For a very long time, over 15 years, we let platforms grow and become powerful without any regulation – as democratic societies we should not repeat this mistake with AI. The AI Act, while not perfect, is a clear message in this direction.
Philipp Hacker: I agree. The AI Act does contain some minimum standards for foundation models and highly sensitive applications, such as remote biometric identification (in essence, facial recognition systems) in public spaces. However, concerning both of these high-stakes scenarios, more stringent rules would have been desirable. For example, under the AI Act, state-of-the-art cybersecurity is only required for the most potent foundation models, such as GPT-4, not for GPT-3.5 or anything below it. Given the current geopolitical climate, this does not make sense at all. Similarly, clearer and tighter minimum rules would have been required for ex post biometric identification. Overall, however, the AI Act makes many industry best practices mandatory. In the end, it might have been possible to achieve a similar result with AI liability: if you do something wrong as a developer, and harm ensues, you can be sued. This is potentially the most potent mechanism, particularly for smaller companies that do not have deep pockets. In fact, the product liability directive is being updated to apply to software, including AI, at the same time as the AI Act. Together, they form a fairly potent package. But regulation can only get us so far. As I said, for public interest AI, you also need the right people in the right places.
4. Which current policies do you see supporting AI’s use for the public interest?
Philipp Hacker: Actually, unfortunately, very often quite the opposite is the case. If you want to do AI-driven research as a medical researcher, it is extremely onerous to get approval. Simultaneously, social media companies effectively get this type of data for free from their users, based on an illusion of “informed consent”.
What we would need is a new research data law that replaces the incredible jungle we have at the moment, which is actively hindering essential research we should be doing to save people’s lives. Such a law would need to be coupled with safeguards, strict oversight, and the necessary funding to actually build public-interest AI systems. In Germany, the new Health Data Use Act goes some significant way toward this goal, even though it is not perfect and would need better safeguards in terms of IT security and data protection. The same can be said for the European Health Data Space, which is currently being set up. In addition, the German government, for example, is now setting up a deep tech and climate fund which will receive €1 billion by 2030. This is great, and it is one way to get the right people in the right places. The UN is considering a similar fund (see the Global Digital Compact, para. 63). But it is far from enough. This is literally a tiny fraction of what any single competitive AI company spends on AI in a single year (Meta recently mentioned that it will spend 37 billion on AI in 2024). So we are quite far away from a climate that is really supportive of public interest AI. Money, computers, chips, data, and talent retention all need to be in place. Currently, they rarely are.
5. Which developments hinder the successful development of AI in the public interest?
Philipp Hacker: Quite clearly, the significant bureaucratic hurdles to engaging in any kind of meaningful research in this direction as soon as personal data is involved. In Germany, there are laws for this at the state level – so you have different rules to follow if you want to run a clinical trial in hospitals in Bavaria, Berlin, and Hamburg. That is insane.
6. Which aspects of recent AI developments are most worrying when it comes to democratic rights and public interest?
Ulrike Klinger: In 2024, more than 60 elections will take place around the world. Mexico, South Africa, India, and Indonesia – some of the most populous countries in the world – will have major elections. In the EU and the US alone, some 800 million citizens are called to the ballot box. We are currently in a situation where everyone, not just experts, can access and use powerful technologies to create disinformation or deep fakes. The combination of cheap and easy generative AI plus the distributive power of social media platforms is here and ready for election campaigns. It is not a coincidence that the World Economic Forum has identified disinformation as the number one threat to democracy in the next two years. Politicians and parties have already used tools like Midjourney to create visual propaganda, and we will see a lot more of it. Images and videos are powerful because they speak to our emotions, and they have effects even when people know they are fake.
Everything one would need for an avalanche of disinformation campaigns is ready at our fingertips. At the same time, there is not yet any effective regulation. The Digital Services Act has technically been in effect since February 2024, but it is not yet up and running. This also means that researchers still can’t get access to platform data through Article 40 of the DSA, which means that no one can really monitor these election campaigns on social media. Article 40 establishes – for the first time in history – a legal right for researchers to access platform data to study the collateral effects and threats to democracy. But all this is too little, too late for the 2024 elections. What role generative AI will play in disinformation and propaganda in these campaigns, we will not know, because we do not have access to the data to study it. This worries me a lot. It is the worst possible timing for these election campaigns, when high-risk technologies are widely available but regulation has not yet caught up. In this situation, anything goes.
Philipp Hacker: I agree, and just want to add: I have myself, together with other scholars and advocacy groups, pushed hard for an equivalent of Article 40 DSA in the AI Act. It has not happened. So here, we really do not have any independent oversight of potent AI models and their use from the research community. What could possibly go wrong…?