Interview with Friederike Rohde: The environmental impact of AI as a public interest concern

Theresa Züger, Alexander von Humboldt Institute for Internet and Society, Berlin, Germany, theresa.zueger@hiig.de

PUBLISHED ON: 30 Sep 2024

This interview is part of AI systems for the public interest, a special issue of Internet Policy Review guest-edited by Theresa Züger and Hadi Asghari.

This is an interview with Friederike Rohde, a sociologist of technology and researcher at the Institute for Ecological Economy Research, conducted in February 2024. It covers a number of topics at the intersection of public interest AI and sustainability, including a key commonality: both fields' relation to distributional justice. Rohde cautions that the economic structure within which AI is currently being developed is unfortunately at odds with the public interest. At the same time, she believes that intelligent algorithms and digitalisation can contribute to environmental and climate protection in tangible ways.

Friederike Rohde is a sociologist specialising in science and technology studies. She completed her studies at TU Berlin and has worked as a researcher at the Institute for Sustainability, Center for Technology and Society (ZTG) at TU Berlin and at the Institute for Ecological Economy Research (IÖW). Her projects include Bits & Bäume, a large movement of civil society and science for sustainable digitalisation, and the SustAIn project, which developed a sustainability index for AI; she also completed her PhD on digital energy futures. At the Berlin Ethics Lab (TU Berlin) she is currently involved in developing innovative formats for research, education, and multidirectional exchange that deepen reflection on the shared values and norms framing our engagement with emerging technologies in the digital society.

Theresa Züger: I think we share the observation that the sustainability of AI itself is becoming more and more of an issue. I would like to start by asking you to reflect on what you see as the most critical developments in terms of the sustainability of AI and its relation with public interest.

Friederike Rohde: I think one important aspect that has a lot to do with the public interest is the two major social criteria of transparency and participation. These always play a role when you look at any public interest balance sheet. It is always about who is involved when, how, and in what form. That is an issue that is still being neglected in this whole debate about AI.

It is often proclaimed that participation is necessary, but involving different stakeholders in the development of technology, in the design of models, or even in the curation of data doesn't happen very often. Economic and time pressures prevent this from taking place. The problem is also that participation takes time; it does not work quickly and efficiently. That often contradicts the logic that these developments and expectations are subject to. That's a bit of an area of tension: the desire to optimise many processes and make them more efficient versus enabling participation. When it comes to the issue of transparency, things get even more tricky. AI systems are very large, complex models with billions of parameters. Explaining to anyone in an understandable way what such a model is, what's actually in it, what it involves, and what it means is the hard part. Yet it has a huge public interest aspect to it. In other words, it's not just about creating transparency about how the model is structured, how it works, how many parameters it has, or perhaps how much energy was used for training, but also about who is actually affected by it. Who are the vulnerable groups? Have they been defined? Was a pre-test carried out? Was anything found where you could say, yes, there is a potential for discrimination? In other words, to what extent is the AI development process already dealing with things that are relevant to the public interest, and is that made transparent?

TZ: I find it very interesting that you started with this, because when it comes to the sustainability of AI, often the first thing people think about is the ecological footprint of AI developments. Are there ecological issues that are central to the public interest for you?

Friederike Rohde: With the other two topics the link is perhaps a little more indirect, because with the ecological sustainability aspects the link to the public interest is of course much more direct: basically everything you do that goes beyond the planetary boundaries is not good for the public interest, because we all live on this planet and it is the basis of life for all of us. So I would say that all the ecological effects caused by the digital infrastructure that is needed to make AI work have a public interest aspect. I think that's also clear when it comes to energy, although the competition for use is not quite as clear there as it is with water, for example. With water it's really about whether people can use the drinking water or whether a data centre uses it. That's exactly why people are protesting against data centres in Chile and elsewhere: the question of who has access to water as a resource affects aspects of the public interest. There isn't as much of a fight over energy as there is over other resources yet, but in the medium term it will also be about what valuable energy is actually used for. Are we going to use it to train algorithms, or to keep our mobility systems, houses, and processes running, and for the provision of public services, for instance?
We have tried to point out that there is also an economic dimension to sustainability. Yet it always depends a bit on what you mean by the public interest. An economic structure that is strongly monopolised is exactly the opposite of an economy oriented towards the public interest. So you would actually say: the economic structure that AI is currently being developed within is very much at odds with the public interest. Of course, there are also other, smaller approaches: small companies, machine learning development oriented towards the public interest. But let me put it this way: the majority of applications are developed by companies that form oligopolies. That is quite contrary to the public interest, because of course the question is whose interests are actually being represented.

TZ: We noticed in your answers that this concept of sustainability, as we see it in your studies, has a large overlap with an understanding of public interest. Could you trace the overlaps for us?

Friederike Rohde: If you think of sustainability as social sustainability, then it means that not only basic social needs must be met but also opportunities for societal self-realisation. If this is the definition you are working with, we can say that there is a fairly large overlap with the public interest. What we understand by the public interest is fed by various debates, but of course also by the doughnut economics model, in which planetary boundaries are set, basic social needs are met, and the economy is only considered prosperous if it serves those social needs and operates within planetary boundaries. If you understand that as the public interest, then there already is a strong idea of the public interest in the idea of sustainability.

I also wonder to what extent the whole issue of inter- and intragenerational justice relates to the public interest. The question of how to operationalise the public interest has a lot to do with distributional justice: who actually has access to what is key. What I always try to establish from a sustainability perspective, and what I actually like, is that sustainability is about global, social, and ecological distributional justice. And that is perhaps also what the public interest is actually about at its core.

TZ: Let’s go to the regulatory level. We are currently seeing a lot of regulation at the European level, which is now more or less in the pipeline. How would you assess the current developments? Do you think that they will encourage and support developments in AI for the public interest? Or do you see major obstacles?

Friederike Rohde: I'm a little ambivalent about this. On the one hand, at the European level it's good that we've managed to get AI regulation like this off the ground at all. It's not that easy to regulate such a rapidly developing technology, and it has also become clear that there are a lot of dynamics involved. Now we even have general purpose and foundation models included in this regulation, which is a great effort. At the same time, the fact that it's very often about foundation models, i.e. machine learning models that can be used for various tasks, of course sets limits to regulation, because the impacts often depend on the use case.

So I think it's good that this regulation exists and that it addresses many, many important points. But there are of course limits to the risk-based approach, especially when it comes to developments like foundation models, because you can't say from the outset what the use case is; they can potentially be used for different tasks. That is one aspect I am critical of. Whether AI is being used in the public interest is difficult for me to judge, because the question is what organisations that develop and use AI do with this regulation. Are they saying that we now have to go along with assessments and tick a box, or are they also using it as a tool to reflect more deeply? It should be the aim of risk management to recognise things such as negative effects or discrimination in the first place. So I do believe that this whole debate about responsible AI use can be a lever for using AI in a way that is oriented towards the public interest, as you try to take the possible negative consequences into account a little more. But I'm not quite sure yet whether the legislation will actually lead to AI for the public interest in the end.

TZ: And with regard to the ecological sustainability of AI systems, do you see enough in the regulation there to prevent a problematic development?

Friederike Rohde: Well, it's definitely a step forward that these things are in there at all. It is a good step that high-risk systems and so-called general purpose AI have to report on the energy and computing resources used for training. But the issue won't be solved with reporting obligations alone.

I've just read something I find super interesting: a bill has now been introduced in the US to address the ecological consequences of AI. Kate Crawford published an article in Nature on 20 February, in which she reports that the senator from Massachusetts initiated an AI environmental impact act on 1 February, with the aim of establishing standards to assess the ecological footprint of AI. The idea is to develop a framework for how companies that develop AI can do these environmental assessments. I found that very interesting. It's actually also the case that at the World Economic Forum in Davos, the CEO of OpenAI admitted that this whole issue of energy consumption is simply an insane topic that won't be easy to solve. When he then said something about nuclear fusion, which of course raises the question of whether this is the way forward, I had to smirk a little, to be honest. But it's true that it's a big issue and it shouldn't be neglected, especially in the US, where most companies and tech groups are based. We should also think about ways to reduce the energy and resource use for training, and about concepts such as tiny models or task-specific models.

TZ: I would like to go into another aspect, namely that there are more and more AI projects oriented towards the public interest that are intended to support sustainability in many respects, or to serve as measures against the climate crisis. For example, there are projects intended to reduce energy consumption in production or measure emissions, as well as programmes intended to reduce waste, improve recycling, or map something like biodiversity. I would be interested to know what impact you think such systems have and how you rate them. In other words, in terms of the overall political effort each year to do something about the climate crisis, where do you see these AI-driven projects?

Friederike Rohde: On the one hand, I believe that intelligent algorithms and automation, or digitalisation in general, can contribute to environmental and climate protection. The fact that you can now use them to analyse the big climate models, etc., makes total sense, even if you have to say, of course, that we already know what the problem is. It certainly makes sense for better monitoring of the consequences of climate change, or for climate adaptation. But there are two problems with this. The first is that only a fraction of all AI applications are actually developed with the aim of doing something for the public interest and/or for environmental and climate protection. I don't know what the current figures are, but there was a study by Germany's central environmental authority in 2019 that looked at all these AI start-ups, and it came up with a figure of around 1%, which is super low. There are useful applications and it makes total sense to promote them, but since they are only a marginal fraction of all applications, their overall impact is also relatively limited. The second problem is the question of what they are actually changing. It is often the case that they perpetuate existing processes or make them more efficient. Is this really what we want? In agriculture, the use of pesticides is reduced by 30% thanks to AI, but that still doesn't lead us to a different form of agriculture or land use. My impression is that these technologies are very often used to make existing processes more efficient. Companies are becoming more efficient, employees are becoming more efficient, everything can be utilised more efficiently, land use is becoming more efficient, and so on. But is that really transformative in the end? Completely different things also need to be transformed, such as our thought patterns, practices, or governance structures.
And the question is whether technology can help us with this, or whether it is enough to hope or trust that this will happen through technology. Because these are also processes of change that happen on a social level. What do we have to do? How can our thought patterns actually change if we say, okay, we want to travel differently in the future, or produce and use our energy differently? This is an aspect that is very much linked to the question of how transformative these AI systems can actually be. There are currently strong social debates on the transformation processes that need to take place in order to achieve sustainability goals. It is often stated that there is theoretical savings potential. The industry coalition Bitkom just published a study saying how many tonnes we can save through digital technologies. Yes, of course it's nice to have this theoretical savings potential, but the question is whether it can really be achieved. And there are also societal debates actually taking place around the implementation of many technologies. If you think about how controversial the heat pump discussion was, or even just the 30 km/h speed limit, or any speed limits on roads in Germany for that matter: does AI help us in any way to advance these societal debates? What is key is that we have social negotiation processes, which are always part of transformation processes towards sustainability. The role of AI is limited in this respect.

TZ: I find that to be an exciting point. As a final question: have you ever seen a project anywhere where AI technology is used in a different way, where you would say that it actually supports a deeper structural change that we need? Have you ever come across that?

Friederike Rohde: No, I haven't actually come across any yet. But there are some; I think it's possible. One area where it's always seen as very helpful is the pharmaceutical industry and medical technology. Recognising protein folding with the help of AI, and other scientific progress, can help cure diseases. I don't know whether this is transformative, but these are things that exist. I also believe that AI technologies can be used to break through problematic practices. At AI for Good, for example, I had a little look at what solutions they have for using AI technologies. They use them to track networks for forced prostitution, to track money flows and laundering, and to try to uncover such networks and break through these very problematic social practices and exploitative relationships. So I think there is potential!