
    “A long-term goal of our work is to make clear what the technology can actually do”

How do we design the algorithms that shape our society? And which rules must apply when programming AI so that it serves the good of all? The international research project “Ethics of Digitalisation”, funded by Stiftung Mercator, was launched in August. The first research sprint of the two-year project ended in October 2020. Thirteen fellows spent ten weeks studying the challenges associated with the use of AI in the moderation of online content. The sub-project was coordinated by the Alexander von Humboldt Institute for Internet and Society (HIIG). Professor Dr Wolfgang Schulz, Research Director at HIIG, provides more details in a Brain City interview.

Professor Dr Schulz, the first research sprint of the international research project “Ethics of Digitalisation: From Principles to Practices”, coordinated by HIIG, has now been completed. Why was the project launched?

The starting point was the observation that the ethical principles applied to digitalisation are usually quite abstract. Although the topic has been discussed for a long time, the results are not yet concrete enough for users and developers to actually work with them. Our aim is to work together across disciplines to develop workable solutions for the various stakeholders.

Which stakeholders are you thinking of?  

Politicians and regulators should be able to learn something from our work, but so should companies that develop products or provide application contexts. Our work could also be helpful for users. We are continuously asking who can benefit from our results.

“AI and Content Moderation” was the topic of the project’s first research sprint. What were the key results? 

Let me delve deeper into that: at the beginning of the project, there was the observation that, especially in times of COVID, the major social media platforms such as Facebook, Twitter, and YouTube had greater access than ever before to technical mechanisms that identify content that is either illegal or against their standards. They can also let the technology delete this content automatically. This can result in many problems. One example is the lack of transparency: which criteria does the software actually use to make these decisions? Companies could also try to manipulate the formation of opinions via these algorithms. It is also possible that content that is lawful and does not violate the platform’s standards is deleted simply because the algorithm fails to recognise this. As a result of the first research sprint, the fellows developed three policy briefs, which essentially address two points.

And what are these points? 

One of them is the audit: the testing of algorithms by third parties, i.e. not in-house by the companies themselves. The researchers make very specific suggestions here. They propose that companies pay into a pool that would fund independent human audits. They also call for transparency that goes far beyond what the companies provide today. In other words, transparency in the sense of a self-declaration by the companies in addition to third-party audits. Such standardised declarations could be used, for example, to compare whether Facebook acts differently from YouTube or Twitter. I also find it very important that the researchers asked whom this transparency ought to serve.

Can you be more specific?  

The concept of transparency is often viewed in a very general way. But it’s extremely important to ask: who should know what and why? There are significant differences in what users, regulators, and competitors need to know. The policy brief that the researchers have written will help shape the debate in a more nuanced way. 

A third recommendation from the participants relates to freedom of speech in the digital space. 

The research group found that there are still clear limits to what algorithms can and cannot detect. From this follows the very simple but important recommendation that human reviews are generally preferable to technical ones. To explain this with an example: when an algorithm searches for a certain turn of phrase, it may also delete content that engages critically or satirically with radical material. Understanding such nuances is highly complex, even for very efficient language-recognition systems. A long-term goal of our work is therefore to recognise and make clear what the technology can actually do and where its limits still lie. This will allow politicians and regulators, for example, to adjust their demands accordingly on an ongoing basis. A permanent dialogue is needed so that one neither over- nor underestimates technical progress and has a reasonably clear idea of what the systems can and cannot do.

The sprints used in the project are a very innovative research format. Can you explain what they are about?

We believe that interdisciplinary work needs its own formats, a certain time frame, and a structure. This is the only way it can work, and it is very important for society that science develops such structures. In our area of research, the format of sprints or clinics is still relatively untested, and we are still experimenting with it. The term “sprint” is taken from software development, whereas a “clinic”, of course, is borrowed from the medical field. For us, clinics are the smaller research formats, while sprints run over a longer period of time. What both have in common is that an interdisciplinary group comes together to solve a specific problem and works out concrete solutions within the framework of an organised process. Each time we arrange this process differently, we gain new insights. This also includes, for example, the question of how much leeway a group should be given to specify the problem.

Do you also work with industry? 

Many of the platforms we look at are developed and operated by large companies. If one brings companies into the process too early, they may have too much influence on the shape of the group’s work. On the other hand, research results should not arise in a vacuum; they should connect to practice. We therefore seek contact with industry representatives. However, they are brought into the project only once the topic has taken concrete shape, and they then contribute their practical perspectives. They will also have access to the results. It is also possible that we will invite companies in to discuss the research results with them. With regard to the structuring of the work process, which in a sprint can extend over two to three months, we are constantly learning.

The “Ethics of Digitalisation” project is inherently international in nature. How is this internationality realised within the project?

The project is very international; we see ourselves as facilitators. Internationality comes into the project in different ways: on the one hand, the individual sprints or clinics are carried out via the organising Network of Internet & Society Research Centers (NoC) in cooperation with partner institutions in other countries. On the other hand, when recruiting the young researchers for each sprint, we advertise internationally via our partner organisations in order to achieve as good a mix as possible in our teams.

Does “mixed” in this case primarily refer to disciplines?

Yes. Our teams include computer scientists, lawyers, governance researchers who do not work in the legal field, political and communication scientists, and ethicists. The further make-up of the teams is determined by the application context. We are typically not looking for students, but rather for advanced doctoral students or researchers in the postdoc phase, as we require a certain amount of expertise.

What advantages does Berlin as a science location offer HIIG and the project?

Our colleagues love coming to Berlin; we can use the location as part of our international promotion efforts. In addition, the scientific environment for us as an institute is extremely good. We have the Humboldt-Universität zu Berlin (HU Berlin) on board as a co-founder of our institute, as well as the Berlin Social Science Center (WZB) and the Berlin University of the Arts (UdK Berlin). But our networking goes much further; it includes, for example, the Einstein Center Digital Future. We benefit greatly from this. Scientists from Berlin are always present at our sprints.
