“Complete freedom of research is an illusion”

13.12.2023
4/2023

Researchers find themselves trying to strike a balance between freedom and responsibility. Digitisation and artificial intelligence are raising explosive new issues. In the following interview, two ZHAW experts talk about the need for universities to take account of ethical aspects and the consequences of their research.

Academic freedom is a fundamental right as defined in the Swiss Federal Constitution. Why is autonomous research important?

Karin Nordström: People are curious and inquisitive by their very nature. These are qualities that have contributed a great deal of good to society. Legislators want to intervene as little as possible here so as to give creativity free rein.

Christoph Heitz: This constitutional article also makes reference to the fact that researchers should not allow themselves to be exploited, for example by economic or political powers – at least not on an undeclared basis.

For some people, freedom of research goes too far; others believe it is in danger.

Nordström: Complete freedom of research is an illusion. A researcher’s interests and the questions they ask are the result of their socialisation within certain social, economic or political contexts or are guided by research traditions. In an ideal scenario, researchers will turn questions asked by other people into their own questions, and this does not have to come at the cost of freedom. In the worst case, they will only put forward and answer their own questions, which pretty much equates to research conducted in an ivory tower.

“A researcher’s interests and the questions they ask are the result of their socialisation within certain social, economic or political contexts or are guided by research traditions.”

Karin Nordström, President of the ZHAW Ethics Committee 

How free is the field of applied research?

Heitz: As an application-oriented university, we generally conduct our research in cooperation with partner companies or institutions. It is therefore important to negotiate the objectives before starting to work together. The projects need to be structured in such a way that they are compatible with the values of the knowledge workers involved while fully acknowledging the given framework conditions.

Is that not a contradiction?

Heitz: Not per se. We aren’t simply agents carrying out other people’s bidding. However, as we don’t only generate knowledge, but also help to implement it in practice, we change the course of events in a very concrete manner. As universities of applied sciences, we also have a de facto mandate to change the world for the better. This is a noble mission. Nevertheless, we have to ask ourselves how this mission could potentially be thwarted.

“As we don’t only generate knowledge, but also help to implement it in practice, we change the course of events in a very concrete manner. We have a de facto mandate to change the world.”

Christoph Heitz, data scientist

What is the answer?

Heitz: Through power structures or uncertain financing options, for example. It is thus a great challenge to maintain our independence as a research institution, something that is of central importance for our acceptance. At least as important as independence is the fact that we are responsible for what we do. The excuse peddled by those in basic research that they merely generate knowledge while users are ultimately responsible for what happens with it doesn’t apply to us in the same way. In all our projects, we have to consider the question of what will happen to our world once our research findings are ‘unleashed’.

Are there limits to research?

Nordström: The attitude of wanting to know everything is intrinsically good and important. However, the end must not be used to justify the means. The methods applied in research must be ethically acceptable, i.e. the expected benefits of a research project must be proportionate to the potential risks. Not everything that is possible is worth striving for. We should always ask ourselves what we need the knowledge for, how we intend to use it and where we want it to take us. We also need to consider what kind of insights will allow us to move forward or contribute to a sustainable and fair living environment. Such questions have to be taken into account in the healthcare sector in which I work, especially in the field of genetic research.

“We need to ask ourselves what kind of insights will allow us to move forward or contribute to a sustainable and fair living environment.”

Karin Nordström, President of the ZHAW Ethics Committee

Heitz: As a university, it would do us good to think about things that go beyond the business case of our corporate partners. This is key for me, because the interests of partner companies are initially somewhat different: those aiming to generate economic success and earn money do not primarily think about the social impact, for example. In the area of AI, where I work, this is one of the main problems.

Are researchers’ warnings taken into consideration?

Heitz: In my experience, if we highlight the risks and present our considerations, we are taken seriously. Our constructive task lies in developing solutions that enable our partner companies to move forward, while at the same time also being compatible with the values of our society. This isn’t always easy, but it is always rewarding. This allows us to create social sustainability that is also in the interests of our partners.

The ZHAW has now had an Ethics Committee for a year: why is it needed?

Nordström: There is a grey area when it comes to research conducted on or with humans. Not all projects fall within the scope of the Human Research Act, meaning they do not have to be reviewed by a cantonal ethics committee. Nevertheless, these projects raise ethical questions. Universities have therefore set up internal commissions or committees in this area. In addition, some scientific journals now require that a project’s methods and results are scrutinised from an ethical standpoint before they publish them. The Swiss National Science Foundation also requires that projects meet ethical criteria before funding can be granted.

What is the main priority of the ZHAW Ethics Committee?

Nordström: A balance has to be struck between benefits and risks. Focus is placed on protecting the study participants: have they been adequately and clearly informed about what the project is about? Were they able to provide informed consent on this basis? Was this consent obtained and documented? Questions regarding data security also have to be asked.

“We always have to consider the question of what will happen to our world once our research findings are ‘unleashed’.”

Christoph Heitz, data scientist

Heitz: Data protection was the big issue in the context of digitisation in the years between 2000 and 2015. With artificial intelligence, our society finds itself in a new phase. Discussions are now taking place on a grand scale about how we can change the world with AI and what ethical questions need to be clarified as we do so.

Are AI and its impact on people a topic of discussion within the ZHAW Ethics Committee?

Nordström: Not yet. This is new territory and would add a completely different dimension to the work.

Heitz: Generally speaking, however, a large part of our work on digitisation has a very direct impact on people. This can be seen simply by considering the effects that apps have on our everyday lives. We live in a world permeated by many systems operating in the background. I would not go so far as to say that these systems manipulate us, but they do steer us to some degree. With the AI Act, the EU is currently establishing a globally unique legal framework for artificial intelligence.

What aspects will the EU AI Act govern?

Heitz: The act is based on a risk approach. AI systems are analysed and classified according to the risk they pose to people: how great is the risk that somebody might come to harm? Or are population groups systematically discriminated against and economically marginalised because an algorithm works in a way that leaves people systematically worse off when it comes to finding jobs, securing loans or taking out insurance? This happens frequently and is generally attributable not to a programming error, but to the logic behind the task, which fails to take such social effects into consideration. Regulation kicks in where AI systems have a significant impact on people’s lives. This is true, for example, for systems in the healthcare sector and for important services provided by banks or insurance companies.
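To make the Act’s risk-based logic concrete, here is a minimal, purely illustrative Python sketch. The four tier names mirror the Act’s risk categories, but the example use cases and their mapping are simplified assumptions for discussion, not legal classifications:

```python
# Illustrative sketch of risk-tier classification in the spirit of the
# EU AI Act. The tiers follow the Act's structure; the use-case mapping
# below is an invented, simplified assumption, not a legal assessment.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "strict obligations (conformity assessment, human oversight)"
    LIMITED = "transparency duties (e.g. disclose that users face an AI)"
    MINIMAL = "no specific obligations"

# Hypothetical lookup table for discussion purposes only.
USE_CASE_TIERS = {
    "social_scoring_by_authorities": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed risk tier; unknown cases need human legal review."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        raise ValueError(f"'{use_case}' is unclassified - requires legal assessment")
    return tier

print(classify("credit_scoring"))  # RiskTier.HIGH
```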

Christoph Heitz, Head of the Smart Services and Maintenance research group at the School of Engineering and President of the data innovation alliance. Karin Nordström, President of the ZHAW Ethics Committee and Head of the BSc degree programme in Health Promotion and Prevention at the School of Health Professions.

Is the economy aware of these risks?

Heitz: The 2016 US presidential election was a wake-up call, when the influence exerted by the data analysis firm Cambridge Analytica was uncovered and Donald Trump entered the White House. From this point onwards, it became clear that data protection was no longer the only thing that mattered. Completely new dimensions of digitisation were revealed. During this period, a ZHAW initiative also gave rise to the data innovation alliance network, which now comprises 25 research institutions and 50 companies and of which I am President. The network drives innovation through the use of data. We were faced with the question of what we are actually doing here and how we are impacting the world. We therefore put together a group of ethics experts back in 2017 and developed an ethics code specifically for handling data. We are convinced that we have to ensure that the use of data and AI isn’t to the detriment of social values such as freedom, autonomy and social justice. Over the long term, this type of social sustainability is of key importance for the entire industry, and thus in the direct interest of companies.

What is the ZHAW doing to ensure greater fairness and social justice in algorithms?

Heitz: Until now, the understanding in the fields of engineering and computer science has been that we primarily solve technical questions. Today, however, this is no longer enough. As an application-oriented university, and especially within the context of digitisation, I believe we have a really big task on our hands and a responsibility to align the content of our projects so that they are compatible with our society’s values. When you develop an application, you have to decide which values are to be embedded in the technical solutions. In order to train specialists in this area, we have been offering a module for computer and data science students on the topic of algorithmic fairness since the 2023 spring semester. From 2024, it will also be offered as part of the continuing education programme. The ZHAW is the first university in Switzerland to implement something like this in its foundation courses.

How is fairness defined in this teaching module?

Heitz: Part of the teaching revolves around showing that there are various definitions of fairness and that it isn’t possible to be fair in every respect. Instead, a normative determination is needed. The engineers can’t make this determination themselves, however, and have to get management on board. Nevertheless, it is the engineers who need to highlight the ethical issues and options. After all, only they have a feel for what technical options are available and what ethical implications they entail. Management ultimately has to decide which consequences and risks the company is prepared to bear.
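To illustrate why it isn’t possible to be fair in every respect, consider this minimal Python sketch on invented toy data: the same set of decisions satisfies equal opportunity (equal true-positive rates across groups) while violating demographic parity (unequal selection rates), simply because the groups’ base rates differ.

```python
# Toy demonstration (invented data) that two common fairness definitions
# can conflict on the very same decisions.
import numpy as np

group   = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected group membership
actual  = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # true outcomes
decided = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # decisions happen to match the truth

def selection_rate(mask):
    """Share of the group that receives a positive decision."""
    return decided[mask].mean()

def true_positive_rate(mask):
    """Share of the group's actual positives that are correctly selected."""
    positives = mask & (actual == 1)
    return decided[positives].mean()

for g in (0, 1):
    mask = group == g
    print(f"group {g}: selection rate {selection_rate(mask):.2f}, "
          f"TPR {true_positive_rate(mask):.2f}")

# Output: group 0 -> selection 0.50, TPR 1.00; group 1 -> selection 0.25, TPR 1.00.
# Equal opportunity holds, demographic parity does not: satisfying one
# fairness definition here comes at the expense of the other.
```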

Nordström: So is it then part of the educational process to show that technology isn’t neutral and that values are involved?

Heitz: Yes, students are made aware here that, for example, the design or specific technical form of prediction algorithms can produce tools that discriminate in an improper fashion. They not only learn how to recognise this and which ideas of social justice can be linked to such models, but also how they can avoid discrimination and implement fairness correctly in an algorithm. This is an example of the kind of responsibility we should assume as an application-oriented university.
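One way implementing fairness can look in practice is post-processing: adjusting decision thresholds per group to meet a chosen fairness target. The sketch below, with invented scores and demographic parity as the assumed normative choice, is one illustrative approach among several, not the module’s official material:

```python
# Post-processing sketch: per-group thresholds that equalise selection
# rates (demographic parity). Scores and the bias are invented.
import numpy as np

rng = np.random.default_rng(0)
group  = rng.integers(0, 2, size=1000)
# Assume the model has learned to score group 1 systematically lower.
scores = rng.normal(loc=np.where(group == 0, 0.6, 0.4), scale=0.15)

target_rate = 0.30  # normative choice: select 30% of each group

# Per-group threshold = the (1 - target_rate) quantile of that group's scores.
thresholds = {g: np.quantile(scores[group == g], 1 - target_rate) for g in (0, 1)}
decisions  = scores >= np.array([thresholds[g] for g in group])

for g in (0, 1):
    print(f"group {g}: threshold {thresholds[g]:.3f}, "
          f"selection rate {decisions[group == g].mean():.2f}")

# Both groups are now selected at roughly 30%. The code only enforces the
# target; the normative decision (which fairness notion, which rate) is
# made by people, not by the algorithm.
```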

So your focus is on the generation of tomorrow?

Heitz: Not only them. First, we need to develop the skills required for this prudent foresight. Second, we need to train students to acquire these skills and to ask the right questions when it comes to implementation. What’s more, we need to be in a position to develop solutions to these questions that entail as few unpleasant consequences as possible.

Can you give me an example of such a question?

Heitz: Let’s assume that we need a tool to support decision-making on the distribution of financial resources within the healthcare sector. A typical normative question would then be: should medical resources be distributed equally across all areas, or do we want to distribute the funds in a needs-oriented manner in order to ensure, for instance, that women have the same chances of recovery as men? Depending on the choice you make, the technical implementation will look very different.
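How different the implementations become can be sketched in a few lines of Python. The regions, figures and the “need” indicator below are invented purely for illustration:

```python
# Two allocation rules for the same fixed budget; which one is "right"
# is a normative question, not a technical one. All figures are made up.
budget = 1_000_000.0

# Hypothetical regions with a simple need indicator (e.g. burden of disease).
need = {"region_A": 120.0, "region_B": 80.0, "region_C": 200.0}

def allocate_equally(budget, regions):
    """Equal share per region, regardless of need."""
    share = budget / len(regions)
    return {r: share for r in regions}

def allocate_by_need(budget, need):
    """Shares proportional to each region's need indicator."""
    total_need = sum(need.values())
    return {r: budget * n / total_need for r, n in need.items()}

print(allocate_equally(budget, need))  # ~333,333 each
print(allocate_by_need(budget, need))  # 300,000 / 200,000 / 500,000
```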

Nordström: This is also the big question in the area of public health: what shape should equitable healthcare, health promotion and prevention take? And where are inequalities needed if we are to achieve equality? You also need to ask whether you have the solutions.

Heitz: A great number of approaches have been developed in research in recent years. The task now is to implement them in practice. There is a great risk – and we see this time and again with AI applications – that people develop solutions and are then amazed five years later when something awful happens that was never intended, because nobody took the time at the outset to think about the potential consequences, establish normative parameters and adapt the technology to meet certain requirements.

Nordström: This applies to research in general.

Heitz: Such forward-looking and ethical thinking therefore has to be trained.

AI is such a wide-ranging topic that even prominent representatives from Google, as well as Elon Musk and others, have called for a halt to research in view of the risks at play.

Heitz: I don’t believe that research moratoria are appropriate. While a moratorium might stop something bad from happening, nothing good will be produced either. I don’t think we need bans. Instead, we need an obligation to design technical solutions in such a way that any negative impact is minimised. This means that we, as researchers, need to develop solutions that can be put to responsible use. We can help our research partners become sensitive to questions of risk, while at the same time supporting them in keeping the social side effects in check. This is a challenging undertaking, but as a university we should not shy away from challenging questions.
