“Science and politics: Bringing the right people together”

1 December 2020
4/2020

At the end of October, ZHAW researcher Ricardo Chavarriaga was invited to a meeting of the Security Policy Committee of the Swiss National Council to talk about Artificial Intelligence (AI) and other emerging technologies and their impact on national security.

What was the purpose of the parliamentary session on emerging technologies?

Ricardo Chavarriaga: The Security Policy Committee regularly holds sessions on specific topics that may be relevant to national security. This session dealt with emerging technologies, including AI for security applications. I was asked to come and give a brief presentation on what AI is, how it is developed, its limitations and the current status of its governance around the world.
Ricardo Chavarriaga is a ZHAW researcher working on interfaces between the brain and machines. As Head of the CLAIRE Office Switzerland, he also promotes connections between the European AI research community and people in politics and industry. Besides his work at the ZHAW and CLAIRE, he is a fellow at the Geneva Centre for Security Policy.

What is your main take-away from this session?

Ricardo Chavarriaga: It was very interesting to see how engaged the policy-making bodies are with these topics. I was quite impressed by their interest and depth of engagement. Sometimes we are not sure to what extent these bodies are paying attention, and my impression from this session and other events is that they are indeed quite interested and often eager to get support on this.
«In facial recognition, systems that have already been deployed for surveillance and the identification of people had a very strong racial bias.»
— Ricardo Chavarriaga, ZHAW researcher and Head of the CLAIRE Office Switzerland

Which topics came up in the discussion?

One general concern about artificial intelligence, in security and other areas, is its reliability. There have been several incidents in which systems that were deemed robust and reliable did not prove to be so once they were deployed. In facial recognition, for example, systems that have already been deployed for surveillance and the identification of people had a very strong racial bias. That means they performed very poorly on people of colour, which led to prosecutions based on the misidentification of suspects. This example has had real effects on society, to the extent that some states have banned the use of such systems for surveillance, and some companies have placed a voluntary moratorium on developing this application. This question of how robust and reliable the technology is can be extended to many other applications.

What does this mean for the science community? What are some of their challenges?

Communication between the scientific community and policy makers is sometimes not as efficient as it should be. There is a strong interest in AI in Switzerland, but our communication and exchanges still need to be improved. As members of the scientific community, we need to get used to engaging in the political arena, which is something I am personally invested in doing. It is something that institutions like the ZHAW and other research and development institutions should do as well, because it is to our own benefit that political decisions are guided by science and evidence. This discussion should involve not only specific institutions and countries; it needs to take place at the global level.
«CLAIRE has a vision of promoting trustworthy and human-centred technology and AI.»
— Ricardo Chavarriaga, ZHAW researcher and Head of the CLAIRE Office Switzerland

What is the role of CLAIRE in this debate?

CLAIRE has a vision of promoting trustworthy and human-centred technology and AI. Besides promoting research and innovation in the technical aspects of AI, CLAIRE has a clear interest in addressing the ethical concerns in the development of AI. For example, we have an advisory group on ethical, legal and societal aspects. Several of us are involved in specific actions where we discuss not only the ethical aspects, but also how they relate to the technology we build. How can we build technology that respects ethical concerns such as human rights and privacy? These are aspects where there are no clear-cut right answers; it is an evolving dialogue. From our position as people in research and development, we need to get involved and bring a clear view of where we stand with this technology today. This can help develop the proper governance that will dictate which applications are considered necessary or safe. When is it worth using these technologies, and what are the risks? Which tools do we use to evaluate their suitability? Which are the areas where we definitely don't want them?

Where do you see the potential of AI to benefit society?

There are many areas where AI can do good and benefit society socially, economically and politically. We use these technologies, for example, to monitor climate change and the environmental conditions for growing certain crops. And, in general, if we can improve processes in our economy and supply chains, this can lead to better and more affordable products and a more efficient use of resources. For instance, with the Swiss startup KITRO, we are working on a project where we use AI to help restaurants reduce food waste. The project is funded by Innosuisse and carried out in collaboration with the ZHAW Institute of Embedded Systems (InES). Education is another interesting area. Here we can identify the best possibilities and exercises for individual students to improve their learning. Instead of having a fixed curriculum for everybody, we can personalize these services. With personalization, we can create better services that put the needs and rights of a person centre-stage. This also applies to medicine and other areas, for example. Using human-centred approaches to AI, these technologies can create a symbiosis with the person without putting their privacy at risk.