Guidelines for artificial intelligence

AI research and the unknown unknowns

13.12.2023
4/2023

With its new research group, the ZHAW Centre for Artificial Intelligence aims to play its part in ensuring AI applications are developed in an ethical and sustainable manner. Director of the Centre for Artificial Intelligence Thilo Stadelmann is also calling for the greater involvement of the humanities in the AI discourse.

Hollywood screenwriters going on strike. A legal judgement against a paedophile in Canada. Universities in Switzerland updating their examination regulations. At first glance, these events seem to have nothing in common. However, they all involve the use of artificial intelligence (AI): what if AI is used to write scripts, create pornographic images or help students cheat in their exams? What limits have to be set with respect to this new technology and which social norms need to be observed in its development?

The ZHAW Centre for Artificial Intelligence (CAI) has been looking at questions such as these for some time and recently established a new Responsible AI Innovation (RAI) research group for this purpose. Its members conduct research into approaches for developing AI applications that comply with ethical principles and serve the common good. “Such considerations are frequently viewed by companies as an area of discussion that is separate from their own business activities,” says Ricardo Chavarriaga, the Head of RAI. The group would like to place more emphasis on this topic and help to ensure that responsible practices are built into AI systems during their development. According to the AI expert, this can be done with technical means. “For example, we can develop mechanisms that enable us to incorporate ethical principles from the outset.” These “operational ethics” would make principles such as fairness and transparency tangible and allow them to be implemented at a technical level.
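What such “operational ethics” might look like in practice can be illustrated with a minimal, hypothetical sketch. It is not the CAI’s actual tooling; the function, the tolerance threshold and the data below are invented for demonstration. The idea is that a principle like fairness becomes operational once it is expressed as an automated check, here a demographic parity test comparing positive-decision rates between two groups.

```python
# Hypothetical sketch of "operational ethics": a fairness principle encoded
# as an automated check that can run whenever a model is evaluated.
from dataclasses import dataclass


@dataclass
class FairnessReport:
    rate_group_a: float   # positive-decision rate for group "a"
    rate_group_b: float   # positive-decision rate for group "b"
    parity_gap: float     # absolute difference between the two rates
    passed: bool          # True if the gap is within the chosen tolerance


def demographic_parity_check(
    predictions: list[int],  # model outputs, 1 = positive decision
    groups: list[str],       # protected attribute per sample, "a" or "b"
    tolerance: float = 0.1,  # maximum acceptable gap, a project-level choice
) -> FairnessReport:
    """Compare positive-decision rates across two groups."""
    rates = {}
    for g in ("a", "b"):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members) if members else 0.0
    gap = abs(rates["a"] - rates["b"])
    return FairnessReport(rates["a"], rates["b"], gap, gap <= tolerance)


if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_check(preds, grps))
    # FairnessReport(rate_group_a=0.75, rate_group_b=0.25, parity_gap=0.5, passed=False)
```

Wired into a development pipeline, a check of this kind turns an abstract principle into a gate that a model must pass before release.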

The group also provides support in the drafting of legislation and the certification of AI applications. Since 2022, for example, the Innosuisse project “certAInty” has been developing a certification scheme for AI applications that incorporates principles such as autonomy and control, security, transparency and reliability.

The consequences are often unknown

During the development of new AI applications, the focus is primarily on the positive impact they may have, for example providing a solution to an existing problem. “Possible negative effects are taken into account less frequently. One reason for this is that people don’t even know what form they might take,” says Chavarriaga. Such “unknown unknowns” are common, as the example of social media demonstrates. “At first, social networks seemed like an innocent pastime. It was only as time passed that their sometimes massive negative effects came to light.”

In Chavarriaga’s view, AI system developers have to try to anticipate risks. However, he continues, the reality is often somewhat different. “People launch an application on the market and react if it has negative consequences.” Given the complexity of AI systems, it can prove challenging to assess how they will behave and thus to predict their potential consequences. “It can in some cases be difficult to understand how neural networks work,” says Chavarriaga. This is a topic that is being investigated by other research groups at the CAI. In the context of ethical principles, he adds, it is of secondary importance to understand every detail of how highly complex systems work. “At the same time, however, we need to ensure that ethical principles are taken into account when developing such systems.”

Regulation still at an early stage

There are now a number of ethical guidelines for AI, including those of the Organisation for Economic Co-operation and Development (OECD). These stipulate that AI systems must, among other things, promote sustainable development and respect the rule of law as well as human rights and democratic values. They should contain safety mechanisms and function transparently, with their use being disclosed. Last but not least, the organisations that develop or use these systems should take on responsibility for their consequences.

“For authorisation, the risks also need to be assessed early on in the development process.”

Ricardo Chavarriaga, Head of the Responsible AI Innovation research group

Chavarriaga says that ethical principles, as proposed in the OECD guidelines, are the common denominator in the global debate about the responsible use of AI. “The regulation of this technology is still at an early stage, however.” According to Chavarriaga, the EU is currently the most advanced in this regard. Under its AI Act, which the bloc’s member states are set to adopt by the end of this year, AI applications will in future be assigned to risk classes, each of which imposes obligations on the developers and operators of AI applications. Chavarriaga believes that this is a sensible approach: “For authorisation, the risks also need to be assessed early on in the development process.”
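The tiered logic of the AI Act can be sketched in a few lines. The four risk levels below follow the Act’s published structure; the example use cases and the one-line obligation summaries are simplified illustrations for this article, not legal classifications.

```python
# Simplified, illustrative sketch of the EU AI Act's risk-tier structure.
# The tier names reflect the Act's four-level approach; the use-case mapping
# below is a rough illustration, not a legal assessment.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, risk management, human oversight"
    LIMITED = "transparency duties, e.g. disclosing that AI is in use"
    MINIMAL = "no additional obligations"


# Hypothetical examples, chosen only to show how the tiers differ.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for hiring decisions": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_USE_CASES.items():
        print(f"{use_case}: {tier.name} -> obligations: {tier.value}")
```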

Humanities need to have a say

In light of the far-reaching consequences, some researchers are not only calling for the regulation of AI, but also for a fundamental discourse. “The technology has a subtle effect on how we perceive ourselves as people,” says Head of the CAI Thilo Stadelmann. “It shapes our way of thinking and how we live as well as our culture and moral values.” He has co-authored a work programme that outlines the assessment of artificial intelligence from the perspective of the humanities.

“This portrays people as machines, and in turn humanises machines.”

Thilo Stadelmann, Head of the Centre for Artificial Intelligence

“As things stand, the discourse is dominated by a technical and scientific point of view,” says Stadelmann. And from this perspective, the human brain is often viewed as a biological IT system, he adds. “This portrays people as machines, and in turn humanises machines.” From a humanities standpoint, this means that the value of human life is diminished. “The voices of these disciplines therefore also need to be heard in this debate. We need to talk about what exactly it is that makes people special and sets them apart from machines.”

In Stadelmann’s view, the way in which AI is perceived and assessed also depends on the terms used. “Artificial intelligence was created as a marketing term in order to attract investors for research.” In essence, it is not about producing intelligence as such, but rather intelligent behaviour. In their work programme, Stadelmann and his co-authors therefore suggest using the term “extended intelligence” in place of AI. “This term better describes what the technology is, namely a tool that expands our capabilities.”

Human-AI interaction in critical systems

The interaction between people and AI solutions for critical systems such as electricity, rail and air traffic control is at the heart of the “AI4REALNET” project. The project is a collaboration between the Centre for Artificial Intelligence (CAI) and the Institute of Data Analysis and Process Design (IDP) of the School of Engineering as well as several international universities and industrial partners. It was selected from a total of 114 submissions as part of a European tender. While critical infrastructure networks for mobility or electricity are generally operated by people, human expertise is increasingly being supplemented by control and monitoring software as well as varying degrees of automation. A key question addressed by the project centres on the technological and ethical challenges that arise from human-AI cooperation.

“As we are dealing with sensitive infrastructures, the stakes are very high. The AI systems have to be reliable so as to ensure that critical applications are not jeopardised,” says Ricardo Chavarriaga from the CAI. The main objective of AI4REALNET is to develop an overarching multidisciplinary approach and to test and assess AI in industry-relevant use cases. The aim is to combine newly emerging AI algorithms, existing AI-friendly digital environments, the socio-technical design of AI-based decision-making systems and human-machine interaction (HMI) to improve the operation of network infrastructures both in real time and in predictive mode.
