
“I don't want to hug a robot”
Can artificial intelligence comfort us or develop a will of its own? Information scientist Elke Brucker-Kley, humanities scholar Volker Kiel, and robot researcher Yulia Sandamirskaya discuss these questions in the following interview.
The boundaries between the natural and the artificial are becoming increasingly blurred – especially since generative AI tools have become so easily available. How have you experienced this development?
Yulia Sandamirskaya: I have been taken by surprise at how quickly AI tools have spread and how unquestioningly we are adopting them. People are very forgiving when dealing with these tools. After all, they still make a lot of mistakes. Texts often lack coherence between the individual paragraphs.
Volker Kiel: I am also worried by the unreflective use of these tools. Coaching and counselling primarily work through relationships with people. And yet, the new opportunities offered by these tools have been warmly embraced, for example for preparing or evaluating sessions. I fear that too much thinking is being outsourced to the machines, which could come at the expense of spontaneity and intuition.
Elke Brucker-Kley: I was less surprised by the hype. I have been working with natural language systems since the 1990s, so I’ve seen how long the journey has been. From extremely frustrating language interactions to today's astonishing quality – it truly is a quantum leap.
AI is often praised for the fact that it takes on unpleasant tasks for us so that we can focus on more meaningful ones. However, AI is also being increasingly used for tasks that typically require human skills, such as empathy or life experience. How do you view this development?
Sandamirskaya: The really hard, repetitive work is not being taken over by AI – be it in manufacturing, nursing or agriculture. AI grows in areas where a great deal of data is readily available, and this is primarily image data and text data. It is the path of least resistance, not necessarily the one that would benefit humanity the most. It reminds me of the story of the man who is searching for his keys under a street light – not because he lost them there, but because that’s where it’s brightest.
Kiel: I see a risk in our becoming too accustomed to communication with machines, because they are supportive and take work off our hands. This fosters a detachment from real human relationships and, at the same time, a sense of isolation. People are increasingly reluctant to approach others. We need contact with other people for our emotional development and healing. We want to be seen by others in our uniqueness. This is something that cannot be replicated by data-driven communication.
Brucker-Kley: AI gives us greater autonomy, but we also lose capabilities. However, we must avoid both utopian and dystopian ways of thinking. For me, when training the specialists who help to shape these technologies, the topic of responsible innovation is essential: conveying an awareness of how these technologies affect us.

“When I create a technology, I open the door to possibilities that are beyond my control.”
Elke Brucker-Kley is Co-Head of the Centre for Information Systems – People & Technology at the School of Management and Law. The centre investigates how new technologies affect people and specialises in the combination of virtual reality and conversational agents.
Critical thinking is often cited as a key skill in dealing with AI. However, it is becoming increasingly difficult to judge what is real and what is artificial when new technologies are constantly emerging that are able to do things that seemed impossible just a few months before.
Sandamirskaya: Critical thinking belongs on the school curriculum. Teachers should be actively supported by continuing education courses on new technologies to ensure that they are in a position to offer pupils nuanced insight. At the same time, it is important to continue teaching analogue human skills: logical thinking and creativity. However, this is a huge task for schools. What is needed are smaller classes and more individual work.
Kiel: Media literacy is key. By this, I do not mean being able to use a tablet, but rather discussing the benefits and risks of AI tools with pupils. Adults must also ensure, however, that children do not engage too early or too heavily with digital media. They should be able to maintain the necessary distance that allows their critical thinking to develop free from the influence of technology.
Ms Brucker-Kley, you are researching the interface between people and technology. In your view, should there be clear limits as to the areas in which AI is allowed to imitate human abilities – or should we rather explore everything that is technically possible?
Brucker-Kley: It is a dilemma. On the one hand, the possibilities are fascinating. On the other, you have to be aware of the consequences. For example, we can read users’ brain waves to generate an appropriate response from an avatar in real time. However, when I create such a technology, I also open the door to possibilities that are beyond my control. That is why broad social discourse on new technologies is important, and not just the question of technical feasibility. Furthermore, ethical issues should not only be discussed in ethics committees and scientific councils, but also with people who are impacted by these technologies.

“It is the path of least resistance, not necessarily the one that would benefit humanity the most.”
Yulia Sandamirskaya is Head of the Research Centre for Cognitive Computing in Life Sciences at the Institute of Computational Life Sciences. Her areas of expertise are cognitive systems, neuromorphic computing and robotics.
You conducted a research project investigating how AI applications with emotional capabilities affect young people. Were there any surprising findings?
Brucker-Kley: I was impressed by how sceptical and reflective the young people were in responding to the provocative hypothesis of friendship with AI. What did surprise me was the desire for imperfection. They found the digital friend, which was always available and on hand to help, boring. One person remarked that the avatar should try less hard to be the perfect friend, and should instead also make mistakes from time to time and exhibit a clumsy side. Others wanted to be able to argue with the avatar. They were also concerned, however, that they could lose the ability to interact with real people if they were only to argue with AI that forgives everything.
Ms Sandamirskaya, you develop artificial systems modelled on human intelligence. What is the essence of human intelligence? And can machines imitate this?
Sandamirskaya: The human brain learns something new every day, often unconsciously. We constantly adapt to our environment on various levels. AI, by contrast, requires a complex training process in which the entire system, including the old data, has to be optimised. Strategic thinking also sets us apart. We have the ability to focus on what is essential despite all the noise around us, plan for the long term, respond to new information at short notice and decide which steps to take next based on the situation at hand. AI, however, is unable to distinguish the important from the unimportant and has no memory. That said, AI is able to outperform us in other areas, such as reading and summarising entire books. What worries me is that all of the manufacturers today are moving in the same direction: creating systems that require a lot of data and long training and are error-prone and inflexible. I think it would make more sense to have several systems that each specialise in individual tasks.
Mr Kiel, AI cannot respond spontaneously to new information. When coaching a person, on the other hand, a single piece of new information can change the whole conversation, can’t it?
Kiel: Exactly, and AI also lacks human intuition. Albert Einstein is reported to have once said: “The intuitive mind is a sacred gift, and the rational mind is a faithful servant. We have created a society that honours the servant but has forgotten the gift.” Intuition is central to human relationships. We intuitively understand others much better than we could ever describe it with language. Intuition leads us to insights that we cannot reach rationally and that no machine can have.
Nevertheless, artificial beings do not leave us cold, be it an encounter with a humanoid robot or a conversation with an AI tool. Are these emotions less valuable than those triggered by people?
Brucker-Kley: I do not want to make a judgement on that. However, if we grow used to these kinds of emotions and forget that the AI’s emotions are not real, the boundaries start to blur. I take a very critical view of that. I believe that human interaction simulated with AI does not have to be perfect, even if this were technically feasible. Avatars do not have to look photorealistic, and they should only simulate emotions if there is a benefit to be gained. It has to be clear that this is an artificial entity, as it cannot be assumed that all users will be able to make that distinction.
Kiel: In addition to conversations and interactions, experiences shared with others, for example in nature or travelling together, are also important for human relationships. Being silent together, offering one another comfort – not just with words, but also with a hug. I don’t want to hug a robot.
Mr Kiel, many people wait for a therapy appointment for months. Can AI be a useful replacement for a human specialist here?
Kiel: I hear from experts in the field of psychotherapy that virtual reality is increasingly being used in clinics, for example for systematic desensitisation in the case of fear of heights. I come from Cologne, where in the past therapists would walk slowly up the steps of the city’s cathedral with their clients. This can now be done much more easily and practically on a virtual basis. Such applications can really be extremely helpful and they are available to many more people.
Brucker-Kley: AI could also help to ensure a certain minimum standard of care in the medical field. In the future, it may well be a question of financial means whether someone can have access to a human doctor or psychologist.

“Intuition leads us to insights that we cannot reach rationally and that no machine can have.”
Volker Kiel is a lecturer and consultant at the Institute of Applied Psychology. He is an expert in leadership, coaching and change management.
Ms Sandamirskaya, one of your current projects is looking at the use of robots in the area of care. What are the goals of this project?
Sandamirskaya: Robots should make it possible for elderly people and those with physical disabilities to live autonomously for as long as possible rather than in a care home. In homes, robots could support the care staff by taking tasks off their hands. However, private households represent a difficult environment for robots, as the surroundings are very dynamic. Simple tasks such as recognising objects, picking them up and moving them from A to B must be done safely so as not to endanger anyone. Technologically, we are not quite there yet. Data protection is another problem. We do not want the images to be sent to a cloud. Instead, the data should be processed locally. The question of financing is also unresolved. Would health insurance companies or Spitex, the Swiss home-care service, co-finance such robots? And in which areas should the robots help at all?
There have already been studies that show that AI can manipulate us, such as when it is programmed to protect itself. How do you assess the risk of AI developing a will of its own?
Brucker-Kley: AI does not have its own will, but it can act purposefully and evolve if it has been programmed in that way. I therefore do not see the danger in rebellious AI, but in the development of machines whose objectives are so complex that we no longer understand them ourselves. We have to think with an enormous degree of foresight about what goals and purposes we want to pursue – and not just about the objectives of individual use cases.
Sandamirskaya: It is a very powerful technology, one that can become truly dangerous, should it start feeding us falsehoods. The comparison with nuclear fission comes to mind: you can use it to generate electricity or build an atomic bomb. What we do with it, however, won’t depend on the will of the AI, but on that of humans.