How chatbots and the like are influencing student and university life
While artificial intelligence tools can enhance student life, they are also fraught with risks. Universities need to explore new avenues in how they conduct their teaching and research activities. ZHAW experts are looking for answers to this new challenge.
Mark Cieliebak sits on one of the sofas in the co-working space of the ZHAW’s Centre for Artificial Intelligence in Winterthur. He gazes through a large window into the next building, where students are currently sitting examinations. According to the Professor of Natural Language Processing at the Centre for Artificial Intelligence, there is an urgent need to consider how universities should deal with new artificial intelligence (AI) tools such as the text-based dialogue system ChatGPT. “My gut feeling tells me that AI would pass most of the examinations. Perhaps not with the highest grade, but maybe with a 4.” In the Swiss grading system, a 4 is a pass.
ChatGPT solves computer science exercises
When ChatGPT – GPT stands for Generative Pre-trained Transformer – was launched at the end of November 2022, rumblings began to sound across schools and universities. Public schools in New York are blocking the website, while in Switzerland the Lucerne University of Applied Sciences and Arts is looking into new anti-plagiarism software, and swissuniversities, the umbrella organisation of the country’s universities, also has the associated risks on its radar. Lecturers at the ZHAW are already feeling the first effects in the classroom. Cieliebak explains how one student copied a computer science exercise into ChatGPT without making any changes to it and immediately received the solution. “That’s when I said, OK, that wasn’t really the idea here. However, as a way of testing the tool on a programming language, it was exciting.”
Lecturers from the Institute of Computational Life Sciences are also testing the chatbot using their examination tasks. Matthias Nyfeler and Robert Vorburger shared their experiences at an event in front of more than 100 ZHAW employees: ChatGPT can answer open-ended and multiple-choice questions perfectly, but in the case of graphical exercises or questions on specific data sets, it does not (yet) produce good results. This event and a look at the event calendar show that there is a need for discussion at all universities, with some fundamental questions having to be answered: are exercises that the tool can solve in a very short time still appropriate for modern university life? And what form should teaching and examinations take in the future?
Lessons learned from DeepL
Alice Delorme Benites, a Professor of Human-Machine Communication at the School of Applied Linguistics, sees many parallels between the AI translation tool DeepL, which was made publicly available seven years ago, and AI chatbots such as ChatGPT. “Autumn 2016 was like ‘The Day After Tomorrow’ for translators,” she says. Traditional exercises, examinations and assessments all had to be re-evaluated. Translators were faced with exactly the same question that universities are facing today: what skills should students learn?
"We want students to use the machine because that is what they will be doing in their future careers. However, it is essential that they are able to adopt a critical approach towards it and use it appropriately,” remarks Delorme Benites. Specifically, two things have been changed with respect to teaching: “On the one hand, we have done away with certain examinations and replaced them with portfolios, learning diaries, reflection exercises or annotated translations. And on the other, we now demand a much higher level. For example, we no longer accept spelling errors, as the algorithms have this linguistic capability.”
Chatbot texts are based on statistics, not intelligence
Alice Delorme Benites and Mark Cieliebak point not only to the numerous opportunities offered by such systems, but also to the risks. After all, no matter how convincingly a program “interacts” with us, it remains a machine. “The tool generates the text based on pure statistics and has no understanding of the output, even if the answers appear convincing and as if they had been written by a person. You can easily rephrase the exact same question and get a nonsensical answer,” explains Cieliebak. At a superficial level, the answers are often coherent and well written. However, they may also contain errors or so-called AI hallucinations – the technical term for output that is not grounded in reality.
For Delorme Benites, the hype surrounding the tool is also problematic on another level: “A large number of myths and false practices are currently doing the rounds. The resulting jungle can be difficult to clear up in retrospect.” There are also risks of deception and fake news as well as environmental and ethical issues. Delorme Benites emphasises that artificial intelligence reinforces stereotypes: “With ChatGPT, there is no source text, but rather source facts, and this entrenches the world view that prevails in the underlying data.” However, the two experts agree that the tool itself is not the problem. Instead, it shines a light on societal problems. How we as humans handle the tool is pivotal.
Tools for research
AI tools can also be used for research purposes. Could ChatGPT serve as a digital assistant or even as a co-author of scientific publications? No, says the academic journal “Science”, which has already banned the use of AI-generated content, while the Swiss National Science Foundation (SNSF) has also expressed its scepticism. The Association for Computational Linguistics (ACL) has amended its policy in a more differentiated way: while AI tools cannot be listed as co-authors, authors may use them as long as they declare their application and its scope. Their use is permitted, for example, for brainstorming, language correction, literature research or summarising known concepts, on the condition that the output is checked for accuracy.
Generally speaking, Mark Cieliebak believes that generative AI can save a great deal of time. “It is not only texts that you can generate, but also program code, and at a very high level. A software developer told me that he cuts the amount of time he spends on programming by up to 80%, as he often only has to check whether the generated program code is correct.” While a ban on such tools makes no sense from this perspective, he continues, we do need guidelines on how we want to use them.
Delorme Benites compares the handling of AI tools to a washing cycle: “Everyone has a washing machine and, fundamentally speaking, it is simply a matter of knowing which buttons have to be pressed and whether to separate the colours in advance. However, it is the machine that actually washes the clothes. I don’t think that we would call ourselves lazy just because we no longer wash our clothes by hand in a river.” However, if we use the washing machine without thinking, we may find that the colour or size of our clothes has changed when we take them out of the drum.
Expertise at the ZHAW
There is no lack of AI expertise at the ZHAW. Together with other experts, Mark Cieliebak has initiated the Competence Center for Generative AI. Its mission is to pass on knowledge, conduct new research, build a network and provide information and advice to users, raising their awareness in the process. For example, the team is carrying out scientific research into how well ChatGPT performs compared to existing algorithms.
In addition to such bottom-up initiatives by researchers, a course is also being set at Executive Board level that will enable lecturers to deal with the tool appropriately and provide guidance on how to use it. Tools like ChatGPT affect all degree programmes, albeit in different ways. The ZHAW Academic Affairs Unit is actively shaping this negotiation process and held a workshop at the end of February with representatives from all subject areas and other organisational units. In particular, the rules for using ChatGPT as part of assessments were discussed; these are now being drawn up.
Didactically clever integration into education offerings
“The programme directors are coming to us with specific questions and expressing a desire for the ZHAW to adopt a clear position,” says Patrick Hunger, Head of the Academic Affairs Unit. “At the same time, we want to avoid overregulation, as AI and human-machine cooperation will be key for shaping future developments in the realm of education at the ZHAW,” explains the lawyer. In the long term, the ZHAW wants to focus on the potential offered by AI tools in the field of education and to work with the Schools to explore what is needed to integrate these tools into university life in a didactically clever manner, Hunger continues. This was also on the workshop’s agenda.
It remains to be seen how AI tools will now reshape the university landscape. Alice Delorme Benites’ conclusion on DeepL gives cause for hope: “It has been shown over the past six years that the fear of losing the ability to translate is unfounded. Students are continuing to learn languages and improve their skill sets. However, they also work very consciously with the machine. In my view, GPT and DeepL are positive tools because we no longer have to ‘do the laundry’, meaning we have time for more meaningful and enjoyable activities. They also encourage us to train people rather than simply filling their brains with content.”