Outlook: An updated AI legal opinion is in preparation
The author of this post found this (at least personal) impression confirmed on more than a few points, not least at the AI law symposium organised by KI-NEL-24 (Bochum, February 2025). Anyone who believed that the EU's AI regulation offered clear guidelines, and that it was solely down to the respective university administrations that there were still no workable guidelines on attempted AI-based deception, was bitterly disappointed. After all, nothing could characterise the current situation better than the remark by an Austrian legal scholar well known in IT law, invoking the classic legal rule of three: "The question is complex; you can see it one way or the other; and there is no ECJ ruling on it yet!"
With regard to the legally compliant use of AI tools in teaching, this is of course unsatisfactory. The legal uncertainty weighs even more heavily in the area of university examinations, where AI tools are evidently being used unfairly (or at least in a highly unreflective manner) with increasing frequency, especially in traditional term papers all the way up to MA theses.
But at least the organisers of the legal symposium were able to announce a silver lining on the horizon: the legal opinion on AI tools edited by Peter Salden and Jonas Leschke, published in March 2023 under the title "Didactic and legal perspectives on AI-supported writing in higher education" and a helpful source of guidance ever since, will appear in an expanded and updated edition around autumn 2025. It will take into account questions and topics flagged as pressing, not least by the symposium participants (primarily representatives of examination offices (ZPAs) and university legal departments). The new expert opinion will not be able to provide conclusive legal certainty; that remains a matter for the highest courts. However, with this report, lecturers and representatives of the institutions involved in teaching and examinations will receive, in the autumn, well-founded orientation reflecting the current state of technical, didactic, legal and ethical developments.
Assuming a modicum of legal and ethical "awareness" in practical use (or, to put it differently, rudimentary "AI literacy"), little stands in the way of innovative or exploratory teaching and learning experiments with or involving AI. Until then, the "Handreichung für Lehrende zum Einsatz von generativer KI in der Lehre" (handout for teachers on the use of generative AI in teaching) from the UniService Digitalisation of Teaching, or contact with the always helpful BU:NDLE staff at your faculty, may help to avoid the worst personal application errors. And as far as examinations are concerned: cheating has always existed, and the best defence against it is still good, competence-oriented exam design. But perhaps now is also the right time to put current examination formats to the test of modern examination didactics. In any case, the author of this post is already looking forward to what the expert report will have to say on all of these topics.