On November 30th, 2022, OpenAI released the large language model ChatGPT, an extension of GPT-3. The AI chatbot provides real-time communication in response to users’ requests. The quality of ChatGPT’s natural-sounding answers marks a major shift in how we will use AI-generated information in our day-to-day lives. For a software engineering student, the use cases for ChatGPT are manifold: assessment preparation, translation, and creation of specified source code, to name a few. It can even handle more complex aspects of scientific writing, such as summarizing literature and paraphrasing text. Hence, this position paper addresses the need to discuss potential approaches for integrating ChatGPT into higher education. We therefore focus on articles that address the effects of ChatGPT on higher education in the areas of software engineering and scientific writing. As ChatGPT was only recently released, there have been no peer-reviewed articles on the subject. Thus, we performed a structured grey literature review using Google Scholar to identify preprints of primary studies. In total, five out of 55 preprints are used for our analysis. Furthermore, we held informal discussions and talks with other lecturers and researchers and took into account the authors’ test results from using ChatGPT. We present five challenges and three opportunities for the higher education context that emerge from the release of ChatGPT. The main contribution of this paper is a proposal for how to integrate ChatGPT into higher education in four main areas.
Context: Higher education is changing at an accelerating pace due to the widespread use of digital teaching and emerging technologies. In particular, AI assistants such as ChatGPT pose significant challenges for higher education institutions because they bring change to several areas, such as learning assessments or learning experiences.
Objective: Our objective is to discuss the impact of AI assistants in the context of higher education, outline possible changes to this context, and present recommendations for adapting to them.
Method: We review related work and develop a conceptual structure that visualizes the role of AI assistants in higher education.
Results: The conceptual structure distinguishes between humans, learning, organization, and disruptor, which guides our discussion regarding the implications of AI assistant usage in higher education. The discussion is based on evidence from related literature.
Conclusion: AI assistants will change the context of higher education in a disruptive manner, and the tipping point for this transformation has already been reached. It is in our hands to shape this transformation.
On September 17, 2024, the KI-Forum 2024 took place at Hochschule Hannover. The conference offered an opportunity to exchange views on the effects of AI on the various facets of higher education institutions and brought together university members from Lower Saxony and beyond. The focus was on the aspects that define a university, in particular a university of applied sciences: teaching and research with a connection to practical application. In the categories AI 4 Students, AI in Research, and AI Showroom, contributors presented their AI topics in the form of a talk, a poster, or a demonstrator. The proceedings comprise all contributions as abstracts or papers, reflect the diversity of the event, and thereby also show the relevance of AI in the higher education context.
How Should We Teach AI?
(2024)
There is no universally valid answer to the question of how AI should be taught. Instead, this keynote offers some food for thought and suggestions for discussion. The central goal should be to teach students how AI methods work so that they can use them sensibly and assess their limitations.
The project report documents the implementation and results of the student project “Guided Walk Reloaded: Recherchekompetenz spielerisch vermitteln” (teaching research skills through play), which was carried out in the summer semester of 2024 in the part-time degree program “Informationsmanagement - berufsbegleitend” at Hochschule Hannover in cooperation with the ZBW – Leibniz-Informationszentrum Wirtschaft.
This contribution describes the development process of selected successful middle-out-initiated and cooperatively implemented enhancements to the teaching and learning systems of individual higher education institutions. Starting from intra-institutional processes focused on the micro and meso levels of students’ competence acquisition in the context of writing didactics, a bilateral inter-institutional cooperation was established. The added value generated from this cooperation is discussed; it forms the basis for the idea of a larger inter-institutional network of middle-out transformers. To this end, the necessary prerequisites for sustainably establishing such a network, as well as its development potential, are elaborated.
Automated question generation holds great promise in many fields, such as education, where it can reduce workload and automate an otherwise tedious task. However, major challenges remain regarding the quality of generated questions. To identify and address these challenges, generated questions are evaluated either automatically or manually. While several automated metrics exist, mostly based on comparison with a gold standard, their usefulness is limited, and human evaluation is often used for more accurate assessment. Our research generates questions using several models and methods, including fine-tuning, zero-shot, and few-shot prompting. We compare model performance by classifying the generated questions using a multi-label approach: each generated question is sorted into zero or more binary problem classes, which makes it possible to identify distinct problems with the generated questions. Our results show that different models tend to generate questions that fall into different problem classes. Moreover, the problem-classification evaluation recognizes these differences and weighs the classes for each model accordingly, yielding model-specific distribution characteristics.
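The multi-label evaluation idea described in the abstract can be sketched in a few lines. This is a rough illustration only: the problem classes and the keyword predicates below are hypothetical stand-ins (the study itself would use trained classifiers and its own class taxonomy). Each generated question is assigned zero or more binary problem classes, and per-model counts yield the distribution characteristics the abstract mentions.

```python
from collections import Counter

# Hypothetical problem classes; the actual taxonomy in the study may differ.
PROBLEM_CLASSES = ["ungrammatical", "off_topic", "unanswerable"]

def classify_question(question, checks):
    """Return the set of problem classes the question falls into.
    `checks` maps each class name to a binary predicate;
    zero or more predicates may fire (multi-label)."""
    return {cls for cls, check in checks.items() if check(question)}

def problem_distribution(questions, checks):
    """Count how often each problem class occurs across one model's output."""
    counts = Counter()
    for q in questions:
        counts.update(classify_question(q, checks))
    return counts

# Toy stand-in predicates; real problem classifiers would be trained models.
checks = {
    "ungrammatical": lambda q: not q.strip().endswith("?"),
    "off_topic": lambda q: "weather" in q.lower(),
    "unanswerable": lambda q: "why" in q.lower() and "because" not in q.lower(),
}

# Two hypothetical generated questions from one model.
model_a = ["What is a language model?", "Tell me about the weather"]
dist = problem_distribution(model_a, checks)
```

Comparing such distributions across models (e.g. one model producing mostly off-topic questions, another mostly unanswerable ones) is what allows the evaluation to separate model-specific failure modes rather than reducing quality to a single score.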