TY - CPAPER
U1 - Conference publication
A1 - Berger, Joshua
A1 - Stamatakis, Markos
A1 - Hoppe, Anett
A1 - Ewerth, Ralph
A1 - Wartena, Christian
T1 - Identifying Problem Types in Automated Question Generation
T2 - KI-Forum 2024: AI 4 Students – AI in Research – AI Showroom
N2 - Automated question generation holds great promise in many fields, such as education, where it can reduce workload and automate an otherwise tedious task. However, major challenges remain regarding the quality of the generated questions. To identify and address these challenges, generated questions are evaluated either automatically or manually. Several automated metrics exist, mostly based on comparison with a gold standard, but their usefulness is limited, and human evaluation is often used for more accurate assessment. Our research generates questions with several models and methods, including fine-tuning, zero-shot, and few-shot approaches. We compare model performance by classifying the generated questions with a multi-label approach that sorts each question into zero or more binary problem classes, thereby identifying distinct problems with the generated questions. Our results show that different models tend to generate questions that fall into different problem classes. Moreover, the problem classification recognizes these differences and weighs the classes for each model accordingly, yielding model-specific distribution characteristics.
KW - Automated Question Generation
KW - NLP
KW - Transformers
KW - Artificial Intelligence
KW - Automatic Language Analysis
KW - AQG
KW - Problem Categorization
Y1 - 2024
UN - https://nbn-resolving.org/urn:nbn:de:bsz:960-opus4-34511
SN - 978-3-69018-002-3
DO - https://doi.org/10.25968/opus-3451
SP - 34
EP - 37
S1 - 4
PB - HsH Applied Academics
CY - Hannover
ER -