Autonomous and integrated passenger and freight transport (APFIT) is a promising approach to tackle both traffic- and last-mile-related issues such as environmental emissions, social and spatial conflicts, and operational inefficiencies. By conducting an agent-based simulation, we shed light on this largely unexplored research topic and provide first indications regarding influential target figures of such a system in the rural area of Sarstedt, Germany. Our results show that larger fleets entail inefficiencies due to suboptimal utilization of monetary and material resources and increase traffic volume, while higher numbers of unused vehicles may exacerbate spatial conflicts. Nevertheless, to fit the given demand within our study area, a comparatively large fleet of about 25 vehicles is necessary to provide reliable service, assuming maximum passenger waiting times of six minutes, at the expense of higher standby times, rebalancing effort, and higher costs for vehicle acquisition and maintenance.
Pathologists need to identify abnormal changes in tissue. With ongoing digitalization, tissue slides are stored digitally, which enables pathologists to annotate regions of interest with the support of software tools. PathoLearn is a web-based learning platform developed explicitly for the teacher-student scenario, in which students learn to identify potentially abnormal changes. Artificial intelligence (AI) and machine learning (ML) have become very important in medicine. Many health sectors already utilize AI and ML, and this will only increase in the future, also in the field of pathology. Therefore, it is important to teach students the fundamentals and concepts of AI and ML early in their studies. However, creating and training AI generally requires knowledge of programming and technical details. This thesis evaluates how this barrier can be overcome by comparing existing end-to-end AI platforms and teaching tools for AI. It was shown that a visual programming editor offers a fitting abstraction for creating neural networks without programming. This was extended with real-time collaboration to enable students to work in groups. Additionally, an automatic training feature was implemented, removing the need to know technical details about training neural networks.
On November 30th, 2022, OpenAI released the large language model ChatGPT, an extension of GPT-3. The AI chatbot provides real-time communication in response to users’ requests. The quality of ChatGPT’s natural-sounding answers marks a major shift in how we will use AI-generated information in our day-to-day lives. For a software engineering student, the use cases for ChatGPT are manifold: assessment preparation, translation, and creation of specified source code, to name a few. It can even handle more complex aspects of scientific writing, such as summarizing literature and paraphrasing text. Hence, this position paper addresses the need for discussion of potential approaches for integrating ChatGPT into higher education. Therefore, we focus on articles that address the effects of ChatGPT on higher education in the areas of software engineering and scientific writing. As ChatGPT was only recently released, there have been no peer-reviewed articles on the subject. Thus, we performed a structured grey literature review using Google Scholar to identify preprints of primary studies. In total, five out of 55 preprints are used for our analysis. Furthermore, we held informal discussions and talks with other lecturers and researchers and took into account the authors’ test results from using ChatGPT. We present five challenges and three opportunities for the higher education context that emerge from the release of ChatGPT. The main contribution of this paper is a proposal for how to integrate ChatGPT into higher education in four main areas.
We present an approach towards a data acquisition system for digital twins that uses a 5G network for data transmission and localization. The current hardware setup, which utilizes stereo vision and LiDAR for 3D mapping, is explained together with two recorded point cloud data sets. Furthermore, a resulting digital twin composed of voxelized point cloud data is shown. Ideas for future applications and challenges regarding the system are discussed, and an outlook on further development is given.
Bluetooth is a widely used wireless transmission protocol found in many mobile devices such as tablets, headphones, and smartwatches. Bluetooth-enabled devices broadcast public advertisements several times per minute, which contain, among other things, the device's unique MAC address. Recording these advertisements with Bluetooth loggers makes it possible to analyze the movements of the devices and thus to draw conclusions about the movements of their owners.
To protect privacy, randomly generated MAC addresses have been used in advertisements since 2014. Such a randomized MAC address remains valid for about 15 minutes on average and is then replaced by a new random address. The location of a device at a later point in time can therefore not be determined. Nevertheless, a device's transition from one Bluetooth logger to another within these 15 minutes can be detected, and a movement of the device can thus be inferred.
Due to contact tracing apps such as the Corona-Warn-App (CWA), even supposedly inactive smartphones broadcast Bluetooth advertisements. Accounting for about a quarter of the recordings, the CWA supports the analyses of this experimental work.
To demonstrate practical applicability, the Erlebniszoo Hannover was used as a test site. Evaluating the data collected over seven weeks enabled the analysis of peak times, heavily visited locations, and visitor flows.
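The logger-handover inference described above can be sketched in a few lines: whenever the same randomized MAC address is seen at two different loggers within its roughly 15-minute validity window, a movement of the device is inferred. The sighting records and logger names below are invented for illustration, not taken from the study's data.

```python
from datetime import datetime, timedelta

# Hypothetical sighting records: (MAC address, logger ID, timestamp).
sightings = [
    ("AA:BB:CC:11:22:33", "logger_entrance", datetime(2023, 5, 1, 10, 0)),
    ("AA:BB:CC:11:22:33", "logger_lion_house", datetime(2023, 5, 1, 10, 9)),
    ("DD:EE:FF:44:55:66", "logger_entrance", datetime(2023, 5, 1, 10, 2)),
]

# A randomized MAC stays valid for roughly 15 minutes, so only sightings
# of the same MAC within that window can be linked to one device.
WINDOW = timedelta(minutes=15)

def infer_movements(sightings):
    """Yield (mac, from_logger, to_logger) for detected logger handovers."""
    last_seen = {}
    for mac, logger, ts in sorted(sightings, key=lambda s: s[2]):
        if mac in last_seen:
            prev_logger, prev_ts = last_seen[mac]
            if prev_logger != logger and ts - prev_ts <= WINDOW:
                yield (mac, prev_logger, logger)
        last_seen[mac] = (logger, ts)

for move in infer_movements(sightings):
    print(move)
```

Running this yields one movement for the first device, from the entrance logger to the lion-house logger; the second MAC is only ever seen at one logger.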
Background:
Many patients with cardiovascular disease also show a high comorbidity of mental disorders, especially anxiety and depression. This is, in turn, associated with a decrease in quality of life. Psychocardiological treatment options are currently limited. Hence, there is a need for novel and accessible psychological help. Recently, we demonstrated that a brief face-to-face intervention based on metacognitive therapy (MCT) is promising in treating anxiety and depression. Here, we aim to translate the face-to-face approach into a digital application and explore the feasibility of this approach.
Methods:
We translated a validated brief psychocardiological intervention into a novel non-blended web app. The data of 18 patients suffering from various cardiac conditions but without diagnosed mental illness were analyzed after they used the web app over a two-week period in a feasibility trial. The aim was to determine whether a non-blended, web-app-based MCT approach is feasible for patients with cardiovascular disease.
Results:
Overall, patients were able to use the web app and rated it as satisfactory and beneficial. In addition, there was a first indication that using the app improved the cardiac patients’ subjectively perceived health and reduced their anxiety. Therefore, the approach seems feasible for a future randomized controlled trial.
Conclusion:
Applying a brief MCT-based intervention via a non-blended web app shows good acceptance and feasibility in a small target group of patients with CVD. Future studies should further develop, improve, and validate digital psychotherapy approaches, especially in patient groups that lack access to standard psychotherapeutic care.
This thesis investigates the application of machine learning to recognize ship activities from AIS signals. The Automatic Identification System (AIS) is used by ships to transmit information about their status at regular intervals. Based on this data, models were learned with supervised classification algorithms that are able to recognize which activity a ship is engaged in at a given point in time.
Since successfully learning a model depends on careful data preparation, various data preparation techniques were applied. Subsequently, several algorithms, including Random Forest and k-NN, were used to learn models.
The results show that the activities could be recognized with an accuracy of up to 99% when suitable data preparation techniques were chosen.
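The classification step can be illustrated with scikit-learn. The features (speed over ground, rate of turn), the activity labels, and the synthetic data below are invented stand-ins, not the thesis's actual AIS pipeline or results.

```python
# Sketch of supervised activity classification with a Random Forest,
# one of the algorithms mentioned above. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic AIS-like features: speed over ground (kn), rate of turn (deg/min).
# Hypothetical activity labels: 0 = anchored, 1 = fishing, 2 = transit.
X = np.vstack([
    rng.normal([0.2, 0.0], [0.2, 0.5], size=(100, 2)),   # anchored
    rng.normal([4.0, 8.0], [1.0, 3.0], size=(100, 2)),   # fishing
    rng.normal([14.0, 0.5], [2.0, 0.5], size=(100, 2)),  # transit
])
y = np.repeat([0, 1, 2], 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"accuracy: {accuracy:.2f}")
```

With such cleanly separated synthetic clusters the accuracy is near perfect; the 99% figure reported above, by contrast, refers to real AIS data after careful preparation.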
In our studies, person factors consistently proved more relevant than situational factors for a person's decision for or against corruption. The various classes of person factors differ in how strongly they affect this decision. The classes personality, values, and attitudes substantially influence the decision for or against corrupt behavior. Contrary to our original expectations, the class of implicit motives has no substantial influence. Sociodemographic characteristics such as age or gender likewise have no substantial effect on decisions for or against corrupt behavior. Age or gender is only indirectly effective when linked with other person factors; for example, openness may change with age. It is the respective person factors, not the sociodemographic characteristics, that are causal for corrupt acts. The person factors are comparatively well supported empirically. For the situational factors, numerous ambiguities remain that cannot be satisfactorily resolved on the basis of the current state of knowledge. How a concrete situation is perceived and processed by a particular person depends on that person's person factors and not only on external situational factors. The theory we present can serve as a basis for further research on corruption.
In recent years, generative models have gained large public attention due to the high quality of the images they generate. In short, generative models learn a distribution from a finite number of samples and are then able to generate arbitrarily many new samples. This can be applied to image data. In the past, generative models were not able to generate realistic images, but nowadays the results are almost indistinguishable from real images.
This work provides a comparative study of three generative models: Variational Autoencoder (VAE), Generative Adversarial Network (GAN) and Diffusion Models (DM). The goal is not to provide a definitive ranking indicating which one of them is the best, but to qualitatively, and where possible quantitatively, decide how good each model is with respect to a given criterion. Such criteria include realism, generalization and diversity, sampling, training difficulty, parameter efficiency, interpolation and inpainting capabilities, semantic editing, as well as implementation difficulty. After a brief introduction to how each model works internally, the models are compared against each other. The provided images help to show the differences among the models with respect to each criterion.
To give a short outlook on the results of the comparison, DMs generate the most realistic images. They seem to generalize best and show high variation among the generated images. However, they are based on an iterative process, which makes them the slowest of the three models in terms of sample generation time. GANs and VAEs, on the other hand, generate their samples in a single forward pass. The images generated by GANs are comparable to those of the DM, while the images from VAEs are blurry, which makes them less desirable in comparison to GANs or DMs. However, both the VAE and the GAN stand out from the DMs with respect to interpolation and semantic editing, as they have a latent space: latent-space walks are possible, and the changes are not as chaotic as with DMs. Furthermore, concept vectors can be found that transform a given image along a given feature while leaving other features and structures mostly unchanged, which is difficult to achieve with DMs.
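The latent-space walks mentioned above reduce to interpolating between two latent codes and decoding each intermediate point. The sketch below uses a placeholder decoder; in a real VAE or GAN, `decode` would be the trained generator network.

```python
import numpy as np

def interpolate(z_a, z_b, steps=5):
    """Linearly interpolate between two latent codes z_a and z_b."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * z_a + t * z_b for t in ts]

# Stand-in for a trained decoder/generator; in practice this maps a
# latent code to an image tensor.
def decode(z):
    return np.tanh(z)  # placeholder non-linearity

z_a = np.zeros(4)
z_b = np.ones(4)
frames = [decode(z) for z in interpolate(z_a, z_b)]
print(len(frames))  # one decoded sample per interpolation step
```

A concept vector works the same way: decoding `z + alpha * v` for a learned direction `v` moves the image along one feature while the rest of the latent code, and thus most other features, stays fixed.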
There are many aspects of code quality, some of which are difficult to capture or to measure. Despite the importance of software quality, there is a lack of commonly accepted measures or indicators for code quality that can be linked to quality attributes. We investigate software developers’ perceptions of source code quality and the practices they recommend to achieve these qualities. We analyze data from semi-structured interviews with 34 professional software developers, programming teachers and students from Europe and the U.S. For the interviews, participants were asked to bring code examples to exemplify what they consider good and bad code, respectively. Readability and structure were used most commonly as defining properties for quality code. Together with documentation, they were also suggested as the most common target properties for quality improvement. When discussing actual code, developers focused on structure, comprehensibility and readability as quality properties. When analyzing relationships between properties, the most commonly talked about target property was comprehensibility. Documentation, structure and readability were named most frequently as source properties to achieve good comprehensibility. Some of the most important source code properties contributing to code quality as perceived by developers lack clear definitions and are difficult to capture. More research is therefore necessary to measure the structure, comprehensibility and readability of code in ways that matter for developers and to relate these measures of code structure, comprehensibility and readability to common software quality attributes.
The digital transformation, with its new technologies and customer expectations, has a significant effect on the customer channels in the insurance industry. The objective of this study is to identify enabling and hindering factors for the adoption of online claim notification services, which are an important part of the customer experience in insurance. For this purpose, we conducted a quantitative cross-sectional survey based on the exemplary scenario of car insurance in Germany and analyzed the data via structural equation modeling (SEM). The findings show that, besides classical technology acceptance factors such as perceived usefulness and ease of use, digital mindset and status quo behavior play a role: acceptance of digital innovations, lacking endurance, and lacking frustration tolerance with the status quo lead to a higher intention to use. Moreover, the results are strongly moderated by the severity of the damage event, an insurance-specific factor that has been sparsely considered so far. The latter discovery implies that customers choose their communication channel based on the individual circumstances of the claim.
During the Corona pandemic, information traditionally used for corporate credit risk analysis (e.g., from the analysis of balance sheets and payment behavior) became less valuable because it represents only past circumstances. Therefore, the use of currently published data from social media platforms, which has been shown to contain valuable information regarding the financial stability of companies, should be evaluated. This data may contain, for example, additional information from disappointed employees or customers. In order to analyze to what extent this data can improve the information base for corporate credit risk assessment, this paper analyzes Twitter data regarding the ten largest insolvencies of German companies in 2020 and solvent counterparts. The results from t-tests show that sentiment before the insolvencies is significantly worse than in the comparison group, which is in alignment with previously conducted research. Furthermore, companies can be classified as prospectively solvent or insolvent with up to 70% accuracy by applying the k-nearest-neighbor algorithm to monthly aggregated sentiment scores. No significant difference in the number of Tweets between the two groups can be proven, which contrasts with findings from studies conducted before the pandemic. The results can be utilized by practitioners and scientists to improve decision support systems in the domain of corporate credit risk analysis. From a scientific point of view, the results show that the information asymmetry between lenders and borrowers in credit relationships, which are principals and agents according to principal-agent theory, can be reduced based on user-generated content from social media platforms. In future studies, it should be evaluated to what extent this data can be integrated into established processes for credit decision-making. Furthermore, additional social media platforms as well as further samples of companies should be analyzed.
Lastly, the authenticity of user-generated content should be taken into account in order to ensure that credit decisions rely on truthful information only.
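The k-nearest-neighbor step on monthly aggregated sentiment can be sketched as follows. The sentiment profiles and labels are invented stand-ins; the paper's actual features, data, and 70% accuracy figure are not reproduced here.

```python
# Hypothetical sketch of the paper's k-NN idea: classify companies as
# solvent/insolvent from monthly aggregated sentiment scores.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row: mean sentiment for six consecutive months (invented values);
# label 1 = later insolvent, 0 = remained solvent.
X = np.array([
    [-0.4, -0.5, -0.6, -0.7, -0.6, -0.8],
    [-0.3, -0.4, -0.5, -0.5, -0.7, -0.6],
    [ 0.2,  0.1,  0.3,  0.2,  0.1,  0.2],
    [ 0.1,  0.2,  0.1,  0.3,  0.2,  0.1],
])
y = np.array([1, 1, 0, 0])

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
pred = clf.predict([[-0.5, -0.6, -0.5, -0.6, -0.7, -0.7]])
print(pred)
```

The query profile is consistently negative, so its nearest training profile belongs to the insolvent group and the classifier returns label 1.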
Companies that take sustainability seriously must demonstrate that they create positive effects for society, which makes holistic impact measurement indispensable. Social enterprises should serve as role models for such impact measurement. However, a scientific study based on the so-called "outcome pyramid" ("Ergebnispyramide") concludes that even they have so far hardly measured their impact holistically.
In this paper, we describe the selection of a modern build automation tool for an industry research partner of ours, an insurance company. Build automation has become increasingly important over the years and is today one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more build automation products have entered the market and existing tools have changed their functional scope, there is nowadays a large number of tools that differ greatly in functionality. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools, and a summary of our comparison of all three tools from our final comparison round.
Music streaming platforms offer music listeners an overwhelming choice of music. Therefore, users of streaming platforms need the support of music recommendation systems to find music that suits their personal taste. Currently, a new class of recommender systems based on knowledge graph embeddings promises to improve the quality of recommendations, in particular to provide diverse and novel recommendations. This paper investigates how knowledge graph embeddings can improve music recommendations. First, it is shown how a collaborative knowledge graph can be derived from open music data sources. Based on this knowledge graph, the music recommender system EARS (knowledge graph Embedding-based Artist Recommender System) is presented in detail, with particular emphasis on recommendation diversity and explainability. Finally, a comprehensive evaluation with real-world data is conducted, comparing different embeddings and investigating the influence of different types of knowledge.
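One common family of knowledge graph embeddings models a relation as a translation in vector space (TransE-style scoring). The entities, relation, and two-dimensional vectors below are invented for illustration; the paper's actual embedding models may differ.

```python
# Toy TransE-style scoring over a miniature music knowledge graph.
import numpy as np

entities = {
    "user:alice":   np.array([0.9, 0.1]),
    "artist:miles": np.array([1.0, 1.0]),
    "artist:dylan": np.array([0.0, 2.0]),
}
relations = {"likes": np.array([0.1, 0.9])}

def score(head, relation, tail):
    """TransE plausibility: smaller ||h + r - t|| means more plausible."""
    return np.linalg.norm(entities[head] + relations[relation] - entities[tail])

s_miles = score("user:alice", "likes", "artist:miles")
s_dylan = score("user:alice", "likes", "artist:dylan")
print(s_miles < s_dylan)  # alice + likes lands closer to miles
```

Ranking candidate artists by this score yields recommendations, and the graph paths behind a low score can serve as an explanation, which is the kind of explainability emphasized above.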
Intrusion detection is an essential component of preventing and mitigating cyberattacks. To this end, data is collected from various sources and searched for traces of intrusions. The volume of data produced today is a major problem for intrusion detection. Especially for complex cyberattacks that unfold over a longer period, the amount of data to be searched grows considerably and makes it harder to find and combine the individual attack steps.
One possible solution to counter this problem is to reduce the volume of data. Data reduction attempts to filter out data that is irrelevant from the perspective of intrusion detection. Such approaches are collectively referred to as reduction techniques. In this thesis, reduction techniques from the research literature are examined and applied to benchmark datasets to evaluate their usability. The question investigated is whether the reduction techniques are able to identify and remove irrelevant data without impairing intrusion detection. Detection performance is evaluated with ThreaTrace, a method based on graph neural networks.
The evaluation shows that several reduction techniques can substantially reduce the data volume without impairing intrusion detection. For three techniques, their use leads to no noteworthy changes in detection rates, with reduction rates of up to 30% being achieved. For one reduction technique, detection performance even increased by 8%. Only for two techniques does their use cause the detection rate to drop drastically.
Overall, the thesis shows that data reduction can be applied without impairing intrusion detection; in particular cases it can even improve detection performance. However, the successful use of reduction techniques depends on the dataset and the intrusion detection method used.
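A toy illustration of the general idea behind such reduction techniques: dropping exactly repeated events, such as the same process re-reading the same file, which add no new information for detection. This is a deliberate simplification, not one of the specific techniques evaluated in the thesis.

```python
# Remove duplicate events from a (hypothetical) provenance event stream.
events = [
    ("proc_a", "read", "/etc/passwd"),
    ("proc_a", "read", "/etc/passwd"),   # exact duplicate
    ("proc_a", "write", "/tmp/out"),
    ("proc_b", "read", "/etc/passwd"),
]

def deduplicate(stream):
    """Keep only the first occurrence of each event, preserving order."""
    seen, kept = set(), []
    for event in stream:
        if event not in seen:
            seen.add(event)
            kept.append(event)
    return kept

reduced = deduplicate(events)
print(f"reduction: {1 - len(reduced) / len(events):.0%}")
```

Real reduction techniques are more selective than this and must be checked against the detector, which is exactly what the evaluation with ThreaTrace does.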
In this paper, we describe the selection of a modern build automation tool for an industry research partner of ours, an insurance company. Build automation has become increasingly important over the years and is today one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more build automation products have entered the market and existing tools have changed their functional scope, there is nowadays a large number of tools that differ greatly in functionality. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools, and a summarized comparison of two tools.
In this paper, we present a novel approach for real-time rendering of soft eclipse shadows cast by spherical, atmosphereless bodies. While this problem may seem simple at first, it is complicated by several factors. First, the extreme scale differences and huge mutual distances of the involved celestial bodies cause rendering artifacts in practice. Second, the surface of the Sun does not emit light evenly in all directions (an effect which is known as limb darkening). This makes it impossible to model the Sun as a uniform spherical light source. Finally, our intended applications include real-time rendering of solar eclipses in virtual reality, which require very high frame rates. As a solution to these problems, we precompute the amount of shadowing into an eclipse shadow map, which is parametrized so that it is independent of the position and size of the occluder. Hence, a single shadow map can be used for all spherical occluders in the Solar System. We assess the errors introduced by various simplifications and compare multiple approaches in terms of performance and precision. Last but not least, we compare our approaches to the state-of-the-art and to reference images. The implementation has been published under the MIT license.
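For context, the simplest eclipse test compares the apparent angular radii of the Sun and the occluder with their angular separation, ignoring limb darkening and the soft penumbra that the paper actually models via shadow maps. The sketch below only classifies the overlap state; the values are rough Sun/Moon figures, not the paper's data.

```python
import math

def eclipse_phase(sun_radius, occ_radius, d_sun, d_occ, angular_sep):
    """Classify the eclipse state at an observer from apparent angular radii.

    sun_radius/occ_radius: physical radii; d_sun/d_occ: distances from the
    observer (consistent length units); angular_sep: angle between the two
    body centres, in radians.
    """
    a_sun = math.asin(sun_radius / d_sun)   # apparent angular radius of the Sun
    a_occ = math.asin(occ_radius / d_occ)   # apparent angular radius of occluder
    if angular_sep >= a_sun + a_occ:
        return "none"                        # discs do not overlap
    if angular_sep <= abs(a_occ - a_sun) and a_occ >= a_sun:
        return "total"                       # Sun fully covered (umbra)
    return "partial"                         # partial overlap (penumbra/annular)

# Rough Sun/Moon values as seen from Earth (km); both subtend about 0.25 deg,
# so a 1-degree separation means no overlap at all.
print(eclipse_phase(696_000, 1_737, 1.496e8, 3.84e5, math.radians(1.0)))
```

The paper's contribution starts where this sketch stops: computing how much light reaches the observer inside the "partial" region, with limb darkening, and caching the result in an occluder-independent shadow map.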
Sustainable requirements management with external IT service providers in public administration
(2023)
Context: The influence of organizational culture and structure on the outsourcing of IT processes in public administration is examined.
Objective: The study explores how the cultural and structural particularities of public administration affect collaboration with external IT service providers. The goal is to develop a sustainable, future-proof concept for requirements management.
Method: Expert interviews are conducted, followed by a qualitative content analysis according to the methodology of Gläser and Laudel.
Results: The structure of public administration is highly fragmented; its culture is characterized by risk aversion and bureaucratic behavior.
Conclusion: A combination of communication, knowledge, and relationship management enables sustainable requirements management.
The paper provides a comprehensive overview of modeling and pricing cyber insurance and includes clear and easily understandable explanations of the underlying mathematical concepts. We distinguish three main types of cyber risks: idiosyncratic, systematic, and systemic cyber risks. While for idiosyncratic and systematic cyber risks, classical actuarial and financial mathematics appear to be well-suited, systemic cyber risks require more sophisticated approaches that capture both network and strategic interactions. In the context of pricing cyber insurance policies, issues of interdependence arise for both systematic and systemic cyber risks; classical actuarial valuation needs to be extended to include more complex methods, such as concepts of risk-neutral valuation and (set-valued) monetary risk measures.
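As a concrete instance of the classical actuarial valuation that the paper deems suitable for idiosyncratic cyber risks, a standard premium principle prices a loss $L$ by loading its expectation; the loading factor $\theta$ is illustrative, not a value from the paper:

```latex
% Expected value principle: premium = expected loss plus a safety loading.
\pi(L) = (1 + \theta)\,\mathbb{E}[L], \qquad \theta > 0
```

Systemic cyber risks break the independence assumptions behind such principles, which is why the paper extends the toolbox to risk-neutral valuation and (set-valued) monetary risk measures.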
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
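The data-stream prediction of task outcomes can be sketched with an incrementally trained linear classifier that is updated as new batches of tasks arrive. The features, hidden outcome rule, and batches below are synthetic stand-ins, not the paper's models or data.

```python
# Sketch of stream learning for task-outcome prediction via partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
model = SGDClassifier(random_state=42)
classes = np.array([0, 1])  # 0 = task fails, 1 = task succeeds

for batch in range(20):  # simulate a stream of task batches
    X = rng.normal(size=(32, 3))          # e.g. distance, load, deadline slack
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden outcome rule
    model.partial_fit(X, y, classes=classes)       # incremental update

X_eval = rng.normal(size=(200, 3))
y_eval = (X_eval[:, 0] + 0.5 * X_eval[:, 1] > 0).astype(int)
print(f"stream accuracy: {model.score(X_eval, y_eval):.2f}")
```

Such a continuously updated predictor is what would flag a likely failure early enough for the coordination mechanisms above to transfer the task to a better-suited worker.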
Companion booklet for the seminar "Psychology at Work - Selbstmanagement im Unternehmen und Authentic Leadership": checklists, exercises, worksheets, links, and slide sets. In addition to the videos and the live online sessions, the booklet contains the video exercises for easier overview and completion, as well as additional online links and in-depth worksheets.
Companion booklet for the seminar "Psychology at work. Positive Psychologie im Unternehmen und Positive Leadership": checklists, exercises, worksheets, links, and slide sets.
In addition to the videos and the live online sessions, the booklet contains the video exercises for easier overview and completion, as well as additional online links and in-depth worksheets.