Integrated Risk and Opportunity Management (IROM) goes far beyond what is found in organizations today. However, it offers the best opportunity not only to keep pace with the VUCA world, but to actually profit from it. Accordingly, the introduction of opportunity-based thinking in addition to risk-based thinking is part of the design specification for ISO 9000 and ISO 9001. The prerequisite for the successful design of an IROM is the individual definition, control and integration of risk and opportunity management processes, considering eight success factors, the "8 C". Top management benefits directly from the result: better, coordinated decision memos enable faster and more appropriate decisions.
Renewable energy production is one of the fastest-growing markets, and further strong growth can be anticipated due to the desire for increased sustainability in many parts of the world. With the rising adoption of renewable power production, such facilities become increasingly attractive targets for cyber attacks. At the same time, the requirements for reliable production are rising. In this paper we propose a concept that improves the monitoring of renewable power plants by detecting anomalous behavior. The system not only detects an anomaly, it also provides reasoning for the anomaly based on a specific mathematical model of the expected behavior, giving detailed information about the various influential factors causing the alert. The set of influential factors can be configured in the system before it learns normal behavior. The concept is based on multidimensional analysis and has been implemented and successfully evaluated on actual data from different providers of wind power plants.
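A minimal sketch of the idea described above, with illustrative assumptions: the factor names, the simplified power-curve model, and the three-sigma threshold are examples, not the authors' actual mathematical model. Normal behavior is learned from samples; at run time the residual between the model's expected output and the observed output flags an anomaly, and the configured influential factors are ranked by how unusual their values are to provide the reasoning.

```python
from statistics import mean, stdev

class AnomalyDetector:
    """Toy model-based anomaly detection with per-factor reasoning."""

    def __init__(self, factors):
        self.factors = factors          # influential factors configured up front
        self.stats = {}                 # per-factor (mean, std) from normal data
        self.residual_mean = 0.0
        self.residual_std = 0.0

    def fit(self, samples, outputs, predict):
        # Learn the normal range of each factor and of the model residual.
        for f in self.factors:
            vals = [s[f] for s in samples]
            self.stats[f] = (mean(vals), stdev(vals))
        residuals = [o - predict(s) for s, o in zip(samples, outputs)]
        self.residual_mean = mean(residuals)
        self.residual_std = stdev(residuals)

    def check(self, sample, output, predict, k=3.0):
        # Flag an anomaly when the residual leaves the normal band, then
        # explain it by ranking factors by how unusual their values are.
        r = output - predict(sample)
        if abs(r - self.residual_mean) <= k * self.residual_std:
            return None
        z = {f: abs(sample[f] - m) / s for f, (m, s) in self.stats.items()}
        return sorted(z, key=z.get, reverse=True)   # most suspicious first
```

With `predict` set to a simplified cubic power curve over wind speed, an observed output far below the expectation triggers an alert whose reasoning lists the most deviating factors first.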
Training and evaluating deep learning models on road graphs for traffic prediction using SUMO
(2024)
The escalation of traffic volume in urban areas poses multifaceted challenges, including increased accident risks, congestion, and prolonged travel times. The traditional approach of expanding road infrastructure faces limitations such as space constraints and the potential exacerbation of traffic issues.
Intelligent Transport Systems (ITS) present an alternative strategy to alleviate traffic problems by leveraging data-driven solutions. Central to ITS is traffic prediction, a process vital for applications like Traffic Management and Navigation Systems.
Traffic prediction has recently attracted a surge of interest, particularly in deep learning methods optimized for graph-based data processing, which are currently considered the most promising avenue.
These methods typically rely on real-life datasets containing traffic sensor data such as METR-LA and PeMS. However, the finite nature of real-life data prompts exploration into augmenting training and testing datasets with simulated traffic data.
This thesis explores the potential of utilizing traffic simulations, employing the microscopic traffic simulator SUMO, to train and test deep learning models for traffic prediction. A framework integrating PyTorch and SUMO is proposed for this purpose, aiming to elucidate the feasibility and effectiveness of using simulated traffic data for enhancing predictive models in traffic management systems.
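To make the pipeline concrete, here is a dependency-free sketch of the data-preparation step such a framework needs: turning per-edge speed time series exported from a simulation into sliding windows (past steps as input, following steps as target) plus the road-graph adjacency matrix that graph-based models consume. The function names and window shapes are illustrative assumptions; the actual thesis framework builds on SUMO and PyTorch.

```python
def make_windows(series, t_in, t_out):
    """series: list of per-timestep feature vectors (e.g. one speed per road edge).
    Returns (inputs, targets): past t_in steps mapped to the next t_out steps."""
    xs, ys = [], []
    for i in range(len(series) - t_in - t_out + 1):
        xs.append(series[i:i + t_in])
        ys.append(series[i + t_in:i + t_in + t_out])
    return xs, ys

def adjacency(n_nodes, edges):
    """Dense adjacency matrix of the road graph; edges are (u, v) index pairs."""
    a = [[0.0] * n_nodes for _ in range(n_nodes)]
    for u, v in edges:
        a[u][v] = 1.0
    return a
```

In practice the speed series would come from the simulator's output (e.g. via SUMO's TraCI interface) and the resulting windows would be converted to tensors for training.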
In this paper, we present a novel approach for real-time rendering of soft eclipse shadows cast by spherical, atmosphereless bodies. While this problem may seem simple at first, it is complicated by several factors. First, the extreme scale differences and huge mutual distances of the involved celestial bodies cause rendering artifacts in practice. Second, the surface of the Sun does not emit light evenly in all directions (an effect which is known as limb darkening). This makes it impossible to model the Sun as a uniform spherical light source. Finally, our intended applications include real-time rendering of solar eclipses in virtual reality, which requires very high frame rates. As a solution to these problems, we precompute the amount of shadowing into an eclipse shadow map, which is parametrized so that it is independent of the position and size of the occluder. Hence, a single shadow map can be used for all spherical occluders in the Solar System. We assess the errors introduced by various simplifications and compare multiple approaches in terms of performance and precision. Last but not least, we compare our approaches to the state of the art and to reference images. The implementation has been published under the MIT license.
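For illustration, a simplified baseline for the quantity such a shadow map stores: the fraction of the apparent solar disc covered by a spherical occluder, computed from the standard circle-circle intersection area. This uniform-disc sketch deliberately ignores limb darkening, which the paper explicitly models, and is not the authors' implementation.

```python
import math

def occluded_fraction(r_sun, r_occ, d):
    """Fraction of the apparent solar disc covered by an occluding disc.
    r_sun, r_occ: apparent angular radii; d: angular separation of the centres.
    Uniform-disc approximation (no limb darkening)."""
    if d >= r_sun + r_occ:
        return 0.0                                  # discs disjoint: full light
    if d <= abs(r_sun - r_occ):                     # one disc inside the other
        return min(1.0, (r_occ / r_sun) ** 2)
    # Lens-shaped overlap of two partially intersecting discs.
    a1 = r_sun**2 * math.acos((d**2 + r_sun**2 - r_occ**2) / (2 * d * r_sun))
    a2 = r_occ**2 * math.acos((d**2 + r_occ**2 - r_sun**2) / (2 * d * r_occ))
    a3 = 0.5 * math.sqrt((-d + r_sun + r_occ) * (d + r_sun - r_occ)
                         * (d - r_sun + r_occ) * (d + r_sun + r_occ))
    return (a1 + a2 - a3) / (math.pi * r_sun**2)
```

Because only ratios of angular radii and separation enter the formula, the result is independent of the occluder's absolute position and size, which is the same property that lets a single precomputed shadow map serve all spherical occluders.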
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
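A toy sketch of the adaptive loop described above, under illustrative assumptions: the outcome predictor here is just an incrementally updated per-worker failure rate (standing in for the data stream learners the paper analyzes), and a transfer is proposed when a candidate's predicted risk undercuts the current worker's by a margin. Since workers are autonomous, the proposal may still be rejected.

```python
class StreamPredictor:
    """Incremental outcome predictor: running failure rate per worker."""

    def __init__(self):
        self.fail, self.total = {}, {}

    def update(self, worker, failed):
        # One labelled task outcome arrives from the stream.
        self.total[worker] = self.total.get(worker, 0) + 1
        self.fail[worker] = self.fail.get(worker, 0) + (1 if failed else 0)

    def failure_risk(self, worker):
        n = self.total.get(worker, 0)
        return self.fail[worker] / n if n else 0.5   # uninformative prior

def should_transfer(pred, current, candidates, margin=0.2):
    """Propose the best-suited candidate if their predicted failure risk
    undercuts the current worker's by more than `margin`, else None."""
    best = min(candidates, key=pred.failure_risk)
    if pred.failure_risk(current) - pred.failure_risk(best) > margin:
        return best
    return None
```

The margin keeps the coordination from thrashing: tasks are only moved when the predicted improvement is substantial.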
Music streaming platforms offer music listeners an overwhelming choice of music. Therefore, users of streaming platforms need the support of music recommendation systems to find music that suits their personal taste. Currently, a new class of recommender systems based on knowledge graph embeddings promises to improve the quality of recommendations, in particular to provide diverse and novel recommendations. This paper investigates how knowledge graph embeddings can improve music recommendations. First, it is shown how a collaborative knowledge graph can be derived from open music data sources. Based on this knowledge graph, the music recommender system EARS (knowledge graph Embedding-based Artist Recommender System) is presented in detail, with particular emphasis on recommendation diversity and explainability. Finally, a comprehensive evaluation with real-world data is conducted, comparing different embeddings and investigating the influence of different types of knowledge.
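As a pointer to how such recommendations work mechanically, here is a sketch of one widely used knowledge-graph-embedding scorer, TransE. The abstract does not state which embedding model EARS uses, so this is illustrative only: TransE represents a triple (head, relation, tail) as a translation h + r ≈ t and scores candidate tails by the remaining distance.

```python
def transe_score(h, r, t):
    """Negative L2 distance of h + r from t; higher means more plausible."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

def recommend(user_vec, rel_vec, artists):
    """Rank candidate (name, embedding) pairs for a (user, likes, ?) query."""
    return sorted(artists, key=lambda nv: -transe_score(user_vec, rel_vec, nv[1]))
```

In a real system the embeddings would be trained on the collaborative knowledge graph, and the translation structure also supports explanations, since the relation vector names why a candidate is close.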
The digital transformation, with its new technologies and customer expectations, has a significant effect on the customer channels in the insurance industry. The objective of this study is to identify enabling and hindering factors for the adoption of online claim notification services, which are an important part of the customer experience in insurance. For this purpose, we conducted a quantitative cross-sectional survey based on the exemplary scenario of car insurance in Germany and analyzed the data via structural equation modeling (SEM). The findings show that, besides classical technology acceptance factors such as perceived usefulness and ease of use, digital mindset and status quo behavior play a role: acceptance of digital innovations as well as a lack of endurance and of frustration tolerance with the status quo lead to a higher intention to use. Moreover, the results are strongly moderated by the severity of the damage event, an insurance-specific factor that has received little attention so far. The latter discovery implies that customers prefer to choose their communication channel based on the individual circumstances of the claim.
The paper provides a comprehensive overview of modeling and pricing cyber insurance and includes clear and easily understandable explanations of the underlying mathematical concepts. We distinguish three main types of cyber risks: idiosyncratic, systematic, and systemic cyber risks. While for idiosyncratic and systematic cyber risks, classical actuarial and financial mathematics appear to be well-suited, systemic cyber risks require more sophisticated approaches that capture both network and strategic interactions. In the context of pricing cyber insurance policies, issues of interdependence arise for both systematic and systemic cyber risks; classical actuarial valuation needs to be extended to include more complex methods, such as concepts of risk-neutral valuation and (set-valued) monetary risk measures.
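For readers unfamiliar with the classical actuarial baseline referred to above, a minimal sketch of the expected-value premium principle for idiosyncratic cyber losses: premium = (1 + theta) * E[L]. The scenario-based loss model and the loading theta below are assumptions for the example, not taken from the paper; systematic and systemic risks require the interdependence-aware extensions the paper discusses.

```python
def expected_value_premium(scenarios, loading):
    """scenarios: list of (annual probability, loss amount) pairs for
    independent loss events. Returns (1 + loading) * expected annual loss."""
    expected_loss = sum(p * loss for p, loss in scenarios)
    return (1.0 + loading) * expected_loss
```

For example, a 10% chance of a 1,000 loss and a 1% chance of a 50,000 loss give an expected annual loss of 600, so a 20% safety loading yields a premium of 720.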
In this paper we describe the selection of a modern build automation tool for an industry research partner of ours, an insurance company. Build automation has become increasingly important over the years; today it is one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more build automation products have entered the market and existing tools have changed their functional scope, there is now a large number of tools that differ greatly in their capabilities. Based on requirements from our partner company, we conducted a build server analysis. This paper presents our analysis requirements, a detailed look at one of the examined tools, and a summarized comparison of two tools.
In recent years, generative models have gained large public attention due to the high quality of the images they generate. In short, generative models learn a distribution from a finite number of samples and are then able to generate arbitrarily many further samples. This can be applied to image data. In the past, generative models were not able to generate realistic images, but nowadays the results are almost indistinguishable from real images.
This work provides a comparative study of three generative models: the Variational Autoencoder (VAE), the Generative Adversarial Network (GAN), and Diffusion Models (DM). The goal is not to provide a definitive ranking indicating which of them is the best, but to decide qualitatively, and where possible quantitatively, which model is good with respect to a given criterion. Such criteria include realism, generalization and diversity, sampling, training difficulty, parameter efficiency, interpolation and inpainting capabilities, semantic editing, as well as implementation difficulty. After a brief introduction of how each model works internally, the models are compared against each other. The provided images help to see the differences among the models with respect to each criterion.
To give a short outlook on the results of the comparison, DMs generate the most realistic images. They seem to generalize best and show high variation among the generated images. However, they are based on an iterative process, which makes them the slowest of the three models in terms of sample generation time. GANs and VAEs, on the other hand, generate their samples in a single forward pass. The images generated by GANs are comparable to those of the DMs, while the images from VAEs are blurry, which makes them less desirable in comparison to GANs or DMs. However, both the VAE and the GAN stand out from the DMs with respect to interpolation and semantic editing: since they have a latent space, walks through that space are possible, and the changes are not as chaotic as with DMs. Furthermore, concept vectors can be found that transform a given image along a given feature while leaving other features and structures mostly unchanged, which is difficult to achieve with DMs.
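The two latent-space operations mentioned in the comparison can be sketched with plain vector arithmetic, where lists of floats stand in for the model's latent codes; the attribute grouping below is an illustrative assumption. Interpolation walks the straight line between two codes, and a concept vector is the mean difference between codes of images with and without an attribute, which can then be added to any code to edit that attribute.

```python
def interpolate(z_a, z_b, t):
    """Latent code at fraction t on the straight line from z_a to z_b."""
    return [(1 - t) * a + t * b for a, b in zip(z_a, z_b)]

def mean_code(codes):
    """Component-wise mean of a group of latent codes."""
    return [sum(c[i] for c in codes) / len(codes) for i in range(len(codes[0]))]

def concept_vector(codes_with, codes_without):
    """Direction separating codes that have an attribute from codes that don't."""
    mw, mo = mean_code(codes_with), mean_code(codes_without)
    return [a - b for a, b in zip(mw, mo)]

def edit(z, concept, strength=1.0):
    """Move a code along the concept direction to strengthen the attribute."""
    return [zi + strength * ci for zi, ci in zip(z, concept)]
```

Decoding the intermediate codes produced by `interpolate`, or the shifted code produced by `edit`, yields the smooth transitions and targeted feature changes that VAEs and GANs support well and DMs struggle with.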