In the context of modern mobility, topics such as smart cities, Car2Car communication, extensive vehicle sensor data, e-mobility and charging point management systems have to be considered. These topics of modern mobility often have in common that they are characterized by complex and extensive data situations. Vehicle position data, sensor data or vehicle communication data must be preprocessed, aggregated and analyzed. In many cases, the data is interdependent. For example, the position data of electric vehicles and of surrounding charging points depend on one another and characterize a competitive situation among the vehicles. In the case of Car2Car communication, the positions of the vehicles must also be viewed in relation to each other: the data are interdependent and influence whether a communication link can be established. This interdependence can produce very large and complex data situations that can no longer be processed efficiently. This work presents a model for mapping such typical data situations with strong interdependencies among the data. Microservices can help reduce the resulting complexity.
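As a hedged illustration of the interdependence described above (a minimal sketch, not the paper's model; all names are illustrative), electric vehicles and charging points can be viewed as a bipartite dependency graph in which an edge marks a vehicle competing for a charging point within range:

```python
from dataclasses import dataclass
from math import dist

@dataclass(frozen=True)
class Vehicle:
    vid: str
    pos: tuple  # (x, y) in km

@dataclass(frozen=True)
class ChargingPoint:
    cid: str
    pos: tuple

def competition_edges(vehicles, points, max_range_km=5.0):
    """Bipartite dependency edges: a vehicle competes for every
    charging point within its remaining range."""
    return {(v.vid, p.cid)
            for v in vehicles for p in points
            if dist(v.pos, p.pos) <= max_range_km}

vehicles = [Vehicle("v1", (0.0, 0.0)), Vehicle("v2", (1.0, 1.0))]
points = [ChargingPoint("c1", (0.5, 0.5))]
print(competition_edges(vehicles, points))  # both vehicles compete for c1
```

Two vehicles sharing an edge to the same charging point are in exactly the competitive situation the abstract describes; the density of such edges is what drives the size of the data situation.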
This document describes the work done during the research semester in summer 2006 of Prof. Dr. Stefan Wohlfeil. It is about security management tasks and how these tasks might be supported by Open Source software tools. I begin with a short discussion of general management tasks and describe some additional, security-related management tasks. These security-related tasks should then be added to a software tool which already provides the general tasks. Nagios is such a tool. It is extended to also perform some of the security-related management tasks. I describe the new checking scripts and how Nagios needs to be configured to use these scripts. The work has been done in cooperation with colleagues from the Polytechnic of Namibia in Windhoek, Namibia. This opportunity was also used to establish a partnership between the Department of Computer Science at FH Hannover and the Department of Information Technology at the Polytechnic. A first Memorandum of Agreement lays the groundwork for future staff or student exchange.
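As a hedged illustration of how such checking scripts hook into Nagios (the script below is illustrative, not one of the report's actual scripts), a Nagios plugin is simply an executable that prints a one-line status and reports its result through the exit codes 0 (OK), 1 (WARNING) and 2 (CRITICAL):

```python
#!/usr/bin/env python3
# Hypothetical security check: flag world-writable files in a
# sensitive directory. Nagios interprets the exit code.
import os
import stat
import sys

SENSITIVE_DIR = "/etc"  # example target, adjust per host

def world_writable(path):
    try:
        return bool(os.stat(path).st_mode & stat.S_IWOTH)
    except OSError:
        return False

findings = [os.path.join(SENSITIVE_DIR, f)
            for f in os.listdir(SENSITIVE_DIR)
            if world_writable(os.path.join(SENSITIVE_DIR, f))]

if not findings:
    print(f"OK - no world-writable files in {SENSITIVE_DIR}")
    sys.exit(0)
print(f"CRITICAL - world-writable files: {', '.join(findings)}")
sys.exit(2)
```

Registered as a command in the Nagios object configuration and attached to a service definition, such a script is scheduled and displayed like any built-in check.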
Radioisotope-guided sentinel lymph node dissection (sLND) has shown high diagnostic reliability in prostate cancer (PCa) and other cancers. To overcome the limitations of radioactive tracers, magnetometer-guided sLND using superparamagnetic iron oxide nanoparticles (SPIONs) has been successfully used in PCa. This prospective study (SentiMag Pro II, DRKS00007671) determined the diagnostic accuracy of magnetometer-guided sLND in intermediate- and high-risk PCa. Fifty intermediate- or high-risk PCa patients (prostate-specific antigen (PSA) >= 10 ng/mL and/or Gleason score >= 7; median PSA 10.8 ng/mL, IQR 7.4–19.2 ng/mL) were enrolled. After intraprostatic SPION injection the day before, patients underwent magnetometer-guided sLND and extended lymph node dissection (eLND), followed by radical prostatectomy. SLNs were detected in vivo and in ex vivo samples. The diagnostic accuracy of sLND was assessed using eLND as the reference. SLNs were detected in all patients (detection rate 100%), with 447 SLNs (median 9, IQR 6–12) identified and 966 LNs (median 18, IQR 15–23) removed. Thirty-six percent (18/50) of patients had LN metastases (median 2, IQR 1–3). Magnetometer-guided sLND had 100% sensitivity, 97.0% specificity, 94.4% positive predictive value, 100% negative predictive value, 0.0% false negative rate, and 3.0% additional diagnostic value (LN metastases only in SLNs outside the eLND template). In vivo, one positive SLN/LN-positive patient was missed, resulting in a sensitivity of 94.4%. In conclusion, this new magnetic sentinel procedure has high accuracy for nodal staging in intermediate- and high-risk PCa. The reliability of intraoperative SLN detection using this magnetometer system requires verification in further multicentric studies.
In huge warehouses or stockrooms, it is often very difficult to find a certain item because it has been misplaced and is therefore not at its assumed position. This position paper presents an approach to coordinating mobile RFID agents using a blackboard architecture based on Complex Event Processing.
Recent developments in the field of deep learning have shown promising advances for a wide range of historically difficult computer vision problems. Using advanced deep learning techniques, researchers manage to perform high-quality single-image super-resolution, i.e., increasing the resolution of a given image without the major losses in image quality usually encountered with traditional approaches such as standard interpolation. This thesis examines the process of deep learning super-resolution using convolutional neural networks and investigates whether the same deep learning models can be used to improve OCR results on low-quality text images.
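The abstract names the technique but shows no code; as a hedged sketch of a convolutional super-resolution network of the kind examined (a minimal SRCNN-style layout, assuming PyTorch; not the thesis's actual models):

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Minimal SRCNN-style net: feature extraction, non-linear
    mapping, reconstruction (the classic 9-1-5 layout)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # x: bicubically upscaled low-resolution image, shape (N, 1, H, W)
        return self.net(x)

model = SRCNN()
lowres_upscaled = torch.rand(1, 1, 64, 64)
print(model(lowres_upscaled).shape)  # torch.Size([1, 1, 64, 64])
```

Trained with a pixel-wise loss against the high-resolution originals, such a network replaces plain interpolation as the final upscaling step; the OCR question is then whether its outputs raise recognition accuracy.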
Cradle to Cradle – An analysis of the market potential in the German outdoor apparel industry
(2016)
The purpose of this study is to investigate the market potential in the German outdoor apparel industry by focusing on sustainable production in terms of environmental and human health. A literature study of the Cradle to Cradle (C2C) design concept is provided, as it represents a solution for pollution, waste and environmental destruction caused by current industrial design and waste management. The data for the subsequent market and competitive analysis of the German outdoor apparel industry was collected through secondary research in order to identify several key market indicators for the assessment of the market potential. The outcome of this research is the identification of a positioning strategy for outdoor apparel according to the C2C design concept. The results show stagnant growth rates in recent years in the German outdoor apparel market and strong rivalry among the competitors. However, a significant market potential was calculated and beneficial trends for sustainable outdoor brands were recognised. These findings reveal the existence of a market potential for an outdoor apparel brand according to the C2C design concept. By following a positioning strategy of transparency and full commitment to sustainable production, the company might be able to gain market shares from its competitors, as future predictions indicate slow growth rates in the market. The results of this analysis can be of great interest to entrepreneurs who plan to enter the German outdoor apparel industry.
The paper presents a comprehensive model of a banking system that integrates network effects, bankruptcy costs, fire sales, and cross-holdings. For the integrated financial market we prove the existence of a price-payment equilibrium and design an algorithm for the computation of the greatest and the least equilibrium. The number of defaults corresponding to the greatest price-payment equilibrium is analyzed in several comparative case studies. These illustrate the individual and joint impact of interbank liabilities, bankruptcy costs, fire sales and cross-holdings on systemic risk. We study policy implications and regulatory instruments, including central bank guarantees and quantitative easing, the significance of last wills of financial institutions, and capital requirements.
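As a hedged, heavily simplified illustration of computing a greatest price-payment equilibrium (an Eisenberg-Noe-style clearing iteration without the paper's fire-sale and cross-holding extensions; all numbers are illustrative), one can iterate the clearing map downward from full payment:

```python
import numpy as np

# Stylized interbank system: L[i, j] is what bank i owes bank j,
# e[i] is bank i's outside asset value.
L = np.array([[0.0, 2.0], [1.0, 0.0]])
e = np.array([0.5, 1.5])

p_bar = L.sum(axis=1)                     # total nominal obligations
Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L),
               where=p_bar[:, None] > 0)  # relative liability matrix

p = p_bar.copy()                          # start from full payment
for _ in range(100):                      # monotone iteration downward
    assets = e + Pi.T @ p                 # outside assets + interbank inflows
    p_next = np.minimum(p_bar, assets)    # pay in full, or pay out all assets
    if np.allclose(p_next, p):
        break
    p = p_next

print("greatest clearing payment vector:", p)
print("defaults:", p < p_bar)
```

Starting from full payment and iterating downward yields the greatest fixed point; starting from zero payments and iterating upward would yield the least one, mirroring the two equilibria the algorithm in the paper computes.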
The negative effects of traffic, such as air quality problems and road congestion, put a strain on the infrastructure of cities and highly populated areas. A potential measure to reduce these negative effects is grocery home delivery (e-grocery), which can bundle driving activities and, hence, result in decreased traffic and related emission outputs. Several studies have investigated the potential impact of e-grocery on traffic in various last-mile contexts. However, no holistic view on the sustainability of e-grocery across the entire supply chain has yet been proposed. Therefore, this paper presents an agent-based simulation to assess the impact of the e-grocery supply chain compared to the stationary one in terms of mileage and different emission outputs. The simulation shows that a high e-grocery utilization rate can aid in decreasing total driving distances by up to 255% relative to the optimal value as well as CO2 emissions by up to 50%.
Context: Agile software development (ASD) puts social aspects like communication and collaboration in focus. Thus, one may assume that the specific work organization of companies impacts the work of ASD teams. A major change in work organization is the switch to a 4-day work week, which some companies have investigated in experiments. Also, recent studies show that ASD teams are affected by the switch to remote work since the outbreak of the Covid-19 pandemic in 2020.
Objective: Our study presents empirical findings on the effects on ASD teams operating remotely in a 4-day work week organization. Method: We performed a qualitative single case study and conducted seven semi-structured interviews, observed 14 agile practices and screened eight project documents and protocols of agile practices.
Results: We found that the teams adapted the agile method in use due to the change to a 4-day work week environment and the switch to remote work. The productivity of the two ASD teams did not decrease. Although the stress level of the ASD team members increased due to the 4-day work week, we found that the job satisfaction of the individual ASD team members was affected positively. Finally, we point to effects on social facets of the ASD teams.
Conclusion: The research community benefits from our results, as the current state of research dealing with the effects of a 4-day work week on ASD teams is limited. Also, our findings provide several practical implications for ASD teams working remotely in a 4-day work week.
In recent years, generative models have gained large public attention due to the high quality of their generated images. In short, generative models learn a distribution from a finite number of samples and are then able to generate arbitrarily many new samples. This can be applied to image data. In the past, generative models were not able to generate realistic images, but nowadays the results are almost indistinguishable from real images.
This work provides a comparative study of three generative models: the Variational Autoencoder (VAE), the Generative Adversarial Network (GAN) and Diffusion Models (DM). The goal is not to provide a definitive ranking indicating which one of them is the best, but to decide qualitatively and, where possible, quantitatively which model is good with respect to a given criterion. Such criteria include realism, generalization and diversity, sampling, training difficulty, parameter efficiency, interpolation and inpainting capabilities, semantic editing as well as implementation difficulty. After a brief introduction of how each model works on the inside, they are compared against each other. The provided images help to see the differences among the models with respect to each criterion.
To give a short outlook on the results of the comparison: DMs generate the most realistic images. They seem to generalize best and show high variation among the generated images. However, they are based on an iterative process, which makes them the slowest of the three models in terms of sample generation time. GANs and VAEs, on the other hand, generate their samples in a single forward pass. The images generated by GANs are comparable to those of the DMs, while the images from VAEs are blurry, which makes them less desirable in comparison. However, both the VAE and the GAN stand out from the DMs with respect to interpolation and semantic editing, as they have a latent space, which makes latent-space walks possible, and the changes are not as chaotic as in the case of DMs. Furthermore, concept vectors can be found which transform a given image along a given feature while leaving other features and structures mostly unchanged, something that is difficult to achieve with DMs.
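As a hedged sketch of the latent-space walks mentioned above (illustrative code, assuming any trained decoder-based model such as a VAE decoder or GAN generator), spherical interpolation between two latent vectors yields a smooth path whose decoded images morph gradually:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between latent vectors z0 and z1."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # vectors (nearly) parallel
    return (np.sin((1.0 - t) * omega) * z0
            + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(128), rng.standard_normal(128)
walk = [slerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 8)]
# Each step would be decoded to an image: images = [decoder(z) for z in walk]
print(len(walk), walk[0].shape)  # 8 (128,)
```

Spherical rather than linear interpolation is commonly preferred because it keeps the interpolants at norms typical of the Gaussian prior the decoder was trained on.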
Delphi is a frequently used research method in the information systems (IS) field. The last fifteen years have seen many variants of the Delphi Method proposed and used in IS research. However, these variants do not seem to be properly derived; while all variants share certain characteristics, their reasoning for differentiation varies inconsistently. It seems that researchers tend to create "new" Delphi Method variants although the underlying modification of the Delphi Method is, in fact, minor. This leads to a heterogeneity of Delphi Method variants and undermines scientific rigor when using Delphi. This study addresses this deficit and (1) identifies different variants of Delphi and determines their characteristics, (2) critically reflects on the extent to which a clear distinction between these variants exists, (3) shows the clearly distinguishable Delphi Method variants and their characteristics, (4) develops a proposed taxonomy of Delphi Method variants, and (5) evaluates and applies this taxonomy. The proposed taxonomy helps clearly differentiate Delphi Method variants and enhances methodological rigor when using the Delphi Method.
Autonomous and integrated passenger and freight transport (APFIT) is a promising approach to tackle both traffic and last-mile-related issues such as environmental emissions, social and spatial conflicts, and operational inefficiencies. By conducting an agent-based simulation, we shed light on this widely unexplored research topic and provide first indications regarding influential target figures of such a system in the rural area of Sarstedt, Germany. Our results show that larger fleets entail inefficiencies due to suboptimal utilization of monetary and material resources and increase traffic volume, while a higher number of unused vehicles may exacerbate spatial conflicts. Nevertheless, to fit the given demand within our study area, a comparatively large fleet of about 25 vehicles is necessary to provide reliable service, assuming maximum passenger waiting times of six minutes, at the expense of higher standby times, rebalancing effort, and higher costs for vehicle acquisition and maintenance.
At the University of Applied Sciences and Arts Hannover, LON-CAPA is used as a learning management system alongside Moodle. LON-CAPA has a strong focus on e-assessment in mathematics and the sciences. We have used LON-CAPA in Hannover mainly in mathematics courses.
Since theoretical computer science needs a lot of mathematics, this course is also well suited for e-assessment in LON-CAPA. Besides this, we already used JFLAP as an interactive tool to deal with automata, machines and grammars in theoretical computer science. LON-CAPA offers the possibility of using external graders to grade problems.
We decided to write a grading engine (with JFLAP inside) that grades automata, machines and grammars handed in by students and to couple it with LON-CAPA. This report describes the types of questions that are now possible with this grader and how they can be authored in LON-CAPA.
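As a hedged analogue of what such a grader has to do (the actual engine wraps JFLAP in Java; the Python sketch below only illustrates the grading idea, and all names are invented), a submitted automaton can be checked against instructor-defined positive and negative test words:

```python
# A DFA as a plain dict: start state, accepting states, transitions.
def accepts(dfa, word):
    state = dfa["start"]
    for sym in word:
        state = dfa["delta"].get((state, sym))
        if state is None:          # missing transition: reject
            return False
    return state in dfa["accept"]

def grade(dfa, positives, negatives):
    """Fraction of test words the submission classifies correctly."""
    hits = sum(accepts(dfa, w) for w in positives)
    hits += sum(not accepts(dfa, w) for w in negatives)
    return hits / (len(positives) + len(negatives))

# Student submission: words over {0,1} that end in 1.
student = {"start": "q0", "accept": {"q1"},
           "delta": {("q0", "0"): "q0", ("q0", "1"): "q1",
                     ("q1", "0"): "q0", ("q1", "1"): "q1"}}
print(grade(student, positives=["1", "01", "111"],
            negatives=["", "0", "10"]))  # 1.0
```

The grading engine's score would then be returned to LON-CAPA through its external-grader coupling.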
During the transition from conventional towards purely electrical, sustainable mobility, transitional technologies play a major part in increasing adoption rates and decreasing range anxiety. Developing new concepts to meet this challenge requires adaptive test benches, which can easily be modified, e.g., when progressing from one stage of development to the next, but which also meet certain sustainability demands themselves.
The system architecture presented in this paper is built around a service-oriented software layer that connects a modular hardware layer, providing direct access to sensors and actuators, to an extensible set of client tools. It provides flexibility, serviceability and ease of use while maintaining a high level of reusability for its constituent components; together with features that reduce the overall required run time of the test benches, this effectively decreases the CO2 emissions of a test bench while increasing its sustainability and efficiency.
Cloud computing has become well established in private and public sector projects over the past few years, opening ever new opportunities for research and development, but also for education. One of these opportunities presents itself in the form of dynamically deployable, virtual lab environments, granting educational institutions increased flexibility with the allocation of their computing resources. These fully sandboxed labs provide students with their own, internal network and full access to all machines within, granting them the flexibility necessary to gather hands-on experience with building heterogeneous microservice architectures. The eduDScloud provides a private cloud infrastructure to which labs like the microservice lab outlined in this paper can be flexibly deployed at a moment’s notice.
In this paper, we present a novel approach for real-time rendering of soft eclipse shadows cast by spherical, atmosphereless bodies. While this problem may seem simple at first, it is complicated by several factors. First, the extreme scale differences and huge mutual distances of the involved celestial bodies cause rendering artifacts in practice. Second, the surface of the Sun does not emit light evenly in all directions (an effect which is known as limb darkening). This makes it impossible to model the Sun as a uniform spherical light source. Finally, our intended applications include real-time rendering of solar eclipses in virtual reality, which requires very high frame rates. As a solution to these problems, we precompute the amount of shadowing into an eclipse shadow map, which is parametrized so that it is independent of the position and size of the occluder. Hence, a single shadow map can be used for all spherical occluders in the Solar System. We assess the errors introduced by various simplifications and compare multiple approaches in terms of performance and precision. Last but not least, we compare our approaches to the state-of-the-art and to reference images. The implementation has been published under the MIT license.
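As a hedged illustration of the underlying geometry (a simplified sketch that treats the Sun as a uniform disk, i.e. without the limb darkening the paper explicitly models), the shadowed fraction at a point can be computed from the apparent angular radii of the Sun and the occluder and their angular separation via circle-circle overlap:

```python
from math import acos, sqrt, pi

def occluded_fraction(r_sun, r_occ, d):
    """Fraction of the solar disk covered by the occluder disk.
    Angles in radians; a uniform solar disk is assumed."""
    if d >= r_sun + r_occ:
        return 0.0                            # no overlap: full sunlight
    if d <= abs(r_sun - r_occ):               # one disk inside the other
        return min(1.0, (r_occ / r_sun) ** 2)
    # standard circle-circle intersection (lens) area
    a1 = r_sun**2 * acos((d**2 + r_sun**2 - r_occ**2) / (2 * d * r_sun))
    a2 = r_occ**2 * acos((d**2 + r_occ**2 - r_sun**2) / (2 * d * r_occ))
    a3 = 0.5 * sqrt((-d + r_sun + r_occ) * (d + r_sun - r_occ)
                    * (d - r_sun + r_occ) * (d + r_sun + r_occ))
    return (a1 + a2 - a3) / (pi * r_sun**2)

print(occluded_fraction(0.0047, 0.0047, 0.0))     # 1.0 (total eclipse)
print(occluded_fraction(0.0047, 0.0047, 0.0070))  # ~0.15 (partial)
```

Tabulating such values over the relevant parameter ranges is, in spirit, what the precomputed eclipse shadow map does, with the occluder-independent parametrization allowing one table to serve every spherical body.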
In recent years, multiple efforts for reducing energy usage have been proposed. Buildings in particular offer high potential for energy savings. In this paper, we present a novel approach to intelligent energy control that combines a simple infrastructure using low-cost sensors with the reasoning capabilities of Complex Event Processing. The key elements of the approach are a sophisticated semantic domain model and a multi-staged event processing architecture, leading to an intelligent, situation-aware energy management system.
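As a hedged, much-simplified sketch of such multi-staged event processing (illustrative rules and names, not the paper's system), raw sensor events can first be aggregated into a room situation and then matched against a situation-aware rule:

```python
# Stage 1: aggregate raw sensor events into per-room situations.
raw_events = [
    {"sensor": "window", "room": "R101", "value": "open"},
    {"sensor": "presence", "room": "R101", "value": "empty"},
    {"sensor": "heating", "room": "R101", "value": "on"},
]

situations = {}
for ev in raw_events:
    situations.setdefault(ev["room"], {})[ev["sensor"]] = ev["value"]

# Stage 2: situation-aware rule derives an energy-control action.
for room, s in situations.items():
    if (s.get("window") == "open" and s.get("presence") == "empty"
            and s.get("heating") == "on"):
        print(f"ACTION: turn heating off in {room}")  # energy saving
```

A real CEP engine would evaluate such rules continuously over event streams with time windows; the sketch only shows the staging idea of lifting low-level events to higher-level situations.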
The usage of microservices promises many benefits concerning scalability and maintainability; however, rewriting large monoliths is not always possible. In scientific projects especially, pure microservice architectures are therefore not feasible in every case. We propose the utilization of microservice principles for the construction of microsimulations for urban transport. We present a prototypical architecture for the connection of MATSim and AnyLogic, two widely used simulation tools in the context of urban transport simulation. The proposed system combines the two tools into a single tool supporting civil engineers in decision making on innovative urban transport concepts.
Portable micro combined heat and power (CHP) units are a gateway technology bridging conventional vehicles and Battery Electric Vehicles (BEV). As this is a new technology, new software has to be created that can be easily adapted to changing requirements. We propose and evaluate three different architectures based on three architectural paradigms. Using a scenario-based evaluation, we conclude that a Service-Oriented Architecture (SOA) using microservices provides a higher-quality solution than a layered or Event-Driven Complex-Event-Processing (ED-CEP) approach. Future work will include implementation and simulation-driven evaluation.
In this paper, the workflow of the project 'Untersuchungs-, Simulations- und Evaluationstool für Urbane Logistik' (USEfUL) is presented. Aiming to create a web-based decision support tool for urban logistics, the project needed to integrate multiple steps into a single workflow, which in turn needed to be executed multiple times. While a service-oriented system could not be created, the principles of service orientation were utilized to increase workflow efficiency and flexibility, allowing the workflow to be easily adapted to new concepts or research areas.
The purpose of this research is to explore the results that are measured by social enterprises (SEs) according to their mission and vision. Four SEs are examined for this reason. The status quo of aligned measurements was captured by conducting seven semi-structured interviews with persons from the middle and top management of the considered SEs. A conceptual framework, which categorizes output, outcome and impact measurements, is used as the basis for a structured content analysis. The findings imply that SEs' measurements are not sufficiently aligned with their mission and vision. Outputs are measured by all considered SEs. However, they fail to measure outcomes with all their sublevels. In particular, measuring mindset-change and behavior-change outcomes is neglected by the examined SEs. That can lead to adjustments where SEs only create more outputs but fail to create more outcomes and impact. Furthermore, neglecting outcome measurements makes existing but mostly unsystematic impact measurements invalid, since outputs, outcomes and impact build on each other. The research presented here provides one of the first investigations into the alignment of measurements with mission and vision in the context of SEs. Ultimately, the findings question SEs' current measurements and aim to open further perspectives on improving the performance of SEs.
On November 30th, 2022, OpenAI released the large language model ChatGPT, an extension of GPT-3. The AI chatbot provides real-time communication in response to users' requests. The quality of ChatGPT's natural-sounding answers marks a major shift in how we will use AI-generated information in our day-to-day lives. For a software engineering student, the use cases for ChatGPT are manifold: assessment preparation, translation, and creation of specified source code, to name a few. It can even handle more complex aspects of scientific writing, such as summarizing literature and paraphrasing text. Hence, this position paper addresses the need for a discussion of potential approaches for integrating ChatGPT into higher education. We focus on articles that address the effects of ChatGPT on higher education in the areas of software engineering and scientific writing. As ChatGPT was only recently released, there have been no peer-reviewed articles on the subject. Thus, we performed a structured grey literature review using Google Scholar to identify preprints of primary studies. In total, five out of 55 preprints are used for our analysis. Furthermore, we held informal discussions and talks with other lecturers and researchers and took into account the authors' own test results from using ChatGPT. We present five challenges and three opportunities for the higher education context that emerge from the release of ChatGPT. The main contribution of this paper is a proposal for how to integrate ChatGPT into higher education in four main areas.
Social skills are essential for a successful understanding of agile methods in software development. Several studies highlight the opportunities and advantages of integrating real-world projects and problems into higher education using agile methods in collaboration with companies. This integration benefits both the students and the company. The students are able to interact with real-world software development teams, analyze and understand their challenges, and identify possible measures to tackle them. However, the integration of real-world problems and companies is complex and may come with a high effort in terms of coordination and preparation of the course. The challenges related to interaction and communication with students are increased by virtual distance teaching during the Covid-19 pandemic, as direct contact with students is missing. Also, we do not know how problem-based learning in virtual distance teaching is valued by the students. This paper presents our adapted eduScrum approach and the learning outcomes of integrating experiments with real-world software development teams from two companies into a Master of Science course organized as virtual distance teaching. The evaluation shows that students value analyzing real-world problems using agile methods. They highlight the interaction with real-world software development teams. Also, the students appreciate the organization of the course using an iterative approach with eduScrum. Based on our findings, we present four recommendations for the integration of agile methods and real-world problems into higher education in virtual distance teaching settings. The results of our paper contribute to the practitioner and researcher/lecturer community, as we provide valuable insights into how to fill the gap between practice and higher education in virtual distance settings.
Companies worldwide have enabled their employees to work remotely as a consequence of the Covid-19 pandemic. Software development is a human-centered discipline and thrives on teamwork. Agile methods focus on several social aspects of software development. Software development teams in Germany were mainly co-located before the pandemic. This paper aims to validate the findings of existing studies by expanding on an existing multiple-case study. Therefore, we collected data by conducting semi-structured interviews, observing agile practices, and viewing project documents in three cases. Based on the results, we can confirm the following findings: 1) the teams rapidly adapted the agile practices and roles, 2) communication within the teams is more objective, 3) social exchange between team members has decreased, 4) team members expect a combined approach of remote and onsite work after the pandemic, 5) (perceived) performance is stable or has increased, and 6) the well-being of team members is stable or has increased.
In 2020, the world changed due to the Covid-19 pandemic. Containment measures to reduce the spread of the virus were planned and implemented by many countries and companies. Worldwide, companies sent their employees to work from home. This change has led to significant challenges in teams that were co-located before the pandemic. Agile software development teams were affected by this switch, as agile methods focus on communication and collaboration. Research results have already been published on the challenges of switching to remote work and its effects on agile software development teams. This article presents a systematic literature review. We identified 12 relevant papers for our study and analyzed them in detail. The results provide an overview of how agile software development teams reacted to the switch to remote work, e.g., which agile practices they adapted. We also gained insights into changes in the performance of agile software development teams and into social effects on these teams during the pandemic.
This Innovative Practice Full Paper presents our learnings from the process of preparing, performing, and evaluating a Master of Science class taught with eduScrum that integrates real-world problems as projects. We prepared, performed, and evaluated an agile educational concept for the new Master of Science program Digital Transformation, organized and provided by the department of business computing at the University of Applied Sciences and Arts - Hochschule Hannover in Germany. The course deals with innovative methodologies of agile project management and is attended by 25 students. We taught the class as a teaching pair during the summer terms of 2019 and 2020. The eduScrum method has been used in different educational contexts, including higher education. During the preparation of the approach, we decided to use challenges, problems, or questions from industry. Thus, we acquired four companies and, in coordination with them, prepared dedicated project descriptions. Each project description was refined in the form of a backlog (a list of requirements). We divided the class into four eduScrum teams, one team for each project. The subdivision of the class was done randomly.
Since we wanted to integrate realistic projects from our industry partners, we decided to adapt the eduScrum approach. The eduScrum teams were challenged with different projects, e.g., analyzing a dedicated phenomenon in a real project or creating a theoretical model for a company's new project management approach. We present our experiences of the whole process of preparing, performing and evaluating an agile educational approach combined with projects from practice. We found that the students value the agile method using real-world problems. While the results are mainly based on the summer term of 2019, this paper also includes our learnings from virtual distance teaching during the Covid-19 pandemic in the summer term of 2020. The paper contributes to the dissemination of methods for higher education teaching in the classroom and in distance learning.
Agile methods require constant optimization of one's approach, leading to the adaptation of agile practices. These practices are also adapted when introducing them to companies and their software development teams due to organizational constraints. As a consequence of the widespread use of agile methods, we notice a high variety of their elements: practices, roles, and artifacts. This multitude of agile practices, artifacts, and roles results in an unsystematic mixture. It leads to several questions: When is a practice a practice, and when is it a method or technique? This paper presents the tree of agile elements, a taxonomy of agile methods, based on the literature and the guidelines of widely used agile methods. We describe a taxonomy of agile methods using terms and concepts of software engineering, in particular software process models. We aim to enable agile elements to be delimited, which should help companies, agile teams, and the research community gain a basic understanding of the interrelationships and dependencies of the individual components of agile methods.
Context: Companies have adapted agile methods, practices, and artifacts for their use in practice for more than two decades. These adaptations have resulted in a wide variety of described agile practices. For instance, the Agile Alliance lists 75 different practices in its Agile Glossary. This situation may lead to misunderstandings, as agile practices with similar names can be interpreted and used differently.
Objective: This paper synthesizes an integrated list of agile practices, both from primary and secondary sources.
Method: We performed a tertiary study to identify existing overviews and lists of agile practices in the literature. We identified 876 studies, of which 37 were included.
Results: The results of our paper show that certain agile practices are listed and used more often in existing studies. Our integrated list of agile practices comprises 38 entries structured in five categories. Conclusion: The high number of agile practices and thus their wide variety has increased steadily over the past decades due to the adaptation of agile methods. Based on our findings, we present a comprehensive overview of agile practices. The research community benefits from our integrated list of agile practices as a potential basis for future research. Practitioners benefit from our findings as well, as the structured overview of agile practices provides the opportunity to select or adapt practices for their specific needs.
Pathologists need to identify abnormal changes in tissue. With ongoing digitalization, the tissue slides used are stored digitally. This enables pathologists to annotate regions of interest with the support of software tools. PathoLearn is a web-based learning platform explicitly developed for the teacher-student scenario, in which the goal is for students to learn to identify potentially abnormal changes. Artificial intelligence (AI) and machine learning (ML) have become very important in medicine. Many health sectors already utilize AI and ML, and their use will only increase in the future, including in the field of pathology. Therefore, it is important to teach students the fundamentals and concepts of AI and ML early in their studies. Additionally, creating and training AI generally requires knowledge of programming and technical details. This thesis evaluates how this barrier can be overcome by comparing existing end-to-end AI platforms and teaching tools for AI. It is shown that a visual programming editor offers a fitting abstraction for creating neural networks without programming. This was extended with real-time collaboration to enable students to work in groups. Additionally, an automatic training feature was implemented, removing the need to know technical details about training neural networks.
Visual effects and elements in video games and interactive virtual environments can be applied to transfer (or delegate) non-visual perceptions (e.g. proprioception, presence, pain) to players and users, thus increasing perceptual diversity via the visual modality. Such elements or effects are referred to as visual delegates (VDs). Current findings on the experiences that VDs can elicit relate to specific VDs, not to VDs in general. Deductive and comprehensive VD evaluation frameworks are lacking. We analyzed VDs in video games to generalize VDs in terms of their visual properties. We conducted a systematic paper analysis to explore player and user experiences observed in association with specific VDs in user studies. We conducted semi-structured interviews with expert players to determine their preferences and the impact of VD properties. The resulting VD framework (VD-frame) contributes to a more strategic approach to identifying the impact of VDs on player and user experiences.
Research into new forms of care for complex chronic diseases requires substantial efforts in the collection, storage, and analysis of medical data. Additionally, practical support is necessary for those who coordinate the actual care management process within a diversified network of regional service providers, for instance stroke units, rehabilitation partners, ambulatory actors, and health insurance funds. In this paper, we propose the concept of comprehensive and practical receiver-oriented encryption (ROE) as a guiding principle for such data-intensive, research-oriented case management systems, and illustrate our concept with the example of the IT infrastructure of the project STROKE OWL.
During the Corona pandemic, information traditionally used for corporate credit risk analysis (e.g. from the analysis of balance sheets and payment behavior) became less valuable because it represents only past circumstances. Therefore, the use of currently published data from social media platforms, which have been shown to contain valuable information regarding the financial stability of companies, should be evaluated. This data may contain additional information, e.g. from disappointed employees or customers. In order to analyze to what extent this data can improve the information base for corporate credit risk assessment, Twitter data regarding the ten greatest insolvencies of German companies in 2020 and solvent counterparts is analyzed in this paper. The results from t-tests show that sentiment before the insolvencies is significantly worse than in the comparison group, which is in line with previously conducted research. Furthermore, companies can be classified as prospectively solvent or insolvent with up to 70% accuracy by applying the k-nearest-neighbor algorithm to monthly aggregated sentiment scores. No significant differences in the number of Tweets for the two groups can be proven, which is in contrast to findings from studies conducted before the Corona pandemic. The results can be utilized by practitioners and scientists in order to improve decision support systems in the domain of corporate credit risk analysis. From a scientific point of view, the results show that the information asymmetry between lenders and borrowers in credit relationships, which are principals and agents according to principal-agent theory, can be reduced based on user-generated content from social media platforms. In future studies, it should be evaluated to what extent the data can be integrated into established processes for credit decision making. Furthermore, additional social media platforms as well as further samples of companies should be analyzed. Lastly, the authenticity of user-generated content should be taken into account in order to ensure that credit decisions rely on truthful information only.
Since textual user-generated content from social media platforms contains valuable information for decision support and especially corporate credit risk analysis, automated approaches for text classification, such as the application of sentiment dictionaries and machine learning algorithms, have received great attention in recent research on user-generated content. While machine learning algorithms require individual training data sets for varying sources, sentiment dictionaries can be applied to texts immediately, whereby domain-specific dictionaries attain better results than domain-independent word lists. We evaluate by means of a literature review how sentiment dictionaries can be constructed for specific domains and languages. Then, we construct nine versions of German sentiment dictionaries relying on a process model which we developed based on the literature review. We apply the dictionaries to a manually classified German-language data set from Twitter in which hints of the financial (in)stability of companies have been proven to exist. Based on their classification accuracy, we rank the dictionaries and verify their ranking by utilizing McNemar's test for significance. Our results indicate that the significantly best dictionary is based on the German-language dictionary SentiWortschatz and an extension approach using the lexical-semantic database GermaNet. It achieves a classification accuracy of 59.19% in the underlying three-class scenario, in which the Tweets are labelled as negative, neutral or positive. A random classification would attain an accuracy of 33.3% in the same scenario; hence, automated coding by use of the sentiment dictionaries can lead to a reduction of manual effort. Our process model can be adopted by other researchers when constructing sentiment dictionaries for various domains and languages. Furthermore, our established dictionaries can be used by practitioners, especially in the domain of corporate credit risk analysis, for automated text classification, which to date has largely been conducted manually.
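As a hedged sketch of dictionary-based sentiment classification as described here (the word list below is an illustrative stand-in, not entries from the constructed German dictionaries), a text is scored by summing the polarity of its matched terms and thresholding the total:

```python
# Illustrative mini-dictionary; the paper's dictionaries are German
# and far larger (derived from SentiWortschatz and GermaNet).
SENTIMENT = {"insolvent": -1.0, "loss": -0.5, "delay": -0.5,
             "growth": 0.5, "profit": 1.0, "stable": 0.5}

def classify(text, threshold=0.25):
    """Label a text negative / neutral / positive from dictionary hits."""
    tokens = text.lower().split()
    score = sum(SENTIMENT.get(t, 0.0) for t in tokens)
    if score <= -threshold:
        return "negative"
    if score >= threshold:
        return "positive"
    return "neutral"

print(classify("payment delay and heavy loss reported"))  # negative
print(classify("stable profit growth expected"))          # positive
```

Comparing such automated labels against manually coded Tweets is what yields the reported classification accuracies.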
In this paper, we consider the route coordination problem in the emergency evacuation of large smart buildings. The building evacuation time is crucial for saving lives in emergency situations caused by imminent natural or man-made threats and disasters. Conventional approaches to evacuation route coordination are static and predefined. They rely on evacuation plans present only at a limited number of building locations and possibly on trained evacuation personnel to resolve unexpected contingencies. Smart buildings today are equipped with sensory infrastructure that can be used for autonomous, situation-aware evacuation guidance optimized in real time. A system providing such guidance can help avoid additional evacuation casualties due to the flaws of conventional evacuation approaches. Such a system should be robust and scalable in order to dynamically adapt to the number of evacuees and the size and safety conditions of a building. In this respect, we propose a distributed route recommender architecture for situation-aware evacuation guidance in smart buildings and describe its key modules in detail. We illustrate its functioning dynamics with a use case.
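The abstract gives no algorithmic details, so as a hedged sketch of the general idea behind situation-aware guidance (illustrative graph and hazard values, not the proposed architecture), a building graph can be re-weighted with live hazard readings and searched with Dijkstra's algorithm:

```python
import heapq

# Corridor graph: edges carry walking cost; hazard[node] inflates
# the cost of routes through unsafe areas (illustrative values).
graph = {"room": [("hall", 1)], "hall": [("stairs", 2), ("atrium", 2)],
         "stairs": [("exit", 1)], "atrium": [("exit", 1)], "exit": []}
hazard = {"room": 0, "hall": 0, "stairs": 9, "atrium": 0, "exit": 0}

def safest_route(start, goal):
    """Dijkstra over hazard-inflated edge weights."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            heapq.heappush(queue, (cost + w + hazard[nxt], nxt, path + [nxt]))
    return None

print(safest_route("room", "exit"))  # routes around the hazardous stairs
```

In a situation-aware system, the hazard values would be refreshed from the building's sensors, so recommended routes change as conditions evolve.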
Background: Virtual reality (VR) is increasingly used as a simulation technology in emergency medicine education and training, in particular for training nontechnical skills. Experimental studies comparing teaching and learning in VR with traditional training media often demonstrate equivalence or even superiority with regard to particular variables of learning or training effectiveness.
Objective: In the EPICSAVE (Enhanced Paramedic Vocational Training with Serious Games and Virtual Environments) project, a highly immersive, room-scale, multi-user 3-dimensional VR simulation environment was developed. In this feasibility study, we wanted to gain initial insights into the training effectiveness and the media use factors influencing learning and training in VR.
Methods: The virtual emergency scenario was anaphylaxis grade III with shock, swelling of the upper and lower respiratory tract, as well as skin symptoms in a 5-year-old girl (virtual patient) visiting an indoor family amusement park with her grandfather (virtual agent). A cross-sectional, one-group pretest and posttest design was used to evaluate the training effectiveness and quality of the training execution. The sample included 18 active emergency physicians.
Results: The 18 participants rated the VR simulation training positively in terms of training effectiveness and the quality of the training execution. A strong, significant correlation (r=.53, P=.01) between experiencing presence and assessing training effectiveness was observed. Perceived limitations in usability and a relatively high extraneous cognitive load reduced this positive effect.
Conclusions: The training within the virtual simulation environment was rated as an effective educational approach. Specific media use factors appear to modulate training effectiveness (ie, improvement through “experience of presence” or reduction through perceived limitations in usability). These factors should be specific targets in the further development of this VR simulation training.
Autonomous mobile six-legged robots are able to demonstrate the potential of intelligent control systems based on recurrent neural networks. The robots evaluate only two forward- and two backward-looking infrared sensor signals. Fast-converging genetic training algorithms are applied to train the robots to move straight in six directions. The robots performed successfully within an obstacle environment, and a useful interaction between the individual robots that had never been trained could be observed. The paper describes the robot systems and presents the test results. Video clips can be downloaded at www.inform.fh-hannover.de/download/lechner.php. Presented at the IFAC International Conference on Intelligent Control Systems and Signal Processing (ICONS 2003, April 2003, Portugal).
Report of a research project of the Fachhochschule Hannover, University of Applied Sciences and Arts, Department of Information Technologies. Automatic face recognition increases the security standards at public places and border checkpoints. The picture inside an identification document can differ widely from the face that is scanned under random lighting conditions and in unknown poses. The paper describes an optimal combination of three key object recognition algorithms that are able to perform in real time. The camera scan is processed by a recurrent neural network, by an Eigenfaces (PCA) method and by a least-squares matching algorithm. Several examples demonstrate the achieved robustness and high recognition rate.
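As a hedged sketch of the Eigenfaces (PCA) component named above (an illustrative stand-in with random data, not the project's implementation), face images are projected onto the top principal components of a training set and matched by nearest distance in that subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((40, 64 * 64))  # 40 flattened training faces (stand-in data)

mean = faces.mean(axis=0)
centered = faces - mean
# SVD of the centered data: rows of Vt are the eigenfaces.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:16]               # keep the top 16 components

def project(image):
    return eigenfaces @ (image - mean)   # coordinates in face space

gallery = np.array([project(f) for f in faces])

def recognize(image):
    """Index of the closest training face in eigenface space."""
    d = np.linalg.norm(gallery - project(image), axis=1)
    return int(np.argmin(d))

probe = faces[7] + 0.01 * rng.random(64 * 64)  # noisy view of face 7
print(recognize(probe))                         # 7
```

The low-dimensional projection is what makes such matching fast enough for the real-time operation the report emphasizes.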
Dramatic increases in the number of cyber security attacks and breaches against businesses and organizations have been experienced in recent years. The negative impacts of these breaches not only include the stealing and compromising of sensitive information, malfunctioning of network devices, disruption of everyday operations, and financial damage to the attacked business or organization itself, but may also spread to peer businesses and organizations in the same industry. Therefore, prevention and early detection of these attacks play a significant role in the continuity of operations in IT-dependent organizations. At the same time, detection of various types of attacks has become extremely difficult as attacks get more sophisticated, distributed, and enabled by Artificial Intelligence (AI). Detection and handling of these attacks require sophisticated intrusion detection systems which run on powerful hardware and are administered by highly experienced security staff. Yet, these resources are costly to employ, especially for small and medium-sized enterprises (SMEs). To address these issues, we developed an architecture - within the GLACIER project - that can be realized as an in-house operated Security Information Event Management (SIEM) system for SMEs. It is affordable for SMEs as it is solely based on free and open-source components and thus does not require any licensing fees. Moreover, it is a Self-Contained System (SCS) and does not require much management effort. It requires short configuration and learning phases, after which it can be self-contained as long as the monitored infrastructure is stable (apart from a reaction to the generated alerts, which may be outsourced to a service provider in SMEs if necessary). Another main benefit of this system is to supply data to advanced detection algorithms, such as multidimensional analysis algorithms, in addition to traditional SIEM-specific tasks like data collection, normalization, enrichment, and storage. It supports the application of novel methods to detect security-related anomalies. The most distinct feature of this system, which differentiates it from similar solutions on the market, is its user feedback capability. Detected anomalies are displayed in a Graphical User Interface (GUI) to the security staff, who can give feedback on anomalies. Subsequently, this feedback is utilized to fine-tune the anomaly detection algorithm. In addition, this GUI also provides access to network actors for quick incident responses. The system in general is suitable for both Information Technology (IT) and Operational Technology (OT) environments, while the detection algorithm must be specifically trained for each of these environments individually.
Background:
Many patients with cardiovascular disease also show a high comorbidity of mental disorders, especially anxiety and depression. This is, in turn, associated with a decrease in quality of life. Psychocardiological treatment options are currently limited. Hence, there is a need for novel and accessible psychological help. Recently, we demonstrated that a brief face-to-face metacognitive therapy (MCT) based intervention is promising in treating anxiety and depression. Here, we aim to translate the face-to-face approach into a digital application and explore the feasibility of this approach.
Methods:
We translated a validated brief psychocardiological intervention into a novel non-blended web app. The data of 18 patients suffering from various cardiac conditions but without diagnosed mental illness were analyzed after they had used the web app over a two-week period in a feasibility trial. The aim was to determine whether a non-blended, web-app-based MCT approach is feasible in patients with cardiovascular disease.
Results:
Overall, patients were able to use the web app and rated it as satisfactory and beneficial. In addition, there was a first indication that using the app improved the cardiac patients' subjectively perceived health and reduced their anxiety. Therefore, the approach seems feasible for a future randomized controlled trial.
Conclusion:
Applying a metacognitive-based brief intervention via a non-blended web app seems to show good acceptance and feasibility in a small target group of patients with CVD. Future studies should further develop, improve and validate digital psychotherapy approaches, especially in patient groups with a lack of access to standard psychotherapeutic care.
In this paper, we describe the selection of a modern build automation tool for an industry research partner of ours, namely an insurance company. Build automation has become increasingly important over the years. Today, build automation is one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more products for build automation have entered the market and existing tools have changed their feature sets, there is nowadays a large number of tools on the market that differ greatly in their functional scope. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools, and a summarized comparison of two tools.
The digital transformation, with its new technologies and customer expectations, has a significant effect on the customer channels in the insurance industry. The objective of this study is the identification of enabling and hindering factors for the adoption of online claim notification services, which are an important part of the customer experience in insurance. For this purpose, we conducted a quantitative cross-sectional survey based on the exemplary scenario of car insurance in Germany and analyzed the data via structural equation modeling (SEM). The findings show that, besides classical technology acceptance factors such as perceived usefulness and ease of use, digital mindset and status quo behavior play a role: acceptance of digital innovations, lacking endurance as well as lacking frustration tolerance with the status quo lead to a higher intention to use. Moreover, the results are strongly moderated by the severity of the damage event, an insurance-specific factor that has so far been sparsely considered. The latter discovery implies that customers prefer a communication channel choice based on the individual circumstances of the claim.
We present an approach towards a data acquisition system for digital twins that uses a 5G network for data transmission and localization. The current hardware setup, which utilizes stereo vision and LiDAR for 3D mapping, is explained together with two recorded point cloud data sets. Furthermore, a resulting digital twin comprised of voxelized point cloud data is shown. Ideas for future applications and challenges regarding the system are discussed, and an outlook on further development is given.
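As a hedged sketch of the voxelization step mentioned above (illustrative, not the system's actual pipeline), a point cloud can be reduced to its set of occupied voxels by quantizing coordinates to a grid:

```python
import numpy as np

def voxelize(points, voxel_size=0.1):
    """Map an (N, 3) point cloud to the set of occupied voxel indices."""
    indices = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(indices, axis=0)

rng = np.random.default_rng(0)
cloud = rng.random((100_000, 3)) * 5.0   # stand-in for a LiDAR scan, 5 m cube
voxels = voxelize(cloud, voxel_size=0.1)
print(len(cloud), "points ->", len(voxels), "occupied voxels")
```

The coarse, fixed-size voxels bound the memory and bandwidth the digital twin needs, which matters when the data is streamed over the 5G link.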
In this paper, we describe the selection of a modern build automation tool for an industry research partner of ours, namely an insurance company. Build automation has become increasingly important over the years. Today, build automation is one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more products for build automation have entered the market and existing tools have changed their feature sets, there is nowadays a large number of tools on the market that differ greatly in their functional scope. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools, and a summary of our comparison of all three tools from our final comparison round.
Cloud Computing: Serverless
(2021)
A serverless architecture is a new approach to offering services over the Internet. It combines BaaS (Backend-as-a-Service) and FaaS (Function-as-a-Service). With a serverless architecture, no self-hosted or rented infrastructure is needed anymore. In addition, the company no longer has to worry about scaling, as this happens automatically and immediately. Furthermore, there is no longer any need for maintenance work on the servers, as this is completely taken over by the provider. Administrators are also no longer needed for the same reason. Finally, many ready-made functions are offered, with which the development effort can be reduced. As a result, the serverless architecture is very well suited to many application scenarios, and it can save considerable costs (server costs, maintenance costs, personnel costs, electricity costs, etc.). The company only has to subdivide the application's source code into functions and upload them to the provider's servers. The rest is done by the provider.
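As a hedged sketch of the FaaS half of this combination (assuming AWS Lambda's Python handler convention; the event fields follow its API-Gateway-style proxy events, and all other names are illustrative), a deployable unit is just a function that the provider invokes, scales and bills per request:

```python
import json

def handler(event, context):
    """Illustrative FaaS entry point (AWS Lambda Python convention).
    The provider runs and scales instances of this function automatically."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local smoke test; in production the platform supplies event and context.
if __name__ == "__main__":
    print(handler({"queryStringParameters": {"name": "dev"}}, None))
```

There is no server process to configure here: the function is uploaded, the provider wires it to an HTTP trigger, and scaling, patching and administration happen on the provider's side, which is exactly the cost argument made above.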
In microservice architectures, data is often held redundantly to create an overall resilient system. Although the synchronization of this data poses a significant challenge, not much research has been done on this topic yet. This paper shows four general approaches for assuring consistency among services and demonstrates how to identify the best solution for a given architecture. For this, a microservice architecture that implements the functionality of a mainframe-based legacy system from the insurance industry serves as an example.
Microservices is an architectural style for complex application systems that promises some crucial benefits, e.g. better maintainability, flexible scalability, and fault tolerance. For this reason, microservices have attracted attention in the software development departments of different industry sectors, such as e-commerce and streaming services. On the other hand, businesses have to face great challenges that hamper the adoption of the architectural style. For instance, data is often persisted redundantly to provide fault tolerance, but the synchronization of this data for the sake of consistency is a major challenge. Our paper presents a case study from the insurance industry which focuses on consistency issues when migrating a monolithic core application towards microservices. Based on the Domain-Driven Design (DDD) methodology, we derive bounded contexts and a set of microservices assigned to these contexts. We discuss four different approaches to ensure consistency and propose a best practice to identify the most appropriate approach for a given scenario. Design and implementation details and compliance issues are presented as well.
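The abstract does not spell out the four approaches; as a hedged sketch of one common option for synchronizing redundantly held data (illustrative names, not the case study's implementation), the owning service can publish domain events that services holding copies apply, yielding eventual consistency:

```python
# Hedged sketch of event-based synchronization between services.
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerAddressChanged:
    customer_id: str
    new_address: str

class ContractService:
    """Holds a redundant copy of customer addresses."""
    def __init__(self):
        self.addresses = {}

    def on_event(self, event):
        if isinstance(event, CustomerAddressChanged):
            self.addresses[event.customer_id] = event.new_address

class EventBus:
    def __init__(self):
        self.subscribers = []

    def publish(self, event):
        for sub in self.subscribers:   # delivery brings copies up to date
            sub.on_event(event)

bus = EventBus()
contracts = ContractService()
bus.subscribers.append(contracts)

# The customer service owns the data and emits a change event.
bus.publish(CustomerAddressChanged("c42", "Ricklinger Stadtweg 120"))
print(contracts.addresses)  # {'c42': 'Ricklinger Stadtweg 120'}
```

In a production system the in-memory bus would be a durable message broker, and the window between publish and apply is precisely the consistency trade-off such approaches must weigh.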
The transfer of historically grown monolithic software architectures into modern service-oriented architectures creates many loose coupling points. This can lead to unforeseen system behavior and can significantly impede such continuous modernization processes, since it is not clear where bottlenecks in a system will arise. It is therefore necessary to monitor such modernization processes with an adaptive monitoring concept in order to be able to correctly record and interpret unpredictable system dynamics. For this purpose, a general measurement methodology and a specific implementation concept are presented in this work.