In 2020, the world changed due to the COVID-19 pandemic. Containment measures to reduce the spread of the virus were planned and implemented by many countries and companies. Worldwide, companies sent their employees to work from home. This change led to significant challenges in teams that were co-located before the pandemic. Agile software development teams were particularly affected by this switch, as agile methods focus on communication and collaboration. Research results have already been published on the challenges of switching to remote work and its effects on agile software development teams. This article presents a systematic literature review. We identified 12 relevant papers for our study and analyzed them in detail. The results provide an overview of how agile software development teams reacted to the switch to remote work, e.g., which agile practices they adapted. We also gained insights into changes in the performance of agile software development teams and into social effects on these teams during the pandemic.
Context: Companies have been adapting agile methods, practices, and artifacts for their use in practice for more than two decades. These adaptations have resulted in a wide variety of described agile practices. For instance, the Agile Alliance lists 75 different practices in its Agile Glossary. This situation may lead to misunderstandings, as agile practices with similar names can be interpreted and used differently.
Objective: This paper synthesizes an integrated list of agile practices from both primary and secondary sources.
Method: We performed a tertiary study to identify existing overviews and lists of agile practices in the literature. We identified 876 studies, of which 37 were included.
Results: Our results show that certain agile practices are listed and used more often in existing studies. Our integrated list of agile practices comprises 38 entries structured in five categories.
Conclusion: The number of agile practices, and thus their variety, has increased steadily over the past decades due to the adaptation of agile methods. Based on our findings, we present a comprehensive overview of agile practices. The research community benefits from our integrated list of agile practices as a potential basis for future research. Practitioners also benefit from our findings, as the structured overview of agile practices provides the opportunity to select or adapt practices for their specific needs.
Context: Agile software development (ASD) places social aspects such as communication and collaboration in focus. Thus, one may assume that a company's specific work organization impacts the work of ASD teams. A major change in work organization is the switch to a 4-day work week, which some companies have investigated in experiments. Recent studies also show that ASD teams have been affected by the switch to remote work since the outbreak of the COVID-19 pandemic in 2020.
Objective: Our study presents empirical findings on the effects on ASD teams operating remotely in a 4-day work week organization. Method: We performed a qualitative single case study in which we conducted seven semi-structured interviews, observed 14 agile practices, and screened eight project documents and protocols of agile practices.
Results: We found that the teams adapted the agile method in use due to the change to a 4-day work week and the switch to remote work. The productivity of the two ASD teams did not decrease. Although the stress level of the ASD team members increased due to the 4-day work week, we found that the job satisfaction of the individual ASD team members was affected positively. Finally, we point to effects on social facets of the ASD teams.
Conclusion: The research community benefits from our results, as the current state of research on the effects of a 4-day work week on ASD teams is limited. Our findings also provide several practical implications for ASD teams working remotely in a 4-day work week.
Image captions in scientific papers are usually complementary to the images. Consequently, the captions contain many terms that do not refer to concepts visible in the image. We conjecture that it is possible to distinguish between these two types of terms in an image caption by analysing the text only. To examine this, we evaluated different features. The dataset we used to compute tf.idf values, word embeddings, and concreteness values contains over 700,000 scientific papers with over 4.6 million images. The evaluation was done with a manually annotated subset of 329 images. Additionally, we trained a support vector machine to predict whether a term is likely visible or not. We show that the concreteness of terms is a very important feature for identifying terms in captions and context that refer to concepts visible in images.
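One of the features mentioned above, tf.idf, can be sketched as follows. This is a minimal stdlib-only illustration of a standard tf.idf weighting over caption token lists; the mini-corpus is invented and the paper's exact weighting variant may differ.

```python
import math
from collections import Counter

def tf_idf(term, doc_tokens, corpus):
    """Standard tf.idf: relative term frequency in the caption times the
    log inverse document frequency over the corpus (a common variant)."""
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

# Hypothetical mini-corpus of tokenized captions.
corpus = [
    ["scatter", "plot", "of", "accuracy"],
    ["architecture", "of", "the", "network"],
    ["plot", "of", "loss", "curves"],
]
caption = corpus[0]
# "scatter" occurs in one caption only, so it gets a high weight;
# "of" occurs in every caption, so its idf (and weight) is zero.
print(tf_idf("scatter", caption, corpus))
print(tf_idf("of", caption, corpus))  # 0.0
```

Terms with high tf.idf are caption-specific and thus candidates for concepts actually depicted in the image.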
Since textual user-generated content from social media platforms contains valuable information for decision support and especially for corporate credit risk analysis, automated approaches to text classification, such as the application of sentiment dictionaries and machine learning algorithms, have received great attention in recent research on user-generated content. While machine learning algorithms require individual training data sets for varying sources, sentiment dictionaries can be applied to texts immediately, whereby domain-specific dictionaries attain better results than domain-independent word lists. By means of a literature review, we evaluate how sentiment dictionaries can be constructed for specific domains and languages. Then, we construct nine versions of German sentiment dictionaries relying on a process model that we developed based on the literature review. We apply the dictionaries to a manually classified German-language data set from Twitter in which hints of the financial (in)stability of companies have been proven. Based on their classification accuracy, we rank the dictionaries and verify the ranking using McNemar's test for significance. Our results indicate that the significantly best dictionary is based on the German sentiment dictionary SentiWortschatz and an extension approach using the lexical-semantic database GermaNet. It achieves a classification accuracy of 59.19% in the underlying three-class scenario, in which the tweets are labelled as negative, neutral, or positive. A random classification would attain an accuracy of 33.3% in the same scenario; hence, automated coding by means of the sentiment dictionaries can reduce manual effort. Our process model can be adopted by other researchers when constructing sentiment dictionaries for various domains and languages.
Furthermore, our established dictionaries can be used by practitioners, especially in the domain of corporate credit risk analysis, for automated text classification, which to a great extent is still conducted manually today.
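The core of dictionary-based three-class sentiment coding can be sketched in a few lines. The word lists below are invented stand-ins for this sketch; the paper's dictionaries (e.g. the SentiWortschatz/GermaNet-based one) are far larger and weighted.

```python
# Toy German sentiment word lists (invented for illustration).
POSITIVE = {"gut", "stark", "stabil", "gewinn"}
NEGATIVE = {"schlecht", "insolvenz", "verlust", "risiko"}

def classify(tweet: str) -> str:
    """Count dictionary hits and map the net score to one of the three
    classes used in the study: negative, neutral, or positive."""
    tokens = tweet.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("gewinn steigt aussichten gut"))  # positive
print(classify("insolvenz droht hohes risiko"))  # negative
```

A real pipeline would add tokenization, negation handling, and the lemma lookup that GermaNet-based extension requires.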
Wikidata and Wikibase as complementary research data management services for cultural heritage data
(2022)
The NFDI (German National Research Data Infrastructure) consortia are associations of various institutions within a specific research field, which work together to develop common data infrastructures, guidelines, best practices, and tools that conform to the principles of FAIR data. Within the NFDI, a common question is: what is the potential of Wikidata to be used as an application for science and research? In this paper, we address this question by tracing current research use cases and applications for Wikidata, its relation to standalone Wikibase instances, and how the two can function as complementary services to meet a range of research needs. This paper builds on lessons learned through the development of open data projects and software services within the Open Science Lab at TIB, Hannover, in the context of NFDI4Culture, the consortium including participants across the broad spectrum of the digital libraries, archives, and museums field, and the digital humanities.
We present a novel long short-term memory (LSTM) approach for time-series prediction of the sand demand that arises from preparing the sand moulds for the iron casting process of a foundry. With our approach, we contribute to qualifying LSTMs, and their combination with feedback-corrected optimal scheduling, for industrial processes.
The sand is produced in an energy-intensive mixing process that is controlled by optimal scheduling. The optimal scheduling is solved for a fixed prediction horizon. One major influencing factor is the sand demand, which is highly disturbed, for example due to production interruptions. The causes of production interruptions are in general not physically known. We assume that information about the future behavior of the sand demand is contained in current and past process data. Therefore, we choose LSTM networks for predicting the time series of the sand demand.
The sand demand prediction is performed by our multi-model approach, which outperforms the currently used naive estimation, even when predicting far into the future. Our LSTM-based prediction approach can forecast the sand demand with a conformity of up to 38% and a mean value accuracy of approximately 99%. Simulating the optimal scheduling with sand demand prediction leads to an improvement in energy savings of approximately 1.1% compared to the naive estimation. The application of our novel approach at the real production plant of a foundry confirms the simulation results and verifies the capability of our approach.
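The gating mechanism that lets an LSTM carry information about past process behavior forward can be sketched as a single scalar cell step. This is a generic textbook LSTM cell with arbitrary weights, not the paper's trained multi-model architecture; real models use learned weight matrices over vector states.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell. `w` maps each gate (input `i`,
    forget `f`, output `o`, candidate `g`) to a (w_x, w_h, b) triple."""
    gates = {}
    for name in ("i", "f", "o", "g"):
        w_x, w_h, b = w[name]
        pre = w_x * x + w_h * h_prev + b
        gates[name] = math.tanh(pre) if name == "g" else sigmoid(pre)
    c = gates["f"] * c_prev + gates["i"] * gates["g"]  # updated cell state
    h = gates["o"] * math.tanh(c)                      # updated hidden state
    return h, c

# Feed a toy (normalized) demand series through the cell.
w = {k: (0.5, 0.1, 0.0) for k in ("i", "f", "o", "g")}
h, c = 0.0, 0.0
for demand in [0.2, 0.4, 0.3, 0.5]:
    h, c = lstm_step(demand, h, c, w)
print(h, c)  # bounded activations
```

The cell state `c` is what allows the network to remember disturbances such as production interruptions across many time steps.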
With the use of an energy management system in an industrial company according to ISO 50001, a step-by-step increase in energy efficiency can be achieved. The realization of energy monitoring and load management functions requires programs on edge devices or PLCs to acquire the data, adapt the data type, or scale the values of the energy information. In addition, the energy information must be mapped to communication interfaces (e.g. based on OPC UA) in order to convey it to the energy management application. The development of these energy management programs involves a high engineering effort, because the field devices of the heterogeneous field level do not provide the energy information with standardized semantics. To mitigate this engineering effort, a universal energy data information model (UEIM) is developed and presented in this paper.
A new FOSS (free and open source software) toolchain and associated workflow are being developed in the context of NFDI4Culture, a German consortium of research and cultural heritage institutions working towards a shared infrastructure for research data that meets the needs of 21st-century data creators, maintainers, and end users across the broad spectrum of the digital libraries and archives field, and the digital humanities. This short paper and demo present how the integrated toolchain connects: 1) OpenRefine, for data reconciliation and batch upload; 2) Wikibase, for linked open data (LOD) storage; and 3) Kompakkt, for rendering and annotating 3D models. The presentation is aimed at librarians, digital curators, and data managers interested in learning how to manage research datasets containing 3D media, and how to make them available within an open data environment with 3D-rendering and collaborative annotation features.
Even for the more traditional insurance industry, the Microservices Architecture (MSA) style plays an increasingly important role in provisioning insurance services. However, insurance businesses must operate legacy applications, enterprise software, and service-based applications in parallel for an extended transition period. The ultimate goal of our ongoing research is to design a microservice reference architecture, in cooperation with our industry partners from the insurance domain, that provides an approach for integrating applications from different architecture paradigms. In Germany, individual insurance services are classified as part of the critical infrastructure. Therefore, German insurance companies must comply with the requirements of the Federal Office for Information Security, which the Federal Supervisory Authority enforces. Additionally, insurance companies must comply with relevant laws, regulations, and standards as part of the business's compliance requirements. Note: since Germany is seen as relatively 'tough' with respect to privacy and security demands, fulfilling those demands may well be suitable (if not 'over-achieving') for insurers in other countries as well. The question thus arises of how insurance services can be secured in an application landscape shaped by the MSA style so as to comply with the architectural and security requirements depicted above. This article highlights the specific regulations, laws, and standards the insurance industry must comply with. We present initial architectural patterns to address authentication and authorization in an MSA tailored to the requirements of our insurance industry partners.
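One common shape for authentication and authorization in an MSA is a gateway that validates a bearer token once and relays the caller's identity to downstream services. The sketch below illustrates that general pattern only; the service names, scopes, and the in-memory token store are invented, and a production setup would rely on OAuth 2.0 / OpenID Connect with signed tokens rather than a lookup table.

```python
# Invented token store standing in for an identity provider.
VALID_TOKENS = {"abc123": {"sub": "broker-42", "roles": ["claims:read"]}}

def gateway(token, scope, service):
    """Authenticate at the edge, authorize against the required scope,
    then relay only the identity (never the credential) downstream."""
    identity = VALID_TOKENS.get(token)
    if identity is None:
        return 401, "invalid token"
    if scope not in identity["roles"]:
        return 403, "insufficient scope"
    return service(identity)

def claims_service(identity):
    # Downstream microservice trusts the identity relayed by the gateway.
    return 200, f"claims for {identity['sub']}"

print(gateway("abc123", "claims:read", claims_service))   # (200, ...)
print(gateway("bad-token", "claims:read", claims_service))  # (401, ...)
```

Centralizing the credential check keeps individual services simple, which matters when regulated legacy and microservice applications coexist.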
To avoid the shortcomings of traditional monolithic applications, the Microservices Architecture (MSA) style plays an increasingly important role in providing business services. This is true even for the more conventional insurance industry, with its highly heterogeneous application landscape and sophisticated cross-domain business processes. Therefore, the question arises of how workflows can be implemented so as to, on the one hand, grant the required flexibility and agility and, on the other hand, exploit the potential of the MSA style. In this article, we present two different approaches: orchestration and choreography. Using an application scenario from the insurance domain, both concepts are discussed. We introduce a pattern that outlines the mapping of a workflow to a choreography.
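The contrast between the two approaches can be sketched in miniature. The claim-handling steps, event names, and service functions below are invented for illustration; real systems would use a workflow engine (orchestration) or a message broker (choreography).

```python
def validate(claim):
    claim["validated"] = True
    return claim

def assess(claim):
    claim["assessed"] = True
    return claim

def notify(claim):
    claim["notified"] = True
    return claim

# Orchestration: a central coordinator calls each service in turn.
def orchestrate(claim):
    return notify(assess(validate(claim)))

# Choreography: services subscribe to events and react independently;
# no component knows the whole workflow.
subscribers = {}

def subscribe(event, handler):
    subscribers.setdefault(event, []).append(handler)

def publish(event, claim):
    for handler in subscribers.get(event, []):
        handler(claim)

subscribe("claim_received", lambda c: publish("claim_validated", validate(c)))
subscribe("claim_validated", lambda c: publish("claim_assessed", assess(c)))
subscribe("claim_assessed", lambda c: notify(c))

claim = {}
publish("claim_received", claim)
print(claim)  # all three steps ran without a central coordinator
```

Orchestration makes the process explicit in one place; choreography removes the central dependency but distributes the workflow knowledge across event subscriptions.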
Techno-economic analyses that allocate costs to the energy flows of energy systems are helpful for understanding the formation of costs within processes and for increasing cost efficiency. For the economic evaluation, the usefulness or quality of the energy is of great importance. In exergy-based methods, this is considered by allocating costs to the exergy instead of the energy. As exergy represents the ability to perform work, it is often called the useful part of energy. In contrast, anergy, the part of energy that cannot perform work, is often assumed not to be useful.
However, heat flows as used, e.g., in domestic heating are always a mixture of a relatively small portion of exergy and a large portion of anergy. Despite its lower quality, the anergy is obviously useful for these applications. The question is whether it makes sense to differentiate between exergy and anergy and to take both properties into account in the economic evaluation.
To answer this question, a new methodical concept based on the definition of an anergy-exergy cost ratio is compared to the commonly applied approaches of considering either energy or exergy as the basis for the economic evaluation. These three approaches for the economic analysis of thermal energy systems are applied to an exemplary heating system with thermal storage. It is shown that the results of the techno-economic analysis can be improved by giving anergy an economic value, and that the proposed anergy-exergy cost ratio allows a flexible adaptation of the evaluation depending on the economic constraints of a system.
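The idea of giving anergy an economic value can be illustrated with a small cost-allocation computation. The formula below is one plausible reading of an anergy-exergy cost ratio, not the paper's exact definition, and the numbers are invented.

```python
def allocate_cost(total_cost, exergy, anergy, r):
    """Split a heat flow's cost between its exergy and anergy shares,
    weighting anergy by the ratio r. With r = 0 anergy carries no cost
    (pure exergy costing); with r = 1 both are valued equally per unit
    (pure energy costing)."""
    weighted = exergy + r * anergy
    c_ex = total_cost * exergy / weighted
    c_an = total_cost * r * anergy / weighted
    return c_ex, c_an

# Domestic heat flow: small exergy share, large anergy share (in kWh).
c_ex, c_an = allocate_cost(total_cost=100.0, exergy=10.0, anergy=90.0, r=0.5)
print(c_ex, c_an)  # shares always sum to the total cost
```

Varying `r` between the two limiting cases is what allows the evaluation to adapt to the economic constraints of a given system.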
The German Corona Consensus (GECCO) established a uniform dataset in FHIR format for exchanging and sharing interoperable COVID-19 patient-specific data between the health information systems (HIS) of universities. To share COVID-19 information with other sites that use openEHR, the data must be converted to the FHIR format. In this paper, we introduce our solution, a web tool named "openEHR-to-FHIR", which converts compositions from an openEHR repository and stores them in their respective GECCO FHIR profiles. The tool also provides a REST web service for ad hoc conversion of openEHR compositions to FHIR profiles.
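At its core, such a conversion is a mapping from openEHR composition paths to FHIR resource fields. The sketch below uses a hypothetical flattened path and a minimal FHIR Observation; real GECCO profiles and openEHR archetype paths are far more detailed, and the mapping table is invented for illustration.

```python
# Invented mapping from flattened openEHR paths to Observation fields.
OPENEHR_TO_FHIR = {
    "body_temperature/magnitude": ("valueQuantity", "value"),
    "body_temperature/units": ("valueQuantity", "unit"),
}

def to_fhir_observation(composition: dict) -> dict:
    """Build a minimal FHIR Observation from a flattened openEHR
    composition using the static mapping table above."""
    resource = {"resourceType": "Observation", "valueQuantity": {}}
    for path, (field, key) in OPENEHR_TO_FHIR.items():
        if path in composition:
            resource[field][key] = composition[path]
    return resource

obs = to_fhir_observation({
    "body_temperature/magnitude": 38.5,
    "body_temperature/units": "Cel",
})
print(obs)
```

A REST wrapper around such a function is what enables the ad hoc conversion service mentioned above.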
A new type of rotary compressor, called a "rotary-chamber compressor", consists of two interlocking rotors with four wings each that perform non-uniform rotary movements. Both rotors have the same direction of rotation: while one rotor accelerates, the other retards. After surpassing a specific mark, the sequence changes and the leading rotor begins to retard, and vice versa. Due to the resulting relative phase difference, the volume between two wings changes periodically, which creates pulsating working chambers. The technology was first introduced by its inventor Jürgen Schukey in 1987. Since then, no further development of this machine is known to us except our own. In this contribution, a study on the kinematics of the rotary-chamber compressor is presented. Initial studies have shown that changes in the kinematics of the rotors have a direct influence on the thermodynamic variables, which, if optimized, can lead to an increased performance of the machine. Therefore, a mathematical model has been developed to obtain the performance parameters of different kinematic concepts using numerical CFD analysis. Furthermore, additional optimization possibilities are listed and discussed.
In order to ensure validity in legal texts such as contracts and case law, lawyers rely on standardised formulations that are written carefully but also represent a kind of code whose meaning and function are known to all legal experts. Using directed (acyclic) graphs to represent standardised text fragments, we are able to capture variations concerning time specifications, slight rephrasings, names, places, and also OCR errors. We show how such text fragments can be found by sentence clustering, pattern detection, and the clustering of patterns. To test the proposed methods, we use two corpora of German contracts and court decisions, specially compiled for this purpose. The entire process for representing standardised text fragments is, however, language-agnostic. We analyse and compare both corpora, give a quantitative and qualitative analysis of the text fragments found, and present a number of examples from both corpora.
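The graph representation can be sketched by merging variant formulations into a directed word graph: shared words become shared nodes, while variations (dates, names, OCR slips) become parallel branches. The variants below are invented English stand-ins for the German contract formulations studied, and this simple adjacency construction ignores the positional alignment a full implementation would need.

```python
from collections import defaultdict

def build_graph(variants):
    """Merge tokenized sentence variants into a directed word graph,
    represented as an adjacency map from token to successor tokens."""
    edges = defaultdict(set)
    for tokens in variants:
        for a, b in zip(tokens, tokens[1:]):
            edges[a].add(b)
    return edges

variants = [
    "the contract ends on 2020-01-01".split(),
    "the contract ends on 2021-06-30".split(),
]
graph = build_graph(variants)
# The shared prefix collapses into a single path; the two date
# variants branch at the node "on".
print(sorted(graph["on"]))
```

Nodes with many outgoing branches mark exactly the slots (dates, parties, places) where standardised fragments vary.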
In the area of manufacturing and process automation in industrial applications, technical energy management systems are mainly used to measure, collect, store, analyze, and display energy data. In addition, PLC programs on the control level are required to obtain the energy data from the field level. If the measured data is available in a PLC as a raw value, it still has to be processed by the PLC so that it can be passed on to the higher layers in a suitable format, e.g. via OPC UA. In plants with heterogeneous field device installations, creating the corresponding PLC programs requires a high engineering effort. This paper describes a concept for a code generator that can be used to reduce this engineering effort.
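The basic idea of such a generator can be sketched as template instantiation from a device description. The IEC 61131-3 Structured Text template, the device record, and its raw/engineering ranges below are all invented for illustration; the paper's generator concept is more general.

```python
# Invented Structured Text template for linear scaling of a raw value.
ST_TEMPLATE = """FUNCTION_BLOCK FB_Scale_{name}
VAR_INPUT raw : INT; END_VAR
VAR_OUTPUT value : REAL; END_VAR
value := INT_TO_REAL(raw) * {factor} + {offset};
END_FUNCTION_BLOCK"""

def generate(device):
    """Emit a scaling function block from a device description by
    deriving the linear factor/offset from its raw and engineering
    ranges."""
    factor = (device["eng_max"] - device["eng_min"]) / (device["raw_max"] - device["raw_min"])
    offset = device["eng_min"] - factor * device["raw_min"]
    return ST_TEMPLATE.format(name=device["name"], factor=factor, offset=offset)

meter = {"name": "PowerMeter1", "raw_min": 0, "raw_max": 27648,
         "eng_min": 0.0, "eng_max": 500.0}
print(generate(meter))
```

Given machine-readable device descriptions, one such template per processing step replaces hand-written PLC code for every device variant.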
Requirements for an energy data information model for a communication-independent device description
(2021)
With the help of an energy management system according to ISO 50001, industrial companies obtain the opportunity to reduce energy consumption and to increase plant efficiency. In such a system, the communication of energy data has an important function. With the help of so-called energy profiles (e.g. PROFIenergy), energy data can be communicated between the field level and the higher levels via proven communication protocols (e.g. PROFINET). Because several industrial protocols are used in most automation systems, the problem is how to transfer energy data from one protocol to another with as little effort as possible. An energy data information model could overcome this problem by describing energy data in a uniform and semantically unambiguous way. Requirements for such a unified energy data information model are presented in this paper.
For anomaly-based intrusion detection in computer networks, data cubes can be used to build a model of the normal behavior of each cell. During inference, an anomaly score is calculated based on the deviation of the cell metrics from the corresponding normality model. A visualization approach is shown that combines different types of diagrams and charts with linked user interaction for data filtering.
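The per-cell scoring step can be sketched as a deviation measure against stored normality statistics. The absolute z-score below is one common choice, assumed here for illustration; the actual system may use a different deviation measure, and the cell statistics are hypothetical.

```python
def anomaly_score(value, mean, std):
    """Deviation of a cell metric from its normality model, expressed
    as an absolute z-score."""
    return abs(value - mean) / std if std else float("inf")

# Hypothetical normality model for one cube cell, e.g. bytes per minute
# for a particular source/destination/port combination.
cell_model = {"mean": 1200.0, "std": 150.0}
print(anomaly_score(2000.0, **cell_model))  # far above normal
print(anomaly_score(1250.0, **cell_model))  # close to normal
```

Cells whose score exceeds a threshold are the ones highlighted and filterable in the linked visualizations.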
With regard to climate change, increasing energy efficiency is still a significant issue in industry. In order to acquire energy data at the field level, so-called energy profiles can be used. They are advantageous as they are integrated into existing Industrial Ethernet standards (e.g. PROFINET). Commonly used energy profiles such as PROFIenergy and sercos Energy are established in industrial use. However, as the Industrial Internet of Things (IIoT) continues to develop, the question arises whether the established energy profiles are sufficient to fulfil the requirements of upcoming IIoT communication technologies. To answer this question, the paper compares and discusses the common energy profiles with respect to the current and future challenges of energy data communication. Furthermore, this analysis examines the need for further research in this field.
Agility is considered the silver bullet for survival in the VUCA world. However, many organisations are afraid of endangering their ISO 9001 certificate when introducing agile processes. A joint research project of the University of Applied Sciences and Arts Hannover and the DGQ has set itself the goal of providing more security in this area. The findings are based on interviews with managers and team members from organisations of various sizes and industries that work in an agile manner, as well as on common audit practices and a literature analysis. The outcome presents a clear distinction between agility and flexibility, as well as useful guidelines for integrating agile processes into QM systems, for QM practitioners and auditors alike.