Complexes like iron (II)-triazoles exhibit spin crossover behavior at ambient temperature and are often considered for possible applications. In previous studies, we implemented complexes of this type into polymer nanofibers and into first polymer-based optical waveguide sensor systems. In our current study, we synthesized complexes of this type, implemented them into polymers and obtained composites through drop casting and doctor blading. We show that a certain combination of polymer and complex can lead to composites with high potential for optical devices. For this purpose, we used two different complexes, [Fe(atrz)3](2 ns)2 and [Fe(atrz)3]Cl1.5(BF4)0.5, with a different polymer for each composite. We show through transmission measurements and UV/VIS spectroscopy that the optical properties of these composite materials change reversibly due to the spin crossover effect.
Compounds that exhibit the spin crossover effect are known to change their spin state in response to external stimuli. This reversible switching of spin states is accompanied by a change in the properties of the compound. Complexes, like iron (II)-triazole complexes, that exhibit this behavior at ambient temperature are often discussed for potential applications. In previous studies, we synthesized iron (II)-triazole complexes and implemented them into electrospun nanofibers. In initial studies, we used Mössbauer spectroscopy to prove successful implementation while maintaining the spin crossover properties. Further studies of ours showed that different electrospinning methods can be used to either implement the synthesized solid SCO material into the polymer nanofibers or deposit it onto them. We now used a solvent in which both the iron (II)-triazole complex [Fe(atrz)3](2 ns)2 and three different polymers (Polyacrylonitrile, Polymethylmethacrylate and Polyvinylpyrrolidone) are soluble. This should lead to a more homogeneous distribution of the complex along the nanofibers. Mössbauer spectroscopy and other measurements are therefore used to show a successful implementation without any significant changes to the complex.
Renewable energy production is one of the fastest-growing markets, and further strong growth can be anticipated due to the desire for increased sustainability in many parts of the world. With the rising adoption of renewable power production, such facilities are increasingly attractive targets for cyber attacks. At the same time, requirements for reliable production are rising. In this paper we propose a concept that improves the monitoring of renewable power plants by detecting anomalous behavior. The system not only detects an anomaly, it also provides reasoning for the anomaly based on a specific mathematical model of the expected behavior, giving detailed information about the various influential factors causing the alert. The set of influential factors can be configured in the system before learning normal behavior. The concept is based on multidimensional analysis and has been implemented and successfully evaluated on actual data from different providers of wind power plants.
Operators of production plants are increasingly emphasizing secure communication, including real-time communication such as PROFINET, within their control systems. This trend is further advanced by standards like IEC 62443, which demand the protection of real-time communication in the field. PROFIBUS and PROFINET International (PI) is working on the specification of the security extensions for PROFINET (“PROFINET Security”), which shall fulfill the requirements of secure communication in the field.
This paper discusses the matter in three parts. First, the roles and responsibilities of the plant owner, the system integrator, and the component provider regarding security, and the basics of the IEC 62443 will be described. Second, a conceptual overview of PROFINET Security, as well as a status update about the PI specification work will be given. Third, the article will describe how PROFINET Security can contribute to the defense-in-depth approach, and what the expected operating environment is. We will evaluate how PROFINET Security contributes to fulfilling the IEC 62443-4-2 standard for automation components.
Two of the authors are members of the PI Working Group CB/PG10 Security.
To learn a subject, the acquisition of the associated technical language is important.
Despite this widely accepted importance of learning the technical language, hardly any studies have been published that describe the characteristics of most technical languages that students are supposed to learn. This might largely be due to the absence of specialized text corpora for studying such languages at the lexical, syntactic and textual levels. In the present paper we describe a corpus of German physics texts that can be used to study the language used in physics. A large and a small variant are compiled. The small version of the corpus consists of 5.3 million words and is available on request.
During the Corona pandemic, information traditionally used for corporate credit risk analysis (e.g. from the analysis of balance sheets and payment behavior) became less valuable because it represents only past circumstances. Therefore, the use of currently published data from social media platforms, which have been shown to contain valuable information regarding the financial stability of companies, should be evaluated. This data can, for example, contain additional information from disappointed employees or customers. In order to analyze to what extent this data can improve the information base for corporate credit risk assessment, Twitter data regarding the ten largest insolvencies of German companies in 2020 and solvent counterparts is analyzed in this paper. The results of t-tests show that sentiment before the insolvencies is significantly worse than in the comparison group, which is in line with previously conducted research. Furthermore, companies can be classified as prospectively solvent or insolvent with up to 70% accuracy by applying the k-nearest-neighbor algorithm to monthly aggregated sentiment scores. No significant difference in the number of Tweets between the two groups can be proven, which is in contrast to findings from studies conducted before the Corona pandemic. The results can be utilized by practitioners and scientists to improve decision support systems in the domain of corporate credit risk analysis. From a scientific point of view, the results show that the information asymmetry between lenders and borrowers in credit relationships, which are principals and agents according to principal-agent theory, can be reduced based on user-generated content from social media platforms. In future studies, it should be evaluated to what extent this data can be integrated into established processes for credit decision making. Furthermore, additional social media platforms as well as further samples of companies should be analyzed. Lastly, the authenticity of user-generated content should be taken into account in order to ensure that credit decisions rely on truthful information only.
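The classification step described above can be illustrated with a small Python sketch: k-nearest-neighbor classification of companies from monthly aggregated sentiment scores. The feature layout, the scores and the choice of k are illustrative assumptions, not the study's data or configuration.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# One row per company: assumed mean sentiment score for each of the last six months.
X = np.array([
    [-0.4, -0.3, -0.5, -0.6, -0.2, -0.5],   # later insolvent
    [-0.1,  0.0, -0.2, -0.3, -0.4, -0.6],   # later insolvent
    [ 0.2,  0.1,  0.3,  0.0,  0.2,  0.1],   # solvent counterpart
    [ 0.0,  0.1, -0.1,  0.2,  0.3,  0.1],   # solvent counterpart
])
y = np.array([1, 1, 0, 0])                   # 1 = insolvent, 0 = solvent

clf = KNeighborsClassifier(n_neighbors=3)
# Leave-one-out cross-validation is a reasonable evaluation choice for a small sample.
accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {accuracy:.2f}")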
PROFINET Security: A Look on Selected Concepts for Secure Communication in the Automation Domain
(2023)
We provide a brief overview of the cryptographic security extensions for PROFINET, as defined and specified by PROFIBUS & PROFINET International (PI). These come in three hierarchically defined Security Classes, called Security Class 1, 2 and 3. Security Class 1 provides basic security improvements with moderate implementation impact on PROFINET components. Security Classes 2 and 3, in contrast, introduce an integrated cryptographic protection of PROFINET communication. We first highlight and discuss the security features that the PROFINET specification offers for future PROFINET products. Then, as our main focus, we take a closer look at some of the technical challenges that were faced during the conceptualization and design of Security Class 2 and 3 features. In particular, we elaborate on how secure application relations between PROFINET components are established and how a disruption-free availability of a secure communication channel is guaranteed despite the need to refresh cryptographic keys regularly. The authors are members of the PI Working Group CB/PG10 Security.
In this paper we describe the selection of a modern build automation tool for an industry research partner of ours, an insurance company. Build automation has become increasingly important over the years and is today one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more products for build automation have entered the market and existing tools have changed their functional scope, there is now a large number of tools that differ greatly in functionality. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools and a summary of our comparison of the three tools from our final comparison round.
In this poster we present the ongoing development of an integrated free and open source toolchain for semantic annotation of digitised cultural heritage. The toolchain development involves the specification of a common data model that aims to increase interoperability across diverse datasets and to enable new collaborative research approaches.
We present an approach towards a data acquisition system for digital twins that uses a 5G network for data transmission and localization. The current hardware setup, which utilizes stereo vision and LiDAR for 3D mapping, is explained together with two recorded point cloud data sets. Furthermore, a resulting digital twin comprised of voxelized point cloud data is shown. Ideas for future applications and challenges regarding the system are discussed and an outlook on further development is given.
Autonomous and integrated passenger and freight transport (APFIT) is a promising approach to tackling both traffic- and last-mile-related issues such as environmental emissions, social and spatial conflicts, or operational inefficiencies. By conducting an agent-based simulation, we shed light on this widely unexplored research topic and provide first indications regarding influential target figures of such a system in the rural area of Sarstedt, Germany. Our results show that larger fleets entail inefficiencies due to suboptimal utilization of monetary and material resources and increase traffic volume, while higher numbers of unused vehicles may exacerbate spatial conflicts. Nevertheless, to meet the given demand within our study area, a comparatively large fleet of about 25 vehicles is necessary to provide reliable service, assuming maximum passenger waiting times of six minutes, at the expense of higher standby times, rebalancing effort, and higher costs for vehicle acquisition and maintenance.
The miniaturized Mössbauer spectrometer (MIMOS II), originally devised by Göstar Klingelhöfer, is being further developed by the Renz group at the Leibniz University Hannover in cooperation with the Hanover University of Applied Sciences and Arts. A new processing unit with two-dimensional (2D) data acquisition was developed by M. Jahns. The advantage of this data acquisition is that no thresholds need to be set before the measurement. The energy of each photon is determined and stored together with the velocity of the drive. After the measurement, the relevant area can be selected for the Mössbauer spectrum. We have now expanded the evaluation unit with a power supply for a MIMOS drive and a MIMOS PIN detector, resulting in a very compact MIMOS transmission measurement setup. With this setup it is possible to process the signals of two detectors serially. Currently we are working on parallel signal processing.
On November 30th, 2022, OpenAI released the large language model ChatGPT, an extension of GPT-3. The AI chatbot provides real-time communication in response to users’ requests. The quality of ChatGPT’s natural-sounding answers marks a major shift in how we will use AI-generated information in our day-to-day lives. For a software engineering student, the use cases for ChatGPT are manifold: assessment preparation, translation, and creation of specified source code, to name a few. It can even handle more complex aspects of scientific writing, such as summarizing literature and paraphrasing text. Hence, this position paper addresses the need for discussion of potential approaches for integrating ChatGPT into higher education. We therefore focus on articles that address the effects of ChatGPT on higher education in the areas of software engineering and scientific writing. As ChatGPT was only recently released, there have been no peer-reviewed articles on the subject. Thus, we performed a structured grey literature review using Google Scholar to identify preprints of primary studies. In total, five out of 55 preprints are used for our analysis. Furthermore, we held informal discussions and talks with other lecturers and researchers and took into account the authors’ test results from using ChatGPT. We present five challenges and three opportunities for the higher education context that emerge from the release of ChatGPT. The main contribution of this paper is a proposal for how to integrate ChatGPT into higher education in four main areas.
Legal documents often have a complex layout with many different headings, headers and footers, side notes, etc. For further processing, it is important to extract these individual components correctly from a legally binding document, for example a signed PDF. A common approach is to classify each (text) region of a page using its geometric and textual features. This approach works well when the training and test data have a similar structure and when the documents of a collection to be analyzed have a rather uniform layout. We show that the use of global page properties can improve the accuracy of text element classification: we first classify each page into one of three layout types. After that, we can train a classifier for each of the three page types and thereby improve the accuracy on a manually annotated collection of 70 legal documents consisting of 20,938 text elements. When we split by page type, we achieve an improvement from 0.95 to 0.98 for single-column pages with left marginalia and from 0.95 to 0.96 for double-column pages. We developed our own feature-based method for page layout detection, which we benchmark against a standard implementation of a CNN image classifier. The approach presented here is based on a corpus of freely available German contracts and general terms and conditions.
Both the corpus and all manual annotations are made freely available. The method is language agnostic.
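To make the two-stage idea from the preceding abstract concrete, the following Python sketch first predicts a page layout type from page-level features and then applies a separate text element classifier per layout type. The feature representation, classifier choice and data structures are assumptions for illustration; the paper uses its own feature-based method.

from collections import defaultdict
from sklearn.ensemble import RandomForestClassifier

page_type_clf = RandomForestClassifier(random_state=0)
element_clfs = {}  # one text element classifier per page layout type

def train(pages):
    # pages: list of dicts with 'page_features', 'page_type',
    # 'element_features' (list of vectors) and 'element_labels' (list of classes)
    page_type_clf.fit([p["page_features"] for p in pages],
                      [p["page_type"] for p in pages])
    grouped = defaultdict(lambda: ([], []))
    for p in pages:
        X, y = grouped[p["page_type"]]
        X.extend(p["element_features"])
        y.extend(p["element_labels"])
    for page_type, (X, y) in grouped.items():
        clf = RandomForestClassifier(random_state=0)
        clf.fit(X, y)
        element_clfs[page_type] = clf

def classify_elements(page_features, element_features):
    # First decide the layout type of the page, then classify its text elements
    # with the classifier trained for exactly that layout type.
    page_type = page_type_clf.predict([page_features])[0]
    return element_clfs[page_type].predict(element_features)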
Companies worldwide have enabled their employees to work remotely as a consequence of the Covid-19 pandemic. Software development is a human-centered discipline and thrives on teamwork. Agile methods focus on several social aspects of software development. Software development teams in Germany were mainly co-located before the pandemic. This paper aims to validate the findings of existing studies by expanding on an existing multiple-case study. We therefore collected data by conducting semi-structured interviews, observing agile practices, and reviewing project documents in three cases. Based on the results, we can confirm the following findings: 1) the teams rapidly adapted the agile practices and roles, 2) communication within the teams became more objective, 3) social exchange between team members decreased, 4) a combined approach of remote and onsite work is expected after the pandemic, 5) (perceived) performance remained stable or increased, and 6) the well-being of team members remained stable or increased.
Social skills are essential for a successful understanding of agile methods in software development. Several studies highlight the opportunities and advantages of integrating real-world projects and problems, in collaboration with companies, into higher education using agile methods. This integration comes with several opportunities and advantages for both the students and the company. The students are able to interact with real-world software development teams, analyze and understand their challenges and identify possible measures to tackle them. However, the integration of real-world problems and companies is complex and may come with a high coordination and preparation effort for the course. The challenges related to interaction and communication with students are increased by virtual distance teaching during the Covid-19 pandemic, as direct contact with students is missing. Also, we do not know how problem-based learning in virtual distance teaching is valued by the students. This paper presents our adapted eduScrum approach and the learning outcomes of integrating experiments with real-world software development teams from two companies into a Master of Science course organized as virtual distance teaching. The evaluation shows that students value analyzing real-world problems using agile methods. They highlight the interaction with real-world software development teams. The students also appreciate the organization of the course using an iterative approach with eduScrum. Based on our findings, we present four recommendations for the integration of agile methods and real-world problems into higher education in virtual distance teaching settings. The results of our paper contribute to the practitioner and researcher/lecturer community, as we provide valuable insights into how to fill the gap between practice and higher education in virtual distance settings.
Data and Information Science: Book of Abstracts at BOBCATSSS 2022 Hybrid Conference, 23rd - 25th of May 2022, Debrecen.
This year marks the 30th anniversary of BOBCATSSS, an international, annual symposium designed for librarians and information professionals in a rapidly changing environment. Over the past 30 years, the conference has featured exciting topics, great venues, interested guests and engaging presenters.
This year, the topics of the many papers presented in the Book of Abstracts are introduced for the first time in person at the University of Debrecen and in hybrid form. The Book of Abstracts provides an overview of all presentations given at BOBCATSSS. Presentations are listed in alphabetical order by title and include speeches, Pecha Kuchas, posters and workshops.
The theme of BOBCATSSS is Data and Information Science. Data and information are the basis for decisions and processes in business, politics and science, and they are particularly important in the current era of digital transformation. This is exactly where this year's subthemes come in: they deal with data science, openness as well as institutional roles.
In this paper we investigate how concreteness and abstractness are represented in word embedding spaces. We use data for English and German and show that concreteness and abstractness can be determined independently and turn out to be completely opposite directions in the embedding space. Various methods can be used to determine the direction of concreteness, always resulting in roughly the same vector. Although concreteness is a central aspect of the meaning of words and can be detected clearly in embedding spaces, it seems not as easy to subtract or add concreteness to words to obtain other words or word senses, as can be done, for example, with a semantic property like gender.
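One common way to determine such a direction, sketched below in Python, is to take the difference between the mean vectors of concrete and abstract seed words and project other words onto it; the seed lists and the pretrained model name are illustrative assumptions, not the paper's setup.

import numpy as np
import gensim.downloader

# Publicly available English embeddings as a stand-in model.
model = gensim.downloader.load("glove-wiki-gigaword-100")
concrete_seeds = ["table", "stone", "dog", "bottle", "tree"]          # assumed seed words
abstract_seeds = ["freedom", "idea", "honesty", "theory", "emotion"]  # assumed seed words

direction = (np.mean([model[w] for w in concrete_seeds], axis=0)
             - np.mean([model[w] for w in abstract_seeds], axis=0))
direction /= np.linalg.norm(direction)

def concreteness_score(word):
    # Projection of a normalized word vector onto the concreteness direction.
    v = model[word]
    return float(np.dot(v / np.linalg.norm(v), direction))

print(concreteness_score("hammer"), concreteness_score("justice"))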
Visual effects and elements in video games and interactive virtual environments can be applied to transfer (or delegate) non-visual perceptions (e.g. proprioception, presence, pain) to players and users, thus increasing perceptual diversity via the visual modality. Such elements or effects are referred to as visual delegates (VDs). Current findings on the experiences that VDs can elicit relate to specific VDs, not to VDs in general. Deductive and comprehensive VD evaluation frameworks are lacking. We analyzed VDs in video games to generalize VDs in terms of their visual properties. We conducted a systematic paper analysis to explore player and user experiences observed in association with specific VDs in user studies. We conducted semi-structured interviews with expert players to determine their preferences and the impact of VD properties. The resulting VD framework (VD-frame) contributes to a more strategic approach to identifying the impact of VDs on player and user experiences.
The Covid-19 pandemic has led to a significant increase in remote work. The change in interaction and collaboration has been a challenge for many agile teams. Various studies show different effects and impacts on the collaboration of agile teams during the pandemic. For example, communication has become more factual and goal-oriented, and a reduction in social exchange within the teams is reported. Our article addresses the change in interaction within agile teams caused by remote work. We conducted a qualitative case study of an agile software development team at Otto. Our results show a connection between the effects on interaction and the personal autonomy of the team members. Furthermore, we did not observe any significant negative effects of the changed interaction on the agile way of working.
Generalized legal documents, in which the positions of the individual characteristics of a contract within the text are known, can be used, first, to support the approval process for new contracts in an automated way and, second, to provide pre-selected new legal documents as a contract generator. In this contribution, we use known legal texts to show how formulaic text passages can be identified and how frequent individual characteristics can be classified so that they can be used as template sections. Fields of application are presented and the existing potential for legal tech applications is pointed out.
In 2020, the world changed due to the Covid-19 pandemic. Containment measures to reduce the spread of the virus were planned and implemented by many countries and companies. Worldwide, companies sent their employees to work from home. This change has led to significant challenges in teams that were co-located before the pandemic. Agile software development teams were affected by this switch, as agile methods focus on communication and collaboration. Research results have already been published on the challenges of switching to remote work and the effects on agile software development teams. This article presents a systematic literature review. We identified 12 relevant papers for our study and analyzed them in detail. The results provide an overview of how agile software development teams reacted to the switch to remote work, e.g., which agile practices they adapted. We also gained insights into changes in the performance of agile software development teams and social effects on these teams during the pandemic.
Context: Companies have been adapting agile methods, practices and artifacts for their use in practice for more than two decades. These adaptations result in a wide variety of described agile practices. For instance, the Agile Alliance lists 75 different practices in its Agile Glossary. This situation may lead to misunderstandings, as agile practices with similar names can be interpreted and used differently.
Objective: This paper synthesizes an integrated list of agile practices, both from primary and secondary sources.
Method: We performed a tertiary study to identify existing overviews and lists of agile practices in the literature. We identified 876 studies, of which 37 were included.
Results: The results of our paper show that certain agile practices are listed and used more often in existing studies. Our integrated list of agile practices comprises 38 entries structured in five categories. Conclusion: The high number of agile practices, and thus the wide variety, has increased steadily over the past decades due to the adaptation of agile methods. Based on our findings, we present a comprehensive overview of agile practices. The research community benefits from our integrated list of agile practices as a potential basis for future research. Practitioners also benefit from our findings, as the structured overview of agile practices provides the opportunity to select or adapt practices for their specific needs.
Context: Agile software development (ASD) puts social aspects like communication and collaboration into focus. Thus, one may assume that the specific work organization of companies impacts the work of ASD teams. A major change in work organization is the switch to a 4-day work week, which some companies have investigated in experiments. Also, recent studies show that ASD teams have been affected by the switch to remote work since the outbreak of the Covid-19 pandemic in 2020.
Objective: Our study presents empirical findings on the effects on ASD teams operating remotely in a 4-day work week organization. Method: We performed a qualitative single case study and conducted seven semi-structured interviews, observed 14 agile practices and screened eight project documents and protocols of agile practices.
Results: We found that the teams adapted the agile method in use due to the change to a 4-day work week environment and the switch to remote work. The productivity of the two ASD teams did not decrease. Although the stress level of the ASD team members increased due to the 4-day work week, we found that the job satisfaction of the individual ASD team members is affected positively. Finally, we point to effects on social facets of the ASD teams.
Conclusion: The research community benefits from our results, as the current state of research dealing with the effects of a 4-day work week on ASD teams is limited. Our findings also provide several practical implications for ASD teams working remotely in a 4-day work week.
Image captions in scientific papers are usually complementary to the images. Consequently, the captions contain many terms that do not refer to concepts visible in the image. We conjecture that it is possible to distinguish between these two types of terms in an image caption by analysing the text only. To examine this, we evaluated different features. The dataset we used to compute tf.idf values, word embeddings and concreteness values contains over 700,000 scientific papers with over 4.6 million images. The evaluation was done with a manually annotated subset of 329 images. Additionally, we trained a support vector machine to predict whether a term is likely visible or not. We show that the concreteness of terms is a very important feature for identifying terms in captions and context that refer to concepts visible in images.
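A minimal sketch of such a classifier is given below: a support vector machine over per-term features such as tf.idf, a concreteness rating and an embedding-based similarity. The concrete feature values, training rows and kernel choice are invented placeholders, not the paper's annotated data.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Per-term features (assumed): [tf.idf in the caption, concreteness rating,
# cosine similarity to the caption's embedding centroid]
X_train = [
    [0.12, 4.8, 0.61],   # "microscope"  -> visible in the image
    [0.08, 4.5, 0.55],   # "cell"        -> visible in the image
    [0.30, 1.9, 0.20],   # "accuracy"    -> not visible
    [0.25, 2.2, 0.18],   # "evaluation"  -> not visible
]
y_train = [1, 1, 0, 0]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(clf.predict([[0.10, 4.6, 0.58]]))  # expected: [1], i.e. likely visible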
Since textual user-generated content from social media platforms contains valuable information for decision support and especially corporate credit risk analysis, automated approaches to text classification such as the application of sentiment dictionaries and machine learning algorithms have received great attention in recent research based on user-generated content. While machine learning algorithms require individual training data sets for varying sources, sentiment dictionaries can be applied to texts immediately, whereby domain-specific dictionaries attain better results than domain-independent word lists. We evaluate by means of a literature review how sentiment dictionaries can be constructed for specific domains and languages. We then construct nine versions of German sentiment dictionaries relying on a process model which we developed based on the literature review. We apply the dictionaries to a manually classified German-language data set from Twitter which has been shown to contain hints of the financial (in)stability of companies. Based on their classification accuracy, we rank the dictionaries and verify the ranking by utilizing McNemar's test for significance. Our results indicate that the significantly best dictionary is based on the German-language dictionary SentiWortschatz and an extension approach using the lexical-semantic database GermaNet. It achieves a classification accuracy of 59.19% in the underlying three-case scenario, in which the Tweets are labelled as negative, neutral or positive. A random classification would attain an accuracy of 33.3% in the same scenario; hence, automated coding by use of the sentiment dictionaries can reduce manual effort. Our process model can be adopted by other researchers when constructing sentiment dictionaries for other domains and languages. Furthermore, the established dictionaries can be used by practitioners, especially in the domain of corporate credit risk analysis, for automated text classification, which to a great extent has been conducted manually up to today.
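The basic dictionary-based three-class scoring works roughly as sketched below in Python; the tiny word lists and the decision threshold are illustrative stand-ins for SentiWortschatz-style dictionaries, not the nine dictionaries constructed in the paper.

# Assumed miniature sentiment dictionary with polarity weights per German token.
positive = {"stabil": 0.6, "wachstum": 0.8, "gewinn": 0.7}
negative = {"insolvenz": -0.9, "verlust": -0.7, "entlassung": -0.6}
dictionary = {**positive, **negative}

def classify(tweet, threshold=0.1):
    # Sum the polarity weights of all known tokens and map the score to a class.
    tokens = tweet.lower().split()
    score = sum(dictionary.get(t, 0.0) for t in tokens)
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

print(classify("Gerücht über Insolvenz und Verlust beim Zulieferer"))  # -> negative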
By applying the ISO 50001 standard and introducing an energy management system (EnMS), a successive increase in energy efficiency can be achieved. To implement energy monitoring or standby management functionalities, energy data must be provided at the field level and, on edge devices or PLCs, adapted in data format, scaled and mapped to an established communication interface (e.g. based on OPC UA or MQTT) by means of an energy management program where necessary. The creation of these energy management programs involves a high engineering effort, because the field devices of the heterogeneous field level do not provide the energy data with standardized semantics. To counter this engineering effort, a concept for a universal energy data information model (UEDIM) is presented. This concept provides the energy data to the EnMS in a semantically standardized form. For the further development of the UEDIM, this contribution examines in more detail in which form energy data can be provided at the field level and which requirements must be established for the UEDIM.
Wikidata and Wikibase as complementary research data management services for cultural heritage data
(2022)
The NFDI (German National Research Data Infrastructure) consortia are associations of various institutions within a specific research field, which work together to develop common data infrastructures, guidelines, best practices and tools that conform to the principles of FAIR data. Within the NFDI, a common question is: What is the potential of Wikidata to be used as an application for science and research? In this paper, we address this question by tracing current research use cases and applications for Wikidata, its relation to standalone Wikibase instances, and how the two can function as complementary services to meet a range of research needs. This paper builds on lessons learned through the development of open data projects and software services within the Open Science Lab at TIB, Hannover, in the context of NFDI4Culture – the consortium including participants across the broad spectrum of the digital libraries, archives, and museums field, and the digital humanities.
We present a novel long short-term memory (LSTM) approach for time-series prediction of the sand demand that arises from preparing the sand moulds for the iron casting process of a foundry. With our approach, we contribute to qualifying LSTM and its combination with feedback-corrected optimal scheduling for industrial processes.
The sand is produced in an energy intensive mixing process which is controlled by optimal scheduling. The optimal scheduling is solved for a fixed prediction horizon. One major influencing factor is the sand demand, which is highly disturbed, for example due to production interruptions. The causes of production interruptions are in general physically unknown. We assume that information about the future behavior of the sand demand is included in current and past process data. Therefore, we choose LSTM networks for predicting the time-series of the sand demand.
The sand demand prediction is performed by our multi-model approach. This approach outperforms the currently used naive estimation, even when predicting far into the future. Our LSTM-based prediction approach can forecast the sand demand with a conformity of up to 38% and a mean value accuracy of approximately 99%. Simulating the optimal scheduling with sand demand prediction leads to an improvement in energy savings of approximately 1.1% compared to the naive estimation. The application of our novel approach at the real production plant of a foundry confirms the simulation results and verifies the capability of our approach.
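A minimal sketch of an LSTM-based demand predictor is shown below, using a sliding window over past demand samples to forecast a short horizon; the window length, network size and the synthetic training signal are assumptions and do not reflect the paper's multi-model setup.

import numpy as np
from tensorflow import keras

WINDOW = 24    # past samples used as model input (assumed)
HORIZON = 6    # future samples to predict (assumed)

def make_windows(series, window=WINDOW, horizon=HORIZON):
    X, y = [], []
    for i in range(len(series) - window - horizon):
        X.append(series[i:i + window])
        y.append(series[i + window:i + window + horizon])
    return np.array(X)[..., None], np.array(y)

# Synthetic stand-in for the measured sand demand time series.
demand = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)
X, y = make_windows(demand)

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(HORIZON),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

forecast = model.predict(X[-1:], verbose=0)[0]   # next HORIZON demand values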
With the use of an energy management system in an industrial company according to ISO 50001, a step-by-step increase in energy efficiency can be achieved. The realization of energy monitoring and load management functions requires programs on edge devices or PLCs to acquire the data, adapt the data type or scale the values of the energy information. In addition, the energy information must be mapped to communication interfaces (e.g. based on OPC UA) in order to convey this energy information to the energy management application. The development of these energy management programs is associated with a high engineering effort, because the field devices from the heterogeneous field level do not provide the energy information in standardized semantics. To mitigate this engineering effort, a universal energy data information model (UEIM) is developed and presented in this paper.
A new FOSS (free and open source software) toolchain and associated workflow is being developed in the context of NFDI4Culture, a German consortium of research- and cultural heritage institutions working towards a shared infrastructure for research data that meets the needs of 21st century data creators, maintainers and end users across the broad spectrum of the digital libraries and archives field, and the digital humanities. This short paper and demo present how the integrated toolchain connects: 1) OpenRefine - for data reconciliation and batch upload; 2) Wikibase - for linked open data (LOD) storage; and 3) Kompakkt - for rendering and annotating 3D models. The presentation is aimed at librarians, digital curators and data managers interested in learning how to manage research datasets containing 3D media, and how to make them available within an open data environment with 3D-rendering and collaborative annotation features.
Even for the more traditional insurance industry, the Microservices Architecture (MSA) style plays an increasingly important role in provisioning insurance services. However, insurance businesses must operate legacy applications, enterprise software, and service-based applications in parallel for a more extended transition period. The ultimate goal of our ongoing research is to design a microservice reference architecture in cooperation with our industry partners from the insurance domain that provides an approach for the integration of applications from different architecture paradigms. In Germany, individual insurance services are classified as part of the critical infrastructure. Therefore, German insurance companies must comply with the requirements of the Federal Office for Information Security, which are enforced by the Federal Supervisory Authority. Additionally, insurance companies must comply with relevant laws, regulations, and standards as part of the business’s compliance requirements. Note: Since Germany is seen as relatively ’tough’ with respect to privacy and security demands, fulfilling those demands might well be suitable (if not even ’over-achieving’) for insurances in other countries as well. The question thus arises of how insurance services can be secured in an application landscape shaped by the MSA style in order to comply with the architectural and security requirements depicted above. This article highlights the specific regulations, laws, and standards the insurance industry must comply with. We present initial architectural patterns to address authentication and authorization in an MSA tailored to the requirements of our insurance industry partners.
To avoid the shortcomings of traditional monolithic applications, the Microservices Architecture (MSA) style plays an increasingly important role in providing business services. This is true even for the more conventional insurance industry with its highly heterogeneous application landscape and sophisticated cross-domain business processes. Therefore, the question arises of how workflows can be implemented to grant the required flexibility and agility and, on the other hand, to exploit the potential of the MSA style. In this article, we present two different approaches – orchestration and choreography. Using an application scenario from the insurance domain, both concepts are discussed. We introduce a pattern that outlines the mapping of a workflow to a choreography.
Techno-economic analyses that allocate costs to the energy flows of energy systems are helpful for understanding the formation of costs within processes and for increasing cost efficiency. For the economic evaluation, the usefulness or quality of the energy is of great importance. In exergy-based methods, this is considered by allocating costs to the exergy instead of the energy. As exergy represents the ability to perform work, it is often called the useful part of energy. In contrast, the anergy, the part of energy which cannot perform work, is often assumed to be not useful.
However, heat flows, as used e.g. in domestic heating, are always a mixture of a relatively small portion of exergy and a large portion of anergy. Although of lower quality, the anergy is obviously useful for these applications. The question is whether it makes sense to differentiate between exergy and anergy and to take both properties into account for the economic evaluation.
To answer this question, a new methodical concept based on the definition of an anergy-exergy cost ratio is compared to the commonly applied approaches of considering either energy or exergy as the basis for economic evaluation. These three different approaches for the economic analysis of thermal energy systems are applied to an exemplary heating system with thermal storages. It is shown that the results of the techno-economic analysis can be improved by giving anergy an economic value and that the proposed anergy-cost ratio allows a flexible adaptation of the evaluation depending on the economic constraints of a system.
The German Corona Consensus (GECCO) established a uniform dataset in FHIR format for exchanging and sharing interoperable COVID-19 patient-specific data between the health information systems (HIS) of universities. To share the COVID-19 information with other sites that use openEHR, the data has to be converted into FHIR format. In this paper, we introduce our solution, a web tool named “openEHR-to-FHIR” that converts compositions from an openEHR repository and stores them in their respective GECCO FHIR profiles. The tool also provides a REST web service for ad hoc conversion of openEHR compositions to FHIR profiles.
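How such an ad hoc conversion call could look from a client's perspective is sketched below in Python; the base URL, endpoint path and payload handling are hypothetical assumptions and not the tool's documented API.

import json
import requests

with open("composition.json") as f:
    openehr_composition = json.load(f)      # canonical openEHR composition (JSON)

response = requests.post(
    "http://localhost:8080/openehr-to-fhir/convert",   # assumed endpoint
    json=openehr_composition,
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
response.raise_for_status()
fhir_resource = response.json()              # expected: a GECCO-profile FHIR resource
print(fhir_resource.get("resourceType"))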
A new type of rotary compressor, called a “rotary-chamber compressor”, consists of two interlocking rotors with four wings each that perform non-uniform rotary movements. Both rotors have the same direction of rotation; while one rotor is accelerating, the other rotor is retarding. After surpassing a specific mark, the sequence changes and the leading rotor begins to retard, and vice versa. Due to the resulting relative phase difference, the volume between two wings changes periodically, which creates pulsating working chambers. The technology was first introduced by its inventor Jürgen Schukey in 1987. Since then, no further development of this machine is known to us except our own. In this contribution, a study on the kinematics of the rotary-chamber compressor is presented. Initial studies have shown that changes in the kinematics of the rotors have a direct influence on the thermodynamic variables, which, if optimized, can lead to an increased performance of the machine. Therefore, a mathematical model has been developed to obtain the performance parameters for different kinematic concepts by using numerical CFD analysis. Furthermore, additional optimization possibilities are listed and discussed.
In order to ensure validity in legal texts like contracts and case law, lawyers rely on standardised formulations that are written carefully but also represent a kind of code with a meaning and function known to all legal experts. Using directed (acyclic) graphs to represent standardised text fragments, we are able to capture variations concerning time specifications, slight rephrasings, names, places and also OCR errors. We show how we can find such text fragments by sentence clustering, pattern detection and clustering of patterns. To test the proposed methods, we use two corpora of German contracts and court decisions, specially compiled for this purpose. However, the entire process for representing standardised text fragments is language-agnostic. We analyze and compare both corpora, give a quantitative and qualitative analysis of the text fragments found, and present a number of examples from both corpora.
Demands to better protect non-personal data as well are increasing. This also applies to agriculture. Farmers confidently demand "my data belongs to me" and want to be adequately compensated for providing their farm data. However, there is much to suggest that most of the collected data has hardly any economic value. This article systematically examines which types of data exist and what market value they presumably have. Since data are digital goods, the same peculiarities apply to them as to other digital content, such as easy copying and modification. The analysis concludes that most data in agriculture presumably has only a low value, which justifies neither commercialization nor elaborate legal protection. Only through data aggregation and clever evaluation of this raw data is useful information created, in a kind of refinement stage. Presumably, however, it would be best to keep as much data as possible publicly accessible, so that value is created through innovative business models that build on this public data.
In the area of manufacturing and process automation in industrial applications, technical energy management systems are mainly used to measure, collect, store, analyze and display energy data. In addition, PLC programs on the control level are required to obtain the energy data from the field level. If the measured data is available in a PLC as a raw value, it still has to be processed by the PLC so that it can be passed on to the higher layers in a suitable format, e.g. via OPC UA. In plants with heterogeneous field device installations, a high engineering effort is required to create the corresponding PLC programs. This paper describes a concept for a code generator that can be used to reduce this engineering effort.
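The code generator idea can be illustrated with a small Python sketch that emits IEC 61131-3 Structured Text for scaling raw field values into engineering units, driven by a minimal device description; the description format, addresses and template are assumptions, not the concept from the paper.

# Assumed minimal device description: signal name, input address, scaling factor, unit.
DEVICE_POINTS = [
    {"name": "ActivePower",   "address": "%IW100", "scale": 0.1,   "unit": "kW"},
    {"name": "EnergyCounter", "address": "%IW102", "scale": 0.001, "unit": "kWh"},
]

def generate_structured_text(points):
    # Emit a Structured Text program that maps raw input words to scaled REAL values.
    lines = ["PROGRAM PRG_EnergyMapping", "VAR"]
    for p in points:
        lines.append(f"    raw{p['name']} AT {p['address']} : INT;")
    for p in points:
        lines.append(f"    {p['name']} : REAL; (* scaled value in {p['unit']} *)")
    lines += ["END_VAR", ""]
    for p in points:
        lines.append(f"{p['name']} := INT_TO_REAL(raw{p['name']}) * {p['scale']};")
    lines.append("END_PROGRAM")
    return "\n".join(lines)

print(generate_structured_text(DEVICE_POINTS))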
Requirements for an energy data information model for a communication-independent device description
(2021)
With the help of an energy management system according to ISO 50001, industrial companies gain the opportunity to reduce energy consumption and to increase plant efficiency. In such a system, the communication of energy data has an important function. With the help of so-called energy profiles (e.g. PROFIenergy), energy data can be communicated between the field level and the higher levels via proven communication protocols (e.g. PROFINET). Due to the fact that in most cases several industrial protocols are used in an automation system, the problem is how to transfer energy data from one protocol to another with as little effort as possible. An energy data information model could overcome this problem and describe energy data in a uniform and semantically unambiguous way. Requirements for a unified energy data information model are presented in this paper.
For anomaly-based intrusion detection in computer networks, data cubes can be used to build a model of the normal behavior of each cell. During inference, an anomaly score is calculated based on the deviation of cell metrics from the corresponding normality model. A visualization approach is shown that combines different types of diagrams and charts with linked user interaction for filtering the data.
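The per-cell normality model and anomaly score can be sketched in Python as follows: each cube cell keeps a running mean and standard deviation of a traffic metric, and the anomaly score is the deviation of a new observation from that model. The cell key and metric are illustrative assumptions.

import math
from collections import defaultdict

class CellModel:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Welford's online algorithm for running mean and variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def score(self, x):
        # Z-score-like deviation of x from the cell's normality model.
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(x - self.mean) / std if std > 0 else 0.0

models = defaultdict(CellModel)

def observe(cell_key, metric_value):
    anomaly = models[cell_key].score(metric_value)
    models[cell_key].update(metric_value)
    return anomaly

# Assumed cell key: (source subnet, destination port, hour of day); metric: flows per minute.
print(observe(("10.0.0.0/24", 443, 14), 120.0))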
With regard to climate change, increasing energy efficiency is still a significant issue in industry. In order to acquire energy data at the field level, so-called energy profiles can be used. They are advantageous as they are integrated into existing Industrial Ethernet standards (e.g. PROFINET). Commonly used energy profiles such as PROFIenergy and sercos Energy have become established in industrial use. However, as the Industrial Internet of Things (IIoT) continues to develop, the question arises whether the established energy profiles are sufficient to fulfil the requirements of upcoming IIoT communication technologies. To answer this question, the paper compares and discusses the common energy profiles with regard to current and future challenges of energy data communication. Furthermore, this analysis examines the need for further research in this field.
Agility is considered the silver bullet for survival in the VUCA world. However, many organisations are afraid of endangering their ISO 9001 certificate when introducing agile processes. A joint research project of the University of Applied Sciences and Arts Hannover and the DGQ has set itself the goal of providing more security in this area. The findings were based on interviews with managers and team members from various organisations of different sizes and industries working in an agile manner as well as on common audit practices and a literature analysis. The outcome presents a clear distinction of agility from flexibility as well as useful guidelines for the integration of agile processes in QM systems - for QM practitioners and auditors alike.
We present a feedback-corrected optimal scheduling approach to reduce the electrical energy demand of batch processes, exemplified by the sand preparation in a foundry. The main energy driver in the exemplary foundry is the idle time of the batch-wise working sand mixers. In this novel approach, we use linear integer programming to minimize the energy demand of the sand mixers by scheduling the batches in real time. For the optimization we use a physical model of the sand preparation, which takes the dwell times of the processes into account as dead-time systems. In this paper, we present the steps required to make the optimal scheduling approach applicable to the production process. The application at the real production plant proves the performance of the suggested approach. Compared to the conventional control, the feedback-corrected optimal scheduling approach leads to a reduction in energy consumption of approximately 6.5% without modifying the process or the aggregates.
This paper presents a novel approach for modelling the energy consumption of the coupled parallel moulding sand mixers of a foundry as an optimal control problem. The energy consumption is minimized by scheduling the mixing processes in a linear integer programming scheme. The sand flow through the foundry’s sand preparation is characterized by a physical model. This model considers the sand demand of the moulding machine as a disturbance, as well as the sand masses stored in the mixer hoppers and machine hoppers, respectively. The novel approach of handling dwell times for dosing, mixing and transport processes using dead-time systems and constraint pushing allows the application of a linear model. The formulation of the optimal control problem aims at real-time application as model predictive control at the production plant. Initial application results indicate an improvement in energy consumption of approximately 8%.
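A heavily simplified sketch of such a linear integer programming schedule is given below in Python, using binary start variables per mixer and time slot, a cumulative demand-coverage constraint and an energy objective; all parameters, the demand series and the constraint structure are illustrative assumptions, not the paper's model.

import pulp

T = range(12)                  # time slots in the prediction horizon (assumed)
MIXERS = ["M1", "M2"]
BATCH_MASS = 2.0               # tonnes of sand per batch (assumed)
E_BATCH, E_IDLE = 50.0, 8.0    # kWh per mixing batch / per idle slot (assumed)
demand = [1.5, 1.0, 2.5, 2.0, 1.0, 0.5, 2.0, 3.0, 1.5, 1.0, 2.0, 1.5]  # predicted tonnes per slot

prob = pulp.LpProblem("sand_mixing", pulp.LpMinimize)
run = pulp.LpVariable.dicts("run", [(m, t) for m in MIXERS for t in T], cat="Binary")

# Objective: mixing energy for running slots plus idle energy for the remaining slots.
prob += pulp.lpSum(run[m, t] * E_BATCH + (1 - run[m, t]) * E_IDLE
                   for m in MIXERS for t in T)

# Cumulative production must cover cumulative demand in every time slot.
for t in T:
    prob += (pulp.lpSum(run[m, s] for m in MIXERS for s in T if s <= t) * BATCH_MASS
             >= sum(demand[: t + 1]))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
schedule = {(m, t): int(run[m, t].value()) for m in MIXERS for t in T}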
This Innovative Practice Full Paper presents our learnings from the process of running a Master of Science class with eduScrum, integrating real-world problems as projects. We prepared, performed, and evaluated an agile educational concept for the new Master of Science program Digital Transformation organized and provided by the department of business computing at the University of Applied Sciences and Arts - Hochschule Hannover in Germany. The course deals with innovative methodologies of agile project management and is attended by 25 students. We taught the class as a teaching pair during the summer terms of 2019 and 2020. The eduScrum method has been used in different educational contexts, including higher education. During the preparation of the approach, we decided to use challenges, problems, or questions from industry. Thus, we acquired four companies and, in coordination with them, prepared dedicated project descriptions. Each project description was refined in the form of a backlog (list of requirements). We divided the class into four eduScrum teams, one team for each project. The subdivision of the class was done randomly.
Since we wanted to integrate realistic projects in cooperation with the industry partners, we decided to adapt the eduScrum approach. The eduScrum teams were challenged with different projects, e.g., analyzing a dedicated phenomenon in a real project or creating a theoretical model for a company’s new project management approach. We present our experiences of the whole process of preparing, performing and evaluating an agile educational approach combined with projects from practice. We found that the students value the agile method using real-world problems. While the results are mainly based on the summer term 2019, this paper also includes our learnings from virtual distance teaching during the Covid-19 pandemic in the summer term 2020. The paper contributes to the distribution of methods for higher education teaching in the classroom and in distance learning.
Agile methods require constant optimization of one’s approach, leading to the adaptation of agile practices. These practices are also adapted when introducing them to companies and their software development teams due to organizational constraints. As a consequence of the widespread use of agile methods, we notice a high variety of their elements: practices, roles, and artifacts. This multitude of agile practices, artifacts, and roles results in an unsystematic mixture. It leads to several questions: When is a practice a practice, and when is it a method or technique? This paper presents the tree of agile elements, a taxonomy of agile methods, based on the literature and guidelines of widely used agile methods. We describe a taxonomy of agile methods using terms and concepts of software engineering, in particular software process models. We aim to enable agile elements to be delimited, which should help companies, agile teams, and the research community gain a basic understanding of the interrelationships and dependencies of the individual components of agile methods.
The negative effects of traffic, such as air quality problems and road congestion, put a strain on the infrastructure of cities and highly populated areas. A potential measure to reduce these negative effects is grocery home delivery (e-grocery), which can bundle driving activities and hence decrease traffic and related emission outputs. Several studies have investigated the potential impact of e-grocery on traffic in various last-mile contexts. However, no holistic view of the sustainability of e-grocery across the entire supply chain has yet been proposed. Therefore, this paper presents an agent-based simulation to assess the impact of the e-grocery supply chain compared to the stationary one in terms of mileage and different emission outputs. The simulation shows that a high e-grocery utilization rate can aid in decreasing total driving distances by up to 255% relative to the optimal value as well as CO2 emissions by up to 50%.