On November 30th, 2022, OpenAI released the large language model ChatGPT, an extension of GPT-3. The AI chatbot provides real-time communication in response to users' requests. The quality of ChatGPT's natural-sounding answers marks a major shift in how we will use AI-generated information in our day-to-day lives. For a software engineering student, the use cases for ChatGPT are manifold: assessment preparation, translation, and creation of specified source code, to name a few. It can even handle more complex aspects of scientific writing, such as summarizing literature and paraphrasing text. This position paper therefore addresses the need to discuss potential approaches for integrating ChatGPT into higher education. We focus on articles that address the effects of ChatGPT on higher education in the areas of software engineering and scientific writing. As ChatGPT was only recently released, no peer-reviewed articles on the subject were available. Thus, we performed a structured grey literature review using Google Scholar to identify preprints of primary studies. In total, five out of 55 preprints were used for our analysis. Furthermore, we held informal discussions and talks with other lecturers and researchers and took the authors' own test results from using ChatGPT into account. We present five challenges and three opportunities for the higher education context that emerge from the release of ChatGPT. The main contribution of this paper is a proposal for how to integrate ChatGPT into higher education in four main areas.
The research project "Herbar Digital" was started in 2007 with the aim of digitizing 3.5 million dried plants on paper sheets belonging to the Botanic Museum Berlin in Germany. Frequently the collector of a plant is unknown, so a procedure had to be developed to determine the writer of the handwriting on the sheet. In the present work the static character was transformed into a dynamic form. This was done with the model of an inert ball rolled along the written character. For this off-line writer recognition, different mathematical procedures were used, such as the reproduction of the written line of individual characters by Legendre polynomials. When only one character was used, a recognition rate of about 40% was obtained. By combining multiple characters, the recognition rate rose considerably and reached 98.7% with 13 characters and 93 writers (chosen randomly from the international IAM database [3]). A global statistical approach using the whole handwritten text resulted in a similar recognition rate. By combining local and global methods, a recognition rate of 99.5% was achieved.
The methods developed in the research project "Herbar Digital" are intended to help plant taxonomists cope with the large amount of material of about 3.5 million dried plants on paper sheets belonging to the Botanic Museum Berlin in Germany. Frequently the collector of a plant is unknown, so a procedure had to be developed to determine the writer of the handwriting on the sheet. In the present work the static character is transformed into a dynamic form. This is done with the model of an inert ball rolled through the written character. For this off-line writer recognition, different mathematical procedures are used, such as the reproduction of the written line of individual characters by Legendre polynomials. When only one character is used, a recognition rate of about 40% is obtained. By combining multiple characters, the recognition rate rises considerably and reaches 98.7% with 13 characters and 93 writers (chosen randomly from the international IAM database [3]). Another approach identifies the writer from handwritten words: the word is cut out, transformed into a 6-dimensional time series, and compared, e.g., by means of DTW methods. A global statistical approach using whole handwritten sentences results in a similar recognition rate of more than 98%. By combining the methods, a recognition rate of 99.5% is achieved.
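To make the word-comparison step concrete, the following is a minimal sketch of a dynamic time warping (DTW) distance between two multivariate time series, such as the 6-dimensional series extracted from handwritten words; the function name and the toy data are illustrative and not taken from the project.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic DTW between two multivariate time series.

    a, b: arrays of shape (length, dims), e.g. the 6-dimensional
    series extracted from a handwritten word.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Toy usage: compare two short synthetic 6-dimensional sequences.
rng = np.random.default_rng(0)
word_a = rng.random((40, 6))
word_b = rng.random((35, 6))
print(dtw_distance(word_a, word_b))
```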
Wikidata and Wikibase as complementary research data management services for cultural heritage data
(2022)
The NFDI (German National Research Data Infrastructure) consortia are associations of various institutions within a specific research field, which work together to develop common data infrastructures, guidelines, best practices, and tools that conform to the principles of FAIR data. Within the NFDI, a common question is: what is the potential of Wikidata to be used as an application for science and research? In this paper, we address this question by tracing current research use cases and applications for Wikidata, its relation to standalone Wikibase instances, and how the two can function as complementary services to meet a range of research needs. This paper builds on lessons learned through the development of open data projects and software services within the Open Science Lab at TIB, Hannover, in the context of NFDI4Culture, the consortium whose participants span the broad spectrum of the digital libraries, archives, and museums field and the digital humanities.
The NOA project collects and stores images from open access publications and makes them findable and reusable. During the project, a focus group workshop was held to determine whether the development was addressing researchers' needs. It took place before the second half of the project so that the results could inform further development, since addressing users' needs is a central part of the project. The focus was to find out what content and functionality researchers expect from image repositories.
In a first step, participants were asked to fill out a survey about their use of images. Secondly, they tested different use cases on the live system. The first finding is that users have a need to find scholarly images, but it is not a routine task for them and they often do not know any image repositories. This is another reason for repositories to become more open and to reach users by integrating with other content providers. The second finding is that users paid attention to image licenses but struggled to find and interpret them, and were also unsure how to cite images. In general, there is a high demand for reusing scholarly images, but the existing infrastructure has room for improvement.
For anomaly-based intrusion detection in computer networks, data cubes can be used to build a model of the normal behavior of each cell. During inference, an anomaly score is calculated from the deviation of the cell metrics from the corresponding normality model. A visualization approach is presented that combines different types of diagrams and charts with linked user interaction for filtering the data.
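As an illustration of the scoring step, here is a minimal sketch of a per-cell normality model and the resulting anomaly score; the class name, the choice of mean and standard deviation as the model, and the mean absolute z-score as the aggregate are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

# Hypothetical normality model for one data-cube cell: mean and standard
# deviation of each metric, estimated from historical observations.
class CellModel:
    def __init__(self, history: np.ndarray):
        # history: shape (observations, metrics)
        self.mean = history.mean(axis=0)
        self.std = history.std(axis=0) + 1e-9  # avoid division by zero

    def anomaly_score(self, metrics: np.ndarray) -> float:
        # Deviation of the current cell metrics from the normality model,
        # aggregated over all metrics as the mean absolute z-score.
        z = np.abs((metrics - self.mean) / self.std)
        return float(z.mean())

# Usage for one cell of the cube, e.g. (protocol=TCP, port=443, subnet=A).
history = np.random.default_rng(1).normal(100, 10, size=(500, 3))
model = CellModel(history)
print(model.anomaly_score(np.array([103.0, 98.0, 180.0])))  # third metric is elevated
```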
Visual effects and elements in video games and interactive virtual environments can be applied to transfer (or delegate) non-visual perceptions (e.g. proprioception, presence, pain) to players and users, thus increasing perceptual diversity via the visual modality. Such elements or effects are referred to as visual delegates (VDs). Current findings on the experiences that VDs can elicit relate to specific VDs, not to VDs in general. Deductive and comprehensive VD evaluation frameworks are lacking. We analyzed VDs in video games to generalize VDs in terms of their visual properties. We conducted a systematic paper analysis to explore player and user experiences observed in association with specific VDs in user studies. We conducted semi-structured interviews with expert players to determine their preferences and the impact of VD properties. The resulting VD framework (VD-frame) contributes to a more strategic approach to identifying the impact of VDs on player and user experiences.
Scientific papers from all disciplines contain many abbreviations and acronyms. In many cases these acronyms are ambiguous. We present a method to choose the contextually correct definition of an acronym that does not require training for each acronym and can thus be applied to a large number of different acronyms with only a few instances each. We constructed a set of 19,954 examples of 4,365 ambiguous acronyms from image captions in scientific papers along with their contextually correct definitions from different domains. We learn word embeddings for all words in the corpus and compare the averaged vector of the words in an acronym's expansion with the weighted average vector of the words in the acronym's context. We show that this method clearly outperforms (classical) cosine similarity. Furthermore, we show that word embeddings learned from a 1-billion-word corpus of scientific texts outperform word embeddings learned from much larger general corpora.
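The comparison the abstract describes can be sketched as follows; the toy embeddings and the plain (unweighted) context average are simplifications, since the paper weights the context words, and all names are illustrative.

```python
import numpy as np

def avg_vector(words, emb, dim=50):
    """Average embedding of the given words (unknown words are skipped)."""
    vecs = [emb[w] for w in words if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def disambiguate(context_words, expansions, emb):
    """Pick the expansion whose averaged word vector is closest
    (cosine similarity) to the averaged context vector."""
    ctx = avg_vector(context_words, emb)
    best, best_sim = None, -np.inf
    for expansion in expansions:
        exp_vec = avg_vector(expansion.split(), emb)
        denom = np.linalg.norm(ctx) * np.linalg.norm(exp_vec)
        sim = ctx @ exp_vec / denom if denom else -np.inf
        if sim > best_sim:
            best, best_sim = expansion, sim
    return best

# Toy embeddings; in the paper's setting these would be learned
# from a large corpus of scientific text.
rng = np.random.default_rng(2)
emb = {w: rng.normal(size=50) for w in
       "magnetic resonance imaging markov random field image brain".split()}
print(disambiguate("image of the brain".split(),
                   ["magnetic resonance imaging", "markov random field"], emb))
```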
In huge warehouses or stockrooms, it is often very difficult to find a certain item because it has been misplaced and is therefore not at its assumed position. This position paper presents an approach for coordinating mobile RFID agents using a blackboard architecture based on Complex Event Processing.
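A minimal sketch of the idea, assuming a shared blackboard to which agents post tag sightings and a single misplacement rule standing in for full Complex Event Processing; all names are illustrative.

```python
from collections import defaultdict

# Illustrative blackboard: mobile RFID agents post tag sightings to a
# shared data structure; a simple event-processing rule flags items
# that are observed outside their assumed position.
class Blackboard:
    def __init__(self, assumed_positions):
        self.assumed = assumed_positions    # tag_id -> expected zone
        self.sightings = defaultdict(list)  # tag_id -> zones seen so far

    def post_sighting(self, agent_id, tag_id, zone):
        self.sightings[tag_id].append(zone)
        if zone != self.assumed.get(tag_id):
            print(f"agent {agent_id}: tag {tag_id} misplaced in {zone}, "
                  f"expected {self.assumed.get(tag_id)}")

bb = Blackboard({"item-42": "zone-A"})
bb.post_sighting("agent-1", "item-42", "zone-C")  # triggers the misplacement rule
```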
In the area of manufacturing and process automation in industrial applications, technical energy management systems are mainly used to measure, collect, store, analyze and display energy data. In addition, PLC programs on the control level are required to obtain the energy data from the field level. If the measured data is available in a PLC as a raw value, it still has to be processed by the PLC, so that it can be passed on to the higher layers in a suitable format, e.g. via OPC UA. In plants with heterogeneous field device installations, a high engineering effort is required for the creation of corresponding PLC programs. This paper describes a concept for a code generator that can be used to reduce this engineering effort.
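The following sketch illustrates the kind of generator the paper envisions: from a simple field-device description (raw range, engineering range, unit) it emits an IEC 61131-3 Structured Text scaling snippet. The input format and the template are assumptions, not the paper's actual generator.

```python
# Illustrative code generator: given a field-device signal description,
# emit a Structured Text assignment that scales the raw value to
# engineering units, ready to be exposed to higher layers (e.g. OPC UA).
def generate_scaling_st(var, raw_min, raw_max, eng_min, eng_max, unit):
    return (
        f"(* scale {var} to {unit} *)\n"
        f"{var}_eng := {eng_min} + "
        f"(INT_TO_REAL({var}_raw) - {raw_min}) * "
        f"({eng_max} - {eng_min}) / ({raw_max} - {raw_min});\n"
    )

print(generate_scaling_st("Power", 0, 27648, 0.0, 400.0, "kW"))
```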
By using an energy management system according to ISO 50001, an industrial company can achieve a step-by-step increase in energy efficiency. The realization of energy monitoring and load management functions requires programs on edge devices or PLCs to acquire the data, adapt the data types, or scale the values of the energy information. In addition, the energy information must be mapped to communication interfaces (e.g. based on OPC UA) in order to convey it to the energy management application. The development of these energy management programs involves a high engineering effort because the field devices of the heterogeneous field level do not provide the energy information with standardized semantics. To mitigate this engineering effort, a universal energy data information model (UEIM) is developed and presented in this paper.
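As an illustration of what such a model entry might look like, the following sketch maps a raw field-level signal to standardized semantics, units, and scaling; the field names and the scaling rule are assumptions for illustration, not the UEIM presented in the paper.

```python
from dataclasses import dataclass

# Hypothetical sketch of a universal energy data information model entry:
# every field-device signal is mapped to a standardized quantity, unit,
# and scaling, so that the energy management application sees uniform
# semantics regardless of the heterogeneous field level underneath.
@dataclass
class EnergyDatapoint:
    device: str        # field device identifier
    quantity: str      # standardized semantics, e.g. "ActivePower"
    unit: str          # standardized unit, e.g. "kW"
    raw_address: str   # where the raw value lives on the field level
    scale: float       # raw-to-engineering scaling factor
    offset: float = 0.0

    def to_engineering(self, raw: int) -> float:
        return raw * self.scale + self.offset

dp = EnergyDatapoint("drive-7", "ActivePower", "kW", "%IW100", 0.01)
print(dp.to_engineering(12345))  # 123.45 kW, ready to expose e.g. via OPC UA
```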
To avoid the shortcomings of traditional monolithic applications, the Microservices Architecture (MSA) style plays an increasingly important role in providing business services. This is true even for the rather conventional insurance industry with its highly heterogeneous application landscape and sophisticated cross-domain business processes. The question therefore arises of how workflows can be implemented so as to provide the required flexibility and agility on the one hand and to exploit the potential of the MSA style on the other. In this article, we present two different approaches: orchestration and choreography. Using an application scenario from the insurance domain, both concepts are discussed. We introduce a pattern that outlines the mapping of a workflow to a choreography.
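The contrast between the two approaches can be sketched as follows, using an illustrative claims-handling scenario that is not the article's actual application scenario: an orchestrator calls each service in turn, while in a choreography the services react to events without a central coordinator.

```python
# Orchestration: one central workflow knows and calls each service in turn.
def orchestrate(claim, policy_svc, fraud_svc, payout_svc):
    policy_svc.check(claim)
    fraud_svc.assess(claim)
    payout_svc.pay(claim)

# Choreography: services react to events on a shared bus; no single
# component knows the whole workflow.
class Bus:
    def __init__(self):
        self.handlers = {}
    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)
    def publish(self, event, payload):
        for handler in self.handlers.get(event, []):
            handler(payload)

bus = Bus()
bus.subscribe("ClaimFiled", lambda c: bus.publish("PolicyChecked", c))
bus.subscribe("PolicyChecked", lambda c: bus.publish("FraudAssessed", c))
bus.subscribe("FraudAssessed", lambda c: print("payout for", c))
bus.publish("ClaimFiled", "claim-123")
```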
In this paper the workflow of the project 'Untersuchungs-, Simulations- und Evaluationstool für Urbane Logistik' (USEfUL) is presented. Aiming to create a web-based decision support tool for urban logistics, the project needed to integrate multiple steps into a single workflow, which in turn needed to be executed multiple times. While a fully service-oriented system could not be created, the principles of service orientation were applied to increase workflow efficiency and flexibility, allowing the workflow to be easily adapted to new concepts or research areas.
Agile methods require constant optimization of one's approach, which leads to the adaptation of agile practices. These practices are also adapted when they are introduced to companies and their software development teams, due to organizational constraints. As a consequence of the widespread use of agile methods, we notice a high variety of their elements: practices, roles, and artifacts. This multitude of agile practices, artifacts, and roles results in an unsystematic mixture and raises several questions: When is a practice a practice, and when is it a method or a technique? This paper presents the tree of agile elements, a taxonomy of agile methods based on the literature and the guidelines of widely used agile methods. We describe this taxonomy using terms and concepts of software engineering, in particular software process models. We aim to make agile elements delimitable, which should help companies, agile teams, and the research community gain a basic understanding of the interrelationships and dependencies of the individual components of agile methods.
Microservices form a deeply distributed system. Although this offers significant flexibility for development teams and helps to answer scalability and security questions, it also intensifies the drawbacks of a distributed system. This article offers a decision framework that helps to increase the resiliency of microservices. A metamodel is used to represent services, resiliency patterns, and quality attributes. Furthermore, the general idea of a suggestion procedure is outlined.
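A minimal sketch of how such a metamodel and suggestion procedure might look, with illustrative classes and patterns; the matching rule is an assumption, not the article's procedure.

```python
from dataclasses import dataclass

# Sketch of the metamodel the article outlines: services, resiliency
# patterns, and the quality attributes each pattern improves.
@dataclass
class Pattern:
    name: str
    improves: set  # quality attributes addressed, e.g. {"availability"}

@dataclass
class Service:
    name: str
    required: set  # quality attributes the service needs

PATTERNS = [Pattern("CircuitBreaker", {"availability", "fault tolerance"}),
            Pattern("Bulkhead", {"fault isolation"}),
            Pattern("Retry", {"availability"})]

def suggest(service):
    """Naive suggestion procedure: propose every pattern that covers
    at least one quality attribute the service requires."""
    return [p.name for p in PATTERNS if p.improves & service.required]

print(suggest(Service("payment", {"availability"})))  # ['CircuitBreaker', 'Retry']
```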
The negative effects of traffic, such as air quality problems and road congestion, put a strain on the infrastructure of cities and highly populated areas. A potential measure to reduce these negative effects is grocery home delivery (e-grocery), which can bundle driving activities and hence decrease traffic and the related emission outputs. Several studies have investigated the potential impact of e-grocery on traffic in various last-mile contexts. However, no holistic view of the sustainability of e-grocery across the entire supply chain has yet been proposed. Therefore, this paper presents an agent-based simulation to assess the impact of the e-grocery supply chain compared to the stationary one in terms of mileage and different emission outputs. The simulation shows that a high e-grocery utilization rate can help decrease total driving distances by up to 255 % relative to the optimal value as well as CO2 emissions by up to 50 %.
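As a back-of-the-envelope illustration of the bundling effect (not the paper's agent-based simulation), the following sketch compares individual round trips to the store with a single greedy delivery tour; all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
store = np.zeros(2)
homes = rng.uniform(-5, 5, size=(30, 2))  # household coordinates in km

# Stationary shopping: every household drives a round trip to the store.
individual = 2 * np.linalg.norm(homes - store, axis=1).sum()

# E-grocery: one vehicle serves all households on a greedy
# nearest-neighbour tour starting and ending at the store.
remaining, pos, tour = list(range(len(homes))), store, 0.0
while remaining:
    nxt = min(remaining, key=lambda i: np.linalg.norm(homes[i] - pos))
    tour += np.linalg.norm(homes[nxt] - pos)
    pos = homes[nxt]
    remaining.remove(nxt)
tour += np.linalg.norm(store - pos)  # return to the depot

print(f"individual trips: {individual:.1f} km, bundled tour: {tour:.1f} km")
```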
Microservices have meanwhile become an established software engineering vehicle that more and more companies are examining and adopting for their development work. Naturally, reference architectures based on microservices come to mind as a valuable asset. Initial results for such architectures have been published in generic and in domain-specific form. To the best of our knowledge, however, a domain-specific reference architecture based on microservices that takes the specifics of the insurance domain into account is still missing. Jointly with partners from the German insurance industry, we take initial steps in this article to fill this gap. We thus aim towards a microservices-based reference software architecture for (at least German) insurance companies. As the main result of this article, we provide an initial version of such a reference architecture together with a deeper look into two important parts of it.
In this poster we present the ongoing development of an integrated free and open source toolchain for semantic annotation of digitised cultural heritage. The toolchain development involves the specification of a common data model that aims to increase interoperability across diverse datasets and to enable new collaborative research approaches.
Complexes like iron(II)-triazoles exhibit spin crossover behavior at ambient temperature and are therefore often considered for possible applications. In previous studies, we embedded complexes of this type in polymer nanofibers and realized first polymer-based optical waveguide sensor systems. In our current study, we synthesized complexes of this type, embedded them in polymers, and obtained composites through drop casting and doctor blading. We show that a certain combination of polymer and complex can lead to composites with high potential for optical devices. For this purpose, we used the two different complexes [Fe(atrz)3](2-ns)2 and [Fe(atrz)3]Cl1.5(BF4)0.5 with a different polymer for each composite. We show through transmission measurements and UV/VIS spectroscopy that the optical properties of these composite materials can change reversibly due to the spin crossover effect.