BYOD Bring Your Own Device
(2013)
Using modern devices like smartphones and tablets offers a wide variety of advantages, which has made them very popular as consumer devices in private life. Using them in the workplace is popular as well. However, who wants to carry around and handle two devices: one for personal use and one for work-related tasks? That is why “dual use”, using a single device for both private and business applications, may be an appropriate solution. The result is “Bring Your Own Device”, or BYOD, which describes the circumstance in which users make their own personal devices available for company use. For companies, this brings both opportunities and risks. We describe and discuss organizational issues, technical approaches, and solutions.
Regional Innovation Systems describe the relations between actors, structures and infrastructures in a region in order to stimulate innovation and regional development. For these systems, the collection and organization of information is crucial. In the present paper we investigate the possibilities of extracting information from company websites. First we describe regional innovation systems and the information types that are necessary to create them. Then we discuss the potential of text mining and keyword extraction techniques to extract this information from company websites. Finally, we describe a small-scale experiment in which keywords related to economic sectors and commodities are extracted from the websites of over 200 companies. This experiment shows what the main challenges are for information extraction from websites for regional innovation systems.
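As a rough illustration of the kind of keyword extraction described in this experiment, the following Python sketch counts matches against a predefined list of sector and commodity terms; the keyword list and the sample text are invented placeholders, not data from the paper.

```python
# Minimal sketch of dictionary-based keyword extraction from company website text.
# The keyword list and the page text are placeholders, not data from the paper.
import re
from collections import Counter

SECTOR_KEYWORDS = {"logistics", "dairy", "machinery", "steel", "packaging"}  # hypothetical list

def extract_sector_keywords(page_text: str) -> Counter:
    """Count occurrences of known sector/commodity keywords in a web page's text."""
    tokens = re.findall(r"[a-zäöüß]+", page_text.lower())
    return Counter(t for t in tokens if t in SECTOR_KEYWORDS)

if __name__ == "__main__":
    sample = "We supply packaging machinery and steel components for the dairy industry."
    print(extract_sector_keywords(sample))  # e.g. Counter({'packaging': 1, 'machinery': 1, ...})
```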
The number of papers published each year has been increasing for decades. Libraries need to make these resources accessible and available, with classification being an important part of this process. This paper analyzes prerequisites and possibilities of automatic classification of medical literature. We explain the selection, preprocessing and analysis of data consisting of catalogue datasets from the library of the Hanover Medical School, Lower Saxony, Germany. In the present study, 19,348 documents, represented by notations of library classification systems such as the Dewey Decimal Classification (DDC), were classified into 514 different classes from the National Library of Medicine (NLM) classification system. The algorithm used was k-nearest-neighbours (kNN). A correct classification rate of 55.7% was achieved. To the best of our knowledge, this is not only the first research conducted towards the use of the NLM classification in automatic classification but also the first approach that exclusively considers already assigned notations from other classification systems for this purpose.
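The following sketch illustrates the general approach rather than the original experiment: catalogue records are represented by the notations already assigned to them from other classification systems, encoded as binary features, and a k-nearest-neighbours classifier predicts the NLM class. All notations and labels below are invented toy values.

```python
# Illustrative sketch: kNN classification of catalogue records into NLM classes,
# using already assigned notations (e.g. DDC) as binary features. Toy data only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Each record: space-separated notations already assigned to the document (invented).
records = ["616.12 610.73", "616.12 617.4", "610.73 613"]
nlm_classes = ["WG 200", "WG 200", "WY 100"]  # target NLM notations (invented)

model = make_pipeline(
    CountVectorizer(binary=True, token_pattern=r"\S+"),  # multi-hot encoding of notations
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(records, nlm_classes)
print(model.predict(["616.12"]))  # nearest neighbour by shared notations
```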
Cloud Computing: Serverless
(2021)
A serverless architecture is a new approach to offering services over the Internet. It combines BaaS (Backend-as-a-Service) and FaaS (Function-as-a-Service). With a serverless architecture, no owned or rented infrastructure is needed anymore. In addition, the company no longer has to worry about scaling, as this happens automatically and immediately. Furthermore, maintenance work on the servers is no longer necessary, as it is completely taken over by the provider. Administrators are also no longer needed for the same reason. Finally, many ready-made functions are offered, which can reduce the development effort. As a result, the serverless architecture is very well suited to many application scenarios, and it can save considerable costs (server costs, maintenance costs, personnel costs, electricity costs, etc.). The company only has to subdivide the application’s source code into functions and upload it to the provider’s servers. The rest is done by the provider.
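For readers unfamiliar with FaaS, the following minimal Python handler sketches what such an uploaded function can look like; the AWS-Lambda-style signature and event format are assumptions made for illustration and are not taken from the text.

```python
# Minimal FaaS sketch in the style of an AWS Lambda handler (illustrative only;
# the provider, event shape and function name are assumptions).
import json

def handler(event, context):
    """A single, independently deployable function: the provider handles servers and scaling."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```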
The CogALex-V Shared Task provides two datasets that consist of pairs of words along with a classification of their semantic relation. The dataset for the first task distinguishes only between related and unrelated, while the second dataset distinguishes several types of semantic relations. A number of recent papers propose to construct a feature vector representing a pair of words by applying a simple element-wise operation to the feature vectors of the two words. Subsequently, the pairs can be classified by training any classification algorithm on these vectors. In the present paper we apply this method to the provided datasets. We find that the results are not better than the given simple baseline. We conclude that the results of the investigated method strongly depend on the type of data to which it is applied.
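The pair-representation method examined here can be sketched as follows: the embedding vectors of the two words are combined with simple element-wise operations and a standard classifier is trained on the result. The random embeddings, the choice of difference and product, and the toy labels are illustrative assumptions, not the paper's setup.

```python
# Sketch of the pair-representation idea: combine the embedding vectors of two words
# with element-wise operations and train an off-the-shelf classifier on the result.
# Embeddings are random placeholders; in practice pre-trained vectors would be loaded.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in ["car", "vehicle", "dog", "banana"]}

def pair_features(w1: str, w2: str) -> np.ndarray:
    v1, v2 = embeddings[w1], embeddings[w2]
    return np.concatenate([v1 - v2, v1 * v2])  # element-wise operations on the pair

pairs = [("car", "vehicle"), ("dog", "banana")]
labels = [1, 0]  # 1 = semantically related, 0 = unrelated (toy labels)
X = np.array([pair_features(a, b) for a, b in pairs])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([pair_features("car", "dog")]))
```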
A new FOSS (free and open source software) toolchain and associated workflow is being developed in the context of NFDI4Culture, a German consortium of research and cultural heritage institutions working towards a shared infrastructure for research data that meets the needs of 21st-century data creators, maintainers and end users across the broad spectrum of the digital libraries and archives field, and the digital humanities. This short paper and demo present how the integrated toolchain connects: 1) OpenRefine - for data reconciliation and batch upload; 2) Wikibase - for linked open data (LOD) storage; and 3) Kompakkt - for rendering and annotating 3D models. The presentation is aimed at librarians, digital curators and data managers interested in learning how to manage research datasets containing 3D media, and how to make them available within an open data environment with 3D-rendering and collaborative annotation features.
With regard to climate change, increasing energy efficiency is still a significant issue in industry. In order to acquire energy data at the field level, so-called energy profiles can be used. They are advantageous as they are integrated into existing industrial Ethernet standards (e.g. PROFINET). Commonly used energy profiles such as PROFIenergy and sercos Energy have become established in industrial use. However, as the Industrial Internet of Things (IIoT) continues to develop, the question arises whether the established energy profiles are sufficient to fulfil the requirements of the upcoming IIoT communication technologies. To answer this question, the paper compares and discusses the common energy profiles against the current and future challenges of energy data communication. Furthermore, this analysis examines the need for further research in this field.
In this paper we describe the selection of a modern build automation tool for an industry research partner of ours, namely an insurance company. Build automation has become increasingly important over the years. Today, build automation is one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more build automation products have entered the market and existing tools have changed their functional scope, there is nowadays a large number of tools on the market that differ greatly in functionality. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools, and a summary of our comparison of the three tools from our final comparison round.
With increasing complexity and scale, sufficient evaluation of Information Systems (IS) becomes a challenging and difficult task. Simulation modeling has proven to be a suitable and efficient methodology for evaluating IS and IS artifacts, provided it meets certain quality demands. However, existing research on simulation modeling quality solely focuses on quality in terms of accuracy and credibility, disregarding the role of additional quality aspects. Therefore, this paper proposes two design artifacts in order to ensure a holistic view on simulation quality. First, associated literature is reviewed in order to extract relevant quality factors in the context of simulation modeling, which can be used to evaluate the overall quality of a simulated solution before, during or after a given project. Secondly, the deduced quality factors are integrated into a quality assessment framework to provide structural guidance on the quality assessment procedure for simulation. In line with a Design Science Research (DSR) approach, we demonstrate the suitability of both design artifacts by means of prototyping as well as an example case. Moreover, the assessment framework is evaluated and iteratively adjusted with the help of expert feedback.
In industrial production facilities, technical Energy Management Systems are used to measure, monitor and display energy-consumption-related information. The measurements take place at the field device level of the automation pyramid. The measured values are recorded and processed at the control level. The functionalities to monitor and display energy data are located at the MES level of the automation pyramid. The energy data from all PLCs therefore has to be aggregated, structured and provided to higher-level systems. This contribution introduces a concept for an Energy Data Aggregation Layer, which provides the functionality described above. For the implementation of this Energy Data Aggregation Layer, a combination of AutomationML and OPC UA is used.
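As a rough sketch of how aggregated energy values could be exposed to higher-level systems via OPC UA, the following example uses the community python-opcua package; the node names, namespace URI and values are invented, and the AutomationML-based structuring from the paper is not shown.

```python
# Rough sketch: expose an aggregated energy value via an OPC UA server
# (python-opcua). Node names, namespace and values are invented placeholders.
import time
from opcua import Server

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/energy-aggregation/")
idx = server.register_namespace("http://example.org/energy")  # hypothetical namespace

line = server.get_objects_node().add_object(idx, "ProductionLine1")
active_power = line.add_variable(idx, "AggregatedActivePower_kW", 0.0)
active_power.set_writable()

server.start()
try:
    while True:
        # In a real system this value would be aggregated from the PLCs at the control level.
        active_power.set_value(42.0)
        time.sleep(5)
finally:
    server.stop()
```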
In microservice architectures, data is often held redundantly to create an overall resilient system. Although the synchronization of this data poses a significant challenge, not much research has been done on this topic yet. This paper shows four general approaches for ensuring consistency among services and demonstrates how to identify the best solution for a given architecture. For this, a microservice architecture that implements the functionality of a mainframe-based legacy system from the insurance industry serves as an example.
Since textual user-generated content from social media platforms contains valuable information for decision support and especially corporate credit risk analysis, automated approaches for text classification such as the application of sentiment dictionaries and machine learning algorithms have received great attention in recent research on user-generated content. While machine learning algorithms require individual training data sets for varying sources, sentiment dictionaries can be applied to texts immediately, whereby domain-specific dictionaries attain better results than domain-independent word lists. We evaluate by means of a literature review how sentiment dictionaries can be constructed for specific domains and languages. Then, we construct nine versions of German sentiment dictionaries relying on a process model which we developed based on the literature review. We apply the dictionaries to a manually classified German-language data set from Twitter in which hints of the financial (in)stability of companies have been verified. Based on their classification accuracy, we rank the dictionaries and verify their ranking by utilizing McNemar’s test for significance. Our results indicate that the significantly best dictionary is based on the German language dictionary SentiWortschatz and an extension approach using the lexical-semantic database GermaNet. It achieves a classification accuracy of 59.19% in the underlying three-case scenario, in which the tweets are labelled as negative, neutral or positive. A random classification would attain an accuracy of 33.3% in the same scenario; hence, automated coding by use of the sentiment dictionaries can lead to a reduction of manual effort. Our process model can be adopted by other researchers when constructing sentiment dictionaries for various domains and languages. Furthermore, our established dictionaries can be used by practitioners, especially in the domain of corporate credit risk analysis, for automated text classification, which to date has largely been conducted manually.
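The dictionary-based coding described above can be sketched as a simple word-counting classifier; the word lists below are invented stand-ins for a German sentiment dictionary such as SentiWortschatz, and the scoring rule is illustrative only.

```python
# Toy illustration of dictionary-based three-class sentiment coding of tweets.
# Word lists and threshold logic are invented placeholders, not the paper's dictionaries.
POSITIVE = {"stabil", "gewinn", "wachstum"}
NEGATIVE = {"insolvenz", "verlust", "krise"}

def classify(text: str) -> str:
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("Gerüchte über Insolvenz und Verlust beim Zulieferer"))  # -> negative
```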
For the analysis of contract texts, validated model texts, such as model clauses, can be used to identify the contract clauses used. This paper investigates how the similarity between titles of model clauses and headings extracted from contracts can be computed, and which similarity measure is most suitable for this. For the calculation of the similarities between title pairs we tested various variants of string similarity and token-based similarity. We also compare two additional semantic similarity measures based on word embeddings, using both pre-trained embeddings and word embeddings trained on contract texts. The identification of the model clause title can be used as a starting point for mapping clauses found in contracts to verified clauses.
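Two of the similarity families mentioned above can be sketched as follows: a character-based string similarity and a token-based Jaccard similarity. The clause titles are invented examples, and these are not necessarily the exact measures evaluated in the paper.

```python
# Sketch of a string similarity (character-based ratio) and a token-based Jaccard
# similarity between a model clause title and a contract heading. Examples are invented.
from difflib import SequenceMatcher

def string_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def jaccard_similarity(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

model_title = "Limitation of Liability"
contract_heading = "Liability and its Limitation"
print(string_similarity(model_title, contract_heading))
print(jaccard_similarity(model_title, contract_heading))
```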
Building a well-founded understanding of the concepts, tasks and limitations of IT in all areas of society is an essential prerequisite for future developments in business and research. This applies in particular to the healthcare sector and medical research, which are affected by the noticeable advances in digitization. In the transfer project “Zukunftslabor Gesundheit” (ZLG), a teaching framework was developed to support the creation of online continuing-education courses in order to teach heterogeneous groups of learners independently of location and prior knowledge. The study at hand describes the development and components of the framework.
The presentation first clarifies what is meant by “Didaktik der Pflege” (didactics of nursing) and to what extent general didactic objectives are relevant to nursing education. It then discusses important didactic criteria for the selection of nursing education content and, at the same time, formulates questions for a future didactics of nursing.
In the conception and development of the BID degree programmes, the derivation and development of realistic planning data was, alongside considerations of content and study organization, one of the main tasks of the BID pilot project and an essential prerequisite for its successful implementation in practice. This contribution focuses primarily on these planning results and their implementation.
Digital marketplaces can lower the costs of a trading transaction, the so-called transaction costs. Through further technical progress and intelligent trading bots, the use of the market mechanism is becoming ever cheaper. This article gives an overview of the development of digital marketplaces in the agricultural and food industry so far and of a possible future. Transaction costs will presumably continue to fall, so that further efficiency gains through the increased use of markets will become possible.
Discovery and efficient reuse of technology pictures using Wikimedia infrastructures. A proposal
(2016)
Multimedia objects, especially images and figures, are essential for the visualization and interpretation of research findings. The distribution and reuse of these scientific objects is significantly improved under open access conditions, for instance in Wikipedia articles, in research literature, as well as in education and knowledge dissemination, where licensing of images often represents a serious barrier.
Whereas scientific publications are retrievable through library portals or other online search services thanks to standardized indexing, there is as yet no targeted retrieval of, and access to, the accompanying images and figures. Consequently, there is a great demand to develop standardized indexing methods for these multimedia open access objects in order to improve the accessibility of this material.
With our proposal, we hope to serve a broad audience that looks up a scientific or technical term in a web search portal first. Until now, this audience has had little chance of finding an openly accessible and reusable image closely matching their search term on the first try, which is frustrating even if such an image is in fact included in some open access article.