Refine
Year of publication
Document Type
- Conference Proceeding (162)
Has Fulltext
- yes (162)
Is part of the Bibliography
- no (162)
Keywords
- Digitalisierung (9)
- Energiemanagement (8)
- Mikroservice (8)
- Angewandte Botanik (7)
- Gepresste Pflanzen (7)
- Herbar Digital (7)
- Herbarium (7)
- Serviceorientierte Architektur (7)
- Virtualisierung (7)
- Agile Softwareentwicklung (6)
Flatness-based feedforward control is an approach for combining fast motion with low oscillations in nonlinear or flexible drive systems. Its desired trajectories must be continuously differentiable up to the degree of the system order. Designing such trajectories that also reach the dynamic system limits poses a challenge. Common solutions, like Gevrey functions, usually require lengthy offline calculations. To achieve a quicker and simpler solution suited to industrial use, this paper presents a new online trajectory generation scheme. The algorithm utilizes higher-order s-curve trajectories created by a cyclic filtering process using moving-average filters. An experimental validation proves the capability as well as the industrial applicability of the presented approach for flexible structures like stacker cranes.
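As an illustration of the filter-based generation idea, the following toy sketch (an assumption-laden simplification, not the paper's exact algorithm) cascades two moving-average filters over a rectangular velocity pulse; each stage raises the order of continuity of the resulting s-curve by one while preserving the travelled distance.

```python
import numpy as np

def moving_average(signal, window):
    """Causal FIR moving-average filter with a rectangular window."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel)   # 'full' mode lengthens the profile

def s_curve_profile(distance, v_max, dt, windows):
    """Higher-order s-curve velocity profile from a rectangular pulse."""
    n = int(round(distance / v_max / dt))
    v = np.full(n, v_max)                # raw rectangular velocity pulse
    for w in windows:                    # each filter stage raises the
        v = moving_average(v, w)         # order of continuity by one
    return v

dt = 1e-3
v = s_curve_profile(distance=1.0, v_max=0.5, dt=dt, windows=(200, 100))
# filtering preserves the integral of the profile, i.e. the travelled distance
```

In a real drive application the window lengths would be derived from the axis's velocity, acceleration and jerk limits; the values here are arbitrary.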
Big data platforms are becoming increasingly popular for analyzing large volumes of data on demand. The five most common big data processing frameworks include Apache Hadoop, Apache Storm, Apache Samza, Apache Spark, and Apache Flink. While all five platforms support the processing of large data volumes, these frameworks differ in their fields of application and their underlying architecture. A number of studies have already compared these big data frameworks by evaluating them against a specific performance indicator. However, the IT security of these frameworks has not been considered. This paper first defines general requirements as well as IT security requirements for the data platforms. The data platform concepts are then analyzed and compared with respect to the established requirements.
Industrial Control Systems (ICS) are exposed to an ever-evolving variety of threats. Moreover, threats are increasing in number and becoming more complex. This requires a holistic and up-to-date security concept for the ICS as a whole. Usually, security concepts are applied and updated based on regularly performed ICS security assessments. Such ICS security assessments require high effort and extensive knowledge about ICS and their security. This is often a problem for small and medium-sized enterprises (SME), which do not have sufficient, or sufficiently skilled, human resources. This paper first defines requirements for the knowledge needed to perform an ICS security assessment and for the life cycle of this knowledge. Afterwards, the ICS security knowledge and its life cycle are developed and discussed, taking the requirements and related work into account.
This paper presents a cascaded methodology for enhancing the path accuracy of industrial robots by using advanced control schemes. It includes kinematic calibration as well as dynamic modeling and identification, followed by a centralized model-based compensation of the robot dynamics. The implemented feed-forward torque control shows the expected improvements in control accuracy. However, external measurements reveal the influence of joint elasticities as systematic path errors. To further increase the accuracy, an iterative learning controller (ILC) based on external camera measurements is designed. Its implementation yields significant improvements in path accuracy. By means of a kind of automated "Teach-In", an overall effective concept for the automated calibration and optimization of the accuracy of industrial robots in highly dynamic path applications is realized.
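The trial-to-trial learning step can be sketched as a generic gradient-type ILC on a toy first-order plant. The plant, learning gain and trial count below are illustrative assumptions, not the robot model or the camera-based update used in the paper.

```python
import numpy as np

# Toy first-order plant standing in for one drive axis (values assumed):
# y[t+1] = a*y[t] + b*u[t]
a, b, N = 0.9, 0.1, 50

def run_trial(u):
    """Simulate one repetition of the motion task."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y[1:]

# Lifted description y = P @ u of the same plant, used for the update
P = np.array([[b * a ** (i - j) if i >= j else 0.0 for j in range(N)]
              for i in range(N)])

ref = np.sin(np.linspace(0.0, np.pi, N))   # desired path (stand-in)
u = np.zeros(N)
for k in range(100):                       # trial-to-trial learning
    e = ref - run_trial(u)                 # path error measured after trial k
    u = u + P.T @ e                        # gradient update u_{k+1} = u_k + P^T e_k
```

Each pass corresponds to one execution of the path; the externally measured error (here by simulation, in the paper by a camera) is fed back into the feedforward signal of the next pass, so the residual path error shrinks from trial to trial.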
The research project "Herbar Digital" was started in 2007 with the aim of digitizing 3.5 million dried plants on paper sheets belonging to the Botanic Museum Berlin in Germany. Frequently the collector of a plant is unknown, so a procedure had to be developed to determine the writer of the handwriting on the sheet. In the present work, the static characters were transformed into a dynamic form. This was done with the model of an inert ball rolled along the written character. For this off-line writer recognition, different mathematical procedures were used, such as the reproduction of the writing line of individual characters by Legendre polynomials. When only one character was used, a recognition rate of about 40% was obtained. By combining multiple characters, the recognition rate rose considerably and reached 98.7% with 13 characters and 93 writers (chosen randomly from the international IAM database [3]). A global statistical approach using the whole handwritten text resulted in a similar recognition rate. By combining local and global methods, a recognition rate of 99.5% was achieved.
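To illustrate the Legendre-polynomial reproduction of a writing line, the sketch below fits a low-order Legendre series to a synthetic pen trajectory using NumPy; the trajectory shape, the expansion order and the feature construction are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from numpy.polynomial import legendre

# Synthetic stand-in for the dynamic writing line of one character,
# parameterized over [-1, 1] (the natural domain of Legendre polynomials)
s = np.linspace(-1.0, 1.0, 200)
x = 0.3 * s + 0.1 * np.sin(4.0 * s)   # pen x-coordinate (assumed shape)
y = 0.5 * np.cos(3.0 * s)             # pen y-coordinate (assumed shape)

deg = 8                               # expansion order (assumption)
cx = legendre.legfit(s, x, deg)       # Legendre coefficients of x(s)
cy = legendre.legfit(s, y, deg)       # Legendre coefficients of y(s)
features = np.concatenate([cx, cy])   # compact writer-specific descriptor

# the low-order series already reproduces the writing line closely
err = np.max(np.abs(legendre.legval(s, cx) - x))
```

A classifier would then compare such coefficient vectors across characters and writers; combining the vectors of several characters is what raises the recognition rate in the study.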
Our research project, "Rationalizing the virtualization of botanical document material and their usage by process optimization and automation (Herbar-Digital)", started on July 1, 2007 and will last until 2012. Its long-term aim is the digitization of the more than 3.5 million specimens in the Berlin Herbarium. The University of Applied Sciences and Arts in Hannover collaborates with the department of Biodiversity Informatics at the BGBM (Botanic Garden and Botanical Museum Berlin-Dahlem), headed by Walter Berendsohn. The part of Herbar-Digital presented here deals with the analysis of the generated high-resolution images (10,400 lines × 7,500 pixels).
The research project "Herbar Digital" was started in 2007 with the aim of digitizing the holdings of more than 3.5 million dried plants and plant parts on paper sheets (herbarium specimens) of the Botanic Museum Berlin. The author's task is the analysis of the high-resolution images of 10,400 lines by 7,500 columns. The herbarium sheets can also contain various objects, such as envelopes with additional plant parts, printed or handwritten labels, color charts, scale bars, stamps, barcodes, colored "type" labels, and handwritten annotations directly on the sheet. The written annotations, especially the handwritten ones, are of particular interest. Commercial OCR software often fails to localize writing in complex environments such as those frequently found on herbarium sheets, where writing is placed between leaves, roots, and other objects. In the following, a method is presented that makes it possible to find passages of writing in the image automatically.
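One common way to realize such automatic localization of writing is connected-component analysis on a binarized image, keeping only components whose size is plausible for characters. The sketch below uses synthetic data and assumed size thresholds to illustrate the idea with SciPy; it is not the method's actual parameterization.

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a binarized herbarium sheet: small, regular blobs
# imitate characters, one large irregular blob imitates a plant part.
img = np.zeros((120, 120), dtype=bool)
for cx in (20, 30, 40, 50):                 # a "line of text": small blobs
    img[100:108, cx:cx + 6] = True
img[10:70, 10:70] = True                    # large "plant" region

labels, n = ndimage.label(img)              # connected components
sizes = ndimage.sum(img, labels, range(1, n + 1))

# Keep only components in a plausible character-size range (thresholds assumed)
text_like = [i + 1 for i, s in enumerate(sizes) if 20 <= s <= 200]
```

On real sheets further cues (alignment into text lines, stroke width, aspect ratio) would be needed to separate writing from small plant fragments.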
Autonomous and integrated passenger and freight transport (APFIT) is a promising approach to tackle both traffic- and last-mile-related issues such as environmental emissions, social and spatial conflicts, or operational inefficiencies. By conducting an agent-based simulation, we shed light on this widely unexplored research topic and provide first indications regarding influential target figures of such a system in the rural area of Sarstedt, Germany. Our results show that larger fleets entail inefficiencies due to suboptimal utilization of monetary and material resources and increase traffic volume, while higher numbers of unused vehicles may exacerbate spatial conflicts. Nevertheless, to fit the given demand within our study area, a comparatively large fleet of about 25 vehicles is necessary to provide reliable service, assuming maximum passenger waiting times of six minutes, at the expense of higher standby times, rebalancing effort, and higher costs for vehicle acquisition and maintenance.
The technical, environmental and economic potential of hemp fines as a natural filler in bioplastics to produce biocomposites is the subject of this study, giving a holistic overview. Hemp fines are an agricultural by-product of hemp fibre and shive production. Shives and fibres are used, for example, in the paper, animal bedding or composite sectors. After processing, about 15 to 20 wt.-% of each kilogram of hemp straw ends up as hemp fines. In 2010, about 11,439 metric tons of hemp fines were produced in Europe. Hemp fines are an inhomogeneous material that includes hemp dust, shives and fibres. For these examinations, the hemp fines are sieved in a further step with a tumbler sieving machine to obtain more clearly specified fractions. The untreated hemp fines (ex works) as well as the sieved fractions are compounded with a polylactide polymer (PLA) using a co-rotating twin-screw extruder to produce biocomposites with different hemp fine contents. Using an injection moulding machine, standard test bars are produced to conduct several material tests. The Young's modulus is increased and the impact strength reduced by the hemp fines. At contents above 15 wt.-%, hemp fines also improve the environmental (global warming potential) and economic performance in comparison to pure PLA.
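The stiffening effect of a filler can be illustrated with the classical rule of mixtures. The sketch below is a back-of-the-envelope estimate with assumed material values and volume fractions; it is not data or a model from the study.

```python
# Voigt (parallel) rule-of-mixtures upper bound for composite stiffness.
# All material values are assumptions for illustration only.
E_pla = 3.5      # GPa, neat PLA matrix (assumed)
E_filler = 10.0  # GPa, effective hemp-fine stiffness (assumed)

def modulus_upper_bound(vol_fraction):
    """Upper-bound estimate: E_c = v_f * E_f + (1 - v_f) * E_m."""
    return vol_fraction * E_filler + (1 - vol_fraction) * E_pla

for vf in (0.0, 0.15, 0.30):
    print(f"filler fraction {vf:.2f} -> {modulus_upper_bound(vf):.2f} GPa")
```

The monotone increase with filler fraction mirrors the trend reported for the Young's modulus; real measured values depend on adhesion, particle shape and porosity, which this bound ignores.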
Complications may occur after a liver transplantation, so proper monitoring and care in the post-operative phase play a very important role. Monitoring and care for patients from abroad are sometimes difficult for a variety of reasons, e.g., different care facilities. The objective of our research for this paper is to design, implement and evaluate a home monitoring and decision support infrastructure for international children who underwent a liver transplant operation. A point-of-care device and the PedsQL questionnaire were used in the patients' home environment for measuring blood parameters and assessing quality of life. Using a tablet PC and specially developed software, the measured results could be transmitted to the health care providers via the internet. So far, the developed infrastructure has been evaluated with four international patients/families, transferring 38 blood test records. The evaluation showed that the home monitoring and decision support infrastructure is technically feasible, can give timely alarms in case of abnormal situations, and may increase parents' feeling of safety for their children.
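The decision-support step can be sketched as a simple range check on the transmitted blood values. Parameter names and reference ranges below are illustrative assumptions, not clinical guidance or the system's actual rules.

```python
# Minimal sketch of the alarm step: flag out-of-range blood values.
# Reference ranges are placeholders, not medical advice.
REFERENCE_RANGES = {
    "ALT": (0, 50),          # U/L (assumed)
    "bilirubin": (0.1, 1.2), # mg/dL (assumed)
}

def check_record(record):
    """Return the names of parameters whose values fall outside their range."""
    alarms = []
    for name, value in record.items():
        low, high = REFERENCE_RANGES[name]
        if not (low <= value <= high):
            alarms.append(name)
    return alarms

print(check_record({"ALT": 120, "bilirubin": 0.8}))  # -> ['ALT']
```

In the deployed system such alarms would be forwarded to the transplant center rather than shown to the family alone.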
The NOA project collects and stores images from open access publications and makes them findable and reusable. During the project a focus group workshop was held to determine whether the development is addressing researchers’ needs. This took place before the second half of the project so that the results could be considered for further development since addressing users’ needs is a big part of the project. The focus was to find out what content and functionality they expect from image repositories.
In a first step, participants were asked to fill out a survey about their image use. Secondly, they tested different use cases on the live system. The first finding is that users have a need to find scholarly images, but this is not a routine task and they often do not know any image repositories. This is another reason for repositories to become more open and to reach users by integrating with other content providers. The second finding is that users paid attention to image licenses but struggled to find and interpret them, while also being unsure how to cite images. In general, there is a high demand for reusing scholarly images, but the existing infrastructure has room to improve.
During the transition from conventional towards purely electrical, sustainable mobility, transitional technologies play a major part in the task of increasing adoption rates and decreasing range anxiety. Developing new concepts to meet this challenge requires adaptive test benches, which can easily be modified, e.g. when progressing from one stage of development to the next, but which also meet certain sustainability demands themselves.
The system architecture presented in this paper is built around a service-oriented software layer that connects a modular hardware layer for direct access to sensors and actuators to an extensible set of client tools. It provides flexibility, serviceability and ease of use, while maintaining a high level of reusability for its constituent components and offering features to reduce the overall required run time of the test benches. In this way, it can effectively decrease the CO2 emissions of the test bench while increasing its sustainability and efficiency.
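The role of the service-oriented middle layer can be sketched as a registry that decouples client tools from hardware modules: modules expose named services, clients invoke them without knowing which module provides them. Class, service and parameter names below are assumptions for illustration, not the architecture's actual API.

```python
# Minimal sketch of a service-oriented middle layer (names assumed).
class ServiceLayer:
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        """Called by a hardware module to expose a sensor/actuator service."""
        self._services[name] = handler

    def call(self, name, *args):
        """Called by a client tool; dispatches to the providing module."""
        return self._services[name](*args)

bench = ServiceLayer()
bench.register("read_temperature", lambda: 23.5)              # mock sensor
bench.register("set_load", lambda torque: f"load={torque}Nm") # mock actuator
print(bench.call("read_temperature"))  # -> 23.5
```

Swapping a hardware module then only means registering a different handler under the same service name, which is what makes the test bench easy to modify between development stages.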
Cloud computing has become well established in private and public sector projects over the past few years, opening ever new opportunities for research and development, but also for education. One of these opportunities presents itself in the form of dynamically deployable virtual lab environments, granting educational institutions increased flexibility in the allocation of their computing resources. These fully sandboxed labs provide students with their own internal network and full access to all machines within it, granting them the flexibility necessary to gather hands-on experience in building heterogeneous microservice architectures. The eduDScloud provides a private cloud infrastructure to which labs like the microservice lab outlined in this paper can be flexibly deployed at a moment's notice.
The future increase in demand for balancing power from renewable power plants, together with declining EEG (German Renewable Energy Act) tariff structures in the biogas sector, creates the need to develop alternative operating and remuneration models. This paper outlines an economic compensation system for virtual biogas power plant clusters. It describes which costs and revenues are generated in virtual biogas clusters when they are operated in a partially automated manner with a focus on regional grid stability. The economic compensation system is part of the control system for virtual biogas power plant clusters being developed in the research project VKV Netz (http://vkvnetz.de).
Contribution to the workshop "Informationskompetenz im Norden" on February 1, 2018, at the Library and Information System (BIS) of Carl von Ossietzky University Oldenburg.
It first outlines which approaches and projects the Schreibwerkstatt (writing workshop) pursues to foster information and writing processes at Hochschule Hannover.
Since the library and the Schreibwerkstatt share common goals and target groups as well as overlapping content, examples of cooperation and the benefits of working together are presented.
Requirements for an energy data information model for a communication-independent device description
(2021)
With the help of an energy management system according to ISO 50001, industrial companies gain the opportunity to reduce energy consumption and to increase plant efficiency. In such a system, the communication of energy data has an important function. With the help of so-called energy profiles (e.g. PROFIenergy), energy data can be communicated between the field level and the higher levels via proven communication protocols (e.g. PROFINET). Because in most cases several industrial protocols are used in an automation system, the problem is how to transfer energy data from one protocol to another with as little effort as possible. An energy data information model could overcome this problem and describe energy data in a uniform and semantically unambiguous way. Requirements for a unified energy data information model are presented in this paper.
By applying the ISO 50001 standard and introducing the associated energy management system (EnMS), a successive increase in energy efficiency can be achieved. To implement energy monitoring or standby management functions, energy data must be provided at the field level; on edge devices or PLCs, an energy management program may then have to adapt the data format, scale the values, and map them onto an established communication interface (e.g. based on OPC UA or MQTT). Creating these energy management programs involves high engineering effort, because the field devices of the heterogeneous field level do not provide the energy data with standardized semantics. To counter this engineering effort, a concept for a universal energy data information model (UEDIM) is presented. This concept provides for delivering the energy data to the EnMS in a semantically standardized form. As a step towards the further development of the UEDIM, this paper examines in which forms energy data can be provided at the field level and which requirements must be established for the UEDIM.
With the use of an energy management system in an industrial company according to ISO 50001, a step-by-step increase in energy efficiency can be achieved. The realization of energy monitoring and load management functions requires programs on edge devices or PLCs to acquire the data, adapt the data type or scale the values of the energy information. In addition, the energy information must be mapped to communication interfaces (e.g. based on OPC UA) in order to convey this energy information to the energy management application. The development of these energy management programs is associated with a high engineering effort, because the field devices from the heterogeneous field level do not provide the energy information in standardized semantics. To mitigate this engineering effort, a universal energy data information model (UEIM) is developed and presented in this paper.
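The idea of a protocol-neutral description can be sketched as a small data structure that carries a value together with its semantics, which a gateway then maps onto a concrete protocol. All field names below are illustrative assumptions, not the UEIM/UEDIM specification.

```python
# Illustrative sketch of a protocol-neutral energy data entry (names assumed).
from dataclasses import dataclass

@dataclass
class EnergyVariable:
    name: str     # semantic identifier, e.g. "ActivePower"
    value: float
    unit: str     # SI unit string
    source: str   # field device providing the value

def to_opcua_like(var):
    """Map the neutral model onto a generic key/value node structure,
    as a gateway to e.g. OPC UA or MQTT might do."""
    return {"BrowseName": var.name, "Value": var.value,
            "EngineeringUnit": var.unit, "SourceDevice": var.source}

p = EnergyVariable("ActivePower", 4.2, "kW", "drive_7")
print(to_opcua_like(p)["EngineeringUnit"])  # -> kW
```

Because only the mapping function is protocol-specific, adding a second protocol would not require touching the neutral model, which is the engineering-effort reduction the papers aim at.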
In this poster we present the ongoing development of an integrated free and open source toolchain for semantic annotation of digitised cultural heritage. The toolchain development involves the specification of a common data model that aims to increase interoperability across diverse datasets and to enable new collaborative research approaches.
A new FOSS (free and open source software) toolchain and associated workflow is being developed in the context of NFDI4Culture, a German consortium of research and cultural heritage institutions working towards a shared infrastructure for research data that meets the needs of 21st-century data creators, maintainers and end users across the broad spectrum of the digital libraries and archives field, and the digital humanities. This short paper and demo present how the integrated toolchain connects: 1) OpenRefine, for data reconciliation and batch upload; 2) Wikibase, for linked open data (LOD) storage; and 3) Kompakkt, for rendering and annotating 3D models. The presentation is aimed at librarians, digital curators and data managers interested in learning how to manage research datasets containing 3D media, and how to make them available within an open data environment with 3D rendering and collaborative annotation features.