Document Type: Conference Proceeding (162)
Keywords
- Digitalisierung (9)
- Energiemanagement (8)
- Mikroservice (8)
- Angewandte Botanik (7)
- Gepresste Pflanzen (7)
- Herbar Digital (7)
- Herbarium (7)
- Serviceorientierte Architektur (7)
- Virtualisierung (7)
- Agile Softwareentwicklung (6)
- Agilität <Management> (6)
- Erkennungssoftware (6)
- OCR (6)
- Recognition software (6)
- Computersicherheit (5)
- Insurance Industry (5)
- Text Mining (5)
- Versicherungswirtschaft (5)
- Automation (4)
- Bibliothek (4)
- COVID-19 (4)
- Concreteness (4)
- E-Learning (4)
- Energieeffizienz (4)
- Grader (4)
- Informationsmanagement (4)
- Rechnernetz (4)
- SOA (4)
- Semantik (4)
- Telearbeit (4)
- energy management (4)
- Agile methods (3)
- Ausbildung (3)
- Autobewerter (3)
- Big Data (3)
- Biogas (3)
- Cloud Computing (3)
- Complex Event Processing (3)
- Computersimulation (3)
- Erneuerbare Energien (3)
- German (3)
- Gießerei (3)
- Information Retrieval (3)
- Klassifikation (3)
- Microservices (3)
- Nachhaltigkeit (3)
- PROFInet (3)
- Regelenergie (3)
- Reserveleistung (3)
- Visualisierung (3)
- foundry (3)
- microservices (3)
- Agile software development (2)
- Benutzererlebnis (2)
- Benutzeroberfläche (2)
- Bibliothekswesen (2)
- Computerunterstütztes Lernen (2)
- Consistency (2)
- Contract Analysis (2)
- Deutsch (2)
- Disambiguation (2)
- Distributional Semantics (2)
- EEG (2)
- Elektrospinnen (2)
- Energieeinsparung (2)
- Ethernet (2)
- Framework <Informatik> (2)
- Ganzzahlige lineare Optimierung (2)
- Graja (2)
- Industrie 4.0 (2)
- Information Visualization (2)
- Konkretum <Linguistik> (2)
- Kulturerbe (2)
- Künstliche Intelligenz (2)
- Landwirtschaft (2)
- Machine Learning (2)
- Microservice (2)
- Microservices Architecture (2)
- Modellversuch BID (2)
- Molecular switches (2)
- Network Security (2)
- Neuronales Netz (2)
- Open Access (2)
- PPS (2)
- PROFINET Security (2)
- Programmieraufgabe (2)
- Programmierung (2)
- Rechtswissenschaften (2)
- Resiliency (2)
- Sachtext (2)
- Semantic Web (2)
- Simulation (2)
- Sprachnorm (2)
- Steuerungssystem (2)
- Triazole (2)
- Urban Logistics (2)
- User Interfaces (2)
- Vergleich (2)
- Vertrag (2)
- Wikibase (2)
- Wikidata (2)
- XML (2)
- agile methods (2)
- agile software development (2)
- batch-wise parallel process (2)
- dwell-time (2)
- eduscrum (2)
- energy efficiency (2)
- linear integer programming (2)
- optimal scheduling (2)
- remote work (2)
- soft constraint (2)
- technical energy management (2)
- Ähnlichkeit (2)
- Übung <Hochschule> (2)
- 2D data processing (1)
- 3D data (1)
- 3d mapping (1)
- 4-day work week (1)
- API (1)
- ARIS (1)
- Abbreviations (1)
- Abkürzung (1)
- Ablaufplanung (1)
- Absolvent (1)
- Acronyms (1)
- Adaptive IT Infrastructure (1)
- Adhäsion (1)
- Agent <Informatik> (1)
- Agile Manifesto (1)
- Agile Methoden (1)
- Agile Practices (1)
- Agile Software Development (1)
- Agile education (1)
- Agile method (1)
- Agile practices (1)
- Air quality (1)
- Akronym (1)
- Algorithmus (1)
- Alternative work schedule (1)
- Ambiguität (1)
- Anergy (1)
- Anforderungsermittlung (1)
- Angewandte Informatik (1)
- Annotation (1)
- Anomalieerkennung (1)
- Anomaly detection (1)
- Anonymization (1)
- Application Programming Interface (1)
- Arbeitsablauf (1)
- Arbeitswelt (1)
- Arbeitszufriedenheit (1)
- Artificial intelligence (1)
- Asymmetric encryption (1)
- Attack detection (1)
- Auswahl (1)
- Authentication (1)
- Authentifikation (1)
- Authorization (1)
- AutomationML (1)
- Automatische Klassifikation (1)
- Automatische Sprachanalyse (1)
- Automatisierte Bewertung (1)
- Automatisierte Programmbewertung (1)
- Automatisierungssystem (1)
- Autorisierung (1)
- Azyklischer gerichteter Graph (1)
- BaaS (Backend-as-a-service) (1)
- Bahnplanung (1)
- Batteriefahrzeug (1)
- Battery Electric Vehicles (1)
- Baumaßnahme (1)
- Beruf (1)
- Betriebsdaten (1)
- Betriebsdatenerfassung (1)
- Bewertungsaspekt (1)
- Bewertungsmaßstab (1)
- Bewertungsschema (1)
- Bibliothekar (1)
- Big Data Analytics (1)
- Big-Data-Datenplattform (1)
- Bilderkennung (1)
- Bildersprache (1)
- Bildersuchmaschine (1)
- Bildmaterial (1)
- Bildverarbeitung (1)
- Biokunststoff (1)
- Blackboard Pattern (1)
- Book of Abstract (1)
- Bring Your Own Device (1)
- C-SPARQL (1)
- CI/CD (1)
- CQL (1)
- Case Management (1)
- Chargenbetrieb (1)
- Chatbot (1)
- Choreography (1)
- Citizens (1)
- City-Logistik (1)
- Classification (1)
- Codegenerierung (1)
- Codierung (1)
- Composite materials (1)
- Computer simulation (1)
- Computerlinguistik (1)
- Constructive Alignment (1)
- Consumerization (1)
- Context Awareness (1)
- Corporate Credit Risk (1)
- Corpus construction (1)
- Crowdshipping (1)
- Curriculumentwicklung (1)
- Cyber-Security (1)
- Cyberattacke (1)
- Data Cubes (1)
- Data Management (1)
- Data Science (1)
- Data Sharing (1)
- Data handling (1)
- Data-Warehouse-Konzept (1)
- Datenaufbereitung (1)
- Datenerfassung (1)
- Datenqualität (1)
- Datenschutz (1)
- Datenstrom (1)
- Datenwürfel (1)
- Decision Support (1)
- Decision Support Systems, Clinical (1)
- Decision Support Tool (1)
- Deep Convolutional Networks (1)
- Design Science (1)
- Designwissenschaft <Informatik> (1)
- DevOps (1)
- Dewey-Dezimalklassifikation (1)
- Didactic (1)
- Didaktik (1)
- Dienstgüte (1)
- Digital Wellbeing (1)
- Digital storage (1)
- Digitaler Marktplatz (1)
- Digitalization (1)
- Digitization (1)
- Dimension 2 (1)
- Disambiguierung (1)
- District Heating (1)
- Docker (1)
- Dokumentanalyse (1)
- Domain Driven Design (DDD) (1)
- Drehkolbenverdichter (1)
- Dynamic identification (1)
- Dynamic modelling (1)
- Dynamische Modellierung (1)
- E - Assessment (1)
- E-Assessment (1)
- E-Grocery (1)
- EAssessment (1)
- EPN (1)
- Education (1)
- Eilzustellung (1)
- Eindringerkennung (1)
- Electrospinning (1)
- Elektromobilität (1)
- Elektronischer Markt (1)
- Elektronischer Marktplatz (1)
- Empfehlungssystem (1)
- Enduser Device (1)
- Energieaufnahme (1)
- Energieerzeugung (1)
- Energieverbrauch (1)
- Entscheidungsunterstützungssystem (1)
- Ereignisgesteuerte Prozesskette (1)
- Erneuerbare-Energien-Gesetz (2000) (1)
- Evaluation (1)
- Event Processing Network (1)
- Event Processing Network Model (1)
- Exergie (1)
- Exergy (1)
- Explainable anomaly detection (1)
- FHIR (1)
- FaaS (Function-as-a-service) (1)
- Fachsprache (1)
- Farming 4.0 (1)
- Fassung (1)
- Feature and Text Extraction (1)
- Feldgeräte (1)
- Fernunterricht (1)
- Fernwärmeversorgung (1)
- Fertigung (1)
- Fertigungslogistik (1)
- Fertigungssteuerung (1)
- Figurative Language (1)
- Finanzkrise (1)
- Finite-Elemente-Methode (1)
- Flachheitsbasierte Vorsteuerung (1)
- Flexible Struktur (1)
- Focus Group (1)
- Foresight (1)
- Formelhafte Textabschnitte (1)
- Forschungsdaten (1)
- Framework (1)
- Function as a Service (1)
- Funktionsgenerator (1)
- Futurologie (1)
- GECCO: German Corona Consensus Data Set (1)
- Gedenkfeier (1)
- Gemischt-ganzzahlige Optimierung (1)
- Genetic algorithms (1)
- Genetischer Algorithmus (1)
- Geschlechtsunterschied (1)
- Geschäftsprozessmanagement (1)
- Geschäftsprozessmodellierung (1)
- Gesundheitsfürsorge (1)
- Gesundheitsinformationssystem (1)
- Graph-based Text Representations (1)
- Graphische Benutzeroberfläche (1)
- Grappa (1)
- Gruppeninterview (1)
- Hadoop (1)
- Handelsbot (1)
- Hannover / Fachhochschule Hannover / Bibliothek (1)
- Health IT (1)
- Health Information Interoperability (1)
- Heat Pump (1)
- Herbarbeleg (1)
- Hilfsprogramm (1)
- Hochschule (1)
- Hochschulpolitik (1)
- Home Care (1)
- Hybrid Conference (1)
- IBM PC (1)
- ICS Security (1)
- ISO 9001 (1)
- IT security (1)
- IT-Sicherheit (1)
- Image Recognition (1)
- Image Retrieval (1)
- Imagery (1)
- Images (1)
- Indicator Measurement (1)
- Industrial Security (1)
- Industrial robots (1)
- Industrieroboter (1)
- Industry 4.0 (1)
- Information Dissemination (1)
- Information Extraction (1)
- Information Management (1)
- Information Science (1)
- Informationskompetenz (1)
- Informationsmodell (1)
- Informationsmodellierung (1)
- Informationstechnik (1)
- Informationsvermittlung (1)
- Integration (1)
- Intelligent control (1)
- Intelligentes Stromnetz (1)
- Interaktion (1)
- Interdiziplinäre Studiengänge (1)
- Internet der Dinge (1)
- Interoperabilität (1)
- Investment Banking (1)
- Istio (1)
- Java <Programmiersprache> (1)
- Keyword Extraction (1)
- Kinematic calibration (1)
- Kinematik (1)
- Kleben (1)
- Knowledge Life Cycle (1)
- Knowledge Maps (1)
- Kommunikation (1)
- Kompakkt (1)
- Kompetenz (1)
- Kontextbezogenes System (1)
- Korpus <Linguistik> (1)
- Krankenhaus (1)
- Krankenunterlagen (1)
- Kreditrisiko (1)
- Kreditwesen (1)
- Kritische Masse (1)
- Kryptologie (1)
- Kubernetes (1)
- LIG (1)
- LOINC (1)
- LSTM (1)
- Lastmanagement (1)
- Lastverteilung <Energietechnik> (1)
- Latent Semantic Analysis (1)
- Layout Detection (1)
- Lean Management (1)
- Lean Production (1)
- Lebensmittel (1)
- Lebensmitteleinzelhandel (1)
- Legal Documents (1)
- Legal Writings (1)
- Legende <Bild> (1)
- Leistungskennzahl (1)
- Leistungssteigerung (1)
- Leitstand (1)
- Lemmatization (1)
- Lernmanagementsystem (1)
- Lernmotivation (1)
- Lexical Semantics (1)
- Lieferservice (1)
- Linear Indexed Grammars (1)
- Linked Data (1)
- Linked Open Data (1)
- Literaturbericht (1)
- Liver Transplantation (1)
- Low Exergy Heat Net (1)
- Luftqualität (1)
- MIMOS II (1)
- Management (1)
- MapReduce (1)
- Markov Models (1)
- Maschinelles Lernen (1)
- Masterstudium (1)
- Mathematisches Modell (1)
- Media Didactic Concept (1)
- Medical Coding (1)
- Mediendesign (1)
- Mediendesignausbildung (1)
- Mediendesigninformatik (1)
- Mediendidaktik (1)
- Medizin (1)
- Medizinische Bibliothek (1)
- Messwerterfassung (1)
- Middleware (1)
- Mikro-Kraft-Wärme-Kopplung (1)
- Mikroprozessor (1)
- Mischanlage (1)
- Mobile (1)
- Mobile Device Management (1)
- Modellprädiktive Regelung (1)
- Modifizierte dezentrale (1)
- Motivation (1)
- Multidimensional Analysis (1)
- Multidimensional analysis (1)
- Mössbauer (1)
- Mößbauer-Spektrometer (1)
- Mößbauer-Spektroskopie (1)
- NFDI (1)
- NFDI4Culture – Konsortium für Forschungsdaten materieller und immaterieller Kulturgüter (1)
- NLP (1)
- NMPC (1)
- Neoliberalismus (1)
- Neural controls (1)
- Neural networks (1)
- Neural-network models (1)
- Nichtlineare modellprädiktive Regelung (1)
- Niederdruckplasma (1)
- Nierentransplantation (1)
- Normality model (1)
- Notation <Klassifikation> (1)
- Nürnberg / Evangelische Studentengemeinde (1)
- OPC UA (1)
- OT Security (1)
- OT-Security (1)
- Online-Trajektoriengenerierung (1)
- Open Repositories (1)
- Open Science (1)
- Open Source (1)
- OpenRefine (1)
- OpenStack (1)
- Optimale Kontrolle (1)
- Orchestration (1)
- PDF <Dateiformat> (1)
- PDF Document Analysis (1)
- POS Tagging (1)
- PageRank (1)
- Paket (1)
- Paraphrase (1)
- Paraphrase Similarity (1)
- Path accuracy (1)
- Patient empowerment (1)
- Personennahverkehr (1)
- Pflege (1)
- Phraseologie (1)
- Physics (1)
- Physik (1)
- Plugin (1)
- Polymere (1)
- Polymers (1)
- Portable Micro-CHP Unit (1)
- Praxisprojekte (1)
- Pregel (1)
- Preisbildung (1)
- Preisdifferenzierung (1)
- Preissetzung (1)
- Privacy by Design (1)
- ProFormA (1)
- ProFormA-Aufgabenformat (1)
- Problemorientiertes Lernen (1)
- Processes (1)
- Produktionslogistik (1)
- Produktionsprozess (1)
- Prognose (1)
- Programmieraufgaben (1)
- Programmierausbildung (1)
- Projektmanagement (1)
- Prozessmanagement (1)
- Prozessmodell (1)
- Prozessmuster (1)
- Prozessoptimierung (1)
- Prüfstand (1)
- Pseudonymization (1)
- Pädagogik (1)
- QM (1)
- Qualifikation (1)
- Quality Control (1)
- Quality Management (1)
- Quality assessment (1)
- Quality of Service (1)
- Qualität (1)
- Qualitätskontrolle (1)
- Qualitätsmanagement (1)
- REST <Informatik> (1)
- RESTful (1)
- RFID (1)
- Realisierung (1)
- Rechtsanwalt (1)
- Rechtsdokumente (1)
- Recommender System (1)
- Reduction of Complexity (1)
- Reference Architecture (1)
- Referenzmodell (1)
- Regalbediengerät (1)
- Regalförderzeug (1)
- Regional Development (1)
- Regional Innovation Systems (1)
- Regional Policy (1)
- Regulierung (1)
- Remote Arbeit (1)
- Remote work (1)
- Repository <Informatik> (1)
- Representational State Transfer (1)
- Requirements engineering (1)
- Resilienz (1)
- Richardson Maturity Model (1)
- Rissausbreitung (1)
- Robotics (1)
- Robotik (1)
- RuleCore (1)
- SCO (1)
- SOA co-existence (1)
- SYCAT (1)
- Sakura Science Program (1)
- Schaltungstechnik (1)
- Schlagwortkatalog (1)
- Schlagwortnormdatei (1)
- Schreibberatung (1)
- Schreibwerkstatt (1)
- Schwarmintelligenz (1)
- Scientific image search (1)
- Scrum <Vorgehensmodell> (1)
- Secure communication (1)
- Security (1)
- Security Knowledge (1)
- Security Ontology (1)
- Selbstgesteuertes Lernen (1)
- Self-directed Learning (1)
- Semantic Web Technologies (1)
- Semantics (1)
- Semantisches Datenmodell (1)
- Serverless Computing (1)
- Service Mesh (1)
- Service Orientation (1)
- Service-orientation (1)
- Shortest Path (1)
- Signal processing (1)
- Signaltechnik (1)
- Signalverarbeitung (1)
- Similarity Measures (1)
- Simulation Modeling (1)
- Situation Awareness (1)
- Smart Buildings (1)
- Smart Grid (1)
- Smart Society (1)
- Society 5.0 (1)
- Software (1)
- Software Architecture (1)
- Software-Tool (1)
- Softwarearchitektur (1)
- Softwareentwicklung (1)
- Softwaretest (1)
- Softwarewerkzeug (1)
- Spannungsintensitätsfaktor (1)
- Spezialbibliothekar (1)
- Spin crossover (1)
- Standardised formulation (1)
- Standardisierung (1)
- Standards (1)
- Statistical Methods (1)
- Statistische Methoden (1)
- Strategie (1)
- Strategische Vorausschau (1)
- Straßenverkehr (1)
- Structural Analysis (1)
- Studienbeiträge (1)
- Studiengebühr (1)
- Subprime-Krise (1)
- Supply Chain Management (1)
- Supply Chains (1)
- Sustainable development (1)
- Swarm Intelligence (1)
- Systemdienstleistungen (1)
- Systems Librarian, Data Librarian, Job advertisement analysis, Job profiles, New competencies (1)
- Taxonomie (1)
- Technisches Energiemanagementsystem (tEnMS) (1)
- Techno-Economic Analysis (1)
- Terminologie (1)
- Terminology (1)
- Territorial Intelligence (1)
- Tertiary study (1)
- Tertiärbereich (1)
- Test Bench (1)
- Text Similarity (1)
- Text annotation (1)
- Textbooks (1)
- Thermal Storage (1)
- Thesaurus (1)
- Thin film (1)
- Tiefeninterview (1)
- Title Matching (1)
- Transaktionskosten (1)
- Transmission measurement setup (1)
- Transplantatabstoßung (1)
- Trendanalyse (1)
- Triazole complexes (1)
- Twitter <Softwareplattform> (1)
- Twitter analysis (1)
- Umweltbilanz (1)
- Unternehmen (1)
- Unternehmensgründung (1)
- User Generated Content (1)
- Variabilität (1)
- Verbal Idioms (1)
- Verbesserung (1)
- Verbundwerkstoff (1)
- Versicherungsbetrieb (1)
- Versicherungsvertrag (1)
- Verteiltes System (1)
- Vertragsklausel (1)
- Verweilzeit (1)
- Videospiel (1)
- Viertagewoche (1)
- Virtuelle Realität (1)
- Virtuelles Laboratorium (1)
- Visualization (1)
- Vorbehandlung <Technik> (1)
- Waveguides (1)
- Wellenleiter (1)
- Werkzeug (1)
- Wert (1)
- Wikimedia Commons (1)
- Wikipedia categories (1)
- Wind power plant (1)
- Windkraftwerk (1)
- Wirtschaftlichkeit (1)
- Wirtschaftsdemokratie (1)
- Wissenschaftliche Bibliothek (1)
- Wissenschaftliches Arbeiten (1)
- Wissensmanagement (1)
- Word Counting (1)
- Word Norms (1)
- Work From Home (1)
- Workflow (1)
- Wort (1)
- Wärmepumpe (1)
- Wärmespeicher (1)
- Wärmeübertragung (1)
- XML-Model (1)
- XML-Schema (1)
- Zeitreihe (1)
- Zukunftsforschung (1)
- Zweiwortsatz (1)
- abstractness (1)
- aerospace engineering (1)
- agent-based simulation (1)
- aggregation server (1)
- agile education (1)
- application (1)
- attributional LCA (1)
- bio-based plastics (1)
- biocomposites (1)
- build automation (1)
- build server (1)
- business process management (1)
- class room (1)
- code generation (1)
- combined heat and power (1)
- concreteness (1)
- consequential LCA (1)
- constraint pushing (1)
- context vectors (1)
- covid 19 (1)
- crack propagation rate (1)
- credit risk (1)
- critical mass (1)
- cultural heritage (1)
- cyber security (1)
- data mapping (1)
- data stream processing (1)
- data warehouse (1)
- digital twins (1)
- distance learning (1)
- distributed systems (1)
- distributional semantics (1)
- dynamic programming (1)
- dynamic trajectories (1)
- e-Assessment (1)
- e-mobility (1)
- eLearning (1)
- eco-design (1)
- eduDScloud (1)
- education (1)
- energy data (1)
- energy data information model (1)
- energy information model (1)
- energy monitoring (1)
- energy profiles (1)
- event-driven process chain (1)
- fall prediction (1)
- fall prevention (1)
- fall risk (1)
- finite element method (1)
- flatness-based control (1)
- flexible structure (1)
- game analysis (1)
- gender (1)
- generic interface (1)
- graduate (1)
- graft rejection (1)
- graphical user interface (1)
- hemp (1)
- herbarium (1)
- high-quality Learning Formats (1)
- image processing (1)
- in-depth-interviews (1)
- increasing continuous differentiability (1)
- individuelle Programmieraufgabe (1)
- industrial production process (1)
- information extraction (1)
- information modeling (1)
- information system (1)
- integrated passenger and freight transport (1)
- interoperability (1)
- key performance indicators (1)
- kidney transplant (1)
- library and information science (1)
- lidar (1)
- life-cycle-assessment (1)
- linked data (1)
- literature review (1)
- matrix calculations (1)
- measurement data acquisition (1)
- mixed-integer programming (1)
- model predictive control (1)
- moving average filter (1)
- natural fiber (1)
- neural network model (1)
- online trajectory generation (1)
- openEHR (1)
- plant specimen (1)
- pmCHP (1)
- point clouds (1)
- prediction methods (1)
- private cloud (1)
- problem based learning (1)
- production control (1)
- professional life (1)
- real-time application (1)
- recommender systems (1)
- research data management (1)
- research information (1)
- rural transport simulation (1)
- scaling (1)
- scheduling (1)
- security (1)
- security protocol extensions (1)
- semantic knowledge (1)
- semistructured interview (1)
- sensor-based assessment (1)
- sentiment dictionaries (1)
- serverless architecture (1)
- serverless functions (1)
- service models (1)
- service-orientation (1)
- situation-awareness (1)
- smart buildings (1)
- standardized semantics (1)
- startup (1)
- stereo vision (1)
- stress intensity factor (1)
- supervised machine learning (1)
- survey (1)
- sustainability (1)
- system integration (1)
- systematic literature review (1)
- taxonomy (1)
- text mining (1)
- thesauri (1)
- time-series forecast (1)
- tool evaluation (1)
- user experience (1)
- user generated content (1)
- virtual distance teaching (1)
- virtual lab (1)
- virtual reality (1)
- visual delegates (1)
- visual perception (1)
- wearable sensors (1)
- web crawling (1)
- word embedding space (1)
- work satisfaction (1)
- work-life balance (1)
- working life (1)
- workload decomposition (1)
- Öffentliche Bibliothek (1)
- Überwachtes Lernen (1)
All of us are aware of the changes in the information field in recent years. We all see the paradigm shift coming and have some idea of how it will challenge our profession. But what will the road to excellence in the future education of information specialists look like? There are different models, new and old, for reorganising the structure of education: * Integration * Specialisation * Step-by-step model * Module system * Network system / combination model. The paper presents the current state of the discussion on building a new curriculum at the Department of Information and Communication (IK) at the FH Hannover. Based on the department's mission statement, »Education of information professionals is a part of the dynamic evolution of knowledge society«, the direction of change and the main goals are presented. The different reorganisation models are explained with their objectives, opportunities and forms of implementation. Some examples illustrate the ideas and tools for a first draft of a reconstruction plan to become fit for the future. This talk was held at the German-Dutch University Conference »Information Specialists for the 21st Century« at the Fachhochschule Hannover - University of Applied Sciences, Department of Information and Communication, October 14-15, 1999, in Hannover, Germany.
The miniaturized Mössbauer spectrometer (MIMOS II), originally devised by Göstar Klingelhöfer, is being further developed by the Renz group at the Leibniz University Hanover in cooperation with the Hanover University of Applied Sciences and Arts. A new processing unit with two-dimensional (2D) data acquisition was developed by M. Jahns. The advantage of this data acquisition is that no thresholds need to be set before the measurement: the energy of each photon is determined and stored together with the velocity of the drive, and after the measurement the relevant area can be selected for the Mössbauer spectrum. We have now expanded the evaluation unit with a power supply for a MIMOS drive and a MIMOS PIN detector, giving us a very compact MIMOS transmission measurement setup. With this setup it is possible to process the signals of two detectors serially; we are currently working on parallel signal processing.
Data and Information Science: Book of Abstracts at BOBCATSSS 2022 Hybrid Conference, 23rd - 25th of May 2022, Debrecen.
This year marks the 30th anniversary of BOBCATSSS, an international, annual symposium designed for librarians and information professionals in a rapidly changing environment. Over the past 30 years, the conference has featured exciting topics, great venues, interested guests and engaging presenters.
This year we are pleased to introduce the topics of the many papers presented in the Book of Abstracts for the first time both in person at the University of Debrecen and in hybrid form. The Book of Abstracts provides an overview of all presentations given at BOBCATSSS. Presentations are listed in alphabetical order by title and include speeches, Pecha Kuchas, posters and workshops.
The theme of BOBCATSSS is Data and Information Science. Data and information are the basis for decisions and processes in business, politics and science, which makes them particularly important in the current era of digital transformation. This is exactly where this year's subthemes come in: they deal with data science, openness, and institutional roles.
The transfer of historically grown monolithic software architectures into modern service-oriented architectures creates many loose coupling points. This can lead to unforeseen system behavior and can significantly impede continuous modernization processes, since it is unclear where bottlenecks in the system arise. Such modernization processes therefore need to be accompanied by an adaptive monitoring concept so that unpredictable system dynamics can be correctly recorded and interpreted. For this purpose, a general measurement methodology and a specific implementation concept are presented in this work.
A Look at Service Meshes
(2021)
Service meshes can be seen as an infrastructure layer for microservice-based applications that is specifically suited to distributed application architectures. The goal of this paper is to introduce the concept of service meshes and its use for microservices, using the open source service mesh Istio as an example. The paper gives an introduction to the service mesh concept and its relation to microservices, provides an overview of selected Istio features relevant to this concept, and presents a small sample setup that demonstrates the core features.
The Gravitational Search Algorithm is a swarm-based optimization metaheuristic that has been successfully applied to many problems. However, to date little analytical work has been done on this topic.
This paper performs a mathematical analysis of the formulae underlying the Gravitational Search Algorithm. From this analysis, it derives key properties of the algorithm's expected behavior and recommendations for parameter selection. It then confirms through empirical examination that these recommendations are sound.
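The mechanics analyzed in the paper can be made concrete with a minimal sketch of the Gravitational Search Algorithm. This is a simplified one-dimensional illustration, not the paper's analysis; the parameter names `g0` and `alpha` and all values are assumptions chosen for the sketch.

```python
import math
import random

def gsa_minimize(f, bounds, n_agents=20, n_iter=100, g0=100.0, alpha=20.0, seed=1):
    """Minimal 1-D Gravitational Search Algorithm sketch (illustrative only).

    Masses are derived from fitness; each agent is attracted toward
    heavier (better) agents while the gravitational constant decays.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_agents)]
    vel = [0.0] * n_agents
    for t in range(n_iter):
        fit = [f(x) for x in pos]
        best, worst = min(fit), max(fit)
        # normalized masses: the best agent receives the largest mass
        raw = [(worst - fi) / (worst - best + 1e-12) for fi in fit]
        total = sum(raw) + 1e-12
        mass = [r / total for r in raw]
        g = g0 * math.exp(-alpha * t / n_iter)  # decaying gravitational constant
        for i in range(n_agents):
            accel = 0.0
            for j in range(n_agents):
                if i != j:
                    dist = abs(pos[i] - pos[j]) + 1e-12
                    # the agent's own mass cancels when converting force to acceleration
                    accel += rng.random() * g * mass[j] * (pos[j] - pos[i]) / dist
            vel[i] = rng.random() * vel[i] + accel
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
    return min(pos, key=f)
```

Running the sketch on a convex test function such as f(x) = x² shows the qualitative behavior the paper studies: the swarm contracts toward the heaviest (best) agents as the gravitational constant decays.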
Lemmatization is a central task in many NLP applications. Despite this importance, the number of freely available and easy-to-use tools for German is very limited. To fill this gap, we developed a simple lemmatizer that can be trained on any lemmatized corpus. For a full-form word, the tagger tries to find the sequence of morphemes that is most likely to generate that word. From this sequence of tags we can easily derive the stem, the lemma and the part of speech (PoS) of the word. We show (i) that the quality of this approach is comparable to state-of-the-art methods and (ii) that we can improve the results of PoS tagging when we include the morphological analysis of each word.
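The step from a morpheme analysis to the lemma can be illustrated with a toy sketch. Note that this is not the authors' statistical morpheme tagger: a tiny hand-made suffix table stands in for the learned morpheme sequence model, and the rules and example words are assumptions for illustration only.

```python
# Illustrative suffix-based lemma derivation (toy stand-in for a trained
# morpheme tagger): each rule maps a surface suffix to a lemma-forming
# replacement and a part of speech.
SUFFIX_TABLE = [
    # (surface suffix, replacement forming the lemma, part of speech)
    ("ten", "en", "VERB"),   # e.g. "spielten" -> "spielen"
    ("te", "en", "VERB"),    # e.g. "spielte"  -> "spielen"
    ("en", "en", "VERB"),
    ("", "", "NOUN"),        # fallback: the word is its own lemma
]

def lemmatize(word: str):
    """Return (lemma, pos) using the longest matching suffix rule."""
    for suffix, repl, pos in sorted(SUFFIX_TABLE, key=lambda r: -len(r[0])):
        if word.endswith(suffix):
            stem = word[: len(word) - len(suffix)] if suffix else word
            return stem + repl, pos
    return word, "UNKNOWN"
```

In the real system the segmentation is chosen by the probability of the morpheme sequence rather than a fixed rule table, but the lemma and PoS fall out of the chosen tag sequence in the same way.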
A new type of rotary compressor, called a “rotary-chamber compressor”, consists of two interlocking rotors with four wings each that perform non-uniform rotary movements. Both rotors rotate in the same direction; while one rotor accelerates, the other decelerates. After passing a specific mark, the sequence reverses and the leading rotor begins to decelerate, and vice versa. Due to the resulting relative phase difference, the volume between the wings changes periodically, producing pulsating working chambers. The technology was first introduced by its inventor Jürgen Schukey in 1987; since then, no further development of this machine is known to us apart from our own. In this contribution, a study on the kinematics of the rotary-chamber compressor is presented. Initial studies have shown that changes in the kinematics of the rotors directly influence the thermodynamic variables, which, if optimized, can lead to increased performance of the machine. A mathematical model has therefore been developed to obtain the performance parameters of different kinematic concepts using numerical CFD analysis. Furthermore, additional optimization possibilities are listed and discussed.
Intrusion detection systems and other network security components detect security-relevant events based on policies consisting of rules. If an event turns out to be a false alarm, the corresponding policy has to be adjusted in order to reduce the number of false positives. Modified policies, however, need to be tested before going into productive use. We present a visual analysis tool for the evaluation of security events and related policies, which integrates data from different sources using the IF-MAP specification and provides a “what-if” simulation for testing modified policies against past network dynamics. In this paper, we describe the design and outcome of a user study that helps us evaluate our visual analysis tool.
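The core of such a “what-if” simulation can be sketched as replaying recorded events against a candidate rule set and counting the alerts it would have raised. The rule structure and event field names below are illustrative assumptions, not the IF-MAP data model the tool actually uses.

```python
# Sketch of a "what-if" policy replay: past network events are re-evaluated
# against a modified rule set to see how many alerts it would have raised.

def matches(rule: dict, event: dict) -> bool:
    """A rule matches when all of its key/value constraints hold for the event."""
    return all(event.get(k) == v for k, v in rule.items())

def what_if(rules, past_events):
    """Replay past events against the candidate policy; return the alerts raised."""
    return [e for e in past_events if any(matches(r, e) for r in rules)]
```

Comparing `what_if(old_policy, events)` with `what_if(new_policy, events)` shows directly how many former false positives a tightened policy would suppress before it goes into productive use.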
This Innovative Practice Full Paper presents what we learned from running a Master of Science class with eduScrum, integrating real-world problems as projects. We prepared, performed, and evaluated an agile educational concept for the new Master of Science program Digital Transformation, organized and provided by the department of business computing at the University of Applied Sciences and Arts - Hochschule Hannover in Germany. The course deals with innovative methodologies of agile project management and is attended by 25 students. We taught the class as a teaching pair during the summer terms of 2019 and 2020. The eduScrum method has been used in different educational contexts, including higher education. While preparing the approach, we decided to use challenges, problems, and questions from industry. We therefore recruited four companies and, in coordination with them, prepared dedicated project descriptions. Each project description was refined into a backlog (a list of requirements). We divided the class randomly into four eduScrum teams, one team per project.
Since we wanted to integrate realistic projects with the industry partners' involvement, we decided to adapt the eduScrum approach. The eduScrum teams were challenged with different projects, e.g., analyzing a dedicated phenomenon in a real project or creating a theoretical model for a company's new project management approach. We present our experiences of the whole process of preparing, performing, and evaluating an agile educational approach combined with projects from practice. We found that the students value the agile method using real-world problems. Although the results are mainly based on the summer term of 2019, this paper also includes our learnings from virtual distance teaching during the Covid-19 pandemic in the summer term of 2020. The paper contributes to the dissemination of methods for higher education teaching in the classroom and in distance learning.
The Covid-19 pandemic has led to a significant increase in remote work. The change in interaction and collaboration has been a challenge for many agile teams. Various studies show different effects on the collaboration of agile teams during the pandemic: communication has become more factual and goal-oriented, and a reduction in social exchange within teams is reported. Our article addresses how remote work has changed the interaction in agile teams. We conducted a qualitative case study of an agile software development team at Otto. Our results show a connection between the effects on interaction and the personal autonomy of the team members. Moreover, we found no significant negative effects of the changed interaction on the agile way of working.
Smart Cities require reliable means for managing installations that offer essential services to citizens. In this paper we focus on the problem of evacuating smart buildings in case of emergencies. In particular, we present an abstract architecture for situation-aware evacuation guidance systems in smart buildings, describe its key modules in detail, and provide some concrete examples of its structure and dynamics.
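One concrete piece of such a system's dynamics is situation-aware route computation. The following is a minimal sketch, not taken from the paper's architecture: the building is modeled as a graph of rooms and corridors, and a breadth-first search finds the shortest safe route while avoiding segments currently marked as blocked (e.g. by fire sensors). All node names are hypothetical.

```python
from collections import deque

def evacuation_route(graph, start, exits, blocked=frozenset()):
    """Shortest route (fewest segments) from `start` to any exit, avoiding
    nodes currently marked as blocked. `graph` maps each node to its
    adjacent nodes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] in exits:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no safe route found
```

Re-running the search whenever the sensor picture changes yields the "situation-aware" behavior: the same query returns a different route once a previously used exit is blocked.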
During the Corona pandemic, information traditionally used for corporate credit risk analysis (e.g., from the analysis of balance sheets and payment behavior) became less valuable because it represents only past circumstances. Therefore, the use of currently published data from social media platforms, which have been shown to contain valuable information regarding the financial stability of companies, should be evaluated. This data may contain, for example, additional information from disappointed employees or customers. In order to analyze to what extent this data can improve the information base for corporate credit risk assessment, this paper analyzes Twitter data regarding the ten largest insolvencies of German companies in 2020 and solvent counterparts. The results of t-tests show that sentiment before the insolvencies is significantly worse than in the comparison group, which is in line with previously conducted research. Furthermore, companies can be classified as prospectively solvent or insolvent with up to 70% accuracy by applying the k-nearest-neighbor algorithm to monthly aggregated sentiment scores. No significant difference in the number of Tweets between the two groups can be proven, which contrasts with findings from studies conducted before the Corona pandemic. The results can be utilized by practitioners and scientists to improve decision support systems in the domain of corporate credit risk analysis. From a scientific point of view, the results show that the information asymmetry between lenders and borrowers in credit relationships, which are principals and agents according to principal-agent theory, can be reduced based on user-generated content from social media platforms. In future studies, it should be evaluated to what extent the data can be integrated into established processes for credit decision making. Furthermore, additional social media platforms as well as further samples of companies should be analyzed.
Lastly, the authenticity of user generated content should be taken into account in order to ensure that credit decisions rely on truthful information only.
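The classification step described above can be sketched in a few lines: k-nearest-neighbours applied to monthly aggregated sentiment scores. This is a minimal illustration only; all numbers, vectors and labels below are invented, not data from the study.

```python
from math import dist

def knn_predict(train, query, k=3):
    """Classify a query vector by majority vote of its k nearest
    training vectors (Euclidean distance)."""
    neighbours = sorted(train, key=lambda tv: dist(tv[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Hypothetical monthly sentiment averages for six months before a cut-off
train = [
    ((-0.4, -0.5, -0.3, -0.6, -0.5, -0.7), "insolvent"),
    ((-0.3, -0.4, -0.5, -0.4, -0.6, -0.5), "insolvent"),
    (( 0.1,  0.2,  0.0,  0.1,  0.2,  0.1), "solvent"),
    (( 0.2,  0.1,  0.2,  0.0,  0.1,  0.3), "solvent"),
]
print(knn_predict(train, (-0.5, -0.4, -0.4, -0.5, -0.6, -0.4)))  # insolvent
```

In the study, such vectors would be filled with sentiment scores aggregated per company and month rather than the invented values used here.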
We present a feedback-corrected optimal scheduling approach to reduce the electrical energy demand of batch processes, exemplified by the sand preparation in a foundry. The main energy driver in the exemplary foundry is the idle time of the batch-operated sand mixers. In this novel approach, we use linear integer programming to minimize the energy demand of the sand mixers by scheduling the batches in real time. For the optimization we use a physical model of the sand preparation, which takes the dwell times of the processes into account as dead-time systems. In this paper, we present the steps required to make the optimal scheduling approach applicable to the production process. The application at the real production plant proves the performance of the suggested approach. Compared to the conventional control, the feedback-corrected optimal scheduling approach leads to a reduction in energy consumption of approximately 6.5% without modifying the process or the aggregates.
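The scheduling idea can be sketched as follows. This toy version replaces the linear integer program with an exhaustive search over start times and uses invented batch durations, deadlines and power values; the real approach additionally models dwell times as dead-time systems and runs with feedback correction.

```python
from itertools import product

# Invented toy parameters: one mixer, fixed batch duration, power draws
BATCH_DURATION = 2             # time steps the mixer runs per batch
RUN_POWER, IDLE_POWER = 20, 5  # assumed power per running/idle time step

def energy(starts):
    """Energy from the first start to the last finish: all run steps at
    RUN_POWER, the idle steps in between at IDLE_POWER."""
    span = max(starts) + BATCH_DURATION - min(starts)
    run = BATCH_DURATION * len(starts)
    return run * RUN_POWER + (span - run) * IDLE_POWER

def schedule(deadlines):
    """Pick one start time per batch (exhaustive search as a stand-in
    for the ILP solve): batches must not overlap on the single mixer
    and must finish by their deadline; idle energy is minimized."""
    options = [range(d - BATCH_DURATION + 1) for d in deadlines]
    best = None
    for starts in product(*options):
        used = {t for s in starts for t in range(s, s + BATCH_DURATION)}
        if len(used) < BATCH_DURATION * len(starts):  # overlap: infeasible
            continue
        if best is None or energy(starts) < energy(best):
            best = starts
    return best

print(schedule([4, 8]))  # back-to-back batches: no idle time in between
```

An ILP solver would express the same non-overlap and deadline conditions as linear constraints over binary start variables instead of enumerating candidates.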
The usage of microservices promises many benefits concerning scalability and maintainability; however, rewriting large monoliths is not always possible. Especially in scientific projects, pure microservice architectures are therefore not feasible in every case. We propose the utilization of microservice principles for the construction of microsimulations for urban transport. We present a prototypical architecture for the connection of MATSim and AnyLogic, two widely used simulation tools in the context of urban transport simulation. The proposed system combines the two tools into a single tool supporting civil engineers in decision making on innovative urban transport concepts.
The automated transfer of flight logbook information from aircraft into aircraft maintenance systems reduces ground and maintenance time and is thus desirable from an economic point of view. Until recently, flight logbooks have not been managed electronically in aircraft, or at least the data transfer from aircraft to ground maintenance system has been executed manually. The latest aircraft types such as the Airbus A380 or the Boeing 787 do support an electronic logbook and thus make an automated transfer possible. A generic flight logbook transfer system must deal with different data formats on the input side – due to different aircraft makes and models – as well as different, distributed aircraft maintenance systems for different airlines as aircraft operators. This article contributes the concept and top-level distributed system architecture of such a generic system for automated flight log data transfer. It has been developed within a joint industry and applied research project. The architecture has already been successfully evaluated in a prototypical implementation.
Automatic classification of scientific records using the German Subject Heading Authority File (SWD)
(2012)
The following paper deals with an automatic text classification method which does not require training documents. For this method the German Subject Heading Authority File (SWD), provided by the linked data service of the German National Library, is used. Recently the SWD was enriched with notations of the Dewey Decimal Classification (DDC). In consequence it became possible to utilize the subject headings as textual representations for the notations of the DDC. Basically, we derive the classification of a text from the classification of the words in the text given by the thesaurus. The method was tested by classifying 3826 OAI records from 7 different repositories. Mean reciprocal rank and recall were chosen as evaluation measures. A direct comparison to a machine learning method has shown that this method is definitely competitive. Thus we can conclude that the enriched version of the SWD provides high-quality information with a broad coverage for the classification of German scientific articles.
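The core idea of deriving a text's classification from the classifications of its words can be sketched as follows. The mini-thesaurus below is invented for illustration; the paper uses the full SWD enriched with DDC notations.

```python
from collections import Counter

# Invented mini-thesaurus: subject heading -> DDC notation
# (a stand-in for the SWD enriched with DDC notations)
heading_to_ddc = {
    "bibliothek": "020",
    "klassifikation": "020",
    "energie": "333.79",
    "biogas": "333.79",
}

def classify(text, top_n=2):
    """Derive a ranked list of DDC notations for a text from the
    notations assigned to the words it contains."""
    votes = Counter(
        heading_to_ddc[w] for w in text.lower().split() if w in heading_to_ddc
    )
    return [ddc for ddc, _ in votes.most_common(top_n)]

print(classify("Klassifikation und Bibliothek im Kontext Energie"))
```

Ranking the notations by vote count is what makes an evaluation by mean reciprocal rank meaningful: the correct notation may appear below the top position.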
Automated control of virtual biogas power plant networks for grid-oriented operation
(2019)
The control system VKV Netz enables the operation of virtual biogas power plant networks geared towards the provision of regional ancillary services. It thus both contributes to meeting the future increased demand for balancing power from renewable power plants and reveals alternative, sustainable revenue potentials outside the EEG for the mostly agricultural or agriculture-related biogas plant operators. The control system was developed and piloted within the BMWi joint research project VKV Netz (funding code 0325943A) by Hochschule Hannover, SLT-Technologies GmbH & Co. KG and Überlandwerk Leinetal GmbH, in cooperation with associated biogas plants, between 1 January 2016 and 31 December 2018.
This contribution first addresses the current threat landscape from an industry perspective, focusing on the field level and field devices. The central question is then what contribution field devices can and must make to future IT security in the context of highly networked production plants. Among other things, selected methods and measures necessary for securing field devices in the future are presented, based on existing standards such as IEC 62443-4-1, IEC 62443-4-2, VDI 2182-1 and VDI 2182-4, using the example of a flow meter.
The ProFormA task format was introduced to enable the exchange of programming exercises between arbitrary automatic graders. A grader sequentially executes the "tests" specified in the ProFormA task format in order to check a program submitted by a student. There is currently no grader-independent standard for structuring and presenting test results. We propose an extension of the ProFormA task format by a hierarchy of grading aspects, grouped by didactic criteria and referencing the corresponding test executions. The extension was implemented in Graja, an automatic grader for Java programs. Depending on the desired level of detail of the grading aspects, test executions have to be broken down into partial executions. We illustrate our proposal with the testing tools compiler, dynamic software test and static analysis, as well as with the use of human graders.
BYOD: Bring Your Own Device
(2013)
Using modern devices like smartphones and tablets offers a wide variety of advantages; this has made them very popular as consumer devices in private life. Using them in the workplace is popular as well. However, who wants to carry around and handle two devices, one for personal use and one for work-related tasks? That is why "dual use", using one single device for private and business applications, may represent an appropriate solution. The result is "Bring Your Own Device", or BYOD, which describes the circumstance in which users make their own personal devices available for company use. For companies, this brings both opportunities and risks. We describe and discuss organizational issues, technical approaches, and solutions.
Regional Innovation Systems describe the relations between actors, structures and infrastructures in a region in order to stimulate innovation and regional development. For these systems the collection and organization of information is crucial. In the present paper we investigate the possibilities of extracting information from the websites of companies. First we describe regional innovation systems and the information types that are necessary to create them. Then we discuss the possibilities of text mining and keyword extraction techniques for extracting this information from company websites. Finally, we describe a small-scale experiment in which keywords related to economic sectors and commodities are extracted from the websites of over 200 companies. This experiment shows what the main challenges are for information extraction from websites for regional innovation systems.
The amount of papers published yearly has been increasing for decades. Libraries need to make these resources accessible and available, with classification being an important part of this process. This paper analyzes the prerequisites and possibilities of automatic classification of medical literature. We explain the selection, preprocessing and analysis of data consisting of catalogue datasets from the library of the Hanover Medical School, Lower Saxony, Germany. In the present study, 19,348 documents, represented by notations of library classification systems such as the Dewey Decimal Classification (DDC), were classified into 514 different classes from the National Library of Medicine (NLM) classification system. The algorithm used was k-nearest-neighbours (kNN). A correct classification rate of 55.7% was achieved. To the best of our knowledge, this is not only the first research conducted on the use of the NLM classification in automatic classification but also the first approach that exclusively considers already assigned notations from other classification systems for this purpose.
Cloud Computing: Serverless
(2021)
A serverless architecture is a new approach to offering services over the Internet. It combines BaaS (Backend-as-a-Service) and FaaS (Function-as-a-Service). With a serverless architecture, no owned or rented infrastructure is needed anymore. In addition, the company no longer has to worry about scaling, as this happens automatically and immediately. Furthermore, there is no more need for maintenance work on the servers, as this is completely taken over by the provider. Administrators are also no longer needed for the same reason. Finally, many ready-made functions are offered, with which the development effort can be reduced. As a result, the serverless architecture is very well suited to many application scenarios, and it can save considerable costs (server costs, maintenance costs, personnel costs, electricity costs, etc.). The company only has to subdivide the source code of the application and upload it to the provider's server. The rest is done by the provider.
The CogALex-V Shared Task provides two datasets that consist of pairs of words along with a classification of their semantic relation. The dataset for the first task distinguishes only between related and unrelated, while the second dataset distinguishes several types of semantic relations. A number of recent papers propose to construct a feature vector that represents a pair of words by applying a simple pairwise operation to all elements of the two word vectors. Subsequently, the pairs can be classified by training any classification algorithm on these vectors. In the present paper we apply this method to the provided datasets. We find that the results are not better than the given simple baseline. We conclude that the results of the investigated method strongly depend on the type of data to which it is applied.
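The pair-representation method can be sketched as follows. The word vectors below are invented toy values, and the elementwise difference is used as one possible simple pairwise operation; the paper's vectors come from distributional semantics models.

```python
# Invented toy word vectors; in practice they come from distributional
# semantics models trained on large corpora
vec = {
    "car":   [0.9, 0.1, 0.0],
    "wheel": [0.8, 0.2, 0.1],
    "moon":  [0.0, 0.1, 0.9],
}

def pair_features(w1, w2):
    """Represent a word pair by an elementwise operation on its two
    vectors; here the difference, one possible simple pairwise operation."""
    return [a - b for a, b in zip(vec[w1], vec[w2])]

# Any classifier can then be trained on such pair vectors, e.g. on
# labelled examples like (pair_features("car", "wheel"), "related")
print(pair_features("car", "moon"))  # [0.9, 0.0, -0.9]
```

Other common pairwise operations are the elementwise product or the concatenation of the two vectors; the choice is part of what the paper evaluates against the baseline.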
A new FOSS (free and open source software) toolchain and associated workflow is being developed in the context of NFDI4Culture, a German consortium of research and cultural heritage institutions working towards a shared infrastructure for research data that meets the needs of 21st century data creators, maintainers and end users across the broad spectrum of the digital libraries and archives field, and the digital humanities. This short paper and demo present how the integrated toolchain connects: 1) OpenRefine - for data reconciliation and batch upload; 2) Wikibase - for linked open data (LOD) storage; and 3) Kompakkt - for rendering and annotating 3D models. The presentation is aimed at librarians, digital curators and data managers interested in learning how to manage research datasets containing 3D media, and how to make them available within an open data environment with 3D-rendering and collaborative annotation features.
With regard to climate change, increasing energy efficiency is still a significant issue in industry. In order to acquire energy data at the field level, so-called energy profiles can be used. They are advantageous as they are integrated into existing Industrial Ethernet standards (e.g. PROFINET). Commonly used energy profiles such as PROFIenergy and sercos Energy have been established in industrial use. However, as the Industrial Internet of Things (IIoT) continues to develop, the question arises whether the established energy profiles are sufficient to fulfil the requirements of the upcoming IIoT communication technologies. To answer this question, the paper compares and discusses the common energy profiles with respect to the current and future challenges of energy data communication. Furthermore, this analysis examines the need for further research in this field.
In this paper we describe the selection of a modern build automation tool for an industry research partner of ours, namely an insurance company. Build automation has become increasingly important over the years. Today, build automation is one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more products for build automation have entered the market and existing tools have changed their functional scope, there is nowadays a large number of tools on the market that differ greatly in their functional scope. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, gives a detailed look at one of the examined tools, and summarizes our comparison of the three tools from our final comparison round.
With increasing complexity and scale, the sufficient evaluation of Information Systems (IS) becomes a challenging and difficult task. Simulation modeling has proven to be a suitable and efficient methodology for evaluating IS and IS artifacts, provided it meets certain quality demands. However, existing research on simulation modeling quality focuses solely on quality in terms of accuracy and credibility, disregarding the role of additional quality aspects. Therefore, this paper proposes two design artifacts in order to ensure a holistic view on simulation quality. First, associated literature is reviewed in order to extract relevant quality factors in the context of simulation modeling, which can be used to evaluate the overall quality of a simulated solution before, during or after a given project. Secondly, the deduced quality factors are integrated into a quality assessment framework to provide structural guidance on the quality assessment procedure for simulation. In line with a Design Science Research (DSR) approach, we demonstrate the eligibility of both design artifacts by means of prototyping as well as an example case. Moreover, the assessment framework is evaluated and iteratively adjusted with the help of expert feedback.
In industrial production facilities, technical Energy Management Systems are used to measure, monitor and display energy consumption related information. The measurements take place at the field device level of the automation pyramid. The measured values are recorded and processed at the control level. The functionalities to monitor and display energy data are located at the MES level of the automation pyramid. Thus, the energy data from all PLCs have to be aggregated, structured and provided for higher-level systems. This contribution introduces a concept for an Energy Data Aggregation Layer, which provides the functionality described above. For the implementation of this Energy Data Aggregation Layer, a combination of AutomationML and OPC UA is used.
In microservice architectures, data is often held redundantly to create an overall resilient system. Although the synchronization of this data poses a significant challenge, not much research has been done on this topic yet. This paper shows four general approaches for ensuring consistency among services and demonstrates how to identify the best solution for a given architecture. For this, a microservice architecture which implements the functionality of a mainframe-based legacy system from the insurance industry serves as an example.
Since textual user generated content from social media platforms contains valuable information for decision support and especially corporate credit risk analysis, automated approaches for text classification such as the application of sentiment dictionaries and machine learning algorithms have received great attention in recent research on user generated content. While machine learning algorithms require individual training data sets for varying sources, sentiment dictionaries can be applied to texts immediately, whereby domain-specific dictionaries attain better results than domain-independent word lists. By means of a literature review, we evaluate how sentiment dictionaries can be constructed for specific domains and languages. Then, we construct nine versions of German sentiment dictionaries relying on a process model which we developed based on the literature review. We apply the dictionaries to a manually classified German-language data set from Twitter which has been shown to contain hints of the financial (in)stability of companies. Based on their classification accuracy, we rank the dictionaries and verify their ranking by utilizing McNemar's test for significance. Our results indicate that the significantly best dictionary is based on the German-language dictionary SentiWortschatz and an extension approach using the lexical-semantic database GermaNet. It achieves a classification accuracy of 59.19% in the underlying three-case scenario, in which the Tweets are labelled as negative, neutral or positive. A random classification would attain an accuracy of 33.3% in the same scenario, and hence automated coding by use of the sentiment dictionaries can reduce manual efforts. Our process model can be adopted by other researchers when constructing sentiment dictionaries for various domains and languages.
Furthermore, our established dictionaries can be used by practitioners, especially in the domain of corporate credit risk analysis, for automated text classification, which to date has largely been conducted manually.
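Dictionary-based sentiment classification with a three-case outcome can be sketched as follows. The word list and thresholds below are invented for illustration; the paper's dictionaries build on SentiWortschatz and GermaNet.

```python
# Invented mini sentiment dictionary (German finance-related words)
sentiment = {"insolvenz": -1.0, "verlust": -0.8, "gewinn": 0.9, "wachstum": 0.7}

def label(tweet, threshold=0.1):
    """Sum the dictionary scores of the words in a tweet and map the
    total to one of the three cases: negative, neutral or positive."""
    score = sum(sentiment.get(w, 0.0) for w in tweet.lower().split())
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

print(label("Gewinn und Wachstum"))  # positive
print(label("Insolvenz droht"))      # negative
```

Words absent from the dictionary contribute nothing, which is why the coverage and domain fit of the dictionary drive the achievable accuracy.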
For the analysis of contract texts, validated model texts, such as model clauses, can be used to identify the contract clauses in use. This paper investigates how the similarity between the titles of model clauses and the headings extracted from contracts can be computed, and which similarity measure is most suitable for this. For the calculation of the similarities between title pairs we tested various variants of string similarity and token-based similarity. We also compare two additional semantic similarity measures based on word embeddings, using pre-trained embeddings and word embeddings trained on contract texts. The identification of the model clause title can be used as a starting point for the mapping of clauses found in contracts to verified clauses.
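The two non-semantic similarity families mentioned above can be sketched as follows. The example titles are invented, and the concrete variants evaluated in the paper may differ from these two measures.

```python
from difflib import SequenceMatcher

def string_sim(a, b):
    """Character-based string similarity (Ratcliff/Obershelp ratio)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def token_sim(a, b):
    """Token-based Jaccard similarity over the two word sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Invented example: a model clause title and an extracted contract heading
model = "Haftung des Auftragnehmers"
heading = "Haftung des Auftragnehmers und Gewährleistung"
print(round(string_sim(model, heading), 2), round(token_sim(model, heading), 2))
```

Character-based measures reward shared substrings and tolerate small spelling variations, while token-based measures ignore word order; which behaviour is preferable is exactly what the comparison in the paper addresses.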
Building a well-founded understanding of the concepts, tasks and limitations of IT in all areas of society is an essential prerequisite for future developments in business and research. This applies in particular to the healthcare sector and medical research, which are affected by the noticeable advances in digitization. In the transfer project "Zukunftslabor Gesundheit" (ZLG), a teaching framework was developed to support the development of online further education courses in order to teach heterogeneous groups of learners independently of location and prior knowledge. The study at hand describes the development and the components of the framework.
The lecture first clarifies what is meant by "didactics of nursing" and to what extent general didactic objectives are relevant to nursing education. Important didactic criteria for selecting the contents of nursing education are then discussed, and at the same time questions for a future didactics of nursing are formulated.
In the conception and development of the BID degree programmes, the derivation and development of realistic planning data was, alongside considerations of content and study organization, one of the main tasks of the pilot project ("Modellversuch BID") and an essential prerequisite for their successful implementation in practice. This contribution focuses primarily on these planning results and their implementation.
Digital marketplaces can lower the costs of a trading transaction, the so-called transaction costs. Through further technical progress and intelligent trading bots, the use of the market mechanism is becoming ever cheaper. This article gives an overview of the development of digital marketplaces in the agriculture and food industry to date and of a possible future. Transaction costs will presumably continue to fall, so that further efficiency gains through the increased use of markets will become possible.
Discovery and efficient reuse of technology pictures using Wikimedia infrastructures. A proposal
(2016)
Multimedia objects, especially images and figures, are essential for the visualization and interpretation of research findings. The distribution and reuse of these scientific objects is significantly improved under open access conditions, for instance in Wikipedia articles, in research literature, as well as in education and knowledge dissemination, where licensing of images often represents a serious barrier.
Whereas scientific publications are retrievable through library portals or other online search services thanks to standardized indices, there is as yet no targeted retrieval of and access to the accompanying images and figures. Consequently there is a great demand to develop standardized indexing methods for these multimedia open access objects in order to improve the accessibility of this material.
With our proposal, we hope to serve a broad audience which looks up a scientific or technical term in a web search portal first. Until now, this audience has little chance to find an openly accessible and reusable image narrowly matching their search term on first try - frustratingly so, even if there is in fact such an image included in some open access article.
Editorial for the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016)
(2016)
Knowledge Organization Systems (KOS), in the form of classification systems, thesauri, lexical databases, ontologies, and taxonomies, play a crucial role in digital information management and applications generally. Carrying semantics in a well-controlled and documented way, Knowledge Organisation Systems serve a variety of important functions: tools for the representation and indexing of information and documents, knowledge-based support for information searchers, semantic road maps to domains and disciplines, communication tools providing a conceptual framework, and a conceptual basis for knowledge-based systems, e.g. automated classification systems. New networked KOS (NKOS) services and applications are emerging, and we have reached a stage where many KOS standards exist and the integration of linked services is no longer just a future scenario. This editorial describes the workshop outline and gives an overview of the papers presented at the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) in Hannover, Germany.
Editorial for the 17th European Networked Knowledge Organization Systems Workshop (NKOS 2017)
(2017)
Knowledge Organization Systems (KOS), in the form of classification systems, thesauri, lexical databases, ontologies, and taxonomies, play a crucial role in digital information management and applications generally. Carrying semantics in a well-controlled and documented way, Knowledge Organization Systems serve a variety of important functions: tools for the representation and indexing of information and documents, knowledge-based support for information searchers, semantic road maps to domains and disciplines, communication tools providing a conceptual framework, and a conceptual basis for knowledge-based systems, e.g. automated classification systems. New networked KOS (NKOS) services and applications are emerging, and we have reached a stage where many KOS standards exist and the integration of linked services is no longer just a future scenario. This editorial describes the workshop outline and gives an overview of the papers presented at the 17th European Networked Knowledge Organization Systems Workshop (NKOS 2017), which was held during the TPDL 2017 Conference in Thessaloniki, Greece.
We present an approach for the object-oriented modelling, simulation and animation of information systems, together with a procedure model for creating requirements or system specifications of computer-based information systems using this approach. The approach is based on a metamodel for the description of computer-based information systems and provides a computer-supported modelling environment. The use of the method is illustrated by a project developing a requirements specification for a computer-based nursing documentation and communication system.
Automatically gradable programming exercises define tests that are applied to submissions. Since test results cannot be equated with grading results, we propose a description format that maps test results to grading results. Teachers can adapt the mapping rules to their teaching context. The proposal can be used independently of the graders involved, the user interfaces employed and the programming language to be learned. The format is based on nested grading categories, complemented by a concept of nullifications.
This contribution originated within the research focus Herbar Digital at the Fachhochschule Hannover. It presents a novel business process model for the generation and digitization of herbarium specimens, into which a production control component is embedded by means of a process pattern. This approach contributes to the development of precise process controlling that is intended to enable herbaria to carry out the mass digitization of herbarium specimens efficiently.
An interface data model of variability in automatically graded programming exercises
(2018)
Automatically graded, variable programming exercises place special interface requirements on automatic graders and learning management systems (LMS). To foster the reuse of exercises across system boundaries, we propose instantiating exercise templates through a middleware used by all participating systems, transporting variability information in an interface data model. We present such a data model, designed for grader-independent communication with LMS and implemented exemplarily in the grader Graja. In addition, a dialog component for the manual selection of values is presented, which can be used efficiently and grader-independently even with large value sets. The suitability of the dialog and the data model is discussed on the basis of a typical grading scenario.
The increasing share of renewable energies in Germany's electricity production requires an equally increasing share of renewables in the provision of balancing power to stabilize the electricity grids. Due to the possibility of temporally decoupling gas and electricity production, biogas technology is particularly suitable for providing balancing power. This contribution outlines a control system for virtual biogas power plant networks whose primary objective is the stabilization of the electricity grid. The system is being developed within the research project VKV Netz and is funded by the Federal Ministry for Economic Affairs and Energy.
The future rising demand for balancing power from renewable power plants as well as declining EEG tariff structures in the biogas sector make the development of alternative operating and remuneration models necessary. This contribution outlines an economic compensation system for virtual biogas power plant networks. It describes which costs and revenues are generated in virtual biogas networks when they are operated semi-automatically with a focus on regional grid stability. The economic compensation system is part of the control system for virtual biogas power plant networks to be developed in the research project VKV Netz (http://vkvnetz.de).
The influence of Industrie 4.0 on the applicability of load management in industrial production
(2018)
Today, technical energy management systems (tEnMS) in the manufacturing industry mostly serve to measure, store and evaluate energy consumption data. However, a tEnMS can also predict and actively influence the energy consumption of production environments. Such functions are referred to as forecasting and load management functions. Industrial production environments are undergoing a transformation in the course of Industrie 4.0. This contribution shows how tEnMS are affected by this transformation and which opportunities arise from it for future tEnMS.
The research project "Herbar Digital" [JKS00] started in 2007 with the goal of digitizing the holdings of more than 3.5 million dried plants and plant parts mounted on paper sheets (herbarium specimens) at the Botanical Museum Berlin. Since the collector of a plant is occasionally unknown, the present work develops a method for identifying the writer from letters written in cursive script. To this end, the static letter has to be transformed into a dynamic form. This is done with the model of an inert ball rolling through the handwriting. This offline writer identification uses various techniques such as approximating the writing line of individual letters by, for example, Legendre polynomials. Using only a single letter per writer, an average recognition rate of 40% is achieved. Combining several letters increases the recognition rate considerably: with 13 letters and 93 writers of an international database it reaches 98.6%.
Our work is motivated primarily by the lack of standardization in the area of Event Processing Network (EPN) models. We identify general requirements for such models. These requirements encompass the possibility to describe events in the real world, to establish temporal and causal relationships among the events, to aggregate the events, to organize the events into a hierarchy, to categorize the events as simple or complex, to create an EPN model in an easy and simple way, and to use that model ad hoc. As its major contribution, this paper applies the identified requirements to the RuleCore model.
Nowadays, REST is the dominant architectural style of choice, at least for newly created web services. So-called RESTfulness has thus become a catchword for web applications which aim to expose parts of their functionality as RESTful web services. But are those web services actually RESTful? This paper examines the RESTfulness of ten popular RESTful APIs (including Twitter and PayPal). For this examination, the paper defines REST, its characteristics as well as its pros and cons. Furthermore, Richardson's Maturity Model is presented and utilized to analyse the selected APIs regarding their RESTfulness. As an example, a simple RESTful web service is provided as well.
Renewable energy production is one of the strongest rising markets, and further extreme growth can be anticipated due to the desire for increased sustainability in many parts of the world. With the rising adoption of renewable power production, such facilities are increasingly attractive targets for cyber attacks. At the same time, higher requirements on reliable production are imposed. In this paper we propose a concept that improves the monitoring of renewable power plants by detecting anomalous behavior. The system does not only detect an anomaly, it also provides reasoning for the anomaly based on a specific mathematical model of the expected behavior, giving detailed information about the various influential factors causing the alert. The set of influential factors can be configured in the system before learning normal behavior. The concept is based on multidimensional analysis and has been implemented and successfully evaluated on actual data from different providers of wind power plants.
This paper presents a possibility to extend the formalism of linear indexed grammars (LIGs). The extension is based on the use of tuples of pushdowns instead of a single pushdown to store indices during a derivation. If a restriction on the accessibility of the pushdowns is imposed, it can be shown that the resulting formalisms give rise to a hierarchy of languages that is equivalent to a hierarchy defined by Weir. For this equivalence, which was already known for a slightly different formalism, this paper gives a new proof. Since all languages of Weir's hierarchy are known to be mildly context-sensitive, the proposed extensions of LIGs become comparable with extensions of tree adjoining grammars and head grammars.
Our research project, "Rationalizing the virtualization of botanical document material and their usage by process optimization and automation (Herbar-Digital)", started on July 1, 2007 and will last until 2012. Its long-term aim is the digitization of the more than 3.5 million specimens in the Berlin Herbarium. The University of Applied Sciences and Arts in Hannover collaborates with the department of Biodiversity Informatics at the BGBM (Botanic Garden and Botanical Museum Berlin-Dahlem), headed by Walter Berendsohn. The part of Herbar-Digital presented here deals with the analysis of the generated high-resolution images (10,400 × 7,500 pixels).
Flatness-based feedforward control is an approach for combining fast motion with low oscillations in nonlinear or flexible drive systems. Its desired trajectories must be continuously differentiable up to the degree of the system order. Designing such trajectories so that they also reach the dynamic system limits poses a challenge. Common solutions, like Gevrey functions, usually require lengthy offline calculations. To achieve a quicker and simpler industrially suited solution, this paper presents a new online trajectory generation scheme. The algorithm uses higher-order s-curve trajectories created by a cyclic filtering process based on moving-average filters. An experimental validation proves the capability as well as the industrial applicability of the presented approach for flexible structures like stacker cranes.
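The core filtering idea can be sketched in a few lines; the window sizes and the rectangular input profile below are illustrative, not the paper's parameters:

```python
# Sketch of higher-order s-curve generation by cascaded moving-average
# filtering: each pass over a rectangular velocity set-point raises the
# smoothness order of the resulting trajectory.

def moving_average(signal, window):
    """Causal moving-average filter; pads with the initial value."""
    padded = [signal[0]] * (window - 1) + list(signal)
    return [sum(padded[i:i + window]) / window for i in range(len(signal))]

# rectangular velocity profile (discontinuous) ...
v = [0.0] * 5 + [1.0] * 20 + [0.0] * 10
# ... filtered twice: the ramps turn into smooth s-shaped transitions
order2 = moving_average(moving_average(v, 4), 4)
```

Because each filter pass is a cheap running sum, the scheme lends itself to cyclic online computation on industrial controllers, in contrast to offline trajectory design.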
Contribution to the workshop "Informationskompetenz im Norden" on 1 February 2018 at the library and information system of the Carl von Ossietzky University of Oldenburg.
It first outlines which approaches and projects the Schreibwerkstatt (writing workshop) pursues to foster information and writing processes at Hochschule Hannover.
Since the library and the writing workshop share goals and target groups and overlap in content, examples of cooperation and the benefits of collaboration are presented.
Generalized legal documents, in which the positions in the text of the individual characteristics of a contract are known, can be used, first, to support the approval process for new contracts in an automated way and, second, to serve as a contract generator providing pre-selected new legal documents. In this contribution, using known legal texts, we show how formulaic text passages can be identified and frequent individual characteristics can be classified in order to be used as template sections. Areas of application are presented and existing potential for legal tech applications is pointed out.
Techno-economic analyses that allocate costs to the energy flows of energy systems are helpful for understanding the formation of costs within processes and for increasing cost efficiency. For the economic evaluation, the usefulness or quality of the energy is of great importance. In exergy-based methods, this is considered by allocating costs to the exergy instead of the energy. As exergy represents the ability to perform work, it is often called the useful part of energy. In contrast, the anergy, the part of energy which cannot perform work, is often assumed not to be useful.
However, heat flows as used, e.g., in domestic heating are always a mixture of a relatively small portion of exergy and a large portion of anergy. Although of lower quality, the anergy is obviously useful for these applications. The question is whether it makes sense to differentiate between exergy and anergy and take both properties into account in the economic evaluation.
To answer this question, a new methodical concept based on the definition of an anergy-exergy cost ratio is compared to the commonly applied approaches of considering either energy or exergy as the basis for the economic evaluation. These three approaches to the economic analysis of thermal energy systems are applied to an exemplary heating system with thermal storages. It is shown that the results of the techno-economic analysis can be improved by giving anergy an economic value and that the proposed anergy-exergy cost ratio allows a flexible adaptation of the evaluation depending on the economic constraints of a system.
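The exergy/anergy split of a heat flow follows from the Carnot factor; the numbers and the cost-ratio parameter below are made-up illustration values, not results from the paper:

```python
# Illustrative split of a heat flow into exergy and anergy using the
# Carnot factor (1 - T0/T), and a cost allocation that also gives
# anergy an economic value via an anergy-exergy cost ratio.

def split_heat(q_kw, t_supply_k, t_ambient_k):
    """Exergy and anergy shares of a heat flow at temperature t_supply_k."""
    carnot = 1.0 - t_ambient_k / t_supply_k
    return q_kw * carnot, q_kw * (1.0 - carnot)   # (exergy, anergy)

def specific_cost(cost_eur, exergy, anergy, anergy_exergy_cost_ratio):
    """Cost per exergy-equivalent, valuing anergy at a fraction of exergy."""
    return cost_eur / (exergy + anergy_exergy_cost_ratio * anergy)

# 10 kW of heat at 70 °C with 15 °C ambient: mostly anergy
ex, an = split_heat(10.0, 343.15, 288.15)
```

With a ratio of zero the allocation collapses to the pure exergy-based approach; a positive ratio spreads the cost over the anergy share as well, which is the lever the proposed concept adds.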
The world economic crisis of 1929 ended a "golden age". It lastingly changed the international community of nations, among other things with respect to world trade, financial flows and unemployment. The effects of our current crisis seem comparable; the initial situation, causes and responsibilities, however, are fundamentally different. No textbook and no lecture has prepared us for this form of crisis. Nor is there any experience in economic policy that could serve as a basis for coping with a crisis of this dimension. But even though the crisis continues, we can already observe today that the consequences will turn out differently and lead to long-term, far-reaching changes. With our symposium we offer explanatory approaches and discuss responsibility and consequences. Three contributions introduce the topic from different perspectives.
"Grappa" is a middleware specialized in connecting various automatic graders to various e-learning front ends and learning management systems (LMS). A prototype has been in use at Hochschule Hannover for several semesters with the LMS "moodle" and the backend "aSQLg" and is evaluated regularly. This contribution presents the current state of development of Grappa after various new developments and enhancements. After a report on recent experiences with the aforementioned combination of systems, we present major innovations in the moodle plugins that control Grappa from within moodle. We then present an extension of the existing architecture in the form of a newly developed Grappa PHP client for a more efficient connection of LMS. Furthermore, we report on the integration of another automatic grader, "Graja", for programming assignments in Java. The report shows that important steps towards a uniform presentation of automated program assessment to students in LMS with different graders have already been completed. However, practical experience also shows that further development work is required, both for each system component individually and for their interplay via Grappa, in order to further increase acceptance and use among students and teachers.
In the context of modern mobility, topics such as smart cities, Car2Car communication, extensive vehicle sensor data, e-mobility and charging point management systems have to be considered. These topics often have in common that they are characterized by complex and extensive data situations. Vehicle position data, sensor data and vehicle communication data must be preprocessed, aggregated and analyzed. In many cases, the data is interdependent. For example, the position data of electric vehicles and of surrounding charging points depend on one another and characterize a competition situation between the vehicles. In the case of Car2Car communication, the positions of the vehicles must also be viewed in relation to each other: the data are dependent on each other and influence whether communication can be established. This interdependency can give rise to very complex and large data situations which can no longer be handled efficiently. In this work, a model is presented for representing such typical data situations with strong interdependencies among the data. Microservices can help reduce the complexity.
The technical, environmental and economic potential of hemp fines as a natural filler in bioplastics to produce biocomposites is the subject of this study, giving a holistic overview. Hemp fines are an agricultural by-product of hemp fibre and shive production. Shives and fibres are used, for example, in paper, animal bedding or composites. About 15 to 20 wt.% of each kilogram of hemp straw ends up as hemp fines after processing. In 2010 about 11,439 metric tons of hemp fines were produced in Europe. Hemp fines are an inhomogeneous material which includes hemp dust, shives and fibres. For these examinations, the hemp fines were sieved in a further step with a tumbler sieving machine to obtain more clearly specified fractions. The untreated hemp fines (ex works) as well as the sieved fractions were compounded with a polylactide polymer (PLA) using a co-rotating twin-screw extruder to produce biocomposites with different hemp fine contents. Standard test bars were produced with an injection moulding machine to conduct several material tests. Hemp fines increase the Young's modulus and reduce the impact strength. At contents above 15 wt.%, hemp fines also improve the environmental (global warming potential) and economic performance in comparison to pure PLA.
Complications may occur after a liver transplantation; therefore, proper monitoring and care in the post-operation phase play a very important role. Sometimes, monitoring and care for patients from abroad are difficult for a variety of reasons, e.g., different care facilities. The objective of our research for this paper is to design, implement and evaluate a home monitoring and decision support infrastructure for international children who underwent a liver transplant operation. A point-of-care device and the PedsQL questionnaire were used in the patients' home environment for measuring blood parameters and assessing quality of life. Using a tablet PC and specially developed software, the measured results could be transmitted to the health care providers via the internet. So far, the developed infrastructure has been evaluated with four international patients/families, who transferred 38 blood test records. The evaluation showed that the home monitoring and decision support infrastructure is technically feasible, is able to give timely alarms in case of abnormal situations, and may increase parents' feeling of safety for their children.
Context: Agile software development (ASD) puts social aspects like communication and collaboration into focus. Thus, one may assume that the specific work organization of companies impacts the work of ASD teams. A major change in work organization is the switch to a 4-day work week, which some companies have investigated in experiments. Also, recent studies show that ASD teams have been affected by the switch to remote work since the outbreak of the Covid-19 pandemic in 2020.
Objective: Our study presents empirical findings on the effects on ASD teams operating remotely in a 4-day work week organization. Method: We performed a qualitative single case study, conducted seven semi-structured interviews, observed 14 agile practices, and screened eight project documents and protocols of agile practices.
Results: We found that the teams adapted the agile method in use due to the change to a 4-day work week and the switch to remote work. The productivity of the two ASD teams did not decrease. Although the stress level of the ASD team members increased due to the 4-day work week, we found that the job satisfaction of the individual ASD team members was affected positively. Finally, we point to effects on social facets of the ASD teams.
Conclusion: The research community benefits from our results, as the current state of research on the effects of a 4-day work week on ASD teams is limited. Also, our findings provide several practical implications for ASD teams working remotely in a 4-day work week.
Social skills are essential for a successful understanding of agile methods in software development. Several studies highlight the opportunities and advantages of integrating real-world projects and problems into higher education using agile methods while collaborating with companies. This integration offers several opportunities and advantages for both the students and the company: the students are able to interact with real-world software development teams, analyze and understand their challenges, and identify possible measures to tackle them. However, integrating real-world problems and companies is complex and may require a high coordination and preparation effort for the course. The challenges related to interaction and communication with students were increased by virtual distance teaching during the Covid-19 pandemic, as direct contact with students was missing. Also, we do not know how students value problem-based learning in virtual distance teaching. This paper presents our adapted eduScrum approach and the learning outcomes of integrating experiments with real-world software development teams from two companies into a Master of Science course organized as virtual distance teaching. The evaluation shows that students value analyzing real-world problems using agile methods. They highlight the interaction with real-world software development teams. Also, the students appreciate the organization of the course as an iterative approach with eduScrum. Based on our findings, we present four recommendations for integrating agile methods and real-world problems into higher education in virtual distance teaching settings. The results of our paper contribute to the practitioner and researcher/lecturer community, as we provide valuable insights into how to bridge the gap between practice and higher education in virtual distance settings.
This paper describes the approach of the Hochschule Hannover to the SemEval 2013 task Evaluating Phrasal Semantics. In order to compare a single word with a two-word phrase, we compute various distributional similarities, among them a new similarity measure based on the Jensen-Shannon divergence with a correction for frequency effects. The classification is done by a support vector machine that uses all similarities as features. The approach turned out to be the most successful one in the task.
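The Jensen-Shannon divergence underlying the new similarity measure can be computed in a few lines; this sketch omits the paper's frequency correction and uses toy distributions:

```python
# Jensen-Shannon divergence between two discrete probability
# distributions (context distributions of two words, in this setting).
# Symmetric and bounded by 1 when using log base 2.
from math import log2

def jsd(p, q):
    """Jensen-Shannon divergence of two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    kl = lambda a, b: sum(ai * log2(ai / bi)
                          for ai, bi in zip(a, b) if ai > 0)
    return (kl(p, m) + kl(q, m)) / 2

identical = jsd([0.5, 0.5], [0.5, 0.5])   # 0.0: same distribution
disjoint  = jsd([1.0, 0.0], [0.0, 1.0])   # 1.0: maximally different
```

Low divergence between the context distributions of a word and a phrase then serves as one similarity feature among the several fed to the SVM.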
The Logical Observation Identifiers Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step towards this goal. In this work we report our ongoing efforts in implementing LOINC in our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes, of which 209 are already available for routine laboratory data. In our experience, mapping local terms to LOINC is a largely manual and time-consuming process owing to language issues and the expert knowledge of local laboratory procedures required.
Nowadays, smartphones and sensor devices can provide a variety of information about a user's current situation. So far, many recommender systems neglect this kind of information and thus cannot provide situation-specific recommendations. Situation-aware recommender systems adapt to changes in the user's environment and are therefore able to offer recommendations that are more appropriate for the current situation. In this paper, we present a software architecture that enables situation awareness for arbitrary recommendation techniques. The proposed system considers both (semi-)static user profiles and volatile situational knowledge to obtain meaningful recommendations. Furthermore, the implementation of the architecture in a museum of natural history is presented, which uses Complex Event Processing to achieve situation awareness.
Integrating distributional and lexical information for semantic classification of words using MRMF
(2016)
Semantic classification of words using distributional features is usually based on the semantic similarity of words. We show on two different datasets that a classifier trained directly on the distributional features gives better results. We use Support Vector Machines (SVM) and Multirelational Matrix Factorization (MRMF) to train classifiers. Both give similar results. However, MRMF, which had not been used for semantic classification with distributional features before, can easily be extended with more matrices containing more information from different sources on the same problem. We demonstrate the effectiveness of the novel approach by including information from WordNet. Thus we show that MRMF provides an interesting approach for building semantic classifiers that (1) gives better results than unsupervised approaches based on vector similarity, (2) gives similar results to other supervised methods, and (3) can naturally be extended with other sources of information in order to improve the results.
Complex Event Processing (CEP) has established itself as a well-suited software technology for processing high-frequency data streams. However, intelligent stream-based systems must integrate stream data with semantic background knowledge. In this work, we investigate different approaches to integrating stream data and semantic domain knowledge. In particular, we discuss two different architectures from a software engineering perspective: an approach adding an ontology access mechanism to a common Continuous Query Language (CQL) is compared with C-SPARQL, a streaming extension of the RDF query language SPARQL.
In many cases, a surface pretreatment must be carried out before bonding, since adhesive joints with untreated parts often exhibit insufficient bond strength and/or inadequate ageing resistance. Various processes are available for pretreating bonding surfaces. If technically sound joints can be produced with several treatments, the process must be identified that can best be integrated into the production flow and causes the lowest costs. Occupational safety and environmental protection must also be taken into account. Evaluation criteria are given for assessing the processes. Finally, the processes are briefly characterized.
Autonomous and integrated passenger and freight transport (APFIT) is a promising approach to tackling both traffic- and last-mile-related issues such as environmental emissions, social and spatial conflicts, and operational inefficiencies. By conducting an agent-based simulation, we shed light on this widely unexplored research topic and provide first indications regarding influential target figures of such a system in the rural area of Sarstedt, Germany. Our results show that larger fleets entail inefficiencies due to suboptimal utilization of monetary and material resources and increase traffic volume, while larger numbers of unused vehicles may exacerbate spatial conflicts. Nevertheless, to meet the given demand within our study area, a comparatively large fleet of about 25 vehicles is necessary to provide reliable service, assuming maximum passenger waiting times of six minutes, at the expense of higher standby times, rebalancing effort, and higher costs for vehicle acquisition and maintenance.
After a kidney transplantation, graft rejection must be prevented. Therefore, a multitude of patient parameters is observed pre- and postoperatively. To support this process, the Screen Reject research project is developing a data warehouse optimized for kidney rejection diagnostics. In the course of this project it was discovered that important information is only available in the form of free text instead of structured data and can therefore not be processed by standard ETL tools, which is necessary to establish a digital expert system for rejection diagnostics. For this reason, data integration has been improved by combining methods from natural language processing with methods from image processing. Based on state-of-the-art data warehousing technologies (Microsoft SSIS), a generic data integration tool has been developed. The tool was evaluated by extracting the Banff classification from 218 pathology reports and extracting HLA mismatches from about 1,700 PDF files, both written in German.
1. Introduction / initial situation – effects of uncoordinated processes – process-level shop-floor control 2. Deficits, process requirements – data requirements profile – data-volume funnel model – arguments for using a production control station 3. Integrated use of the control station – top-down approach – integrated control loops – hierarchical planning and control concept 4. Distinction between control station, PPS and shop-floor data collection (BDE) functions – resource availability requirements – control station functional scope – integrated target logistics flow 5. Characteristics of the second control station generation – knowledge-based control station use – event control 6. Requirements-oriented control station introduction – CIM house model – employee requirements
Autonomous mobile six-legged robots are able to demonstrate the potential of intelligent control systems based on recurrent neural networks. The robots evaluate only two forward- and two backward-looking infrared sensor signals. Fast-converging genetic training algorithms are applied to train the robots to move straight in six directions. The robots performed successfully within an obstacle environment, and a useful, never-trained interaction between the individual robots could be observed. The paper describes the robot systems and presents the test results. Video clips can be downloaded at www.inform.fh-hannover.de/download/lechner.php. Presented at the IFAC International Conference on Intelligent Control Systems and Signal Processing (ICONS 2003, April 2003, Portugal).
The impact of vertical and horizontal integration in the context of Industry 4.0 requires new concepts for the security of industrial Ethernet protocols. The defense-in-depth concept, based on the combination of several measures, especially separation and segmentation, needs to be complemented by integrated protection measures for industrial real-time protocols. To meet this challenge, existing protocols need to be equipped with additional functionality to ensure the integrity and availability of network communication, even in environments where attackers may be present. In order to show a possible way to upgrade an existing protocol, this paper describes a security concept for the industrial Ethernet protocol PROFINET.
This paper presents a cascaded methodology for enhancing the path accuracy of industrial robots by using advanced control schemes. It includes kinematic calibration as well as dynamic modeling and identification. This is followed by a centralized model-based compensation of the robot dynamics. The implemented feed-forward torque control shows the expected improvements in control accuracy. However, external measurements reveal the influence of joint elasticities as systematic path errors. To further increase the accuracy, an iterative learning controller (ILC) based on external camera measurements is designed. Its implementation yields significant improvements in path accuracy. By means of a kind of automated "Teach-In", an overall effective concept for the automated calibration and optimization of the accuracy of industrial robots in highly dynamic path applications is realized.
Microservices is an architectural style for complex application systems that promises crucial benefits, e.g. better maintainability, flexible scalability, and fault tolerance. For this reason, microservices have attracted attention in the software development departments of different industry sectors, such as e-commerce and streaming services. On the other hand, businesses face great challenges which hamper the adoption of the architectural style. For instance, data are often persisted redundantly to provide fault tolerance, but synchronizing those data for the sake of consistency is a major challenge. Our paper presents a case study from the insurance industry which focuses on consistency issues when migrating a monolithic core application towards microservices. Based on the Domain-Driven Design (DDD) methodology, we derive bounded contexts and a set of microservices assigned to these contexts. We discuss four different approaches to ensuring consistency and propose a best practice to identify the most appropriate approach for a given scenario. Design and implementation details as well as compliance issues are presented.
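One commonly discussed way to synchronize redundantly persisted data is event-based replication with eventual consistency; the sketch below is a generic illustration of that idea, not one of the paper's four approaches, and the service and event names are invented:

```python
# Hedged sketch of eventual consistency via domain events: the owning
# service publishes a change event, and other services update their
# redundant copies when the event arrives.

class EventBus:
    """Trivial in-process stand-in for a message broker."""
    def __init__(self):
        self.subscribers = []
    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

bus = EventBus()

contracts = {}          # "contract" service: system of record
claims_replica = {}     # "claims" service: redundant read copy

def on_contract_changed(event):
    claims_replica[event["id"]] = event["holder"]

bus.subscribers.append(on_contract_changed)

def update_contract(cid, holder):
    contracts[cid] = holder
    bus.publish({"id": cid, "holder": holder})   # replica converges

update_contract("C-1", "Alice")
```

Between the write and the event delivery the replica is stale, which is exactly the consistency trade-off such migrations have to weigh against stronger (and more tightly coupled) alternatives.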
Against the background of climate change and finite fossil resources, bio-based plastics have been a focus of research for the last decade and have been identified as a promising alternative to fossil-based plastics. Now, with an evolving bio-based plastic market and application range, the environmental advantages of bio-based plastics have come to the fore and been identified as crucial by different stakeholders. While the majority of assessments for bio-based plastics are carried out based on attributional life cycle assessment, only a few consequential studies have been done in this area. Also, the application of eco-design strategies has not been a focus for bio-based products due to the prevailing misconception that using renewable materials (as feedstock for bio-based plastics) is in itself an 'eco-design strategy'. In this paper, we discuss the life cycle assessment as well as eco-design strategies of a bio-based product, taking attributional as well as consequential approaches into account.
CONTENTS: 1. Introduction and positioning 2. Japanese values 3. Elements of lean production 4. Manufacturing segmentation 5. Information management with CIM and logistics components 6. Logistics-oriented structures of lean production 7. Implementation of lean production 8. Summary
In distributional semantics, words are represented by aggregated context features. The similarity of words can be computed by comparing their feature vectors. Thus, we can predict whether two words are synonymous or similar with respect to some other semantic relation. We show on six different datasets of pairs of similar and non-similar words that a supervised learning algorithm on feature vectors representing pairs of words outperforms cosine similarity between vectors representing single words. We compared different methods to construct a feature vector representing a pair of words. We show that simple methods like pairwise addition or multiplication give better results than a recently proposed method that combines different types of features. The semantic relation we consider is the relatedness of terms in thesauri for intellectual document classification. Thus our findings can directly be applied to the maintenance and extension of such thesauri. To the best of our knowledge this relation has not been considered before in the field of distributional semantics.
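Constructing a pair feature vector by pairwise addition or multiplication can be sketched as follows; the toy vectors stand in for real aggregated context counts:

```python
# Sketch of building a feature vector for a *pair* of words from the two
# single-word context vectors, via element-wise addition or multiplication,
# as compared in the paper. Vectors here are toy values, not corpus data.

def pair_features(v, w, mode="mul"):
    """Combine two word vectors into one pair vector."""
    if mode == "add":
        return [a + b for a, b in zip(v, w)]
    return [a * b for a, b in zip(v, w)]     # element-wise product

car  = [0.9, 0.1, 0.4]
auto = [0.8, 0.2, 0.5]
x = pair_features(car, auto)   # input to a supervised classifier
```

The combined vector `x`, labeled with whether the pair is related in the thesaurus, is then what the supervised learner is trained on, instead of a single cosine score per pair.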
In mid-2007 the project "Herbar-Digital" was started at the Fachhochschule Hannover. In this research project, as many objects as possible are to be recognized on the 3.5 million paper sheets (herbarium specimens) of the Botanical Museum Berlin and made separately processable. The objects comprise barcodes, envelopes, stamps, colour charts, elements of the plant material, and handwriting and print. Using the AdaBoost algorithm, the author aims to realize an object recognition system with the following properties: the position of the objects to be recognized in the image is variable; three-dimensional objects and objects with weak contours must also be recognized; identical objects of different shapes must be recognizable; and the system must be capable of learning.
The research project "Herbar Digital" started in 2007 with the goal of digitizing the holdings of more than 3.5 million dried plants or plant parts on paper sheets (herbarium specimens) of the Botanical Museum Berlin. The author's task is the analysis of the high-resolution images with 10,400 rows and 7,500 columns. The herbarium sheets can also contain various objects such as envelopes with additional plant parts, printed or handwritten labels, colour charts, scales, stamps, barcodes, coloured "type labels" and handwritten annotations directly on the sheet. The written annotations, especially handwritten ones, are of particular interest. Commercial OCR software often cannot locate writing in complex environments such as those frequently found on herbarium sheets, where writing is placed between leaves, roots and other objects. In the following, a method is presented that makes it possible to find passages of writing in the image automatically.
We present a novel long short-term memory (LSTM) approach for time-series prediction of the sand demand that arises from preparing the sand moulds for the iron casting process of a foundry. With our approach, we contribute to qualifying LSTM networks and their combination with feedback-corrected optimal scheduling for industrial processes.
The sand is produced in an energy intensive mixing process which is controlled by optimal scheduling. The optimal scheduling is solved for a fixed prediction horizon. One major influencing factor is the sand demand, which is highly disturbed, for example due to production interruptions. The causes of production interruptions are in general physically unknown. We assume that information about the future behavior of the sand demand is included in current and past process data. Therefore, we choose LSTM networks for predicting the time-series of the sand demand.
The sand demand prediction is performed by our multi-model approach. This approach outperforms the currently used naive estimation, even when predicting far into the future. Our LSTM-based prediction approach can forecast the sand demand with a conformity of up to 38% and a mean value accuracy of approximately 99%. Simulating the optimal scheduling with sand demand prediction leads to an improvement in energy savings of approximately 1.1% compared to the naive estimation. The application of our novel approach at the real production plant of a foundry confirms the simulation results and verifies the capability of our approach.
The bachelor's degree programme Mediendesigninformatik at Hochschule Hannover is a computer science programme with the specific application area of media design. In contrast to media informatics programmes, the application focus lies on creative design, for instance of 3D models, animations and computer games. Graduates of the programme should be able to work at the interface between computer science and media design, for example in the creation of user interfaces and VR/AR applications. This article presents the curriculum of the interdisciplinary programme and, after the graduation of the first two student cohorts, reflects on the experiences by comparing the original goals with university statistics and the results of two student surveys.
At the library of the Fachhochschule Hannover (FHH), the relocation of a faculty and the outsourcing of the corresponding holdings offered the chance to improve the deficient situation with regard to student workplaces. The previous open concept, with media shelving, individual workplaces and group workplaces side by side, had led to considerable disturbances for those library users still reading quietly. Increasing the number of workplaces while improving working conditions with regard to acoustics and air conditioning was therefore the primary goal of a student project of the FHH's interior design programme. The result was a coherent overall concept with a strict separation of functional areas and a multitude of individual measures (usable atriums, a library lounge, study cabins, etc.). After the university management decided to finance the conversion from tuition fees, the planning and step-by-step realization of the conversion measures began in cooperation between the university (library management, property department), the state building management, a commissioned independent architectural office and an acoustician. The talk explains the concept, the planning process, and the construction measures planned and realized so far (as of May 2008) with numerous illustrations.
Hadoop is a Java-based open source programming framework that supports the processing and storage of large data sets in a distributed computing environment. At the same time, an overwhelming majority of organizations are moving their big data processing and storage to the cloud to take advantage of cost reduction: the cloud eliminates the need to invest heavily in infrastructure that may or may not actually be used. This paper shows how organizations can alleviate some of the obstacles faced when trying to run Hadoop in the cloud.
Since the bondability of plastics is becoming ever more important as their use in structural applications grows, and since pre-treatments based on environmentally harmful processes can only be used with reservations, environmentally friendly pre-treatments such as low-pressure plasma treatment (hereafter NDP treatment, from the German Niederdruckplasma) are likely to gain increasing importance. The environmental friendliness of the process stems from the fact that no spent pickling baths accumulate that would have to be disposed of at great expense, and that the process runs in a closed system, so that an uncontrolled escape of pollutants is impossible. Any pollutants that may arise during NDP treatment occur only in very small quantities and can easily be captured and post-treated. Since the gases used are non-toxic, they pose no hazard.
NOA is a search engine for scientific images from open access publications based on full text indexing of all text referring to the images and filtering for disciplines and image type. Images will be annotated with Wikipedia categories for better discoverability and for uploading to WikiCommons. Currently we have indexed approximately 2.7 million images from over 710,000 scientific papers from all fields of science.
Portable micro combined heat and power (CHP) units are a gateway technology bridging conventional vehicles and Battery Electric Vehicles (BEV). As this is a new technology, new software has to be created that can be easily adapted to changing requirements. We propose and evaluate three different architectures based on three architectural paradigms. Using a scenario-based evaluation, we conclude that a Service-Oriented Architecture (SOA) using microservices provides a higher quality solution than a layered or Event-Driven Complex-Event-Processing (ED-CEP) approach. Future work will include implementation and simulation-driven evaluation.
The dependency of word similarity in vector space models on the frequency of words has been noted in a few studies but has received very little attention. We study the influence of word frequency in a set of 10,000 randomly selected word pairs for a number of different combinations of feature weighting schemes and similarity measures. We find that the similarity of word pairs for all methods, except for the one using singular value decomposition to reduce the dimensionality of the feature space, is determined to a large extent by the frequency of the words. In a binary classification task of pairs of synonyms and unrelated words, we find that for all similarity measures the results can be improved when we correct for the frequency bias.
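The frequency correction described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact procedure: it assumes the bias is roughly linear in log frequency and removes it by fitting and subtracting that trend, leaving residual similarities for the synonym classification task.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two word vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def frequency_corrected_similarity(sims, log_freqs):
    """Remove a (hypothetical, linear) frequency bias from raw similarities.

    sims      -- raw cosine similarities of word pairs
    log_freqs -- log of the combined corpus frequency of each pair
    A linear trend of similarity on log frequency is fitted and
    subtracted; the residuals are, to first order, frequency-free.
    """
    sims = np.asarray(sims, dtype=float)
    log_freqs = np.asarray(log_freqs, dtype=float)
    slope, intercept = np.polyfit(log_freqs, sims, 1)
    return sims - (slope * log_freqs + intercept)
```

The residual scores can then be thresholded to separate synonyms from unrelated pairs without frequent words dominating the ranking.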
In this paper we investigate how concreteness and abstractness are represented in word embedding spaces. We use data for English and German, and show that concreteness and abstractness can be determined independently and turn out to be exactly opposite directions in the embedding space. Various methods can be used to determine the direction of concreteness, always resulting in roughly the same vector. Though concreteness is a central aspect of the meaning of words and can be detected clearly in embedding spaces, it does not seem as easy to subtract or add concreteness to words to obtain other words or word senses, as can be done with a semantic property such as gender.
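One common way to extract such a direction, used here as an illustrative sketch rather than the paper's specific method, is to take the difference of the centroids of concrete and abstract seed words and project other words onto it:

```python
import numpy as np

def concreteness_direction(concrete_vecs, abstract_vecs):
    # direction = difference of the centroids of concrete and abstract
    # seed-word vectors, normalized to unit length
    d = np.mean(concrete_vecs, axis=0) - np.mean(abstract_vecs, axis=0)
    return d / np.linalg.norm(d)

def concreteness_score(word_vec, direction):
    # projection of a word vector onto the concreteness direction;
    # higher values = more concrete
    return float(word_vec @ direction)
```

With real embeddings, the seed sets would be words rated concrete (e.g. "table") and abstract (e.g. "justice") in a norm dataset; the toy vectors in any demonstration are placeholders.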
In parcel delivery, the “last mile” from the parcel hub to the customer is costly, especially for time-sensitive delivery tasks that have to be completed within hours after arrival. Recently, crowdshipping has attracted increased attention as a new alternative to traditional delivery modes. In crowdshipping, private citizens (“the crowd”) perform short detours in their daily lives to contribute to parcel delivery in exchange for small incentives. However, achieving desirable crowd behavior is challenging, as the crowd is highly dynamic and consists of autonomous, self-interested individuals. Leveraging crowdshipping for time-sensitive deliveries remains an open challenge. In this paper, we present an agent-based approach to on-time parcel delivery with crowds. Our system performs data stream processing on the couriers’ smartphone sensor data to predict delivery delays. Whenever a delay is predicted, the system attempts to forge an agreement for transferring the parcel from the current deliverer to a more promising courier nearby. Our experiments show that through accurate delay predictions and purposeful task transfers, many delays that would otherwise occur can be prevented.
Industrial Control Systems (ICS) are exposed to an ever-evolving variety of threats, and these threats are growing in number and complexity. This requires a holistic and up-to-date security concept for the ICS as a whole. Usually, security concepts are applied and updated based on regularly performed ICS security assessments. Such assessments require high effort and extensive knowledge about ICS and ICS security. This is often a problem for small and medium-sized enterprises (SME), which frequently lack sufficient, or sufficiently skilled, human resources. This paper first defines requirements on the knowledge needed to perform an ICS security assessment and on the life cycle of this knowledge. Afterwards, the ICS security knowledge and its life cycle are developed and discussed in light of the requirements and related work.
The German Corona Consensus (GECCO) established a uniform dataset in FHIR format for exchanging and sharing interoperable COVID-19 patient-specific data between university health information systems (HIS). To share this COVID-19 information with sites that use openEHR, the data have to be converted into the FHIR format. In this paper, we introduce our solution, a web tool named “openEHR-to-FHIR” that converts compositions from an openEHR repository and stores them in their respective GECCO FHIR profiles. The tool provides a REST web service for ad hoc conversion of openEHR compositions to FHIR profiles.
This paper presents a novel approach for modelling the energy consumption of the coupled parallel moulding sand mixers of a foundry as an optimal control problem. Energy consumption is minimized by scheduling the mixing processes in a linear integer programming scheme. The sand flow through the foundry’s sand preparation is characterized by a physical model, which treats the sand demand of the moulding machine as a disturbance and tracks the sand masses stored in the mixer hoppers and machine hoppers, respectively. The novel approach of handling dwell times for dosing, mixing, and transport processes using dead-time systems and constraint pushing allows the application of a linear model. The formulation of the optimal control problem aims at real-time application as model predictive control at the production plant. Initial application results indicate an improvement in energy consumption of approximately 8%.
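The scheduling idea, choosing when each mixer runs so that the combined load stays low, can be illustrated with a toy model. The sketch below is not the paper's integer program: it ignores dwell times, contiguity of mixing runs, and hopper dynamics, and it uses brute-force enumeration (feasible only for tiny instances) in place of an ILP solver, purely to show the peak-load objective.

```python
from itertools import combinations

def schedule_mixers(horizon, runtimes, power):
    """Toy stand-in for the scheduling problem: pick active time slots
    for each mixer so that the peak combined power draw is minimal.

    horizon  -- number of discrete time slots
    runtimes -- number of slots each mixer must be active
    power    -- power draw of one active mixer
    """
    best, best_peak = None, float("inf")
    # all possible slot assignments per mixer
    choices = [list(combinations(range(horizon), r)) for r in runtimes]

    def search(i, chosen):
        nonlocal best, best_peak
        if i == len(choices):
            load = [0.0] * horizon
            for slots in chosen:
                for t in slots:
                    load[t] += power
            peak = max(load)
            if peak < best_peak:
                best, best_peak = [list(s) for s in chosen], peak
            return
        for slots in choices[i]:
            search(i + 1, chosen + [slots])

    search(0, [])
    return best, best_peak
```

In the paper's formulation, the same objective is encoded with binary decision variables and linear constraints and solved as a linear integer program suitable for model predictive control.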