Regional knowledge maps are a tool recently demanded by institutional actors to support regional policy and innovation in a territory. Knowledge maps also facilitate interaction between a territory's actors and collective learning. This paper reports work in progress on a research project whose objective is to define a methodology for efficiently designing territorial knowledge maps by extracting information from large volumes of data contained in diverse information sources related to a region. Knowledge maps facilitate the management of intellectual capital in organisations. This paper investigates the value of applying this tool to a territorial region to manage the structures, infrastructures and resources that enable regional innovation and regional development. Their design involves identifying the information sources required to determine which knowledge is located in a territory, which actors are involved in innovation, and in which context this innovation develops (structures, infrastructures, resources and social capital). The paper summarizes the theoretical background and framework for the design of a methodology for the construction of knowledge maps, and gives an overview of the main challenges in designing regional knowledge maps.
An Interface Data Model for Variability in Automatically Graded Programming Exercises
(2018)
Automatically graded, variable programming exercises place particular interface requirements on graders and learning management systems (LMS). To encourage the reuse of exercises across system boundaries, we propose instantiating exercise templates through a middleware shared by all participating systems, transporting the variability information in an interface data model. We present such a data model, designed for grader-independent communication with LMS and implemented exemplarily in the grader Graja. We also present a dialog component for manual value selection that can be used efficiently and grader-independently even with large value sets. The suitability of the dialog and of the data model is discussed using a typical grading scenario.
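To illustrate the idea of instantiating exercise templates with variability information, a minimal sketch follows; the placeholder syntax and all names are our own, not the proposed interface data model:

```python
from dataclasses import dataclass

@dataclass
class VariationPoint:
    # A named placeholder in the exercise template and its allowed values.
    name: str
    values: list

@dataclass
class TaskTemplate:
    text: str                  # template text with {name} placeholders
    points: list               # variation points to resolve

def instantiate(template: TaskTemplate, choice: dict) -> str:
    """Materialize a concrete exercise from chosen variation-point values."""
    for p in template.points:
        if choice[p.name] not in p.values:
            raise ValueError(f"illegal value for variation point {p.name!r}")
    return template.text.format(**choice)

tpl = TaskTemplate(
    text="Write a method that sorts {n} integers in {order} order.",
    points=[VariationPoint("n", [10, 100]),
            VariationPoint("order", ["ascending", "descending"])],
)
print(instantiate(tpl, {"n": 100, "order": "ascending"}))
```

In the proposed architecture this instantiation step would live in the shared middleware, so neither the LMS nor the grader needs to understand the template mechanics.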
Automatically gradable programming exercises define tests that are applied to submissions. Since test results are not the same as grading results, we propose a description format that maps test results to grading results. Teachers can adapt the mapping rules to their teaching context. The proposal can be used independently of the graders involved, of the user interfaces employed, and of the programming language being taught. The format is based on nested grading categories, which are complemented by a concept of nullifications.
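The aggregation over nested grading categories with nullifications could look roughly like this sketch; field names and the weighting scheme are illustrative assumptions, not the proposed format:

```python
def grade(category: dict) -> float:
    """Recursively aggregate weighted scores over nested grading categories.
    A triggered nullification zeroes the category regardless of its children."""
    if category.get("nullified"):
        return 0.0
    children = category.get("children")
    if children is None:
        # Leaf category: a test result already mapped to a score in [0, 1].
        return category["weight"] * category["score"]
    return sum(grade(c) for c in children)

rubric = {
    "children": [
        {"weight": 0.5, "score": 1.0},   # functional tests: full marks
        {"weight": 0.5, "score": 0.5},   # style checks: half the marks
    ]
}
print(grade(rubric))  # 0.75
```

A nullification (e.g. for a plagiarized submission) set on the root would override the aggregated result, which is the point of separating test results from grading results.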
The ProFormA task format was introduced to enable the exchange of programming exercises between arbitrary graders. A grader sequentially executes the "tests" specified in the ProFormA task format in order to check a program submitted by a student. There is currently no cross-grader standard for structuring and presenting the test results. We propose an extension of the ProFormA task format by a hierarchy of grading aspects, grouped by didactic considerations and referencing the corresponding test executions. The extension was implemented in Graja, a grader for Java programs. Depending on the desired level of detail of the grading aspects, test executions have to be broken up into partial executions. We illustrate our proposal with the test tools compiler, dynamic software test, and static analysis, as well as with human graders.
The methods developed in the research project "Herbar Digital" are intended to help plant taxonomists master the large amount of material, about 3.5 million dried plants on paper sheets, belonging to the Botanic Museum Berlin in Germany. Frequently the collector of a plant is unknown, so a procedure had to be developed to determine the writer of the handwriting on the sheet. In the present work the static character is transformed into a dynamic form. This is done with the model of an inert ball which is rolled through the written character. During this off-line writer recognition, different mathematical procedures are used, such as the reproduction of the writing line of individual characters by Legendre polynomials. When only one character is used, a recognition rate of about 40% is obtained. By combining multiple characters, the recognition rate rises considerably and reaches 98.7% with 13 characters and 93 writers (chosen randomly from the international IAM database [3]). Another approach tries to identify the writer by handwritten words. The word is cut out, transformed into a 6-dimensional time series and compared, e.g. by means of DTW methods. A global statistical approach using whole handwritten sentences results in a similar recognition rate of more than 98%. By combining the methods, a recognition rate of 99.5% is achieved.
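The word-based comparison mentioned above relies on dynamic time warping (DTW). A minimal, self-contained sketch of a DTW distance over sequences of feature vectors (the 6-dimensional feature extraction and any band constraints used in the project are omitted):

```python
import math

def dtw(a, b):
    """Dynamic-time-warping distance between two sequences of feature vectors.
    Classic O(n*m) dynamic program with Euclidean local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            # Best of: insertion, deletion, match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A stretched copy of the same curve has distance 0 under DTW.
print(dtw([(0.0,), (1.0,), (2.0,)], [(0.0,), (1.0,), (1.0,), (2.0,)]))  # 0.0
```

This time-axis invariance is what makes DTW suitable for comparing handwriting samples of differing writing speed and length.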
The Covid-19 pandemic has led to a significant increase in remote work. The change in interaction and collaboration has been a challenge for many agile teams. Various studies show different effects on the collaboration of agile teams during the pandemic: communication has become more factual and goal-oriented, and a decline in social exchange within the teams is reported. Our article addresses the change in interaction in agile teams caused by remote work. We conducted a qualitative case study with an agile software development team at Otto. Our results show a connection between the effects on interaction and the personal autonomy of the team members. Furthermore, we found no significant negative effects of the changed interaction on the agile way of working.
Discovery and efficient reuse of technology pictures using Wikimedia infrastructures. A proposal
(2016)
Multimedia objects, especially images and figures, are essential for the visualization and interpretation of research findings. The distribution and reuse of these scientific objects are significantly improved under open access conditions, for instance in Wikipedia articles, in research literature, and in education and knowledge dissemination, where licensing of images often represents a serious barrier.
Whereas scientific publications are retrievable through library portals or other online search services thanks to standardized indices, there is as yet no targeted retrieval of and access to the accompanying images and figures. Consequently, there is great demand for standardized indexing methods for these multimedia open access objects in order to improve their accessibility.
With our proposal, we hope to serve a broad audience that looks up a scientific or technical term in a web search portal first. Until now, this audience has had little chance of finding an openly accessible and reusable image narrowly matching their search term on the first try, frustratingly so even if such an image is in fact included in some open access article.
This contribution first addresses the current threat landscape from an industry perspective, with a focus on the field level and field devices. It then centrally discusses the question of what contribution field devices can and must make to future IT security in the context of highly networked production plants. Based on existing standards such as IEC 62443-4-1, IEC 62443-4-2, VDI 2182-1 and VDI 2182-4, selected methods and measures necessary for securing field devices in the future are presented using the example of a flow meter.
The automated transfer of flight logbook information from aircraft into aircraft maintenance systems leads to reduced ground and maintenance time and is thus desirable from an economic point of view. Until recently, flight logbooks were not managed electronically in aircraft, or at least the data transfer from aircraft to ground maintenance system was executed manually. The latest aircraft types, such as the Airbus A380 or the Boeing 787, do support an electronic logbook and thus make an automated transfer possible. A generic flight logbook transfer system must deal with different data formats on the input side, due to different aircraft makes and models, as well as with different, distributed aircraft maintenance systems for different airlines as aircraft operators. This article contributes the concept and top-level distributed system architecture of such a generic system for automated flight log data transfer. It has been developed within a joint industry and applied research project. The architecture has already been successfully evaluated in a prototypical implementation.
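The adapter idea behind such a generic transfer system can be sketched as follows; all class names, field names and the common schema are hypothetical, not taken from the project:

```python
class LogbookAdapter:
    """Normalizes a make/model-specific logbook record into a common schema."""
    def parse(self, raw: dict) -> dict:
        raise NotImplementedError

class A380Adapter(LogbookAdapter):
    def parse(self, raw):
        # Hypothetical A380 field names on the input side.
        return {"tail": raw["acReg"], "defect": raw["techLogEntry"]}

class B787Adapter(LogbookAdapter):
    def parse(self, raw):
        # Hypothetical B787 field names on the input side.
        return {"tail": raw["registration"], "defect": raw["fault_text"]}

ADAPTERS = {"A380": A380Adapter(), "B787": B787Adapter()}

def transfer(aircraft_type: str, raw: dict, maintenance_system: list) -> None:
    """Route one raw logbook record through the matching adapter into the
    airline's maintenance system (here a plain list as a stand-in)."""
    maintenance_system.append(ADAPTERS[aircraft_type].parse(raw))

sink = []
transfer("A380", {"acReg": "D-AIMA", "techLogEntry": "cabin light inop"}, sink)
print(sink)
```

New aircraft types then only require a new adapter, while the output side toward the distributed maintenance systems stays unchanged.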
In the conception and development of the BID degree programmes, the derivation and development of realistic planning data was, alongside considerations of content and study organisation, one of the main tasks of the BID pilot project (Modellversuch BID) and an essential prerequisite for their successful implementation in practice. This contribution focuses primarily on these planning results and their implementation.
In the present paper we sketch an automated procedure to compare different versions of a contract. The contract texts used for this purpose are PDF files of differing structural composition that are converted into structured XML files by identifying and classifying text boxes. A classifier trained on manually annotated contracts achieves an accuracy of 87% on this task. We align contract versions and classify aligned text fragments into different similarity classes that support the manual comparison of changes between document versions. The main challenges are dealing with OCR errors and with differing layouts of identical or similar texts. We demonstrate the procedure using some freely available contracts from the City of Hamburg written in German. The methods, however, are language-agnostic and can be applied to other contracts as well.
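In its simplest form, the classification of aligned fragment pairs into similarity classes might look like this sketch; the thresholds and class names are illustrative, not those of the paper:

```python
from difflib import SequenceMatcher

def similarity_class(a: str, b: str) -> str:
    """Bucket an aligned fragment pair for reviewer triage."""
    ratio = SequenceMatcher(None, a, b).ratio()
    if ratio == 1.0:
        return "identical"
    if ratio >= 0.9:
        return "near-identical"   # e.g. OCR noise or a single changed value
    if ratio >= 0.6:
        return "modified"
    return "replaced"

print(similarity_class("Der Vertrag endet am 31.12.2020.",
                       "Der Vertrag endet am 31.12.2021."))  # near-identical
```

The "near-identical" bucket is the interesting one in practice: it surfaces fragments where a reviewer must decide whether the difference is an OCR artifact or a substantive change such as a new date.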
The reuse of scientific raw data is a key demand of Open Science. In the project NOA we foster the reuse of scientific images by collecting them and uploading them to Wikimedia Commons. In this paper we present a text-based annotation method that proposes Wikipedia categories for open access images. The assigned categories can be used for image retrieval or to upload images to Wikimedia Commons. The annotation consists of two phases: extracting salient keywords and mapping these keywords to categories. The results are evaluated on a small set of open access images that were manually annotated.
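The two-phase annotation can be sketched as follows; the stopword list, the frequency heuristic and the keyword-to-category table are illustrative stand-ins (NOA derives its category mapping from Wikipedia data):

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "a", "in", "and", "is", "for", "to", "on"}

# Hypothetical keyword-to-category table; in NOA this mapping is built
# from Wikipedia article/category links, not hand-written.
CATEGORY_MAP = {
    "laser": "Category:Laser_technology",
    "spectroscopy": "Category:Spectroscopy",
}

def salient_keywords(caption: str, k: int = 3):
    """Phase 1: pick the k most frequent non-stopword terms of the caption."""
    words = [w for w in re.findall(r"[a-z]+", caption.lower())
             if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(k)]

def propose_categories(caption: str):
    """Phase 2: map the extracted keywords to known categories."""
    return [CATEGORY_MAP[w] for w in salient_keywords(caption)
            if w in CATEGORY_MAP]

print(propose_categories("Laser spectroscopy of the laser diode output"))
```

Keywords without a category entry (here "diode") are simply dropped, which mirrors the precision-oriented nature of category proposal for uploads.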
For the analysis of contract texts, validated model texts, such as model clauses, can be used to identify the contract clauses employed. This paper investigates how the similarity between titles of model clauses and headings extracted from contracts can be computed, and which similarity measure is most suitable for this. For the calculation of the similarities between title pairs we tested various variants of string similarity and token-based similarity. We also compare two additional semantic similarity measures based on word embeddings, using pre-trained embeddings and word embeddings trained on contract texts. The identification of the model clause title can serve as a starting point for mapping clauses found in contracts to verified clauses.
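For illustration, two of the non-semantic measure families can be sketched in a few lines; the tokenization and the example title pair are ours, not from the paper's evaluation:

```python
from difflib import SequenceMatcher

def token_jaccard(a: str, b: str) -> float:
    """Token-based similarity: overlap of the two word sets (Jaccard index)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def char_ratio(a: str, b: str) -> float:
    """Character-based string similarity via matching blocks."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

pair = ("Haftung des Auftragnehmers", "Haftung Auftragnehmer")
print(round(token_jaccard(*pair), 2), round(char_ratio(*pair), 2))
```

The example shows why the choice of measure matters: an inflected word ("Auftragnehmers" vs. "Auftragnehmer") ruins the token overlap, while the character-based measure still scores the pair highly.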
In order to ensure validity in legal texts like contracts and case law, lawyers rely on standardised formulations that are written carefully but also represent a kind of code whose meaning and function are known to all legal experts. Using directed (acyclic) graphs to represent standardized text fragments, we are able to capture variations concerning time specifications, slight rephrasings, names, places and also OCR errors. We show how such text fragments can be found by sentence clustering, pattern detection and clustering of patterns. To test the proposed methods, we use two corpora of German contracts and court decisions, specially compiled for this purpose. The entire process for representing standardised text fragments is, however, language-agnostic. We analyze and compare both corpora, give a quantitative and qualitative analysis of the text fragments found, and present a number of examples from both corpora.
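A drastically simplified stand-in for the graph representation: token-aligned variant sentences collapsed into slots of alternatives. The real method aligns fragments of unequal length and builds a true DAG; this sketch assumes variants of equal token length:

```python
def merge_variants(variants):
    """Collapse token-aligned variant sentences into per-position slots of
    alternatives: a flat stand-in for the DAG of a formulaic text fragment."""
    token_rows = [v.split() for v in variants]
    return [sorted(set(tokens)) for tokens in zip(*token_rows)]

slots = merge_variants([
    "Der Vertrag endet am 31.12.2020 .",
    "Der Vertrag endet am 30.06.2021 .",
])
print(slots)  # slot 4 holds the two date alternatives
```

Slots with a single alternative form the fixed, formulaic skeleton; slots with several alternatives mark the variable positions (dates, names, places) the paper's graphs are designed to capture.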
Legal documents often have a complex layout with many different headings, headers and footers, side notes, etc. For further processing, it is important to extract these individual components correctly from a legally binding document, for example a signed PDF. A common approach is to classify each (text) region of a page using its geometric and textual features. This approach works well when the training and test data have a similar structure and when the documents of a collection to be analyzed have a rather uniform layout. We show that the use of global page properties can improve the accuracy of text element classification: we first classify each page into one of three layout types. After that, we can train a classifier for each of the three page types and thereby improve the accuracy on a manually annotated collection of 70 legal documents consisting of 20,938 text elements. When we split by page type, we achieve an improvement from 0.95 to 0.98 for single-column pages with left marginalia and from 0.95 to 0.96 for double-column pages. We developed our own feature-based method for page layout detection, which we benchmark against a standard implementation of a CNN image classifier. The approach presented here is based on a corpus of freely available German contracts and general terms and conditions.
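A rule-based sketch of the first stage, classifying a page into one of three layout types from text-box geometry; the thresholds and the reduced feature set are illustrative only (the paper uses a trained, feature-based classifier):

```python
def page_type(boxes, page_width=595):
    """Crude geometric heuristic over one page's text regions.
    boxes: list of (x0, x1) horizontal extents of the text boxes."""
    mid = page_width / 2
    # Boxes confined to the left edge suggest marginalia.
    left_margin = sum(1 for x0, x1 in boxes if x1 < 0.3 * page_width)
    # Boxes starting right of the middle suggest a second column.
    right_col = sum(1 for x0, x1 in boxes if x0 > mid)
    if left_margin / len(boxes) > 0.3:
        return "single-column with left marginalia"
    if right_col / len(boxes) > 0.3:
        return "double-column"
    return "single-column"

print(page_type([(50, 150), (50, 160), (200, 560), (200, 555), (210, 560)]))
```

Once each page carries such a label, a separate text-element classifier per page type can exploit layout-specific regularities, which is where the reported accuracy gains come from.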
Both the corpus and all manual annotations are made freely available. The method is language-agnostic.
Generalised legal documents, in which the positions of the individual characteristics of a contract within the text are known, can be used, first, to support the approval process for new contracts in an automated way and, second, to serve as a contract generator providing pre-selected new legal documents. In this contribution we show, using known legal texts, how formulaic text passages can be identified and how frequent individual characteristics can be classified so that they can serve as template sections. Application areas are presented and existing potential for legal tech applications is pointed out.
Building a well-founded understanding of the concepts, tasks and limitations of IT in all areas of society is an essential prerequisite for future developments in business and research. This applies in particular to the healthcare sector and medical research, which are affected by the noticeable advances in digitization. In the transfer project “Zukunftslabor Gesundheit” (ZLG), a teaching framework was developed to support the creation of online continuing-education courses that address heterogeneous groups of learners independent of location and prior knowledge. This study describes the development and the components of the framework.
Complexes like iron(II)-triazoles exhibit spin crossover behavior at ambient temperature and are often considered for possible applications. In previous studies, we implemented complexes of this type into polymer nanofibers and first polymer-based optical waveguide sensor systems. In our current study, we synthesized complexes of this type, implemented them into polymers and obtained composites through drop casting and doctor blading. We show that a certain combination of polymer and complex can lead to composites with high potential for optical devices. For this purpose, we used two different complexes, [Fe(atrz)3](2-ns)2 and [Fe(atrz)3]Cl1.5(BF4)0.5, with different polymers for each composite. We show through transmission measurements and UV/VIS spectroscopy that the optical properties of these composite materials can change reversibly due to the spin crossover effect.
The Gravitational Search Algorithm is a swarm-based optimization metaheuristic that has been successfully applied to many problems. However, to date little analytical work has been done on this topic.
This paper performs a mathematical analysis of the formulae underlying the Gravitational Search Algorithm. From this analysis, it derives key properties of the algorithm's expected behavior and recommendations for parameter selection. It then confirms through empirical examination that these recommendations are sound.
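For illustration, a compact and simplified variant of the algorithm's update equations is sketched below. Parameter names follow the common formulation (G0, alpha), but the values are arbitrary and refinements such as Kbest elitism are omitted:

```python
import math
import random

def gsa(objective, dim, bounds, n_agents=20, iters=60,
        g0=100.0, alpha=20.0, seed=42):
    """Minimal Gravitational Search Algorithm for minimization.

    Agents are candidate solutions; better fitness yields a larger
    mass, and agents attract each other with a gravity-like force
    whose constant G decays exponentially over the iterations.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    V = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")

    for t in range(iters):
        fit = [objective(x) for x in X]
        b, w = min(fit), max(fit)
        if b < best_f:
            best_f, best_x = b, list(X[fit.index(b)])
        # normalized masses: the best agent gets mass 1, the worst 0
        m = [(f - w) / (b - w) if b != w else 1.0 for f in fit]
        M = [mi / sum(m) for mi in m]
        G = g0 * math.exp(-alpha * t / iters)  # decaying gravitational constant
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                R = math.dist(X[i], X[j])  # Euclidean distance between agents
                for d in range(dim):
                    # a_i += rand * G * M_j * (x_j - x_i) / (R + eps)
                    acc[d] += rng.random() * G * M[j] * (X[j][d] - X[i][d]) / (R + 1e-12)
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return best_x, best_f
```

Running the sketch on a simple sphere function, e.g. `gsa(lambda v: sum(c * c for c in v), dim=2, bounds=(-5.0, 5.0))`, shows the characteristic behavior: strong exploration while G is large, contraction around the best-so-far position as G decays.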
REST is nowadays the dominant architectural style of choice, at least for newly created web services. So-called RESTfulness has thus become a catchword for web applications that aim to expose parts of their functionality as RESTful web services. But are those web services indeed RESTful? This paper examines the RESTfulness of ten popular RESTful APIs (including Twitter and PayPal). For this examination, the paper defines REST, its characteristics as well as its pros and cons. Furthermore, Richardson's Maturity Model is presented and utilized to analyse the selected APIs regarding their RESTfulness. As an example, a simple RESTful web service is provided as well.
A Look at Service Meshes
(2021)
Service meshes can be seen as an infrastructure layer for microservice-based applications that is specifically suited for distributed application architectures. The goal is to introduce the concept of service meshes and its use for microservices with the example of an open source service mesh called Istio. This paper gives an introduction to the service mesh concept and its relation to microservices. It also gives an overview of selected features provided by Istio as relevant to the above concept and provides a small sample setup that demonstrates the core features.
Microservices have meanwhile become an established software engineering vehicle, which more and more companies are examining and adopting for their development work. Naturally, reference architectures based on microservices come to mind as a valuable thing to utilize. Initial results for such architectures have been published in generic and in domain-specific form. To the best of our knowledge, however, a domain-specific reference architecture based on microservices that takes into account the specifics of the insurance industry is still missing. Jointly with partners from the German insurance industry, we take initial steps to fill this gap in the present article. Thus, we aim towards a microservices-based reference software architecture for (at least German) insurance companies. As the main results of this article we provide an initial such reference architecture together with a deeper look into two important parts of it.
Even for the more traditional insurance industry, the Microservices Architecture (MSA) style plays an increasingly important role in provisioning insurance services. However, insurance businesses must operate legacy applications, enterprise software, and service-based applications in parallel for a more extended transition period. The ultimate goal of our ongoing research is to design a microservice reference architecture in cooperation with our industry partners from the insurance domain that provides an approach for the integration of applications from different architecture paradigms. In Germany, individual insurance services are classified as part of the critical infrastructure. Therefore, German insurance companies must comply with the requirements of the Federal Office for Information Security, which the Federal Supervisory Authority enforces. Additionally, insurance companies must comply with relevant laws, regulations, and standards as part of the business’s compliance requirements. Note: since Germany is seen as relatively ’tough’ with respect to privacy and security demands, fulfilling those demands might well be suitable (if not even ’over-achieving’) for insurances in other countries as well. The question thus arises of how insurance services can be secured in an application landscape shaped by the MSA style so as to comply with the architectural and security requirements depicted above. This article highlights the specific regulations, laws, and standards the insurance industry must comply with. We present initial architectural patterns to address authentication and authorization in an MSA tailored to the requirements of our insurance industry partners.
To avoid the shortcomings of traditional monolithic applications, the Microservices Architecture (MSA) style plays an increasingly important role in providing business services. This is true even for the more conventional insurance industry with its highly heterogeneous application landscape and sophisticated cross-domain business processes. Therefore, the question arises of how workflows can be implemented so as, on the one hand, to grant the required flexibility and agility and, on the other hand, to exploit the potential of the MSA style. In this article, we present two different approaches: orchestration and choreography. Using an application scenario from the insurance domain, both concepts are discussed. We introduce a pattern that outlines the mapping of a workflow to a choreography.
The transfer of historically grown monolithic software architectures into modern service-oriented architectures creates a lot of loose coupling points. This can lead to unforeseen system behavior and can significantly impede such continuous modernization processes, since it is not clear where bottlenecks in a system arise. It is therefore necessary to monitor such modernization processes with an adaptive monitoring concept in order to be able to correctly record and interpret unpredictable system dynamics. For this purpose, a general measurement methodology and a specific implementation concept are presented in this work.
Microservices is an architectural style for complex application systems, promising some crucial benefits, e.g. better maintainability, flexible scalability, and fault tolerance. For this reason, microservices have attracted attention in the software development departments of different industry sectors, such as e-commerce and streaming services. On the other hand, businesses face great challenges which hamper the adoption of the architectural style. For instance, data are often persisted redundantly to provide fault tolerance, but synchronizing those data for the sake of consistency is a major challenge. Our paper presents a case study from the insurance industry which focuses on consistency issues when migrating a monolithic core application towards microservices. Based on the Domain-Driven Design (DDD) methodology, we derive bounded contexts and a set of microservices assigned to these contexts. We discuss four different approaches to ensure consistency and propose a best practice to identify the most appropriate approach for a given scenario. Design and implementation details and compliance issues are presented as well.
In microservice architectures, data is often held redundantly to create an overall resilient system. Although the synchronization of this data poses a significant challenge, not much research has been done on this topic yet. This paper shows four general approaches for assuring consistency among services and demonstrates how to identify the best solution for a given architecture. For this, a microservice architecture which implements the functionality of a mainframe-based legacy system from the insurance industry serves as an example.
Cloud Computing: Serverless
(2021)
A serverless architecture is a new approach to offering services over the Internet. It combines BaaS (Backend-as-a-Service) and FaaS (Function-as-a-Service). With a serverless architecture, no owned or rented infrastructure is needed anymore. In addition, the company no longer has to worry about scaling, as this happens automatically and immediately. Furthermore, there is no more need for maintenance work on the servers, as this is completely taken over by the provider. Administrators are also no longer needed for the same reason. Finally, many ready-made functions are offered, with which the development effort can be reduced. As a result, the serverless architecture is very well suited to many application scenarios, and it can save considerable costs (server costs, maintenance costs, personnel costs, electricity costs, etc.). The company only has to subdivide the source code of the application and upload it to the provider’s server. The rest is done by the provider.
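The FaaS part can be illustrated with a minimal, provider-agnostic handler sketch. The event and response shapes mimic common FaaS conventions, but the field names here are assumptions, not any specific provider's API:

```python
import json

def handler(event, context=None):
    """A minimal FaaS-style function: stateless and invoked per request.

    The provider injects `event` (request data) and `context` (runtime
    metadata) and takes care of servers, scaling and maintenance.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Such a function is the unit the company uploads; everything around it (routing, scaling to zero, patching the runtime) remains the provider's responsibility.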
In this paper we describe the selection of a modern build automation tool for an industry research partner of ours, namely an insurance company. Build automation has become increasingly important over the years and is today one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more products for build automation have entered the market and existing tools have changed their functional scope, there is nowadays a large number of tools on the market that differ greatly in their functional scope. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools, and a summary of our comparison of all three tools from the final comparison round.
We present an approach towards a data acquisition system for digital twins that uses a 5G network for data transmission and localization. The current hardware setup, which utilizes stereo vision and LiDAR for 3D mapping, is explained together with two recorded point cloud data sets. Furthermore, a resulting digital twin composed of voxelized point cloud data is shown. Ideas for future applications and challenges regarding the system are discussed, and an outlook on further development is given.
To learn a subject, the acquisition of the associated technical language is important. Despite this widely accepted importance of learning the technical language, hardly any studies have been published that describe the characteristics of the technical languages students are supposed to learn. This might largely be due to the absence of specialized text corpora for studying such languages at the lexical, syntactical and textual level. In the present paper we describe a corpus of German physics texts that can be used to study the language used in physics. A large and a small variant were compiled. The small version of the corpus consists of 5.3 million words and is available on request.
Autonomous mobile six-legged robots are able to demonstrate the potential of intelligent control systems based on recurrent neural networks. The robots evaluate only two forward- and two backward-looking infrared sensor signals. Fast-converging genetic training algorithms are applied to train the robots to move straight in six directions. The robots performed successfully within an obstacle environment, and a useful, never explicitly trained interaction between the single robots could be observed. The paper describes the robot systems and presents the test results. Video clips can be downloaded at www.inform.fh-hannover.de/download/lechner.php. Presented at the IFAC International Conference on Intelligent Control Systems and Signal Processing (ICONS 2003, April 2003, Portugal).
The number of papers published yearly has been increasing for decades. Libraries need to make these resources accessible and available, with classification being an important aspect and part of this process. This paper analyzes prerequisites and possibilities of automatic classification of medical literature. We explain the selection, preprocessing and analysis of data consisting of catalogue datasets from the library of the Hanover Medical School, Lower Saxony, Germany. In the present study, 19,348 documents, represented by notations of library classification systems such as the Dewey Decimal Classification (DDC), were classified into 514 different classes from the National Library of Medicine (NLM) classification system. The algorithm used was k-nearest-neighbours (kNN). A correct classification rate of 55.7% could be achieved. To the best of our knowledge, this is not only the first research conducted towards the use of the NLM classification in automatic classification but also the first approach that exclusively considers already assigned notations from other classification systems for this purpose.
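The idea of classifying via already assigned notations can be roughly sketched as a kNN over notation sets. The similarity measure (Jaccard) and the example notations below are hypothetical stand-ins; the study's actual features and distance measure may differ:

```python
from collections import Counter

def jaccard(a, b):
    """Overlap of two notation sets (e.g. notations assigned to a document)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def knn_predict(train, notations, k=3):
    """Predict a target class by majority vote over the k training
    documents whose notation sets are most similar to `notations`.

    `train` is a list of (notation_set, target_class) pairs.
    """
    neighbours = sorted(train, key=lambda t: jaccard(t[0], notations),
                        reverse=True)[:k]
    return Counter(cls for _, cls in neighbours).most_common(1)[0][0]
```

A new document, represented only by its existing notations from other schemes, is thus mapped to the class most common among its nearest catalogued neighbours.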
Fall events and their severe consequences represent not only a threatening problem for the affected individual, but also cause a significant burden for health care systems. Our research work aims to elucidate some of the prospects and problems of current sensor-based fall risk assessment approaches. Selected results of a questionnaire-based survey given to experts during topical workshops at international conferences are presented. The majority of domain experts confirmed that fall risk assessment could potentially be valuable for the community and that prediction is deemed possible, though limited. We conclude with a discussion of practical issues concerning adequate outcome parameters for clinical studies and data sharing within the research community. All participants agreed that sensor-based fall risk assessment is a promising and valuable approach, but that more prospective clinical studies with clearly defined outcome measures are necessary.
Editorial for the 17th European Networked Knowledge Organization Systems Workshop (NKOS 2017)
(2017)
Knowledge Organization Systems (KOS), in the form of classification systems, thesauri, lexical databases, ontologies, and taxonomies, play a crucial role in digital information management and applications generally. Carrying semantics in a well-controlled and documented way, Knowledge Organization Systems serve a variety of important functions: tools for the representation and indexing of information and documents, knowledge-based support for information searchers, semantic road maps to domains and disciplines, communication tools providing a conceptual framework, and a conceptual basis for knowledge-based systems, e.g. automated classification systems. New networked KOS (NKOS) services and applications are emerging, and we have reached a stage where many KOS standards exist and the integration of linked services is no longer just a future scenario. This editorial describes the workshop outline and gives an overview of the papers presented at the 17th European Networked Knowledge Organization Systems Workshop (NKOS 2017), which was held during the TPDL 2017 Conference in Thessaloniki, Greece.
Editorial for the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016)
(2016)
Knowledge Organization Systems (KOS), in the form of classification systems, thesauri, lexical databases, ontologies, and taxonomies, play a crucial role in digital information management and applications generally. Carrying semantics in a well-controlled and documented way, Knowledge Organization Systems serve a variety of important functions: tools for the representation and indexing of information and documents, knowledge-based support for information searchers, semantic road maps to domains and disciplines, communication tools providing a conceptual framework, and a conceptual basis for knowledge-based systems, e.g. automated classification systems. New networked KOS (NKOS) services and applications are emerging, and we have reached a stage where many KOS standards exist and the integration of linked services is no longer just a future scenario. This editorial describes the workshop outline and gives an overview of the papers presented at the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) in Hannover, Germany.
Since textual user-generated content from social media platforms contains valuable information for decision support and especially corporate credit risk analysis, automated approaches for text classification, such as the application of sentiment dictionaries and machine learning algorithms, have received great attention in recent research endeavors based on user-generated content. While machine learning algorithms require individual training data sets for varying sources, sentiment dictionaries can be applied to texts immediately, whereby domain-specific dictionaries attain better results than domain-independent word lists. We evaluate, by means of a literature review, how sentiment dictionaries can be constructed for specific domains and languages. Then, we construct nine versions of German sentiment dictionaries relying on a process model which we developed based on the literature review. We apply the dictionaries to a manually classified German-language data set from Twitter in which hints of financial (in)stability of companies have been proven. Based on their classification accuracy, we rank the dictionaries and verify the ranking by utilizing McNemar’s test for significance. Our results indicate that the significantly best dictionary is based on the German-language dictionary SentiWortschatz and an extension approach using the lexical-semantic database GermaNet. It achieves a classification accuracy of 59.19% in the underlying three-class scenario, in which the Tweets are labelled as negative, neutral or positive. A random classification would attain an accuracy of 33.3% in the same scenario; hence, automated coding by use of the sentiment dictionaries can lead to a reduction of manual effort. Our process model can be adopted by other researchers when constructing sentiment dictionaries for various domains and languages. Furthermore, our established dictionaries can be used by practitioners, especially in the domain of corporate credit risk analysis, for automated text classification, which up to today has to a great extent been conducted manually.
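The dictionary-based three-class classification can be sketched as follows. The toy word weights and the threshold are made up for illustration; real dictionaries such as SentiWortschatz are far larger and weighted differently:

```python
# Toy polarity weights (hypothetical); a real sentiment dictionary
# assigns weights to thousands of lemmas.
SENTIMENT = {
    "insolvent": -1.0, "krise": -0.8, "verlust": -0.6,
    "gewinn": 0.8, "wachstum": 0.7, "stabil": 0.5,
}

def classify_tweet(text, threshold=0.2):
    """Sum dictionary weights over the tokens and map the score to one
    of the three classes used above (negative / neutral / positive)."""
    score = sum(SENTIMENT.get(tok, 0.0) for tok in text.lower().split())
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"
```

Because no training step is required, such a classifier can be applied to a new Twitter source immediately, which is exactly the advantage of dictionaries over trained models noted above.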
During the Corona pandemic, information traditionally used for corporate credit risk analysis (e.g. from the analysis of balance sheets and payment behavior) became less valuable because it represents only past circumstances. Therefore, the use of currently published data from social media platforms, which has been shown to contain valuable information regarding the financial stability of companies, should be evaluated. In this data, additional information, e.g. from disappointed employees or customers, can be present. In order to analyze to what extent this data can improve the information base for corporate credit risk assessment, Twitter data regarding the ten greatest insolvencies of German companies in 2020 and solvent counterparts is analyzed in this paper. The results from t-tests show that sentiment before the insolvencies is significantly worse than in the comparison group, which is in alignment with previously conducted research endeavors. Furthermore, companies can be classified as prospectively solvent or insolvent with up to 70% accuracy by applying the k-nearest-neighbor algorithm to monthly aggregated sentiment scores. No significant differences in the number of Tweets for both groups can be proven, which is in contrast to findings from studies conducted before the Corona pandemic. The results can be utilized by practitioners and scientists in order to improve decision support systems in the domain of corporate credit risk analysis. From a scientific point of view, the results show that the information asymmetry between lenders and borrowers in credit relationships, which are principals and agents according to the principal-agent theory, can be reduced based on user-generated content from social media platforms. In future studies, it should be evaluated to what extent the data can be integrated into established processes for credit decision making. Furthermore, additional social media platforms as well as samples of companies should be analyzed. Lastly, the authenticity of user-generated content should be taken into account in order to ensure that credit decisions rely on truthful information only.
Techno-economic analyses that allocate costs to the energy flows of energy systems are helpful to understand the formation of costs within processes and to increase cost efficiency. For the economic evaluation, the usefulness or quality of the energy is of great importance. In exergy-based methods, this is considered by allocating costs to the exergy instead of the energy. As exergy represents the ability to perform work, it is often called the useful part of energy. In contrast, the anergy, the part of the energy which cannot perform work, is often assumed to be not useful.
However, heat flows as used e.g. in domestic heating are always a mixture of a relatively small portion of exergy and a big portion of anergy. Although of lower quality, the anergy is obviously useful for these applications. The question is whether it makes sense to differentiate between exergy and anergy and to take both properties into account for the economic evaluation.
To answer this question, a new methodical concept based on the definition of an anergy-exergy cost ratio is compared to the commonly applied approaches of considering either energy or exergy as the basis for economic evaluation. These three different approaches for the economic analysis of thermal energy systems are applied to an exemplary heating system with thermal storages. It is shown that the results of the techno-economic analysis can be improved by giving anergy an economic value and that the proposed anergy-exergy cost ratio allows a flexible adaptation of the evaluation depending on the economic constraints of a system.
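The basic exergy/anergy split of a heat flow and the pricing idea can be sketched as follows. The split uses the standard Carnot factor; the linear cost model and the function names are illustrative assumptions, not the paper's exact formulation:

```python
def exergy_anergy(q, t_flow, t_ambient):
    """Split a heat flow q (delivered at temperature t_flow, in kelvin)
    into its exergy and anergy parts via the Carnot factor 1 - T0/T."""
    exergy = q * (1.0 - t_ambient / t_flow)
    return exergy, q - exergy

def heat_cost(q, t_flow, t_ambient, c_exergy, ratio):
    """Cost of a heat flow when anergy is priced at `ratio` times the
    specific exergy cost `c_exergy` (the anergy-exergy cost ratio idea;
    ratio = 0 reproduces a purely exergy-based evaluation)."""
    ex, an = exergy_anergy(q, t_flow, t_ambient)
    return c_exergy * ex + ratio * c_exergy * an
```

For domestic heating (say 10 kWh delivered at 60 °C with 15 °C ambient), the exergy part is only about 13.5% of the heat flow, so pricing the anergy at some fraction of the exergy cost changes the allocation noticeably.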
Research into new forms of care for complex chronic diseases requires substantial efforts in the collection, storage, and analysis of medical data. Additionally, practical support is necessary for those who coordinate the actual care management process within a diversified network of regional service providers, for instance stroke units, rehabilitation partners, ambulatory actors, as well as health insurance funds. In this paper, we propose the concept of comprehensive and practical receiver-oriented encryption (ROE) as a guiding principle for such data-intensive, research-oriented case management systems, and illustrate our concept with the example of the IT infrastructure of the project STROKE OWL.
Visual effects and elements in video games and interactive virtual environments can be applied to transfer (or delegate) non-visual perceptions (e.g. proprioception, presence, pain) to players and users, thus increasing perceptual diversity via the visual modality. Such elements or effects are referred to as visual delegates (VDs). Current findings on the experiences that VDs can elicit relate to specific VDs, not to VDs in general. Deductive and comprehensive VD evaluation frameworks are lacking. We analyzed VDs in video games to generalize VDs in terms of their visual properties. We conducted a systematic paper analysis to explore player and user experiences observed in association with specific VDs in user studies. We conducted semi-structured interviews with expert players to determine their preferences and the impact of VD properties. The resulting VD framework (VD-frame) contributes to a more strategic approach to identifying the impact of VDs on player and user experiences.
Agile methods require constant optimization of one’s approach, leading to the adaptation of agile practices. These practices are also adapted when introducing them to companies and their software development teams due to organizational constraints. As a consequence of the widespread use of agile methods, we notice a high variety of their elements: practices, roles, and artifacts. This multitude of agile practices, artifacts, and roles results in an unsystematic mixture and leads to several questions: When is a practice a practice, and when is it a method or technique? This paper presents the tree of agile elements, a taxonomy of agile methods, based on the literature and the guidelines of widely used agile methods. We describe a taxonomy of agile methods using terms and concepts of software engineering, in particular software process models. We aim to enable agile elements to be delimited, which should help companies, agile teams, and the research community gain a basic understanding of the interrelationships and dependencies of the individual components of agile methods.
Context: Companies have adapted agile methods, practices and artifacts for their use in practice for more than two decades. These adaptations result in a wide variety of described agile practices. For instance, the Agile Alliance lists 75 different practices in its Agile Glossary. This situation may lead to misunderstandings, as agile practices with similar names can be interpreted and used differently.
Objective: This paper synthesizes an integrated list of agile practices, both from primary and secondary sources.
Method: We performed a tertiary study to identify existing overviews and lists of agile practices in the literature. We identified 876 studies, of which 37 were included.
Results: The results of our paper show that certain agile practices are listed and used more often in existing studies. Our integrated list of agile practices comprises 38 entries structured in five categories.
Conclusion: The high number of agile practices, and thus the wide variety, has increased steadily over the past decades due to the adaptation of agile methods. Based on our findings, we present a comprehensive overview of agile practices. The research community benefits from our integrated list of agile practices as a potential basis for future research. Practitioners benefit from our findings as well, as the structured overview of agile practices provides the opportunity to select or adapt practices for their specific needs.
This Innovative Practice Full Paper presents our learnings from the process of running a Master of Science class with eduScrum, integrating real-world problems as projects. We prepared, performed, and evaluated an agile educational concept for the new Master of Science program Digital Transformation, organized and provided by the department of business computing at the University of Applied Sciences and Arts - Hochschule Hannover in Germany. The course deals with innovative methodologies of agile project management and is attended by 25 students. We performed the class during the summer terms of 2019 and 2020 as a teaching pair. The eduScrum method has been used in different educational contexts, including higher education. During the preparation of the approach, we decided to use challenges, problems, or questions from industry. Thus, we acquired four companies and prepared, in coordination with them, dedicated project descriptions. Each project description was refined in the form of a backlog (a list of requirements). We divided the class into four eduScrum teams, one team for each project. The subdivision of the class was done randomly.
Since we wanted to integrate realistic projects with implementation at industry partners, we decided to adapt the eduScrum approach. The eduScrum teams were challenged with different projects, e.g., analyzing a dedicated phenomenon in a real project or creating a theoretical model for a company’s new project management approach. We present our experiences of the whole process of preparing, performing and evaluating an agile educational approach combined with projects from practice. We found that the students value the agile method using real-world problems. While the results are mainly based on the summer term of 2019, this paper also includes our learnings from virtual distance teaching during the Covid-19 pandemic in the summer term of 2020. The paper contributes to the distribution of methods for higher education teaching in the classroom and in distance learning.
In 2020, the world changed due to the Covid-19 pandemic. Containment measures to reduce the spread of the virus were planned and implemented by many countries and companies. Worldwide, companies sent their employees to work from home. This change has led to significant challenges in teams that were co-located before the pandemic. Agile software development teams were affected by this switch, as agile methods focus on communication and collaboration. Research results have already been published on the challenges of switching to remote work and the effects on agile software development teams. This article presents a systematic literature review. We identified 12 relevant papers for our study and analyzed them in detail. The results provide an overview of how agile software development teams reacted to the switch to remote work, e.g., which agile practices they adapted. We also gained insights into changes in the performance of agile software development teams and social effects on these teams during the pandemic.
Companies worldwide have enabled their employees to work remotely as a consequence of the Covid-19 pandemic. Software development is a human-centered discipline and thrives on teamwork. Agile methods focus on several social aspects of software development. Software development teams in Germany were mainly co-located before the pandemic. This paper aims to validate the findings of existing studies by expanding on an existing multiple-case study. Therefore, we collected data by conducting semi-structured interviews, observing agile practices, and viewing project documents in three cases. Based on the results, we can confirm the following findings: 1) the teams rapidly adapted the agile practices and roles, 2) communication is more objective within the teams, 3) social exchange between team members decreased, 4) a combined approach of remote and onsite work is expected after the pandemic, 5) (perceived) performance is stable or increased, and 6) the well-being of team members is stable or increased.
Social skills are essential for a successful understanding of agile methods in software development. Several studies highlight the opportunities and advantages of integrating real-world projects and problems into higher education through collaboration with companies, using agile methods. This integration benefits both the students and the company. The students are able to interact with real-world software development teams, analyze and understand their challenges, and identify possible measures to tackle them. However, integrating real-world problems and companies is complex and may require considerable effort in coordinating and preparing the course. The challenges related to interaction and communication with students are amplified by virtual distance teaching during the Covid-19 pandemic, as direct contact with students is missing. Also, we do not know how students value problem-based learning in virtual distance teaching. This paper presents our adapted eduScrum approach and the learning outcomes of integrating experiments with real-world software development teams from two companies into a Master of Science course organized as virtual distance teaching. The evaluation shows that students value analyzing real-world problems using agile methods. They highlight the interaction with real-world software development teams. The students also appreciate the organization of the course as an iterative approach with eduScrum. Based on our findings, we present four recommendations for integrating agile methods and real-world problems into higher education in virtual distance teaching settings. The results of our paper contribute to the practitioner and researcher/lecturer community, as we provide valuable insights into how to bridge the gap between practice and higher education in virtual distance settings.
On November 30th, 2022, OpenAI released the large language model ChatGPT, an extension of GPT-3. The AI chatbot provides real-time communication in response to users’ requests. The quality of ChatGPT’s natural-sounding answers marks a major shift in how we will use AI-generated information in our day-to-day lives. For a software engineering student, the use cases for ChatGPT are manifold: assessment preparation, translation, and creation of specified source code, to name a few. It can even handle more complex aspects of scientific writing, such as summarizing literature and paraphrasing text. Hence, this position paper addresses the need to discuss potential approaches for integrating ChatGPT into higher education. We focus on articles that address the effects of ChatGPT on higher education in the areas of software engineering and scientific writing. As ChatGPT was only recently released, no peer-reviewed articles on the subject were available. Thus, we performed a structured grey literature review using Google Scholar to identify preprints of primary studies. In total, five out of 55 preprints are used in our analysis. Furthermore, we held informal discussions and talks with other lecturers and researchers and took into account the authors’ own test results from using ChatGPT. We present five challenges and three opportunities for the higher education context that emerge from the release of ChatGPT. The main contribution of this paper is a proposal for how to integrate ChatGPT into higher education in four main areas.
The impact of vertical and horizontal integration in the context of Industry 4.0 requires new concepts for the security of industrial Ethernet protocols. The defense-in-depth concept, based on the combination of several measures, especially separation and segmentation, needs to be complemented by integrated protection measures for industrial real-time protocols. To meet this challenge, existing protocols need to be equipped with additional functionality that ensures the integrity and availability of network communication, even in environments where attackers may be present. To show a possible way to upgrade an existing protocol, this paper describes a security concept for the industrial Ethernet protocol PROFINET.