In recent years, generative models have gained broad public attention due to the high quality of the images they generate. In short, a generative model learns a distribution from a finite number of samples and can then generate arbitrarily many new samples; this can be applied to image data. In the past, generative models were not able to generate realistic images, but nowadays the results are almost indistinguishable from real images.
This work provides a comparative study of three generative models: the Variational Autoencoder (VAE), the Generative Adversarial Network (GAN), and Diffusion Models (DMs). The goal is not a definitive ranking of the three, but to assess, qualitatively and where possible quantitatively, how well each model performs with respect to a given criterion. The criteria include realism, generalization and diversity, sampling, training difficulty, parameter efficiency, interpolation and inpainting capabilities, semantic editing, and implementation difficulty. After a brief introduction to how each model works internally, the models are compared against each other. The provided images illustrate the differences among the models with respect to each criterion.
To summarize the results of the comparison: DMs generate the most realistic images. They appear to generalize best and show high variation among the generated images. However, they rely on an iterative sampling process, which makes them the slowest of the three models in terms of sample generation time; GANs and VAEs, in contrast, generate their samples in a single forward pass. The images generated by GANs are comparable to those of DMs, while the images from VAEs are blurry, which makes VAEs less attractive than GANs or DMs. On the other hand, both the VAE and the GAN stand out from DMs with respect to interpolation and semantic editing: they have a latent space, which makes latent-space walks possible, and the resulting changes are not as chaotic as with DMs. Furthermore, concept vectors can be found that transform a given image along a given feature while leaving other features and structures mostly unchanged, which is difficult to achieve with DMs.
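The latent-space walks mentioned above can be illustrated with a short, generic sketch (not code from the study): spherical linear interpolation (slerp) between two latent codes is a common choice for VAEs and GANs, since it roughly preserves the norm statistics of Gaussian-distributed latents. The decoder call is assumed, not shown.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))  # angle between codes
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # vectors (nearly) parallel: fall back to lerp
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Interpolate between two random Gaussian latent codes in 7 steps;
# each z on the path would then be decoded, e.g. image = decoder(z).
z_a, z_b = np.random.randn(128), np.random.randn(128)
path = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 7)]
```

Decoding the intermediate codes yields the smooth image transitions described above; with DMs, no such fixed latent space exists, which is why their interpolations behave more chaotically.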
In this paper, we consider the route coordination problem in emergency evacuation of large smart buildings. The building evacuation time is crucial in saving lives in emergency situations caused by imminent natural or man-made threats and disasters. Conventional approaches to evacuation route coordination are static and predefined. They rely on evacuation plans present only at a limited number of building locations and possibly a trained evacuation personnel to resolve unexpected contingencies. Smart buildings today are equipped with sensory infrastructure that can be used for an autonomous situation-aware evacuation guidance optimized in real time. A system providing such a guidance can help in avoiding additional evacuation casualties due to the flaws of the conventional evacuation approaches. Such a system should be robust and scalable to dynamically adapt to the number of evacuees and the size and safety conditions of a building. In this respect, we propose a distributed route recommender architecture for situation-aware evacuation guidance in smart buildings and describe its key modules in detail. We give an example of its functioning dynamics on a use case.
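The situation-aware guidance idea can be sketched as a shortest-path search over a building graph whose edge costs are inflated by live sensor readings. This is a hypothetical minimal illustration, not the distributed architecture proposed in the paper; the floor plan, hazard factors, and node names are made up.

```python
import heapq

def safest_route(graph, hazard, start, exit_nodes):
    """Dijkstra over a building graph; edge cost = walking time scaled by a
    hazard factor for the target node (1.0 = safe corridor)."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in exit_nodes:                       # cheapest exit reached first
            path = [node]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return list(reversed(path))
        if d > dist.get(node, float("inf")):
            continue                                 # stale queue entry
        for nxt, base_cost in graph.get(node, []):
            nd = d + base_cost * hazard.get(nxt, 1.0)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    return None                                      # no reachable exit

# Toy floor plan: room -> [(neighbor, walking time)]
floor = {"A": [("B", 1), ("C", 1)], "B": [("EXIT1", 1)], "C": [("EXIT2", 1)]}
smoke = {"B": 10.0}  # sensors report heavy smoke near B
print(safest_route(floor, smoke, "A", {"EXIT1", "EXIT2"}))  # -> ['A', 'C', 'EXIT2']
```

Re-running the search whenever sensor readings change gives the real-time adaptivity described above; the paper's contribution is to do this in a distributed and scalable way rather than centrally.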
The transfer of historically grown monolithic software architectures into modern service-oriented architectures creates many loose coupling points. This can lead to unforeseen system behavior and can significantly impede continuous modernization processes, since it is not clear where bottlenecks in a system arise. It is therefore necessary to monitor such modernization processes with an adaptive monitoring concept in order to correctly record and interpret unpredictable system dynamics. This contribution presents a generic QoS measurement framework for service-based systems. The framework consists of an XML-based specification for the measurement to be performed – the Information Model (IM) – and the QoS System, which provides an execution platform for the IM. The framework is applied to a standard business process of the German insurance industry, and the concepts of the IM and their mapping to artifacts of the QoS System are presented. Furthermore, the design and implementation of the QoS System's parser and generator modules and the generated artifacts are explained in detail, e.g., the event model, agents, measurement module, and analyzer module.
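The IM-to-artifact mapping can be pictured with a toy sketch: an XML measurement specification is parsed and one agent configuration is generated per measurement. The XML fragment, attribute names, and service names here are hypothetical placeholders, not the framework's actual IM schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal Information Model fragment (the real IM is more elaborate).
IM = """
<informationModel>
  <measurement service="ClaimService" metric="responseTime" unit="ms" threshold="500"/>
  <measurement service="PolicyService" metric="throughput" unit="req/s" threshold="100"/>
</informationModel>
"""

def generate_agents(im_xml):
    """Parse the IM and emit one measurement-agent config per <measurement>."""
    root = ET.fromstring(im_xml)
    return [
        {
            "service": m.get("service"),
            "metric": m.get("metric"),
            "unit": m.get("unit"),
            "threshold": float(m.get("threshold")),
        }
        for m in root.findall("measurement")
    ]

for agent in generate_agents(IM):
    print(f"deploy agent: {agent['metric']} on {agent['service']} "
          f"(alert if > {agent['threshold']} {agent['unit']})")
```

In the actual framework, the generated artifacts go beyond such configs and include the event model, measurement module, and analyzer module described above.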
The transfer of historically grown monolithic software architectures into modern service-oriented architectures creates a lot of loose coupling points. This can lead to an unforeseen system behavior and can significantly impede those continuous modernization processes, since it is not clear where bottlenecks in a system arise. It is therefore necessary to monitor such modernization processes with an adaptive monitoring concept in order to be able to correctly record and interpret unpredictable system dynamics. For this purpose, a general measurement methodology and a specific implementation concept are presented in this work.
A Look at Service Meshes
(2021)
Service meshes can be seen as an infrastructure layer for microservice-based applications that is specifically suited for distributed application architectures. The goal of this paper is to introduce the concept of service meshes and its use for microservices, using the open-source service mesh Istio as an example. The paper gives an introduction to the service mesh concept and its relation to microservices, provides an overview of selected Istio features relevant to that concept, and presents a small sample setup that demonstrates the core features.
The Gravitational Search Algorithm is a swarm-based optimization metaheuristic that has been successfully applied to many problems. However, to date little analytical work has been done on this topic.
This paper performs a mathematical analysis of the formulae underlying the Gravitational Search Algorithm. From this analysis, it derives key properties of the algorithm's expected behavior and recommendations for parameter selection. It then confirms through empirical examination that these recommendations are sound.
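For readers unfamiliar with the algorithm being analyzed, the following is a minimal sketch of the standard Gravitational Search Algorithm formulation (fitness-derived masses, a decaying gravitational constant, attraction toward a shrinking Kbest set). The parameter defaults are common choices from the GSA literature, not the recommendations derived in this paper.

```python
import numpy as np

def gsa(objective, dim, n_agents=30, iters=200, G0=100.0, alpha=20.0, seed=0):
    """Minimal Gravitational Search Algorithm for minimization."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (n_agents, dim))       # agent positions
    V = np.zeros((n_agents, dim))                     # agent velocities
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.apply_along_axis(objective, 1, X)
        i = int(fit.argmin())
        if fit[i] < best_f:                           # track best-so-far
            best_f, best_x = float(fit[i]), X[i].copy()
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + 1e-12)    # heavier = fitter
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-alpha * t / iters)           # decaying gravitational constant
        k = max(1, int(n_agents * (1 - t / iters)))   # shrinking Kbest set
        F = np.zeros_like(X)
        for j in np.argsort(fit)[:k]:                 # attraction toward Kbest agents
            diff = X[j] - X
            R = np.linalg.norm(diff, axis=1, keepdims=True)
            F += rng.random((n_agents, 1)) * G * (M * M[j])[:, None] * diff / (R + 1e-12)
        A = F / (M[:, None] + 1e-12)                  # acceleration a_i = F_i / M_i
        V = rng.random((n_agents, dim)) * V + A
        X = X + V
    return best_x, best_f

x_best, f_best = gsa(lambda x: np.sum(x ** 2), dim=2)  # sphere test function
```

The analysis in the paper concerns exactly the interplay of these formulae, e.g. how the decay of G and the random velocity damping shape the expected agent trajectories.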
Dramatic increases in the number of cyber security attacks and breaches against businesses and organizations have been observed in recent years. The negative impacts of these breaches include not only the theft and compromise of sensitive information, the malfunctioning of network devices, the disruption of everyday operations, and financial damage to the attacked business or organization itself; they may also spread to peer businesses and organizations in the same industry. Prevention and early detection of these attacks therefore play a significant role in the continuity of operations of IT-dependent organizations. At the same time, detecting the various types of attacks has become extremely difficult as attacks grow more sophisticated, more distributed, and increasingly enabled by Artificial Intelligence (AI). Detecting and handling these attacks requires sophisticated intrusion detection systems that run on powerful hardware and are administered by highly experienced security staff. Yet these resources are costly to employ, especially for small and medium-sized enterprises (SMEs). To address these issues, we developed an architecture within the GLACIER project that can be realized as an in-house operated Security Information and Event Management (SIEM) system for SMEs. It is affordable for SMEs, as it is based solely on free and open-source components and thus does not require any licensing fees. Moreover, it is a Self-Contained System (SCS) and does not require much management effort: after short configuration and learning phases it remains self-contained as long as the monitored infrastructure is stable (apart from reacting to the generated alerts, which SMEs may outsource to a service provider if necessary). Another main benefit of this system is that it supplies data to advanced detection algorithms, such as multidimensional analysis algorithms, in addition to performing traditional SIEM-specific tasks like data collection, normalization, enrichment, and storage.
It supports the application of novel methods to detect security-related anomalies. The most distinctive feature of this system, differentiating it from similar solutions on the market, is its user feedback capability: detected anomalies are displayed in a Graphical User Interface (GUI) to the security staff, who can give feedback on each anomaly. This feedback is subsequently utilized to fine-tune the anomaly detection algorithm. In addition, the GUI provides access to network actors for quick incident responses. The system in general is suitable for both Information Technology (IT) and Operational Technology (OT) environments, although the detection algorithm must be trained specifically for each of these environments.
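The feedback loop described above can be illustrated with a deliberately simple sketch: a statistical normality model flags outliers, and analyst feedback on false positives widens the decision threshold. This is a generic, hypothetical illustration of the idea, not the GLACIER detection algorithm.

```python
import statistics

class FeedbackAnomalyDetector:
    """Toy normality model: flag values more than `k` standard deviations
    from the mean of a baseline window; analyst feedback adjusts `k`."""

    def __init__(self, baseline, k=3.0):
        self.mean = statistics.fmean(baseline)
        self.std = statistics.stdev(baseline)
        self.k = k

    def is_anomaly(self, value):
        return abs(value - self.mean) > self.k * self.std

    def feedback(self, value, is_true_positive):
        # A false positive pushes the threshold outward; a confirmed
        # incident tightens it slightly (floor of 1 standard deviation).
        if not is_true_positive:
            self.k *= 1.1
        else:
            self.k = max(1.0, self.k * 0.95)

baseline = [100, 102, 98, 101, 99, 103, 97, 100]   # e.g. logins per minute
det = FeedbackAnomalyDetector(baseline)
print(det.is_anomaly(150))                         # far outside baseline -> True
det.feedback(150, is_true_positive=False)          # analyst marks it a false alarm
```

After the feedback call, the same value needs a larger deviation to be flagged again, which is the fine-tuning effect the GUI feedback is meant to achieve.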
The objective of this student project was for the students to develop, conduct, and supervise a training course on basic workplace applications (word processing and business graphics). The students were responsible for planning, organizing, and teaching the course. The participants were underprivileged adolescents, who attended in order to learn how to use IT applications and thereby improve their job skills and their chances of finding employment; they thus took the role of trainees in the course. Our students worked with a population that is continually overlooked by the field.
As a result, the students practiced designing and implementing training courses, exercised project management, and increased their social responsibility and their awareness of the way of life and living conditions of other young people. The underprivileged adolescents learned to use important business applications and improved their job skills and employment chances. The overall design of our concept required extensive resources to supervise and steer both the students and the adolescents. The lecturers had to teach and counsel the students and had to be on "stand-by" in case they were needed to resolve critical situations between the two groups of young people.
Intrusion detection systems and other network security components detect security-relevant events based on policies consisting of rules. If an event turns out to be a false alarm, the corresponding policy has to be adjusted in order to reduce the number of false positives. Modified policies, however, need to be tested before going into productive use. We present a visual analysis tool for the evaluation of security events and related policies which integrates data from different sources using the IF-MAP specification and provides a "what-if" simulation for testing modified policies against past network dynamics. In this paper, we describe the design and outcome of a user study that helps us evaluate our visual analysis tool.
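The "what-if" idea can be sketched generically: replay recorded events against both the old and the modified rule set and compare the alerts each would have raised. The rule names, event fields, and thresholds below are hypothetical; the tool's actual IF-MAP integration and visualization are not shown.

```python
# Hypothetical "what-if" replay over recorded network dynamics.
def replay(events, policy):
    """A policy is a list of (rule_name, predicate) pairs; return all alerts
    the policy would have raised over the recorded events."""
    return [(rule, e) for e in events for rule, pred in policy if pred(e)]

events = [                                   # recorded network dynamics
    {"src": "10.0.0.5", "port": 22, "failed_logins": 12},
    {"src": "10.0.0.7", "port": 443, "failed_logins": 1},
    {"src": "10.0.0.9", "port": 22, "failed_logins": 4},
]
old_policy = [("ssh-bruteforce", lambda e: e["port"] == 22 and e["failed_logins"] > 3)]
new_policy = [("ssh-bruteforce", lambda e: e["port"] == 22 and e["failed_logins"] > 10)]

print(len(replay(events, old_policy)))  # 2 alerts under the old rule
print(len(replay(events, new_policy)))  # 1 alert after raising the threshold
```

Comparing the two alert sets shows whether a tightened rule actually removes the false positive without suppressing true detections, which is what the visual simulation lets analysts inspect.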
The digital transformation, with its new technologies and customer expectations, has a significant effect on the customer channels of the insurance industry. The objective of this study is to identify enabling and hindering factors for the adoption of online claim notification services, which are an important part of the customer experience in insurance. For this purpose, we conducted a quantitative cross-sectional survey based on the exemplary scenario of car insurance in Germany and analyzed the data via structural equation modeling (SEM). The findings show that, besides classical technology acceptance factors such as perceived usefulness and ease of use, digital mindset and status quo behavior play a role: acceptance of digital innovations, lacking endurance, and lacking frustration tolerance with the status quo lead to a higher intention to use. Moreover, the results are strongly moderated by the severity of the damage event, an insurance-specific factor that has been sparsely considered so far. The latter discovery implies that customers prefer a choice of communication channel based on the individual circumstances of the claim.
Radioisotope-guided sentinel lymph node dissection (sLND) has shown high diagnostic reliability in prostate cancer (PCa) and other cancers. To overcome the limitations of radioactive tracers, magnetometer-guided sLND using superparamagnetic iron oxide nanoparticles (SPIONs) has been successfully used in PCa. This prospective study (SentiMag Pro II, DRKS00007671) determined the diagnostic accuracy of magnetometer-guided sLND in intermediate- and high-risk PCa. Fifty intermediate- or high-risk PCa patients (prostate-specific antigen (PSA) >= 10 ng/mL and/or Gleason score >= 7; median PSA 10.8 ng/mL, IQR 7.4–19.2 ng/mL) were enrolled. After intraprostatic SPION injection a day earlier, patients underwent magnetometer-guided sLND and extended lymph node dissection (eLND), followed by radical prostatectomy. SLNs were detected both in vivo and in ex vivo samples. The diagnostic accuracy of sLND was assessed using eLND as the reference. SLNs were detected in all patients (detection rate 100%), with 447 SLNs (median 9, IQR 6–12) identified and 966 LNs (median 18, IQR 15–23) removed. Thirty-six percent (18/50) of patients had LN metastases (median 2, IQR 1–3). Magnetometer-guided sLND had 100% sensitivity, 97.0% specificity, 94.4% positive predictive value, 100% negative predictive value, a 0.0% false negative rate, and 3.0% additional diagnostic value (LN metastases only in SLNs outside the eLND template). In vivo, one positive SLN/LN-positive patient was missed, resulting in a sensitivity of 94.4%. In conclusion, this new magnetic sentinel procedure has high accuracy for nodal staging in intermediate- and high-risk PCa. The reliability of intraoperative SLN detection using this magnetometer system requires verification in further multicentric studies.
This Innovative Practice Full Paper presents our learnings from the process of running a Master of Science class with eduScrum, integrating real-world problems as projects. We prepared, performed, and evaluated an agile educational concept for the new Master of Science program Digital Transformation, organized and provided by the department of business computing at the University of Applied Sciences and Arts - Hochschule Hannover in Germany. The course deals with innovative methodologies of agile project management and is attended by 25 students. We taught the class as a teaching pair during the summer terms of 2019 and 2020. The eduScrum method has been used in different educational contexts, including higher education. While preparing the approach, we decided to use challenges, problems, or questions from industry. We therefore acquired four companies and, in coordination with them, prepared dedicated project descriptions. Each project description was refined in the form of a backlog (a list of requirements). We divided the class randomly into four eduScrum teams, one team per project.
Since we wanted to integrate realistic projects with implementation at industry partners, we decided to adapt the eduScrum approach. The eduScrum teams were challenged with different projects, e.g., analyzing a dedicated phenomenon in a real project or creating a theoretical model for a company's new project management approach. We present our experiences of the whole process of preparing, performing, and evaluating an agile educational approach combined with projects from practice. We found that the students value the agile method using real-world problems. While the results are mainly based on the summer term of 2019, this paper also includes our learnings from virtual distance teaching during the COVID-19 pandemic in the summer term of 2020. The paper contributes to the dissemination of methods for higher education teaching in the classroom and in distance learning.
Smart Cities require reliable means for managing installations that offer essential services to the citizens. In this paper we focus on the problem of evacuation of smart buildings in case of emergencies. In particular, we present an abstract architecture for situation-aware evacuation guidance systems in smart buildings, describe its key modules in detail, and provide some concrete examples of its structure and dynamics.
Background: Virtual reality (VR) is increasingly used as simulation technology in emergency medicine education and training, in particular for training nontechnical skills. Experimental studies comparing teaching and learning in VR with traditional training media often demonstrate equivalence or even superiority with regard to particular variables of learning or training effectiveness.
Objective: In the EPICSAVE (Enhanced Paramedic Vocational Training with Serious Games and Virtual Environments) project, a highly immersive room-scaled multi-user 3-dimensional VR simulation environment was developed. In this feasibility study, we wanted to gain initial insights into the training effectiveness and media use factors influencing learning and training in VR.
Methods: The virtual emergency scenario was anaphylaxis grade III with shock, swelling of the upper and lower respiratory tract, as well as skin symptoms in a 5-year-old girl (virtual patient) visiting an indoor family amusement park with her grandfather (virtual agent). A cross-sectional, one-group pretest and posttest design was used to evaluate the training effectiveness and quality of the training execution. The sample included 18 active emergency physicians.
Results: The 18 participants rated the VR simulation training positively in terms of training effectiveness and quality of the training execution. A strong, significant correlation (r=.53, P=.01) between experiencing presence and assessed training effectiveness was observed. Perceived limitations in usability and a relatively high extraneous cognitive load reduced this positive effect.
Conclusions: The training within the virtual simulation environment was rated as an effective educational approach. Specific media use factors appear to modulate training effectiveness (ie, improvement through “experience of presence” or reduction through perceived limitations in usability). These factors should be specific targets in the further development of this VR simulation training.
Sustainable tourism is a niche market that has been growing in recent years. At the same time, companies in the mass tourism market have increasingly marketed themselves with a “green” image, although this market is not sustainable. In order to successfully market sustainability, targeted marketing tactics are needed.
The aim of this research is to establish appropriate marketing tactics for sustainable tourism in the niche market and in the mass market. The purpose is to uncover current marketing tactics for both the mass tourism market and the sustainable tourism niche market. It also explores how consumers who are more interested in sustainability differ from consumers with less interest in sustainability in terms of their perception of sustainability in tourism. Furthermore, this research paper assesses the trustworthiness of sustainable travel offers and of quality seals in sustainable tourism. For this purpose, an online survey addressed to German-speaking consumers was conducted. The survey showed that consumers with a greater general interest in sustainability also consider sustainability to be more relevant in tourism. Offers for sustainable travel and quality seals were perceived as not very trustworthy. Moreover, no link could be found between the interest in sustainability and the perception of trustworthiness.
On the basis of the above, it is advisable to directly advertise sustainability in the niche market and to mention sustainability in the mass market only as an accompaniment or not at all. Further research could be undertaken to identify which factors influence the trustworthiness of offers, and trustworthiness of quality seals in sustainable tourism.
During the Corona pandemic, information traditionally used for corporate credit risk analysis (e.g. from the analysis of balance sheets and payment behavior) became less valuable because it represents only past circumstances. Therefore, the use of currently published data from social media platforms, which has been shown to contain valuable information regarding the financial stability of companies, should be evaluated. This data can contain, for example, additional information from disappointed employees or customers. In order to analyze to what extent this data can improve the information base for corporate credit risk assessment, Twitter data regarding the ten largest insolvencies of German companies in 2020 and solvent counterparts is analyzed in this paper. The results from t-tests show that the sentiment before the insolvencies is significantly worse than in the comparison group, which is in line with previously conducted research. Furthermore, companies can be classified as prospectively solvent or insolvent with up to 70% accuracy by applying the k-nearest-neighbor algorithm to monthly aggregated sentiment scores. No significant difference in the number of Tweets between the two groups can be proven, which is in contrast to findings from studies conducted before the pandemic. The results can be utilized by practitioners and scientists in order to improve decision support systems in the domain of corporate credit risk analysis. From a scientific point of view, the results show that the information asymmetry between lenders and borrowers in credit relationships, which are principals and agents according to the principal-agent theory, can be reduced based on user-generated content from social media platforms. In future studies, it should be evaluated to what extent the data can be integrated into established processes for credit decision making. Furthermore, additional social media platforms as well as samples of companies should be analyzed.
Lastly, the authenticity of user-generated content should be taken into account in order to ensure that credit decisions rely on truthful information only.
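The classification step described above can be illustrated with a minimal k-nearest-neighbor sketch; the feature vectors (monthly aggregated sentiment scores) and labels below are invented for illustration and are not the study's data.

```python
# Hypothetical sketch: classifying companies as solvent/insolvent from
# monthly aggregated sentiment scores with a plain k-nearest-neighbor
# classifier (all vectors and labels are made up for illustration).

import math

def knn_predict(train, labels, query, k=3):
    """Return the majority label among the k nearest training vectors."""
    dists = sorted(
        (math.dist(vec, query), lbl) for vec, lbl in zip(train, labels)
    )
    nearest = [lbl for _, lbl in dists[:k]]
    return max(set(nearest), key=nearest.count)

# Six months of aggregated sentiment per company (illustrative values).
train = [
    [-0.4, -0.5, -0.6, -0.7, -0.6, -0.8],  # later insolvent
    [-0.3, -0.4, -0.5, -0.5, -0.7, -0.6],  # later insolvent
    [0.2, 0.1, 0.3, 0.2, 0.1, 0.2],        # solvent
    [0.1, 0.2, 0.1, 0.3, 0.2, 0.1],        # solvent
]
labels = ["insolvent", "insolvent", "solvent", "solvent"]

print(knn_predict(train, labels, [-0.5, -0.4, -0.6, -0.6, -0.5, -0.7]))
```

In the paper's setting, each vector would hold a company's monthly sentiment scores and the labels the known outcomes; the reported ~70% accuracy refers to the actual data, not this toy example.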
In this article, we present the software architecture of a new generation of advisory systems using Intelligent Agent and Semantic Web technologies. Multi-agent systems provide a well-suited paradigm to implement negotiation processes in a consultancy situation. Software agents act as clients and advisors, using their knowledge to assist human users. In the presented architecture, the domain knowledge is modeled semantically by means of XML-based ontology languages such as OWL. Using an inference engine, the agents reason over their knowledge to make decisions or proposals. The agent knowledge consists of different types of data: on the one hand, private data, which has to be protected against unauthorized access, and on the other hand, publicly accessible knowledge spread over different Web sites. As in a real consultancy, an agent only reveals sensitive private data if it is indispensable for finding a solution. In addition, depending on the actual consultancy situation, each agent dynamically expands its knowledge base by accessing OWL knowledge sources from the Internet. Due to the standardization of OWL, knowledge models can easily be shared and accessed via the Internet. The usefulness of our approach is demonstrated by the implementation of an advisory system in the Semantic E-learning Agent (SEA) project, whose objective is to develop virtual student advisers that support university students in successfully organizing and performing their studies.
The usage of microservices promises many benefits concerning scalability and maintainability; however, rewriting large monoliths is not always possible, and pure microservice architectures are therefore not feasible in every project, especially in scientific contexts. We propose the utilization of microservice principles for the construction of microsimulations for urban transport. We present a prototypical architecture for the connection of MATSim and AnyLogic, two widely used simulation tools in the context of urban transport simulation. The proposed system combines the two tools into a single tool supporting civil engineers in decision making on innovative urban transport concepts.
In this paper various techniques in relation to large-scale systems are presented. First, an explanation of large-scale systems and their differences from traditional systems is given. Next, possible specifications and requirements for hardware and software are listed. Finally, examples of large-scale systems are presented.
OSGi is a popular Java-based platform which has its roots in the area of embedded systems. Nowadays, however, it is used more and more in enterprise systems. To fit this new application area, OSGi has recently been extended with the Remote Services specification. This specification enables distribution, which OSGi was previously lacking. However, the specification provides means for synchronous communication only and leaves out asynchronous communication. In an attempt to fill this gap, we propose, implement and evaluate an approach for the integration of asynchronous messaging into OSGi.
The automated transfer of flight logbook information from aircraft into aircraft maintenance systems leads to reduced ground and maintenance time and is thus desirable from an economic point of view. Until recently, flight logbooks have not been managed electronically in aircraft, or at least the data transfer from aircraft to ground maintenance system has been executed manually. The latest aircraft types such as the Airbus A380 or the Boeing 787 do support an electronic logbook and thus make an automated transfer possible. A generic flight logbook transfer system must deal with different data formats on the input side – due to different aircraft makes and models – as well as different, distributed aircraft maintenance systems for different airlines as aircraft operators. This article contributes the concept and top-level distributed system architecture of such a generic system for automated flight log data transfer. It has been developed within a joint industry and applied research project. The architecture has already been successfully evaluated in a prototypical implementation.
Complex Event Processing (CEP) is a modern software technology for the dynamic analysis of continuous data streams. CEP is capable of searching extremely large data streams in real time for the presence of event patterns. So far, specifying the event patterns of CEP rules has been a manual task based on the expertise of domain experts. This paper presents a novel bat-inspired swarm algorithm for automatically mining CEP rule patterns that express the relevant causal and temporal relations hidden in data streams. The basic suitability and performance of the approach is demonstrated by an extensive evaluation with both synthetically generated data and real data from the traffic domain.
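To illustrate what a mined rule pattern expresses (not the bat-inspired mining algorithm itself), the following sketch evaluates a simple temporal sequence pattern against an event stream; event types and timestamps are invented.

```python
# Illustrative sketch: a mined CEP rule pattern such as
# "event B follows event A within w time units" evaluated
# against a stream of (timestamp, type) events.

def matches_sequence(stream, first, second, window):
    """True if an event of type `second` occurs within `window`
    time units after an event of type `first`."""
    pending = []  # timestamps of so-far-unmatched `first` events
    for ts, etype in stream:
        if etype == first:
            pending.append(ts)
        elif etype == second:
            if any(0 < ts - t <= window for t in pending):
                return True
    return False

stream = [(1, "congestion"), (3, "accident"), (4, "congestion"), (9, "jam")]
print(matches_sequence(stream, "accident", "jam", window=10))  # True: 9 - 3 <= 10
```

A rule-mining algorithm like the one in the paper searches the space of such patterns (event types, ordering, window sizes) for those that best explain the observed stream.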
BYOD Bring Your Own Device
(2013)
Using modern devices like smartphones and tablets offers a wide variety of advantages; this has made them very popular as consumer devices in private life. Using them in the workplace is popular as well. However, who wants to carry around and handle two devices: one for personal use and one for work-related tasks? That is why “dual use”, using one single device for private and business applications, may represent a proper solution. The result is “Bring Your Own Device,” or BYOD, which describes the circumstance in which users make their own personal devices available for company use. For companies, this brings both opportunities and risks. We describe and discuss organizational issues, technical approaches, and solutions.
Cloud Computing: Serverless
(2021)
A serverless architecture is a new approach to offering services over the Internet. It combines BaaS (Backend-as-a-Service) and FaaS (Function-as-a-Service). With a serverless architecture, companies no longer need their own or rented infrastructure. In addition, the company no longer has to worry about scaling, as this happens automatically and immediately. Furthermore, no maintenance work on the servers is needed, as this is completely taken over by the provider. Administrators are also no longer needed for the same reason. Finally, many ready-made functions are offered, which can reduce the development effort. As a result, the serverless architecture is very well suited to many application scenarios, and it can save considerable costs (server costs, maintenance costs, personnel costs, electricity costs, etc.). The company only has to subdivide the source code of the application into functions and upload them to the provider's platform. The rest is done by the provider.
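As a minimal illustration of the FaaS part, function handlers typically follow an event-in/result-out signature that the provider invokes on demand. The sketch below is generic Python and deliberately not tied to any specific provider's API; the field names are illustrative.

```python
# Generic FaaS-style handler sketch: the provider passes in an event,
# runs the function on demand, scales instances automatically, and
# the developer ships only this code, not a server.

def handler(event, context=None):
    """Receive an event dict, return a response dict."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

print(handler({"name": "serverless"}))
```

Everything around this function (routing, process lifecycle, scaling, billing per invocation) is exactly what the serverless provider takes over.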
In this paper we describe the selection of a modern build automation tool for an industry research partner of ours, an insurance company. Build automation has become increasingly important over the years and is today one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more products for build automation have entered the market and existing tools have changed their functional scope, there is nowadays a large number of tools that differ greatly in what they offer. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools, and a summary of our comparison of the three tools in the final comparison round.
With an increasing complexity and scale, sufficient evaluation of Information Systems (IS) becomes a challenging and difficult task. Simulation modeling has proven to be a suitable and efficient methodology for evaluating IS and IS artifacts, provided it meets certain quality demands. However, existing research on simulation modeling quality solely focuses on quality in terms of accuracy and credibility, disregarding the role of additional quality aspects. Therefore, this paper proposes two design artifacts in order to ensure a holistic view on simulation quality. First, associated literature is reviewed in order to extract relevant quality factors in the context of simulation modeling, which can be used to evaluate the overall quality of a simulated solution before, during or after a given project. Second, the deduced quality factors are integrated into a quality assessment framework to provide structural guidance on the quality assessment procedure for simulation. In line with a Design Science Research (DSR) approach, we demonstrate the eligibility of both design artifacts by means of prototyping as well as an example case. Moreover, the assessment framework is evaluated and iteratively adjusted with the help of expert feedback.
In microservice architectures, data is often held redundantly to create an overall resilient system. Although the synchronization of this data poses a significant challenge, not much research has been done on this topic yet. This paper shows four general approaches for assuring consistency among services and demonstrates how to identify the best solution for a given architecture. For this, a microservice architecture which implements the functionality of a mainframe-based legacy system from the insurance industry serves as an example.
Cradle to Cradle – An analysis of the market potential in the German outdoor apparel industry
(2016)
The purpose of this study is to investigate the market potential in the German outdoor apparel industry by focusing on sustainable production in terms of environmental and human health. A literature study of the Cradle to Cradle (C2C) design concept is provided, as it represents a solution for pollution, waste and environmental destruction caused by the current industrial design and waste management. The data for the subsequent market- and competitive analysis of the German outdoor apparel industry was collected through secondary research in order to identify several key market indicators for the assessment of the market potential. The outcome of this research is the identification of a positioning strategy for outdoor apparel according to the C2C design concept. The results show stagnant growth rates in recent years in the German outdoor apparel market and strong rivalry among the competitors. However, a significant market potential was calculated and beneficial trends for sustainable outdoor brands were recognised. These findings reveal the existence of a market potential for an outdoor apparel brand according to the C2C design concept. By following a positioning strategy of transparency and full commitment to a sustainable production, the company might be able to gain market shares from its competitors, as future predictions indicate slow growth rates in the market. The results of this analysis can be of great interest for entrepreneurs that plan to enter the German outdoor apparel industry.
Since textual user-generated content from social media platforms contains valuable information for decision support and especially corporate credit risk analysis, automated approaches for text classification such as the application of sentiment dictionaries and machine learning algorithms have received great attention in recent research based on user-generated content. While machine learning algorithms require individual training data sets for varying sources, sentiment dictionaries can be applied to texts immediately, whereby domain-specific dictionaries attain better results than domain-independent word lists. We evaluate by means of a literature review how sentiment dictionaries can be constructed for specific domains and languages. Then, we construct nine versions of German sentiment dictionaries relying on a process model which we developed based on the literature review. We apply the dictionaries to a manually classified German-language data set from Twitter in which hints of the financial (in)stability of companies have been proven. Based on their classification accuracy, we rank the dictionaries and verify their ranking by utilizing McNemar's test for significance. Our results indicate that the significantly best dictionary is based on the German language dictionary SentiWortschatz and an extension approach using the lexical-semantic database GermaNet. It achieves a classification accuracy of 59.19% in the underlying three-class scenario, in which the Tweets are labelled as negative, neutral or positive. A random classification would attain an accuracy of 33.3% in the same scenario; hence, automated coding by use of the sentiment dictionaries can reduce manual effort. Our process model can be adopted by other researchers when constructing sentiment dictionaries for various domains and languages.
Furthermore, our established dictionaries can be used by practitioners, especially in the domain of corporate credit risk analysis, for automated text classification, which to date has largely been conducted manually.
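A toy sketch of dictionary-based three-class sentiment classification as described above; the word lists are invented English examples, not entries from the constructed German dictionaries.

```python
# Toy sketch: dictionary-based classification into the paper's three
# classes (negative / neutral / positive). A real dictionary would hold
# thousands of (possibly weighted) entries; these sets are illustrative.

POSITIVE = {"growth", "profit", "stable"}
NEGATIVE = {"insolvency", "loss", "layoffs"}

def classify(text):
    """Count dictionary hits and map the net score to a class label."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("Rumors of insolvency and layoffs"))  # negative
```

The accuracy comparison in the paper amounts to running exactly this kind of classifier with each of the nine dictionaries over the manually labelled Tweets and counting agreements.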
Delphi is a frequently used research method in the information systems (IS) field. The last fifteen years have seen many variants of the Delphi Method proposed and used in IS research. However, these variants do not seem to be properly derived; while all variants share certain characteristics, the reasoning for differentiating them varies inconsistently. It seems that researchers tend to create “new” Delphi Method variants although the underlying modification of the Delphi Method is, in fact, minor. This leads to a heterogeneity of Delphi Method variants and undermines scientific rigor when using Delphi. The study addresses this deficit and (1) identifies different variants of Delphi and determines their characteristics, (2) critically reflects on the extent to which a clear distinction between these variants exists, (3) shows the clearly distinguishable Delphi Method variants and their characteristics, (4) develops a proposed taxonomy of Delphi Method variants, and (5) evaluates and applies this taxonomy. The proposed taxonomy helps clearly differentiate Delphi Method variants and enhances methodological rigor when using the Delphi Method.
There are many aspects of code quality, some of which are difficult to capture or to measure. Despite the importance of software quality, there is a lack of commonly accepted measures or indicators for code quality that can be linked to quality attributes. We investigate software developers’ perceptions of source code quality and the practices they recommend to achieve these qualities. We analyze data from semi-structured interviews with 34 professional software developers, programming teachers and students from Europe and the U.S. For the interviews, participants were asked to bring code examples to exemplify what they consider good and bad code, respectively. Readability and structure were used most commonly as defining properties for quality code. Together with documentation, they were also suggested as the most common target properties for quality improvement. When discussing actual code, developers focused on structure, comprehensibility and readability as quality properties. When analyzing relationships between properties, the most commonly talked about target property was comprehensibility. Documentation, structure and readability were named most frequently as source properties to achieve good comprehensibility. Some of the most important source code properties contributing to code quality as perceived by developers lack clear definitions and are difficult to capture. More research is therefore necessary to measure the structure, comprehensibility and readability of code in ways that matter for developers and to relate these measures of code structure, comprehensibility and readability to common software quality attributes.
At University of Applied Sciences and Arts Hannover, LON-CAPA is used as a learning management system beside Moodle. LON-CAPA has a strong focus on e-assessment in mathematics and sciences. We used LON-CAPA in Hannover mainly in mathematics courses.
Since theoretical computer science needs a lot of mathematics, this course is also well suited for e-assessment in LON-CAPA. Besides this, we already used JFLAP as an interactive tool to deal with automata, machines and grammars in theoretical computer science. In LON-CAPA, it is possible to use external graders to grade problems.
We decided to write a grading engine (with JFLAP inside) to grade automata, machines and grammars handed in by students and to couple this with LON-CAPA. This report describes the types of questions that are now possible with this grader and how they can be authored in LON-CAPA.
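At its core, such a grader has to simulate the handed-in automaton on test words and compare acceptance with the reference language. A minimal sketch of DFA acceptance checking follows; the example DFA (accepting binary strings with an even number of 1s) is illustrative and unrelated to the actual JFLAP-based engine.

```python
# Minimal DFA simulation sketch: a grader runs the student's automaton
# on a set of test words and compares the accept/reject results with
# those of a reference automaton.

def dfa_accepts(transitions, start, accepting, word):
    """Run the DFA on `word`; True iff it ends in an accepting state."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# Example DFA: accepts binary strings with an even number of 1s.
dfa = {("even", "0"): "even", ("even", "1"): "odd",
       ("odd", "0"): "odd", ("odd", "1"): "even"}

print(dfa_accepts(dfa, "even", {"even"}, "1001"))  # True: two 1s
```

Grading then reduces to checking a student automaton against the reference on a suitable set of words (and, for full rigor, via an equivalence test as JFLAP provides).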
End users urgently request using mobile devices at their workplace. They know these devices from their private life, appreciate their functionality and usability, and want to benefit from these advantages at work as well. They would not accept limitations and restrictions. Companies, in turn, are obliged to employ substantial organizational and technical measures to ensure data security and compliance when allowing the use of mobile devices at the workplace. So far, only individual arrangements have been presented, each addressing single issues of data security and compliance. However, companies need to follow a comprehensive set of measures addressing all relevant aspects of data security and compliance in order to play it safe. Thus, in this paper, technical architectures for using mobile devices in enterprise IT are reviewed first. Thereafter, a set of compliance rules is presented and, as the major contribution, technical measures are explained that enable a company to integrate mobile devices into enterprise IT while still complying with these rules comprehensively. Depending on the company context, one or more of the technical architectures have to be chosen, impacting the specific technical measures for compliance as elaborated in this paper. Altogether, this paper, for the first time, correlates technical architectures for using mobile devices at the workplace with technical measures to assure data security and compliance according to a comprehensive set of rules.
In service-oriented architectures the management of services is a crucial task during all stages of IT operations. Based on a case study performed for a group of finance companies the different aspects of service management are presented. First, the paper discusses how services must be described for management purposes. In particular, a special emphasis is placed on the integration of legacy/non web services. Secondly, the service lifecycle that underlies service management is presented. Especially, the relation to SOA governance and an appropriate tool support by registry repositories is outlined.
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
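As a minimal illustration of outcome prediction from a data stream (a deliberately simple stand-in for the stream learning approaches analyzed in the paper), one can maintain an incrementally updated success rate per worker and predict failure once it drops below a threshold; worker IDs and outcomes below are invented.

```python
# Simplistic data-stream learner sketch for task outcomes: statistics
# are updated incrementally per observed task, so no batch retraining
# over the full history is needed.

from collections import defaultdict

class OutcomePredictor:
    def __init__(self, threshold=0.5):
        self.stats = defaultdict(lambda: [0, 0])  # worker -> [successes, total]
        self.threshold = threshold

    def update(self, worker, success):
        """Fold one observed task outcome into the running statistics."""
        s = self.stats[worker]
        s[0] += int(success)
        s[1] += 1

    def predict_success(self, worker):
        """Predict whether the worker's next task will succeed."""
        s = self.stats[worker]
        rate = s[0] / s[1] if s[1] else 1.0  # optimistic prior for unseen workers
        return rate >= self.threshold

p = OutcomePredictor()
for outcome in (True, False, False, False):
    p.update("w1", outcome)
print(p.predict_success("w1"))  # False: success rate 1/4
```

A coordination mechanism as described above would use such predictions to trigger task transfers to better-suited workers before a failure occurs.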
Our work is motivated primarily by the lack of standardization in the area of Event Processing Network (EPN) models. We identify general requirements for such models. These requirements encompass the possibility to describe events in the real world, to establish temporal and causal relationships among the events, to aggregate the events, to organize the events into a hierarchy, to categorize the events into simple or complex, to create an EPN model in an easy and simple way and to use that model ad hoc. As the major contribution, this paper applies the identified requirements to the RuleCore model.
Nowadays, REST is the dominant architectural style, at least for newly created web services. So-called RESTfulness has thus become a catchword for web applications that aim to expose parts of their functionality as RESTful web services. But are those web services really RESTful? This paper examines the RESTfulness of ten popular RESTful APIs (including Twitter and PayPal). For this examination, the paper defines REST, its characteristics as well as its pros and cons. Furthermore, Richardson's Maturity Model is shown and utilized to analyse the selected APIs regarding their RESTfulness. As an example, a simple, RESTful web service is provided as well.
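Richardson's Maturity Model used in the analysis can be sketched as a simple level assignment; the trait flags for a concrete API would have to be determined by inspecting it, so the function below only encodes the model, not any API's actual rating.

```python
# Sketch of Richardson's Maturity Model: map observed API traits to
# maturity levels 0-3. The levels are cumulative, so the checks run
# from the highest level downwards.

def richardson_level(uses_resources, uses_http_verbs, uses_hypermedia):
    if uses_hypermedia:
        return 3  # hypermedia controls (HATEOAS)
    if uses_http_verbs:
        return 2  # proper use of HTTP verbs and status codes
    if uses_resources:
        return 1  # individual resource URIs
    return 0      # single endpoint, HTTP used as a tunnel only

# A typical "RESTful" API with resources and verbs but no hypermedia:
print(richardson_level(True, True, False))  # 2
```

Many of the APIs examined in such analyses land at level 2, since hypermedia controls (level 3) are rarely implemented in practice.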
This article discusses event monitoring options for heterogeneous event sources as found in today's heterogeneous distributed information systems. It follows the central assumption that a fully generic event monitoring solution cannot provide complete support for event monitoring; instead, event-source-specific semantics such as certain event types or support for certain event monitoring techniques have to be taken into account. Following from this, the core result of the work presented here is the extension of a configurable event monitoring (Web) service for a variety of event sources. A service approach allows us to trade genericity for the exploitation of source-specific characteristics. It thus delivers results for the areas of SOA, Web services, CEP and EDA.
Heterogeneity has to be taken into account when integrating a set of existing information sources into a distributed information system, which is nowadays often based on a Service-Oriented Architecture (SOA). This applies in particular to distributed services such as event monitoring, which are useful in the context of Event-Driven Architectures (EDA) and Complex Event Processing (CEP). Web services deal with this heterogeneity at a technical level, but provide little support for event processing. Our central thesis is that such a fully generic solution cannot provide complete support for event monitoring; instead, source-specific semantics such as certain event types or support for certain event monitoring techniques have to be taken into account. Our core result is the design of a configurable event monitoring (Web) service that allows us to trade genericity for the exploitation of source-specific characteristics. It thus delivers results for the areas of SOA, Web services, CEP and EDA.
Decision support systems for traffic management systems have to cope with a high volume of events continuously generated by sensors. Conventional software architectures do not explicitly target the efficient processing of continuous event streams. Recently, event-driven architectures (EDA) have been proposed as a new paradigm for event-based applications. In this paper we propose a reference architecture for event-driven traffic management systems, which enables the analysis and processing of complex event streams in real-time and is therefore well-suited for decision support in sensor-based traffic control systems. We will illustrate our approach in the domain of road traffic management. In particular, we will report on the redesign of an intelligent transportation management system (ITMS) prototype for the high-capacity road network in Bilbao, Spain.
Objective
The study’s objective was to assess factors contributing to the use of smart devices by general practitioners (GPs) and patients in the health domain, while specifically addressing the situation in Germany, and to determine whether, and if so, how both groups differ in their perceptions of these technologies.
Methods
GPs and patients of resident practices in the Hannover region, Germany, were surveyed between April and June 2014. A total of 412 GPs in this region were invited by email to participate via an electronic survey, with 50 GPs actually doing so (response rate 12.1%). For surveying the patients, eight regional resident practices were visited by study personnel (once each). Every second patient arriving there (inclusion criteria: of age, fluent in German) was asked to take part (paper-based questionnaire). One hundred and seventy patients participated; 15 patients who did not give consent were excluded.
Results
The majority of the participating patients (68.2%, 116/170) and GPs (76%, 38/50) owned mobile devices. Of the patients, 49.9% (57/116) already made health-related use of mobile devices; 95% (36/38) of the participating GPs used them in a professional context. For patients, age (P < 0.001) and education (P < 0.001) were significant factors, but not gender (P > 0.99). For doctors, neither age (P = 0.73), professional experience (P > 0.99) nor gender (P = 0.19) influenced usage rates. For patients, the primary use case was obtaining health (service)-related information. For GPs, interprofessional communication and retrieving information were in the foreground. There was little app-related interaction between both groups.
Conclusions
GPs and patients use smart mobile devices to serve their specific interests. However, the full potentials of mobile technologies for health purposes are not yet being taken advantage of. Doctors as well as other care providers and the patients should work together on exploring and realising the potential benefits of the technology.
Renewable energy production is one of the fastest-growing markets, and further strong growth can be anticipated given the desire for increased sustainability in many parts of the world. With the rising adoption of renewable power production, such facilities become increasingly attractive targets for cyber attacks; at the same time, the requirements for reliable production are rising. In this paper we propose a concept that improves the monitoring of renewable power plants by detecting anomalous behavior. The system not only detects an anomaly, it also provides reasoning for the anomaly based on a specific mathematical model of the expected behavior, giving detailed information about the various influential factors causing the alert. The set of influential factors can be configured in the system before normal behavior is learned. The concept is based on multidimensional analysis and has been implemented and successfully evaluated on actual data from different providers of wind power plants.
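The core idea of the abstract above, learning a model of expected behavior over configured influential factors and explaining deviations through those factors, can be sketched roughly as follows. This is a minimal illustrative sketch with invented data and factor names (wind speed, air density), not the authors' actual model:

```python
import numpy as np

# Hypothetical sketch: learn "normal" power output from configured
# influential factors, then flag observations whose residual exceeds
# a threshold and report each factor term's contribution to the
# expected value (the "reasoning" for the alert).

rng = np.random.default_rng(0)
wind = rng.uniform(3, 12, 200)           # wind speed, m/s
density = rng.normal(1.225, 0.02, 200)   # air density, kg/m^3
power = 0.3 * density * wind**3 + rng.normal(0, 5, 200)  # kW, toy data

# Design matrix over the configured factors; the cubic wind term is
# the dominant physical driver of wind-turbine output.
X = np.column_stack([density * wind**3, np.ones_like(wind)])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)

residuals = power - X @ coef
threshold = 3 * residuals.std()

def explain(w, d, p):
    """Return (is_anomaly, residual, per-factor contribution)."""
    x = np.array([d * w**3, 1.0])
    expected = x @ coef
    contrib = dict(zip(["density*wind^3", "bias"], x * coef))
    return abs(p - expected) > threshold, p - expected, contrib

# A turbine reporting far less power than the learned model expects
# is flagged, together with the factor breakdown of the expectation.
anomalous, resid, contrib = explain(10.0, 1.225, 100.0)
```

The per-factor contributions in `contrib` stand in for the "detailed information about various influential factors" the paper describes; a real system would use a richer multidimensional model than this single least-squares fit.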
Pathologists need to identify abnormal changes in tissue. With ongoing digitalization, tissue slides are increasingly stored digitally, which enables pathologists to annotate regions of interest with the support of software tools. PathoLearn is a web-based learning platform developed specifically for the teacher-student scenario, in which students learn to identify potentially abnormal changes. Artificial intelligence (AI) and machine learning (ML) have become very important in medicine; many health sectors already utilize them, and this will only increase in the future, including in the field of pathology. It is therefore important to teach students the fundamentals and concepts of AI and ML early in their studies. However, creating and training AI models generally requires knowledge of programming and technical details. This thesis evaluates how this barrier can be overcome by comparing existing end-to-end AI platforms and teaching tools for AI. It is shown that a visual programming editor offers a fitting abstraction for creating neural networks without programming. The editor was extended with real-time collaboration to enable students to work in groups, and an automatic training feature was implemented, removing the need to know technical details about training neural networks.
Report of a research project of the Fachhochschule Hannover, University of Applied Sciences and Arts, Department of Information Technologies. Automatic face recognition increases the security standards at public places and border checkpoints. The picture in an identification document can differ widely from the face that is scanned under random lighting conditions and unknown poses. The paper describes an optimal combination of three key object-recognition algorithms that are able to perform in real time: the camera scan is processed by a recurrent neural network, by an Eigenfaces (PCA) method and by a least-squares matching algorithm. Several examples demonstrate the achieved robustness and high recognition rate.
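The Eigenfaces (PCA) component mentioned above can be sketched in a few lines: gallery faces are projected into a low-dimensional eigenface space and a probe is matched to the nearest gallery face. The data here is synthetic and the sketch covers only the PCA step, not the paper's recurrent network or least-squares stages:

```python
import numpy as np

# Illustrative Eigenfaces sketch on synthetic "images" (flattened
# pixel vectors). Real inputs would be aligned, normalized face crops.
rng = np.random.default_rng(1)
n_gallery, n_pixels, n_components = 10, 64, 5

gallery = rng.normal(size=(n_gallery, n_pixels))
mean_face = gallery.mean(axis=0)
centered = gallery - mean_face

# Principal components (the "eigenfaces") via SVD of the centered set.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:n_components]                 # shape (k, n_pixels)

def project(img):
    # Coordinates of an image in eigenface space.
    return eigenfaces @ (img - mean_face)

gallery_weights = np.array([project(g) for g in gallery])

def identify(probe):
    """Index of the gallery face closest to the probe in eigenface space."""
    dists = np.linalg.norm(gallery_weights - project(probe), axis=1)
    return int(np.argmin(dists))

# A lightly perturbed copy of gallery face 3 should still match face 3.
probe = gallery[3] + rng.normal(scale=0.1, size=n_pixels)
match = identify(probe)
```

Matching in the truncated eigenface space rather than pixel space is what makes the method fast enough for the real-time setting the paper targets.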
Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer.
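The species-level attribution described above, counting all strains of a species together, comes down to rolling strain-level alignment hits up to species. A toy sketch, with an invented strain-to-species map and alignment list rather than Genometa's actual data structures:

```python
from collections import Counter

# Invented strain-to-species map; a real run would derive this from
# the reference genome metadata.
strain_to_species = {
    "E_coli_K12": "Escherichia coli",
    "E_coli_O157": "Escherichia coli",
    "B_fragilis_NCTC": "Bacteroides fragilis",
}

# (read_id, best-hit reference genome) pairs, as produced by a
# short-read aligner; invented for illustration.
alignments = [
    ("r1", "E_coli_K12"),
    ("r2", "E_coli_O157"),
    ("r3", "E_coli_K12"),
    ("r4", "B_fragilis_NCTC"),
]

def species_counts(alignments, strain_to_species):
    """Tally reads per species, pooling all strains of each species."""
    counts = Counter()
    for _, strain in alignments:
        counts[strain_to_species[strain]] += 1
    return counts

counts = species_counts(alignments, strain_to_species)
# Both E. coli strains are pooled into a single species-level count.
```

This rollup also makes the stated limitation concrete: a read from a species with no genome in `strain_to_species` can never be attributed, which is why species absent from the reference cannot be found.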
In the context of modern mobility, topics such as smart cities, Car2Car communication, extensive vehicle sensor data, e-mobility and charging point management systems have to be considered. What these topics often have in common is that they are characterized by complex and extensive data situations: vehicle position data, sensor data and vehicle communication data must be preprocessed, aggregated and analyzed. In many cases, the data are interdependent. For example, the position data of electric vehicles and of surrounding charging points depend on one another and characterize a competition between the vehicles. In Car2Car communication, the positions of the vehicles must likewise be viewed in relation to each other; this interdependence influences whether communication can be established at all. Such dependencies can produce data situations so large and complex that they can no longer be handled efficiently. This work presents a model for mapping such typical data situations with strong interdependencies among the data; microservices can help reduce the resulting complexity.
This paper describes the latest accomplishments of ongoing research based on the master's thesis "Ein System zur Erstellung taktiler Karten für blinde und sehbehinderte Menschen" (German for "A system for creating tactile maps for blind and visually impaired people") (Hänßgen, 2012). The system consists of two parts. The first is new software designed and developed specifically for creating tactile maps that address the needs of blind and visually impaired people for tactile information. The second is an embossing device based on a modified CNC (computer numerical control) router. Using OpenStreetMap data, the developed system is capable of embossing tactile maps into Braille paper and writing film.
Context: Agile software development (ASD) puts social aspects such as communication and collaboration in focus. One may therefore assume that a company's specific work organization affects the work of its ASD teams. A major change in work organization is the switch to a 4-day work week, which some companies have investigated in experiments. Recent studies also show that ASD teams have been affected by the switch to remote work since the outbreak of the COVID-19 pandemic in 2020.
Objective: Our study presents empirical findings on the effects on ASD teams operating remotely in a 4-day work week organization. Method: We performed a qualitative single case study: we conducted seven semi-structured interviews, observed 14 agile practices and screened eight project documents and protocols of agile practices.
Results: We found that the teams adapted the agile method in use due to the change to a 4-day work week and the switch to remote work. The productivity of the two ASD teams did not decrease. Although the stress level of the ASD team members increased due to the 4-day work week, the job satisfaction of the individual team members was affected positively. Finally, we point to effects on social facets of the ASD teams.
Conclusion: The research community benefits from our results, as the current state of research on the effects of a 4-day work week on ASD teams is limited. Our findings also provide several practical implications for ASD teams working remotely in a 4-day work week.
Social skills are essential for successfully applying agile methods in software development. Several studies highlight the opportunities and advantages of integrating real-world projects and problems into higher education through collaboration with companies using agile methods. This integration benefits both the students and the companies: students interact with real-world software development teams, analyze and understand their challenges, and identify possible measures to tackle them. However, integrating real-world problems and companies is complex and may require considerable coordination and preparation effort for the course. The challenges of interacting and communicating with students are amplified by virtual distance teaching during the COVID-19 pandemic, as direct contact with students is missing. It is also unknown how students value problem-based learning in virtual distance teaching. This paper presents our adapted eduScrum approach and the learning outcomes of integrating experiments with real-world software development teams from two companies into a Master of Science course organized as virtual distance teaching. The evaluation shows that students value analyzing real-world problems using agile methods and highlight the interaction with real-world software development teams. The students also appreciate the iterative organization of the course with eduScrum. Based on our findings, we present four recommendations for integrating agile methods and real-world problems into higher education in virtual distance teaching settings. Our results contribute to the practitioner and researcher/lecturer communities by providing valuable insights into how to close the gap between practice and higher education in virtual distance settings.