In this paper we describe methods to approximate functions and differential operators on adaptive sparse (dyadic) grids. We distinguish between several representations of a function on the sparse grid and we describe how finite difference (FD) operators can be applied to these representations. For general variable coefficient equations on sparse grids, genuine finite element (FE) discretizations are not feasible and FD operators allow an easier operator evaluation than the adapted FE operators. However, the structure of the FD operators is complex. With the aim of constructing an efficient multigrid procedure, we analyze the structure of the discrete Laplacian in its hierarchical representation and show the relation between the full and the sparse grid case. The rather complex relations, which are expressed by scaling matrices for each separate coordinate direction, make us doubt the possibility of constructing efficient preconditioners that show spectral equivalence. Hence, we question the possibility of constructing a natural multigrid algorithm with optimal O(N) efficiency. We conjecture that for the efficient solution of a general class of adaptive grid problems it is better to accept an additional condition for the dyadic grids (condition L) and to apply adaptive hp-discretization.
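As an illustration of why sparse grids are attractive (a stdlib-only sketch, not the authors' code): a regular 2D sparse grid keeps only those hierarchical basis functions whose level sum is bounded, which drastically reduces the point count compared to the full tensor grid.

```python
# Illustrative sketch: enumerate the hierarchical index set of a regular
# 2D sparse grid, keeping only level pairs with bounded sum l1 + l2 <= n + 1.

def sparse_grid_points(n):
    """Return hierarchical ((level, index), (level, index)) pairs of a regular 2D sparse grid."""
    points = []
    for l1 in range(1, n + 1):
        for l2 in range(1, n + 1):
            if l1 + l2 > n + 1:      # sparse-grid truncation of the full tensor grid
                continue
            # odd indices 1, 3, ..., 2^l - 1 identify the hierarchical basis functions
            for i1 in range(1, 2 ** l1, 2):
                for i2 in range(1, 2 ** l2, 2):
                    points.append(((l1, i1), (l2, i2)))
    return points

# compare against the full tensor grid of the same maximum level
full = sum(1 for l1 in range(1, 5) for i1 in range(1, 2 ** l1, 2)) ** 2
print(len(sparse_grid_points(4)), "sparse-grid points vs", full, "full-grid points")
```

The gap widens quickly with the level n and the spatial dimension, which is the usual motivation for sparse-grid discretizations.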
The paper presents a comprehensive model of a banking system that integrates network effects, bankruptcy costs, fire sales, and cross-holdings. For the integrated financial market we prove the existence of a price-payment equilibrium and design an algorithm for the computation of the greatest and the least equilibrium. The number of defaults corresponding to the greatest price-payment equilibrium is analyzed in several comparative case studies. These illustrate the individual and joint impact of interbank liabilities, bankruptcy costs, fire sales and cross-holdings on systemic risk. We study policy implications and regulatory instruments, including central bank guarantees and quantitative easing, the significance of last wills of financial institutions, and capital requirements.
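The clearing problem at the core of such models can be sketched as follows. This is a minimal pure-Python illustration in the Eisenberg–Noe spirit with interbank liabilities only; the paper's bankruptcy costs, fire sales and cross-holdings are omitted, and the balance-sheet numbers are invented. Iterating from the nominal liabilities downwards converges to the greatest clearing vector.

```python
# Minimal clearing-payment fixed point (interbank liabilities only).

L = [[0.0, 2.0, 1.0],   # L[i][j]: nominal liability of bank i to bank j
     [1.0, 0.0, 2.0],
     [1.0, 1.0, 0.0]]
e = [0.1, 0.5, 0.2]     # external assets of each bank (invented)

n = len(L)
p_bar = [sum(row) for row in L]   # total nominal liabilities per bank
Pi = [[L[i][j] / p_bar[i] if p_bar[i] else 0.0 for j in range(n)]
      for i in range(n)]          # relative liability matrix

p = p_bar[:]                      # start from full payment and iterate downwards
for _ in range(1000):
    incoming = [sum(Pi[j][i] * p[j] for j in range(n)) for i in range(n)]
    # each bank pays in full or distributes all available assets pro rata
    p_new = [min(p_bar[i], e[i] + incoming[i]) for i in range(n)]
    if max(abs(a - b) for a, b in zip(p, p_new)) < 1e-12:
        break
    p = p_new

print("greatest clearing vector:", [round(v, 4) for v in p])
```

With these numbers, two banks cannot meet their obligations in full, so the fixed point lies strictly below the nominal liability vector for them.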
Background:
Many patients with cardiovascular disease also show a high comorbidity of mental disorders, especially anxiety and depression. This is, in turn, associated with a decrease in quality of life. Psychocardiological treatment options are currently limited. Hence, there is a need for novel and accessible psychological help. Recently, we demonstrated that a brief face-to-face intervention based on metacognitive therapy (MCT) is promising in treating anxiety and depression. Here, we aim to translate the face-to-face approach into a digital application and explore the feasibility of this approach.
Methods:
We translated a validated brief psychocardiological intervention into a novel non-blended web app. The data of 18 patients suffering from various cardiac conditions but without diagnosed mental illness were analyzed after they had used the web app over a two-week period in a feasibility trial. The aim was to determine whether a non-blended, web app-based MCT approach is feasible for patients with cardiovascular disease.
Results:
Overall, patients were able to use the web app and rated it as satisfactory and beneficial. In addition, there was a first indication that using the app improved the cardiac patients’ subjectively perceived health and reduced their anxiety. Therefore, the approach seems feasible for a future randomized controlled trial.
Conclusion:
Applying a brief metacognitive intervention via a non-blended web app seems to show good acceptance and feasibility in a small target group of patients with CVD. Future studies should further develop, improve and validate digital psychotherapy approaches, especially in patient groups with a lack of access to standard psychotherapeutic care.
There are many aspects of code quality, some of which are difficult to capture or to measure. Despite the importance of software quality, there is a lack of commonly accepted measures or indicators for code quality that can be linked to quality attributes. We investigate software developers’ perceptions of source code quality and the practices they recommend to achieve these qualities. We analyze data from semi-structured interviews with 34 professional software developers, programming teachers and students from Europe and the U.S. For the interviews, participants were asked to bring code examples to exemplify what they consider good and bad code, respectively. Readability and structure were used most commonly as defining properties for quality code. Together with documentation, they were also suggested as the most common target properties for quality improvement. When discussing actual code, developers focused on structure, comprehensibility and readability as quality properties. When analyzing relationships between properties, the most commonly talked about target property was comprehensibility. Documentation, structure and readability were named most frequently as source properties to achieve good comprehensibility. Some of the most important source code properties contributing to code quality as perceived by developers lack clear definitions and are difficult to capture. More research is therefore necessary to measure the structure, comprehensibility and readability of code in ways that matter for developers and to relate these measures of code structure, comprehensibility and readability to common software quality attributes.
The digital transformation with its new technologies and customer expectations has a significant effect on the customer channels in the insurance industry. The objective of this study is the identification of enabling and hindering factors for the adoption of online claim notification services, which are an important part of the customer experience in insurance. For this purpose, we conducted a quantitative cross-sectional survey based on the exemplary scenario of car insurance in Germany and analyzed the data via structural equation modeling (SEM). The findings show that, besides classical technology acceptance factors such as perceived usefulness and ease of use, digital mindset and status quo behavior play a role: acceptance of digital innovations, a lack of endurance as well as a lack of frustration tolerance with the status quo lead to a higher intention to use. Moreover, the results are strongly moderated by the severity of the damage event, an insurance-specific factor that has been sparsely considered so far. The latter discovery implies that customers prefer a communication channel choice based on the individual circumstances of the claim.
Music streaming platforms offer music listeners an overwhelming choice of music. Therefore, users of streaming platforms need the support of music recommendation systems to find music that suits their personal taste. Currently, a new class of recommender systems based on knowledge graph embeddings promises to improve the quality of recommendations, in particular to provide diverse and novel recommendations. This paper investigates how knowledge graph embeddings can improve music recommendations. First, it is shown how a collaborative knowledge graph can be derived from open music data sources. Based on this knowledge graph, the music recommender system EARS (knowledge graph Embedding-based Artist Recommender System) is presented in detail, with particular emphasis on recommendation diversity and explainability. Finally, a comprehensive evaluation with real-world data is conducted, comparing different embeddings and investigating the influence of different types of knowledge.
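To give a flavor of embedding-based recommendation (a hypothetical sketch, not the EARS implementation): in a TransE-style model, entities and relations share one vector space, and a triple (user, likes, artist) is plausible when user + likes lies close to artist. The tiny random embeddings below stand in for vectors that would be learned from the collaborative knowledge graph.

```python
import math
import random

# Hypothetical TransE-style scoring over made-up embeddings.
random.seed(0)
DIM = 8
emb = {name: [random.gauss(0, 1) for _ in range(DIM)]
       for name in ["user:alice", "artist:A", "artist:B", "artist:C", "rel:likes"]}

def score(user, artist):
    """TransE plausibility: negative distance between (user + relation) and artist."""
    diff = [emb[user][k] + emb["rel:likes"][k] - emb[artist][k] for k in range(DIM)]
    return -math.sqrt(sum(d * d for d in diff))

# rank candidate artists for the user by plausibility
ranking = sorted(["artist:A", "artist:B", "artist:C"],
                 key=lambda a: score("user:alice", a), reverse=True)
print("recommendation order:", ranking)
```

In a trained model, the same scoring can be traced back to paths in the knowledge graph, which is one way such systems support explainability.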
The transfer of historically grown monolithic software architectures into modern service-oriented architectures creates a lot of loose coupling points. This can lead to an unforeseen system behavior and can significantly impede those continuous modernization processes, since it is not clear where bottlenecks in a system arise. It is therefore necessary to monitor such modernization processes with an adaptive monitoring concept to be able to correctly record and interpret unpredictable system dynamics. This contribution presents a generic QoS measurement framework for service-based systems. The framework consists of an XML-based specification for the measurement to be performed – the Information Model (IM) – and the QoS System, which provides an execution platform for the IM. The framework will be applied to a standard business process of the German insurance industry, and the concepts of the IM and their mapping to artifacts of the QoS System will be presented. Furthermore, design and implementation of the QoS System’s parser and generator module and the generated artifacts are explained in detail, e.g., event model, agents, measurement module and analyzer module.
In this paper we describe the selection of a modern build automation tool for an industry research partner of ours, an insurance company. Build automation has become increasingly important over the years and is today one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more products for build automation have entered the market and existing tools have changed their functional scope, there is nowadays a large number of tools that differ greatly in the functionality they offer. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools and a summarized comparison of two tools.
In this paper, we present a novel approach for real-time rendering of soft eclipse shadows cast by spherical, atmosphereless bodies. While this problem may seem simple at first, it is complicated by several factors. First, the extreme scale differences and huge mutual distances of the involved celestial bodies cause rendering artifacts in practice. Second, the surface of the Sun does not emit light evenly in all directions (an effect which is known as limb darkening). This makes it impossible to model the Sun as a uniform spherical light source. Finally, our intended applications include real-time rendering of solar eclipses in virtual reality, which require very high frame rates. As a solution to these problems, we precompute the amount of shadowing into an eclipse shadow map, which is parametrized so that it is independent of the position and size of the occluder. Hence, a single shadow map can be used for all spherical occluders in the Solar System. We assess the errors introduced by various simplifications and compare multiple approaches in terms of performance and precision. Last but not least, we compare our approaches to the state-of-the-art and to reference images. The implementation has been published under the MIT license.
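The quantity such an eclipse shadow map stores can be approximated as follows, assuming a uniform solar disk (i.e., ignoring the limb darkening the paper accounts for); inputs are apparent angular radii and separation as seen from the shaded point, and the numbers in the example are invented.

```python
import math

def occluded_fraction(r_sun, r_occ, d):
    """Fraction of the Sun's disk hidden by an occluder of angular radius r_occ
    whose center is an angle d away from the Sun's center (uniform-disk model)."""
    if d >= r_sun + r_occ:                  # disks disjoint: full sunlight
        return 0.0
    if d <= abs(r_sun - r_occ):             # one disk entirely inside the other
        return min(1.0, (r_occ / r_sun) ** 2)
    # lens-shaped intersection area of two circles
    a1 = r_sun ** 2 * math.acos((d * d + r_sun ** 2 - r_occ ** 2) / (2 * d * r_sun))
    a2 = r_occ ** 2 * math.acos((d * d + r_occ ** 2 - r_sun ** 2) / (2 * d * r_occ))
    tri = 0.5 * math.sqrt((-d + r_sun + r_occ) * (d + r_sun - r_occ)
                          * (d - r_sun + r_occ) * (d + r_sun + r_occ))
    return (a1 + a2 - tri) / (math.pi * r_sun ** 2)

print(occluded_fraction(0.266, 0.268, 0.0))   # centered, slightly larger occluder: total eclipse
print(occluded_fraction(0.266, 0.268, 1.0))   # far apart: no eclipse
```

Precomputing this fraction over (occluder radius ratio, separation) is what makes a single shadow map reusable for every spherical occluder.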
The paper provides a comprehensive overview of modeling and pricing cyber insurance and includes clear and easily understandable explanations of the underlying mathematical concepts. We distinguish three main types of cyber risks: idiosyncratic, systematic, and systemic cyber risks. While for idiosyncratic and systematic cyber risks, classical actuarial and financial mathematics appear to be well-suited, systemic cyber risks require more sophisticated approaches that capture both network and strategic interactions. In the context of pricing cyber insurance policies, issues of interdependence arise for both systematic and systemic cyber risks; classical actuarial valuation needs to be extended to include more complex methods, such as concepts of risk-neutral valuation and (set-valued) monetary risk measures.
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
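A minimal stand-in for the data-stream learners compared in the paper: a tiny online logistic regression, updated one stream element at a time, predicting whether a task will fail. The features (distance, worker rating) and the synthetic stream are invented for illustration.

```python
import math
import random

random.seed(1)
w = [0.0, 0.0, 0.0]                 # weights for [bias, distance, worker_rating]

def predict(x):
    """Probability that the task fails, given the current weights."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def partial_fit(x, y, lr=0.1):
    """One SGD step on a single (features, label) stream element."""
    err = predict(x) - y
    for i in range(len(w)):
        w[i] -= lr * err * x[i]

# synthetic stream: long distances and low worker ratings tend to cause failures
for _ in range(2000):
    dist, rating = random.random(), random.random()
    y = 1 if dist - rating + random.gauss(0, 0.2) > 0 else 0
    partial_fit([1.0, dist, rating], y)

print("P(fail | far, low-rated worker) =", round(predict([1.0, 0.9, 0.1]), 2))
print("P(fail | near, high-rated worker) =", round(predict([1.0, 0.1, 0.9]), 2))
```

Because the model is updated per element, it can track drifting failure causes, which is the property that matters for continuously adapting task assignments.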
High-performance firms typically have two features in common: (i) they produce in more than one country and (ii) they produce more than one product. In this paper, we analyze the internationalization strategies of multi-product firms. Guided by several new stylized facts, we develop a theoretical model to determine optimal modes of market access at the firm–product level. We find that the most productive firms sell core varieties via foreign direct investment and export products with intermediate productivity. Shocks to trade costs and technology affect the endogenous decision to export or produce abroad at the product level and, in turn, the relative productivity between parents and affiliates.
Dramatic increases in the number of cyber security attacks and breaches targeting businesses and organizations have been experienced in recent years. The negative impacts of these breaches not only cause the stealing and compromising of sensitive information, malfunctioning of network devices, disruption of everyday operations and financial damage to the attacked business or organization itself, but may also spread to peer businesses and organizations in the same industry. Therefore, prevention and early detection of these attacks play a significant role in the continuity of operations of IT-dependent organizations. At the same time, detection of various types of attacks has become extremely difficult as attacks get more sophisticated, distributed and enabled by Artificial Intelligence (AI). Detection and handling of these attacks require sophisticated intrusion detection systems which run on powerful hardware and are administered by highly experienced security staff. Yet, these resources are costly to employ, especially for small and medium-sized enterprises (SMEs). To address these issues, we developed an architecture within the GLACIER project that can be realized as an in-house operated Security Information and Event Management (SIEM) system for SMEs. It is affordable for SMEs as it is solely based on free and open-source components and thus does not require any licensing fees. Moreover, it is a Self-Contained System (SCS) and does not require much management effort. It requires short configuration and learning phases, after which it can be self-contained as long as the monitored infrastructure is stable (apart from a reaction to the generated alerts, which may be outsourced to a service provider in SMEs, if necessary). Another main benefit of this system is to supply data to advanced detection algorithms, such as multidimensional analysis algorithms, in addition to traditional SIEM-specific tasks like data collection, normalization, enrichment, and storage.
It supports the application of novel methods to detect security-related anomalies. The most distinct feature of this system, which differentiates it from similar solutions on the market, is its user feedback capability. Detected anomalies are displayed in a Graphical User Interface (GUI) to the security staff, who can give feedback on anomalies. Subsequently, this feedback is utilized to fine-tune the anomaly detection algorithm. In addition, the GUI also provides access to network actors for quick incident responses. The system in general is suitable for both Information Technology (IT) and Operational Technology (OT) environments, while the detection algorithm must be trained specifically for each of these environments.
Decision support systems for traffic management systems have to cope with a high volume of events continuously generated by sensors. Conventional software architectures do not explicitly target the efficient processing of continuous event streams. Recently, event-driven architectures (EDA) have been proposed as a new paradigm for event-based applications. In this paper we propose a reference architecture for event-driven traffic management systems, which enables the analysis and processing of complex event streams in real-time and is therefore well-suited for decision support in sensor-based traffic control systems. We will illustrate our approach in the domain of road traffic management. In particular, we will report on the redesign of an intelligent transportation management system (ITMS) prototype for the high-capacity road network in Bilbao, Spain.
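The kind of CEP rule such an architecture evaluates can be sketched as a sliding window over sensor events that derives a complex "congestion" event; the event fields and thresholds below are illustrative and not taken from the ITMS prototype.

```python
from collections import deque

class CongestionRule:
    """Derive a complex 'Congestion' event when the average speed in a
    sliding window of sensor readings stays below a threshold."""

    def __init__(self, window=5, threshold=30.0):
        self.window = deque(maxlen=window)   # last N speed readings
        self.threshold = threshold

    def on_event(self, speed_kmh):
        """Feed one sensor event; return a derived complex event or None."""
        self.window.append(speed_kmh)
        if len(self.window) == self.window.maxlen:
            avg = sum(self.window) / len(self.window)
            if avg < self.threshold:
                return {"type": "Congestion", "avg_speed": avg}
        return None

rule = CongestionRule()
stream = [55, 42, 31, 25, 22, 18, 15]        # speeds (km/h) from one road segment
alerts = [e for e in map(rule.on_event, stream) if e]
print(alerts)
```

A CEP engine evaluates many such windowed patterns concurrently over the incoming event streams, emitting derived events that drive the decision support layer.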
M2M (machine-to-machine) systems use various communication technologies for automatically monitoring and controlling machines. In M2M systems, each machine emits a continuous stream of data records, which must be analyzed in real-time. Intelligent M2M systems should be able to diagnose their actual states and to trigger appropriate actions as soon as critical situations occur. In this paper, we show how complex event processing (CEP) can be used as the key technology for intelligent M2M systems. We provide an event-driven architecture that is adapted to the M2M domain. In particular, we define different models for the M2M domain, M2M machine states and M2M events. Furthermore, we present a general reference architecture defining the main stages of processing machine data. To prove the usefulness of our approach, we consider two real-world examples, ‘solar power plants’ and ‘printers’, which show how easily the general architecture can be extended to concrete M2M scenarios.
Complex Event Processing (CEP) is a modern software technology for the dynamic analysis of continuous data streams. CEP is capable of searching extremely large data streams in real time for the presence of event patterns. So far, specifying the event patterns of CEP rules has remained a manual task based on the expertise of domain experts. This paper presents a novel bat-inspired swarm algorithm for automatically mining CEP rule patterns that express the relevant causal and temporal relations hidden in data streams. The basic suitability and performance of the approach are demonstrated by extensive evaluation with both synthetically generated data and real data from the traffic domain.
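The bat-inspired search can be sketched as follows; here the bats minimize a toy objective (the sphere function), whereas in the paper the objective would score candidate CEP rule patterns against the event stream. Parameter choices are illustrative.

```python
import random

random.seed(42)

def objective(x):
    """Toy objective (sphere function); stands in for a CEP rule fitness score."""
    return sum(xi * xi for xi in x)

DIM, BATS, STEPS = 2, 15, 200
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(BATS)]
vel = [[0.0] * DIM for _ in range(BATS)]
best = list(min(pos, key=objective))         # best solution found so far

for _ in range(STEPS):
    for b in range(BATS):
        freq = random.uniform(0.0, 1.0)      # random pulse frequency
        for d in range(DIM):
            vel[b][d] += (pos[b][d] - best[d]) * freq
            pos[b][d] += vel[b][d]
        if random.random() > 0.5:            # local random walk around the best bat
            pos[b] = [best[d] + 0.01 * random.gauss(0, 1) for d in range(DIM)]
        if objective(pos[b]) < objective(best):   # greedy acceptance
            best = list(pos[b])

print("best solution:", [round(v, 3) for v in best],
      "objective:", round(objective(best), 6))
```

This omits the loudness and pulse-rate schedules of the full bat algorithm; the point is the interplay of frequency-driven global moves, local walks around the current best, and greedy acceptance.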
Nowadays, problems related to solid waste management have become a challenge for most countries due to the rising generation of waste, related environmental issues, and the associated costs of produced wastes. Effective waste management systems at different geographic levels require accurate forecasting of future waste generation. In this work, we investigate how open-access data, such as those provided by the Organisation for Economic Co-operation and Development (OECD), can be used for the analysis of waste data. The main idea of this study is to find the links between the socioeconomic and demographic variables that determine the amounts of the different types of solid waste produced by countries. This would make it possible to accurately predict waste production at the country level and to determine the requirements for the development of effective waste management strategies. In particular, we use several machine learning regression models (Support Vector, Gradient Boosting, and Random Forest) and a clustering model (k-means) to predict waste production for OECD countries over the years and to cluster these countries according to similar characteristics. The main contributions of our work are: (1) waste analysis at the OECD country level to compare and cluster countries according to similar predicted waste features; (2) the detection of the most relevant features for the prediction models; and (3) the comparison of several regression models with respect to prediction accuracy. The coefficient of determination (R2), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE) are used as indices of the efficiency of the developed models.
Our experiments have shown that some pre-processing of the OECD data is an essential stage of the analysis; that the Random Forest Regressor (RFR) produced the best prediction results over the dataset; and that these results are highly influenced by the quality of the available socio-economic data. In particular, the RFR model exhibited the highest accuracy in predictions for most waste types. For example, for “municipal” waste it produced global error values of R2 = 1 and MAPE = 4.31 for the test set, and for “household” waste it produced R2 = 1 and MAPE = 3.03. Our results indicate that the considered models (and especially RFR) are all effective in predicting the amount of produced waste derived from the input data for the considered countries.
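The four error indices used in the study can be reproduced with a few lines of stdlib Python; the waste-production figures below are made up for illustration.

```python
import math

# Invented ground-truth and predicted waste production values (e.g. kg per capita).
y_true = [520.0, 480.0, 610.0, 455.0]
y_pred = [500.0, 490.0, 600.0, 470.0]

n = len(y_true)
mean_y = sum(y_true) / n
mae  = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
mape = 100.0 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / n
# R2: one minus the ratio of residual to total sum of squares
r2 = 1.0 - (sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
            / sum((t - mean_y) ** 2 for t in y_true))

print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  MAPE={mape:.2f}%  R2={r2:.3f}")
```

Note that MAPE weights errors relative to the true values, so it penalizes the same absolute error more for small countries than for large ones, which is one reason the study reports it alongside MAE and RMSE.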
Nowadays, most recommender systems are based on a centralized architecture, which can cause crucial issues in terms of trust, privacy, dependability, and costs. In this paper, we propose a decentralized and distributed MANET-based (Mobile Ad-hoc NETwork) recommender system for open facilities. The system is based on mobile devices that collect sensor data about users’ locations to derive implicit ratings that are used for collaborative filtering recommendations. The mechanisms of deriving ratings and propagating them in a MANET network are discussed in detail. Finally, extensive experiments demonstrate the suitability of the approach in terms of different performance metrics.
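The collaborative-filtering step can be sketched as follows (a hypothetical, stdlib-only illustration): implicit ratings, e.g. derived from sensed visit durations, are aggregated per user, and unvisited facilities are ranked for a target user by similarity-weighted votes of peers encountered in the MANET. The users and facilities are invented.

```python
import math

ratings = {                      # user -> {facility: implicit rating in [0, 1]}
    "u1": {"cafe": 0.9, "museum": 0.7, "park": 0.1},
    "u2": {"cafe": 0.8, "museum": 0.9, "gym": 0.6},
    "u3": {"park": 0.9, "gym": 0.8},
}

def cosine(a, b):
    """Cosine similarity between two sparse rating vectors."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def recommend(user):
    """Rank facilities the user has not visited by similarity-weighted peer ratings."""
    scores = {}
    for peer, prefs in ratings.items():
        if peer == user:
            continue
        sim = cosine(ratings[user], prefs)
        for fac, r in prefs.items():
            if fac not in ratings[user]:
                scores[fac] = scores.get(fac, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print("recommendations for u1:", recommend("u1"))
```

In the distributed setting, each device holds only the rating vectors it has received from encountered peers, so this computation runs locally on partial data.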
Background: Virtual reality (VR) is increasingly used as a simulation technology in emergency medicine education and training, in particular for training nontechnical skills. Experimental studies comparing teaching and learning in VR with traditional training media often demonstrate equivalence or even superiority with respect to particular variables of learning or training effectiveness.
Objective: In the EPICSAVE (Enhanced Paramedic Vocational Training with Serious Games and Virtual Environments) project, a highly immersive room-scaled multi-user 3-dimensional VR simulation environment was developed. In this feasibility study, we wanted to gain initial insights into the training effectiveness and media use factors influencing learning and training in VR.
Methods: The virtual emergency scenario was anaphylaxis grade III with shock, swelling of the upper and lower respiratory tract, as well as skin symptoms in a 5-year-old girl (virtual patient) visiting an indoor family amusement park with her grandfather (virtual agent). A cross-sectional, one-group pretest and posttest design was used to evaluate the training effectiveness and quality of the training execution. The sample included 18 active emergency physicians.
Results: The 18 participants rated the VR simulation training positive in terms of training effectiveness and quality of the training execution. A strong, significant correlation (r=.53, P=.01) between experiencing presence and assessing training effectiveness was observed. Perceived limitations in usability and a relatively high extraneous cognitive load reduced this positive effect.
Conclusions: The training within the virtual simulation environment was rated as an effective educational approach. Specific media use factors appear to modulate training effectiveness (ie, improvement through “experience of presence” or reduction through perceived limitations in usability). These factors should be specific targets in the further development of this VR simulation training.