In this paper we describe methods to approximate functions and differential operators on adaptive sparse (dyadic) grids. We distinguish between several representations of a function on the sparse grid and we describe how finite difference (FD) operators can be applied to these representations. For general variable coefficient equations on sparse grids, genuine finite element (FE) discretizations are not feasible and FD operators allow an easier operator evaluation than the adapted FE operators. However, the structure of the FD operators is complex. With the aim of constructing an efficient multigrid procedure, we analyze the structure of the discrete Laplacian in its hierarchical representation and show the relation between the full and the sparse grid case. These rather complex relations, which are expressed by scaling matrices for each separate coordinate direction, make us doubt the possibility of constructing efficient preconditioners that show spectral equivalence. Hence, we question the possibility of constructing a natural multigrid algorithm with optimal O(N) efficiency. We conjecture that for the efficient solution of a general class of adaptive grid problems it is better to accept an additional condition for the dyadic grids (condition L) and to apply adaptive hp-discretization.
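The hierarchical representation mentioned above can be illustrated in one dimension: on a dyadic grid, each odd grid point of level l stores a hierarchical surplus, the deviation of the function from the linear interpolant of its two level-(l-1) parents. The following is a minimal sketch under the assumption of a piecewise-linear hat-function basis on [0, 1]; the function name and data layout are illustrative, not the paper's implementation.

```python
import numpy as np

def hierarchize_1d(f, max_level):
    """Hierarchical surpluses of f on the dyadic grid x = i / 2**max_level.

    At level l, the surplus at an odd grid point is the nodal value minus the
    linear interpolant of its two level-(l-1) parent points.
    """
    surpluses = {}
    for level in range(1, max_level + 1):
        h = 1.0 / 2 ** level
        for i in range(1, 2 ** level, 2):                # odd indices only
            x = i * h
            parent_left, parent_right = (i - 1) * h, (i + 1) * h
            surpluses[(level, i)] = f(x) - 0.5 * (f(parent_left) + f(parent_right))
    return surpluses

# Surpluses vanish wherever the function is already linear, which is exactly
# the criterion adaptive sparse grids use to decide where to refine.
s = hierarchize_1d(lambda x: 2 * x, 3)
print(max(abs(v) for v in s.values()))  # 0.0 for a linear function
```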
The paper presents a comprehensive model of a banking system that integrates network effects, bankruptcy costs, fire sales, and cross-holdings. For the integrated financial market we prove the existence of a price-payment equilibrium and design an algorithm for the computation of the greatest and the least equilibrium. The number of defaults corresponding to the greatest price-payment equilibrium is analyzed in several comparative case studies. These illustrate the individual and joint impact of interbank liabilities, bankruptcy costs, fire sales and cross-holdings on systemic risk. We study policy implications and regulatory instruments, including central bank guarantees and quantitative easing, the significance of last wills of financial institutions, and capital requirements.
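The fixed-point structure behind such a price-payment equilibrium can be sketched in the baseline Eisenberg-Noe setting, i.e. without the bankruptcy costs, fire sales, and cross-holdings that the paper adds: iterating the clearing map downward from the nominal liabilities converges monotonically to the greatest equilibrium. A minimal sketch, with variable names chosen for illustration:

```python
import numpy as np

def greatest_clearing_vector(pbar, e, Pi, tol=1e-10, max_iter=1000):
    """Greatest clearing payment vector of the baseline Eisenberg-Noe model.

    pbar : total nominal interbank liabilities of each bank
    e    : outside assets of each bank
    Pi   : relative liability matrix; Pi[i, j] is the share of bank i's
           payments owed to bank j

    Iterating Phi(p) = min(pbar, e + Pi^T p) from p = pbar converges
    monotonically downward to the greatest fixed point.
    """
    p = pbar.copy()
    for _ in range(max_iter):
        p_new = np.minimum(pbar, e + Pi.T @ p)
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

# Two banks: bank 0 owes 10 (all to bank 1), bank 1 owes 5 (all to bank 0).
pbar = np.array([10.0, 5.0])
e = np.array([2.0, 1.0])
Pi = np.array([[0.0, 1.0], [1.0, 0.0]])
p = greatest_clearing_vector(pbar, e, Pi)   # bank 0 defaults, paying only 7
```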
Background:
Many patients with cardiovascular disease also show a high comorbidity of mental disorders, especially anxiety and depression. This, in turn, is associated with a decrease in quality of life. Psychocardiological treatment options are currently limited. Hence, there is a need for novel and accessible psychological help. Recently, we demonstrated that a brief face-to-face metacognitive therapy (MCT) based intervention is promising in treating anxiety and depression. Here, we aim to translate the face-to-face approach into a digital application and explore the feasibility of this approach.
Methods:
We translated a validated brief psychocardiological intervention into a novel non-blended web app. The data of 18 patients suffering from various cardiac conditions but without a diagnosed mental illness were analyzed after they had used the web app over a two-week period in a feasibility trial. The aim was to assess whether a non-blended, web-app-based MCT approach is feasible for patients with cardiovascular disease.
Results:
Overall, patients were able to use the web app and rated it as satisfactory and beneficial. In addition, there was a first indication that using the app improved the cardiac patients’ subjectively perceived health and reduced their anxiety. The approach therefore seems feasible for a future randomized controlled trial.
Conclusion:
Applying a metacognitive brief intervention via a non-blended web app seems to show good acceptance and feasibility in a small target group of patients with CVD. Future studies should further develop, improve and validate digital psychotherapy approaches, especially in patient groups lacking access to standard psychotherapeutic care.
There are many aspects of code quality, some of which are difficult to capture or to measure. Despite the importance of software quality, there is a lack of commonly accepted measures or indicators for code quality that can be linked to quality attributes. We investigate software developers’ perceptions of source code quality and the practices they recommend to achieve these qualities. We analyze data from semi-structured interviews with 34 professional software developers, programming teachers and students from Europe and the U.S. For the interviews, participants were asked to bring code examples to exemplify what they consider good and bad code, respectively. Readability and structure were used most commonly as defining properties for quality code. Together with documentation, they were also suggested as the most common target properties for quality improvement. When discussing actual code, developers focused on structure, comprehensibility and readability as quality properties. When analyzing relationships between properties, the most commonly talked about target property was comprehensibility. Documentation, structure and readability were named most frequently as source properties to achieve good comprehensibility. Some of the most important source code properties contributing to code quality as perceived by developers lack clear definitions and are difficult to capture. More research is therefore necessary to measure the structure, comprehensibility and readability of code in ways that matter for developers and to relate these measures of code structure, comprehensibility and readability to common software quality attributes.
The digital transformation, with its new technologies and customer expectations, has a significant effect on the customer channels in the insurance industry. The objective of this study is to identify enabling and hindering factors for the adoption of online claim notification services, which are an important part of the customer experience in insurance. For this purpose, we conducted a quantitative cross-sectional survey based on the exemplary scenario of car insurance in Germany and analyzed the data via structural equation modeling (SEM). The findings show that, besides classical technology acceptance factors such as perceived usefulness and ease of use, digital mindset and status quo behavior play a role: acceptance of digital innovations as well as lacking endurance and lacking frustration tolerance with the status quo lead to a higher intention to use. Moreover, the results are strongly moderated by the severity of the damage event, an insurance-specific factor that has so far been sparsely considered. The latter finding implies that customers prefer to choose a communication channel based on the individual circumstances of the claim.
Music streaming platforms offer music listeners an overwhelming choice of music. Therefore, users of streaming platforms need the support of music recommendation systems to find music that suits their personal taste. Currently, a new class of recommender systems based on knowledge graph embeddings promises to improve the quality of recommendations, in particular to provide diverse and novel recommendations. This paper investigates how knowledge graph embeddings can improve music recommendations. First, it is shown how a collaborative knowledge graph can be derived from open music data sources. Based on this knowledge graph, the music recommender system EARS (knowledge graph Embedding-based Artist Recommender System) is presented in detail, with particular emphasis on recommendation diversity and explainability. Finally, a comprehensive evaluation with real-world data is conducted, comparing different embeddings and investigating the influence of different types of knowledge.
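The core idea of embedding-based recommendation can be sketched with a translation-based model such as TransE: entities and relations become vectors, and a triple (head, relation, tail) is plausible when head + relation lands near tail. The abstract does not state which embedding model EARS uses, so the following is purely illustrative; the entity and relation names are hypothetical.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: the closer h + r is to t, the higher the score."""
    return -np.linalg.norm(h + r - t)

rng = np.random.default_rng(0)
dim = 8
# Toy embeddings; in a real system these would be trained on the knowledge graph.
artist = rng.normal(size=dim)          # hypothetical artist entity
genre_rel = rng.normal(size=dim)       # hypothetical "has genre" relation
jazz = artist + genre_rel              # consistent tail: score exactly 0
rock = rng.normal(size=dim)            # unrelated tail: strongly negative score

def rank_tails(head, relation, candidates):
    """Rank candidate (name, vector) tails by descending TransE score."""
    return sorted(candidates, key=lambda c: transe_score(head, relation, c[1]),
                  reverse=True)

ranked = rank_tails(artist, genre_rel, [("rock", rock), ("jazz", jazz)])
```

Ranking all candidate tails of a relation in this way is what turns a trained embedding into a recommender.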
The transfer of historically grown monolithic software architectures into modern service-oriented architectures creates a lot of loose coupling points. This can lead to an unforeseen system behavior and can significantly impede those continuous modernization processes, since it is not clear where bottlenecks in a system arise. It is therefore necessary to monitor such modernization processes with an adaptive monitoring concept to be able to correctly record and interpret unpredictable system dynamics. This contribution presents a generic QoS measurement framework for service-based systems. The framework consists of an XML-based specification for the measurement to be performed – the Information Model (IM) – and the QoS System, which provides an execution platform for the IM. The framework will be applied to a standard business process of the German insurance industry, and the concepts of the IM and their mapping to artifacts of the QoS System will be presented. Furthermore, design and implementation of the QoS System’s parser and generator module and the generated artifacts are explained in detail, e.g., event model, agents, measurement module and analyzer module.
In this paper we describe the selection of a modern build automation tool for an industry research partner of ours, an insurance company. Build automation has become increasingly important over the years and is now one of the central concepts in topics such as cloud-native development based on microservices and DevOps. As more and more products for build automation have entered the market and existing tools have changed their functional scope, there is now a large number of tools on the market that differ greatly in what they offer. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools, and a summarized comparison of two tools.
In this paper, we present a novel approach for real-time rendering of soft eclipse shadows cast by spherical, atmosphereless bodies. While this problem may seem simple at first, it is complicated by several factors. First, the extreme scale differences and huge mutual distances of the involved celestial bodies cause rendering artifacts in practice. Second, the surface of the Sun does not emit light evenly in all directions (an effect which is known as limb darkening). This makes it impossible to model the Sun as a uniform spherical light source. Finally, our intended applications include real-time rendering of solar eclipses in virtual reality, which require very high frame rates. As a solution to these problems, we precompute the amount of shadowing into an eclipse shadow map, which is parametrized so that it is independent of the position and size of the occluder. Hence, a single shadow map can be used for all spherical occluders in the Solar System. We assess the errors introduced by various simplifications and compare multiple approaches in terms of performance and precision. Last but not least, we compare our approaches to the state-of-the-art and to reference images. The implementation has been published under the MIT license.
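The basic geometric quantity behind such a shadow map is the fraction of the solar disc hidden by the occluder at a given angular separation. The following is a minimal sketch under the simplifying assumption of a uniform solar disc, i.e. exactly the limb-darkening-free zeroth-order model that the paper's precomputed shadow maps go beyond; the function name and parametrization are illustrative, not the paper's.

```python
import math

def occluded_fraction(r_sun, r_occ, d):
    """Fraction of a uniform solar disc (angular radius r_sun) covered by a
    spherical occluder (angular radius r_occ) at angular separation d.
    Ignores limb darkening."""
    if d >= r_sun + r_occ:                       # discs disjoint: no eclipse
        return 0.0
    if d <= abs(r_sun - r_occ):                  # one disc contains the other
        return min(1.0, (r_occ / r_sun) ** 2)
    # area of the lens-shaped intersection of the two circles
    a1 = r_sun ** 2 * math.acos((d * d + r_sun ** 2 - r_occ ** 2) / (2 * d * r_sun))
    a2 = r_occ ** 2 * math.acos((d * d + r_occ ** 2 - r_sun ** 2) / (2 * d * r_occ))
    a3 = 0.5 * math.sqrt(max(0.0, (-d + r_sun + r_occ) * (d + r_sun - r_occ)
                                  * (d - r_sun + r_occ) * (d + r_sun + r_occ)))
    return (a1 + a2 - a3) / (math.pi * r_sun ** 2)

frac = occluded_fraction(1.0, 1.0, 1.0)   # partial eclipse, roughly 0.39
```

Tabulating this fraction over separation and occluder size is what makes a single precomputed shadow map reusable for every spherical occluder.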
The paper provides a comprehensive overview of modeling and pricing cyber insurance and includes clear and easily understandable explanations of the underlying mathematical concepts. We distinguish three main types of cyber risks: idiosyncratic, systematic, and systemic cyber risks. While for idiosyncratic and systematic cyber risks, classical actuarial and financial mathematics appear to be well-suited, systemic cyber risks require more sophisticated approaches that capture both network and strategic interactions. In the context of pricing cyber insurance policies, issues of interdependence arise for both systematic and systemic cyber risks; classical actuarial valuation needs to be extended to include more complex methods, such as concepts of risk-neutral valuation and (set-valued) monetary risk measures.
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
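The abstract does not name the concrete stream learner, so as an illustration of data stream learning for task-outcome prediction, here is a minimal online logistic regression updated one observation at a time; the two features (distance to task, worker rating) are hypothetical stand-ins for real crowdsourcing features.

```python
import numpy as np

class OnlineLogistic:
    """Minimal data-stream learner: logistic regression trained one task at a
    time with SGD, so predictions adapt as new task outcomes arrive."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def learn_one(self, x, y):
        err = self.predict_proba(x) - y          # gradient of the log loss
        self.w -= self.lr * err * x
        self.b -= self.lr * err

rng = np.random.default_rng(1)
model = OnlineLogistic(n_features=2)
# Hypothetical rule: a task succeeds when the worker rating exceeds the distance.
for _ in range(2000):
    x = rng.uniform(0, 1, size=2)                # (distance, rating)
    y = 1.0 if x[1] - x[0] > 0 else 0.0
    model.learn_one(x, y)
```

Because the model never sees the stream twice, it can keep adapting when failure-causing events shift the outcome distribution.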
High-performance firms typically have two features in common: (i) they produce in more than one country and (ii) they produce more than one product. In this paper, we analyze the internationalization strategies of multi-product firms. Guided by several new stylized facts, we develop a theoretical model to determine optimal modes of market access at the firm–product level. We find that the most productive firms sell core varieties via foreign direct investment and export products with intermediate productivity. Shocks to trade costs and technology affect the endogenous decision to export or produce abroad at the product-level and, in turn, the relative productivity between parents and affiliates.
The legally mandated provision of digitalization services confronts public administrations with growing challenges. Owing to the heterogeneity of their users, public administrations often find it difficult to elicit and meet clear requirements. Added to this are structural and organizational conditions, such as pronounced decision-making hierarchies, that can impede a user-centered approach. Moreover, public administration increasingly faces problems of growing complexity. This raises the question of how a modern approach to user-centricity and problem solving can be employed in public administration. This article presents the results of a single-case study at the Niedersächsische Landesbehörde für Straßenbau und Verkehr (NLStBV). We conducted a Design Thinking workshop with a focus group in order to identify the potential and possible applications of the approach in public administration. Based on a SWOT analysis, we examined the results and give four concrete recommendations for introducing and using Design Thinking.
Good work for employees must be assessed differently depending on the work context, but it depends on how certain contextual factors are shaped. These contextual factors of good work are the central research subject of this article. The investigation focuses on an e-commerce team (EC team) at Otto.
The goal of our article is to analyze the contextual factors that enable good work. We look for a way of working that functions in the long run and enables high work quality and quantity. This entails two primary goals: defining what constitutes good work, and identifying the contextual factors for good work within the EC team at Otto.
Our research question is: Which contextual factors are particularly relevant for good work in Otto's EC team under the current remote-work arrangement, and how can they be shaped accordingly?
To answer the research question, we first conduct a literature review on the definition of good work. We then examine which factors the literature considers conducive to good work and group the resulting factors into clusters.
The clusters are put to the Otto EC team for a vote via a multi-point query in the virtual collaboration tool Miro. Based on the outcome of the vote, a gamification board, reminder e-mails, and a mood barometer are created in order to analyze the effects of the selected cluster in an experiment.
These measures are carried out over a period of two weeks. Interviews are then conducted and evaluated in order to capture the participants' experiences. The interview results feed into the concluding recommendations for action.
The Covid-19 pandemic and its effects on the world of work have brought employee workload into sharper focus. Due, among other things, to the comprehensive shift to remote work, this also applies to agile software development teams in many companies. Excessive workload can lead to various negative effects, such as increased sick leave, reduced employee well-being, or lower productivity. It is also known that workload in knowledge work affects the quality of work results. This research contribution identifies potential workload factors for the members of an agile software development team at Otto GmbH & Co KG. Based on these factors, we present measures to reduce workload and explain the findings we validated in an experiment. Our results show that even small-scale measures, such as introducing quiet work phases during the working day, lead to positive effects, for example with regard to improved concentration, and we show how these affect the quality of work results.
Dramatic increases in the number of cyber security attacks and breaches against businesses and organizations have been experienced in recent years. The negative impacts of these breaches not only include the theft and compromise of sensitive information, malfunctioning of network devices, disruption of everyday operations, and financial damage to the attacked business or organization itself, but may also spread to peer businesses and organizations in the same industry. Therefore, prevention and early detection of these attacks play a significant role in the continuity of operations in IT-dependent organizations. At the same time, detection of various types of attacks has become extremely difficult as attacks get more sophisticated, distributed, and enabled by Artificial Intelligence (AI). Detection and handling of these attacks require sophisticated intrusion detection systems which run on powerful hardware and are administered by highly experienced security staff. Yet, these resources are costly to employ, especially for small and medium-sized enterprises (SMEs). To address these issues, we developed an architecture within the GLACIER project that can be realized as an in-house operated Security Information and Event Management (SIEM) system for SMEs. It is affordable for SMEs, as it is solely based on free and open-source components and thus does not require any licensing fees. Moreover, it is a Self-Contained System (SCS) and does not require much management effort. It requires short configuration and learning phases, after which it can be self-contained as long as the monitored infrastructure is stable (apart from reactions to the generated alerts, which SMEs may outsource to a service provider if necessary). Another main benefit of this system is to supply data to advanced detection algorithms, such as multidimensional analysis algorithms, in addition to traditional SIEM-specific tasks like data collection, normalization, enrichment, and storage.
It supports the application of novel methods to detect security-related anomalies. The most distinct feature of this system that differentiates it from similar solutions in the market is its user feedback capability. Detected anomalies are displayed in a Graphical User Interface (GUI) to the security staff who are allowed to give feedback for anomalies. Subsequently, this feedback is utilized to fine-tune the anomaly detection algorithm. In addition, this GUI also provides access to network actors for quick incident responses. The system in general is suitable for both Information Technology (IT) and Operational Technology (OT) environments, while the detection algorithm must be specifically trained for each of these environments individually.
Decision support systems for traffic management systems have to cope with a high volume of events continuously generated by sensors. Conventional software architectures do not explicitly target the efficient processing of continuous event streams. Recently, event-driven architectures (EDA) have been proposed as a new paradigm for event-based applications. In this paper we propose a reference architecture for event-driven traffic management systems, which enables the analysis and processing of complex event streams in real-time and is therefore well-suited for decision support in sensor-based traffic control systems. We will illustrate our approach in the domain of road traffic management. In particular, we will report on the redesign of an intelligent transportation management system (ITMS) prototype for the high-capacity road network in Bilbao, Spain.
M2M (machine-to-machine) systems use various communication technologies for automatically monitoring and controlling machines. In M2M systems, each machine emits a continuous stream of data records, which must be analyzed in real-time. Intelligent M2M systems should be able to diagnose their actual states and to trigger appropriate actions as soon as critical situations occur. In this paper, we show how complex event processing (CEP) can be used as the key technology for intelligent M2M systems. We provide an event-driven architecture that is adapted to the M2M domain. In particular, we define different models for the M2M domain, M2M machine states and M2M events. Furthermore, we present a general reference architecture defining the main stages of processing machine data. To prove the usefulness of our approach, we consider two real-world examples ‘solar power plants’ and ‘printers’, which show how easily the general architecture can be extended to concrete M2M scenarios.
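The CEP idea of detecting critical situations in a continuous stream of machine readings can be sketched in a few lines. The following toy pattern (an "Overheating" complex event derived from consecutive over-threshold temperature readings) is purely illustrative; the event names and thresholds are hypothetical, not taken from the paper's M2M models.

```python
from collections import deque

def detect_overheating(readings, threshold=80.0, window=3):
    """Minimal CEP-style pattern: emit a complex 'Overheating' event when
    `window` consecutive temperature readings exceed `threshold`.

    `readings` is an iterable of (timestamp, temperature) simple events.
    """
    recent = deque(maxlen=window)
    alerts = []
    for ts, temp in readings:
        recent.append(temp)
        if len(recent) == window and all(t > threshold for t in recent):
            alerts.append(("Overheating", ts))
            recent.clear()                       # consume the matched events
    return alerts

stream = [(0, 70), (1, 85), (2, 88), (3, 90), (4, 75), (5, 86), (6, 87), (7, 91)]
alerts = detect_overheating(stream)              # complex events at t=3 and t=7
```

A production CEP engine expresses such patterns declaratively and evaluates many of them concurrently over the stream, but the sliding-window matching shown here is the underlying mechanism.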
Complex Event Processing (CEP) is a modern software technology for the dynamic analysis of continuous data streams. CEP is capable of searching extremely large data streams in real time for the presence of event patterns. So far, specifying the event patterns of CEP rules is still a manual task based on the expertise of domain experts. This paper presents a novel bat-inspired swarm algorithm for automatically mining CEP rule patterns that express the relevant causal and temporal relations hidden in data streams. The basic suitability and performance of the approach are demonstrated by extensive evaluation with both synthetically generated data and real data from the traffic domain.
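For readers unfamiliar with the metaheuristic, here is a minimal sketch of the classic continuous bat algorithm (Yang, 2010) minimizing a toy objective. It only illustrates the search mechanism (frequency-tuned velocities toward the swarm's best, plus a loudness-gated local random walk); the paper's encoding of CEP rule patterns as search positions is not reproduced here.

```python
import numpy as np

def bat_algorithm(objective, dim=2, n_bats=20, n_iter=200,
                  fmin=0.0, fmax=2.0, loudness=0.5, pulse_rate=0.5, seed=0):
    """Minimal continuous bat algorithm minimizing `objective`.
    Bats adjust frequency, velocity and position toward the current best
    solution; a local random walk around the best is accepted with the
    probability given by the loudness."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_bats, dim))
    v = np.zeros((n_bats, dim))
    fitness = np.array([objective(xi) for xi in x])
    best = x[np.argmin(fitness)].copy()
    best_fit = fitness.min()
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = x[i] + v[i]
            if rng.random() > pulse_rate:        # local search around the best
                cand = best + 0.01 * rng.normal(size=dim)
            f_cand = objective(cand)
            if f_cand < fitness[i] and rng.random() < loudness:
                x[i], fitness[i] = cand, f_cand  # greedy, loudness-gated accept
            if f_cand < best_fit:
                best, best_fit = cand.copy(), f_cand
    return best, best_fit

best, best_fit = bat_algorithm(lambda z: float(np.sum(z ** 2)))
```

In the rule-mining setting, the objective would instead score how well a candidate event pattern explains the labeled situations in the training stream.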