Document Type
- Conference Proceeding (50)
- Article (42)
- Bachelor Thesis (6)
- Part of a Book (2)
- Master's Thesis (2)
- Preprint (2)
- Doctoral Thesis (1)
- Report (1)
- Working Paper (1)
Language
- English (107)
Is part of the Bibliography
- no (107)
Keywords
- Serviceorientierte Architektur (9)
- Mikroservice (8)
- Computersicherheit (7)
- SOA (7)
- Agilität <Management> (6)
- Agile Softwareentwicklung (5)
- Insurance Industry (5)
- Künstliche Intelligenz (5)
- Nachhaltigkeit (5)
- Rechnernetz (5)
- Versicherungswirtschaft (5)
- Visualisierung (4)
- Agile methods (3)
- COVID-19 (3)
- Cloud Computing (3)
- Complex Event Processing (3)
- Computersimulation (3)
- E-Learning (3)
- Empfehlungssystem (3)
- Information Visualization (3)
- Microservices (3)
- Network Security (3)
- Neuronales Netz (3)
- OSGi (3)
- Security (3)
- Semantic Web (3)
- Simulation (3)
- Telearbeit (3)
- Virtuelle Realität (3)
- complex event processing (3)
- microservices (3)
- mobile health (3)
- AI (2)
- Agent <Informatik> (2)
- Agile software development (2)
- Akzeptanz (2)
- Benutzeroberfläche (2)
- Big Data (2)
- CEP (2)
- CI/CD (2)
- Chatbot (2)
- Consistency (2)
- Consumerization (2)
- Datenstrom (2)
- Deep learning (2)
- DevOps (2)
- Dienstgüte (2)
- ECA (2)
- Eindringerkennung (2)
- Ereignisgesteuerte Programmierung (2)
- ISO 9001 (2)
- Indicator Measurement (2)
- Machine Learning (2)
- Maschinelles Lernen (2)
- Maschinelles Sehen (2)
- Microservice (2)
- Microservices Architecture (2)
- Open Source (2)
- Patient (2)
- Qualität (2)
- Rendering (2)
- Resiliency (2)
- Resilienz (2)
- Service-orientation (2)
- Smart Device (2)
- Tertiärbereich (2)
- Urban Logistics (2)
- User Interfaces (2)
- Verarbeitung komplexer Ereignisse (2)
- Versicherung (2)
- Versicherungsbetrieb (2)
- XML-Model (2)
- XML-Schema (2)
- acceptance (2)
- agile methods (2)
- agile software development (2)
- build automation (2)
- build server (2)
- digital divide (2)
- eduscrum (2)
- event-driven architecture (2)
- general practitioners (2)
- mHealth (2)
- remote work (2)
- tablet (2)
- virtual reality (2)
- 3d mapping (1)
- 4-day work week (1)
- AI influences (1)
- API (1)
- Abalone (1)
- Absolvent (1)
- Ad-hoc-Netz (1)
- Adaptive IT Infrastructure (1)
- Adaptives Verfahren (1)
- Agile Manifesto (1)
- Agile Practices (1)
- Agile Software Development (1)
- Agile education (1)
- Agile method (1)
- Agile practices (1)
- Air quality (1)
- Allgemeinarzt (1)
- AlphaGo (1)
- Alternative work schedule (1)
- Android (1)
- Angst (1)
- Anomalieerkennung (1)
- Anomaly detection (1)
- Anonymization (1)
- Antifragile (1)
- Application Programming Interface (1)
- Arbeitsablauf (1)
- Arbeitswelt (1)
- Arbeitszufriedenheit (1)
- Artificial intelligence (1)
- Asymmetric encryption (1)
- Attack detection (1)
- Auswahl (1)
- Authentication (1)
- Authentifikation (1)
- Authorization (1)
- Autorisierung (1)
- BLAST algorithm (1)
- BaaS (Backend-as-a-service) (1)
- Bacterial genomics (1)
- Bankruptcy costs (1)
- Bat algorithm (1)
- Batteriefahrzeug (1)
- Battery Electric Vehicles (1)
- Bekleidungsindustrie (1)
- Benutzererlebnis (1)
- Benutzerfreundlichkeit (1)
- Beruf (1)
- Bestärkendes Lernen <Künstliche Intelligenz> (1)
- Big Data Analytics (1)
- Biometrie (1)
- Blackboard Pattern (1)
- Brettspiel (1)
- Bring Your Own Device (1)
- Business model (1)
- C-SPARQL (1)
- C2C (1)
- COBIT (1)
- CQL (1)
- Case Management (1)
- Chaos (1)
- Chaostheorie (1)
- ChatGPT (1)
- Choreography (1)
- City-Logistik (1)
- Code quality (1)
- Complex Event Processing (CEP) (1)
- Complex event processing (1)
- Compliance (1)
- Computer Graphics (1)
- Computer Vision (1)
- Computer simulation (1)
- Computergrafik (1)
- Context Awareness (1)
- Context-aware recommender systems (1)
- Continuous Delivery (1)
- Corporate Credit Risk (1)
- Cross-holdings (1)
- Crowdshipping (1)
- Crowdsourcing (1)
- Customer channel (1)
- Cyber Insurance (1)
- Cyber Risks (1)
- Cyber-Versicherung (1)
- Cyberattacke (1)
- Damage claims (1)
- Data Cubes (1)
- Data Management (1)
- Datenwürfel (1)
- Decision Support (1)
- Decision Support Tool (1)
- Delphi (1)
- Delphi method characteristics (1)
- Delphi method variants (1)
- Depression (1)
- Design Science (1)
- Designwissenschaft <Informatik> (1)
- Diffusion Models (1)
- Distributed file systems (1)
- Docker (1)
- Domain Driven Design (DDD) (1)
- Dyadisches Gitter (1)
- Dünnes Gitter (1)
- E-Assessment (1)
- E-Grocery (1)
- E-Health (1)
- EPN (1)
- Echtzeitsimulation (1)
- Education (1)
- Eilzustellung (1)
- Eingebettetes System (1)
- Elektromobilität (1)
- Enduser Device (1)
- Energieerzeugung (1)
- Entrepreneurship (1)
- Entscheidungsunterstützungssystem (1)
- Erfolgsfaktor (1)
- Evaluation (1)
- Event Admin (EA) (1)
- Event Processing Network (1)
- Event Processing Network Model (1)
- Event monitoring (1)
- Explainability (1)
- Explainable anomaly detection (1)
- FaaS (Function-as-a-service) (1)
- Fault tolerance (1)
- Fernunterricht (1)
- Financial contagion (1)
- Financial network (1)
- Finanzplanung (1)
- Fire sales (1)
- Framework (1)
- Framework <Informatik> (1)
- Freiluftsport (1)
- Function as a Service (1)
- GAN (1)
- GPT-3 (1)
- Generative Adversarial Network (1)
- Genetic algorithms (1)
- Genetischer Algorithmus (1)
- Genomic databases (1)
- Geschlechtsunterschied (1)
- Geschäftsmodell (1)
- Gesichtserkennung (1)
- Graph embeddings (1)
- Graphische Benutzeroberfläche (1)
- Green Tourism (1)
- Hadoop (1)
- Hausarzt (1)
- Hochschullehre (1)
- IDS (1)
- ISO 27 K (1)
- ISO 27000 (1)
- ISO 27001 (1)
- ISO 27002 (1)
- ISO 9001 6.1 (1)
- ISO/IEC 27000 (1)
- IT Risk (1)
- IT Risk Management (1)
- IT Security Risk (1)
- IT Sicherheit (1)
- IT security (1)
- Idiosyncratic Risk (1)
- Information systems research (1)
- Informationstechnik (1)
- Insurance (1)
- Integrated Management (1)
- Intelligent control (1)
- Intelligentes Stromnetz (1)
- Internationalisierung (1)
- Istio (1)
- JFLAP (1)
- Kardiovaskuläre Krankheit (1)
- Knowledge graphs (1)
- Kontextbezogenes System (1)
- Kontinuierliche Integration (1)
- Kreditrisiko (1)
- Kubernetes (1)
- LON-CAPA (1)
- Lean Management (1)
- Lebensmittel (1)
- Leistungskennzahl (1)
- Lernsoftware (1)
- Lieferservice (1)
- LightSabre (1)
- Literaturbericht (1)
- Location-based systems (1)
- Luftqualität (1)
- Lymphknoten (1)
- MANET (1)
- Machine-to-Machine-Kommunikation (1)
- Magnetometer (1)
- Management (1)
- MapReduce (1)
- MapReduce algorithm (1)
- Maps (1)
- Marketing (1)
- Marketingstrategie (1)
- Marktpotenzial (1)
- Masterstudium (1)
- Metagenomics (1)
- Metakognitive Therapie (1)
- Mikro-Kraft-Wärme-Kopplung (1)
- Mobile (1)
- Mobile Applications (1)
- Mobile Device (1)
- Mobile Device Management (1)
- Multidimensional Analysis (1)
- Multidimensional analysis (1)
- Music recommender (1)
- Musik (1)
- Nagios (1)
- Neural controls (1)
- Neural networks (1)
- Neural-network models (1)
- Nichtlineare Dynamik (1)
- NoSQL databases (1)
- Nonlinear Dynamics (1)
- Normality model (1)
- Notfallmedizin (1)
- OECD datasets (1)
- Offenes Kommunikationssystem (1)
- Online services (1)
- Online-Dienst (1)
- Ontologies (1)
- Open systems (1)
- OpenStack (1)
- Opportunity Management (1)
- Optische Zeichenerkennung (1)
- Orchestration (1)
- Outdoor (1)
- PageRank (1)
- Paket (1)
- Pathologie (1)
- Pathology (1)
- Personennahverkehr (1)
- Physically Based Rendering (1)
- Policy Evaluation (1)
- Portable Micro-CHP Unit (1)
- Pregel (1)
- Privacy by Design (1)
- Problemorientiertes Lernen (1)
- Processes (1)
- Projektmanagement (1)
- Prostatakrebs (1)
- Prozessmanagement (1)
- Prüfstand (1)
- Pseudonymization (1)
- Psychische Gesundheit (1)
- Psychokardiologie (1)
- QM (1)
- Quality Management (1)
- Quality assessment (1)
- Quality of Service (1)
- Quality of Service (QoS) (1)
- Quality perception (1)
- Qualitätsmanagement (1)
- Quellcode (1)
- REST <Informatik> (1)
- RESTful (1)
- RFID (1)
- Real-Time Rendering (1)
- Real-time Collaboration (1)
- Real-time simulation (1)
- Recommender System (1)
- Recommender systems (1)
- Reference Architecture (1)
- Referenzmodell (1)
- Reinforcement Learning (1)
- Remote work (1)
- Rendering (computer graphics) (1)
- Representational State Transfer (1)
- Richardson Maturity Model (1)
- Risiko (1)
- Risikomanagement (1)
- Risk Management (1)
- Robotics (1)
- Robotik (1)
- Rule learning (1)
- RuleCore (1)
- SEM (1)
- SIEM (1)
- SOA co-existence (1)
- SOAP (1)
- SPION (1)
- Scaling Law (1)
- Schadensersatzanspruch (1)
- Schwarmintelligenz (1)
- Scientific Visualization (1)
- Scrum <Vorgehensmodell> (1)
- Semantic Web Technologies (1)
- Semi-structured interviews (1)
- Sensor (1)
- Sensorsystem (1)
- Sentinel-Lymphknoten (1)
- Sequence alignment (1)
- Serverless Computing (1)
- Service Lifecycle (1)
- Service Management (1)
- Service Mesh (1)
- Service Monitoring (1)
- Service Orientation (1)
- Service Registry (1)
- Service Repository (1)
- Service Semantics (1)
- Shortest Path (1)
- Simulation Modeling (1)
- Situation Awareness (1)
- Skalierungsgesetz (1)
- Smart Buildings (1)
- Smart Grid (1)
- Smartphone (1)
- Social entrepreneurship (1)
- Software Architecture (1)
- Software Engineering (1)
- Software development (1)
- Softwarearchitektur (1)
- Softwareentwicklung (1)
- Softwarewerkzeug (1)
- Sonnenfinsternis (1)
- Source code properties (1)
- Spheres (1)
- Standortbezogener Dienst (1)
- Stochastic Modeling (1)
- Stochastischer Prozess (1)
- Strategie (1)
- Straßenverkehr (1)
- Streaming <Kommunikationstechnik> (1)
- Strukturgleichungsmodell (1)
- Super Resolution (1)
- Supply Chain Management (1)
- Supply Chains (1)
- Sustainability (1)
- Sustainable Tourism (1)
- Sustainable development (1)
- Swarm Intelligence (1)
- Swarm algorithm (1)
- Synchronisierung (1)
- Synchronization (1)
- Systematic Risk (1)
- Systemic risk (1)
- Tactile map (1)
- Taxonomie (1)
- Taxonomy (1)
- Technology acceptance (1)
- Tertiary study (1)
- Test Bench (1)
- Theoretische Informatik (1)
- Tourism (1)
- Tourismusmarketing (1)
- Twitter <Softwareplattform> (1)
- Twitter analysis (1)
- Unternehmen (1)
- Usability Testing (1)
- User Generated Content (1)
- Verteiltes System (1)
- Videospiel (1)
- Viertagewoche (1)
- Virtual reality (1)
- Virtuelles Laboratorium (1)
- Visual Analytics (1)
- Visualization (1)
- WS-Security (1)
- Web service (1)
- Web services (1)
- Wind power plant (1)
- Windkraftwerk (1)
- Wissensgraph (1)
- Word Counting (1)
- Workflow (1)
- XML (1)
- Zentriertes Interview (1)
- ad-hoc networks (1)
- adaptive methods (1)
- aerospace engineering (1)
- agent-based simulation (1)
- agents (1)
- agile education (1)
- anaphylaxis (1)
- anxiety (1)
- architecture (1)
- asynchronous messaging (1)
- cardiovascular disease (1)
- caching (1)
- class room (1)
- cloud computing (1)
- clustering on countries (1)
- collaborative coordination (1)
- complex event processing (CEP) (1)
- covid 19 (1)
- credit risk (1)
- data mapping (1)
- data protection (1)
- data stream learning (1)
- data stream processing (1)
- depression (1)
- digital intervention (1)
- digital twins (1)
- distance learning (1)
- distributed environments (1)
- distributed evacuation coordination (1)
- distributed systems (1)
- dyadic grid (1)
- e-learning (1)
- e-mobility (1)
- eduDScloud (1)
- educational virtual realities (1)
- eigenface (1)
- emergency medicine (1)
- enterprise apps (1)
- evacuation guidance (1)
- evaluation (1)
- event models (1)
- events (1)
- face recognition (1)
- financial planning (1)
- forecasting models on countries (1)
- game analysis (1)
- gender (1)
- generic interface (1)
- graduate (1)
- graphical user interface (1)
- head-mounted display (1)
- health care (1)
- higher education (1)
- immersive media (1)
- information system (1)
- integrated passenger and freight transport (1)
- key performance indicators (1)
- large language model (1)
- large scale systems (1)
- lidar (1)
- literature review (1)
- load balancing (1)
- lymphadenectomy (1)
- machine learning (1)
- machine-to-machine communication (1)
- magnetometer (1)
- management (1)
- market-based coordination (1)
- matrix calculations (1)
- mental health (1)
- metacognitive therapy (1)
- multi-dimensional data (1)
- multiagent systems (1)
- ontology (1)
- open source (1)
- patients (1)
- pmCHP (1)
- point clouds (1)
- position paper (1)
- presence experience (1)
- privacy (1)
- private cloud (1)
- problem based learning (1)
- professional life (1)
- prostate cancer (1)
- psychocardiology (1)
- real-time routing (1)
- recommender systems (1)
- reliable message delivery (1)
- rural transport simulation (1)
- scaling (1)
- security (1)
- semantic knowledge (1)
- semantic web application (1)
- semistructured interview (1)
- sentiment dictionaries (1)
- sentinel lymph node dissection (1)
- serverless architecture (1)
- serverless functions (1)
- service models (1)
- service-orientation (1)
- shopping cart system (1)
- simulation training (1)
- situation aware routing (1)
- situation-awareness (1)
- smart buildings (1)
- smart cities (1)
- smartphone (1)
- solid waste management (1)
- sparse grid (1)
- stereo vision (1)
- student project (1)
- superparamagnetic iron oxide nanoparticles (1)
- survey (1)
- sustainability (1)
- system integration (1)
- systematic literature review (1)
- taxonomy (1)
- teaching entrepreneurship (1)
- text mining (1)
- tool evaluation (1)
- training effectiveness (1)
- underprivileged adolescents (1)
- user experience (1)
- user generated content (1)
- user training (1)
- virtual distance teaching (1)
- virtual emergency scenario (1)
- virtual lab (1)
- virtual patient simulation (1)
- visual delegates (1)
- visual perception (1)
- web services (1)
- work satisfaction (1)
- work-life balance (1)
- working life (1)
- workload decomposition (1)
- Ökotourismus (1)
- Übung (1)
Institute
- Fakultät IV - Wirtschaft und Informatik (107)
Agility is considered the silver bullet for survival in the VUCA world. However, many organisations are afraid of endangering their ISO 9001 certificate when introducing agile processes. A joint research project of the University of Applied Sciences and Arts Hannover and the DGQ has set itself the goal of providing more security in this area. The findings were based on interviews with managers and team members from various organisations of different sizes and industries working in an agile manner as well as on common audit practices and a literature analysis. The outcome presents a clear distinction of agility from flexibility as well as useful guidelines for the integration of agile processes in QM systems - for QM practitioners and auditors alike.
Integrated Risk and Opportunity Management (IROM) goes far beyond what is found in organizations today. However, it offers the best opportunity not only to keep pace with the VUCA world, but to actually profit from it. Accordingly, the introduction of opportunity-based thinking in addition to risk-based thinking is part of the design specification for ISO 9000 and ISO 9001. The prerequisite for the successful design of an IROM is the individual definition, control and integration of risk and opportunity management processes, considering eight success factors, the "8 C". Top management benefits directly from the result: better, coordinated decision memos enable faster and more appropriate decisions.
The subject of this work is the investigation of universal scaling laws observed in coupled chaotic systems. Progress is made by replacing the chaotic fluctuations in the perturbation dynamics by stochastic processes.
First, a continuous-time stochastic model for weakly coupled chaotic systems is introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck equation, scaling relations are derived and confirmed by numerical simulations.
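The coupling sensitivity of chaos studied here is the singular logarithmic scaling first reported by Daido. For orientation, in its standard textbook form (with $C$ a system-dependent constant and $\varepsilon$ the coupling strength), the largest Lyapunov exponent behaves as:

```latex
% Singular scaling of the Lyapunov exponent for weak coupling
% (Daido's coupling sensitivity of chaos; C is system-dependent)
\lambda(\varepsilon) \approx \lambda(0) + \frac{C}{\lvert \ln \varepsilon \rvert},
\qquad \varepsilon \to 0^{+}
```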
Next, the new effect of avoided crossing of Lyapunov exponents of weakly coupled disordered chaotic systems is described, which is qualitatively similar to the energy level repulsion in quantum systems. Using the scaling relations obtained for the coupling sensitivity of chaos, an asymptotic expression for the distribution function of small spacings between Lyapunov exponents is derived and compared with results of numerical simulations.
Finally, the synchronization transition in strongly coupled spatially extended chaotic systems is shown to resemble a continuous phase transition, with the coupling strength and the synchronization error as control and order parameter, respectively. Using results of numerical simulations and theoretical considerations in terms of a multiplicative noise partial differential equation, the universality classes of the observed two types of transition are determined (Kardar-Parisi-Zhang equation with saturating term, directed percolation).
The network security framework VisITMeta allows the visual evaluation and management of security event detection policies. By means of a "what-if" simulation the sensitivity of policies to specific events can be tested and adjusted. This paper presents the results of a user study for testing the usability of the approach by measuring the correct completion of given tasks as well as the user satisfaction by means of the system usability scale.
Intrusion detection systems and other network security components detect security-relevant events based on policies consisting of rules. If an event turns out as a false alarm, the corresponding policy has to be adjusted in order to reduce the number of false positives. Modified policies, however, need to be tested before going into productive use. We present a visual analysis tool for the evaluation of security events and related policies which integrates data from different sources using the IF-MAP specification and provides a “what-if” simulation for testing modified policies on past network dynamics. In this paper, we will describe the design and outcome of a user study that will help us to evaluate our visual analysis tool.
For anomaly-based intrusion detection in computer networks, data cubes can be used for building a model of the normal behavior of each cell. During inference an anomaly score is calculated based on the deviation of cell metrics from the corresponding normality model. A visualization approach is shown that combines different types of diagrams and charts with linked user interaction for filtering of data.
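As a hedged illustration of the inference step (not the paper's actual model or visualization), a per-cell normality model can be as simple as a running mean and variance, with the anomaly score given as the deviation of the current metric from it:

```python
import math

class CellNormalityModel:
    """Tracks mean/variance of one metric for one data-cube cell
    (Welford's online algorithm); illustrative sketch only."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def anomaly_score(self, x):
        """Absolute z-score of x under the learned normality model."""
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(x - self.mean) / std if std > 0 else 0.0

model = CellNormalityModel()
for value in [100, 102, 98, 101, 99]:   # normal traffic volume for one cell
    model.update(value)
print(model.anomaly_score(150) > model.anomaly_score(101))  # True: spike scores higher
```

In a visualization, such scores would drive the filtering and highlighting of cells.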
Objective
The study’s objective was to assess factors contributing to the use of smart devices by general practitioners (GPs) and patients in the health domain, while specifically addressing the situation in Germany, and to determine whether, and if so, how both groups differ in their perceptions of these technologies.
Methods
GPs and patients of resident practices in the Hannover region, Germany, were surveyed between April and June 2014. A total of 412 GPs in this region were invited by email to participate via an electronic survey, with 50 GPs actually doing so (response rate 12.1%). For surveying the patients, eight regional resident practices were visited by study personnel (once each). Every second patient arriving there (inclusion criteria: of age, fluent in German) was asked to take part (paper-based questionnaire). One hundred and seventy patients participated; 15 patients who did not give consent were excluded.
Results
The majority of the participating patients (68.2%, 116/170) and GPs (76%, 38/50) owned mobile devices. Of the patients, 49.9% (57/116) already made health-related use of mobile devices; 95% (36/38) of the participating GPs used them in a professional context. For patients, age (P < 0.001) and education (P < 0.001) were significant factors, but not gender (P > 0.99). For doctors, neither age (P = 0.73), professional experience (P > 0.99) nor gender (P = 0.19) influenced usage rates. For patients, the primary use case was obtaining health (service)-related information. For GPs, interprofessional communication and retrieving information were in the foreground. There was little app-related interaction between both groups.
Conclusions
GPs and patients use smart mobile devices to serve their specific interests. However, the full potentials of mobile technologies for health purposes are not yet being taken advantage of. Doctors as well as other care providers and the patients should work together on exploring and realising the potential benefits of the technology.
Hadoop is a Java-based open source programming framework, which supports the processing and storage of large volumes of data sets in a distributed computing environment. On the other hand, an overwhelming majority of organizations are moving their big data processing and storing to the cloud to take advantage of cost reduction – the cloud eliminates the need for investing heavily in infrastructures, which may or may not be used by organizations. This paper shows how organizations can alleviate some of the obstacles faced when trying to make Hadoop run in the cloud.
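The programming model Hadoop implements can be sketched in a few lines (a pure-Python simulation for illustration, not actual Hadoop code): a MapReduce word count consists of a map phase emitting (word, 1) pairs, a shuffle grouping values by key, and a reduce phase summing the counts.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit (word, 1) for every word in every input line."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts per word."""
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data in the cloud", "the cloud stores big data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"], counts["cloud"])  # 2 2
```

In Hadoop itself, the map and reduce functions run in parallel across the cluster while the framework handles the shuffle, storage (HDFS), and fault tolerance.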
Our work is motivated primarily by the lack of standardization in the area of Event Processing Network (EPN) models. We identify general requirements for such models. These requirements encompass the possibility to describe events in the real world, to establish temporal and causal relationships among the events, to aggregate the events, to organize the events into a hierarchy, to categorize the events into simple or complex, to create an EPN model in an easy and simple way and to use that model ad hoc. As the major contribution, this paper applies the identified requirements to the RuleCore model.
In this paper, five ontologies that include event concepts are described. The paper provides an overview and comparison of existing event models. The main criteria for comparison are the ability to model events with extent in time and location as well as the participation of objects; other factors are taken into account as well. The paper also shows an example of using ontologies in complex event processing.
OSGi in Cloud Environments
(2013)
With increasing complexity and scale, sufficient evaluation of Information Systems (IS) becomes a challenging and difficult task. Simulation modeling has proven to be a suitable and efficient methodology for evaluating IS and IS artifacts, provided it meets certain quality demands. However, existing research on simulation modeling quality focuses solely on accuracy and credibility, disregarding additional quality aspects. Therefore, this paper proposes two design artifacts in order to ensure a holistic view on simulation quality. First, associated literature is reviewed in order to extract relevant quality factors in the context of simulation modeling, which can be used to evaluate the overall quality of a simulated solution before, during or after a given project. Second, the deduced quality factors are integrated into a quality assessment framework to provide structural guidance on the quality assessment procedure for simulation. In line with a Design Science Research (DSR) approach, we demonstrate the eligibility of both design artifacts by means of prototyping as well as an example case. Moreover, the assessment framework is evaluated and iteratively adjusted with the help of expert feedback.
The paper provides a comprehensive overview of modeling and pricing cyber insurance and includes clear and easily understandable explanations of the underlying mathematical concepts. We distinguish three main types of cyber risks: idiosyncratic, systematic, and systemic cyber risks. While for idiosyncratic and systematic cyber risks, classical actuarial and financial mathematics appear to be well-suited, systemic cyber risks require more sophisticated approaches that capture both network and strategic interactions. In the context of pricing cyber insurance policies, issues of interdependence arise for both systematic and systemic cyber risks; classical actuarial valuation needs to be extended to include more complex methods, such as concepts of risk-neutral valuation and (set-valued) monetary risk measures.
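For the idiosyncratic case, the classical actuarial valuation mentioned above typically amounts to a premium principle; two standard textbook forms (shown for orientation, with loading parameters $\theta, \alpha > 0$) are:

```latex
% Expected value principle: premium = expected loss plus proportional loading
\pi(X) = (1 + \theta)\,\mathbb{E}[X]
% Standard deviation principle: loading proportional to risk volatility
\pi(X) = \mathbb{E}[X] + \alpha \sqrt{\operatorname{Var}(X)}
```

It is precisely the interdependence of systematic and systemic cyber losses that breaks the independence assumptions behind such principles and motivates the more sophisticated methods the paper surveys.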
High-performance firms typically have two features in common: (i) they produce in more than one country and (ii) they produce more than one product. In this paper, we analyze the internationalization strategies of multi-product firms. Guided by several new stylized facts, we develop a theoretical model to determine optimal modes of market access at the firm–product level. We find that the most productive firmssell core varieties via foreign direct investment and export products with intermediate productivity. Shocks to trade costs and technology affect the endogenous decision to export or produce abroad at the product-level and, in turn, the relative productivity between parents and affiliates.
Complex Event Processing (CEP) has been established as a well-suited software technology for processing high-frequency data streams. However, intelligent stream-based systems must integrate stream data with semantic background knowledge. In this work, we investigate different approaches to integrating stream data and semantic domain knowledge. In particular, we discuss two different architectures from a software engineering perspective: an approach adding an ontology access mechanism to a common Continuous Query Language (CQL) is compared with C-SPARQL, a streaming extension of the RDF query language SPARQL.
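The underlying integration problem can be sketched in a few lines (an illustrative Python analogue with invented data, not CQL or C-SPARQL): each stream event is joined with static background knowledge before the continuous query is evaluated.

```python
# Static background knowledge, standing in for an ontology
# (illustrative data, not from the paper).
ONTOLOGY = {
    "sensor-1": {"location": "hall A", "type": "temperature"},
    "sensor-2": {"location": "hall B", "type": "temperature"},
}

def enrich(stream):
    """Join each raw stream event with semantic background knowledge."""
    for event in stream:
        yield {**event, **ONTOLOGY.get(event["source"], {})}

def continuous_query(stream, threshold=30):
    """Toy continuous query: temperature readings above a threshold."""
    return [e for e in enrich(stream)
            if e.get("type") == "temperature" and e["value"] > threshold]

events = [{"source": "sensor-1", "value": 35},
          {"source": "sensor-2", "value": 21}]
print(continuous_query(events))  # only the hall A reading matches
```

The architectural question the paper studies is where this join happens: inside the stream engine via an ontology access mechanism, or natively in a semantic stream language such as C-SPARQL.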
Enterprise apps on mobile devices typically need to communicate with other system components by consuming web services. Since most of the current mobile device platforms (such as Android) do not provide built-in features for consuming SOAP services, extensions have to be designed. Additionally in order to accommodate the typical enhanced security requirements of enterprise apps, it is important to be able to deal with SOAP web service security extensions on client side. In this article we show that neither the built-in SOAP capabilities for Android web service clients are sufficient for enterprise apps nor are the necessary security features supported by the platform as is. After discussing different existing extensions making Android devices SOAP capable we explain why none of them is really satisfactory in an enterprise context. Then we present our own solution which accommodates not only SOAP but also the WS-Security features on top of SOAP. Our solution heavily relies on code generation in order to keep the flexibility benefits of SOAP on one hand while still keeping the development effort manageable for software development. Our approach provides a good foundation for the implementation of other SOAP extensions apart from security on the Android platform as well. In addition our solution based on the gSOAP framework may be used for other mobile platforms in a similar manner.
Music streaming platforms offer music listeners an overwhelming choice of music. Therefore, users of streaming platforms need the support of music recommendation systems to find music that suits their personal taste. Currently, a new class of recommender systems based on knowledge graph embeddings promises to improve the quality of recommendations, in particular to provide diverse and novel recommendations. This paper investigates how knowledge graph embeddings can improve music recommendations. First, it is shown how a collaborative knowledge graph can be derived from open music data sources. Based on this knowledge graph, the music recommender system EARS (knowledge graph Embedding-based Artist Recommender System) is presented in detail, with particular emphasis on recommendation diversity and explainability. Finally, a comprehensive evaluation with real-world data is conducted, comparing different embeddings and investigating the influence of different types of knowledge.
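A hedged miniature of the embedding-based recommendation step (toy hand-set vectors, not EARS itself): artists become points in an embedding space, and candidates are ranked by similarity to an artist the user likes.

```python
import math

# Toy 2-D embeddings; a real knowledge graph embedding (e.g. learned
# from graph triples) would be high-dimensional and data-driven.
EMBEDDINGS = {
    "artist_a": (1.0, 0.1),
    "artist_b": (0.9, 0.2),   # close to artist_a in embedding space
    "artist_c": (0.0, 1.0),   # far from artist_a
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def recommend(liked, k=1):
    """Rank unseen artists by cosine similarity to the liked artist."""
    anchor = EMBEDDINGS[liked]
    candidates = [(name, cosine(anchor, vec))
                  for name, vec in EMBEDDINGS.items() if name != liked]
    return [name for name, _ in sorted(candidates, key=lambda t: -t[1])[:k]]

print(recommend("artist_a"))  # ['artist_b']
```

Because the embedding space is built from a knowledge graph, nearness can reflect shared genres, labels, or collaborations rather than only co-listening, which is what enables the diversity and explainability the paper emphasizes.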
Smart Cities require reliable means for managing installations that offer essential services to the citizens. In this paper we focus on the problem of evacuation of smart buildings in case of emergencies. In particular, we present an abstract architecture for situation-aware evacuation guidance systems in smart buildings, describe its key modules in detail, and provide some concrete examples of its structure and dynamics.
Sustainable tourism is a niche market that has been growing in recent years. At the same time, companies in the mass tourism market have increasingly marketed themselves with a “green” image, although this market is not sustainable. In order to successfully market sustainability, targeted marketing tactics are needed.
The aim of this research is to establish appropriate marketing tactics for sustainable tourism in the niche market and in the mass market. The purpose is to uncover current marketing tactics for both the mass tourism market and the sustainable tourism niche market. It also intends to explore how consumers who are more interested in sustainability differ from consumers with less interest in sustainability in terms of their perception of sustainability in tourism. Furthermore, this research paper assesses the trustworthiness of sustainable travel offers and of quality seals in sustainable tourism. For this purpose, an online survey was conducted, addressed to German-speaking consumers. The survey showed that consumers with more general interest in sustainability also consider sustainability to be more relevant in tourism. Offers for sustainable travel and quality seals were perceived as not very trustworthy. Moreover, no link could be found between interest in sustainability and the perception of trustworthiness.
On the basis of the above, it is advisable to directly advertise sustainability in the niche market and to mention sustainability in the mass market only as an accompaniment or not at all. Further research could be undertaken to identify which factors influence the trustworthiness of offers, and trustworthiness of quality seals in sustainable tourism.
Complex Event Processing (CEP) is a modern software technology for the dynamic analysis of continuous data streams. CEP is capable of searching extremely large data streams in real time for the presence of event patterns. So far, specifying the event patterns of CEP rules has been a manual task based on the expertise of domain experts. This paper presents a novel bat-inspired swarm algorithm for automatically mining CEP rule patterns that express the relevant causal and temporal relations hidden in data streams. The basic suitability and performance of the approach are demonstrated by extensive evaluation with both synthetically generated data and real data from the traffic domain.
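The kind of temporal sequence pattern such CEP rules express can be illustrated with a minimal sketch (the event names and window size are hypothetical, not taken from the paper): detect a first event type followed by a second one within a given time window.

```python
from collections import deque

def match_sequence(stream, first, second, window):
    """Detect occurrences of event type `first` followed by event type
    `second` within `window` time units (a toy CEP sequence pattern)."""
    pending = deque()   # timestamps of not-yet-matched `first` events
    matches = []
    for ts, etype in stream:
        while pending and ts - pending[0] > window:
            pending.popleft()          # expired: fell out of the time window
        if etype == first:
            pending.append(ts)
        elif etype == second and pending:
            matches.append((pending.popleft(), ts))
    return matches

# hypothetical sensor events as (timestamp, type) pairs
events = [(1, "TempSpike"), (2, "Noise"), (4, "PressureDrop"),
          (9, "TempSpike"), (20, "PressureDrop")]
print(match_sequence(events, "TempSpike", "PressureDrop", window=5))  # prints [(1, 4)]
```

A mined rule would additionally fix attribute constraints and window lengths; the sketch only shows the sequence-matching core that such rules are built on.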
M2M (machine-to-machine) systems use various communication technologies for automatically monitoring and controlling machines. In M2M systems, each machine emits a continuous stream of data records, which must be analyzed in real-time. Intelligent M2M systems should be able to diagnose their actual states and to trigger appropriate actions as soon as critical situations occur. In this paper, we show how complex event processing (CEP) can be used as the key technology for intelligent M2M systems. We provide an event-driven architecture that is adapted to the M2M domain. In particular, we define different models for the M2M domain, M2M machine states and M2M events. Furthermore, we present a general reference architecture defining the main stages of processing machine data. To prove the usefulness of our approach, we consider two real-world examples ‘solar power plants’ and ‘printers’, which show how easily the general architecture can be extended to concrete M2M scenarios.
In this article, we present the software architecture of a new generation of advisory systems using Intelligent Agent and Semantic Web technologies. Multi-agent systems provide a well-suited paradigm to implement negotiation processes in a consultancy situation. Software agents act as clients and advisors, using their knowledge to assist human users. In the presented architecture, the domain knowledge is modeled semantically by means of XML-based ontology languages such as OWL. Using an inference engine, the agents reason over their knowledge to make decisions or proposals. The agent knowledge consists of different types of data: on the one hand, private data, which has to be protected against unauthorized access; on the other hand, publicly accessible knowledge spread over different Web sites. As in a real consultancy, an agent only reveals sensitive private data if it is indispensable for finding a solution. In addition, depending on the actual consultancy situation, each agent dynamically expands its knowledge base by accessing OWL knowledge sources from the Internet. Due to the standardization of OWL, knowledge models can easily be shared and accessed via the Internet. The usefulness of our approach is demonstrated by the implementation of an advisory system in the Semantic E-learning Agent (SEA) project, whose objective is to develop virtual student advisers that support university students in successfully organizing and performing their studies.
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
Microservices form a deeply distributed system. Although this offers significant flexibility for development teams and helps to address scalability and security questions, it also intensifies the drawbacks of a distributed system. This article offers a decision framework that helps to increase the resiliency of microservices. A metamodel is used to represent services, resiliency patterns, and quality attributes. Furthermore, the general idea of a suggestion procedure is outlined.
There are many aspects of code quality, some of which are difficult to capture or to measure. Despite the importance of software quality, there is a lack of commonly accepted measures or indicators for code quality that can be linked to quality attributes. We investigate software developers’ perceptions of source code quality and the practices they recommend to achieve these qualities. We analyze data from semi-structured interviews with 34 professional software developers, programming teachers and students from Europe and the U.S. For the interviews, participants were asked to bring code examples to exemplify what they consider good and bad code, respectively. Readability and structure were used most commonly as defining properties for quality code. Together with documentation, they were also suggested as the most common target properties for quality improvement. When discussing actual code, developers focused on structure, comprehensibility and readability as quality properties. When analyzing relationships between properties, the most commonly talked about target property was comprehensibility. Documentation, structure and readability were named most frequently as source properties to achieve good comprehensibility. Some of the most important source code properties contributing to code quality as perceived by developers lack clear definitions and are difficult to capture. More research is therefore necessary to measure the structure, comprehensibility and readability of code in ways that matter for developers and to relate these measures of code structure, comprehensibility and readability to common software quality attributes.
AlphaGo’s victory against Lee Sedol in the game of Go has been a milestone in artificial intelligence. After this success, the team behind the program further refined the architecture and applied it to many other games such as chess or shogi. In the following thesis, we try to apply the theory behind AlphaGo and its successor AlphaZero to the game of Abalone. Due to limitations in computational resources, we could not replicate the same exceptional performance.
Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer.
With the increasing significance of information technology, there is an urgent need for adequate measures of information security. Systematic information security management is one of the most important initiatives of IT management. At least since reports about privacy and security breaches, fraudulent accounting practices, and attacks on IT systems appeared in public, organizations have recognized their responsibility to safeguard physical and information assets. Security standards can be used as a guideline or framework to develop and maintain an adequate information security management system (ISMS). The standards ISO/IEC 27000, 27001 and 27002 are international standards that are receiving growing recognition and adoption. They are referred to as the “common language of organizations around the world” for information security. With ISO/IEC 27001, companies can have their ISMS certified by a third-party organization and thus provide their customers with evidence of their security measures.
Systematizing IT Risks
(2019)
IT risks — risks associated with the operation or use of information technology — have taken on great importance in business, and IT risk management is accordingly important in the science and practice of information management. It is therefore necessary to systematize IT risks in order to plan, manage, and control different risk-specific measures. In order to choose and implement suitable measures for managing IT risks, effect-based and cause-based procedures are necessary. These procedures are explained in detail for IT security risks because of their special importance.
Aim/Purpose: We explore impressions and experiences of Information Systems graduates during their first years of employment in the IT field. The results help to understand work satisfaction, career ambition, and motivation of junior employees. This way, the attractiveness of working in the field of IS can be increased and the shortage of junior employees reduced.
Background: Currently, IT professions are characterized by terms such as “shortage of professionals” and “shortage of junior employees”. To attract more people to work in IT, detailed knowledge about the experiences of junior employees is necessary.
Methodology: Data from a large survey of 193 graduates of the degree program “Information Systems” at the University of Applied Sciences and Arts Hannover (Germany) show characteristics of their professional life, such as work satisfaction, motivation, career ambition, satisfaction with opportunities for development and career advancement, and satisfaction with work-life balance. The survey also asked whether men and women gain the same experiences when entering the job market and have the same perceptions.
Findings: The participants were highly satisfied with their work, but limitations or restrictions due to gender are noteworthy.
Recommendations for Practitioners: The results provide information on how human resource policies can make IT professions more attractive and thus convince graduates to seek jobs in the field. For instance, improving the balance between work and various areas of private life seems promising. Also, restrictions with respect to the work climate and improving communication along several dimensions need to be considered.
Future Research: More detailed research on ambition and achievement is necessary to understand gender differences.
The objective of this student project was for the students to develop, conduct, and supervise a training course on basic workplace applications (word processing and business graphics). Students were responsible for planning, organizing, and teaching the course. Underprivileged adolescents took part in order to learn how to handle IT applications and thereby improve their job skills and their chances of finding employment. The adolescents thus took on the role of trainees in the course. Our students worked with a population that is continually overlooked by the field.
As a result, the students learned to design and implement training courses, practiced project management, and increased their social responsibility and awareness concerning the way of life and living conditions of other young people. The underprivileged adolescents learned to use important business applications and increased their job skills and job prospects. The overall design of our concept required extensive resources to supervise and steer the students and the adolescents. The lecturers had to teach and counsel the students and had to be on “stand-by” in case they were needed to resolve critical situations between the two groups of young people.
BYOD Bring Your Own Device
(2013)
Using modern devices like smartphones and tablets offers a wide variety of advantages; this has made them very popular as consumer devices in private life. Using them in the workplace is popular as well. However, who wants to carry around and handle two devices: one for personal use and one for work-related tasks? That is why “dual use”, using one single device for private and business applications, may represent a proper solution. The result is “Bring Your Own Device,” or BYOD, which describes the circumstance in which users make their own personal devices available for company use. For companies, this brings both opportunities and risks. We describe and discuss organizational issues, technical approaches, and solutions.
Nowadays, problems related to solid waste management have become a challenge for most countries due to the rising generation of waste, related environmental issues, and the associated costs of produced wastes. Effective waste management systems at different geographic levels require accurate forecasting of future waste generation. In this work, we investigate how open-access data, such as that provided by the Organisation for Economic Co-operation and Development (OECD), can be used for the analysis of waste data. The main idea of this study is to find the links between the socio-economic and demographic variables that determine the amounts of the types of solid wastes produced by countries. This would make it possible to accurately predict waste production at the country level and to determine the requirements for the development of effective waste management strategies. In particular, we use several machine learning regression models (Support Vector, Gradient Boosting, and Random Forest) to predict waste production for OECD countries over the years, and a clustering model (k-means) to group these countries according to similar characteristics. The main contributions of our work are: (1) waste analysis at the OECD country level to compare and cluster countries according to similar predicted waste features; (2) the detection of the most relevant features for the prediction models; and (3) a comparison of several regression models with respect to prediction accuracy. The coefficient of determination (R2), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE) are used as indices of the efficiency of the developed models.
Our experiments have shown that some pre-processing of the OECD data is an essential stage of the analysis; that the Random Forest Regressor (RFR) produced the best prediction results over the dataset; and that these results are highly influenced by the quality of the available socio-economic data. In particular, the RFR model exhibited the highest accuracy in predictions for most waste types. For example, for “municipal” waste it produced global error values of R2 = 1 and MAPE = 4.31 for the test set, and for “household” waste R2 = 1 and MAPE = 3.03. Our results indicate that the considered models (and especially RFR) are all effective in predicting the amount of produced waste from the input data for the considered countries.
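The kind of regression pipeline described above can be sketched with scikit-learn; the features and targets below are synthetic stand-ins for socio-economic indicators, not the OECD dataset, so the printed scores are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic stand-ins for socio-economic indicators (e.g. population, GDP, urbanization)
X = rng.uniform(0, 1, size=(300, 3))
y = 10 + 50 * X[:, 0] + 20 * X[:, 1] ** 2 + 5 * X[:, 2] + rng.normal(0, 1, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# the same efficiency indices used in the study
print(f"R2   = {r2_score(y_te, pred):.3f}")
print(f"MAPE = {100 * mean_absolute_percentage_error(y_te, pred):.2f}%")
print("feature importances:", model.feature_importances_.round(2))
```

The `feature_importances_` attribute is one way to address contribution (2), the detection of the most relevant features for the prediction models.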
Decision support systems for traffic management systems have to cope with a high volume of events continuously generated by sensors. Conventional software architectures do not explicitly target the efficient processing of continuous event streams. Recently, event-driven architectures (EDA) have been proposed as a new paradigm for event-based applications. In this paper we propose a reference architecture for event-driven traffic management systems, which enables the analysis and processing of complex event streams in real-time and is therefore well-suited for decision support in sensor-based traffic control systems. We will illustrate our approach in the domain of road traffic management. In particular, we will report on the redesign of an intelligent transportation management system (ITMS) prototype for the high-capacity road network in Bilbao, Spain.
Nowadays, most recommender systems are based on a centralized architecture, which can cause crucial issues in terms of trust, privacy, dependability, and costs. In this paper, we propose a decentralized and distributed MANET-based (Mobile Ad-hoc NETwork) recommender system for open facilities. The system is based on mobile devices that collect sensor data about users locations to derive implicit ratings that are used for collaborative filtering recommendations. The mechanisms of deriving ratings and propagating them in a MANET network are discussed in detail. Finally, extensive experiments demonstrate the suitability of the approach in terms of different performance metrics.
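The step from sensed dwell times to collaborative filtering recommendations can be sketched as follows; the users, facilities, and the dwell-time-to-rating mapping are illustrative assumptions, not the paper's actual mechanism.

```python
import math

# dwell times (minutes) per user at open facilities; illustrative values
visits = {
    "alice": {"park": 50, "museum": 5},
    "bob":   {"park": 45, "museum": 4, "cafe": 30},
    "carol": {"museum": 60, "cafe": 8},
}

def implicit_rating(minutes, cap=60):
    # map dwell time to a 1..5 rating (longer stay -> stronger implicit preference)
    return 1 + 4 * min(minutes, cap) / cap

ratings = {u: {i: implicit_rating(m) for i, m in items.items()}
           for u, items in visits.items()}

def cosine(u, v):
    # cosine similarity over the items both users have rated
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    norm = lambda r: math.sqrt(sum(x * x for x in r.values()))
    return num / (norm(u) * norm(v))

def recommend(user):
    # score unseen items by similarity-weighted ratings of the other users
    scores = {}
    for other, r in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], r)
        for item, val in r.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * val
    return max(scores, key=scores.get) if scores else None

print(recommend("alice"))  # prints cafe
```

In the MANET setting described in the paper, the `ratings` dictionaries would be propagated between nearby devices rather than held centrally.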
Nowadays, smartphones and sensor devices can provide a variety of information about a user’s current situation. So far, many recommender systems neglect this kind of information and thus cannot provide situation-specific recommendations. Situation-aware recommender systems adapt to changes in the user’s environment and therefore are able to offer recommendations that are more appropriate for the current situation. In this paper, we present a software architecture that enables situation awareness for arbitrary recommendation techniques. The proposed system considers both (semi-)static user profiles and volatile situational knowledge to obtain meaningful recommendations. Furthermore, the implementation of the architecture in a museum of natural history is presented, which uses Complex Event Processing to achieve situation awareness.
In parcel delivery, the “last mile” from the parcel hub to the customer is costly, especially for time-sensitive delivery tasks that have to be completed within hours after arrival. Recently, crowdshipping has attracted increased attention as a new alternative to traditional delivery modes. In crowdshipping, private citizens (“the crowd”) perform short detours in their daily lives to contribute to parcel delivery in exchange for small incentives. However, achieving desirable crowd behavior is challenging as the crowd is highly dynamic and consists of autonomous, self-interested individuals. Leveraging crowdshipping for time-sensitive deliveries remains an open challenge. In this paper, we present an agent-based approach to on-time parcel delivery with crowds. Our system performs data stream processing on the couriers’ smartphone sensor data to predict delivery delays. Whenever a delay is predicted, the system attempts to forge an agreement for transferring the parcel from the current deliverer to a more promising courier nearby. Our experiments show that through accurate delay predictions and purposeful task transfers many delays can be prevented that would occur without our approach.
The transfer of historically grown monolithic software architectures into modern service-oriented architectures creates a lot of loose coupling points. This can lead to unforeseen system behavior and can significantly impede such continuous modernization processes, since it is not clear where bottlenecks in a system arise. It is therefore necessary to monitor such modernization processes with an adaptive monitoring concept in order to correctly record and interpret unpredictable system dynamics. This contribution presents a generic QoS measurement framework for service-based systems. The framework consists of an XML-based specification for the measurement to be performed – the Information Model (IM) – and the QoS System, which provides an execution platform for the IM. The framework is applied to a standard business process of the German insurance industry, and the concepts of the IM and their mapping to artifacts of the QoS System are presented. Furthermore, the design and implementation of the QoS System’s parser and generator module and the generated artifacts are explained in detail, e.g., event model, agents, measurement module and analyzer module.
In this paper we describe methods to approximate functions and differential operators on adaptive sparse (dyadic) grids. We distinguish between several representations of a function on the sparse grid and describe how finite difference (FD) operators can be applied to these representations. For general variable-coefficient equations on sparse grids, genuine finite element (FE) discretizations are not feasible, and FD operators allow an easier operator evaluation than the adapted FE operators. However, the structure of the FD operators is complex. With the aim of constructing an efficient multigrid procedure, we analyze the structure of the discrete Laplacian in its hierarchical representation and show the relation between the full and the sparse grid case. The rather complex relations, which are expressed by scaling matrices for each separate coordinate direction, make us doubt the possibility of constructing efficient preconditioners that show spectral equivalence. Hence, we question the possibility of constructing a natural multigrid algorithm with optimal O(N) efficiency. We conjecture that for the efficient solution of a general class of adaptive grid problems it is better to accept an additional condition on the dyadic grids (condition L) and to apply adaptive hp-discretization.
The automated transfer of flight logbook information from aircraft into aircraft maintenance systems leads to reduced ground and maintenance time and is thus desirable from an economic point of view. Until recently, flight logbooks have not been managed electronically in aircraft, or at least the data transfer from aircraft to ground maintenance systems has been executed manually. The latest aircraft types such as the Airbus A380 or the Boeing 787 do support an electronic logbook and thus make an automated transfer possible. A generic flight logbook transfer system must deal with different data formats on the input side – due to different aircraft makes and models – as well as different, distributed aircraft maintenance systems for different airlines as aircraft operators. This article contributes the concept and top-level distributed system architecture of such a generic system for automated flight log data transfer. It has been developed within a joint industry and applied research project. The architecture has already been successfully evaluated in a prototypical implementation.
This paper describes the latest accomplishments of ongoing research based on the master’s thesis “Ein System zur Erstellung taktiler Karten für blinde und sehbehinderte Menschen” (German for “A system creating tactile maps for blind and visually impaired people”) (Hänßgen, 2012). The system consists of two parts. The first is new software especially designed and developed for creating tactile maps that address the needs of blind and visually impaired people for tactile information. The second is an embossing device based on a modified CNC (computer numerical control) router. Using OpenStreetMap data, the developed system is capable of embossing tactile maps into Braille paper and writing film.
BACKGROUND:
Despite their increasing popularity, little is known about how users perceive mobile devices such as smartphones and tablet PCs in medical contexts. Available studies are often restricted to evaluating the success of specific interventions and do not adequately cover the users' basic attitudes, for example, their expectations or concerns toward using mobile devices in medical settings.
OBJECTIVE:
The objective of the study was to obtain a comprehensive picture, from the perspective of both patients and doctors, regarding the use and acceptance of mobile devices in medical contexts in general, as well as the perceived challenges when introducing the technology.
METHODS:
Doctors working at Hannover Medical School (206/1151, response 17.90%), as well as patients being admitted to this facility (213/279, utilization 76.3%) were surveyed about their acceptance and use of mobile devices in medical settings. Regarding demographics, both samples were representative of the respective study population. GNU R (version 3.1.1) was used for statistical testing. Fisher's exact test, two-sided, alpha=.05 with Monte Carlo approximation, 2000 replicates, was applied to determine dependencies between two variables.
RESULTS:
The majority of participants already own mobile devices (doctors, 168/206, 81.6%; patients, 110/213, 51.6%). For doctors, use in a professional context does not depend on age (P=.66), professional experience (P=.80), or function (P=.34); gender was a factor (P=.009), with use more common among male (61/135, 45.2%) than female doctors (17/67, 25%). For patients, a correlation between use of mobile devices and age (P=.001) as well as education (P=.002) was seen. Minor differences regarding how mobile devices are perceived in sensitive medical contexts mostly relate to data security: patients are more critical of the devices being used for storing and processing patient data; every fifth patient opposed this, but nevertheless 4.8% of doctors (10/206) use their devices for this purpose. Both groups voiced only minor concerns about the credibility of the provided content or the technical reliability of the devices. While 8.3% of the doctors (17/206) avoided use during patient contact because they thought patients might be unfamiliar with the devices, 11.7% of patients (25/213) expressed concerns about the technology being too complicated to be used in a health context.
CONCLUSIONS:
Differences in how patients and doctors perceive the use of mobile devices can be attributed to age and level of education; these factors are often mentioned as contributors to problems with (mobile) technologies. To fully realize the potential of mobile technologies in a health care context, the needs of both the elderly and the educationally disadvantaged need to be carefully addressed in all strategies relating to mobile technology in a health context.
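The statistical procedure from the Methods section can be reproduced in outline. For instance, the reported gender difference among doctors (61/135 male vs. 17/67 female users) forms a 2x2 contingency table that scipy's Fisher's exact test can evaluate; this sketch omits the Monte Carlo approximation used in the study, so the exact p-value may differ slightly from the reported P=.009.

```python
from scipy.stats import fisher_exact

# 2x2 contingency table built from the reported figures
# rows: male / female doctors; columns: users / non-users of mobile devices at work
table = [[61, 135 - 61],   # 61 of 135 male doctors reported professional use
         [17, 67 - 17]]    # 17 of 67 female doctors reported professional use

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```

An odds ratio above 1 with p below alpha=.05 corresponds to the paper's finding that gender was a factor in professional use.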
Renewable energy production is one of the fastest-growing markets, and further strong growth can be anticipated due to the desire for increased sustainability in many parts of the world. With the rising adoption of renewable power production, such facilities become increasingly attractive targets for cyber attacks. At the same time, higher requirements for reliable production arise. In this paper we propose a concept that improves the monitoring of renewable power plants by detecting anomalous behavior. The system not only detects an anomaly, it also provides reasoning for the anomaly based on a specific mathematical model of the expected behavior, giving detailed information about the various influential factors causing the alert. The set of influential factors can be configured in the system before learning normal behavior. The concept is based on multidimensional analysis and has been implemented and successfully evaluated on actual data from different providers of wind power plants.
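The residual-based detection idea can be sketched as follows, assuming a toy cubic wind-to-power curve as the learned model of normal behavior (the paper's actual mathematical model and influential factors are not specified here):

```python
import numpy as np

rng = np.random.default_rng(2)

# learn "normal behavior": expected power output as a function of wind speed
wind = rng.uniform(3, 15, 500)                       # wind speed samples (m/s)
power = 0.3 * wind ** 3 + rng.normal(0, 5, 500)      # toy cubic power curve + noise
coeffs = np.polyfit(wind, power, deg=3)              # fitted model of normal behavior
resid_std = float(np.std(power - np.polyval(coeffs, wind)))

def check(wind_speed, measured_power, k=3.0):
    """Flag an anomaly when the measurement deviates from the model
    prediction by more than k residual standard deviations; the
    returned z-score gives a reason for the alert."""
    expected = float(np.polyval(coeffs, wind_speed))
    z = (measured_power - expected) / resid_std
    return bool(abs(z) > k), round(z, 1)

print(check(10.0, 0.3 * 10 ** 3))        # plausible reading
print(check(10.0, 0.5 * 0.3 * 10 ** 3))  # turbine producing half the expected power
```

In a multidimensional setting, one such model per influential factor would allow the system to report which factor deviates most, as the concept above requires.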
End users urgently request using mobile devices at their workplace. They know these devices from their private life, appreciate their functionality and usability, and want to benefit from these advantages at work as well; limitations and restrictions would not be accepted. Companies, on the other hand, are obliged to employ substantial organizational and technical measures to ensure data security and compliance when allowing the use of mobile devices at the workplace. So far, only individual arrangements have been presented, each addressing single issues of data security and compliance. However, companies need to follow a comprehensive set of measures addressing all relevant aspects of data security and compliance in order to be on the safe side. Thus, this paper first reviews technical architectures for using mobile devices in enterprise IT. Thereafter, a set of compliance rules is presented and, as the major contribution, technical measures are explained that enable a company to integrate mobile devices into enterprise IT while complying with these rules comprehensively. Depending on the company context, one or more of the technical architectures have to be chosen, which impacts the specific technical measures for compliance as elaborated in this paper. Altogether, this paper, for the first time, correlates technical architectures for using mobile devices at the workplace with technical measures to ensure data security and compliance according to a comprehensive set of rules.
In service-oriented architectures the management of services is a crucial task during all stages of IT operations. Based on a case study performed for a group of finance companies, the different aspects of service management are presented. First, the paper discusses how services must be described for management purposes. In particular, special emphasis is placed on the integration of legacy/non-Web services. Second, the service lifecycle that underlies service management is presented. In particular, the relation to SOA governance and appropriate tool support by registry repositories is outlined.
The Gravitational Search Algorithm is a swarm-based optimization metaheuristic that has been successfully applied to many problems. However, to date little analytical work has been done on this topic.
This paper performs a mathematical analysis of the formulae underlying the Gravitational Search Algorithm. From this analysis, it derives key properties of the algorithm's expected behavior and recommendations for parameter selection. It then confirms through empirical examination that these recommendations are sound.
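A minimal sketch of the algorithm under analysis, using the standard mass and force formulae of the Gravitational Search Algorithm (the parameter values and the test function are illustrative, not those of the paper):

```python
import numpy as np

def gsa(f, bounds, n_agents=20, iters=200, g0=100.0, alpha=20.0, seed=0):
    """Minimal Gravitational Search Algorithm sketch for minimizing f."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (n_agents, len(bounds)))  # agent positions
    V = np.zeros_like(X)                              # agent velocities
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        best, worst = fit.min(), fit.max()
        # normalized masses: best agent -> 1, worst agent -> 0
        m = (fit - worst) / (best - worst + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / iters)  # gravitational constant decays over time
        A = np.zeros_like(X)
        for i in range(n_agents):
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            # acceleration: randomly weighted gravitational pull of all other agents
            A[i] = (G * rng.uniform(0, 1, n_agents) * M / dist) @ diff
        V = rng.uniform(0, 1, X.shape) * V + A
        X = np.clip(X + V, lo, hi)
    fit = np.array([f(x) for x in X])
    i = fit.argmin()
    return X[i], fit[i]

best_x, best_f = gsa(lambda x: float(np.sum(x ** 2)), bounds=[(-5, 5), (-5, 5)])
print(f"best solution {best_x.round(3)}, f = {best_f:.4f}")
```

The decaying constant G and the mass normalization are exactly the quantities whose expected behavior the analysis above examines when deriving parameter recommendations.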
This article discusses event monitoring options for heterogeneous event sources as they occur in today's heterogeneous distributed information systems. It follows the central assumption that a fully generic event monitoring solution cannot provide complete support for event monitoring; instead, source-specific semantics, such as certain event types or support for certain event monitoring techniques, have to be taken into account. Following from this, the core result of the work presented here is the extension of a configurable event monitoring (Web) service to a variety of event sources. A service approach allows us to trade genericity for the exploitation of source-specific characteristics. It thus delivers results for the areas of SOA, Web services, CEP and EDA.
Heterogeneity has to be taken into account when integrating a set of existing information sources into a distributed information system, which nowadays is often based on Service-Oriented Architectures (SOA). This applies in particular to distributed services such as event monitoring, which are useful in the context of Event-Driven Architectures (EDA) and Complex Event Processing (CEP). Web services deal with this heterogeneity at a technical level but provide little support for event processing. Our central thesis is that such a fully generic solution cannot provide complete support for event monitoring; instead, source-specific semantics, such as certain event types or support for certain event monitoring techniques, have to be taken into account. Our core result is the design of a configurable event monitoring (Web) service that allows us to trade genericity for the exploitation of source-specific characteristics. It thus delivers results for the areas of SOA, Web services, CEP and EDA.