Objective
We aimed to investigate the proportion of young patients not returning to work (NRTW) at 1 year after ischemic stroke (IS) and during follow-up, and clinical factors associated with NRTW.
Methods
Patients from the Helsinki Young Stroke Registry with an IS occurring in the years 1994–2007, who were at paid employment within 1 year before IS, and with NIH Stroke Scale score ≤15 points at hospital discharge, were included. Data on periods of payment came from the Finnish Centre for Pensions, and death data from Statistics Finland. Multivariate logistic regression analyses assessed factors associated with NRTW 1 year after IS, and lasagna plots visualized the proportion of patients returning to work over time.
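The adjusted odds ratios in the results come from multivariate logistic regression. As a rough sketch of that modelling step, the example below fits a logistic regression on synthetic data; the cohort size matches the abstract, but the covariates, coefficients, and the simple gradient-ascent fit are purely illustrative and do not reproduce the registry's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 769  # cohort size reported in the abstract

# Illustrative covariates: NIHSS score at admission (0-15, matching the
# inclusion criterion) and an aphasia indicator. Real registry data are
# not reproduced here.
nihss = rng.integers(0, 16, n).astype(float)
aphasia = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), nihss, aphasia])  # design matrix with intercept

# Simulate the NRTW outcome with a known positive association
true_beta = np.array([-1.5, 0.15, 0.8])
p_true = 1 / (1 + np.exp(-X @ true_beta))
y = (rng.random(n) < p_true).astype(float)

# Fit by plain gradient ascent on the mean log-likelihood
beta = np.zeros(3)
for _ in range(30000):
    p_hat = 1 / (1 + np.exp(-X @ beta))
    beta += 0.02 * X.T @ (y - p_hat) / n

# Adjusted odds ratios for the two covariates
odds_ratios = np.exp(beta[1:])
print({"NIHSS": round(odds_ratios[0], 2), "aphasia": round(odds_ratios[1], 2)})
```

Because the outcome was simulated with positive coefficients, both fitted odds ratios come out above 1, mirroring the direction of the associations reported in the results.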
Results
We included a total of 769 patients, of whom 289 (37.6%) were not working at 1 year, 323 (42.0%) at 2 years, and 361 (46.9%) at 5 years from IS. When adjusted for age, sex, socioeconomic status, and NIH Stroke Scale score at admission, factors associated with NRTW at 1 year after IS were large anterior strokes, strokes caused by large artery atherosclerosis, high-risk sources of cardioembolism, and rare causes other than dissection compared with undetermined cause, moderate to severe aphasia vs no aphasia, mild and moderate to severe limb paresis vs no paresis, and moderate to severe visual field deficit vs no deficit.
Conclusions
NRTW is a frequent adverse outcome after IS in young adults with mild to moderate IS. Clinical variables available during acute hospitalization may allow prediction of NRTW.
Nitric oxide adsorption on a Au(100) single crystal has been investigated to identify the type of adsorption, the adsorption site, and the orientation and alignment of the adsorbed NO relative to the surface. This was done using a combination of 3D-surface velocity map imaging, near-ambient pressure X-ray photoelectron spectroscopy, and density functional theory. NO was observed to be molecularly adsorbed on gold at ~200 K. Very narrow angular distributions and cold rotational distributions of photodesorbed NO indicate that NO adsorbs on high-symmetry sites on the Au crystal, with the N–O bond axis close to the surface normal. Our density functional theory calculations show that NO preferentially adsorbs on the symmetric bridge (2f) site, which ensures efficient overlap of the NO π* orbital with the orbitals on the two neighbouring Au atoms, and with the N–O bond axis aligned along the surface normal, in agreement with our conclusions from the rotational state distributions. The combination of XPS, which confirms the molecular adsorption of NO on gold, with 3D-surface velocity map imaging and density functional theory thus allowed us to determine the adsorption site, orientation, and alignment of nitric oxide adsorbed on Au(100).
Agility is considered the silver bullet for survival in the VUCA world. However, many organisations are afraid of endangering their ISO 9001 certificate when introducing agile processes. A joint research project of the University of Applied Sciences and Arts Hannover and the DGQ has set itself the goal of providing more certainty in this area. The findings are based on interviews with managers and team members from various organisations of different sizes and industries working in an agile manner, as well as on common audit practices and a literature analysis. The outcome presents a clear distinction between agility and flexibility as well as useful guidelines for the integration of agile processes into QM systems - for QM practitioners and auditors alike.
Integrated Risk and Opportunity Management (IROM) goes far beyond what is found in organizations today. However, it offers the best opportunity not only to keep pace with the VUCA world, but to actually profit from it. Accordingly, the introduction of opportunity-based thinking in addition to risk-based thinking is part of the design specification for ISO 9000 and ISO 9001. The prerequisite for the successful design of an IROM is the individual definition, control and integration of risk and opportunity management processes, considering eight success factors, the "8 C". Top management benefits directly from the result: better, coordinated decision memos enable faster and more appropriate decisions.
Background
Uncomplicated urinary tract infections (UTI) are common in general practice and usually treated with antibiotics. This contributes to increasing resistance rates of uropathogenic bacteria. A previous trial showed a reduction of antibiotic use in women with UTI by initial symptomatic treatment with ibuprofen. However, this treatment strategy is not suitable for all women equally. Arctostaphylos uva-ursi (UU, bearberry extract arbutin) is a potential alternative treatment. This study aims at investigating whether an initial treatment with UU in women with UTI can reduce antibiotic use without significantly increasing the symptom burden or rate of complications.
Methods
This is a double-blind, randomized, and controlled comparative effectiveness trial. Women between 18 and 75 years with suspected UTI and at least two of the symptoms dysuria, urgency, frequency or lower abdominal pain will be assessed for eligibility in general practice and enrolled into the trial. Participants will receive either a defined daily dose of 3 × 2 arbutin 105 mg for 5 days (intervention) or fosfomycin 3 g once (control). Antibiotic therapy will be provided in the intervention group only if needed, i.e. for women with worsening or persistent symptoms. Two co-primary outcomes are the number of all antibiotic courses regardless of the medical indication from day 0–28, and the symptom burden, defined as a weighted sum of the daily total symptom scores from day 0–7. The trial result is considered positive if superiority of initial treatment with UU is demonstrated with reference to the co-primary outcome number of antibiotic courses and non-inferiority of initial treatment with UU with reference to the co-primary outcome symptom burden.
Discussion
The trial’s aim is to investigate whether initial treatment with UU is a safe and effective alternative treatment strategy in women with UTI. In that case, the results might change the existing treatment strategy in general practice by promoting delayed prescription of antibiotics and a reduction of antibiotic use in primary care.
Integrating distributional and lexical information for semantic classification of words using MRMF
(2016)
Semantic classification of words using distributional features is usually based on the semantic similarity of words. We show on two different datasets that a trained classifier using the distributional features directly gives better results. We use Support Vector Machines (SVM) and Multirelational Matrix Factorization (MRMF) to train classifiers; both give similar results. However, MRMF, which had not been used for semantic classification with distributional features before, can easily be extended with more matrices containing additional information from different sources on the same problem. We demonstrate the effectiveness of the novel approach by including information from WordNet. Thus we show that MRMF provides an interesting approach for building semantic classifiers that (1) gives better results than unsupervised approaches based on vector similarity, (2) gives results similar to other supervised methods, and (3) can naturally be extended with other sources of information in order to improve the results.
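As an illustration of the supervised setup described above, here is a minimal linear SVM, trained with the Pegasos subgradient method, classifying synthetic stand-ins for distributional word vectors. The data, dimensionality, and hyperparameters are invented for the sketch and do not reproduce the paper's experiments; MRMF itself is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for distributional feature vectors of words: two
# semantic classes drawn from shifted Gaussians (illustrative only).
d, n = 50, 400
X = np.vstack([rng.normal(0.5, 1.0, (n // 2, d)),
               rng.normal(-0.5, 1.0, (n // 2, d))])
y = np.array([1] * (n // 2) + [-1] * (n // 2))

# Minimal linear SVM via the Pegasos stochastic subgradient method
lam, w = 0.01, np.zeros(d)
for t in range(1, 20001):
    i = rng.integers(n)
    eta = 1 / (lam * t)
    if y[i] * (X[i] @ w) < 1:          # margin violated: shrink and correct
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:                              # margin satisfied: only shrink
        w = (1 - eta * lam) * w

accuracy = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {accuracy:.2f}")
```

On these well-separated synthetic classes the linear classifier reaches near-perfect training accuracy, which is the point of the abstract's comparison: a trained classifier on the feature vectors can outperform purely similarity-based decisions.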
The CogALex-V Shared Task provides two datasets that consist of pairs of words along with a classification of their semantic relation. The dataset for the first task distinguishes only between related and unrelated, while the second dataset distinguishes several types of semantic relations. A number of recent papers propose to construct a feature vector that represents a pair of words by applying a simple pairwise operation to all elements of the feature vectors. Subsequently, the pairs can be classified by training any classification algorithm on these vectors. In the present paper we apply this method to the provided datasets. We find that the results are not better than the given simple baseline. We conclude that the results of the investigated method strongly depend on the type of data to which it is applied.
In distributional semantics words are represented by aggregated context features. The similarity of words can be computed by comparing their feature vectors. Thus, we can predict whether two words are synonymous or similar with respect to some other semantic relation. We will show on six different datasets of pairs of similar and non-similar words that a supervised learning algorithm on feature vectors representing pairs of words outperforms cosine similarity between vectors representing single words. We compared different methods to construct a feature vector representing a pair of words. We show that simple methods like pairwise addition or multiplication give better results than a recently proposed method that combines different types of features. The semantic relation we consider is relatedness of terms in thesauri for intellectual document classification. Thus our findings can directly be applied for the maintenance and extension of such thesauri. To the best of our knowledge this relation was not considered before in the field of distributional semantics.
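To make the pair construction concrete, the sketch below represents each word pair by the element-wise product of the two word vectors and trains a logistic-regression classifier on those pair vectors. The synthetic "related pairs share some dimensions" data and all parameters are invented for illustration and are unrelated to the thesauri used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, n = 50, 15, 500   # vector size, shared dims for related pairs, #pairs

# Illustrative data (not the paper's thesauri): a related pair shares its
# first k dimensions up to small noise; an unrelated pair is independent.
def make_pair(related):
    a = rng.normal(0, 1, d)
    b = rng.normal(0, 1, d)
    if related:
        b[:k] = a[:k] + rng.normal(0, 0.1, k)
    return a, b

pairs = [make_pair(True) for _ in range(n // 2)] + \
        [make_pair(False) for _ in range(n // 2)]
labels = np.array([1.0] * (n // 2) + [0.0] * (n // 2))

# Represent each pair by element-wise multiplication of the two vectors
Z = np.array([a * b for a, b in pairs])

# Train a logistic-regression classifier on the pair vectors
w, bias = np.zeros(d), 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(Z @ w + bias)))
    w += 0.5 * Z.T @ (labels - p) / n
    bias += 0.5 * np.mean(labels - p)

accuracy = np.mean(((Z @ w + bias) > 0) == labels.astype(bool))
print(f"pair-classifier training accuracy: {accuracy:.2f}")
```

The product features make the shared dimensions systematically positive for related pairs, which is exactly the kind of signal a supervised classifier can exploit but a single global cosine threshold cannot isolate.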
For indexing archived documents, the Dutch Parliament uses a specialized thesaurus. For good results in full-text retrieval and automatic classification, it turns out to be important to add more synonyms to the existing thesaurus terms. In the present work we investigate the possibilities of finding synonyms for terms of the parliament's thesaurus automatically. We propose to use distributional similarity (DS). In an experiment with pairs of synonyms and non-synonyms, we train and test a classifier using distributional similarity and string similarity. Using ten-fold cross validation, we were able to classify 75% of the pairs of a set of 6000 word pairs correctly.
Background: After kidney transplantation, immunosuppressive therapy causes impaired cellular immune defense, leading to an increased risk of viral complications. Trough level monitoring of immunosuppressants is insufficient to estimate the individual intensity of immunosuppression. We have already shown that virus-specific T cells (Tvis) correlate with control of virus replication as well as with the intensity of immunosuppression. The multicentre IVIST01-trial aims to show that additional steering of immunosuppressive and antiviral therapy by Tvis levels leads to better graft function through avoidance of over-immunosuppression (for example, viral infections) and drug toxicity (for example, nephrotoxicity).
Methods/design: The IVIST-trial starts 4 weeks after transplantation. Sixty-four pediatric kidney recipients are randomized either to a non-intervention group that is only treated conservatively or to an intervention group with additional monitoring by Tvis. The randomization is stratified by centre and cytomegalovirus (CMV) prophylaxis. In both groups the immunosuppressive medication (cyclosporine A and everolimus) is adjusted to the same target range of trough levels. In the non-intervention group the immunosuppressive therapy is steered by classical trough level monitoring alone, and the antiviral therapy of a CMV infection is performed according to a standard protocol. In contrast, in the intervention group the dose of immunosuppressants is individually adjusted according to Tvis levels as a direct measure of the intensity of immunosuppression, in addition to classical trough level monitoring. In case of CMV infection or reactivation, the antiviral management is based on the individual CMV-specific immune defense assessed by the CMV-Tvis level. The primary endpoint of the study is the glomerular filtration rate 2 years after transplantation; secondary endpoints are the number and severity of viral infections and the incidence of side effects of immunosuppressive and antiviral drugs.
Discussion: This IVIST01-trial will answer the question whether the new concept of steering immunosuppressive and antiviral therapy by Tvis levels leads to better future graft function. In terms of an effect-related drug monitoring, the study design aims to realize a personalization of immunosuppressive and antiviral management after transplantation. Based on the IVIST01-trial, immunomonitoring by Tvis might be incorporated into routine care after kidney transplantation.
The subject of this work is the investigation of universal scaling laws observed in coupled chaotic systems. Progress is made by replacing the chaotic fluctuations in the perturbation dynamics with stochastic processes.
First, a continuous-time stochastic model for weakly coupled chaotic systems is introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck equation, scaling relations are derived, which are confirmed by results of numerical simulations.
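The coupling dependence of Lyapunov exponents can also be probed by direct simulation. The sketch below computes the two Lyapunov exponents of a pair of weakly coupled, non-identical logistic maps via a QR (Gram-Schmidt) iteration; the map parameters and coupling values are illustrative choices, not those of the thesis.

```python
import numpy as np

# Two weakly coupled, slightly non-identical logistic maps. The Lyapunov
# exponents are obtained by accumulating the log of the diagonal of R in
# a repeated QR decomposition of the propagated Jacobian.
def lyapunov_exponents(eps, r1=4.0, r2=3.9, steps=20000):
    x, y = 0.3, 0.6
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(steps):
        fx, fy = r1 * x * (1 - x), r2 * y * (1 - y)
        dfx, dfy = r1 * (1 - 2 * x), r2 * (1 - 2 * y)
        J = np.array([[(1 - eps) * dfx, eps * dfy],
                      [eps * dfx, (1 - eps) * dfy]])
        x = (1 - eps) * fx + eps * fy   # diffusive coupling of the maps
        y = (1 - eps) * fy + eps * fx
        Q, R = np.linalg.qr(J @ Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / steps

for eps in (0.0, 1e-4, 1e-2):
    l1, l2 = lyapunov_exponents(eps)
    print(f"eps={eps:g}: lambda1={l1:.3f}, lambda2={l2:.3f}")
```

At eps = 0 the two exponents reduce to those of the uncoupled maps (ln 2 ≈ 0.693 for r = 4); already at very small coupling the exponents shift, which is the singular dependence on the coupling strength that the coupling-sensitivity scaling relations describe.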
Next, the new effect of avoided crossing of Lyapunov exponents of weakly coupled disordered chaotic systems is described, which is qualitatively similar to the energy level repulsion in quantum systems. Using the scaling relations obtained for the coupling sensitivity of chaos, an asymptotic expression for the distribution function of small spacings between Lyapunov exponents is derived and compared with results of numerical simulations.
Finally, the synchronization transition in strongly coupled spatially extended chaotic systems is shown to resemble a continuous phase transition, with the coupling strength and the synchronization error as control and order parameter, respectively. Using results of numerical simulations and theoretical considerations in terms of a multiplicative noise partial differential equation, the universality classes of the observed two types of transition are determined (Kardar-Parisi-Zhang equation with saturating term, directed percolation).
The network security framework VisITMeta allows the visual evaluation and management of security event detection policies. By means of a "what-if" simulation, the sensitivity of policies to specific events can be tested and adjusted. This paper presents the results of a user study testing the usability of the approach by measuring the correct completion of given tasks as well as user satisfaction by means of the System Usability Scale.
Intrusion detection systems and other network security components detect security-relevant events based on policies consisting of rules. If an event turns out to be a false alarm, the corresponding policy has to be adjusted in order to reduce the number of false positives. Modified policies, however, need to be tested before going into productive use. We present a visual analysis tool for the evaluation of security events and related policies which integrates data from different sources using the IF-MAP specification and provides a "what-if" simulation for testing modified policies on past network dynamics. In this paper, we describe the design and outcome of a user study that helps us evaluate our visual analysis tool.
For anomaly-based intrusion detection in computer networks, data cubes can be used for building a model of the normal behavior of each cell. During inference an anomaly score is calculated based on the deviation of cell metrics from the corresponding normality model. A visualization approach is shown that combines different types of diagrams and charts with linked user interaction for filtering of data.
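A minimal version of such cell-wise scoring might look as follows; the cube dimensions (subnet x hour x day), the traffic metric, and the 5-sigma threshold are all invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of cell-wise anomaly scoring in a data cube: for each cell
# (here: source subnet x hour of day), the normality model is the
# historical mean and standard deviation of a traffic metric over past
# days; the anomaly score is the z-scored deviation of the current value.
history = rng.poisson(100, size=(8, 24, 30))   # 8 subnets x 24 hours x 30 days
mean = history.mean(axis=2)
std = history.std(axis=2) + 1e-9               # avoid division by zero

current = rng.poisson(100, size=(8, 24)).astype(float)
current[2, 14] = 400                           # inject one anomalous cell

score = np.abs(current - mean) / std           # deviation from normality model
flagged = np.argwhere(score > 5)               # cells exceeding 5 sigma
print("flagged cells (subnet, hour):", flagged.tolist())
```

In a visualization along the lines of the paper, the per-cell scores would then drive the linked diagrams and charts, with interactive filtering used to drill into the flagged cells.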
Introduction:
Human Immunodeficiency Virus (HIV) infection remains a prevalent co-morbidity among fracture patients. Few studies have investigated the role of exercise interventions in preventing bone demineralization in people who have fractures and HIV. With exposure to exercise, HIV-infected individuals may experience improved bone mineral density (BMD), function, and quality of life (QoL). This study aims to assess the impact of home-based exercises on bone mineral density, functional capacity, QoL, and selected serological markers of health in HIV infection among Nigerians and South Africans.
Methods and design:
The study is an assessor-blinded randomized controlled trial. Patients managed with internal and external fixation for femoral shaft fracture at the study sites will be recruited to participate in the study. The participants will be recruited 2 weeks post-discharge at the follow-up clinic with the orthopaedic surgeon. The study population will consist of persons with femoral fracture, both HIV-positive (medically confirmed) and HIV-negative, aged 18 to 60 years, attending the above-named health facilities. For the HIV-positive participants, a documented positive HIV result as well as a history of follow-up at the HIV treatment and care centre will be required. A home-based exercise programme developed for this study will be implemented in the experimental group, while the control group continues with the usual rehabilitation programme. The primary outcome measures will be function, gait, bone mineral density, physical activity, and QoL.
Discussion:
The proposed trial will compare the effect of a home-based physical exercise-training programme in the management of femoral fracture to the usual physiotherapy management programmes with specific outcomes of bone mineral density, function, and inflammatory markers.
Background: Autism Spectrum Disorder (ASD) is characterized by impairments in social communication and language development together with restricted, repetitive patterns of behavior, interests, or activities. It comprises a group of complex neurodevelopmental syndromes with diverse phenotypes that show considerable etiological and clinical heterogeneity, and it is considered one of the most heritable disorders (heritability over 90%). Genetic, epigenetic, and environmental factors play a role in the development of ASD.
Aim: This study was designed to investigate the extent of DNA damage in parents of autistic children by treating peripheral blood mononuclear cells (PBMCs) with bleomycin and hydrogen peroxide (H2O2).
Methods: Peripheral blood mononuclear cells (PBMCs) were isolated by the Ficoll method and treated with a specific concentration of bleomycin and H2O2 for 30 min and 5 min, respectively. Then, the degree of DNA damage was analyzed by the alkaline comet assay or single cell gel electrophoresis (SCGE), an effective way to measure DNA fragmentation in eukaryotic cells.
Results: Our findings revealed a significant increase in DNA damage in parents of affected children compared to the control group, which may indicate an impaired DNA repair capacity. Furthermore, our study showed a significant association between fathers' occupational exposure to environmental factors, as well as family marriage, and ASD in the offspring.
Conclusion: Our results suggest that the influence of environmental factors on parents of autistic children may affect the development of autistic disorder in their offspring. Based on these results, further studies are needed on the effect of environmental factors on the extent of DNA damage in parents of affected children.
Objective
The study’s objective was to assess factors contributing to the use of smart devices by general practitioners (GPs) and patients in the health domain, while specifically addressing the situation in Germany, and to determine whether, and if so, how both groups differ in their perceptions of these technologies.
Methods
GPs and patients of resident practices in the Hannover region, Germany, were surveyed between April and June 2014. A total of 412 GPs in this region were invited by email to participate via an electronic survey, with 50 GPs actually doing so (response rate 12.1%). For surveying the patients, eight regional resident practices were visited by study personnel (once each). Every second patient arriving there (inclusion criteria: of age, fluent in German) was asked to take part (paper-based questionnaire). One hundred and seventy patients participated; 15 patients who did not give consent were excluded.
Results
The majority of the participating patients (68.2%, 116/170) and GPs (76%, 38/50) owned mobile devices. Of the patients, 49.9% (57/116) already made health-related use of mobile devices; 95% (36/38) of the participating GPs used them in a professional context. For patients, age (P < 0.001) and education (P < 0.001) were significant factors, but not gender (P > 0.99). For doctors, neither age (P = 0.73), professional experience (P > 0.99) nor gender (P = 0.19) influenced usage rates. For patients, the primary use case was obtaining health (service)-related information. For GPs, interprofessional communication and retrieving information were in the foreground. There was little app-related interaction between both groups.
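P values of the kind reported here for 2x2 comparisons (for example, usage by gender) are often obtained with Fisher's exact test. The sketch below implements the two-sided test from scratch; the counts are made up for illustration and are not the study's data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact P value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables (with the same
    margins) that are at most as probable as the observed one.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    total = comb(n, col1)

    def prob(x):  # hypergeometric probability that the top-left cell is x
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# A nearly balanced table (no association) yields a large P value,
# consistent with the "P > 0.99" style results reported for gender.
print(round(fisher_exact_2x2(28, 29, 29, 30), 2))
```

With a strongly imbalanced table (for example, `fisher_exact_2x2(40, 10, 10, 40)`) the same function returns a P value far below 0.001, the pattern reported here for age and education among patients.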
Conclusions
GPs and patients use smart mobile devices to serve their specific interests. However, the full potentials of mobile technologies for health purposes are not yet being taken advantage of. Doctors as well as other care providers and the patients should work together on exploring and realising the potential benefits of the technology.
The properties of carbon nanostructures are determined by the structure and orientation of the graphitic domains formed during pyrolysis of carbon precursors. In this work, we systematically investigated the impact of creep stress during the stabilization process on the cyclization and molecular orientation of polyacrylonitrile, as well as on the graphitized structure after high-temperature carbonization. To this end, polyacrylonitrile (PAN) was electrospun and then stabilized with and without application of creep stress at different temperatures. The effect of creep stress on cyclization was monitored via Fourier transform IR spectroscopy (FTIR); the degree of cyclization varies with the application of creep stress during the initial stages of cyclization at low temperatures (190°C and 210°C), in contrast to cyclization at higher temperature (230°C). The Herman molecular orientation factor was evaluated by polarized FTIR for PAN nanofibers cyclized with and without creep stress at 230°C for 10 h. Subsequently, carbonization was performed at 1000°C and 1200°C for nanofibers cyclized at 230°C for 10 h. Our results from XRD and Raman spectroscopy show that the degree of graphitization and the ordering of graphitic domains were enhanced for PAN nanofibers that were creep-stressed during the cyclization process, even though nanofibers cyclized with and without creep stress contained the same amount of cyclized material. This increased degree of graphitization can be traced to the application of creep stress during the stabilization process, which evidently favors the formation of sp2-hybridized carbon planes in the carbonization process. This finding highlights the impact of mechanical stress, linking the cyclization of PAN nanofibers to graphitization.
Our results will pave the way for a deeper understanding of mechano-chemical processes to fabricate well-aligned graphitic domains which improves the mechanical and electrical properties of CNFs.
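For reference, the Herman orientation factor evaluated above is conventionally defined from the mean-square cosine of the angle φ between the molecular axis probed by polarized FTIR and the fiber axis:

```latex
f = \frac{3\langle \cos^2 \varphi \rangle - 1}{2}
```

so that f = 1 corresponds to perfect alignment along the fiber axis, f = −1/2 to perpendicular orientation, and f = 0 to an isotropic distribution.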
Improving the graphitic structure in carbon nanofibers (CNFs) is important for exploiting their potential in mechanical, electrical and electrochemical applications. Typically, the synthesis of carbon fibers with a highly graphitized structure demands a high temperature of almost 2500 °C. Furthermore, to achieve an improved graphitic structure, the stabilization of a precursor fiber has to be assisted by the presence of tension in order to enhance the molecular orientation. Keeping this in view, herein we report on the fabrication of graphene nanoplatelets (GNPs) doped carbon nanofibers using electrospinning followed by oxidative stabilization and carbonization. The effect of doping GNPs on the graphitic structure was investigated by carbonizing them at various temperatures (1000 °C, 1200 °C, 1500 °C and 1700 °C). Additionally, a stabilization was achieved with and without constant creep stress (only shrinkage stress) for both pristine and doped precursor nanofibers, which were eventually carbonized at 1700 °C. Our findings reveal that the GNPs doping results in improving the graphitic structure of polyacrylonitrile (PAN). Further, in addition to the templating effect during the nucleation and growth of graphitic crystals, the GNPs encapsulated in the PAN nanofiber matrix act in-situ as micro clamp units performing the anchoring function by preventing the loss of molecular orientation during the stabilization stage, when no external tension is applied to nanofiber mats. The templating effect of the entire graphitization process is reflected by an increased electrical conductivity along the fibers. Simultaneously, the electrical anisotropy is reduced, i.e., the GNPs provide effective pathways with improved conductivity acting like bridges between the nanofibers resulting in an improved conductivity across the fiber direction compared to the pristine PAN system.
The reactivity of graphene at its boundary region has been imaged using non-linear spectroscopy to address the controversy as to whether the terraces of graphene or its edges are more reactive. Graphene was functionalised with phenyl groups, and we subsequently scanned our vibrational sum-frequency generation setup from the functionalised graphene terraces across the edges. A greater phenyl signal is clearly observed at the edges, providing evidence of increased reactivity in the boundary region. We estimate an upper limit of 1 mm for the width of the CVD graphene boundary region.
We report the unambiguous detection of phenyl groups covalently attached to functionalised graphene using non-linear spectroscopy. Sum-frequency generation was employed to probe graphene on a gold surface after chemical functionalisation using a benzene diazonium salt. We observe a distinct resonance at 3064 cm⁻¹ which can clearly be assigned to an aromatic C–H stretch by comparison with a self-assembled monolayer on a gold substrate formed from benzenethiol. Not only does sum-frequency generation spectroscopy allow one to characterise functionalised graphene with higher sensitivity and much better specificity than many other spectroscopic techniques, but it also opens up the possibility to assess the coverage of graphene with functional groups, and to determine their orientation relative to the graphene surface.
Digital data on tangible and intangible cultural assets is an essential part of daily life, communication and experience. It has a lasting influence on the perception of cultural identity as well as on the interactions between research, the cultural economy and society. Throughout the last three decades, many cultural heritage institutions have contributed a wealth of digital representations of cultural assets (2D digital reproductions of paintings, sheet music, 3D digital models of sculptures, monuments, rooms, buildings), audio-visual data (music, film, stage performances), and procedural research data such as encoding and annotation formats. The long-term preservation and FAIR availability of research data from the cultural heritage domain is fundamentally important, not only for future academic success in the humanities but also for the cultural identity of individuals and society as a whole. Up to now, no coordinated effort for professional research data management on a national level has existed in Germany. NFDI4Culture aims to fill this gap and create a user-centered, research-driven infrastructure that will cover a broad range of research domains from musicology, art history and architecture to performance, theatre, film, and media studies.
The research landscape addressed by the consortium is characterized by strong institutional differentiation. Research units in the consortium's community of interest comprise university institutes, art colleges, academies, galleries, libraries, archives and museums. This diverse landscape is also characterized by an abundance of research objects, methodologies and a great potential for data-driven research. In a unique effort carried out by the applicant and co-applicants of this proposal and ten academic societies, this community is interconnected for the first time through a federated approach that is ideally suited to the needs of the participating researchers. To promote collaboration within the NFDI, to share knowledge and technology and to provide extensive support for its users have been the guiding principles of the consortium from the beginning and will be at the heart of all workflows and decision-making processes. Thanks to these principles, NFDI4Culture has gathered strong support ranging from individual researchers to high-level cultural heritage organizations such as UNESCO, the International Council of Museums, the Open Knowledge Foundation and Wikimedia. On this basis, NFDI4Culture will take innovative measures that promote a cultural change towards a more reflective and sustainable handling of research data and at the same time boost qualification and professionalization in data-driven research in the domain of cultural heritage. This will create a long-lasting impact on science, cultural economy and society as a whole.
Background: Health information systems (HIS) are one of the most important areas of biomedical and health informatics. Well-educated informaticians are needed to deal professionally with HIS. For this reason, an international course was established in 2001: the Frank – van Swieten Lectures on Strategic Information Management of Health Information Systems.
Objectives: To report on the Frank – van Swieten Lectures and on our students' feedback on this course over the last 16 years, to summarize our lessons learned, and to make recommendations for such international courses on HIS.
Methods: The basic concept of the Frank – van Swieten Lectures is to teach the theoretical background in local lectures, to organize practical exercises on modelling sub-information systems of the respective local HIS, and finally to conduct the Joint Three Days, an international meeting where the resulting models are presented and compared.
Results: During the last 16 years, the Universities of Amsterdam, Braunschweig, Heidelberg/Heilbronn, Leipzig as well as UMIT were involved in running this course. Overall, 517 students from these universities participated. Our students' feedback was clearly positive.
The Joint Three Days of the Frank – van Swieten Lectures, where at the end of the course all students meet, turned out to be an important component of the course. Based on the last 16 years, we recommend common teaching materials, agreement on equivalent clinical areas for the exercises, support for forming international student groups, encouragement to use a collaboration platform, quality management of the course, addressing the students' different levels of knowledge, and sufficient funding for joint activities.
Conclusions: Although associated with considerable additional effort, we clearly recommend establishing international courses on HIS such as the Frank – van Swieten Lectures.
Background: In Germany, hospice and palliative care is well covered through inpatient, outpatient, and home-based care services. It is unknown if, and to what extent, there is a need for additional day care services to meet the specific needs of patients and caregivers.
Methods: Two day hospices and two palliative day care clinics were selected. In the first step, two managers from each facility (n = 8) were interviewed by telephone, using a semi-structured interview guide. In the second step, four focus groups were conducted, each with three to seven representatives of hospice and palliative care from the facilities’ hospice and palliative care networks. Interviews and focus groups were audio recorded, transcribed verbatim and analyzed using qualitative content analysis.
Results: The interviewed experts perceived day care services as providing additional patient and caregiver benefits. Specifically, the services were perceived to meet patient needs for social interaction and bundled treatments, especially for patients who did not fit into inpatient settings (due to, e.g., their young age or a lack of desire for inpatient admission). The services were also perceived to meet caregiver needs for support, providing short-term relief for the home care situation.
Conclusions: The results suggest that inpatient, outpatient, and home-based hospice and palliative care services do not meet the palliative care needs of all patients. Although the population that is most likely to benefit from day care services is assumed to be relatively small, such services may meet the needs of certain patient groups more effectively than other forms of care.
FID Civil Engineering, Architecture and Urbanism digital - A platform for science (BAUdigital)
(2022)
University Library Braunschweig (UB Braunschweig), University and State Library Darmstadt (ULB Darmstadt), TIB – Leibniz Information Centre for Technology and Natural Sciences and the Fraunhofer Information Centre for Planning and Building (Fraunhofer IRB) are jointly establishing a specialised information service (FID, "Fachinformationsdienst") for the disciplines of civil engineering, architecture and urbanism. The FID BAUdigital, which is funded by the German Research Foundation (DFG, "Deutsche Forschungsgemeinschaft"), will provide researchers working on digital design, planning and production methods in construction engineering with a joint information, networking and data exchange platform and support them with innovative services for documentation, archiving and publication in their data-based research.
Hadoop is a Java-based open source programming framework that supports the processing and storage of large volumes of data in a distributed computing environment. At the same time, an overwhelming majority of organizations are moving their big data processing and storage to the cloud to take advantage of cost reduction: the cloud eliminates the need to invest heavily in infrastructure that may or may not be fully used. This paper shows how organizations can alleviate some of the obstacles faced when trying to make Hadoop run in the cloud.
Our work is motivated primarily by the lack of standardization in the area of Event Processing Network (EPN) models. We identify general requirements for such models. These requirements encompass the possibility to describe events in the real world, to establish temporal and causal relationships among the events, to aggregate the events, to organize the events into a hierarchy, to categorize the events into simple or complex, to create an EPN model in an easy and simple way and to use that model ad hoc. As the major contribution, this paper applies the identified requirements to the RuleCore model.
In this paper, five ontologies that include event concepts are described. The paper provides an overview and comparison of existing event models. The main criteria for the comparison are the ability to model events with temporal and spatial extent and with participating objects; other factors are taken into account as well. The paper also shows an example of using ontologies in complex event processing.
OSGi in Cloud Environments
(2013)
With increasing complexity and scale, sufficient evaluation of Information Systems (IS) becomes a challenging and difficult task. Simulation modeling has proven to be a suitable and efficient methodology for evaluating IS and IS artifacts, provided it meets certain quality demands. However, existing research on simulation modeling quality solely focuses on quality in terms of accuracy and credibility, disregarding the role of additional quality aspects. Therefore, this paper proposes two design artifacts in order to ensure a holistic view on simulation quality. First, associated literature is reviewed in order to extract relevant quality factors in the context of simulation modeling, which can be used to evaluate the overall quality of a simulated solution before, during or after a given project. Second, the deduced quality factors are integrated into a quality assessment framework to provide structural guidance on the quality assessment procedure for simulation. In line with a Design Science Research (DSR) approach, we demonstrate the eligibility of both design artifacts by means of prototyping as well as an example case. Moreover, the assessment framework is evaluated and iteratively adjusted with the help of expert feedback.
The paper provides a comprehensive overview of modeling and pricing cyber insurance and includes clear and easily understandable explanations of the underlying mathematical concepts. We distinguish three main types of cyber risks: idiosyncratic, systematic, and systemic cyber risks. While for idiosyncratic and systematic cyber risks, classical actuarial and financial mathematics appear to be well-suited, systemic cyber risks require more sophisticated approaches that capture both network and strategic interactions. In the context of pricing cyber insurance policies, issues of interdependence arise for both systematic and systemic cyber risks; classical actuarial valuation needs to be extended to include more complex methods, such as concepts of risk-neutral valuation and (set-valued) monetary risk measures.
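The pricing distinction drawn in the abstract above can be illustrated with a toy formula (our own sketch, not taken from the paper): for idiosyncratic risks, a classical actuarial premium is an expected loss with a safety loading, whereas interdependent systemic losses call for a (possibly set-valued) monetary risk measure applied to the aggregate portfolio loss:

```latex
\pi_{\text{idio}} = (1+\theta)\,\mathbb{E}[L],
\qquad
\pi_{\text{sys}} = \rho\!\left(\sum_{i=1}^{n} L_i\right),
```

where $L$ is a single policy's loss, $\theta > 0$ a safety loading, $L_1,\dots,L_n$ the interdependent losses of $n$ policyholders, and $\rho$ a monetary risk measure capturing the network effects the paper discusses.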
The objective of this study is to analyze noise patterns during 599 visceral surgical procedures. Considering work-safety regulations, we identify the noise patterns inherent in major visceral surgeries. Increased noise levels are known to have negative health impacts. Based on very fine-grained data collection over a year, this study introduces a new procedure for the visual representation of intra-surgery noise progression and paves new paths for future research on noise reduction in visceral surgery. Digital decibel sound-level meters were used to record the total noise in three operating theatres in one-second cycles over a year. These data were matched to archival data on surgery characteristics. Because surgeries inherently vary in length, we developed a new procedure to normalize surgery times in order to run cross-surgery comparisons. Based on this procedure, dBA values were assigned to each normalized time point. Noise-level patterns are presented for surgeries contingent on important surgery characteristics: 16 different surgery types, operation method, day/night time point and operation complexity (complexity levels 1–3). This serves to cover a wide spectrum of day-to-day surgeries. The noise patterns reveal significant sound-level differences of about 1 dBA, with the most common noise levels lying between 55 and 60 dBA. This indicates a sound situation in many of the surgeries studied that is likely to cause stress in patients and staff. Absolute and relative risks of meeting or exceeding 60 dBA differ considerably across operation types. In conclusion, the study reveals that maximum noise levels of 55 dBA are frequently exceeded during visceral surgical procedures. Complex surgeries in particular show, on average, higher noise exposure. Our findings warrant active noise management for visceral surgery to reduce potential negative impacts of noise on surgical performance and outcome.
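The time-normalization step described in the abstract above can be sketched as resampling each surgery's per-second dBA series onto a common grid, so that surgeries of different durations become comparable point by point (a minimal illustration under our own assumptions; the function name and grid size are hypothetical, not the authors' procedure):

```python
import numpy as np

def normalize_surgery(noise_dba, n_points=100):
    """Resample a per-second dBA series of arbitrary length onto a fixed
    grid of n_points normalized time points (0 = start of surgery,
    1 = end) using linear interpolation."""
    t_orig = np.linspace(0.0, 1.0, num=len(noise_dba))
    t_norm = np.linspace(0.0, 1.0, num=n_points)
    return np.interp(t_norm, t_orig, noise_dba)

# Two surgeries of different lengths mapped onto the same 5-point grid:
short = normalize_surgery([55, 60, 58], n_points=5)
long_ = normalize_surgery(list(range(50, 70)), n_points=5)
mean_profile = (short + long_) / 2  # cross-surgery average per time point
```

Once every surgery lives on the same normalized grid, average profiles per surgery type or complexity level are simple element-wise means.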
High-performance firms typically have two features in common: (i) they produce in more than one country and (ii) they produce more than one product. In this paper, we analyze the internationalization strategies of multi-product firms. Guided by several new stylized facts, we develop a theoretical model to determine optimal modes of market access at the firm–product level. We find that the most productive firms sell core varieties via foreign direct investment and export products with intermediate productivity. Shocks to trade costs and technology affect the endogenous decision to export or produce abroad at the product level and, in turn, the relative productivity between parents and affiliates.
Complex Event Processing (CEP) has been established as a well-suited software technology for processing high-frequency data streams. However, intelligent stream-based systems must integrate stream data with semantic background knowledge. In this work, we investigate different approaches to integrating stream data and semantic domain knowledge. In particular, we discuss two different architectures from a software engineering perspective: an approach adding an ontology access mechanism to a common Continuous Query Language (CQL) is compared with C-SPARQL, a streaming extension of the RDF query language SPARQL.
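The CQL-with-ontology-access idea can be caricatured in a few lines: a sliding window over the event stream combined with a lookup into static background knowledge (all names and structures here are hypothetical stand-ins, not the paper's actual CQL or C-SPARQL syntax):

```python
from collections import deque

# Static background knowledge (a stand-in for an ontology lookup).
KNOWLEDGE = {"sensor-1": "ICU", "sensor-2": "OR"}

def windowed_join(events, window_size=3):
    """CQL-style sliding-window aggregation that enriches each stream
    event with static domain knowledge, mimicking the ontology-access
    approach described in the abstract."""
    window = deque(maxlen=window_size)
    enriched = []
    for sensor_id, value in events:
        window.append(value)
        enriched.append({
            "sensor": sensor_id,
            "ward": KNOWLEDGE.get(sensor_id, "unknown"),
            "window_avg": sum(window) / len(window),
        })
    return enriched
```

In C-SPARQL, the window would instead be declared in the query itself (a `RANGE`/`STEP` clause over an RDF stream) and the background knowledge joined as ordinary static triples.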
All of us are aware of the changes in the information field during the last years. We all see the paradigm shift coming and have some idea how it will challenge our profession in the future. But what will the road to excellence in the education of information specialists look like? There are different models (new and old) for reorganising the structure of education: integration, specialisation, the step-by-step model, the module system, and the network system / combination model. The paper presents the current state of the discussion on building up a new curriculum at the Department of Information and Communication (IK) at the FH Hannover. Based on the department's mission statement, »Education of information professionals is a part of the dynamic evolution of knowledge society«, the direction of change and the main goals are presented. The different reorganisation models are explained with their objectives, opportunities and forms of implementation. Some examples illustrate the ideas and tools for a first draft of a restructuring plan to become fit for the future. This talk was held at the German-Dutch University Conference »Information Specialists for the 21st Century« at the Fachhochschule Hannover - University of Applied Sciences, Department of Information and Communication, October 14-15, 1999 in Hannover, Germany.
The aim of the podcast Digitization of Medicine is to interest a broader audience and, in particular, young women, in research and work in the field of medical informatics. This article presents the usage figures and discusses their significance for further research on the success of science communication. By 24 February 2022, a total of 24,351 downloads had been made. There were slightly more female than male listeners, and they tended to be younger. Despite the importance podcasts are gaining for science communication, little is known about the respective user group, and further research is needed. In this context, this paper aims to help make the effectiveness of podcasts comparable.
Quartz-crystal microbalances (QCMs) are commercially available mass sensors which mainly consist of a quartz resonator oscillating at a characteristic frequency, which shifts when the mass changes due to surface binding of molecules. In addition to mass changes, the viscosity of gases or liquids in contact with the sensor also shifts the resonance and influences the quality factor (Q-factor). Typical biosensor applications demand operation in liquid environments, where viscous damping strongly lowers Q-factors. For reliable measurements in liquid environments, excellent resonator control and signal processing are essential, but standard resonator circuits like the Pierce and Colpitts oscillators fail to establish stable resonances. Here we present a low-cost, compact and robust oscillator circuit comprising state-of-the-art commercially available surface-mount technology components, which stimulates the QCM's oscillation while also establishing a control loop regulating the applied voltage. Thereby, increased energy dissipation caused by strong viscous damping in liquid solutions can be compensated and oscillations are stabilized. The presented circuit is suitable for use in compact biosensor systems using custom-made miniaturized QCMs in microfluidic environments. As a proof of concept, we used this circuit in combination with a customized microfabricated QCM in a microfluidic environment to measure the concentration of C-reactive protein (CRP) in buffer (PBS) down to concentrations as low as 5 μg/mL.
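The mass-frequency relation underlying QCM sensing is the standard Sauerbrey equation (quoted here as general background for the abstract above, not from the paper itself):

```latex
\Delta f = -\frac{2 f_0^{2}}{A \sqrt{\rho_q \mu_q}}\,\Delta m,
```

where $f_0$ is the fundamental resonance frequency, $A$ the active electrode area, and $\rho_q$, $\mu_q$ the density and shear modulus of quartz. The relation holds for thin rigid films; under viscous loading in liquids it must be corrected, which is precisely why the Q-factor control discussed above matters.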
Research question: Rivalries in team sports are commonly conceptualized as a threat to the fans’ identity. Therefore, past research has mainly focused on the negative consequences. However, theoretical arguments and empirical evidence suggest that rivalry has both negative and positive effects on fans’ self-concept. This research develops and empirically tests a model which captures and integrates these dual effects of rivalry.
Research methods: Data were collected via an on-site survey at home games of eight German Bundesliga football teams (N = 571). Structural equation modeling provides strong support for the proposed model.
Results and findings: In line with previous research, the results show that rivalry threatens fans’ identity as reflected in lower public collective self-esteem in relation to supporters of the rival team. However, the results also show that there are crucial positive consequences, such as higher perceptions of public collective self-esteem in relation to supporters of non-rival opponents, perceived ingroup distinctiveness and ingroup cohesion. These positive effects are mediated through increases in disidentification with the rival and perceived reciprocity of rivalry.
Implications: We contribute to the literature by providing a more balanced view of one of team sports’ key phenomena. Our results indicate that the prevalent conceptualization of rivalry as an identity threat should be amended by the positive consequences. Our research also offers guidance for the promotion of rivalries, where the managerial focus should be on creating a perception that a rivalry is reciprocal.
Research question: In order to reduce fan aggression surrounding rivalry games, team sport organizations often try to placate fans by downplaying the importance of the game (e.g. ‘the derby is not a war’). Drawing on the intergroup conflict literature, this research derives dual identity statements and examines their effectiveness in reducing fan aggressiveness compared to the managerial practice of downplaying rivalry.
Research methods: Three field experimental studies (one face-to-face survey and two online surveys) tested the hypotheses. Established rivalries in the German soccer league Bundesliga served as the empirical setting of the studies. The data were analyzed using ANCOVA and linear regression analyses.
Results and findings: Dual identity statements reduce fan aggressiveness compared to both downplay statements and a no-statement control condition, independent of team identification and trait aggression. Importantly, the managerial practice of downplaying rivalry appears to be counterproductive. It produces even higher levels of fan aggressiveness than making no statement, an effect caused by psychological reactance.
Implications: Sport organizations should not alienate their fan base by attempting to play down the importance of rivalry, which is an integral part of fan identity. Instead, they should strengthen the supporters’ unique identity (as fans of a particular team) while at the same time facilitating identification with the rival at a superordinate level (e.g. as joint fans of a region).
Marketing, get ready to rumble — How rivalry promotes distinctiveness for brands and consumers
(2018)
Scholars typically advise brands to stay away from public conflict with competitors, as research has focused on negative consequences such as price wars, escalating hostilities, and derogation. This research distinguishes between rivalry between firms (inter-firm brand rivalry) and rivalry between consumers (inter-consumer brand rivalry). Four studies and six samples show that both types of rivalry can have positive consequences for both firms and consumers. Inter-firm brand rivalry boosts the perceived distinctiveness of competing brands independent of consumption, attitude, familiarity, and involvement. Inter-consumer brand rivalry increases consumer group distinctiveness, an effect mediated by brand identification and rival-brand disidentification. We extend social identity theory by demonstrating that: 1) outside actors like firms can promote inter-consumer rivalry through inter-firm rivalry and 2) promoting such conflict can actually provide benefits to consumers as well as firms. The paper challenges the axiom "never knock the competition," deriving a counter-intuitive way to accomplish one of marketing's premier objectives.
Social comparison theories suggest that ingroups are strengthened whenever important outgroups are weakened (e.g., by losing status or power). It follows that ingroups have little reason to help outgroups facing an existential threat. We challenge this notion by showing that ingroups can also be weakened when relevant comparison outgroups are weakened, which can motivate ingroups to strategically offer help to ensure the outgroups' survival as a highly relevant comparison target. In three preregistered studies, we showed that an existential threat to an outgroup with high (vs. low) identity relevance affected strategic outgroup helping via two opposing mechanisms. The potential demise of a highly relevant outgroup increased participants’ perceptions of ingroup identity threat, which was positively related to helping. At the same time, the outgroup’s misery evoked schadenfreude, which was negatively related to helping. Our research exemplifies a group's secret desire for strong outgroups by underlining their importance for identity formation.
According to the third-person effect or the influence of presumed media influence approach, the presumption that the media has strong effects on other people can affect individuals’ attitudes and behavior. For instance, if people believe in strong media influences on others, they are more likely to increase their communication activities or support demands for restrictions on media. A standardized online survey among German journalists (N = 960) revealed that the stronger the journalists perceive the political online influence on the public to be, the more frequently they contradict unwanted political views in their articles. Moreover, even journalists are more likely to approve of restrictions on the Internet’s political influence, the stronger they believe the effects of online media to be. The data reveal no connections between communication activities and demands for restrictions.
Enterprise apps on mobile devices typically need to communicate with other system components by consuming web services. Since most of the current mobile device platforms (such as Android) do not provide built-in features for consuming SOAP services, extensions have to be designed. Additionally in order to accommodate the typical enhanced security requirements of enterprise apps, it is important to be able to deal with SOAP web service security extensions on client side. In this article we show that neither the built-in SOAP capabilities for Android web service clients are sufficient for enterprise apps nor are the necessary security features supported by the platform as is. After discussing different existing extensions making Android devices SOAP capable we explain why none of them is really satisfactory in an enterprise context. Then we present our own solution which accommodates not only SOAP but also the WS-Security features on top of SOAP. Our solution heavily relies on code generation in order to keep the flexibility benefits of SOAP on one hand while still keeping the development effort manageable for software development. Our approach provides a good foundation for the implementation of other SOAP extensions apart from security on the Android platform as well. In addition our solution based on the gSOAP framework may be used for other mobile platforms in a similar manner.
Music streaming platforms offer music listeners an overwhelming choice of music. Therefore, users of streaming platforms need the support of music recommendation systems to find music that suits their personal taste. Currently, a new class of recommender systems based on knowledge graph embeddings promises to improve the quality of recommendations, in particular to provide diverse and novel recommendations. This paper investigates how knowledge graph embeddings can improve music recommendations. First, it is shown how a collaborative knowledge graph can be derived from open music data sources. Based on this knowledge graph, the music recommender system EARS (knowledge graph Embedding-based Artist Recommender System) is presented in detail, with particular emphasis on recommendation diversity and explainability. Finally, a comprehensive evaluation with real-world data is conducted, comparing different embeddings and investigating the influence of different types of knowledge.
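The core idea of embedding-based recommendation can be sketched with a TransE-style scoring function, where a user vector plus a "likes" relation vector should land near the vectors of suitable artists (a toy sketch with random vectors and invented names; EARS's actual model, graph, and scoring may differ):

```python
import numpy as np

def transe_score(user_vec, rel_vec, artist_vec):
    """TransE-style plausibility score: the smaller the distance between
    (user + relation) and the artist embedding, the stronger the
    recommendation, so we negate the norm to rank by 'higher is better'."""
    return -np.linalg.norm(user_vec + rel_vec - artist_vec)

# Toy embeddings standing in for vectors learned from a knowledge graph.
rng = np.random.default_rng(0)
dim = 8
user = rng.normal(size=dim)
likes = rng.normal(size=dim)            # the "likes" relation embedding
artists = {name: rng.normal(size=dim) for name in ["A", "B", "C"]}

# Rank artists by plausibility of the (user, likes, artist) triple.
ranked = sorted(artists,
                key=lambda a: transe_score(user, likes, artists[a]),
                reverse=True)
```

Explainability then comes almost for free: the graph paths connecting a user to a highly ranked artist (shared genres, collaborations) can be surfaced as the recommendation's rationale.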
Smart Cities require reliable means for managing installations that offer essential services to the citizens. In this paper we focus on the problem of evacuation of smart buildings in case of emergencies. In particular, we present an abstract architecture for situation-aware evacuation guidance systems in smart buildings, describe its key modules in detail, and provide some concrete examples of its structure and dynamics.
The increasing variety of combinations of different building technology components offers high potential for energy and cost savings in today's buildings. However, in most cases this potential is not yet fully exploited due to the lack of intelligent supervisory control systems required to manage the complexity of the resulting overall systems. In this article, we present the implementation of a mixed-integer nonlinear model predictive control approach as a smart real-time building energy management system. The presented methodology is based on a forward-looking optimization of the overall energy costs. It takes into account energy demand forecasts and varying electricity market prices. We achieve real-time capability of the controller by applying a decomposition approach, which approximates the optimal solution of the underlying mixed-integer optimal control problem by convexification and rounding of the relaxed solution. The quality of the suboptimal solution is evaluated by comparison with the globally optimal solution obtained by dynamic programming. Based on a real-world scenario, we demonstrate that using the real-time capable mixed-integer nonlinear model predictive control approach in a building control system leads to savings of 16% in total operating costs and 13% in primary energy compared to the state-of-the-art control strategy, without any loss of comfort for the residents.
Mixed-integer NMPC for real-time supervisory energy management control in residential buildings
(2023)
In recent years, building energy supply and distribution systems have become more complex, with an increasing number of energy generators, stores, flows, and possible combinations of operating modes. This poses challenges for supervisory control, especially when balancing the conflicting goals of maximizing comfort while minimizing costs and emissions to contribute to global climate protection objectives. Mixed-integer nonlinear model predictive control is a promising approach for intelligent real-time control that is able to properly address the specific characteristics and restrictions of building energy systems. We present a strategy that utilizes a decomposition approach, combining partial outer convexification with the Switch-Cost Aware Rounding procedure to handle switching behavior and operating time constraints of building components in real-time. The efficacy is demonstrated through practical applications in a single-family home with a combined heat and power unit and in a multi-family apartment complex with 18 residential units. Simulation studies show high correspondence to globally optimal solutions with significant cost savings potential of around 19%.
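The decomposition described in the two abstracts above first relaxes the binary on/off decisions and then rounds the relaxed solution. Plain sum-up rounding, the simplest relative of the Switch-Cost Aware Rounding procedure named in the text, can be sketched as follows (our own illustration, not the authors' code):

```python
def sum_up_rounding(relaxed, dt=1.0):
    """Sum-up rounding: convert a relaxed on/off profile alpha(t) in [0, 1]
    into a binary schedule whose accumulated activation time tracks the
    relaxed one. Switch-cost-aware variants additionally penalize or
    constrain the number of on/off switches."""
    binary, acc = [], 0.0
    for a in relaxed:
        acc += a * dt                       # integral of the relaxed control
        # switch on if the accumulated deficit reaches half a time step
        on = 1 if acc - sum(binary) * dt >= 0.5 * dt else 0
        binary.append(on)
    return binary
```

For a relaxed profile like `[0.6, 0.6, 0.6]` the rounded schedule alternates so that its integral stays within half a step of the relaxed integral; on finer grids the approximation error vanishes, which is what makes the rounded control nearly optimal.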
Research information, i.e., data about research projects, organisations, researchers or research outputs such as publications or patents, is spread across the web, usually residing in institutional and personal web pages or in semi-open databases and information systems. While there exists a wealth of unstructured information, structured data is limited and often exposed following proprietary or less-established schemas and interfaces. Therefore, a holistic and consistent view on research information across organisational and national boundaries is not feasible. On the other hand, web crawling and information extraction techniques have matured throughout the last decade, allowing for automated approaches to harvesting, extracting and consolidating research information into a more coherent knowledge graph. In this work, we give an overview of the current state of the art in research information sharing on the web and present initial ideas towards a more holistic approach for bootstrapping research information from available web sources.