Objective
We aimed to investigate the proportion of young patients not returning to work (NRTW) at 1 year after ischemic stroke (IS) and during follow-up, and clinical factors associated with NRTW.
Methods
Patients from the Helsinki Young Stroke Registry with an IS occurring in the years 1994–2007, who were at paid employment within 1 year before IS, and with NIH Stroke Scale score ≤15 points at hospital discharge, were included. Data on periods of payment came from the Finnish Centre for Pensions, and death data from Statistics Finland. Multivariate logistic regression analyses assessed factors associated with NRTW 1 year after IS, and lasagna plots visualized the proportion of patients returning to work over time.
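The adjusted analysis described above corresponds to a standard multivariable logistic regression. The following is a minimal sketch in Python (statsmodels) with hypothetical column names such as nrtw_1y and nihss_admission; it is not the registry's actual analysis code.

```python
# Minimal sketch, assuming a pandas DataFrame with hypothetical column names;
# this is not the registry's actual analysis code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("registry_analysis_set.csv")  # hypothetical export of the analysis dataset

model = smf.logit(
    "nrtw_1y ~ age + C(sex) + C(socioeconomic_status) + nihss_admission"
    " + C(etiology) + C(aphasia_grade) + C(paresis_grade) + C(visual_field_deficit)",
    data=df,
).fit()
print(model.summary())   # exponentiated coefficients give adjusted odds ratios
```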
Results
We included a total of 769 patients, of whom 289 (37.6%) were not working at 1 year, 323 (42.0%) at 2 years, and 361 (46.9%) at 5 years from IS. When adjusted for age, sex, socioeconomic status, and NIH Stroke Scale score at admission, factors associated with NRTW at 1 year after IS were large anterior strokes, strokes caused by large artery atherosclerosis, high-risk sources of cardioembolism, and rare causes other than dissection compared with undetermined cause, moderate to severe aphasia vs no aphasia, mild and moderate to severe limb paresis vs no paresis, and moderate to severe visual field deficit vs no deficit.
Conclusions
NRTW is a frequent adverse outcome after IS in young adults with mild to moderate IS. Clinical variables available during acute hospitalization may allow prediction of NRTW.
Nitric oxide adsorption on a Au(100) single crystal has been investigated to identify the type of adsorption, the adsorption site, and the orientation and alignment of the adsorbed NO relative to the surface. This was done using a combination of 3D-surface velocity map imaging, near-ambient pressure X-ray photoelectron spectroscopy, and density functional theory. NO was observed to be molecularly adsorbed on gold at ~200 K. Very narrow angular distributions and cold rotational distributions of photodesorbed NO indicate that NO adsorbs on high-symmetry sites on the Au crystal, with the N–O bond axis close to the surface normal. Our density functional theory calculations show that NO preferentially adsorbs on the symmetric bridge (2f) site, which ensures efficient overlap of the NO π* orbital with the orbitals on the two neighbouring Au atoms, and with the N–O bond axis aligned along the surface normal, in agreement with our conclusions from the rotational state distributions. The combination of XPS, which reveals the orientation of NO on gold, with 3D-surface velocity map imaging and density functional theory thus allowed us to determine the adsorption site, orientation and alignment of nitric oxide adsorbed on Au(100).
Agility is considered the silver bullet for survival in the VUCA world. However, many organisations are afraid of endangering their ISO 9001 certificate when introducing agile processes. A joint research project of the University of Applied Sciences and Arts Hannover and the DGQ has set itself the goal of providing more security in this area. The findings were based on interviews with managers and team members from various organisations of different sizes and industries working in an agile manner as well as on common audit practices and a literature analysis. The outcome presents a clear distinction of agility from flexibility as well as useful guidelines for the integration of agile processes in QM systems - for QM practitioners and auditors alike.
Integrated Risk and Opportunity Management (IROM) goes far beyond what is found in organizations today. However, it offers the best opportunity not only to keep pace with the VUCA world, but to actually profit from it. Accordingly, the introduction of opportunity-based thinking in addition to risk-based thinking is part of the design specification for ISO 9000 and ISO 9001. The prerequisite for the successful design of an IROM is the individual definition, control and integration of risk and opportunity management processes, considering eight success factors, the "8 C". Top management benefits directly from the result: better, coordinated decision memos enable faster and more appropriate decisions.
Background
Uncomplicated urinary tract infections (UTI) are common in general practice and usually treated with antibiotics. This contributes to increasing resistance rates of uropathogenic bacteria. A previous trial showed a reduction of antibiotic use in women with UTI by initial symptomatic treatment with ibuprofen. However, this treatment strategy is not suitable for all women equally. Arctostaphylos uva-ursi (UU, bearberry extract arbutin) is a potential alternative treatment. This study aims at investigating whether an initial treatment with UU in women with UTI can reduce antibiotic use without significantly increasing the symptom burden or rate of complications.
Methods
This is a double-blind, randomized, and controlled comparative effectiveness trial. Women between 18 and 75 years with suspected UTI and at least two of the symptoms dysuria, urgency, frequency or lower abdominal pain will be assessed for eligibility in general practice and enrolled into the trial. Participants will receive either a defined daily dose of 3 × 2 arbutin 105 mg for 5 days (intervention) or fosfomycin 3 g once (control). Antibiotic therapy will be provided in the intervention group only if needed, i.e. for women with worsening or persistent symptoms. Two co-primary outcomes are the number of all antibiotic courses regardless of the medical indication from day 0–28, and the symptom burden, defined as a weighted sum of the daily total symptom scores from day 0–7. The trial result is considered positive if superiority of initial treatment with UU is demonstrated with reference to the co-primary outcome number of antibiotic courses and non-inferiority of initial treatment with UU with reference to the co-primary outcome symptom burden.
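To make the two co-primary analyses concrete, the sketch below illustrates a superiority comparison of antibiotic courses and a non-inferiority check on symptom burden. The simulated data, the test choices, and the margin DELTA are hypothetical and are not taken from the trial's statistical analysis plan.

```python
# Illustrative sketch only, not the trial's analysis plan.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
courses_uu = rng.poisson(0.6, 120)    # hypothetical antibiotic courses, UU arm
courses_fos = rng.poisson(1.0, 120)   # hypothetical antibiotic courses, fosfomycin arm
burden_uu = rng.normal(22, 8, 120)    # hypothetical weighted symptom sums, day 0-7
burden_fos = rng.normal(20, 8, 120)

# Superiority on antibiotic courses: fewer courses expected in the UU arm.
print(stats.mannwhitneyu(courses_uu, courses_fos, alternative="less"))

# Non-inferiority on symptom burden: the upper 95% confidence bound of the
# mean difference (UU - fosfomycin) must stay below a pre-specified margin.
DELTA = 5.0                           # hypothetical non-inferiority margin
diff = burden_uu.mean() - burden_fos.mean()
se = np.sqrt(burden_uu.var(ddof=1) / len(burden_uu)
             + burden_fos.var(ddof=1) / len(burden_fos))
upper = diff + stats.norm.ppf(0.975) * se
print("non-inferior" if upper < DELTA else "non-inferiority not shown")
```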
Discussion
The trial’s aim is to investigate whether initial treatment with UU is a safe and effective alternative treatment strategy in women with UTI. In that case, the results might change the existing treatment strategy in general practice by promoting delayed prescription of antibiotics and a reduction of antibiotic use in primary care.
Integrating distributional and lexical information for semantic classification of words using MRMF
(2016)
Semantic classification of words using distributional features is usually based on the semantic similarity of words. We show on two different datasets that a trained classifier using the distributional features directly gives better results. We use Support Vector Machines (SVM) and Multirelational Matrix Factorization (MRMF) to train classifiers. Both give similar results. However, MRMF, which had not previously been used for semantic classification with distributional features, can easily be extended with further matrices containing additional information from different sources on the same problem. We demonstrate the effectiveness of the novel approach by including information from WordNet. Thus we show that MRMF provides an interesting approach for building semantic classifiers that (1) gives better results than unsupervised approaches based on vector similarity, (2) gives results similar to other supervised methods, and (3) can naturally be extended with other sources of information in order to improve the results.
The CogALex-V Shared Task provides two datasets that consist of pairs of words along with a classification of their semantic relation. The dataset for the first task distinguishes only between related and unrelated, while the second dataset distinguishes several types of semantic relations. A number of recent papers propose to construct a feature vector representing a pair of words by applying a simple pairwise operation to all elements of the two word vectors. Subsequently, the pairs can be classified by training any classification algorithm on these vectors. In the present paper we apply this method to the provided datasets. We find that the results are no better than those of the given simple baseline. We conclude that the results of the investigated method depend strongly on the type of data to which it is applied.
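As a concrete illustration of this pair-construction idea, the two word vectors of a pair are combined elementwise and the resulting vector is fed to an off-the-shelf classifier. This is a sketch under assumed inputs, not the shared-task code.

```python
# Sketch only: elementwise combination of two word vectors into a pair vector,
# then supervised classification of the pair. File names and shapes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pair_features(v1, v2, op="mult"):
    """Combine two word vectors elementwise (multiplication or addition)."""
    return v1 * v2 if op == "mult" else v1 + v2

X1 = np.load("first_word_vectors.npy")    # hypothetical (n_pairs, dim) array
X2 = np.load("second_word_vectors.npy")   # hypothetical (n_pairs, dim) array
y = np.load("relation_labels.npy")        # e.g. 1 = related, 0 = unrelated

X_pairs = pair_features(X1, X2, op="mult")
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X_pairs, y, cv=5).mean())
```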
In distributional semantics words are represented by aggregated context features. The similarity of words can be computed by comparing their feature vectors. Thus, we can predict whether two words are synonymous or similar with respect to some other semantic relation. We will show on six different datasets of pairs of similar and non-similar words that a supervised learning algorithm on feature vectors representing pairs of words outperforms cosine similarity between vectors representing single words. We compared different methods to construct a feature vector representing a pair of words. We show that simple methods like pairwise addition or multiplication give better results than a recently proposed method that combines different types of features. The semantic relation we consider is relatedness of terms in thesauri for intellectual document classification. Thus our findings can directly be applied for the maintenance and extension of such thesauri. To the best of our knowledge this relation was not considered before in the field of distributional semantics.
For indexing archived documents the Dutch Parliament uses a specialized thesaurus. For good results in full-text retrieval and automatic classification it turns out to be important to add more synonyms to the existing thesaurus terms. In the present work we investigate the possibilities of finding synonyms for terms of the parliament's thesaurus automatically. We propose to use distributional similarity (DS). In an experiment with pairs of synonyms and non-synonyms we train and test a classifier using distributional similarity and string similarity. Using ten-fold cross-validation we were able to classify 75% of the pairs of a set of 6000 word pairs correctly.
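A minimal sketch of such a two-feature classifier (distributional cosine similarity plus a simple string similarity, evaluated with ten-fold cross-validation) could look as follows; the input files and the choice of logistic regression are assumptions, not the paper's implementation.

```python
# Illustrative sketch: classify candidate synonym pairs from two features.
import numpy as np
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def string_sim(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical inputs: term pairs, their distributional vectors, and gold labels.
terms = np.load("term_pairs.npy", allow_pickle=True)   # shape (n, 2), strings
vecs = np.load("pair_vectors.npy")                     # shape (n, 2, dim)
y = np.load("labels.npy")                              # 1 = synonyms, 0 = not

X = np.array([[cosine(vecs[i, 0], vecs[i, 1]), string_sim(*terms[i])]
              for i in range(len(y))])
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=10).mean())
```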
Background: After kidney transplantation, immunosuppressive therapy causes impaired cellular immune defense, leading to an increased risk of viral complications. Trough level monitoring of immunosuppressants is insufficient to estimate the individual intensity of immunosuppression. We have already shown that virus-specific T cells (Tvis) correlate with control of virus replication as well as with the intensity of immunosuppression. The multicentre IVIST01 trial aims to prove that additional steering of immunosuppressive and antiviral therapy by Tvis levels leads to better graft function by avoidance of over-immunosuppression (for example, viral infections) and drug toxicity (for example, nephrotoxicity).
Methods/design: The IVIST trial starts 4 weeks after transplantation. Sixty-four pediatric kidney recipients are randomized either to a non-intervention group that is only treated conservatively or to an intervention group with additional monitoring by Tvis. The randomization is stratified by centre and cytomegalovirus (CMV) prophylaxis. In both groups the immunosuppressive medication (cyclosporine A and everolimus) is adjusted to the same target range of trough levels. In the non-intervention group the immunosuppressive therapy (cyclosporine A and everolimus) is steered only by classical trough level monitoring, and the antiviral therapy of a CMV infection is performed according to a standard protocol. In contrast, in the intervention group the dose of immunosuppressants is individually adapted according to Tvis levels as a direct measure of the intensity of immunosuppression, in addition to classical trough level monitoring. In case of CMV infection or reactivation the antiviral management is based on the individual CMV-specific immune defense assessed by the CMV-Tvis level. The primary endpoint of the study is the glomerular filtration rate 2 years after transplantation; secondary endpoints are the number and severity of viral infections and the incidence of side effects of immunosuppressive and antiviral drugs.
Discussion: This IVIST01-trial will answer the question whether the new concept of steering immunosuppressive and antiviral therapy by Tvis levels leads to better future graft function. In terms of an effect-related drug monitoring, the study design aims to realize a personalization of immunosuppressive and antiviral management after transplantation. Based on the IVIST01-trial, immunomonitoring by Tvis might be incorporated into routine care after kidney transplantation.
The subject of this work is the investigation of universal scaling laws which are observed in coupled chaotic systems. Progress is made by replacing the chaotic fluctuations in the perturbation dynamics by stochastic processes.
First, a continuous-time stochastic model for weakly coupled chaotic systems is introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck equation, scaling relations are derived which are confirmed by results of numerical simulations.
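The effect can be illustrated numerically. The sketch below is an independent illustration rather than code from this work: it couples two logistic maps and estimates both Lyapunov exponents via repeated QR decomposition of the Jacobian products, so that their splitting can be inspected for small coupling strengths.

```python
# Illustrative sketch: Lyapunov exponents of two diffusively coupled logistic maps.
import numpy as np

def lyapunov_pair(eps, n=50_000, burn=1_000, seed=0):
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(0.1, 0.9, 2)
    f = lambda u: 4.0 * u * (1.0 - u)          # fully chaotic logistic map
    df = lambda u: 4.0 - 8.0 * u
    Q = np.eye(2)
    sums = np.zeros(2)
    for i in range(n + burn):
        J = np.array([[(1 - eps) * df(x), eps * df(y)],
                      [eps * df(x), (1 - eps) * df(y)]])
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
        Q, R = np.linalg.qr(J @ Q)             # QR step of the standard algorithm
        if i >= burn:
            sums += np.log(np.abs(np.diag(R)))
    return sums / n

for eps in (1e-6, 1e-4, 1e-2):
    l1, l2 = lyapunov_pair(eps)
    print(f"eps={eps:g}: lambda_1={l1:.4f}  lambda_2={l2:.4f}  splitting={l1 - l2:.4f}")
```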
Next, the new effect of avoided crossing of Lyapunov exponents of weakly coupled disordered chaotic systems is described, which is qualitatively similar to the energy level repulsion in quantum systems. Using the scaling relations obtained for the coupling sensitivity of chaos, an asymptotic expression for the distribution function of small spacings between Lyapunov exponents is derived and compared with results of numerical simulations.
Finally, the synchronization transition in strongly coupled spatially extended chaotic systems is shown to resemble a continuous phase transition, with the coupling strength and the synchronization error as control and order parameter, respectively. Using results of numerical simulations and theoretical considerations in terms of a multiplicative noise partial differential equation, the universality classes of the observed two types of transition are determined (Kardar-Parisi-Zhang equation with saturating term, directed percolation).
The network security framework VisITMeta allows the visual evaluation and management of security event detection policies. By means of a "what-if" simulation the sensitivity of policies to specific events can be tested and adjusted. This paper presents the results of a user study for testing the usability of the approach by measuring the correct completion of given tasks as well as the user satisfaction by means of the system usability scale.
Intrusion detection systems and other network security components detect security-relevant events based on policies consisting of rules. If an event turns out as a false alarm, the corresponding policy has to be adjusted in order to reduce the number of false positives. Modified policies, however, need to be tested before going into productive use. We present a visual analysis tool for the evaluation of security events and related policies which integrates data from different sources using the IF-MAP specification and provides a “what-if” simulation for testing modified policies on past network dynamics. In this paper, we will describe the design and outcome of a user study that will help us to evaluate our visual analysis tool.
For anomaly-based intrusion detection in computer networks, data cubes can be used for building a model of the normal behavior of each cell. During inference an anomaly score is calculated based on the deviation of cell metrics from the corresponding normality model. A visualization approach is shown that combines different types of diagrams and charts with linked user interaction for filtering of data.
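A minimal sketch of the cell-wise normality idea, with hypothetical cube dimensions (hour of day and service) and a simple mean/standard-deviation model per cell, might look like this:

```python
# Sketch only: per-cell normality model and standardized anomaly score.
import numpy as np
import pandas as pd

# Hypothetical flow records with a per-cell metric (e.g. connection count).
hist = pd.read_csv("flows_history.csv")    # columns: hour, service, count
new = pd.read_csv("flows_current.csv")     # same columns, current observations

# Normality model: mean and standard deviation of the metric per cube cell.
model = hist.groupby(["hour", "service"])["count"].agg(["mean", "std"])

# Anomaly score: deviation of the current metric from the cell's normality model.
scored = new.join(model, on=["hour", "service"])
scored["anomaly_score"] = (scored["count"] - scored["mean"]) / scored["std"].replace(0, np.nan)
print(scored.sort_values("anomaly_score", ascending=False).head())
```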
Introduction:
Human Immunodeficiency Virus (HIV) infection remains a prevalent co-morbidity among fracture patients. Few studies have investigated the role of exercise interventions in preventing bone demineralization in people who have fractures and HIV. If exposed to exercise, HIV-infected individuals may experience improved bone health outcomes (bone mineral density, BMD), function, and quality of life (QoL). The study will aim to assess the impact of home-based exercises on bone mineral density, functional capacity, QoL, and selected serological markers of health in HIV infection among Nigerians and South Africans.
Methods and design:
The study is an assessor-blinded randomized controlled trial. Patients managed with internal and external fixation for femoral shaft fracture at the study sites will be recruited to participate in the study. The participants will be recruited 2 weeks post-discharge at the follow-up clinic with the orthopaedic surgeon. The study population will consist of all persons with femoral fracture, HIV-positive or HIV-negative (HIV status medically confirmed), aged 18 to 60 years, attending the above-named health facilities. For the HIV-positive participants, a documented positive HIV result as well as a history of follow-up at the HIV treatment and care centre will be required. The developed home-based exercise programme will be implemented in the experimental group, while the control group continues with the usual rehabilitation programme. The primary outcome measures will be function, gait, bone mineral density, physical activity, and QoL.
Discussion:
The proposed trial will compare the effect of a home-based physical exercise-training programme in the management of femoral fracture to the usual physiotherapy management programmes with specific outcomes of bone mineral density, function, and inflammatory markers.
Background: Autism Spectrum Disorder (ASD) is characterized by impairments in social communication, restricted and repetitive behaviors, impaired language development, and narrow patterns of interest or activity. It comprises a group of complex neurodevelopmental syndromes with diverse phenotypes that reveal considerable etiological and clinical heterogeneity, and it is considered one of the most heritable disorders (heritability over 90%). Genetic, epigenetic, and environmental factors play a role in the development of ASD.
Aim: This study was designed to investigate the extent of DNA damage in parents of autistic children by treating peripheral blood mononuclear cells (PBMCs) with bleomycin and hydrogen peroxide (H2O2).
Methods: Peripheral blood mononuclear cells (PBMCs) were isolated by the Ficoll method and treated with a specific concentration of bleomycin and H2O2 for 30 min and 5 min, respectively. Then, the degree of DNA damage was analyzed by the alkaline comet assay or single cell gel electrophoresis (SCGE), an effective way to measure DNA fragmentation in eukaryotic cells.
Results: Our findings revealed significantly increased DNA damage in parents of affected children compared to the control group, which may indicate an impaired DNA repair system. Furthermore, our study showed a significant association between fathers' occupations involving exposure to environmental factors, as well as family marriage, and ASD in offspring.
Conclusion: Our results suggested that the influence of environmental factors on parents of autistic children may affect the development of autistic disorder in their offspring. Subsequently, based on our results, investigating the effect of environmental factors on the amount of DNA damage in parents with affected children requires more studies.
Objective
The study’s objective was to assess factors contributing to the use of smart devices by general practitioners (GPs) and patients in the health domain, while specifically addressing the situation in Germany, and to determine whether, and if so, how both groups differ in their perceptions of these technologies.
Methods
GPs and patients of resident practices in the Hannover region, Germany, were surveyed between April and June 2014. A total of 412 GPs in this region were invited by email to participate via an electronic survey, with 50 GPs actually doing so (response rate 12.1%). For surveying the patients, eight regional resident practices were visited by study personnel (once each). Every second patient arriving there (inclusion criteria: of age, fluent in German) was asked to take part (paper-based questionnaire). One hundred and seventy patients participated; 15 patients who did not give consent were excluded.
Results
The majority of the participating patients (68.2%, 116/170) and GPs (76%, 38/50) owned mobile devices. Of the patients, 49.9% (57/116) already made health-related use of mobile devices; 95% (36/38) of the participating GPs used them in a professional context. For patients, age (P < 0.001) and education (P < 0.001) were significant factors, but not gender (P > 0.99). For doctors, neither age (P = 0.73), professional experience (P > 0.99) nor gender (P = 0.19) influenced usage rates. For patients, the primary use case was obtaining health (service)-related information. For GPs, interprofessional communication and retrieving information were in the foreground. There was little app-related interaction between both groups.
Conclusions
GPs and patients use smart mobile devices to serve their specific interests. However, the full potentials of mobile technologies for health purposes are not yet being taken advantage of. Doctors as well as other care providers and the patients should work together on exploring and realising the potential benefits of the technology.
The properties of carbon nanostructures are determined by the structure and orientation of the graphitic domains formed during pyrolysis of the carbon precursors. In this work, we systematically investigated the impact of creep stress during the stabilization process on the cyclization and molecular orientation of polyacrylonitrile, as well as on the graphitized structure after high-temperature carbonization. To this end, polyacrylonitrile (PAN) is electrospun and then stabilized with and without application of creep stress at different temperatures. The effect of creep stress on cyclization was monitored via Fourier transform IR spectroscopy (FTIR), and it was found that the degree of cyclization varies with the application of creep stress during the initial stages of cyclization at low temperatures (190 °C and 210 °C), in contrast to cyclization done at a higher temperature (230 °C). The Herman molecular orientation factor was evaluated by polarized FTIR for PAN nanofibers cyclized with and without creep stress at 230 °C for 10 h. Subsequently, carbonization was performed at 1000 °C and 1200 °C for nanofibers cyclized at 230 °C for 10 h. Our results from XRD and Raman spectroscopy show that the degree of graphitization and the ordering of graphitic domains were enhanced for PAN nanofibers that were creep stressed during the cyclization process, even though PAN nanofibers cyclized with and without creep stress showed the same amount of cyclized material. This increased degree of graphitization can be traced to the application of creep stress during the stabilization process, which evidently favors the formation of sp2-hybridized carbon planes in the carbonization process. This finding highlights the impact of mechanical stress by linking the cyclization of PAN nanofibers to graphitization.
Our results will pave the way for a deeper understanding of mechano-chemical processes to fabricate well-aligned graphitic domains which improves the mechanical and electrical properties of CNFs.
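For reference, the Herman orientation factor mentioned above is conventionally defined as (standard definition, not reproduced from this work)

\[
  f = \frac{3\,\langle \cos^{2}\varphi \rangle - 1}{2},
\]

where φ is the angle between the molecular chain axis and the fiber axis; f = 1 indicates perfect alignment parallel to the fiber axis, f = 0 random orientation, and f = -1/2 perpendicular alignment.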
Improving the graphitic structure in carbon nanofibers (CNFs) is important for exploiting their potential in mechanical, electrical and electrochemical applications. Typically, the synthesis of carbon fibers with a highly graphitized structure demands a high temperature of almost 2500 °C. Furthermore, to achieve an improved graphitic structure, the stabilization of a precursor fiber has to be assisted by the presence of tension in order to enhance the molecular orientation. Keeping this in view, herein we report on the fabrication of graphene nanoplatelets (GNPs) doped carbon nanofibers using electrospinning followed by oxidative stabilization and carbonization. The effect of doping GNPs on the graphitic structure was investigated by carbonizing them at various temperatures (1000 °C, 1200 °C, 1500 °C and 1700 °C). Additionally, a stabilization was achieved with and without constant creep stress (only shrinkage stress) for both pristine and doped precursor nanofibers, which were eventually carbonized at 1700 °C. Our findings reveal that the GNPs doping results in improving the graphitic structure of polyacrylonitrile (PAN). Further, in addition to the templating effect during the nucleation and growth of graphitic crystals, the GNPs encapsulated in the PAN nanofiber matrix act in-situ as micro clamp units performing the anchoring function by preventing the loss of molecular orientation during the stabilization stage, when no external tension is applied to nanofiber mats. The templating effect of the entire graphitization process is reflected by an increased electrical conductivity along the fibers. Simultaneously, the electrical anisotropy is reduced, i.e., the GNPs provide effective pathways with improved conductivity acting like bridges between the nanofibers resulting in an improved conductivity across the fiber direction compared to the pristine PAN system.
The reactivity of graphene at its boundary region has been imaged using non-linear spectroscopy to address the controversy over whether the terraces of graphene or its edges are more reactive. Graphene was functionalised with phenyl groups, and we subsequently scanned our vibrational sum-frequency generation setup from the functionalised graphene terraces across the edges. A greater phenyl signal is clearly observed at the edges, showing evidence of increased reactivity in the boundary region. We estimate an upper limit of 1 mm for the width of the CVD graphene boundary region.
We report the unambiguous detection of phenyl groups covalently attached to functionalised graphene using non-linear spectroscopy. Sum-frequency generation was employed to probe graphene on a gold surface after chemical functionalisation using a benzene diazonium salt. We observe a distinct resonance at 3064 cm⁻¹ which can clearly be assigned to an aromatic C–H stretch by comparison with a self-assembled monolayer on a gold substrate formed from benzenethiol. Not only does sum-frequency generation spectroscopy allow one to characterise functionalised graphene with higher sensitivity and much better specificity than many other spectroscopic techniques, but it also opens up the possibility to assess the coverage of graphene with functional groups, and to determine their orientation relative to the graphene surface.
Digital data on tangible and intangible cultural assets is an essential part of daily life, communication and experience. It has a lasting influence on the perception of cultural identity as well as on the interactions between research, the cultural economy and society. Throughout the last three decades, many cultural heritage institutions have contributed a wealth of digital representations of cultural assets (2D digital reproductions of paintings, sheet music, 3D digital models of sculptures, monuments, rooms, buildings), audio-visual data (music, film, stage performances), and procedural research data such as encoding and annotation formats. The long-term preservation and FAIR availability of research data from the cultural heritage domain is fundamentally important, not only for future academic success in the humanities but also for the cultural identity of individuals and society as a whole. Up to now, no coordinated effort for professional research data management on a national level exists in Germany. NFDI4Culture aims to fill this gap and create a user-centered, research-driven infrastructure that will cover a broad range of research domains from musicology, art history and architecture to performance, theatre, film, and media studies.
The research landscape addressed by the consortium is characterized by strong institutional differentiation. Research units in the consortium's community of interest comprise university institutes, art colleges, academies, galleries, libraries, archives and museums. This diverse landscape is also characterized by an abundance of research objects, methodologies and a great potential for data-driven research. In a unique effort carried out by the applicant and co-applicants of this proposal and ten academic societies, this community is interconnected for the first time through a federated approach that is ideally suited to the needs of the participating researchers. To promote collaboration within the NFDI, to share knowledge and technology and to provide extensive support for its users have been the guiding principles of the consortium from the beginning and will be at the heart of all workflows and decision-making processes. Thanks to these principles, NFDI4Culture has gathered strong support ranging from individual researchers to high-level cultural heritage organizations such as UNESCO, the International Council of Museums, the Open Knowledge Foundation and Wikimedia. On this basis, NFDI4Culture will take innovative measures that promote a cultural change towards a more reflective and sustainable handling of research data and at the same time boost qualification and professionalization in data-driven research in the domain of cultural heritage. This will create a long-lasting impact on science, cultural economy and society as a whole.
Background: Health information systems (HIS) are one of the most important areas for biomedical and health informatics. In order to deal professionally with HIS, well-educated informaticians are needed. For this reason, an international course was established in 2001: the Frank – van Swieten Lectures on Strategic Information Management of Health Information Systems.
Objectives: To report on the Frank – van Swieten Lectures and on our students' feedback on this course during the last 16 years, to summarize our lessons learned, and to make recommendations for such international courses on HIS.
Methods: The basic concept of the Frank – van Swieten Lectures is to teach the theoretical background in local lectures, to organize practical exercises on modelling sub-information systems of the respective local HIS, and finally to conduct the Joint Three Days, an international meeting where the resulting models are presented and compared.
Results: During the last 16 years, the Universities of Amsterdam, Braunschweig, Heidelberg/Heilbronn, Leipzig as well as UMIT were involved in running this course. Overall, 517 students from these universities participated. Our students' feedback was clearly positive.
The Joint Three Days of the Frank – van Swieten Lectures, where at the end of the course all students can meet, turned out to be an important component of this course. Based on the last 16 years, we recommend common teaching materials, agreement on equivalent clinical areas for the exercises, support of group building of international student groups, motivation of using a collaboration platform, ensuring quality management of the course, addressing different levels of knowledge of the students, and ensuring sufficient funding for joint activities.
Conclusions: Although associated with considerable additional efforts, we can clearly recommend establishing such international courses on HIS, such as the Frank – van Swieten Lectures.
Background: In Germany, hospice and palliative care is well covered through inpatient, outpatient, and home-based care services. It is unknown if, and to what extent, there is a need for additional day care services to meet the specific needs of patients and caregivers.
Methods: Two day hospices and two palliative day care clinics were selected. In the first step, two managers from each facility (n = 8) were interviewed by telephone, using a semi-structured interview guide. In the second step, four focus groups were conducted, each with three to seven representatives of hospice and palliative care from the facilities’ hospice and palliative care networks. Interviews and focus groups were audio recorded, transcribed verbatim and analyzed using qualitative content analysis.
Results: The interviewed experts perceived day care services as providing additional patient and caregiver benefits. Specifically, the services were perceived to meet patient needs for social interaction and bundled treatments, especially for patients who did not fit into inpatient settings (due to, e.g., their young age or a lack of desire for inpatient admission). The services were also perceived to meet caregiver needs for support, providing short-term relief for the home care situation.
Conclusions: The results suggest that inpatient, outpatient, and home-based hospice and palliative care services do not meet the palliative care needs of all patients. Although the population that is most likely to benefit from day care services is assumed to be relatively small, such services may meet the needs of certain patient groups more effectively than other forms of care.
FID Civil Engineering, Architecture and Urbanism digital - A platform for science (BAUdigital)
(2022)
University Library Braunschweig (UB Braunschweig), University and State Library Darmstadt (ULB Darmstadt), TIB – Leibniz Information Centre for Technology and Natural Sciences and the Fraunhofer Information Centre for Planning and Building (Fraunhofer IRB) are jointly establishing a specialised information service (FID, "Fachinformationsdienst") for the disciplines of civil engineering, architecture and urbanism. The FID BAUdigital, which is funded by the German Research Foundation (DFG, "Deutsche Forschungsgemeinschaft"), will provide researchers working on digital design, planning and production methods in construction engineering with a joint information, networking and data exchange platform and support them with innovative services for documentation, archiving and publication in their data-based research.
Hadoop is a Java-based open source programming framework, which supports the processing and storage of large volumes of data sets in a distributed computing environment. On the other hand, an overwhelming majority of organizations are moving their big data processing and storing to the cloud to take advantage of cost reduction – the cloud eliminates the need for investing heavily in infrastructures, which may or may not be used by organizations. This paper shows how organizations can alleviate some of the obstacles faced when trying to make Hadoop run in the cloud.
Our work is motivated primarily by the lack of standardization in the area of Event Processing Network (EPN) models. We identify general requirements for such models. These requirements encompass the possibility to describe events in the real world, to establish temporal and causal relationships among the events, to aggregate the events, to organize the events into a hierarchy, to categorize the events into simple or complex, to create an EPN model in an easy and simple way and to use that model ad hoc. As the major contribution, this paper applies the identified requirements to the RuleCore model.
In this paper, five ontologies that include event concepts are described. The paper provides an overview and comparison of existing event models. The main criteria for comparison are the possibilities to model events with an extent in time and location and with participating objects; however, other factors are taken into account as well. The paper also shows an example of using ontologies in complex event processing.
OSGi in Cloud Environments
(2013)
With increasing complexity and scale, sufficient evaluation of Information Systems (IS) becomes a challenging and difficult task. Simulation modeling has proven to be a suitable and efficient methodology for evaluating IS and IS artifacts, provided it meets certain quality demands. However, existing research on simulation modeling quality focuses solely on quality in terms of accuracy and credibility, disregarding the role of additional quality aspects. Therefore, this paper proposes two design artifacts in order to ensure a holistic view on simulation quality. First, associated literature is reviewed in order to extract relevant quality factors in the context of simulation modeling, which can be used to evaluate the overall quality of a simulated solution before, during or after a given project. Second, the deduced quality factors are integrated into a quality assessment framework to provide structural guidance on the quality assessment procedure for simulation. In line with a Design Science Research (DSR) approach, we demonstrate the eligibility of both design artifacts by means of prototyping as well as an example case. Moreover, the assessment framework is evaluated and iteratively adjusted with the help of expert feedback.
The paper provides a comprehensive overview of modeling and pricing cyber insurance and includes clear and easily understandable explanations of the underlying mathematical concepts. We distinguish three main types of cyber risks: idiosyncratic, systematic, and systemic cyber risks. While for idiosyncratic and systematic cyber risks, classical actuarial and financial mathematics appear to be well-suited, systemic cyber risks require more sophisticated approaches that capture both network and strategic interactions. In the context of pricing cyber insurance policies, issues of interdependence arise for both systematic and systemic cyber risks; classical actuarial valuation needs to be extended to include more complex methods, such as concepts of risk-neutral valuation and (set-valued) monetary risk measures.
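As an illustration of the valuation approaches mentioned (standard textbook principles, not formulas quoted from the paper), an idiosyncratic cyber loss L may be priced with a classical premium principle, whereas interdependent risks call for a monetary risk measure ρ:

\[
  \pi_{\mathrm{EV}} = (1+\theta)\,\mathbb{E}[L],
  \qquad
  \pi_{\rho} = \rho(L), \quad \text{e.g. } \rho(L) = \operatorname{VaR}_{\alpha}(L)
  \ \text{or}\ \rho(L) = \mathbb{E}[L] + \beta\,\sigma(L),
\]

with safety loading θ, confidence level α and standard-deviation loading β.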
The objective of this study is to analyze noise patterns during 599 visceral surgical procedures. Considering work-safety regulations, we will identify immanent noise patterns during major visceral surgeries. Increased levels of noise are known to have negative health impacts. Based on a very fine-grained data collection over a year, this study will introduce a new procedure for visual representation of intra-surgery noise progression and pave new paths for future research on noise reduction in visceral surgery. Digital decibel sound-level meters were used to record the total noise in three operating theatres in one-second cycles over a year. These data were matched to archival data on surgery characteristics. Because surgeries inherently vary in length, we developed a new procedure to normalize surgery times to run cross-surgery comparisons. Based on this procedure, dBA values were adjusted to each normalized time point. Noise-level patterns are presented for surgeries contingent on important surgery characteristics: 16 different surgery types, operation method, day/night time point and operation complexity (complexity levels 1–3). This serves to cover a wide spectrum of day-to-day surgeries. The noise patterns reveal significant sound level differences of about 1 dBA, with the most common noise level being spread between 55 and 60 dBA. This indicates a sound situation in many of the surgeries studied likely to cause stress in patients and staff. Absolute and relative risks of meeting or exceeding 60 dBA differ considerably across operation types. In conclusion, the study reveals that maximum noise levels of 55 dBA are frequently exceeded during visceral surgical procedures. Especially complex surgeries show, on average, a higher noise exposure. Our findings warrant active noise management for visceral surgery to reduce potential negative impacts of noise on surgical performance and outcome.
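The time-normalization step can be sketched as follows: each surgery's one-second dBA series is interpolated onto a common relative-progress axis so that recordings of different lengths become point-wise comparable. The code is an illustrative sketch with assumed file names, not the study's actual pipeline.

```python
# Sketch only: resample per-second dBA series onto a 0..100% progress axis.
import numpy as np

def normalize_profile(dba_series, n_points=101):
    """Interpolate a per-second dBA series onto n_points of relative progress."""
    t = np.linspace(0.0, 1.0, num=len(dba_series))
    grid = np.linspace(0.0, 1.0, num=n_points)
    return np.interp(grid, t, dba_series)

# Hypothetical 1-Hz dBA recordings of different lengths.
surgeries = [np.load(f"surgery_{i}.npy") for i in range(3)]
profiles = np.vstack([normalize_profile(s) for s in surgeries])
mean_profile = profiles.mean(axis=0)        # average noise at each % of progress
print((mean_profile >= 60).mean())          # share of normalized time at/above 60 dBA
```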
High-performance firms typically have two features in common: (i) they produce in more than one country and (ii) they produce more than one product. In this paper, we analyze the internationalization strategies of multi-product firms. Guided by several new stylized facts, we develop a theoretical model to determine optimal modes of market access at the firm–product level. We find that the most productive firms sell core varieties via foreign direct investment and export products with intermediate productivity. Shocks to trade costs and technology affect the endogenous decision to export or produce abroad at the product level and, in turn, the relative productivity between parents and affiliates.
Complex Event Processing (CEP) has been established as a well-suited software technology for processing high-frequent data streams. However, intelligent stream-based systems must integrate stream data with semantical background knowledge. In this work, we investigate different approaches on integrating stream data and semantic domain knowledge. In particular, we discuss from a software engineering perspective two different architectures: an approach adding an ontology access mechanism to a common Continuous Query Language (CQL) is compared with C-SPARQL, a streaming extension of the RDF query language SPARQL.
All of us are aware of the changes in the information field during the last years. We all see the paradigm shift coming up and have some idea how it will challenge our profession in the future. But what will the road to excellence in the education of information specialists look like in the future? There are different models (new and old ones) for reorganising the structure of education:
* Integration
* Specialisation
* Step-by-step model
* Module system
* Network system / Combination model
The paper will present the current state of the discussion on building up a new curriculum at the Department of Information and Communication (IK) at the FH Hannover. Based on the mission statement of the department, »Education of information professionals is a part of the dynamic evolution of knowledge society«, the direction of change and the main goals will be presented. The different reorganisation models will be explained with their objectives, opportunities and forms of implementation. Some examples will show the ideas and tools for a first draft of a reconstruction plan to become fit for the future. This talk was held at the German-Dutch University Conference »Information Specialists for the 21st Century« at the Fachhochschule Hannover - University of Applied Sciences, Department of Information and Communication, October 14-15, 1999 in Hannover, Germany.
The aim of the podcast Digitization of Medicine is to interest a broader audience and, in particular, young women, in research and work in the field of medical informatics. This article presents the usage figures and discusses their significance for further research on the success of science communication. By 24/02/2022, a total of 24,351 downloads had been made. There were slightly more female than male listeners, and they tended to be younger. Despite the importance podcasts are gaining for science communication, little is known about the respective user group and further research is needed. In this context, this paper aims to help make the effectiveness of podcasts comparable.
Quartz-crystal microbalances (QCMs) are commercially available mass sensors which mainly consist of a quartz resonator oscillating at a characteristic frequency, which shifts when the mass changes due to surface binding of molecules. In addition to mass changes, the viscosity of gases or liquids in contact with the sensor shifts the resonance and also influences the quality factor (Q-factor). Typical biosensor applications demand operation in liquid environments, where viscous damping strongly lowers the Q-factor. For obtaining reliable measurements in liquid environments, excellent resonator control and signal processing are essential, but standard resonator circuits like the Pierce and Colpitts oscillators fail to establish stable resonances. Here we present a low-cost, compact and robust oscillator circuit comprising state-of-the-art commercially available surface-mount technology components, which stimulates the QCM's oscillation while also establishing a control loop regulating the applied voltage. Thereby the increased energy dissipation caused by strong viscous damping in liquid solutions can be compensated and oscillations are stabilized. The presented circuit is suitable for use in compact biosensor systems using custom-made miniaturized QCMs in microfluidic environments. As a proof of concept we used this circuit in combination with a customized microfabricated QCM in a microfluidic environment to measure the concentration of C-reactive protein (CRP) in buffer (PBS) down to concentrations as low as 5 μg mL⁻¹.
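For context, the mass sensitivity of a QCM is commonly described by the Sauerbrey relation (standard textbook form, not taken from this work):

\[
  \Delta f = -\,\frac{2 f_{0}^{2}}{A \sqrt{\rho_{q}\,\mu_{q}}}\;\Delta m,
\]

where f₀ is the fundamental resonance frequency, A the active electrode area, and ρ_q and μ_q the density and shear modulus of quartz; viscous loading in liquids additionally shifts and broadens the resonance, lowering the Q-factor.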
Research question: Rivalries in team sports are commonly conceptualized as a threat to the fans’ identity. Therefore, past research has mainly focused on the negative consequences. However, theoretical arguments and empirical evidence suggest that rivalry has both negative and positive effects on fans’ self-concept. This research develops and empirically tests a model which captures and integrates these dual effects of rivalry.
Research methods: Data were collected via an on-site survey at home games of eight German Bundesliga football teams (N = 571). Structural equation modeling provides strong support for the proposed model.
Results and findings: In line with previous research, the results show that rivalry threatens fans’ identity as reflected in lower public collective self-esteem in relation to supporters of the rival team. However, the results also show that there are crucial positive consequences, such as higher perceptions of public collective self-esteem in relation to supporters of non-rival opponents, perceived ingroup distinctiveness and ingroup cohesion. These positive effects are mediated through increases in disidentification with the rival and perceived reciprocity of rivalry.
Implications: We contribute to the literature by providing a more balanced view of one of team sports’ key phenomena. Our results indicate that the prevalent conceptualization of rivalry as an identity threat should be amended by the positive consequences. Our research also offers guidance for the promotion of rivalries, where the managerial focus should be on creating a perception that a rivalry is reciprocal.
Research question: In order to reduce fan aggression surrounding rivalry games, team sport organizations often try to placate fans by downplaying the importance of the game (e.g. ‘the derby is not a war’). Drawing on the intergroup conflict literature, this research derives dual identity statements and examines their effectiveness in reducing fan aggressiveness compared to the managerial practice of downplaying rivalry.
Research methods: Three field experimental studies (one face-to-face survey and two online surveys) tested the hypotheses. Established rivalries in the German soccer league Bundesliga served as the empirical setting of the studies. The data were analyzed using ANCOVA and linear regression analyses.
Results and findings: Dual identity statements reduce fan aggressiveness compared to both downplay statements and a no-statement control condition, independent of team identification and trait aggression. Importantly, the managerial practice of downplaying rivalry appears to be counterproductive. It produces even higher levels of fan aggressiveness than making no statement, an effect caused by psychological reactance.
Implications: Sport organizations should not alienate their fan base by attempting to play down the importance of rivalry, which is an integral part of fan identity. Instead, they should strengthen the supporters’ unique identity (as fans of a particular team) while at the same time facilitating identification with the rival at a superordinate level (e.g. as joint fans of a region).
Marketing, get ready to rumble — How rivalry promotes distinctiveness for brands and consumers
(2018)
Scholars typically advise brands to stay away from public conflict with competitors as research has focused on negative consequences - e.g., price wars, escalating hostilities, and derogation. This research distinguishes between rivalry between firms (inter-firm brand rivalry) and rivalry between consumers (inter-consumer brand rivalry). Four studies and six samples show both types of rivalry can have positive consequences for both firms and consumers. Inter-firm brand rivalry boosts perceived distinctiveness of competing brands independent of consumption, attitude, familiarity, and involvement. Inter-consumer brand rivalry increases consumer group distinctiveness, an effect mediated by brand identification and rival brand disidentification. We extend social identity theory by demonstrating that: 1) outside actors like firms can promote inter-consumer rivalry through inter-firm rivalry and 2) promoting such conflict can actually provide benefits to consumers as well as firms. The paper challenges the axiom “never knock the competition,” deriving a counter-intuitive way to accomplish one of marketing's premier objectives.
Social comparison theories suggest that ingroups are strengthened whenever important outgroups are weakened (e.g., by losing status or power). It follows that ingroups have little reason to help outgroups facing an existential threat. We challenge this notion by showing that ingroups can also be weakened when relevant comparison outgroups are weakened, which can motivate ingroups to strategically offer help to ensure the outgroups' survival as a highly relevant comparison target. In three preregistered studies, we showed that an existential threat to an outgroup with high (vs. low) identity relevance affected strategic outgroup helping via two opposing mechanisms. The potential demise of a highly relevant outgroup increased participants’ perceptions of ingroup identity threat, which was positively related to helping. At the same time, the outgroup’s misery evoked schadenfreude, which was negatively related to helping. Our research exemplifies a group's secret desire for strong outgroups by underlining their importance for identity formation.
According to the third-person effect or the influence of presumed media influence approach, the presumption that the media has strong effects on other people can affect individuals’ attitudes and behavior. For instance, if people believe in strong media influences on others, they are more likely to increase their communication activities or support demands for restrictions on media. A standardized online survey among German journalists (N = 960) revealed that the stronger the journalists perceive the political online influence on the public to be, the more frequently they contradict unwanted political views in their articles. Moreover, even journalists are more likely to approve of restrictions on the Internet’s political influence, the stronger they believe the effects of online media to be. The data reveal no connections between communication activities and demands for restrictions.
Enterprise apps on mobile devices typically need to communicate with other system components by consuming web services. Since most of the current mobile device platforms (such as Android) do not provide built-in features for consuming SOAP services, extensions have to be designed. Additionally in order to accommodate the typical enhanced security requirements of enterprise apps, it is important to be able to deal with SOAP web service security extensions on client side. In this article we show that neither the built-in SOAP capabilities for Android web service clients are sufficient for enterprise apps nor are the necessary security features supported by the platform as is. After discussing different existing extensions making Android devices SOAP capable we explain why none of them is really satisfactory in an enterprise context. Then we present our own solution which accommodates not only SOAP but also the WS-Security features on top of SOAP. Our solution heavily relies on code generation in order to keep the flexibility benefits of SOAP on one hand while still keeping the development effort manageable for software development. Our approach provides a good foundation for the implementation of other SOAP extensions apart from security on the Android platform as well. In addition our solution based on the gSOAP framework may be used for other mobile platforms in a similar manner.
Music streaming platforms offer music listeners an overwhelming choice of music. Therefore, users of streaming platforms need the support of music recommendation systems to find music that suits their personal taste. Currently, a new class of recommender systems based on knowledge graph embeddings promises to improve the quality of recommendations, in particular to provide diverse and novel recommendations. This paper investigates how knowledge graph embeddings can improve music recommendations. First, it is shown how a collaborative knowledge graph can be derived from open music data sources. Based on this knowledge graph, the music recommender system EARS (knowledge graph Embedding-based Artist Recommender System) is presented in detail, with particular emphasis on recommendation diversity and explainability. Finally, a comprehensive evaluation with real-world data is conducted, comparing different embeddings and investigating the influence of different types of knowledge.
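The core recommendation step of an embedding-based recommender can be sketched as follows; this is an illustrative nearest-neighbour ranking over knowledge-graph embeddings with assumed file names, not the actual EARS implementation.

```python
# Sketch only: rank candidate artists by cosine similarity to a user profile
# built from knowledge-graph embeddings of the user's liked artists.
import numpy as np

emb = np.load("artist_embeddings.npy")            # hypothetical (n_artists, dim) embeddings
artists = np.load("artist_names.npy", allow_pickle=True)
liked = [3, 17, 42]                               # hypothetical indices of liked artists

emb_n = emb / np.linalg.norm(emb, axis=1, keepdims=True)
profile = emb_n[liked].mean(axis=0)               # simple user profile vector
scores = emb_n @ profile                          # cosine similarity to all artists
scores[liked] = -np.inf                           # do not recommend already-liked artists
top = np.argsort(scores)[::-1][:10]
print(list(zip(artists[top], scores[top])))
```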
Smart Cities require reliable means for managing installations that offer essential services to the citizens. In this paper we focus on the problem of evacuation of smart buildings in case of emergencies. In particular, we present an abstract architecture for situation-aware evacuation guidance systems in smart buildings, describe its key modules in detail, and provide some concrete examples of its structure and dynamics.
The increasing variety of combinations of different building technology components offers a high potential for energy and cost savings in today's buildings. However, in most cases, this potential is not yet fully exploited due to the lack of intelligent supervisory control systems that are required to manage the complexity of the resulting overall systems. In this article, we present the implementation of a mixed-integer nonlinear model predictive control approach as a smart real-time building energy management system. The presented methodology is based on a forward-looking optimization of the overall energy costs. It takes into account energy demand forecasts and varying electricity market prices. We achieve real-time capability of the controller by applying a decomposition approach, which approximates the optimal solution of the underlying mixed-integer optimal control problem by convexification and rounding of the relaxed solution. The quality of the suboptimal solution is evaluated by comparison with the globally optimal solution obtained by the dynamic programming method. Based on a real-world scenario, we demonstrate that utilization of the real-time capable mixed-integer nonlinear model predictive control approach in a building control system leads to savings of 16% in the total operating costs and 13% in primary energy compared to the state-of-the-art control strategy without any loss of comfort for the residents.
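The rounding step of such a decomposition can be illustrated with sum-up rounding, a common scheme for turning relaxed on/off trajectories into binary switching schedules; whether the presented controller uses exactly this scheme is an assumption of the sketch.

```python
# Illustrative sketch: sum-up rounding of a relaxed (0..1) on/off trajectory.
import numpy as np

def sum_up_rounding(alpha, dt):
    """Round relaxed controls alpha (values in [0,1]) on a grid with step sizes dt."""
    binary = np.zeros_like(alpha)
    accumulated = 0.0
    for k in range(len(alpha)):
        accumulated += alpha[k] * dt[k]              # integral of the relaxed control
        already_on = np.sum(binary[:k] * dt[:k])     # integral of the rounded control
        binary[k] = 1.0 if accumulated - already_on >= 0.5 * dt[k] else 0.0
    return binary

alpha = np.array([0.2, 0.4, 0.7, 0.9, 0.3, 0.1])     # hypothetical relaxed CHP on/off fractions
dt = np.full_like(alpha, 900.0)                      # 15-minute control intervals [s]
print(sum_up_rounding(alpha, dt))                    # binary on/off schedule
```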
Mixed-integer NMPC for real-time supervisory energy management control in residential buildings
(2023)
In recent years, building energy supply and distribution systems have become more complex, with an increasing number of energy generators, stores, flows, and possible combinations of operating modes. This poses challenges for supervisory control, especially when balancing the conflicting goals of maximizing comfort while minimizing costs and emissions to contribute to global climate protection objectives. Mixed-integer nonlinear model predictive control is a promising approach for intelligent real-time control that is able to properly address the specific characteristics and restrictions of building energy systems. We present a strategy that utilizes a decomposition approach, combining partial outer convexification with the Switch-Cost Aware Rounding procedure to handle switching behavior and operating time constraints of building components in real-time. The efficacy is demonstrated through practical applications in a single-family home with a combined heat and power unit and in a multi-family apartment complex with 18 residential units. Simulation studies show high correspondence to globally optimal solutions with significant cost savings potential of around 19%.
Research information, i.e., data about research projects, organisations, researchers or research outputs such as publications or patents, is spread across the web, usually residing in institutional and personal web pages or in semi-open databases and information systems. While there exists a wealth of unstructured information, structured data is limited and often exposed following proprietary or less-established schemas and interfaces. Therefore, a holistic and consistent view on research information across organisational and national boundaries is not feasible. On the other hand, web crawling and information extraction techniques have matured throughout the last decade, allowing for automated approaches to harvesting, extracting and consolidating research information into a more coherent knowledge graph. In this work, we give an overview of the current state of the art in research information sharing on the web and present initial ideas towards a more holistic approach for bootstrapping research information from available web sources.
Background: One of the major challenges in pediatric intensive care is the detection of life-threatening health conditions under acute time constraints and performance pressure. This includes the assessment of pediatric organ dysfunction (OD), which demands extraordinary clinical expertise and the clinician’s ability to derive a decision based on multiple information and data sources. Clinical decision support systems (CDSS) offer a solution to support medical staff in stressful routine work. At the same time, the detection of OD using computerized decision support approaches has scarcely been investigated, especially in pediatrics.
Objectives: The aim of the study is to enhance an existing, interoperable, and rule-based CDSS prototype for tracing the progression of sepsis in critically ill children by augmenting it with the capability to detect SIRS/sepsis-associated hematologic OD, and to determine its diagnostic accuracy.
Methods: We reproduced an interoperable CDSS approach previously introduced by our working group: (1) a knowledge model was designed by following the CommonKADS methodology, (2) routine care data was semantically standardized and harmonized using openEHR as the clinical information standard, (3) rules were formulated and implemented in a business rule management system. Data from a prospective diagnostic study, including 168 patients, was used to estimate the diagnostic accuracy of the rule-based CDSS, using the clinicians’ diagnoses as reference.
Sustainable tourism is a niche market that has been growing in recent years. At the same time, companies in the mass tourism market have increasingly marketed themselves with a “green” image, although this market is not sustainable. In order to successfully market sustainability, targeted marketing tactics are needed.
The aim of this research is to establish appropriate marketing tactics for sustainable tourism in the niche market and in the mass market. The purpose is to uncover current marketing tactics for both the mass tourism market and the sustainable tourism niche market. It also intends to explore how consumers who are more interested in sustainability differ from consumers with less interest in sustainability in terms of their perception of sustainability in tourism. Furthermore, this research paper assesses the trustworthiness of sustainable travel offers and the trustworthiness of quality seals in sustainable tourism. For this purpose, an online survey was conducted, which was addressed to German-speaking consumers. The survey showed that consumers with more general interest in sustainability also consider sustainability to be more relevant in tourism. Offers for sustainable travel and quality seals were perceived as not very trustworthy. Moreover, no link could be found between interest in sustainability and the perception of trustworthiness.
On the basis of the above, it is advisable to advertise sustainability directly in the niche market and to mention sustainability in the mass market only as an accompaniment or not at all. Further research could be undertaken to identify which factors influence the trustworthiness of offers and of quality seals in sustainable tourism.
The present research study investigated the susceptibility of common mastitis pathogens—obtained from clinical mastitis cases on 58 Northern German dairy farms—to routinely used antimicrobials. The broth microdilution method was used for detecting the Minimal Inhibitory Concentration (MIC) of Streptococcus agalactiae (n = 51), Streptococcus dysgalactiae (n = 54), Streptococcus uberis (n = 50), Staphylococcus aureus (n = 85), non-aureus staphylococci (n = 88), Escherichia coli (n = 54) and Klebsiella species (n = 52). Streptococci and staphylococci were tested against cefquinome, cefoperazone, cephapirin, penicillin, oxacillin, cloxacillin, amoxicillin/clavulanic acid and cefalexin/kanamycin. Besides cefquinome and amoxicillin/clavulanic acid, Gram-negative pathogens were examined for their susceptibility to marbofloxacin and sulfamethoxazole/trimethoprim. The examined S. dysgalactiae isolates exhibited the comparatively lowest MICs. S. uberis and S. agalactiae were inhibited at higher amoxicillin/clavulanic acid and cephapirin concentration levels, whereas S. uberis isolates additionally exhibited elevated cefquinome MICs. Most Gram-positive mastitis pathogens were inhibited at higher cloxacillin than oxacillin concentrations. The MICs of Gram-negative pathogens were higher than previously reported, whereby 7.4%, 5.6% and 11.1% of E. coli isolates had MICs above the highest concentrations tested for cefquinome, marbofloxacin and sulfamethoxazole/trimethoprim, respectively. Individual isolates showed MICs at comparatively higher concentrations, leading to the hypothesis that a certain amount of mastitis pathogens on German dairy farms might be resistant to frequently used antimicrobials.
Catalogs of competency-based learning objectives (CLO) were introduced and promoted as a prerequisite for high-quality, systematic curriculum development. While this is common in medicine, the consistent use of CLO is not yet well established in epidemiology, biometry, medical informatics, biomedical informatics, and nursing informatics, especially in Germany. This paper aims to identify underlying obstacles and give recommendations in order to promote the dissemination of CLO for curricular development in health data and information sciences. To determine these obstacles and recommendations, a public online expert workshop was organized. This paper summarizes the findings.
Compounds that exhibit the spin crossover effect are known to show a change of spin states through external stimuli. This reversible switching of spin states is accompanied by a change in the properties of the compound. Complexes, like iron(II)-triazole complexes, that exhibit this behavior at ambient temperature are often discussed for potential applications. In previous studies we synthesized iron(II)-triazole complexes and implemented them into electrospun nanofibers. In initial studies we used Mössbauer spectroscopy to prove a successful implementation with maintained spin crossover properties. Further studies of ours showed that it is possible to use different electrospinning methods to either implement the synthesized solid SCO material into the polymer nanofibers or to deposit it onto them. We now used a solvent in which both the used iron(II)-triazole complex [Fe(atrz)3](2 ns)2 and three different polymers (polyacrylonitrile, polymethylmethacrylate and polyvinylpyrrolidone) are soluble. This should lead to a more homogeneous distribution of the complex along the nanofibers. Mössbauer spectroscopy and other measurements are therefore used to show a successful implementation without any significant changes to the complex.
The objective of this study was to investigate the occurrence of bacteremia in dairy cows with severe mastitis. Milk samples were collected from affected udder quarters, and corresponding blood samples were collected from dairy cows with severe mastitis at the time of diagnosis, before any therapeutic measures were undertaken. Bacteremia was classified by the cultural detection of pathogens in blood. Further diagnostic tests were performed to provide evidence of bacteremia: PCR with regard to S. aureus, E. coli and S. uberis, and the Limulus test. Detection of culturable pathogens in the blood of cows with severe clinical mastitis was rare and occurred in only one of 70 (1.4%) cases. Overall, bacterial growth was detected in 53 of 70 (75.7%) milk samples. S. uberis (22/70), E. coli (12/70) and S. aureus (4/70) were the pathogens most frequently isolated from the milk of cows with severe mastitis. PCR was performed in 38 of 70 (54.3%) blood samples and was positive in eight of 38 cases. S. uberis was found most frequently, in six blood samples (8.6%). E. coli was found by PCR in one blood sample (1.4%). S. aureus was identified in one blood sample (1.4%). When coliforms were detected in the quarter milk sample, a Limulus test was performed on the corresponding blood sample. In three of 15 cases, the Limulus test was positive (4.3% of samples). Further studies with a larger population are needed to investigate the occurrence of bacteremia in cows with severe mastitis.
Complex Event Processing (CEP) is a modern software technology for the dynamic analysis of continuous data streams. CEP is capable of searching extremely large data streams in real time for the presence of event patterns. So far, specifying the event patterns of CEP rules is still a manual task based on the expertise of domain experts. This paper presents a novel bat-inspired swarm algorithm for automatically mining CEP rule patterns that express the relevant causal and temporal relations hidden in data streams. The basic suitability and performance of the approach are demonstrated by extensive evaluation with both synthetically generated data and real data from the traffic domain.
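For readers unfamiliar with the metaheuristic, the following sketch shows a generic bat-algorithm skeleton minimizing a placeholder objective. In the rule-mining setting described above, the objective would instead score how well a candidate event pattern explains labeled situations in the stream; the pattern encoding and all parameter values here are illustrative assumptions, not the paper's implementation.

```python
# Generic bat-algorithm skeleton (swarm metaheuristic) minimizing a fitness
# function. In CEP rule mining, the fitness would score how well a candidate
# event pattern matches labeled situations in the stream; here a simple
# placeholder objective is used instead of the paper's actual encoding.
import numpy as np

def bat_algorithm(fitness, dim=4, n_bats=20, iters=200, fmin=0.0, fmax=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_bats, dim))      # candidate solutions
    v = np.zeros((n_bats, dim))                # velocities
    loudness, pulse_rate = 0.9, 0.5
    fit = np.array([fitness(xi) for xi in x])
    best = x[np.argmin(fit)].copy()

    for _ in range(iters):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = x[i] + v[i]
            if rng.random() > pulse_rate:      # local random walk around the best bat
                cand = best + 0.01 * rng.normal(size=dim)
            f_cand = fitness(cand)
            if f_cand < fit[i] and rng.random() < loudness:
                x[i], fit[i] = cand, f_cand
            if f_cand < fitness(best):
                best = cand.copy()
    return best

# Placeholder objective: sphere function (minimum at the origin).
print(bat_algorithm(lambda z: float(np.sum(z**2))))
```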
M2M (machine-to-machine) systems use various communication technologies for automatically monitoring and controlling machines. In M2M systems, each machine emits a continuous stream of data records, which must be analyzed in real time. Intelligent M2M systems should be able to diagnose their actual states and to trigger appropriate actions as soon as critical situations occur. In this paper, we show how complex event processing (CEP) can be used as the key technology for intelligent M2M systems. We provide an event-driven architecture that is adapted to the M2M domain. In particular, we define different models for the M2M domain, M2M machine states and M2M events. Furthermore, we present a general reference architecture defining the main stages of processing machine data. To prove the usefulness of our approach, we consider two real-world examples, ‘solar power plants’ and ‘printers’, which show how easily the general architecture can be extended to concrete M2M scenarios.
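A minimal, self-contained example of the kind of CEP rule such an architecture evaluates is sketched below: a derived complex event is raised when a high-temperature reading is followed by an error from the same machine within a time window. The event fields and the rule itself are invented for illustration and are not taken from the paper.

```python
# Minimal CEP-style pattern check over an M2M event stream: raise a derived
# "overheating failure" event when a TEMP_HIGH event is followed by an ERROR
# event from the same machine within 60 seconds. Event fields and the rule
# are illustrative, not from the paper's reference architecture.
from dataclasses import dataclass

@dataclass
class Event:
    machine: str
    kind: str       # e.g. "TEMP_HIGH", "ERROR", "OK"
    timestamp: float

def detect_overheating_failures(stream, window=60.0):
    pending = {}                       # machine -> time of last TEMP_HIGH
    for ev in stream:
        if ev.kind == "TEMP_HIGH":
            pending[ev.machine] = ev.timestamp
        elif ev.kind == "ERROR":
            t_high = pending.pop(ev.machine, None)
            if t_high is not None and ev.timestamp - t_high <= window:
                yield Event(ev.machine, "OVERHEATING_FAILURE", ev.timestamp)

stream = [
    Event("printer-1", "TEMP_HIGH", 0.0),
    Event("printer-2", "OK", 10.0),
    Event("printer-1", "ERROR", 42.0),   # within the window -> complex event
]
print(list(detect_overheating_failures(stream)))
```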
In this article, we present the software architecture of a new generation of advisory systems using Intelligent Agent and Semantic Web technologies. Multi-agent systems provide a well-suited paradigm to implement negotiation processes in a consultancy situation. Software agents act as clients and advisors, using their knowledge to assist human users. In the presented architecture, the domain knowledge is modeled semantically by means of XML-based ontology languages such as OWL. Using an inference engine, the agents reason based on their knowledge to make decisions or proposals. The agent knowledge consists of different types of data: on the one hand, private data, which has to be protected against unauthorized access; and on the other hand, publicly accessible knowledge spread over different Web sites. As in a real consultancy, an agent only reveals sensitive private data if it is indispensable for finding a solution. In addition, depending on the actual consultancy situation, each agent dynamically expands its knowledge base by accessing OWL knowledge sources from the Internet. Due to the standardization of OWL, knowledge models can easily be shared and accessed via the Internet. The usefulness of our approach is demonstrated by the implementation of an advisory system in the Semantic E-learning Agent (SEA) project, whose objective is to develop virtual student advisers that support university students in successfully organizing and performing their studies.
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
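As a simplified illustration of data stream learning for task outcomes, the sketch below updates a logistic-regression model one observation at a time, so predictions adapt as new task outcomes arrive. The features, labels, and update rule are illustrative assumptions rather than the specific stream learners compared in the paper.

```python
# Minimal online (stream) learner for task outcomes: logistic regression updated
# one observation at a time with SGD, so the model adapts as new crowdsourcing
# tasks complete. Features (distance, remaining time, past success rate) and the
# update rule are illustrative choices, not the models compared in the paper.
import math

class OnlineLogistic:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def learn_one(self, x, y):           # y = 1 task succeeded, y = 0 task failed
        err = self.predict_proba(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = OnlineLogistic(n_features=3)
stream = [([2.0, 0.5, 0.9], 1), ([8.0, 0.1, 0.4], 0), ([1.0, 0.8, 0.7], 1)]
for x, y in stream:
    p = model.predict_proba(x)            # predict before observing the outcome
    model.learn_one(x, y)                 # then update with the true outcome
print(round(model.predict_proba([3.0, 0.4, 0.8]), 3))
```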
To optimise udder health at the herd level, identifying incurable mastitis cases as well as providing an adequate therapy and culling strategy are necessary. Cows with clinical mastitis should be administered antibiotic medication if it is most likely to improve mammary cure. The somatic cell count (SCC) in milk of the monthly implemented Dairy Herd Improvement (DHI) test represents the most important tool to decide whether a cow has a promising mammary cure rate. Differential cell count (DCC) facilitates the specification of the immunological ability of defence, for example by characterising leukocyte subpopulations or cell viability. The aim of this study was to assess the DCC and cell viability in DHI milk samples regarding the cytological (CC) and bacteriological cure (BC) of the udder within a longitudinal study, thereby gaining a predictive evaluation of whether a clinical mastitis benefits from an antibiotic treatment or not. The cows enrolled in this study had an SCC above 200,000 cells/mL in the previous DHI test. Study 1 assessed the CC by reference to the SCC of two consecutive DHI tests and included 1010 milk samples: 28.4% of the mammary glands were classified as cytologically cured and 71.6% as uncured. The final mixed logistic regression model identified the total number of non-vital cells as a significant factor associated with CC. An increasing amount of non-vital cells was related to a lower individual ability for CC. Cows which were in the first or second lactation possessed a higher probability of CC than cows having a lactation number above two. If animals developed a clinical mastitis after flow cytometric investigation, the BC was examined in study 2 by analysing quarter foremilk samples microbiologically. Of the 48 milk samples analysed, 81.3% of the mammary glands were classified as bacteriologically cured and 18.7% as uncured. The percentage of total non-vital cells tended to be lower for cows which were cured, but the difference was not significant. This study revealed that the investigation of the proportion of non-vital cells in DHI milk samples can enhance the prognosis of whether an antibiotic treatment of clinical mastitis might be promising or not. Prospectively, this tool may be integrated in the DHI tests to facilitate the decision between therapy or culling.
We present a methodology based on mixed-integer nonlinear model predictive control for a real-time building energy management system in application to a single-family house with a combined heat and power (CHP) unit. The developed strategy successfully deals with the switching behavior of the system components as well as minimum admissible operating time constraints by use of a special switch-cost-aware rounding procedure. The quality of the presented solution is evaluated in comparison to the globally optimal dynamic programming method and conventional rule-based control strategy. Based on a real-world scenario, we show that our approach is more than real-time capable while maintaining high correspondence with the globally optimal solution. We achieve an average optimality gap of 2.5% compared to 20% for a conventional control approach, and are faster and more scalable than a dynamic programming approach.
This paper presents a fundamental investigation of the crack propagation rate (CPR) and stress intensity factor (SIF) for typical fatigue and welded specimens, namely Compact Tension (CT) and Single Edge Notch Tension (SENT) specimens as well as butt and longitudinal T-joints. Material data for the austenitic stainless steel SS316L was used, and the crack propagation rate was observed for different initial crack lengths and different tensile loads in the fracture mechanics investigation. The geometry of the specimens was modelled using the open source software CASCA, while Franc 2D was used for post-processing based on the Paris-Erdogan law with different crack increment steps. The analysis of crack propagation using fracture mechanics techniques requires an accurate calculation of the stress intensity factor (SIF), and comparison with the critical strength of the material (KIC) was used to determine the critical crack length of the specimens. It can be concluded that open source finite element method software can be used for predicting fatigue life on simplified geometries.
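For reference, the crack growth model underlying the post-processing is the Paris-Erdogan law in its standard form; C and m are material constants (here for SS316L) and Y is a geometry factor. Specific values are not given in the abstract.

```latex
% Paris-Erdogan crack growth law (standard form); da/dN is the crack growth
% per load cycle and \Delta K the stress intensity factor range.
\frac{da}{dN} = C\,(\Delta K)^{m},
\qquad
\Delta K = Y\,\Delta\sigma\,\sqrt{\pi a}
```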
Microservices form a deeply distributed system. Although this offers significant flexibility for development teams and helps to find solutions for scalability or security questions, it also intensifies the drawbacks of a distributed system. This article offers a decision framework, which helps to increase the resiliency of microservices. A metamodel is used to represent services, resiliency patterns, and quality attributes. Furthermore, the general idea for a suggestion procedure is outlined.
There are many aspects of code quality, some of which are difficult to capture or to measure. Despite the importance of software quality, there is a lack of commonly accepted measures or indicators for code quality that can be linked to quality attributes. We investigate software developers’ perceptions of source code quality and the practices they recommend to achieve these qualities. We analyze data from semi-structured interviews with 34 professional software developers, programming teachers and students from Europe and the U.S. For the interviews, participants were asked to bring code examples to exemplify what they consider good and bad code, respectively. Readability and structure were used most commonly as defining properties for quality code. Together with documentation, they were also suggested as the most common target properties for quality improvement. When discussing actual code, developers focused on structure, comprehensibility and readability as quality properties. When analyzing relationships between properties, the most commonly talked about target property was comprehensibility. Documentation, structure and readability were named most frequently as source properties to achieve good comprehensibility. Some of the most important source code properties contributing to code quality as perceived by developers lack clear definitions and are difficult to capture. More research is therefore necessary to measure the structure, comprehensibility and readability of code in ways that matter for developers and to relate these measures of code structure, comprehensibility and readability to common software quality attributes.
Clinical scores and motion-capturing gait analysis are today’s gold standard for outcome measurement after knee arthroplasty, although they are criticized for bias and their ability to reflect patients’ actual quality of life has been questioned. In this context, mobile gait analysis systems have been introduced to overcome some of these limitations. This study used a previously developed mobile gait analysis system comprising three inertial sensor units to evaluate daily activities and sports. The sensors were taped to the lumbosacral junction and the thigh and shank of the affected limb. The annotated raw data was evaluated using our validated proprietary software. Six patients undergoing knee arthroplasty were examined the day before and 12 months after surgery. All patients reported a satisfactory outcome, although four patients still had limitations in their desired activities. In this context, feasible running speed demonstrated a good correlation with reported impairments in sports-related activities. Notably, knee flexion angle while descending stairs and the ability to stop abruptly when running exhibited good correlation with the clinical stability and proprioception of the knee. Moreover, fatigue effects were displayed in some patients. The introduced system appears to be suitable for outcome measurement after knee arthroplasty and has the potential to overcome some of the limitations of stationary gait labs while gathering additional meaningful parameters regarding the force limits of the knee.
Worldwide, seagrass meadows are under threat. Consequently, there is a strong need for seagrass restoration to guarantee the provision of related ecosystem services such as nutrient cycling, carbon sequestration and habitat provision. Seagrass often grows in vast meadows in which the presence of seagrass itself leads to a reduction of hydrodynamic energy. By modifying the environment, seagrass thus serves as foundation species and ecosystem engineer improving habitat quality for itself and other species as well as positively affecting its own fitness. On the downside, this positive feedback mechanism can render natural recovery of vanished and destroyed seagrass meadows impossible. An innovative approach to promote positive feedback mechanisms in seagrass restoration is to create an artificial seagrass (ASG) that mimics the facilitation function of natural seagrass. ASG could provide a window of opportunity with respect to suitable hydrodynamic and light conditions as well as sediment stabilization to allow natural seagrass to re-establish. Here, we give an overview of challenges and open questions for the application of ASG to promote seagrass restoration based on experimental studies and restoration trials and we propose a general approach for the design of an ASG produced from biodegradable materials. Considering positive feedback mechanisms is crucial to support restoration attempts. ASG provides promising benefits when habitat conditions are too harsh for seagrass meadows to re-establish themselves.
NOA is a search engine for scientific images from open access publications based on full-text indexing of all text referring to the images and filtering for disciplines and image type. Images will be annotated with Wikipedia categories for better discoverability and for uploading to WikiCommons. Currently we have indexed approximately 2.7 million images from over 710,000 scientific papers from all fields of science.
Scientific papers from all disciplines contain many abbreviations and acronyms. In many cases these acronyms are ambiguous. We present a method to choose the contextually correct definition of an acronym that does not require training for each acronym and thus can be applied to a large number of different acronyms with only few instances. We constructed a set of 19,954 examples of 4,365 ambiguous acronyms from image captions in scientific papers along with their contextually correct definition from different domains. We learn word embeddings for all words in the corpus and compare the averaged context vector of the words in the expansion of an acronym with the weighted average vector of the words in the context of the acronym. We show that this method clearly outperforms (classical) cosine similarity. Furthermore, we show that word embeddings learned from a 1 billion word corpus of scientific texts outperform word embeddings learned from much larger general corpora.
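The general idea of embedding-based disambiguation can be sketched in a few lines: embed the words around the acronym, embed the words of each candidate expansion, and pick the expansion whose average vector is closest to the context vector. The toy vectors below stand in for embeddings trained on the scientific corpus; for brevity the sketch uses the plain cosine-similarity variant, whereas the paper's method additionally weights the context words.

```python
# Sketch of embedding-based acronym disambiguation: compare the embedding of
# the acronym's context with the averaged embeddings of each candidate
# expansion and pick the closest one. The tiny toy vectors stand in for
# embeddings trained on the 1-billion-word scientific corpus.
import numpy as np

emb = {  # toy word embeddings; in the paper these are learned from scientific text
    "magnetic":  np.array([0.9, 0.1, 0.0]),
    "resonance": np.array([0.8, 0.2, 0.1]),
    "imaging":   np.array([0.7, 0.1, 0.3]),
    "brain":     np.array([0.8, 0.0, 0.2]),
    "scan":      np.array([0.6, 0.2, 0.2]),
    "survey":    np.array([0.1, 0.9, 0.1]),
    "market":    np.array([0.0, 1.0, 0.2]),
    "research":  np.array([0.2, 0.8, 0.3]),
}

def avg_vector(words):
    vecs = [emb[w] for w in words if w in emb]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context_words, expansions):
    ctx = avg_vector(context_words)
    return max(expansions, key=lambda exp: cosine(ctx, avg_vector(exp.split())))

context = ["brain", "scan", "imaging"]
candidates = ["magnetic resonance imaging", "market research survey"]
print(disambiguate(context, candidates))   # -> "magnetic resonance imaging"
```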
Concreteness of words has been studied extensively in the psycholinguistic literature. A number of datasets have been created with average values for the perceived concreteness of words. We show that we can train a regression model on these data, using word embeddings and morphological features, that predicts these concreteness values with high accuracy. We evaluate the model on 7 publicly available datasets. Predictions of concreteness values are reported in the literature for only a few small subsets of these datasets. Our results clearly outperform the reported results for these datasets.
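A minimal sketch of such a regressor is shown below, assuming precomputed word embeddings as features and a ridge regression model; the embeddings and ratings are random placeholders, and the morphological features used in the paper are omitted.

```python
# Sketch of a concreteness regressor in the spirit described: a regression model
# over word-embedding features that predicts a word's average concreteness
# rating. Random vectors replace real embeddings and ratings, and the
# morphological features the paper also uses are omitted here.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, dim = 500, 50
X = rng.normal(size=(n_words, dim))               # placeholder word embeddings
w_true = rng.normal(size=dim)
y = X @ w_true + 0.1 * rng.normal(size=n_words)   # placeholder concreteness ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```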
Concreteness of words has been measured and used in psycholinguistics for decades. Recently, it has also been used in retrieval and NLP tasks. For English, a number of well-known datasets with average values for perceived concreteness have been established.
We give an overview of available datasets for German, their correlation, and evaluate prediction algorithms for the concreteness of German words. We show that these algorithms achieve similar results as for English datasets. Moreover, we show that for all datasets there are no significant differences between a prediction model based on regression using word embeddings as features and a prediction algorithm based on word similarity according to the same embeddings.
Image captions in scientific papers are usually complementary to the images. Consequently, the captions contain many terms that do not refer to concepts visible in the image. We conjecture that it is possible to distinguish between these two types of terms in an image caption by analysing the text only. To examine this, we evaluated different features. The dataset we used to compute tf.idf values, word embeddings and concreteness values contains over 700,000 scientific papers with over 4.6 million images. The evaluation was done with a manually annotated subset of 329 images. Additionally, we trained a support vector machine to predict whether a term is likely visible or not. We show that the concreteness of terms is a very important feature for identifying terms in captions and context that refer to concepts visible in images.
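A compact sketch of such a visibility classifier, assuming three per-term features (tf.idf, embedding similarity between caption and context, concreteness) and invented feature values rather than the annotated dataset:

```python
# Minimal sketch of a term-visibility classifier: an SVM over simple per-term
# features (tf.idf, caption-vs-context embedding similarity, concreteness).
# The feature values below are invented placeholders, not from the annotated
# 329-image dataset.
from sklearn.svm import SVC

# rows: [tf_idf, embedding_similarity, concreteness]; label 1 = term visible in image
X = [[0.12, 0.81, 4.6], [0.40, 0.35, 2.1], [0.08, 0.77, 4.9], [0.55, 0.30, 1.8]]
y = [1, 0, 1, 0]

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.10, 0.80, 4.5]]))   # likely predicted as "visible"
```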
Malnutrition, nutritional deficiency, or undernutrition is an imbalanced nutritional status resulting from insufficient intake of nutrients to meet normal physiologic requirements. Malnutrition in childhood has both short-term and long-term consequences for mental and physical health as well as for the overall health development of children. Of all regions in the world, the Asia-Pacific region has achieved the fastest rate of economic growth. However, there is no evidence that this rapid economic growth translates into a decline in child malnutrition in Asian countries such as India.
Surface atomic relaxation and magnetism on hydrogen-adsorbed Fe(110) surfaces from first principles
(2016)
We have computed adsorption energies, vibrational frequencies, surface relaxation and buckling for hydrogen adsorbed on a body-centred-cubic Fe(110) surface as a function of the degree of H coverage. This adsorption system is important in a variety of technological processes such as the hydrogen embrittlement in ferritic steels, which motivated this work, and the Haber–Bosch process. We employed spin-polarised density functional theory to optimise geometries of a six-layer Fe slab, followed by frozen mode finite displacement phonon calculations to compute Fe–H vibrational frequencies. We have found that the quasi-threefold (3f) site is the most stable adsorption site, with adsorption energies of ∼3.0 eV/H for all coverages studied. The long-bridge (lb) site, which is close in energy to the 3f site, is actually a transition state leading to the stable 3f site. The calculated harmonic vibrational frequencies collectively span from 730 to 1220 cm−1, for a range of coverages. The increased first-to-second layer spacing in the presence of adsorbed hydrogen, and the pronounced buckling observed in the Fe surface layer, may facilitate the diffusion of hydrogen atoms into the bulk, and therefore impact the early stages of hydrogen embrittlement in steels.
The effect of magnetism on hydrogen adsorption and subsurface diffusion through face-centred cubic (fcc) γ-Fe(0 0 1) was investigated using spin-polarised density functional theory (s-DFT). The non-magnetic (NM), ferromagnetic (FM), and antiferromagnetic single (AFM1) and double layer (AFMD) structures were considered. For each magnetic state, the hydrogen preferentially adsorbs at the fourfold site, with adsorption energies of 4.07, 4.12, 4.03 and 4.05 eV/H atom for the NM, FM, AFM1 and AFMD structures. A total barrier of 1.34, 0.90, 1.32 and 1.25 eV and a bulk-like diffusion barrier of 0.6, 0.2, 0.4 and 0.3 eV were calculated for the NM, FM, AFM1 and AFMD magnetic states. The Fe atoms nearest to the H atom exhibited a reduced magnetic moment, whereas the next-nearest neighbour Fe atoms exhibited a non-negligible local perturbation in the magnetic moment. The presence of magnetically ordered structures has a minimal influence on the minimum energy path for H diffusion through the lattice and on the adsorption of H atoms on the Fe(0 0 1) surface, but we computed a significant reduction of the bulk-like diffusion barriers with respect to the non-magnetic state of fcc γ-Fe.
The adsorption of O atoms on the Fe(1 1 0) surface has been investigated by density functional theory for increasing degrees of oxygen coverage from 0.25 to 1 monolayer, to follow the evolution of the O–Fe(1 1 0) system into an FeO(1 1 1)-like monolayer. We found that the quasi-threefold site is the most stable adsorption site for all coverages, with adsorption energies of ∼2.8–4.0 eV per O atom. Oxygen adsorption results in surface geometrical changes such as interlayer relaxation and buckling, the latter of which decreases with coverage. The calculated vibrational frequencies range from 265 to 470 cm−1 for the frustrated translational modes and 480–620 cm−1 for the stretching mode, and hence are in good agreement with the experimental values reported for bulk FeO wüstite. The hybridization of the oxygen 2p and iron 3d orbitals increases with oxygen coverage, and the partial density of states for the O–Fe(1 1 0) system at full coverage resembles the one reported in the literature for bulk FeO. These results at full oxygen coverage point to the incipient formation of an FeO(1 1 1)-like monolayer that would eventually lead to the bulk FeO oxide layer.
This study is concerned with the early stages of hydrogen embrittlement on an atomistic scale. We employed density functional theory to investigate hydrogen diffusion through the (100), (110) and (111) surfaces of γ-Fe. The preferred adsorption sites and respective energies for hydrogen adsorption were established for each plane, as well as a minimum energy pathway for diffusion. The H atoms adsorb on the (100), (110) and (111) surfaces with energies of ∼4.06 eV, ∼3.92 eV and ∼4.05 eV, respectively. The barriers for bulk-like diffusion for the (100), (110) and (111) surfaces are ∼0.6 eV, ∼0.5 eV and ∼0.7 eV, respectively. We compared these calculated barriers with previously obtained experimental data in an Arrhenius plot, which indicates good agreement between experimentally measured and theoretically predicted activation energies. Texturing austenitic steels such that the (111) surfaces of grains are preferentially exposed at the cleavage planes may be a possibility to reduce hydrogen embrittlement.
The present investigation was conducted to assess the in-vitro antioxidant activity of an ethanolic extract of the roots of Centaurea behens using DPPH radical scavenging, nitric oxide radical scavenging, hydrogen peroxide radical scavenging and hydroxyl radical scavenging assays. The results suggest that the extract possesses significant antioxidant activity compared with the standard ascorbic acid; further in vivo investigation is required to evaluate the medicinal significance of the extract and to assess its possible therapeutic importance as a drug.
AlphaGo’s victory against Lee Sedol in the game of Go has been a milestone in artificial intelligence. After this success, the team behind the program further refined the architecture and applied it to many other games such as chess or shogi. In the following thesis, we try to apply the theory behind AlphaGo and its successor AlphaZero to the game of Abalone. Due to limitations in computational resources, we could not replicate the same exceptional performance.
Background: Stereotactic radiosurgery (SRS) is an effective treatment for trigeminal neuralgia (TN). Nevertheless, a proportion of patients will experience recurrence and treatment-related sensory disturbances. In order to evaluate the predictors of efficacy and safety of image-guided non-isocentric radiosurgery, we analyzed the impact of trigeminal nerve volume and the nerve dose/volume relationship, together with relevant clinical characteristics.
Methods: Two-hundred and ninety-six procedures were performed on 262 patients at three centers. In 17 patients the TN was secondary to multiple sclerosis (MS). Trigeminal pain and sensory disturbances were classified according to the Barrow Neurological Institute (BNI) scale. Pain-free-intervals were investigated using Kaplan Meier analyses. Univariate and multivariate Cox regression analyses were performed to identify predictors.
Results: The median follow-up period was 38 months, median maximal dose 72.4 Gy, median target nerve volume 25 mm3, and median prescription dose 60 Gy. Pain control rates (BNI I-III) at 6, 12, 24, 36, 48, and 60 months were 96.8, 90.9, 84.2, 81.4, 74.2, and 71.2%, respectively. Overall, 18% of patients developed sensory disturbances. Patients with volume ≥ 30 mm3 were more likely to maintain pain relief (p = 0.031), and a low integral dose (< 1.4 mJ) tended to be associated with more pain recurrence than an intermediate (1.4–2.7 mJ) or high integral dose (> 2.7 mJ; low vs. intermediate: log-rank test, χ2 = 5.02, p = 0.019; low vs. high: log-rank test, χ2 = 6.026, p = 0.014). MS, integral dose, and mean dose were the factors associated with pain recurrence, while re-irradiation and MS were predictors of sensory disturbance in the multivariate analysis.
Conclusions: The dose to nerve volume ratio is predictive of pain recurrence in TN, and re-irradiation has a major impact on the development of sensory disturbances after non-isocentric SRS. Interestingly, the integral dose may differ significantly in treatments using apparently similar dose and volume constraints.
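To illustrate the survival-analysis machinery referred to above (Kaplan-Meier estimation of pain-free intervals and log-rank comparison of dose groups), here is a small sketch with invented follow-up times and recurrence indicators, assuming the lifelines library is available:

```python
# Illustration of the survival analysis used above: Kaplan-Meier estimation of
# pain-free intervals and a log-rank comparison between two integral-dose
# groups, with invented follow-up times (months) and recurrence flags.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

low_dose_months  = [6, 12, 14, 24, 36, 40]
low_dose_event   = [1, 1, 0, 1, 1, 0]       # 1 = pain recurred, 0 = censored
high_dose_months = [12, 24, 36, 48, 60, 60]
high_dose_event  = [0, 1, 0, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(low_dose_months, event_observed=low_dose_event, label="low integral dose")
print(kmf.survival_function_)

result = logrank_test(low_dose_months, high_dose_months,
                      event_observed_A=low_dose_event, event_observed_B=high_dose_event)
print("log-rank p =", round(result.p_value, 3))
```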
A new type of rotary compressor, called the “rotary-chamber compressor”, consists of two interlocking rotors with four wings each that perform non-uniform rotary movements. Both rotors rotate in the same direction; while one rotor is accelerating, the other rotor is retarding. After surpassing a specific mark, the sequence changes and the leading rotor begins to retard, and vice versa. Due to the resulting relative phase difference, the volume between two wings changes periodically, which creates pulsating working chambers. The technology was first introduced by its inventor Jürgen Schukey in 1987. Since then, no further development of this machine is known to us except our own. In this contribution, a study on the kinematics of the rotary-chamber compressor is presented. Initial studies have shown that changes in the kinematics of the rotors have a direct influence on the thermodynamic variables, which, if optimized, can lead to increased performance of the machine. Therefore, a mathematical model has been developed to obtain the performance parameters for different kinematic concepts using numerical CFD analysis. Furthermore, additional optimization possibilities are listed and discussed.
Primary data is an important source of information for Competitive Intelligence. Traditionally, it has been collected from interviews with stakeholders, talks at conferences and other means of direct interpersonal communication. The role of the Internet in the data collection – if it was used at all – was that of a provider of supplementary secondary data. Here, this approach is challenged and, using three examples of Social Media, it is shown that the Internet can and does provide valuable primary information to the Competitive Intelligence professional. Accordingly, a case is made for a shift of focus in the data collection process.
Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer.
Purpose: Radiology reports mostly contain free-text, which makes it challenging to obtain structured data. Natural language processing (NLP) techniques transform free-text reports into machine-readable document vectors that are important for creating reliable, scalable methods for data analysis. The aim of this study is to classify unstructured radiograph reports according to fractures of the distal fibula and to find the best text mining method.
Materials & Methods: We established a novel German language report dataset: a designated search engine was used to identify radiographs of the ankle and the reports were manually labeled according to fractures of the distal fibula. This data was used to establish a machine learning pipeline, which implemented the text representation methods bag-of-words (BOW), term frequency-inverse document frequency (TF-IDF), principal component analysis (PCA), non-negative matrix factorization (NMF), latent Dirichlet allocation (LDA), and document embedding (doc2vec). The extracted document vectors were used to train neural networks (NN), support vector machines (SVM), and logistic regression (LR) to recognize distal fibula fractures. The results were compared via cross-tabulations of the accuracy (acc) and area under the curve (AUC).
Results: In total, 3268 radiograph reports were included, of which 1076 described a fracture of the distal fibula. Comparison of the text representation methods showed that BOW achieved the best results (AUC = 0.98; acc = 0.97), followed by TF-IDF (AUC = 0.97; acc = 0.96), NMF (AUC = 0.93; acc = 0.92), PCA (AUC = 0.92; acc = 0.9), LDA (AUC = 0.91; acc = 0.89) and doc2vec (AUC = 0.9; acc = 0.88). When comparing the different classifiers, NN (AUC = 0.91) proved to be superior to SVM (AUC = 0.87) and LR (AUC = 0.85).
Conclusion: An automated classification of unstructured reports of radiographs of the ankle can reliably detect findings of fractures of the distal fibula. A particularly suitable feature extraction method is the BOW model.
Key Points:
- The aim was to classify unstructured radiograph reports according to distal fibula fractures.
- Our automated classification system can reliably detect fractures of the distal fibula.
- A particularly suitable feature extraction method is the BOW model.
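A minimal sketch of the best-performing configuration (bag-of-words features feeding a neural network classifier) might look as follows; the example reports and labels are invented placeholders, not part of the study's 3268 labeled German reports.

```python
# Sketch of the best-performing configuration described above: bag-of-words
# features feeding a small neural network classifier. Report texts and labels
# are placeholders; the real study used 3268 labeled German radiograph reports.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

reports = [
    "Fraktur der distalen Fibula, Weber B",
    "Kein Anhalt fuer frische knoecherne Verletzung",
    "Distale Fibulafraktur mit Dislokation",
    "Unauffaellige Darstellung des oberen Sprunggelenks",
]
labels = [1, 0, 1, 0]   # 1 = distal fibula fracture described

pipeline = make_pipeline(
    CountVectorizer(),                       # bag-of-words representation
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
pipeline.fit(reports, labels)
print(pipeline.predict(["Nachweis einer Fraktur der distalen Fibula"]))
```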
Data and Information Science: Book of Abstracts at BOBCATSSS 2022 Hybrid Conference, 23rd - 25th of May 2022, Debrecen.
This year marks the 30th anniversary of BOBCATSSS, an international, annual symposium designed for librarians and information professionals in a rapidly changing environment. Over the past 30 years, the conference has featured exciting topics, great venues, interested guests and engaging presenters.
This year we would like to introduce the topics of the many papers presented in this Book of Abstracts, for the first time in presence at the University of Debrecen and in hybrid form. The Book of Abstracts provides an overview of all presentations given at BOBCATSSS. Presentations are listed in alphabetical order by title and include speeches, Pecha Kuchas, posters and workshops.
The theme of BOBCATSSS is Data and Information Science. Data and information are the basis for decisions and processes in business, politics and science, and they are particularly important in the current era of digital transformation. This is exactly where this year's subthemes come in: they deal with data science, openness, and institutional roles.
With the increasing significance of information technology, there is an urgent need for adequate measures of information security. Systematic information security management is one of the most important initiatives for IT management. At least since reports about privacy and security breaches, fraudulent accounting practices, and attacks on IT systems appeared in public, organizations have recognized their responsibilities to safeguard physical and information assets. Security standards can be used as a guideline or framework to develop and maintain an adequate information security management system (ISMS). The standards ISO/IEC 27000, 27001 and 27002 are international standards that are receiving growing recognition and adoption. They are referred to as the “common language of organizations around the world” for information security. With ISO/IEC 27001, companies can have their ISMS certified by a third-party organization and thus show their customers evidence of their security measures.
Systematizing IT Risks
(2019)
IT risks — risks associated with the operation or use of information technology — have taken on great importance in business, and IT risk management is accordingly important in the science and practice of information management. Therefore, it is necessary to systematize IT risks in order to plan, manage and control different risk-specific measures. In order to choose and implement suitable measures for managing IT risks, effect-based and cause-based procedures are necessary. These procedures are explained in detail for IT security risks because of their special importance.
Aim/Purpose: We explore impressions and experiences of Information Systems graduates during their first years of employment in the IT field. The results help to understand work satisfaction, career ambition, and motivation of junior employees. This way, the attractiveness of working in the field of IS can be increased and the shortage of junior employees reduced.
Background: Currently, IT professions are characterized by terms such as “shortage of professionals” and “shortage of junior employees”. To attract more people to work in IT, detailed knowledge about the experiences of junior employees is necessary.
Methodology: Data from a large survey of 193 graduates of the degree program “Information Systems” at University of Applied Sciences and Arts Hannover (Germany) show characteristics of their professional life like work satisfaction, motivation, career ambition, satisfaction with opportunities, development and career advancement, satisfaction with work-life balance. It is also asked whether men and women gain the same experiences when entering the job market and have the same perceptions.
Findings: The participants were highly satisfied with their work, but limitations or restrictions due to gender are noteworthy.
Recommendations for Practitioners: The results provide information on how human resource policies can make IT professions more attractive and thus convince graduates to seek jobs in the field. For instance, improving the balance between work and various areas of private life seems promising. Also, restrictions with respect to the work climate and improving communication along several dimensions need to be considered.
Future Research: More detailed research on ambition and achievement is necessary to understand gender differences.
The objective of this student project was for the students to develop, conduct, and supervise a training course for basic workplace applications (word processing and business graphics). Students were responsible for planning, organizing and teaching the course. Underprivileged adolescents took part as participants in order to learn how to handle IT applications and thereby improve their job skills and their chances of finding employment. The adolescents thus took on the role of trainees in the course. Our students worked with a population that is continually overlooked by the field.
As a result, the students practiced designing and implementing training courses, gained experience in managing projects, and increased their social responsibility and awareness concerning the way of life and living conditions of other young people. The underprivileged adolescents learned to use important business applications and increased their job skills and job chances. The overall design of our concept required extensive resources to supervise and steer the students and the adolescents. The lecturers had to teach and counsel the students and had to be on “stand-by” in case they were needed to resolve critical situations between the two groups of young people.
BYOD Bring Your Own Device
(2013)
Using modern devices like smartphones and tablets offers a wide variety of advantages; this has made them very popular as consumer devices in private life. Using them in the workplace is also popular. However, who wants to carry around and handle two devices: one for personal use and one for work-related tasks? That is why “dual use”, using one single device for private and business applications, may represent a suitable solution. The result is “Bring Your Own Device,” or BYOD, which describes the circumstance in which users make their own personal devices available for company use. For companies, this brings both opportunities and risks. We describe and discuss organizational issues, technical approaches, and solutions.
During the European debt crisis, German and Greek media frequently reported on the political conflict between the two countries. This article examines to what extent the media coverage in one country about the other is considered by German and Greek citizens to be hostile (‘hostile media perception’) and influential (‘influence of presumed influence’). Data from a comparative survey in Germany (n = 492) and Greece (n = 484) show that news coverage by foreign media on the European debt crisis is perceived by respondents as hostile against their own country and as influential. Moreover, both media-related perceptions are linked with intensified perceptions of hostility, such as assumptions that an individual’s country is not respected in the other country or that the other country’s citizens are demanding that the individual’s country be punished. Based on these results, it is discussed whether media-related perceptions can have a conflict-intensifying effect in international crises.
Nowadays, problems related to solid waste management have become a challenge for most countries due to the rising generation of waste, related environmental issues, and the associated costs of produced wastes. Effective waste management systems at different geographic levels require accurate forecasting of future waste generation. In this work, we investigate how open-access data, such as that provided by the Organisation for Economic Co-operation and Development (OECD), can be used for the analysis of waste data. The main idea of this study is to find the links between socio-economic and demographic variables that determine the amounts of the different types of solid waste produced by countries. This would make it possible to accurately predict waste production at the country level and to determine the requirements for the development of effective waste management strategies. In particular, we use several machine learning regression models (Support Vector, Gradient Boosting, and Random Forest) and a clustering model (k-means) to predict waste production for OECD countries over the years and to cluster these countries according to similar characteristics. The main contributions of our work are: (1) a waste analysis at the OECD country level to compare and cluster countries according to similar predicted waste features; (2) the detection of the most relevant features for the prediction models; and (3) a comparison of several regression models with respect to prediction accuracy. The coefficient of determination (R2), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE) are used as indices of the efficiency of the developed models. Our experiments show that some pre-processing of the OECD data is an essential stage of the analysis, that the Random Forest Regressor (RFR) produced the best prediction results on the dataset, and that these results are highly influenced by the quality of the available socio-economic data. In particular, the RFR model exhibited the highest prediction accuracy for most waste types. For example, for “municipal” waste it produced global error values of R2 = 1 and MAPE = 4.31 on the test set, and for “household” waste it produced R2 = 1 and MAPE = 3.03. Our results indicate that the considered models (and especially RFR) are all effective in predicting the amount of produced waste from the input data for the considered countries.
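A condensed sketch of this regression setup is given below, assuming synthetic stand-ins for the OECD socio-economic indicators and waste amounts, and reporting the four error indices named above.

```python
# Sketch of the regression setup described: a Random Forest predicting a
# country's waste amount from socio-economic indicators, evaluated with R2,
# MAE, RMSE and MAPE. The data below is synthetic; the study used OECD
# country-level indicators.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import (mean_absolute_error, mean_absolute_percentage_error,
                             mean_squared_error, r2_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                      # e.g. GDP, population, urbanisation ...
y = 50 + 10 * X[:, 0] - 4 * X[:, 1] + rng.normal(scale=2.0, size=300)  # waste per capita

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("R2  :", round(r2_score(y_te, pred), 3))
print("MAE :", round(mean_absolute_error(y_te, pred), 3))
print("RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 3))
print("MAPE:", round(100 * mean_absolute_percentage_error(y_te, pred), 2), "%")
```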
Decision support systems for traffic management systems have to cope with a high volume of events continuously generated by sensors. Conventional software architectures do not explicitly target the efficient processing of continuous event streams. Recently, event-driven architectures (EDA) have been proposed as a new paradigm for event-based applications. In this paper we propose a reference architecture for event-driven traffic management systems, which enables the analysis and processing of complex event streams in real-time and is therefore well-suited for decision support in sensor-based traffic control systems. We will illustrate our approach in the domain of road traffic management. In particular, we will report on the redesign of an intelligent transportation management system (ITMS) prototype for the high-capacity road network in Bilbao, Spain.
Nowadays, most recommender systems are based on a centralized architecture, which can cause crucial issues in terms of trust, privacy, dependability, and costs. In this paper, we propose a decentralized and distributed MANET-based (Mobile Ad-hoc NETwork) recommender system for open facilities. The system is based on mobile devices that collect sensor data about users' locations to derive implicit ratings that are used for collaborative filtering recommendations. The mechanisms for deriving ratings and propagating them in a MANET network are discussed in detail. Finally, extensive experiments demonstrate the suitability of the approach in terms of different performance metrics.
Nowadays, smartphones and sensor devices can provide a variety of information about a user’s current situation. So far, many recommender systems neglect this kind of information and thus cannot provide situation-specific recommendations. Situation-aware recommender systems adapt to changes in the user’s environment and therefore are able to offer recommendations that are more appropriate for the current situation. In this paper, we present a software architecture that enables situation awareness for arbitrary recommendation techniques. The proposed system considers both (semi-)static user profiles and volatile situational knowledge to obtain meaningful recommendations. Furthermore, the implementation of the architecture in a museum of natural history is presented, which uses Complex Event Processing to achieve situation awareness.
In parcel delivery, the “last mile” from the parcel hub to the customer is costly, especially for time-sensitive delivery tasks that have to be completed within hours after arrival. Recently, crowdshipping has attracted increased attention as a new alternative to traditional delivery modes. In crowdshipping, private citizens (“the crowd”) perform short detours in their daily lives to contribute to parcel delivery in exchange for small incentives. However, achieving desirable crowd behavior is challenging as the crowd is highly dynamic and consists of autonomous, self-interested individuals. Leveraging crowdshipping for time-sensitive deliveries remains an open challenge. In this paper, we present an agent-based approach to on-time parcel delivery with crowds. Our system performs data stream processing on the couriers’ smartphone sensor data to predict delivery delays. Whenever a delay is predicted, the system attempts to forge an agreement for transferring the parcel from the current deliverer to a more promising courier nearby. Our experiments show that through accurate delay predictions and purposeful task transfers many delays can be prevented that would occur without our approach.
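The transfer decision itself can be sketched as a simple rule on top of the delay prediction: if the predicted delay probability for the current courier exceeds a threshold, the parcel is offered to nearby couriers and handed over to the best-predicted one who accepts. The probabilities and acceptance flags below are invented placeholders, not output of the agent-based system.

```python
# Toy sketch of the transfer decision step: if the predicted delay probability
# for the current courier exceeds a threshold, ask nearby couriers and transfer
# the parcel to the one with the best predicted on-time probability who accepts.
# The prediction values and acceptance flags are invented placeholders.
def choose_transfer(current_delay_prob, candidates, threshold=0.5):
    """candidates: list of (courier_id, on_time_prob, accepts_transfer)."""
    if current_delay_prob <= threshold:
        return None                      # keep the parcel with the current courier
    willing = [c for c in candidates if c[2]]
    if not willing:
        return None
    best = max(willing, key=lambda c: c[1])
    return best[0] if best[1] > 1 - current_delay_prob else None

candidates = [("courier-7", 0.85, True), ("courier-3", 0.92, False), ("courier-9", 0.60, True)]
print(choose_transfer(current_delay_prob=0.7, candidates=candidates))   # -> "courier-7"
```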
Background: Maintenance of metal homeostasis is crucial in bacterial pathogenicity as metal starvation is the most important mechanism in the nutritional immunity strategy of host cells. Thus, pathogenic bacteria have evolved sensitive metal scavenging systems to overcome this particular host defence mechanism. The ruminant pathogen Mycobacterium avium ssp. paratuberculosis (MAP) displays a unique gut tropism and causes a chronic progressive intestinal inflammation. MAP possesses eight conserved lineage specific large sequence polymorphisms (LSP), which distinguish MAP from its ancestral M. avium ssp. hominissuis or other M. avium subspecies. LSP14 and LSP15 harbour many genes proposed to be involved in metal homeostasis and have been suggested to substitute for a MAP specific, impaired mycobactin synthesis.
Results: In the present study, we found that an LSP14-located putative IrtAB-like iron transporter encoded by mptABC was induced by zinc starvation but not by iron starvation. Heterologous reporter gene assays with the lacZ gene under control of the mptABC promoter in M. smegmatis (MSMEG) and in an MSMEGΔfurB deletion mutant revealed a zinc-dependent expression of mptABC, mediated by the metalloregulator FurB via a conserved mycobacterial FurB recognition site. Deep sequencing of RNA from MAP cultures treated with the zinc chelator TPEN revealed that 70 genes responded to zinc limitation. Remarkably, 45 of these genes were located on a large genomic island of approximately 90 kb which harboured LSP14 and LSP15. Thirty-five of these genes were predicted to be controlled by FurB, due to the presence of putative binding sites. This clustering of zinc-responsive genes was found exclusively in MAP and not in other mycobacteria.
Conclusions: Our data revealed a particular genomic signature of MAP in the form of a unique zinc-specific locus, suggesting an exceptional relevance of zinc for the metabolism of MAP. MAP appears to be well adapted to maintain zinc homeostasis, which might contribute to its distinctive pathogenicity.
Appropriate data models are essential for the systematic collection, aggregation, and integration of health data and for subsequent analysis. However, recommendations for modeling health data are often not made publicly available within specific projects. Therefore, the project Zukunftslabor Gesundheit investigates recommendations for such modeling. Interviews with five experts were conducted and analyzed using qualitative content analysis. Based on the condensed categories "governance", "modeling", and "standards", the project team generated eight hypotheses for recommendations on health data modeling. In addition, relevant framework conditions such as different roles, international cooperation, education/training, and political influence were identified. Although the results emerge from a small convenience sample of experts, they help to plan more extensive data collections and to create recommendations for health data modeling.
BACKGROUND: Even though physician rating websites (PRWs) have been gaining importance in both practice and research, little evidence is available on the association of patients' online ratings with the quality of care provided by physicians. It thus remains unclear whether patients should rely on these ratings when selecting a physician. The objective of this study was to measure the association between online ratings and structural and quality of care measures for 65 physician practices from the German Integrated Health Care Network "Quality and Efficiency" (QuE). METHODS: Online reviews from two German PRWs were included, covering a three-year period (2011 to 2013) with 1179 and 991 ratings, respectively. Information for the 65 QuE practices was obtained for the year 2012 and included 21 measures related to structural information (N = 6), process quality (N = 10), intermediate outcomes (N = 2), patient satisfaction (N = 1), and costs (N = 2). Spearman's rank correlation coefficient was applied to measure the association between ratings and practice-related information. RESULTS: Patient satisfaction results from offline surveys and the patients-per-doctor ratio in a practice were significantly associated with online ratings on both PRWs. For one PRW, additional significant associations could be shown between online ratings and cost-related measures for medication, preventative examinations, and one diabetes type 2-related intermediate outcome measure. In contrast, results from the second PRW showed significant associations with the age of the physicians and the number of patients per practice, four process-related quality measures for diabetes type 2 and asthma, and one cost-related measure for medication. CONCLUSIONS: Several significant associations were found, which varied between the PRWs. Patients interested in the satisfaction of other patients with a physician might select a physician on the basis of online ratings. Even though our results indicate associations with some diabetes and asthma measures, but not with coronary heart disease measures, there is still insufficient evidence to draw strong conclusions. The limited number of practices in our study may also have weakened our findings.
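For readers unfamiliar with the statistic used in the study, the following is a minimal illustration of computing Spearman's rank correlation with SciPy. The two arrays are placeholder values chosen only to show the call, not data from the study.

```python
# Illustrative computation of Spearman's rank correlation coefficient.
from scipy.stats import spearmanr

online_ratings = [1.2, 1.5, 2.0, 1.1, 1.8, 2.4]        # e.g. mean PRW grade per practice (placeholder)
offline_satisfaction = [88, 85, 76, 90, 80, 70]         # e.g. offline survey score per practice (placeholder)

rho, p_value = spearmanr(online_ratings, offline_satisfaction)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

Because the coefficient is rank-based, it captures monotonic associations between ratings and practice measures without assuming a linear relationship or normally distributed data, which suits the ordinal nature of online ratings.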