Objective
We aimed to investigate the proportion of young patients not returning to work (NRTW) at 1 year after ischemic stroke (IS) and during follow-up, and clinical factors associated with NRTW.
Methods
Patients from the Helsinki Young Stroke Registry with an IS occurring in the years 1994–2007, who were at paid employment within 1 year before IS, and with NIH Stroke Scale score ≤15 points at hospital discharge, were included. Data on periods of payment came from the Finnish Centre for Pensions, and death data from Statistics Finland. Multivariate logistic regression analyses assessed factors associated with NRTW 1 year after IS, and lasagna plots visualized the proportion of patients returning to work over time.
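The adjusted analysis described above can be sketched as follows. This is a minimal illustration with entirely synthetic data and invented variable names, not the registry's actual data or model; a plain gradient-descent fit stands in for the study's multivariate logistic regression.

```python
# Hypothetical sketch of an adjusted logistic regression for NRTW;
# all predictors and outcomes below are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 769  # cohort size reported in the study

# Invented predictors: age, sex, socioeconomic status, NIHSS at admission,
# and one binary clinical factor (e.g. moderate-to-severe aphasia).
X = np.column_stack([
    rng.normal(45, 8, n),          # age (years)
    rng.integers(0, 2, n),         # sex
    rng.integers(1, 4, n),         # socioeconomic status
    rng.integers(0, 16, n),        # NIHSS score (<=15 per inclusion criteria)
    rng.integers(0, 2, n),         # aphasia indicator
]).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize predictors
X = np.column_stack([np.ones(n), X])       # intercept column
y = rng.integers(0, 2, n).astype(float)    # NRTW at 1 year (synthetic)

# Fit by gradient ascent on the logistic log-likelihood.
beta = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - p) / n

odds_ratios = np.exp(beta[1:])  # adjusted odds ratio per predictor
print(odds_ratios.round(2))
```

Exponentiating the coefficients yields the adjusted odds ratios that such an analysis reports for each clinical factor.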
Results
We included a total of 769 patients, of whom 289 (37.6%) were not working at 1 year, 323 (42.0%) at 2 years, and 361 (46.9%) at 5 years from IS. When adjusted for age, sex, socioeconomic status, and NIH Stroke Scale score at admission, factors associated with NRTW at 1 year after IS were large anterior strokes, strokes caused by large artery atherosclerosis, high-risk sources of cardioembolism, and rare causes other than dissection compared with undetermined cause, moderate to severe aphasia vs no aphasia, mild and moderate to severe limb paresis vs no paresis, and moderate to severe visual field deficit vs no deficit.
Conclusions
NRTW is a frequent adverse outcome after IS in young adults with mild to moderate IS. Clinical variables available during acute hospitalization may allow prediction of NRTW.
Nitric oxide adsorption on a Au(100) single crystal has been investigated to identify the type of adsorption, the adsorption site, and the orientation and alignment of the adsorbed NO relative to the surface. This was done using a combination of 3D-surface velocity map imaging, near-ambient pressure X-ray photoelectron spectroscopy, and density functional theory. NO was observed to be molecularly adsorbed on gold at ~200 K. Very narrow angular distributions and cold rotational distributions of photodesorbed NO indicate that NO adsorbs on high-symmetry sites on the Au crystal, with the N–O bond axis close to the surface normal. Our density functional theory calculations show that NO preferentially adsorbs on the symmetric bridge (2f) site, which ensures efficient overlap of the NO π* orbital with the orbitals on the two neighbouring Au atoms, and with the N–O bond axis aligned along the surface normal, in agreement with our conclusions from the rotational state distributions. The combination of XPS, which reveals the orientation of NO on gold, with 3D-surface velocity map imaging and density functional theory thus allowed us to determine the adsorption site, orientation and alignment of nitric oxide adsorbed on Au(100).
Background
Uncomplicated urinary tract infections (UTI) are common in general practice and usually treated with antibiotics. This contributes to increasing resistance rates of uropathogenic bacteria. A previous trial showed a reduction of antibiotic use in women with UTI by initial symptomatic treatment with ibuprofen. However, this treatment strategy is not suitable for all women equally. Arctostaphylos uva-ursi (UU, bearberry extract arbutin) is a potential alternative treatment. This study aims at investigating whether an initial treatment with UU in women with UTI can reduce antibiotic use without significantly increasing the symptom burden or rate of complications.
Methods
This is a double-blind, randomized, and controlled comparative effectiveness trial. Women between 18 and 75 years with suspected UTI and at least two of the symptoms dysuria, urgency, frequency or lower abdominal pain will be assessed for eligibility in general practice and enrolled into the trial. Participants will receive either a defined daily dose of 3 × 2 arbutin 105 mg for 5 days (intervention) or fosfomycin 3 g once (control). Antibiotic therapy will be provided in the intervention group only if needed, i.e. for women with worsening or persistent symptoms. Two co-primary outcomes are the number of all antibiotic courses regardless of the medical indication from day 0–28, and the symptom burden, defined as a weighted sum of the daily total symptom scores from day 0–7. The trial result is considered positive if superiority of initial treatment with UU is demonstrated with reference to the co-primary outcome number of antibiotic courses and non-inferiority of initial treatment with UU with reference to the co-primary outcome symptom burden.
Discussion
The trial’s aim is to investigate whether initial treatment with UU is a safe and effective alternative treatment strategy in women with UTI. If so, the results might change the existing treatment strategy in general practice by promoting delayed prescription of antibiotics and a reduction of antibiotic use in primary care.
For indexing archived documents the Dutch Parliament uses a specialized thesaurus. For good results in full-text retrieval and automatic classification, it turns out to be important to add more synonyms to the existing thesaurus terms. In the present work we investigate the possibilities of finding synonyms for terms of the parliament's thesaurus automatically. We propose to use distributional similarity (DS). In an experiment with pairs of synonyms and non-synonyms we train and test a classifier using distributional similarity and string similarity. Using ten-fold cross-validation we were able to classify 75% of a set of 6000 word pairs correctly.
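The two similarity signals described above can be sketched roughly as follows. The Dutch terms, the toy co-occurrence vectors, and the simple threshold rule are all invented for illustration; the paper trains an actual classifier on real corpus statistics.

```python
# Illustrative sketch: combining distributional similarity (cosine of
# co-occurrence vectors) with string similarity to judge synonym pairs.
# All terms, vectors and the threshold are hypothetical.
from difflib import SequenceMatcher
import numpy as np

def cosine(u, v):
    """Distributional similarity: cosine of co-occurrence context vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def string_sim(a, b):
    """String similarity: difflib's matching-block ratio."""
    return SequenceMatcher(None, a, b).ratio()

# Toy context vectors (rows: co-occurrence counts with some context words).
vectors = {
    "motie":     np.array([3.0, 1.0, 0.0, 2.0]),
    "voorstel":  np.array([2.0, 1.0, 0.0, 3.0]),
    "begroting": np.array([0.0, 4.0, 5.0, 0.0]),
}

# A simple decision rule stands in for the trained classifier: a pair is
# predicted synonymous if either similarity signal exceeds a threshold.
def is_synonym(a, b, threshold=0.6):
    ds = cosine(vectors[a], vectors[b])
    ss = string_sim(a, b)
    return max(ds, ss) >= threshold

print(is_synonym("motie", "voorstel"))   # distributionally similar pair
print(is_synonym("motie", "begroting"))  # dissimilar pair
```

In the paper both signals are instead fed as features to a classifier evaluated with ten-fold cross-validation; the threshold rule here only illustrates how the two signals complement each other.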
Background: After kidney transplantation, immunosuppressive therapy causes impaired cellular immune defense, leading to an increased risk of viral complications. Trough-level monitoring of immunosuppressants is insufficient to estimate the individual intensity of immunosuppression. We have already shown that virus-specific T cells (Tvis) correlate with control of virus replication as well as with the intensity of immunosuppression. The multicentre IVIST01 trial is designed to show that additionally steering immunosuppressive and antiviral therapy by Tvis levels leads to better graft function by avoiding over-immunosuppression (for example, viral infections) and drug toxicity (for example, nephrotoxicity).
Methods/design: The IVIST trial starts 4 weeks after transplantation. Sixty-four pediatric kidney recipients are randomized either to a non-intervention group that is treated conservatively or to an intervention group with additional monitoring by Tvis. The randomization is stratified by centre and cytomegalovirus (CMV) prophylaxis. In both groups the immunosuppressive medication (cyclosporine A and everolimus) is adjusted to the same target range of trough levels. In the non-intervention group the immunosuppressive therapy is steered only by classical trough-level monitoring, and the antiviral therapy of a CMV infection is performed according to a standard protocol. In contrast, in the intervention group the dose of immunosuppressants is individually adjusted according to Tvis levels as a direct measure of the intensity of immunosuppression, in addition to classical trough-level monitoring. In case of CMV infection or reactivation, the antiviral management is based on the individual CMV-specific immune defense assessed by the CMV-Tvis level. The primary endpoint of the study is the glomerular filtration rate 2 years after transplantation; secondary endpoints are the number and severity of viral infections and the incidence of side effects of immunosuppressive and antiviral drugs.
Discussion: This IVIST01-trial will answer the question whether the new concept of steering immunosuppressive and antiviral therapy by Tvis levels leads to better future graft function. In terms of an effect-related drug monitoring, the study design aims to realize a personalization of immunosuppressive and antiviral management after transplantation. Based on the IVIST01-trial, immunomonitoring by Tvis might be incorporated into routine care after kidney transplantation.
Introduction:
Human immunodeficiency virus (HIV) infection remains a prevalent co-morbidity, including among fracture patients. Few studies have investigated the role of exercise interventions in preventing bone demineralization in people who have both fractures and HIV. With exposure to exercise, HIV-infected individuals may experience improved bone mineral density (BMD), function, and quality of life (QoL). This study aims to assess the impact of home-based exercises on bone mineral density, functional capacity, QoL, and selected serological markers of health in HIV infection among Nigerians and South Africans.
Methods and design:
The study is an assessor-blinded randomized controlled trial. Patients managed with internal and external fixation for femoral shaft fracture at the study sites will be recruited to participate in the study. The participants will be recruited 2 weeks post-discharge at the follow-up clinic with the orthopaedic surgeon. The study population will consist of all persons with femoral fracture, both HIV-positive (medically confirmed) and HIV-negative, aged 18 to 60 years, attending the above-named health facilities. For the HIV-positive participants, a documented positive HIV result, as well as a history of being followed up at the HIV treatment and care center, will be required. A home-based exercise programme developed for this study will be implemented in the experimental group, while the control group continues with the usual rehabilitation programme. The primary outcome measures will be function, gait, bone mineral density, physical activity, and QoL.
Discussion:
The proposed trial will compare the effect of a home-based physical exercise-training programme in the management of femoral fracture to the usual physiotherapy management programmes with specific outcomes of bone mineral density, function, and inflammatory markers.
Background: Autism spectrum disorder (ASD) is a group of complex neurodevelopmental syndromes characterized by impairments in social communication, restricted and repetitive behaviors, impaired language development, and atypical patterns of interests or activities. Its diverse phenotypes reveal considerable etiological and clinical heterogeneity, and it is also considered one of the most heritable disorders (over 90%). Genetic, epigenetic, and environmental factors play a role in the development of ASD.
Aim: This study was designed to investigate the extent of DNA damage in parents of autistic children by treating peripheral blood mononuclear cells (PBMCs) with bleomycin and hydrogen peroxide (H2O2).
Methods: Peripheral blood mononuclear cells (PBMCs) were isolated by the Ficoll method and treated with a specific concentration of bleomycin and H2O2 for 30 min and 5 min, respectively. Then, the degree of DNA damage was analyzed by the alkaline comet assay or single cell gel electrophoresis (SCGE), an effective way to measure DNA fragmentation in eukaryotic cells.
Results: Our findings revealed a significant increase in DNA damage in parents of affected children compared to the control group, which may indicate an impaired DNA repair capacity. Furthermore, our study showed a significant association between fathers’ occupational exposure to environmental factors, as well as consanguineous (family) marriage, and ASD in the offspring.
Conclusion: Our results suggest that the influence of environmental factors on parents of autistic children may affect the development of autistic disorder in their offspring. Based on our results, further studies are required to investigate the effect of environmental factors on the extent of DNA damage in parents of affected children.
Objective
The study’s objective was to assess factors contributing to the use of smart devices by general practitioners (GPs) and patients in the health domain, while specifically addressing the situation in Germany, and to determine whether, and if so, how both groups differ in their perceptions of these technologies.
Methods
GPs and patients of resident practices in the Hannover region, Germany, were surveyed between April and June 2014. A total of 412 GPs in this region were invited by email to participate via an electronic survey, with 50 GPs actually doing so (response rate 12.1%). For surveying the patients, eight regional resident practices were visited by study personnel (once each). Every second patient arriving there (inclusion criteria: of legal age, fluent in German) was asked to take part (paper-based questionnaire). One hundred and seventy patients participated; 15 patients who did not give consent were excluded.
Results
The majority of the participating patients (68.2%, 116/170) and GPs (76%, 38/50) owned mobile devices. Of the patients, 49.9% (57/116) already made health-related use of mobile devices; 95% (36/38) of the participating GPs used them in a professional context. For patients, age (P < 0.001) and education (P < 0.001) were significant factors, but not gender (P > 0.99). For doctors, neither age (P = 0.73), professional experience (P > 0.99) nor gender (P = 0.19) influenced usage rates. For patients, the primary use case was obtaining health (service)-related information. For GPs, interprofessional communication and retrieving information were in the foreground. There was little app-related interaction between both groups.
Conclusions
GPs and patients use smart mobile devices to serve their specific interests. However, the full potentials of mobile technologies for health purposes are not yet being taken advantage of. Doctors as well as other care providers and the patients should work together on exploring and realising the potential benefits of the technology.
The properties of these carbon nanostructures are determined by the structure and orientation of the graphitic domains formed during pyrolysis of carbon precursors. In this work, we systematically investigated the impact of creep stress during the stabilization process on the cyclization and molecular orientation of polyacrylonitrile, as well as on the graphitized structure after high-temperature carbonization. To this end, polyacrylonitrile (PAN) was electrospun and then stabilized with and without the application of creep stress at different temperatures. The effect of creep stress on cyclization was monitored via Fourier transform IR spectroscopy (FTIR); the degree of cyclization was found to vary with the application of creep stress during the initial stages of cyclization at low temperatures (190 °C and 210 °C), in contrast to cyclization at higher temperature (230 °C). The Herman molecular orientation factor was evaluated by polarized FTIR for PAN nanofibers cyclized with and without creep stress at 230 °C for 10 h. Subsequently, carbonization was performed at 1000 °C and 1200 °C for nanofibers cyclized at 230 °C for 10 h. Our results from XRD and Raman spectroscopy show that the degree of graphitization and the ordering of graphitic domains were enhanced for PAN nanofibers that were creep-stressed during the cyclization process, even though nanofibers cyclized with and without creep stress showed the same amount of cyclized material. This increased degree of graphitization can be traced to the application of creep stress during the stabilization process, which evidently favors the formation of sp2-hybridized carbon planes during carbonization. This finding highlights the impact of mechanical stress in linking the cyclization of PAN nanofibers to graphitization.
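For reference, the Herman molecular orientation factor mentioned above is commonly defined as (a standard textbook relation, not a formula quoted from this abstract):

```latex
f = \frac{3\,\langle \cos^{2}\theta \rangle - 1}{2}
```

where θ is the angle between the molecular chain axis and the fiber axis; f = 1 corresponds to perfect alignment along the fiber axis, f = 0 to random orientation, and f = −1/2 to perpendicular alignment.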
Our results will pave the way for a deeper understanding of mechano-chemical processes to fabricate well-aligned graphitic domains, which improve the mechanical and electrical properties of CNFs.
Improving the graphitic structure in carbon nanofibers (CNFs) is important for exploiting their potential in mechanical, electrical and electrochemical applications. Typically, the synthesis of carbon fibers with a highly graphitized structure demands a high temperature of almost 2500 °C. Furthermore, to achieve an improved graphitic structure, the stabilization of a precursor fiber has to be assisted by tension in order to enhance the molecular orientation. Keeping this in view, herein we report on the fabrication of graphene nanoplatelet (GNP) doped carbon nanofibers using electrospinning followed by oxidative stabilization and carbonization. The effect of GNP doping on the graphitic structure was investigated by carbonizing the fibers at various temperatures (1000 °C, 1200 °C, 1500 °C and 1700 °C). Additionally, stabilization was carried out with and without constant creep stress (only shrinkage stress) for both pristine and doped precursor nanofibers, which were eventually carbonized at 1700 °C. Our findings reveal that GNP doping improves the graphitic structure of polyacrylonitrile (PAN). Further, in addition to their templating effect during the nucleation and growth of graphitic crystals, the GNPs encapsulated in the PAN nanofiber matrix act in situ as micro-clamps, anchoring the chains and preventing the loss of molecular orientation during the stabilization stage when no external tension is applied to the nanofiber mats. The templating effect on the entire graphitization process is reflected by an increased electrical conductivity along the fibers. Simultaneously, the electrical anisotropy is reduced, i.e., the GNPs provide effective pathways with improved conductivity, acting like bridges between the nanofibers and resulting in an improved conductivity across the fiber direction compared to the pristine PAN system.
The reactivity of graphene at its boundary region has been imaged using non-linear spectroscopy to address the controversy whether the terraces of graphene or its edges are more reactive. Graphene was functionalised with phenyl groups, and we subsequently scanned our vibrational sum-frequency generation setup from the functionalised graphene terraces across the edges. A greater phenyl signal is clearly observed at the edges, showing evidence of increased reactivity in the boundary region. We estimate an upper limit of 1 mm for the width of the CVD graphene boundary region.
We report the unambiguous detection of phenyl groups covalently attached to functionalised graphene using non-linear spectroscopy. Sum-frequency generation was employed to probe graphene on a gold surface after chemical functionalisation using a benzene diazonium salt. We observe a distinct resonance at 3064 cm⁻¹ which can clearly be assigned to an aromatic C–H stretch by comparison with a self-assembled monolayer on a gold substrate formed from benzenethiol. Not only does sum-frequency generation spectroscopy allow one to characterise functionalised graphene with higher sensitivity and much better specificity than many other spectroscopic techniques, but it also opens up the possibility to assess the coverage of graphene with functional groups, and to determine their orientation relative to the graphene surface.
Digital data on tangible and intangible cultural assets is an essential part of daily life, communication and experience. It has a lasting influence on the perception of cultural identity as well as on the interactions between research, the cultural economy and society. Throughout the last three decades, many cultural heritage institutions have contributed a wealth of digital representations of cultural assets (2D digital reproductions of paintings, sheet music, 3D digital models of sculptures, monuments, rooms, buildings), audio-visual data (music, film, stage performances), and procedural research data such as encoding and annotation formats. The long-term preservation and FAIR availability of research data from the cultural heritage domain is fundamentally important, not only for future academic success in the humanities but also for the cultural identity of individuals and society as a whole. Up to now, no coordinated effort for professional research data management on a national level exists in Germany. NFDI4Culture aims to fill this gap and create a user-centered, research-driven infrastructure that will cover a broad range of research domains from musicology, art history and architecture to performance, theatre, film, and media studies.
The research landscape addressed by the consortium is characterized by strong institutional differentiation. Research units in the consortium's community of interest comprise university institutes, art colleges, academies, galleries, libraries, archives and museums. This diverse landscape is also characterized by an abundance of research objects and methodologies and a great potential for data-driven research. In a unique effort carried out by the applicant and co-applicants of this proposal and ten academic societies, this community is interconnected for the first time through a federated approach that is ideally suited to the needs of the participating researchers. To promote collaboration within the NFDI, to share knowledge and technology and to provide extensive support for its users have been the guiding principles of the consortium from the beginning and will be at the heart of all workflows and decision-making processes. Thanks to these principles, NFDI4Culture has gathered strong support ranging from individual researchers to high-level cultural heritage organizations such as UNESCO, the International Council of Museums, the Open Knowledge Foundation and Wikimedia. On this basis, NFDI4Culture will take innovative measures that promote a cultural change towards a more reflective and sustainable handling of research data and at the same time boost qualification and professionalization in data-driven research in the domain of cultural heritage. This will create a long-lasting impact on science, cultural economy and society as a whole.
Background: Health information systems (HIS) are one of the most important areas of biomedical and health informatics. In order to deal professionally with HIS, well-educated informaticians are needed. For these reasons, an international course was established in 2001: the Frank – van Swieten Lectures on Strategic Information Management of Health Information Systems.
Objectives: Reporting about the Frank – van Swieten Lectures and about our students' feedback on this course during the last 16 years. Summarizing our lessons learned and making recommendations for such international courses on HIS.
Methods: The basic concept of the Frank – van Swieten Lectures is to teach the theoretical background in local lectures, to organize practical exercises on modelling sub-information systems of the respective local HIS, and finally to conduct the Joint Three Days, an international meeting where the resulting models are introduced and compared.
Results: During the last 16 years, the Universities of Amsterdam, Braunschweig, Heidelberg/Heilbronn, Leipzig as well as UMIT were involved in running this course. Overall, 517 students from these universities participated. Our students' feedback was clearly positive.
The Joint Three Days of the Frank – van Swieten Lectures, where at the end of the course all students can meet, turned out to be an important component of this course. Based on the last 16 years, we recommend common teaching materials, agreement on equivalent clinical areas for the exercises, support for building international student groups, motivating the use of a collaboration platform, ensuring quality management of the course, addressing the students' different levels of knowledge, and ensuring sufficient funding for joint activities.
Conclusions: Although associated with considerable additional efforts, we can clearly recommend establishing such international courses on HIS, such as the Frank – van Swieten Lectures.
Background: In Germany, hospice and palliative care is well covered through inpatient, outpatient, and home-based care services. It is unknown if, and to what extent, there is a need for additional day care services to meet the specific needs of patients and caregivers.
Methods: Two day hospices and two palliative day care clinics were selected. In the first step, two managers from each facility (n = 8) were interviewed by telephone, using a semi-structured interview guide. In the second step, four focus groups were conducted, each with three to seven representatives of hospice and palliative care from the facilities’ hospice and palliative care networks. Interviews and focus groups were audio recorded, transcribed verbatim and analyzed using qualitative content analysis.
Results: The interviewed experts perceived day care services as providing additional patient and caregiver benefits. Specifically, the services were perceived to meet patient needs for social interaction and bundled treatments, especially for patients who did not fit into inpatient settings (due to, e.g., their young age or a lack of desire for inpatient admission). The services were also perceived to meet caregiver needs for support, providing short-term relief for the home care situation.
Conclusions: The results suggest that inpatient, outpatient, and home-based hospice and palliative care services do not meet the palliative care needs of all patients. Although the population that is most likely to benefit from day care services is assumed to be relatively small, such services may meet the needs of certain patient groups more effectively than other forms of care.
FID Civil Engineering, Architecture and Urbanism digital - A platform for science (BAUdigital)
(2022)
University Library Braunschweig (UB Braunschweig), University and State Library Darmstadt (ULB Darmstadt), TIB – Leibniz Information Centre for Technology and Natural Sciences and the Fraunhofer Information Centre for Planning and Building (Fraunhofer IRB) are jointly establishing a specialised information service (FID, "Fachinformationsdienst") for the disciplines of civil engineering, architecture and urbanism. The FID BAUdigital, which is funded by the German Research Foundation (DFG, "Deutsche Forschungsgemeinschaft"), will provide researchers working on digital design, planning and production methods in construction engineering with a joint information, networking and data exchange platform and support them with innovative services for documentation, archiving and publication in their data-based research.
In this paper, five ontologies that include event concepts are described. The paper provides an overview and comparison of existing event models. The main criteria for comparison are that it should be possible to model events extended in time and location and involving the participation of objects; however, other factors are taken into account as well. The paper also shows an example of using ontologies in complex event processing.
OSGi in Cloud Environments
(2013)
The paper provides a comprehensive overview of modeling and pricing cyber insurance and includes clear and easily understandable explanations of the underlying mathematical concepts. We distinguish three main types of cyber risks: idiosyncratic, systematic, and systemic cyber risks. While for idiosyncratic and systematic cyber risks, classical actuarial and financial mathematics appear to be well-suited, systemic cyber risks require more sophisticated approaches that capture both network and strategic interactions. In the context of pricing cyber insurance policies, issues of interdependence arise for both systematic and systemic cyber risks; classical actuarial valuation needs to be extended to include more complex methods, such as concepts of risk-neutral valuation and (set-valued) monetary risk measures.
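As a rough illustration of the distinction drawn above (these are standard textbook forms, not formulas quoted from the paper): an idiosyncratic cyber loss L might be priced with a classical actuarial premium principle, while interdependent risks call for risk-neutral valuation under a pricing measure Q:

```latex
\pi_{\text{classical}} = (1+\theta)\,\mathbb{E}^{P}[L],
\qquad
\pi_{\text{risk-neutral}} = \mathbb{E}^{Q}\!\left[e^{-rT} L\right]
```

where θ is a safety loading on the expected loss under the real-world measure P, r is the risk-free rate, and T is the policy horizon.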
The objective of this study is to analyze noise patterns during 599 visceral surgical procedures. Considering work-safety regulations, we identify noise patterns inherent in major visceral surgeries. Increased noise levels are known to have negative health impacts. Based on very fine-grained data collection over a year, this study introduces a new procedure for the visual representation of intra-surgery noise progression and paves new paths for future research on noise reduction in visceral surgery. Digital decibel sound-level meters were used to record the total noise in three operating theatres in one-second cycles over a year. These data were matched to archival data on surgery characteristics. Because surgeries inherently vary in length, we developed a new procedure to normalize surgery times so as to run cross-surgery comparisons. Based on this procedure, dBA values were adjusted to each normalized time point. Noise-level patterns are presented for surgeries contingent on important surgery characteristics: 16 different surgery types, operation method, day/night time point and operation complexity (complexity levels 1–3). This serves to cover a wide spectrum of day-to-day surgeries. The noise patterns reveal significant sound-level differences of about 1 dBA, with the most common noise level lying between 55 and 60 dBA. This indicates a sound situation in many of the surgeries studied that is likely to cause stress in patients and staff. Absolute and relative risks of meeting or exceeding 60 dBA differ considerably across operation types. In conclusion, the study reveals that maximum noise levels of 55 dBA are frequently exceeded during visceral surgical procedures. Complex surgeries in particular show, on average, higher noise exposure. Our findings warrant active noise management for visceral surgery to reduce the potential negative impacts of noise on surgical performance and outcome.
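The time-normalization step described above can be sketched as follows. The per-second dBA series here are synthetic, and linear interpolation with `np.interp` merely stands in for whatever resampling procedure the study actually used.

```python
# Sketch: mapping surgeries of different lengths onto a common 0-100%
# timeline so their noise profiles can be compared point by point.
# The dBA series below are synthetic; np.interp is an assumed stand-in.
import numpy as np

def normalize_surgery(dba_per_second, n_points=101):
    """Resample a per-second dBA series onto n_points normalized time points."""
    t_original = np.linspace(0.0, 1.0, len(dba_per_second))
    t_normalized = np.linspace(0.0, 1.0, n_points)
    return np.interp(t_normalized, t_original, dba_per_second)

# Two synthetic surgeries of different durations (noise values in dBA).
short_op = 55 + 5 * np.sin(np.linspace(0, np.pi, 1800))   # ~30 min
long_op  = 55 + 5 * np.sin(np.linspace(0, np.pi, 7200))   # ~120 min

# After normalization, each surgery contributes one row of equal length,
# so dBA values can be compared or averaged at each normalized time point.
profiles = np.vstack([normalize_surgery(short_op), normalize_surgery(long_op)])
mean_profile = profiles.mean(axis=0)
print(mean_profile.shape)
```

Averaging or plotting such normalized rows is what enables the cross-surgery comparisons and visual noise-progression representations the study describes.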
High-performance firms typically have two features in common: (i) they produce in more than one country and (ii) they produce more than one product. In this paper, we analyze the internationalization strategies of multi-product firms. Guided by several new stylized facts, we develop a theoretical model to determine optimal modes of market access at the firm–product level. We find that the most productive firms sell core varieties via foreign direct investment and export products with intermediate productivity. Shocks to trade costs and technology affect the endogenous decision to export or produce abroad at the product level and, in turn, the relative productivity between parents and affiliates.
The aim of the podcast Digitization of Medicine is to interest a broader audience and, in particular, young women, in research and work in the field of medical informatics. This article presents the usage figures and discusses their significance for further research on the success of science communication. By 24/02/2022, a total of 24,351 downloads had been made. There were slightly more female than male listeners, and they tended to be younger. Despite the importance podcasts are gaining for science communication, little is known about the respective user group, and further research is needed. In this context, this paper aims to help make the effectiveness of podcasts comparable.
Quartz-crystal microbalances (QCMs) are commercially available mass sensors that mainly consist of a quartz resonator oscillating at a characteristic frequency, which shifts when the mass changes due to surface binding of molecules. In addition to mass changes, the viscosity of gases or liquids in contact with the sensor shifts the resonance and also lowers the quality factor (Q-factor). Typical biosensor applications demand operation in liquid environments, where viscous damping strongly lowers the Q-factor. To obtain reliable measurements in liquid environments, excellent resonator control and signal processing are essential, but standard resonator circuits like the Pierce and Colpitts oscillators fail to establish stable resonances. Here we present a low-cost, compact and robust oscillator circuit comprising state-of-the-art commercially available surface-mount technology components, which stimulates the QCM's oscillation while also establishing a control loop that regulates the applied voltage. Thereby, increased energy dissipation caused by strong viscous damping in liquid solutions can be compensated and oscillations are stabilized. The presented circuit is suitable for use in compact biosensor systems using custom-made miniaturized QCMs in microfluidic environments. As a proof of concept, we used this circuit in combination with a customized microfabricated QCM in a microfluidic environment to measure the concentration of C-reactive protein (CRP) in buffer (PBS) down to concentrations as low as 5 μg mL−1.
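The mass-frequency relationship underlying QCM sensing is commonly modeled by the Sauerbrey equation, and the additional shift caused by a Newtonian liquid by the Kanazawa–Gordon relation (standard textbook results quoted for context; the circuit design presented here does not depend on them):

```latex
\Delta f = -\frac{2 f_0^{2}}{A\sqrt{\rho_q \mu_q}}\,\Delta m,
\qquad
\Delta f_{\mathrm{liq}} = -f_0^{3/2}\sqrt{\frac{\rho_L\,\eta_L}{\pi\,\rho_q\,\mu_q}},
```

where $f_0$ is the fundamental resonance frequency, $A$ the electrode area, $\rho_q$ and $\mu_q$ the density and shear modulus of quartz, and $\rho_L$, $\eta_L$ the density and viscosity of the liquid. The second relation is the reason operation in liquids both shifts the resonance and dissipates energy.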
Research question: Rivalries in team sports are commonly conceptualized as a threat to the fans’ identity. Therefore, past research has mainly focused on the negative consequences. However, theoretical arguments and empirical evidence suggest that rivalry has both negative and positive effects on fans’ self-concept. This research develops and empirically tests a model which captures and integrates these dual effects of rivalry.
Research methods: Data were collected via an on-site survey at home games of eight German Bundesliga football teams (N = 571). Structural equation modeling provides strong support for the proposed model.
Results and findings: In line with previous research, the results show that rivalry threatens fans’ identity as reflected in lower public collective self-esteem in relation to supporters of the rival team. However, the results also show that there are crucial positive consequences, such as higher perceptions of public collective self-esteem in relation to supporters of non-rival opponents, perceived ingroup distinctiveness and ingroup cohesion. These positive effects are mediated through increases in disidentification with the rival and perceived reciprocity of rivalry.
Implications: We contribute to the literature by providing a more balanced view of one of team sports’ key phenomena. Our results indicate that the prevalent conceptualization of rivalry as an identity threat should be amended by the positive consequences. Our research also offers guidance for the promotion of rivalries, where the managerial focus should be on creating a perception that a rivalry is reciprocal.
Research question: In order to reduce fan aggression surrounding rivalry games, team sport organizations often try to placate fans by downplaying the importance of the game (e.g. ‘the derby is not a war’). Drawing on the intergroup conflict literature, this research derives dual identity statements and examines their effectiveness in reducing fan aggressiveness compared to the managerial practice of downplaying rivalry.
Research methods: Three field experimental studies (one face-to-face survey and two online surveys) tested the hypotheses. Established rivalries in the German soccer league Bundesliga served as the empirical setting of the studies. The data were analyzed using ANCOVA and linear regression analyses.
Results and findings: Dual identity statements reduce fan aggressiveness compared to both downplay statements and a no-statement control condition, independent of team identification and trait aggression. Importantly, the managerial practice of downplaying rivalry appears to be counterproductive. It produces even higher levels of fan aggressiveness than making no statement, an effect caused by psychological reactance.
Implications: Sport organizations should not alienate their fan base by attempting to play down the importance of rivalry, which is an integral part of fan identity. Instead, they should strengthen the supporters’ unique identity (as fans of a particular team) while at the same time facilitating identification with the rival at a superordinate level (e.g. as joint fans of a region).
Marketing, get ready to rumble — How rivalry promotes distinctiveness for brands and consumers
(2018)
Scholars typically advise brands to stay away from public conflict with competitors, as research has focused on negative consequences, e.g., price wars, escalating hostilities, and derogation. This research distinguishes between rivalry between firms (inter-firm brand rivalry) and rivalry between consumers (inter-consumer brand rivalry). Four studies and six samples show that both types of rivalry can have positive consequences for both firms and consumers. Inter-firm brand rivalry boosts the perceived distinctiveness of competing brands independent of consumption, attitude, familiarity, and involvement. Inter-consumer brand rivalry increases consumer group distinctiveness, an effect mediated by brand identification and rival brand disidentification. We extend social identity theory by demonstrating that: 1) outside actors like firms can promote inter-consumer rivalry through inter-firm rivalry and 2) promoting such conflict can actually provide benefits to consumers as well as firms. The paper challenges the axiom “never knock the competition,” deriving a counter-intuitive way to accomplish one of marketing's premier objectives.
Social comparison theories suggest that ingroups are strengthened whenever important outgroups are weakened (e.g., by losing status or power). It follows that ingroups have little reason to help outgroups facing an existential threat. We challenge this notion by showing that ingroups can also be weakened when relevant comparison outgroups are weakened, which can motivate ingroups to strategically offer help to ensure the outgroups' survival as a highly relevant comparison target. In three preregistered studies, we showed that an existential threat to an outgroup with high (vs. low) identity relevance affected strategic outgroup helping via two opposing mechanisms. The potential demise of a highly relevant outgroup increased participants’ perceptions of ingroup identity threat, which was positively related to helping. At the same time, the outgroup’s misery evoked schadenfreude, which was negatively related to helping. Our research exemplifies a group's secret desire for strong outgroups by underlining their importance for identity formation.
According to the third-person effect or the influence of presumed media influence approach, the presumption that the media have strong effects on other people can affect individuals’ attitudes and behavior. For instance, if people believe in strong media influences on others, they are more likely to increase their communication activities or support demands for restrictions on media. A standardized online survey among German journalists (N = 960) revealed that the stronger the journalists perceive the political online influence on the public to be, the more frequently they contradict unwanted political views in their articles. Moreover, the stronger journalists believe the effects of online media to be, the more likely even they are to approve of restrictions on the Internet’s political influence. The data reveal no connection between communication activities and demands for restrictions.
Enterprise apps on mobile devices typically need to communicate with other system components by consuming web services. Since most current mobile device platforms (such as Android) do not provide built-in features for consuming SOAP services, extensions have to be designed. Additionally, in order to accommodate the typically enhanced security requirements of enterprise apps, it is important to be able to deal with SOAP web service security extensions on the client side. In this article we show that neither the built-in SOAP capabilities of Android web service clients are sufficient for enterprise apps, nor are the necessary security features supported by the platform as is. After discussing existing extensions that make Android devices SOAP-capable, we explain why none of them is really satisfactory in an enterprise context. We then present our own solution, which accommodates not only SOAP but also the WS-Security features on top of SOAP. Our solution relies heavily on code generation in order to keep the flexibility benefits of SOAP while keeping the development effort manageable. Our approach also provides a good foundation for implementing other SOAP extensions apart from security on the Android platform. In addition, our solution, based on the gSOAP framework, may be used for other mobile platforms in a similar manner.
Music streaming platforms offer music listeners an overwhelming choice of music. Therefore, users of streaming platforms need the support of music recommendation systems to find music that suits their personal taste. Currently, a new class of recommender systems based on knowledge graph embeddings promises to improve the quality of recommendations, in particular to provide diverse and novel recommendations. This paper investigates how knowledge graph embeddings can improve music recommendations. First, it is shown how a collaborative knowledge graph can be derived from open music data sources. Based on this knowledge graph, the music recommender system EARS (knowledge graph Embedding-based Artist Recommender System) is presented in detail, with particular emphasis on recommendation diversity and explainability. Finally, a comprehensive evaluation with real-world data is conducted, comparing different embeddings and investigating the influence of different types of knowledge.
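The abstract does not spell out which embedding model EARS uses; as one common representative, a TransE-style model scores a triple (head, relation, tail) as plausible when h + r is close to t, and recommendations rank artists by the plausibility of a hypothetical (user, listenedTo, artist) triple. A minimal sketch (all names and vectors are made up for illustration):

```python
# Illustrative sketch (not the EARS implementation) of recommendation
# with TransE-style knowledge graph embeddings.

def transe_score(h, r, t):
    """Negative L1 distance ||h + r - t||_1; higher means more plausible."""
    return -sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

def recommend(user_vec, listened_rel, artist_vecs, top_k=2):
    """Rank artist embeddings by triple plausibility for this user."""
    ranked = sorted(artist_vecs,
                    key=lambda name: transe_score(user_vec, listened_rel,
                                                  artist_vecs[name]),
                    reverse=True)
    return ranked[:top_k]
```

Because scores are computed over the whole graph rather than co-listening histories alone, such models can surface long-tail artists, which relates to the diversity and novelty goals mentioned above.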
Mixed-integer NMPC for real-time supervisory energy management control in residential buildings
(2023)
In recent years, building energy supply and distribution systems have become more complex, with an increasing number of energy generators, stores, flows, and possible combinations of operating modes. This poses challenges for supervisory control, especially when balancing the conflicting goals of maximizing comfort while minimizing costs and emissions to contribute to global climate protection objectives. Mixed-integer nonlinear model predictive control is a promising approach for intelligent real-time control that is able to properly address the specific characteristics and restrictions of building energy systems. We present a strategy that utilizes a decomposition approach, combining partial outer convexification with the Switch-Cost Aware Rounding procedure to handle switching behavior and operating time constraints of building components in real-time. The efficacy is demonstrated through practical applications in a single-family home with a combined heat and power unit and in a multi-family apartment complex with 18 residential units. Simulation studies show high correspondence to globally optimal solutions with significant cost savings potential of around 19%.
Background: One of the major challenges in pediatric intensive care is the detection of life-threatening health conditions under acute time constraints and performance pressure. This includes the assessment of pediatric organ dysfunction (OD) that demands extraordinary clinical expertise and the clinician’s ability to derive a decision based on multiple information and data sources. Clinical decision support systems (CDSS) offer a solution to support medical staff in stressful routine work. Simultaneously, detection of OD by using computerized decision support approaches has been scarcely investigated, especially not in pediatrics.
Objectives: The aim of the study is to enhance an existing, interoperable, and rule-based CDSS prototype for tracing the progression of sepsis in critically ill children by augmenting it with the capability to detect SIRS/sepsis-associated hematologic OD, and to determine its diagnostic accuracy.
Methods: We reproduced an interoperable CDSS approach previously introduced by our working group: (1) a knowledge model was designed following the CommonKADS methodology, (2) routine care data were semantically standardized and harmonized using openEHR as the clinical information standard, (3) rules were formulated and implemented in a business rule management system. Data from a prospective diagnostic study including 168 patients were used to estimate the diagnostic accuracy of the rule-based CDSS, using the clinicians' diagnoses as the reference.
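The flavor of such a declarative rule can be sketched as follows. The cut-offs below are illustrative assumptions for this sketch only; the actual rules of the described prototype live in a business rule management system on top of openEHR-standardized data and are not reproduced here.

```python
# Minimal sketch of one rule for hematologic organ dysfunction.
# Both thresholds are ASSUMED values for illustration, not the
# prototype's clinical rule set.

PLATELET_CUTOFF = 80_000   # platelets/µL (assumed threshold)
INR_CUTOFF = 2.0           # international normalized ratio (assumed)

def hematologic_od(platelets_per_ul, inr, previous_platelets=None):
    """Fire the rule if platelets fall below the cut-off, have dropped
    by at least 50% versus a recent value, or the INR exceeds its
    cut-off."""
    if platelets_per_ul < PLATELET_CUTOFF:
        return True
    if previous_platelets and platelets_per_ul <= 0.5 * previous_platelets:
        return True
    return inr > INR_CUTOFF
```

In the CDSS, the inputs would arrive as standardized openEHR data items rather than raw numbers, and the rule would emit an alert event instead of a boolean.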
The present research study investigated the susceptibility of common mastitis pathogens, obtained from clinical mastitis cases on 58 Northern German dairy farms, to routinely used antimicrobials. The broth microdilution method was used for detecting the Minimal Inhibitory Concentration (MIC) of Streptococcus agalactiae (n = 51), Streptococcus dysgalactiae (n = 54), Streptococcus uberis (n = 50), Staphylococcus aureus (n = 85), non-aureus staphylococci (n = 88), Escherichia coli (n = 54) and Klebsiella species (n = 52). Streptococci and staphylococci were tested against cefquinome, cefoperazone, cephapirin, penicillin, oxacillin, cloxacillin, amoxicillin/clavulanic acid and cefalexin/kanamycin. Besides cefquinome and amoxicillin/clavulanic acid, Gram-negative pathogens were examined for their susceptibility to marbofloxacin and sulfamethoxazole/trimethoprim. The examined S. dysgalactiae isolates exhibited the lowest MICs. S. uberis and S. agalactiae were inhibited at higher amoxicillin/clavulanic acid and cephapirin concentration levels, whereas S. uberis isolates additionally exhibited elevated cefquinome MICs. Most Gram-positive mastitis pathogens were inhibited at higher cloxacillin than oxacillin concentrations. The MICs of Gram-negative pathogens were higher than previously reported: 7.4%, 5.6% and 11.1% of E. coli isolates had MICs above the highest concentrations tested for cefquinome, marbofloxacin and sulfamethoxazole/trimethoprim, respectively. Individual isolates showed MICs at comparatively high concentrations, supporting the hypothesis that a certain proportion of mastitis pathogens on German dairy farms might be resistant to frequently used antimicrobials.
Catalogs of competency-based learning objectives (CLO) were introduced and promoted as a prerequisite for high-quality, systematic curriculum development. While this is common in medicine, the consistent use of CLO is not yet well established in epidemiology, biometry, medical informatics, biomedical informatics, and nursing informatics especially in Germany. This paper aims to identify underlying obstacles and give recommendations in order to promote the dissemination of CLO for curricular development in health data and information sciences. To determine these obstacles and recommendations a public online expert workshop was organized. This paper summarizes the findings.
The objective of this study was to investigate the occurrence of bacteremia in dairy cows with severe mastitis. Milk samples were collected from affected udder quarters, and corresponding blood samples were collected from dairy cows with severe mastitis at the time of diagnosis, before any therapeutic measures were undertaken. Bacteremia was defined as the culture-based detection of pathogens in blood. Further diagnostic tests were performed to provide evidence of bacteremia: PCR for S. aureus, E. coli and S. uberis, and the Limulus test. Detection of culturable pathogens in the blood of cows with severe clinical mastitis was rare and occurred in only one of 70 (1.4%) cases. Overall, bacterial growth was detected in 53 of 70 (75.7%) milk samples. S. uberis (22/70), E. coli (12/70) and S. aureus (4/70) were the pathogens most frequently isolated from milk of cows with severe mastitis. PCR was performed in 38 of 70 (54.3%) blood samples and was positive in eight of 38 cases. S. uberis was found most frequently, in six blood samples (8.6%). E. coli was found by PCR in one blood sample (1.4%), and S. aureus was identified in one blood sample (1.4%). When coliforms were detected in the quarter milk sample, a Limulus test was performed on the corresponding blood sample; it was positive in three of 15 cases (4.3% of samples). Further studies in a larger population are needed to investigate the occurrence of bacteremia in cows with severe mastitis.
Complex Event Processing (CEP) is a modern software technology for the dynamic analysis of continuous data streams. CEP is capable of searching extremely large data streams in real time for the presence of event patterns. So far, specifying event patterns of CEP rules is still a manual task based on the expertise of domain experts. This paper presents a novel bat-inspired swarm algorithm for automatically mining CEP rule patterns that express the relevant causal and temporal relations hidden in data streams. The basic suitability and performance of the approach is proven by extensive evaluation with both synthetically generated data and real data from the traffic domain.
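For orientation, the core update rules of a bat-inspired search can be sketched as below (simplified from Yang's continuous formulation: attraction toward the current best plus a greedy local random walk; loudness and pulse-rate scheduling are omitted). The paper's encoding of CEP rule patterns as candidate solutions is not reproduced; a toy continuous fitness is minimized purely for illustration.

```python
import random

# Simplified bat-algorithm sketch: each "bat" carries a position and a
# velocity; a random pulse frequency scales the pull toward the current
# best solution, and a small random walk around the best refines it.

def bat_minimize(fitness, dim, n_bats=20, n_iter=200,
                 f_min=0.0, f_max=2.0, seed=42):
    rng = random.Random(seed)
    bats = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    vels = [[0.0] * dim for _ in range(n_bats)]
    best = list(min(bats, key=fitness))
    for _ in range(n_iter):
        for i in range(n_bats):
            f = f_min + (f_max - f_min) * rng.random()     # pulse frequency
            for d in range(dim):
                vels[i][d] += (best[d] - bats[i][d]) * f   # pull toward best
                bats[i][d] += vels[i][d]
            if fitness(bats[i]) < fitness(best):
                best = list(bats[i])
            # greedy local random walk around the current best solution
            cand = [b + 0.05 * rng.gauss(0, 1) for b in best]
            if fitness(cand) < fitness(best):
                best = cand
    return best
```

For CEP rule mining, the fitness would instead measure how well a decoded candidate pattern matches the relevant situations in the training stream.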
M2M (machine-to-machine) systems use various communication technologies for automatically monitoring and controlling machines. In M2M systems, each machine emits a continuous stream of data records, which must be analyzed in real-time. Intelligent M2M systems should be able to diagnose their actual states and to trigger appropriate actions as soon as critical situations occur. In this paper, we show how complex event processing (CEP) can be used as the key technology for intelligent M2M systems. We provide an event-driven architecture that is adapted to the M2M domain. In particular, we define different models for the M2M domain, M2M machine states and M2M events. Furthermore, we present a general reference architecture defining the main stages of processing machine data. To prove the usefulness of our approach, we consider two real-world examples ‘solar power plants’ and ‘printers’, which show how easily the general architecture can be extended to concrete M2M scenarios.
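The kind of diagnosis performed in the CEP stages can be illustrated with a minimal sliding-window rule (field name and threshold are illustrative, not taken from the paper's solar plant or printer scenarios): a derived "critical" event is raised when a condition holds over consecutive records of a machine's stream.

```python
from collections import deque

# Minimal CEP-style sketch for the M2M setting: watch a machine's
# record stream and raise a derived event when `window` consecutive
# readings all exceed a threshold (e.g. a sustained over-temperature).

def detect_critical(readings, window=3, threshold=80.0):
    """Return the indices at which the critical pattern completes."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window and all(v > threshold for v in recent):
            alerts.append(i)
    return alerts
```

A real CEP engine would express this declaratively as an event pattern over typed M2M events and trigger the appropriate action as soon as the pattern completes.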
In this article, we present the software architecture of a new generation of advisory systems using Intelligent Agent and Semantic Web technologies. Multi-agent systems provide a well-suited paradigm to implement negotiation processes in a consultancy situation. Software agents act as clients and advisors, using their knowledge to assist human users. In the presented architecture, the domain knowledge is modeled semantically by means of XML-based ontology languages such as OWL. Using an inference engine, the agents reason, based on their knowledge to make decisions or proposals. The agent knowledge consists of different types of data: on the one hand, private data, which has to be protected against unauthorized access; and on the other hand, publicly accessible knowledge spread over different Web sites. As in a real consultancy, an agent only reveals sensitive private data, if they are indispensable for finding a solution. In addition, depending on the actual consultancy situation, each agent dynamically expands its knowledge base by accessing OWL knowledge sources from the Internet. Due to the standardization of OWL, knowledge models easily can be shared and accessed via the Internet. The usefulness of our approach is proved by the implementation of an advisory system in the Semantic E-learning Agent (SEA) project, whose objective is to develop virtual student advisers that render support to university students in order to successfully organize and perform their studies.
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
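The outcome-prediction component can be sketched as a data-stream learner that updates one observation at a time. The model below is a self-contained stand-in for the stream-learning approaches the paper compares; the feature design and the choice of online logistic regression are ours, not the paper's.

```python
import math

# Online logistic regression: predicts the probability that a task
# will be completed successfully and is updated per observed outcome.

class OnlineLogReg:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # one weight per task/worker feature
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        """Predicted probability of a successful task outcome."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, y):
        """One SGD step on a newly observed outcome y in {0, 1}."""
        err = self.predict_proba(x) - y
        for i, xi in enumerate(x):
            self.w[i] -= self.lr * err * xi
        self.b -= self.lr * err
```

In the coordination mechanisms, such a predictor would flag likely failures early enough to request a task transfer to a better-suited worker.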
To optimise udder health at the herd level, identifying incurable mastitis cases as well as providing an adequate therapy and culling strategy are necessary. Cows with clinical mastitis should be administered antibiotic medication if it is most likely to improve mammary cure. The somatic cell count (SCC) in milk of the monthly implemented Dairy Herd Improvement (DHI) test represents the most important tool to decide whether a cow has a promising mammary cure rate. Differential cell count (DCC) facilitates the specification of the immunological ability of defence, for example by characterising leukocyte subpopulations or cell viability. The aim of this study was to assess the DCC and cell viability in DHI milk samples regarding the cytological (CC) and bacteriological cure (BC) of the udder within a longitudinal study, thereby gaining a predictive evaluation of whether a clinical mastitis benefits from an antibiotic treatment or not. The cows enrolled in this study had an SCC above 200,000 cells/mL in the previous DHI test. Study 1 assessed the CC by reference to the SCC of two consecutive DHI tests and included 1010 milk samples: 28.4% of the mammary glands were classified as cytologically cured and 71.6% as uncured. The final mixed logistic regression model identified the total number of non-vital cells as a significant factor associated with CC. An increasing amount of non-vital cells was related to a lower individual ability for CC. Cows which were in the first or second lactation possessed a higher probability of CC than cows having a lactation number above two. If animals developed a clinical mastitis after flow cytometric investigation, the BC was examined in study 2 by analysing quarter foremilk samples microbiologically. Among the 48 milk samples examined, 81.3% of the mammary glands were classified as bacteriologically cured and 18.7% as uncured.
The percentage of total non-vital cells tended to be lower for cured cows, but the difference was not statistically significant. This study revealed that investigating the proportion of non-vital cells in DHI milk samples can enhance the prognosis of whether an antibiotic treatment of clinical mastitis might be promising or not. Prospectively, this tool may be integrated into the DHI tests to facilitate the decision between therapy and culling.
We present a methodology based on mixed-integer nonlinear model predictive control for a real-time building energy management system in application to a single-family house with a combined heat and power (CHP) unit. The developed strategy successfully deals with the switching behavior of the system components as well as minimum admissible operating time constraints by use of a special switch-cost-aware rounding procedure. The quality of the presented solution is evaluated in comparison to the globally optimal dynamic programming method and conventional rule-based control strategy. Based on a real-world scenario, we show that our approach is more than real-time capable while maintaining high correspondence with the globally optimal solution. We achieve an average optimality gap of 2.5% compared to 20% for a conventional control approach, and are faster and more scalable than a dynamic programming approach.
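The decomposition can be illustrated with plain sum-up rounding, the basic combinatorial-integral-approximation step that the switch-cost-aware rounding procedure refines (a sketch of the textbook method; the actual procedure additionally respects minimum operating times and switching costs, which are not modeled here):

```python
# Sum-up rounding (SUR) sketch: round a relaxed on/off control profile
# alpha(t) in [0, 1], obtained from partial outer convexification, to
# binary controls whose accumulated on-time tracks the relaxed
# control's integral.

def sum_up_rounding(alpha, dt=1.0):
    """alpha: relaxed control values in [0, 1] per interval; returns
    0/1 controls whose accumulated on-time stays within dt/2 of the
    relaxed control's integral (the classic SUR guarantee)."""
    binaries = []
    relaxed_integral = 0.0
    binary_integral = 0.0
    for a in alpha:
        relaxed_integral += a * dt
        if relaxed_integral - binary_integral >= 0.5 * dt:
            binaries.append(1)
            binary_integral += dt
        else:
            binaries.append(0)
    return binaries
```

Applied to a CHP unit, the relaxed profile says "run at 40% duty" and the rounded profile decides in which intervals the unit is actually switched on; the switch-cost-aware variant additionally penalizes rounding choices that would violate minimum admissible operating times.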
There are many aspects of code quality, some of which are difficult to capture or to measure. Despite the importance of software quality, there is a lack of commonly accepted measures or indicators for code quality that can be linked to quality attributes. We investigate software developers’ perceptions of source code quality and the practices they recommend to achieve these qualities. We analyze data from semi-structured interviews with 34 professional software developers, programming teachers and students from Europe and the U.S. For the interviews, participants were asked to bring code examples to exemplify what they consider good and bad code, respectively. Readability and structure were used most commonly as defining properties for quality code. Together with documentation, they were also suggested as the most common target properties for quality improvement. When discussing actual code, developers focused on structure, comprehensibility and readability as quality properties. When analyzing relationships between properties, the most commonly talked about target property was comprehensibility. Documentation, structure and readability were named most frequently as source properties to achieve good comprehensibility. Some of the most important source code properties contributing to code quality as perceived by developers lack clear definitions and are difficult to capture. More research is therefore necessary to measure the structure, comprehensibility and readability of code in ways that matter for developers and to relate these measures of code structure, comprehensibility and readability to common software quality attributes.
Clinical scores and motion-capturing gait analysis are today’s gold standard for outcome measurement after knee arthroplasty, although they are criticized for bias and their ability to reflect patients’ actual quality of life has been questioned. In this context, mobile gait analysis systems have been introduced to overcome some of these limitations. This study used a previously developed mobile gait analysis system comprising three inertial sensor units to evaluate daily activities and sports. The sensors were taped to the lumbosacral junction and the thigh and shank of the affected limb. The annotated raw data was evaluated using our validated proprietary software. Six patients undergoing knee arthroplasty were examined the day before and 12 months after surgery. All patients reported a satisfactory outcome, although four patients still had limitations in their desired activities. In this context, feasible running speed demonstrated a good correlation with reported impairments in sports-related activities. Notably, knee flexion angle while descending stairs and the ability to stop abruptly when running exhibited good correlation with the clinical stability and proprioception of the knee. Moreover, fatigue effects were displayed in some patients. The introduced system appears to be suitable for outcome measurement after knee arthroplasty and has the potential to overcome some of the limitations of stationary gait labs while gathering additional meaningful parameters regarding the force limits of the knee.
Worldwide, seagrass meadows are under threat. Consequently, there is a strong need for seagrass restoration to guarantee the provision of related ecosystem services such as nutrient cycling, carbon sequestration and habitat provision. Seagrass often grows in vast meadows in which the presence of seagrass itself leads to a reduction of hydrodynamic energy. By modifying the environment, seagrass thus serves as foundation species and ecosystem engineer improving habitat quality for itself and other species as well as positively affecting its own fitness. On the downside, this positive feedback mechanism can render natural recovery of vanished and destroyed seagrass meadows impossible. An innovative approach to promote positive feedback mechanisms in seagrass restoration is to create an artificial seagrass (ASG) that mimics the facilitation function of natural seagrass. ASG could provide a window of opportunity with respect to suitable hydrodynamic and light conditions as well as sediment stabilization to allow natural seagrass to re-establish. Here, we give an overview of challenges and open questions for the application of ASG to promote seagrass restoration based on experimental studies and restoration trials and we propose a general approach for the design of an ASG produced from biodegradable materials. Considering positive feedback mechanisms is crucial to support restoration attempts. ASG provides promising benefits when habitat conditions are too harsh for seagrass meadows to re-establish themselves.
Malnutrition, nutritional deficiency, or undernutrition is an imbalanced nutritional status resulting from insufficient intake of nutrients to meet normal physiologic requirements. Malnutrition in childhood has both short-term and long-term consequences for mental and physical health as well as the overall health development of children. Of all regions in the world, the Asia and Pacific region has achieved the fastest rate of economic growth. There is no evidence that this rapid economic growth translates into a decline in malnutrition of children in Asian countries such as India.
Surface atomic relaxation and magnetism on hydrogen-adsorbed Fe(110) surfaces from first principles
(2016)
We have computed adsorption energies, vibrational frequencies, surface relaxation and buckling for hydrogen adsorbed on a body-centred-cubic Fe(110) surface as a function of the degree of H coverage. This adsorption system is important in a variety of technological processes such as the hydrogen embrittlement in ferritic steels, which motivated this work, and the Haber–Bosch process. We employed spin-polarised density functional theory to optimise geometries of a six-layer Fe slab, followed by frozen mode finite displacement phonon calculations to compute Fe–H vibrational frequencies. We have found that the quasi-threefold (3f) site is the most stable adsorption site, with adsorption energies of ∼3.0 eV/H for all coverages studied. The long-bridge (lb) site, which is close in energy to the 3f site, is actually a transition state leading to the stable 3f site. The calculated harmonic vibrational frequencies collectively span from 730 to 1220 cm−1, for a range of coverages. The increased first-to-second layer spacing in the presence of adsorbed hydrogen, and the pronounced buckling observed in the Fe surface layer, may facilitate the diffusion of hydrogen atoms into the bulk, and therefore impact the early stages of hydrogen embrittlement in steels.
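The adsorption energies quoted above are conventionally defined per hydrogen atom relative to the clean slab and gas-phase H₂ (a standard DFT convention, stated here for context; sign conventions vary between papers):

```latex
E_{\mathrm{ads}}
  = -\frac{1}{n}\left( E_{\mathrm{slab}+n\mathrm{H}}
  - E_{\mathrm{slab}}
  - \tfrac{n}{2}\,E_{\mathrm{H}_2} \right),
```

where $n$ is the number of adsorbed H atoms, so that positive values indicate exothermic (favourable) adsorption, consistent with the ∼3.0 eV/H reported for the quasi-threefold site.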
The effect of magnetism on hydrogen adsorption and subsurface diffusion through face-centred cubic (fcc) γ-Fe(0 0 1) was investigated using spin-polarised density functional theory (s-DFT). The non-magnetic (NM), ferromagnetic (FM), and antiferromagnetic single-layer (AFM1) and double-layer (AFMD) structures were considered. For each magnetic state, hydrogen preferentially adsorbs at the fourfold site, with adsorption energies of 4.07, 4.12, 4.03 and 4.05 eV/H atom for the NM, FM, AFM1 and AFMD structures, respectively. Total barriers of 1.34, 0.90, 1.32 and 1.25 eV and bulk-like diffusion barriers of 0.6, 0.2, 0.4 and 0.3 eV were calculated for the NM, FM, AFM1 and AFMD magnetic states, respectively. The Fe atoms nearest to the H atom exhibited a reduced magnetic moment, whereas the next-nearest-neighbour Fe atoms exhibited a non-negligible local perturbation in the magnetic moment. The presence of magnetically ordered structures has a minimal influence on the minimum energy path for H diffusion through the lattice and on the adsorption of H atoms on the Fe(0 0 1) surface, but we computed a significant reduction of the bulk-like diffusion barriers with respect to the non-magnetic state of fcc γ-Fe.
The adsorption of O atoms on the Fe(1 1 0) surface has been investigated by density functional theory for increasing degrees of oxygen coverage from 0.25 to 1 monolayer, to follow the evolution of the O–Fe(1 1 0) system into an FeO(1 1 1)-like monolayer. We found that the quasi-threefold site is the most stable adsorption site for all coverages, with adsorption energies of ∼2.8–4.0 eV per O atom. Oxygen adsorption results in surface geometrical changes such as interlayer relaxation and buckling, the latter of which decreases with coverage. The calculated vibrational frequencies range from 265 to 470 cm−1 for the frustrated translational modes and from 480 to 620 cm−1 for the stretching mode, and hence are in good agreement with the experimental values reported for bulk FeO wüstite. The hybridization of the oxygen 2p and iron 3d orbitals increases with oxygen coverage, and the partial density of states for the O–Fe(1 1 0) system at full coverage resembles the one reported in the literature for bulk FeO. These results at full oxygen coverage point to the incipient formation of an FeO(1 1 1)-like monolayer that would eventually lead to the bulk FeO oxide layer.
This study is concerned with the early stages of hydrogen embrittlement on an atomistic scale. We employed density functional theory to investigate hydrogen diffusion through the (100), (110) and (111) surfaces of γ-Fe. The preferred adsorption sites and respective energies for hydrogen adsorption were established for each plane, as well as a minimum energy pathway for diffusion. The H atoms adsorb on the (100), (110) and (111) surfaces with energies of ∼4.06 eV, ∼3.92 eV and ∼4.05 eV, respectively. The barriers for bulk-like diffusion for the (100), (110) and (111) surfaces are ∼0.6 eV, ∼0.5 eV and ∼0.7 eV, respectively. We compared these calculated barriers with previously obtained experimental data in an Arrhenius plot, which indicates good agreement between experimentally measured and theoretically predicted activation energies. Texturing austenitic steels such that the (111) surfaces of grains are preferentially exposed at the cleavage planes may be a possibility to reduce hydrogen embrittlement.
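The link between the calculated barriers and measurable diffusion rates in the Arrhenius plot mentioned above can be illustrated with a short calculation. This is only a sketch: the attempt frequency and temperature below are hypothetical placeholder values, chosen to show how a 0.1–0.2 eV difference in barrier height translates into rate differences; only the three barrier heights are taken from the abstract.

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K


def arrhenius_rate(prefactor_hz, barrier_ev, temperature_k):
    """Arrhenius rate k = A * exp(-Ea / (kB * T))."""
    return prefactor_hz * math.exp(-barrier_ev / (K_B * temperature_k))


# Calculated bulk-like diffusion barriers from the abstract (eV)
barriers = {"(100)": 0.6, "(110)": 0.5, "(111)": 0.7}

A = 1e13   # hypothetical attempt frequency (Hz)
T = 300.0  # hypothetical temperature (K)

rates = {face: arrhenius_rate(A, ea, T) for face, ea in barriers.items()}

# At 300 K, a 0.2 eV higher barrier slows diffusion by roughly three
# orders of magnitude, which motivates texturing towards (111) surfaces.
ratio = rates["(110)"] / rates["(111)"]
```

Under these assumptions the (111) surface admits hydrogen roughly a thousand times more slowly than the (110) surface at room temperature, consistent with the texturing suggestion in the abstract.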
The present investigation evaluated the in-vitro antioxidant activity of an ethanolic extract of the roots of Centaurea behens using DPPH, nitric oxide, hydrogen peroxide, and hydroxyl radical scavenging assays. The results suggest that the extract possesses significant antioxidant activity compared with the standard, ascorbic acid; further in vivo investigation is therefore required to evaluate the medicinal significance of the extract and to assess its possible therapeutic importance.
Background: Stereotactic radiosurgery (SRS) is an effective treatment for trigeminal neuralgia (TN). Nevertheless, a proportion of patients will experience recurrence and treatment-related sensory disturbances. In order to evaluate the predictors of efficacy and safety of image-guided non-isocentric radiosurgery, we analyzed the impact of trigeminal nerve volume and the nerve dose/volume relationship, together with relevant clinical characteristics.
Methods: Two-hundred and ninety-six procedures were performed on 262 patients at three centers. In 17 patients the TN was secondary to multiple sclerosis (MS). Trigeminal pain and sensory disturbances were classified according to the Barrow Neurological Institute (BNI) scale. Pain-free-intervals were investigated using Kaplan Meier analyses. Univariate and multivariate Cox regression analyses were performed to identify predictors.
Results: The median follow-up period was 38 months, the median maximal dose 72.4 Gy, the median target nerve volume 25 mm3, and the median prescription dose 60 Gy. Pain control rates (BNI I-III) at 6, 12, 24, 36, 48, and 60 months were 96.8, 90.9, 84.2, 81.4, 74.2, and 71.2%, respectively. Overall, 18% of patients developed sensory disturbances. Patients with a nerve volume ≥ 30 mm3 were more likely to maintain pain relief (p = 0.031), and a low integral dose (< 1.4 mJ) tended to be associated with more pain recurrence than an intermediate (1.4–2.7 mJ) or high integral dose (> 2.7 mJ; low vs. intermediate: log-rank test, χ2 = 5.02, p = 0.019; low vs. high: log-rank test, χ2 = 6.026, p = 0.014). MS, integral dose, and mean dose were the factors associated with pain recurrence, while re-irradiation and MS were predictors of sensory disturbance in the multivariate analysis.
Conclusions: The dose to nerve volume ratio is predictive of pain recurrence in TN, and re-irradiation has a major impact on the development of sensory disturbances after non-isocentric SRS. Interestingly, the integral dose may differ significantly in treatments using apparently similar dose and volume constraints.
Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer.
Purpose: Radiology reports mostly contain free-text, which makes it challenging to obtain structured data. Natural language processing (NLP) techniques transform free-text reports into machine-readable document vectors that are important for creating reliable, scalable methods for data analysis. The aim of this study is to classify unstructured radiograph reports according to fractures of the distal fibula and to find the best text mining method.
Materials & Methods: We established a novel German language report dataset: a designated search engine was used to identify radiographs of the ankle and the reports were manually labeled according to fractures of the distal fibula. This data was used to establish a machine learning pipeline, which implemented the text representation methods bag-of-words (BOW), term frequency-inverse document frequency (TF-IDF), principal component analysis (PCA), non-negative matrix factorization (NMF), latent Dirichlet allocation (LDA), and document embedding (doc2vec). The extracted document vectors were used to train neural networks (NN), support vector machines (SVM), and logistic regression (LR) to recognize distal fibula fractures. The results were compared via cross-tabulations of the accuracy (acc) and area under the curve (AUC).
Results: In total, 3268 radiograph reports were included, of which 1076 described a fracture of the distal fibula. Comparison of the text representation methods showed that BOW achieved the best results (AUC = 0.98; acc = 0.97), followed by TF-IDF (AUC = 0.97; acc = 0.96), NMF (AUC = 0.93; acc = 0.92), PCA (AUC = 0.92; acc = 0.9), LDA (AUC = 0.91; acc = 0.89) and doc2vec (AUC = 0.9; acc = 0.88). When comparing the different classifiers, NN (AUC = 0.91) proved to be superior to SVM (AUC = 0.87) and LR (AUC = 0.85).
Conclusion: An automated classification of unstructured reports of radiographs of the ankle can reliably detect findings of fractures of the distal fibula. A particularly suitable feature extraction method is the BOW model.
Key Points:
- The aim was to classify unstructured radiograph reports according to distal fibula fractures.
- Our automated classification system can reliably detect fractures of the distal fibula.
- A particularly suitable feature extraction method is the BOW model.
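The bag-of-words representation that performed best in the study can be sketched in a few lines of plain Python: each report is mapped to a vector of term counts over a shared vocabulary. The two German report snippets below are invented for illustration and are not from the study's dataset.

```python
from collections import Counter


def bag_of_words(documents):
    """Map each document to a vector of term counts over a shared vocabulary."""
    tokenized = [doc.lower().split() for doc in documents]
    vocabulary = sorted({token for doc in tokenized for token in doc})
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        # One count per vocabulary term, in a fixed (sorted) order
        vectors.append([counts.get(term, 0) for term in vocabulary])
    return vocabulary, vectors


# Hypothetical, simplified report snippets (German, as in the study's dataset)
reports = [
    "fraktur der distalen fibula",
    "kein nachweis einer fraktur",
]
vocab, vecs = bag_of_words(reports)
```

In the actual pipeline these document vectors would then be fed to a classifier such as a neural network, SVM, or logistic regression, as described in the abstract.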
With the increasing significance of information technology, there is an urgent need for adequate measures of information security. Systematic information security management is one of the most important initiatives for IT management. At least since reports about privacy and security breaches, fraudulent accounting practices, and attacks on IT systems became public, organizations have recognized their responsibility to safeguard physical and information assets. Security standards can be used as guidelines or frameworks to develop and maintain an adequate information security management system (ISMS). The standards ISO/IEC 27000, 27001 and 27002 are international standards that are receiving growing recognition and adoption. They are referred to as the "common language of organizations around the world" for information security. With ISO/IEC 27001, companies can have their ISMS certified by a third-party organization and thus show their customers evidence of their security measures.
Systematizing IT Risks
(2019)
IT risks — risks associated with the operation or use of information technology — have taken on great importance in business, and IT risk management is accordingly important in the science and practice of information management. It is therefore necessary to systematize IT risks in order to plan, manage and control different risk-specific measures. In order to choose and implement suitable measures for managing IT risks, effect-based and cause-based procedures are necessary. These procedures are explained in detail for IT security risks because of their special importance.
The objective of this student project was for the students to develop, conduct, and supervise a training course on basic workplace applications (word processing and business graphics). The students were responsible for planning, organizing, and teaching the course. The participants were underprivileged adolescents who took part to learn how to use IT applications and thereby improve their job skills and their chances of finding employment; the adolescents thus took on the role of trainees in the course. Our students worked with a population that is continually overlooked by the field.
As a result, the students practised designing and implementing training courses, gained experience in managing projects, and increased their social responsibility and their awareness of the way of life and living conditions of other young people. The underprivileged adolescents learned to use important business applications and improved their job skills and employment prospects. The overall design of our concept required extensive resources to supervise and steer both the students and the adolescents. The lecturers had to teach and counsel the students and had to be on "stand-by" in case they were needed to resolve critical situations between the two groups of young people.
During the European debt crisis, German and Greek media frequently reported on the political conflict between the two countries. This article examines to what extent the media coverage in one country about the other is considered by German and Greek citizens to be hostile (‘hostile media perception’) and influential (‘influence of presumed influence’). Data from a comparative survey in Germany (n = 492) and Greece (n = 484) show that news coverage by foreign media on the European debt crisis is perceived by respondents as hostile against their own country and as influential. Moreover, both media-related perceptions are linked with intensified perceptions of hostility, such as assumptions that an individual’s country is not respected in the other country or that the other country’s citizens are demanding that the individual’s country be punished. Based on these results, it is discussed whether media-related perceptions can have a conflict-intensifying effect in international crises.
Nowadays, problems related to solid waste management have become a challenge for most countries due to the rising generation of waste, related environmental issues, and the associated costs. Effective waste management systems at different geographic levels require accurate forecasting of future waste generation. In this work, we investigate how open-access data, such as those provided by the Organisation for Economic Co-operation and Development (OECD), can be used for the analysis of waste data. The main idea of this study is to find links between the socioeconomic and demographic variables that determine the amounts and types of solid waste produced by countries. This would make it possible to accurately predict waste production at the country level and to determine the requirements for developing effective waste management strategies. In particular, we use several machine learning regression models (Support Vector, Gradient Boosting, and Random Forest) and a clustering model (k-means) to predict waste production for OECD countries over the years and to cluster these countries according to similar characteristics. The main contributions of our work are: (1) waste analysis at the OECD country level to compare and cluster countries according to similar predicted waste features; (2) the detection of the most relevant features for the prediction models; and (3) a comparison of several regression models with respect to prediction accuracy. The coefficient of determination (R2), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE) are used as indices of the efficiency of the developed models.
Our experiments have shown that some pre-processing of the OECD data is an essential stage of the analysis; that the Random Forest Regressor (RFR) produced the best prediction results over the dataset; and that these results are highly influenced by the quality of the available socio-economic data. In particular, the RFR model exhibited the highest accuracy in predictions for most waste types. For example, for "municipal" waste it produced global error values of R2 = 1 and MAPE = 4.31 for the test set, and for "household" waste R2 = 1 and MAPE = 3.03. Our results indicate that the considered models (and especially RFR) are all effective in predicting the amount of waste produced from the input data for the considered countries.
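The four evaluation indices named above (R2, MAE, RMSE, MAPE) can be written out directly in plain Python. This is a minimal sketch of the standard definitions, not the study's code; the small observed/predicted series below are hypothetical numbers chosen only to exercise the formulas.

```python
import math


def regression_metrics(y_true, y_pred):
    """Compute R^2, MAE, RMSE and MAPE (in percent) for paired observations."""
    n = len(y_true)
    mean_true = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "MAE": sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n,
        "RMSE": math.sqrt(ss_res / n),
        # MAPE is only well-defined when no true value is zero
        "MAPE": 100.0 / n * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)),
    }


# Hypothetical annual waste amounts for one country (e.g. kt of municipal waste)
observed = [500.0, 520.0, 540.0, 560.0]
predicted = [505.0, 515.0, 545.0, 558.0]
metrics = regression_metrics(observed, predicted)
```

Reporting R2 together with a relative error such as MAPE, as the study does, is useful because R2 alone says nothing about the absolute size of the prediction errors.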
Decision support systems for traffic management systems have to cope with a high volume of events continuously generated by sensors. Conventional software architectures do not explicitly target the efficient processing of continuous event streams. Recently, event-driven architectures (EDA) have been proposed as a new paradigm for event-based applications. In this paper we propose a reference architecture for event-driven traffic management systems, which enables the analysis and processing of complex event streams in real-time and is therefore well-suited for decision support in sensor-based traffic control systems. We will illustrate our approach in the domain of road traffic management. In particular, we will report on the redesign of an intelligent transportation management system (ITMS) prototype for the high-capacity road network in Bilbao, Spain.
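The core idea of an event-driven architecture (decoupling event producers from consumers via a publish/subscribe dispatcher) can be sketched in a few lines. This is an illustrative toy, not the ITMS design from the paper; the event type, sensor names, and the congestion rule are invented.

```python
from collections import defaultdict


class EventBus:
    """Minimal publish/subscribe dispatcher for a stream of sensor events."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every handler registered for its type
        for handler in self._handlers[event_type]:
            handler(payload)


bus = EventBus()
alerts = []

# Hypothetical rule: flag congestion when a sensor reports speed below 30 km/h
bus.subscribe("speed_reading",
              lambda e: alerts.append(e["sensor"]) if e["kmh"] < 30 else None)

bus.publish("speed_reading", {"sensor": "A8-km12", "kmh": 25})
bus.publish("speed_reading", {"sensor": "A8-km14", "kmh": 80})
```

In a real system the handlers would be complex event processing rules evaluating patterns over time windows rather than single readings, but the decoupling principle is the same.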
Nowadays, most recommender systems are based on a centralized architecture, which can cause crucial issues in terms of trust, privacy, dependability, and costs. In this paper, we propose a decentralized and distributed MANET-based (Mobile Ad-hoc NETwork) recommender system for open facilities. The system is based on mobile devices that collect sensor data about users' locations to derive implicit ratings that are used for collaborative filtering recommendations. The mechanisms of deriving ratings and propagating them in a MANET network are discussed in detail. Finally, extensive experiments demonstrate the suitability of the approach in terms of different performance metrics.
Background: Maintenance of metal homeostasis is crucial in bacterial pathogenicity as metal starvation is the most important mechanism in the nutritional immunity strategy of host cells. Thus, pathogenic bacteria have evolved sensitive metal scavenging systems to overcome this particular host defence mechanism. The ruminant pathogen Mycobacterium avium ssp. paratuberculosis (MAP) displays a unique gut tropism and causes a chronic progressive intestinal inflammation. MAP possesses eight conserved lineage specific large sequence polymorphisms (LSP), which distinguish MAP from its ancestral M. avium ssp. hominissuis or other M. avium subspecies. LSP14 and LSP15 harbour many genes proposed to be involved in metal homeostasis and have been suggested to substitute for a MAP specific, impaired mycobactin synthesis.
Results: In the present study, we found that an LSP14-located putative IrtAB-like iron transporter encoded by mptABC was induced by zinc but not by iron starvation. Heterologous reporter gene assays with the lacZ gene under control of the mptABC promoter in M. smegmatis (MSMEG) and in a MSMEGΔfurB deletion mutant revealed a zinc dependent, metalloregulator FurB mediated expression of mptABC via a conserved mycobacterial FurB recognition site. Deep sequencing of RNA from MAP cultures treated with the zinc chelator TPEN revealed that 70 genes responded to zinc limitation. Remarkably, 45 of these genes were located on a large genomic island of approximately 90 kb which harboured LSP14 and LSP15. Thirty-five of these genes were predicted to be controlled by FurB, due to the presence of putative binding sites. This clustering of zinc responsive genes was exclusively found in MAP and not in other mycobacteria.
Conclusions: Our data revealed a particular genomic signature for MAP given by a unique zinc specific locus, thereby suggesting an exceptional relevance of zinc for the metabolism of MAP. MAP seems to be well adapted to maintain zinc homeostasis which might contribute to the peculiarity of MAP pathogenicity.
Appropriate data models are essential for the systematic collection, aggregation, and integration of health data and for subsequent analysis. However, recommendations for modeling health data are often not publicly available within specific projects. Therefore, the project Zukunftslabor Gesundheit investigates recommendations for modeling. Expert interviews with five experts were conducted and analyzed using qualitative content analysis. Based on the condensed categories “governance”, “modeling” and “standards”, the project team generated eight hypotheses for recommendations on health data modeling. In addition, relevant framework conditions such as different roles, international cooperation, education/training and political influence were identified. Although emerging from interviewing a small convenience sample of experts, the results help to plan more extensive data collections and to create recommendations for health data modeling.
BACKGROUND: Even though physician rating websites (PRWs) have been gaining in importance in both practice and research, little evidence is available on the association of patients' online ratings with the quality of care of physicians. It thus remains unclear whether patients should rely on these ratings when selecting a physician. The objective of this study was to measure the association between online ratings and structural and quality of care measures for 65 physician practices from the German Integrated Health Care Network "Quality and Efficiency" (QuE). METHODS: Online reviews from two German PRWs were included which covered a three-year period (2011 to 2013) and included 1179 and 991 ratings, respectively. Information for 65 QuE practices was obtained for the year 2012 and included 21 measures related to structural information (N = 6), process quality (N = 10), intermediate outcomes (N = 2), patient satisfaction (N = 1), and costs (N = 2). The Spearman rank coefficient of correlation was applied to measure the association between ratings and practice-related information. RESULTS: Patient satisfaction results from offline surveys and the patients per doctor ratio in a practice were shown to be significantly associated with online ratings on both PRWs. For one PRW, additional significant associations could be shown between online ratings and cost-related measures for medication, preventative examinations, and one diabetes type 2-related intermediate outcome measure. In turn, results from the second PRW showed significant associations with the age of the physicians and the number of patients per practice, four process-related quality measures for diabetes type 2 and asthma, and one cost-related measure for medication. CONCLUSIONS: Several significant associations were found which varied between the PRWs. Patients interested in the satisfaction of other patients with a physician might select a physician on the basis of online ratings.
Even though our results indicate associations with some diabetes and asthma measures, but not with coronary heart disease measures, there is still insufficient evidence to draw strong conclusions. The limited number of practices in our study may have weakened our findings.
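The Spearman rank correlation used in the study above is simply the Pearson correlation of the rank vectors, with ties receiving average ranks. The sketch below implements this directly in plain Python; the paired rating/satisfaction values are hypothetical toy data, not from the study.

```python
def ranks(values):
    """Assign average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r


def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)


# Hypothetical data: online rating grade (1 = best) vs. offline satisfaction score
online = [1.2, 1.8, 2.5, 3.0, 1.5]
offline = [90, 80, 70, 60, 75]
rho = spearman(online, offline)  # strongly negative: better grades, higher scores
```

Using ranks rather than raw values makes the coefficient robust to the skewed, bounded distributions typical of online rating data.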
BACKGROUND: Over the past decade, physician-rating websites have been gaining attention in scientific literature and in the media. However, little knowledge is available about the awareness and the impact of using such sites on health care professionals. It also remains unclear what key predictors are associated with the knowledge and the use of physician-rating websites. OBJECTIVE: To estimate the current level of awareness and use of physician-rating websites in Germany and to determine their impact on physician choice making and the key predictors which are associated with the knowledge and the use of physician-rating websites. METHODS: This study was designed as a cross-sectional survey. An online panel was consulted in January 2013. A questionnaire was developed containing 28 questions; a pretest was carried out to assess the comprehension of the questionnaire. Several sociodemographic (eg, age, gender, health insurance status, Internet use) and 2 health-related independent variables (ie, health status and health care utilization) were included. Data were analyzed using descriptive statistics, chi-square tests, and t tests. Binary multivariate logistic regression models were performed for elaborating the characteristics of physician-rating website users. Results from the logistic regression are presented for both the observed and weighted sample. RESULTS: In total, 1505 respondents (mean age 43.73 years, SD 14.39; 857/1505, 57.25% female) completed our survey. Of all respondents, 32.09% (483/1505) heard of physician-rating websites and 25.32% (381/1505) already had used a website when searching for a physician. Furthermore, 11.03% (166/1505) had already posted a rating on a physician-rating website. Approximately 65.35% (249/381) consulted a particular physician based on the ratings shown on the websites; in contrast, 52.23% (199/381) had not consulted a particular physician because of the publicly reported ratings. 
Significantly higher likelihoods for being aware of the websites could be demonstrated for female participants (P<.001), those who were widowed (P=.01), covered by statutory health insurance (P=.02), and with higher health care utilization (P<.001). Health care utilization was significantly associated with all dependent variables in our multivariate logistic regression models (P<.001). Furthermore, significantly higher scores could be shown for health insurance status in the unweighted and Internet use in the weighted models. CONCLUSIONS: Neither health policy makers nor physicians should underestimate the influence of physician-rating websites. They already play an important role in providing information to help patients decide on an appropriate physician. Assuming there will be a rising level of public awareness, the influence of their use will increase well into the future. Future studies should assess the impact of physician-rating websites under experimental conditions and investigate whether physician-rating websites have the potential to reflect the quality of care offered by health care providers.
Objective: To evaluate the impact of different dissemination channels on the awareness and usage of hospital performance reports among referring physicians, as well as the usefulness of such reports from the referring physicians’ perspective.
Data sources/Study setting: Primary data collected from a survey with 277 referring physicians (response rate = 26.2%) in Nuremberg, Germany (03–06/2016).
Study design: Cluster-randomised controlled trial at the practice level. Physician practices were randomly assigned to one of two conditions: (1) physicians in the control arm could become aware of the performance reports via mass media channels (Mass Media, 132 practices with 147 physicians); (2) physicians in the intervention arm also received a printed version of the report via mail (Mass and Special Media, 117 practices with 130 physicians).
Principal findings: Overall, 68% of respondents recalled hospital performance reports and 21% used them for referral decisions. Physicians from the Mass and Special Media group were more likely to be aware of the performance reports (OR 4.16; 95% CI 2.16–8.00, p < .001) but not more likely to be influenced when referring patients to hospitals (OR 1.73; 95% CI 0.72–4.12, p > .05). On a scale from 1 (very good) to 6 (insufficient), the usefulness of the performance reports was rated 3.67 (±1.40). Aggregated presentation formats were rated as more helpful than detailed hospital quality information.
Conclusions: Hospital quality reports have limited impact on referral practices. To increase this impact, the concerns raised by referring physicians must be given more weight. These concerns principally relate to the underlying data, the design of the reports, and the lack of important information.
Background: Physician-rating websites have become a popular tool to create more transparency about the quality of health care providers. So far, it remains unknown whether online-based rating websites have the potential to contribute to a better standard of care. Objective: Our goal was to examine which health care providers use online rating websites and for what purposes, and whether health care providers use online patient ratings to improve patient care. Methods: We conducted an online-based cross-sectional study by surveying 2360 physicians and other health care providers (September 2015). In addition to descriptive statistics, we performed multilevel logistic regression models to ascertain the effects of providers' demographics as well as report card-related variables on the likelihood that providers implement measures to improve patient care. Results: Overall, more than half of the responding providers surveyed (54.66%, 1290/2360) used online ratings to derive measures to improve patient care (implemented measures: mean 3.06, SD 2.29). Ophthalmologists (68%, 40/59) and gynecologists (65.4%, 123/188) were most likely to implement any measures. The most widely implemented quality measures were related to communication with patients (28.77%, 679/2360), the appointment scheduling process (23.60%, 557/2360), and office workflow (21.23%, 501/2360). Scaled-survey results had a greater impact on deriving measures than narrative comments. Multilevel logistic regression models revealed medical specialty, the frequency of report card use, and the appraisal of the trustworthiness of scaled-survey ratings to be significantly associated predictors for implementing measures to improve patient care because of online ratings. Conclusions: Our results suggest that online ratings displayed on physician-rating websites have an impact on patient care. 
Despite the limitations of our study and unintended consequences of physician-rating websites, they still may have the potential to improve patient care.
Purpose: The calculation of aggregated composite measures is a widely used strategy to reduce the amount of data on hospital report cards. Therefore, this study aims to elicit and compare preferences of both patients as well as referring physicians regarding publicly available hospital quality information.
Methods: Based on systematic literature reviews as well as qualitative analysis, two discrete choice experiments (DCEs) were applied to elicit patients’ and referring physicians’ preferences. The DCEs were conducted using a fractional factorial design. Statistical data analysis was performed using multinomial logit models.
Results: Apart from five identical attributes, one specific attribute was identified for each study group. Overall, 322 patients (mean age 68.99) and 187 referring physicians (mean age 53.60) were included. Our models displayed significant coefficients for all attributes (p < 0.001 each). Among patients, "Postoperative complication rate" (20.6%; level range of 1.164) was rated highest, followed by "Mobility at hospital discharge" (19.9%; level range of 1.127) and "The number of cases treated" (18.5%; level range of 1.045). In contrast, referring physicians valued "One-year revision surgery rate" (30.4%; level range of 1.989) most highly, followed by "The number of cases treated" (21.0%; level range of 1.372) and "Postoperative complication rate" (17.2%; level range of 1.123).
Conclusion: We determined considerable differences between both study groups when calculating the relative value of publicly available hospital quality information. This may have an impact when calculating aggregated composite measures based on consumer-based weighting.
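The relative values reported above follow from the attribute level ranges in a standard way: an attribute's relative importance is its level range divided by the sum of all attributes' level ranges. The sketch below illustrates this calculation; since the abstract reports only three of the six level ranges for the patient group, the remaining three values here are hypothetical placeholders chosen so that the reported percentages are reproduced.

```python
def relative_importance(level_ranges):
    """Relative importance of each attribute: its level range / sum of all ranges."""
    total = sum(level_ranges.values())
    return {attr: r / total for attr, r in level_ranges.items()}


# Level ranges for the patient group; the first three are from the abstract,
# the last three are hypothetical placeholders for the unreported attributes.
patient_ranges = {
    "Postoperative complication rate": 1.164,
    "Mobility at hospital discharge": 1.127,
    "The number of cases treated": 1.045,
    "Attribute 4 (hypothetical)": 0.900,
    "Attribute 5 (hypothetical)": 0.800,
    "Attribute 6 (hypothetical)": 0.614,
}
importance = relative_importance(patient_ranges)
# e.g. 1.164 / 5.65 ≈ 0.206, matching the reported 20.6%
```

The same calculation applied to the referring physicians' level ranges yields their different weighting, which is what drives the differences in consumer-based composite measures discussed in the conclusion.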
Background: Physician-rating websites are currently gaining in popularity because they increase transparency in the health care system. However, research on the characteristics and content of these portals remains limited.
Objective: To identify and synthesize published evidence in peer-reviewed journals regarding frequently discussed issues about physician-rating websites.
Methods: Peer-reviewed English and German language literature was searched in seven databases (Medline (via PubMed), the Cochrane Library, Business Source Complete, ABI/Inform Complete, PsycInfo, Scopus, and ISI web of knowledge) without any time constraints. Additionally, reference lists of included studies were screened to assure completeness. The following eight previously defined questions were addressed: 1) What percentage of physicians has been rated? 2) What is the average number of ratings on physician-rating websites? 3) Are there any differences among rated physicians related to socioeconomic status? 4) Are ratings more likely to be positive or negative? 5) What significance do patient narratives have? 6) How should physicians deal with physician-rating websites? 7) What major shortcomings do physician-rating websites have? 8) What recommendations can be made for further improvement of physician-rating websites?
Results: Twenty-four articles published in peer-reviewed journals met our inclusion criteria. Most studies were published by US (n=13) and German (n=8) researchers; however, their focus differed considerably. The current usage of physician-rating websites is still low but increasing. International data show that 1 out of 6 physicians has been rated, and approximately 90% of all ratings on physician-rating websites were positive. Although often a concern, we could not find any evidence of "doctor-bashing". Physicians should not ignore these websites, but rather monitor the information available and use it for internal and external purposes. Several shortcomings limit the significance of the results published on physician-rating websites; some recommendations to address these limitations are presented.
Conclusions: Although the number of publications is still low, physician-rating websites are gaining more attention in research. However, the current state of physician-rating websites remains deficient, both in the United States and in Germany. Further research is necessary to increase the quality of the websites, especially from the patients' perspective.
Background: Physician-rating websites (PRWs) may lead to quality improvements in case they enable and establish a peer-to-peer communication between patients and physicians. Yet, we know little about whether and how physicians respond on the Web to patient ratings.
Objective: The objective of this study was to describe trends in physicians’ Web-based responses to patient ratings over time, to identify what physician characteristics influence Web-based responses, and to examine the topics physicians are likely to respond to.
Methods: We analyzed physician responses to more than 1 million patient ratings displayed on the German PRW jameda from 2010 to 2015. Quantitative analysis comprised chi-square analyses and the Mann-Whitney U test. Quantitative content analysis techniques were applied to determine the topics physicians respond to, based on a randomly selected sample of 600 Web-based ratings and the corresponding physician responses.
Results: Overall, physicians responded to 1.58% (16,640/1,052,347) of all Web-based ratings, with an increasing trend over time from 0.70% (157/22,355) in 2010 to 1.88% (6377/339,919) in 2015. Web-based ratings that were responded to had significantly worse rating results than ratings that were not responded to (2.15 vs 1.74, P<.001). Physicians who respond on the Web to patient ratings differ significantly from nonresponders regarding several characteristics such as gender and patient recommendation results (P<.001 each). Regarding scaled-survey rating elements, physicians were most likely to respond to the waiting time within the practice (19.4%, 99/509) and the time spent with the patient (18.3%, 110/600). Almost one-third of topics in narrative comments were answered by the physicians (30.66%, 382/1246).
Conclusions: So far, only a minority of physicians have taken the chance to respond on the Web to patient ratings. This is likely because of (1) the low awareness of PRWs among physicians, (2) the fact that only a few PRWs enable physicians to respond on the Web to patient ratings, and (3) the lack of an active moderator to establish peer-to-peer communication. PRW providers should foster more frequent communication between the patient and the physician and encourage physicians to respond on the Web to patient ratings. Further research is needed to learn more about the motivation of physicians to respond or not respond to Web-based patient ratings.
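The Mann-Whitney U test used in the analysis above to compare rating results of responded and non-responded ratings can be sketched in a few lines; this is a minimal illustration, not the study's implementation, and the samples in the test are invented:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y.

    Pools both samples, assigns average ranks to ties, and derives
    U for x from its rank sum. U_x + U_y always equals len(x)*len(y).
    """
    nx = len(x)
    pooled = sorted(list(x) + list(y))

    def avg_rank(v):
        lo = pooled.index(v)             # first 0-based position of v
        hi = lo + pooled.count(v) - 1    # last 0-based position of v
        return (lo + hi) / 2 + 1         # average 1-based rank

    rank_sum_x = sum(avg_rank(v) for v in x)
    return rank_sum_x - nx * (nx + 1) / 2
```

For large samples the U statistic would feed a normal approximation to obtain a p-value; only the statistic itself is computed here.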
Purpose
This study aims to determine the intention to use hospital report cards (HRCs) for hospital referral purposes in the presence or absence of patient-reported outcomes (PROs) as well as to explore the relevance of publicly available hospital performance information from the perspective of referring physicians.
Methods
We identified the most relevant information for hospital referral purposes based on a literature review and qualitative research. Primary survey data were collected (May–June 2021) on a sample of 591 referring orthopedists in Germany and analyzed using structural equation modeling. Participating orthopedists were recruited using a sequential mixed-mode strategy and randomly allocated to work with HRCs in the presence (intervention) or absence (control) of PROs.
Results
Overall, 420 orthopedists (mean age 53.48, SD 8.04) were included in the analysis. The presence of PROs on HRCs was not associated with an increased intention to use HRCs (p = 0.316). Performance expectancy was shown to be the most important determinant of HRC use (path coefficient: 0.387, p < 0.001). However, referring physicians have doubts as to whether HRCs can help them. We identified “complication rate” and “the number of cases treated” as most important for hospital referral decision making; PROs were rated slightly less important.
Conclusions
This study underpins the purpose of HRCs, namely to support referring physicians in searching for a hospital. Nevertheless, only a minority would support the use of HRCs in their current form for their next hospital search. We showed that presenting relevant information on HRCs did not increase the intention to use them.
This paper aims to provide a structured overview of four open, participatory formats that are particularly applicable in inquiry-based teaching and learning contexts: hackathons, book sprints, barcamps, and learning circles. Using examples, mostly from the work and experience context of the Open Science Lab at TIB Hannover, we address concrete processes, working methods, possible outcomes and challenges.
The compilation offers an introduction to the topic and is intended to provide tools for trying these formats out in practice.
In a cross-sectional study, the impact of management practices on dairy farms on calf mortality rates and on the prevalence of rotavirus and Cryptosporidium parvum in the feces of calves was investigated. Sixty-two commercial dairy herds in Mecklenburg-Western Pomerania, Germany, were selected by stratified sampling in 2019. We conducted in-person interviews and collected fecal specimens from all female calves aged 7 to 21 days. Management data were documented at farm level. A Multiscreen Ag-ELISA was performed to detect rotavirus and Cryptosporidium parvum. Associations between two calf mortality rates, the detection of C. parvum and rotavirus, and the predictors were examined with generalized linear models (GLMs). On farms that routinely vaccinated against respiratory diseases, the 31-day mortality rate was 4.2% +/-1.26, compared to 7.6% +/-0.97 (p = 0.040) on non-vaccinating farms. Six-month mortality was lower on farms that continued feeding milk to calves during periods of diarrhea than on farms that did not (6.9% +/-0.8 vs. 12.4% +/-2.3). Where calves were routinely shifted from the calving box into calf boxes, less C. parvum was detected than where calves were moved individually (33.3% +/-2.6 vs. 19.6% +/-5.3; p = 0.024). Our model confirms a positive association between the occurrence of aqueous feces and the frequency of detection of C. parvum (45.4% +/-23.6 vs. 21.4% +/-18.7; p < 0.001). The frequency of detection of rotavirus was lower on farms that reported a defined amount of colostrum administered per calf than on farms that reported a range instead of a defined amount. This study indicates the potential for mitigating risk factors for calf mortality.
Background: Interprofessionalism, understood as collaboration between medical professionals, has gained prominence over recent decades and evidence for its impact has grown. The steadily increasing number of residents in nursing homes will challenge medical care and the interaction across professions, especially between nurses and general practitioners (GPs). The nursing home visit, a key element of medical care, has been underrepresented in research. This study explores GP perspectives on interprofessional collaboration, with a focus on their visits to nursing homes, in order to understand their experiences and expectations. This research represents one aspect of the interprof study, which explores medical care needs as well as the collaboration and communication perceived by nursing home residents, their families, GPs, and nurses. Methods: Open guideline interviews covering interprofessional collaboration and the visit process were conducted with 30 GPs in three study centers and analyzed with grounded theory methodology. GPs were recruited via postal request and through existing networks of the research partners. Results: Four different types of nursing home visits were found: visits on demand, periodical visits, nursing home rounds, and ad hoc decision-based visits. We identified the core category "productive performance" of home visits in nursing homes, which stands for the balance between GPs' individual efforts and rewards. GPs used different strategies to perform a productive home visit: preparing strategies, on-site strategies, and investing strategies. Conclusion: We compiled a theory of GPs' home visits in nursing homes in Germany. The findings will be useful for research, scientific, and management purposes to generate a deeper understanding of GP perspectives and thereby improve interprofessional collaboration to ensure a high quality of care.
Type 2 Diabetes Mellitus: Risk Evaluation and Advice in Undergraduate Students in Ashrafieh, Lebanon
(2016)
Type 2 diabetes mellitus (T2DM) is a chronic lifestyle disease. It has become evident that T2DM occurs even among the younger age groups. In Lebanon, T2DM has a major public health impact through high disease prevalence, significant downstream pathophysiologic effects, and enormous financial liabilities.
Monitoring of clinical trials is a fundamental process required by regulatory agencies. It assures the compliance of a center with the required regulations and the trial protocol. Traditionally, monitoring teams relied on extensive on-site visits and source data verification. However, this is costly and the outcome is limited. Thus, central statistical monitoring (CSM) is an additional approach, recently embraced by the International Council for Harmonisation (ICH), to detect problematic or erroneous data using visualizations and statistical control measures. Existing implementations have primarily focused on detecting inlier and outlier data; other approaches include principal component analysis and the distribution of the data. Here we focus on comparisons of centers to the grand mean for different model types and assumptions for common data types, such as binomial, ordinal, and continuous response variables. We implement multiple comparisons of single centers to the grand mean of all centers. This approach is also available for the various non-normal data types that are abundant in clinical trials. Further, using confidence intervals, an assessment of equivalence to the grand mean can be applied. In a Monte Carlo simulation study, the applied statistical approaches were investigated for their ability to control the type I error rate, and their respective power was assessed for balanced and unbalanced designs, which are common in registry data and clinical trials. Data from the German Multiple Sclerosis Registry (GMSR), including proportions of missing data, adverse events, and disease severity scores, were used to verify the results on real-world data (RWD).
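The core idea of comparing single centers to the grand mean can be sketched as follows; this is a simplified normal-approximation version for a continuous endpoint, with the grand mean treated as fixed, and is not the implementation used in the study:

```python
import statistics as st
from math import sqrt

def flag_centers(centers, z=2.58):
    """Flag centers whose mean deviates from the grand mean.

    centers: dict mapping center id -> list of observations.
    Returns the ids whose normal-approximation interval (about 99%
    coverage for z=2.58) for (center mean - grand mean) excludes 0.
    Treating the grand mean as fixed slightly understates the
    interval width; it keeps the sketch short.
    """
    all_obs = [v for obs in centers.values() for v in obs]
    grand = st.mean(all_obs)
    flagged = []
    for cid, obs in centers.items():
        diff = st.mean(obs) - grand          # center vs grand mean
        se = st.stdev(obs) / sqrt(len(obs))  # standard error of center mean
        if abs(diff) > z * se:
            flagged.append(cid)
    return flagged
```

In the equivalence variant mentioned in the abstract, the logic would be inverted: a center is declared consistent with the grand mean when its confidence interval lies entirely within a pre-specified equivalence margin.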
Improving Risk Assessment in Clinical Trials: Toward a Systematic Risk-Based Monitoring Approach
(2021)
Regulatory authorities have encouraged the usage of a risk-based monitoring (RBM) system in clinical trials before trial initiation for the detection of potential risks and the inclusion of a mitigation plan in the monitoring strategy. Several RBM tools were developed after the International Council for Harmonisation gave sponsors the flexibility to initiate an approach to enhance quality management in a clinical trial. However, various studies have demonstrated the need for improvement of the available RBM tools, as none provides a comprehensive overview of the characteristics, focus, and application. This research lays out a rationale for a risk methodology assessment (RMA) within the RBM system. The core purpose of RMA is to deliver a scientifically based evaluation and decision on any potential risk in a clinical trial, so that a monitoring plan can be developed to avoid previously identified risk outcomes. To demonstrate RMA's theoretical approach in practice, a Shiny web application (R Foundation for Statistical Computing) was designed to describe the assessment process of risk analysis and visualization tools that eventually aid in focusing monitoring activities. RMA focuses on the identification of an individual risk and visualizes its weight on the trial. The scoring algorithm of the presented approach displays the assessment of each individual risk in a radar plot and computes the overall score of the trial. Moreover, RMA's novelty lies in its ability to decrease biased decision making during risk assessment by categorizing risk influence and detectability, a characteristic pivotal for RBM in assessing risks and for a better understanding of the monitoring techniques needed to develop a functional monitoring plan. Future research should focus on validating RMA to demonstrate its efficiency. This would facilitate the process of characterizing the strengths and weaknesses of RMA in practice.
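A per-risk score combining likelihood, influence, and detectability can be sketched as below; the multiplicative weighting and the 1-5 scale are assumptions chosen for illustration (similar to an FMEA risk priority number), not the published RMA scoring algorithm:

```python
def risk_score(likelihood, influence, detectability, scale=5):
    """Illustrative score for one risk on a 1..scale rating.

    Higher likelihood and influence raise the score; good
    detectability (high rating) lowers it, since detectable risks
    are easier to mitigate during monitoring.
    """
    for v in (likelihood, influence, detectability):
        if not 1 <= v <= scale:
            raise ValueError(f"ratings must lie in 1..{scale}")
    return likelihood * influence * (scale + 1 - detectability)

def overall_score(risks, scale=5):
    """Mean of the individual risk scores, normalised to 0..1."""
    max_single = scale ** 3  # worst case: max likelihood/influence, min detectability
    return sum(risk_score(*r, scale=scale) for r in risks) / (len(risks) * max_single)
```

Each individual score would correspond to one spoke of the radar plot mentioned in the abstract, and the normalised mean to the overall trial score.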
Severe mastitis can lead to considerable disturbances in the cow's general condition and even to septicemia and death. The aim of this cross-sectional study was to identify factors associated with the severity of the clinical expression of mastitis. Streptococcus (Str.) uberis (29.9%) was the most frequently isolated pathogen, followed by coliform bacteria (22.3%). The majority of all mastitis cases (n = 854) in this study were either mild or moderate, but 21.1% were severe. The combination of coliform pathogens and increasing shedding of these pathogens was associated with severe mastitis. Furthermore, the animal-related factors associated with severe disease progression were stage of lactation and diseases in the period prior to the mastitis episode. Cows in early lactation had more severe mastitis, and ketosis and uterine diseases in temporal relation to the mastitis were associated with more severe mastitis in the diseased cows. Hypocalcemia was significantly associated with milder mastitis. As a further factor, treatment with corticosteroids within two weeks before mastitis was associated with higher severity of mastitis. Knowledge of these risk factors may provide the basis for randomized controlled trials investigating their exact influence on the severity of mastitis.
Acute stroke care is a time-critical process. Improving communication and documentation processes may have a positive effect on medical outcomes. To achieve this goal, a new system using a mobile application has been integrated into the existing infrastructure at Hannover Medical School (MHH). Within a pilot project, this system was introduced into daily clinical routine in February 2022. The insights generated may support further applications in clinical use cases.
An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from a series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images; the reconstructed objects then no longer truly represent the original, and inside the volumes the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic treatment of the measured X-rays, as well as the inclusion of a priori knowledge about the materials the examined object consists of. For each voxel, the proposed method outputs a numerical value that represents the probability that a predefined material existed at the position of that voxel at the time of the X-ray examination. Such a probabilistic quality measure has been lacking so far. In our experiments, falsely reconstructed areas are detected by their low probability, while a high probability predominates in accurately reconstructed areas. Receiver operating characteristic analyses not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction.
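The probabilistic ingredient, combining a priori material knowledge with measured data, can be illustrated by a generic Bayes update for a single voxel; the materials, the discretised observation classes, and all probabilities below are invented for illustration and do not reproduce the paper's method:

```python
def posterior_material(prior, likelihoods, observed):
    """Bayes update for which material occupies one voxel.

    prior: dict material -> prior probability (a priori knowledge).
    likelihoods: dict material -> dict observation class -> probability
        of seeing that (discretised) attenuation given the material.
    observed: the measured attenuation class for this voxel.
    Returns dict material -> posterior probability.
    """
    unnorm = {m: prior[m] * likelihoods[m].get(observed, 0.0) for m in prior}
    total = sum(unnorm.values())
    if total == 0:
        raise ValueError("observation impossible under all materials")
    return {m: p / total for m, p in unnorm.items()}
```

A low posterior for the material that the reconstruction claims at a voxel would mark that voxel as suspect, which mirrors how falsely reconstructed areas are detected by low probability in the abstract.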
Background: In many research areas it is necessary to find differences between treatment groups across several variables. For example, studies of microarray data seek a significant difference in location parameters from zero, or from one for ratios, for each variable. However, in some studies a significant deviation of the difference in locations from zero (or from 1 in terms of the ratio) is biologically meaningless. In such cases a relevant difference or ratio is sought.
Results: This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered; hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would admit straightforward solutions, the difficulties that motivate the empirical considerations discussed here arise because the shift is considered in both directions and the whole parameter space between these two limits has to be accepted as the null hypothesis.
Conclusion: The first algorithm discussed uses a permutation approach and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes; in that case the second procedure may be more appropriate, where multiplicity is corrected according to a concept of data-driven ordering of hypotheses.
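The permutation idea behind familywise error control can be sketched as follows; for brevity this sketch uses absolute mean differences with the max-statistic (Westfall-Young style) correction rather than the relevance-shifted ratio tests of the article:

```python
import random

def maxt_permutation_pvalues(group1, group2, n_perm=1000, seed=1):
    """Permutation-adjusted p-values via the max-statistic method.

    group1, group2: lists of observation vectors (one entry per
    variable). For each variable, the adjusted p-value is the
    fraction of permutations whose MAXIMUM statistic over all
    variables reaches that variable's observed statistic, which
    controls the familywise error rate.
    """
    rng = random.Random(seed)
    n1, k = len(group1), len(group1[0])
    pooled = group1 + group2

    def stats(a, b):
        # absolute difference of group means, per variable
        return [abs(sum(x[j] for x in a) / len(a)
                    - sum(x[j] for x in b) / len(b)) for j in range(k)]

    observed = stats(group1, group2)
    exceed = [0] * k
    for _ in range(n_perm):
        rng.shuffle(pooled)                          # relabel groups at random
        max_stat = max(stats(pooled[:n1], pooled[n1:]))
        for j in range(k):
            if max_stat >= observed[j]:
                exceed[j] += 1
    return [(e + 1) / (n_perm + 1) for e in exceed]  # add-one correction
```

With a relevance shift, the observed statistics would instead measure the exceedance of the ratio beyond the relevance limits, but the permutation bookkeeping stays the same.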
A semiparametric approach for meta-analysis of diagnostic accuracy studies with multiple cut-offs
(2022)
The accuracy of a diagnostic test is often expressed using a pair of measures: sensitivity (the proportion of test positives among all individuals with the target condition) and specificity (the proportion of test negatives among all individuals without the target condition). If the outcome of a diagnostic test is binary, results from different studies can easily be summarized in a meta-analysis. However, if the diagnostic test is based on a discrete or continuous measure (e.g., a biomarker), several cut-offs are published within one study as well as among different studies. Instead of taking the information from all cut-offs into account in the meta-analysis, a single cut-off per study is often selected arbitrarily for the analysis, even though statistical methods for the incorporation of several cut-offs exist. For these methods, distributional assumptions have to be met and/or the models may not converge for specific data structures. We propose a semiparametric approach to overcome both problems. Our simulation study shows that the proposed approach underestimates diagnostic accuracy, although this underestimation of sensitivity and specificity is relatively small. The comparative approach of Steinhauser et al. is better in terms of coverage probability but may lead to convergence problems. In addition to the simulation results, we illustrate the application of the semiparametric approach using a published meta-analysis of a diagnostic test differentiating between bacterial and viral meningitis in children.
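The definitions above, evaluated at several cut-offs of a continuous biomarker within one study, can be sketched as follows; the data in the test are invented, and this computes only the per-study inputs, not the semiparametric pooling itself:

```python
def sens_spec_per_cutoff(diseased, healthy, cutoffs):
    """Sensitivity and specificity of the rule 'positive if value >= c'.

    diseased, healthy: biomarker values for individuals with and
    without the target condition. Returns {c: (sensitivity, specificity)},
    i.e. one (Se, Sp) pair per published cut-off.
    """
    out = {}
    for c in cutoffs:
        true_pos = sum(v >= c for v in diseased)   # test positives among diseased
        true_neg = sum(v < c for v in healthy)     # test negatives among healthy
        out[c] = (true_pos / len(diseased), true_neg / len(healthy))
    return out
```

A multiple-cut-off meta-analysis would then combine all such pairs across studies instead of arbitrarily picking one cut-off per study.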
Molecular hydrogen production from amorphous solid water during low energy electron irradiation
(2017)
This work investigates the production of molecular hydrogen isotopologues (H2, HD, and D2) during low energy electron irradiation of layered and isotopically labelled thin films of amorphous solid water (ASW) in ultrahigh vacuum. Experimentally, the production of these molecules with both irradiation time and incident electron energy in the range 400 to 500 eV is reported as a function of the depth of a buried D2O layer in an H2O film. H2 is produced consistently in all measurements, reflecting the H2O component of the film, though it does exhibit a modest reduction in intensity at the time corresponding to product escape from the buried D2O layer. In contrast, HD and D2 production exhibit peaks at times corresponding to product escape from the buried D2O layer in the composite film. These features broaden with increasing depth at which the HD or D2 is formed, owing to diffusion. A simple random-walk model is presented that can qualitatively explain the appearance profile of these peaks as a function of the incident electron penetration.
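A random-walk picture of product escape like the one invoked above can be sketched in a few lines; the unit step per hop, the reflecting film bottom, and the parameters in the test are simplifying assumptions for illustration, not the authors' model:

```python
import random

def escape_steps(depth, film_thickness, rng):
    """Steps for a 1D random walker to first reach the surface.

    The walker starts `depth` layers below the surface (position 0),
    hops one layer up or down per step, and is reflected at the film
    bottom. The step count at escape plays the role of the product's
    residence time before desorption.
    """
    pos = depth
    steps = 0
    while True:
        steps += 1
        pos += rng.choice((-1, 1))
        if pos > film_thickness:      # reflect at the film bottom
            pos = film_thickness
        if pos == 0:
            return steps
```

Simulating many walkers shows the qualitative behaviour in the abstract: products formed deeper in the film escape later on average, and their escape-time distribution is broader.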
Family risks are known to be detrimental to children's attachment development. This study investigated whether parental sensitivity plays different roles in early attachment development in the context of risk: sensitivity was hypothesized both to mediate risk effects on attachment and to moderate the relation between risk and attachment. Multiple family risks, parental sensitivity (defined as responsivity and supportive presence), and attachment security were assessed in 197 infants and toddlers (mean age = 15.25 months) and their caregivers in a prospective study with a cohort-sequential design in Germany. Caregivers' sensitivity served both as a mediator of risk effects on attachment and as a moderator that buffers the adverse consequences of risk. Early sensitivity might thus be relevant in setting the stage for attachment development and in supporting resilience.
Growing up in high-risk environments is detrimental to children's development of attachment security. Parenting behavior is hypothesized to be the mechanism through which risks exert their influence. However, risk influences can vary between individuals by gender. The aim of this study was to explore specific pathways of family risk on early attachment security and, additionally, to examine the transmission via parenting behavior. The sample consisted of 197 children and their primary caregivers. Children's age ranged between 10 and 21 months (M = 15.25, SD = 3.59). Data assessment included 21 distal and proximal family risk factors, children's attachment security, and parental responsivity and supportive presence. Whereas distal risk factors had an adverse effect only on girls' attachment security, proximal risks negatively affected only boys' attachment security. Additionally, patterns of risk factors occurring in our sample were analyzed using an exploratory principal component analysis. Regardless of the child's gender, a low socio-economic status was negatively related to the attachment security of all children. Migration and crowding, as well as a high emotional load of the primary caregiver, negatively predicted girls' but not boys' attachment security, whereas the attachment security of boys was affected by a negative family climate. Most of the adverse risk effects on attachment security were mediated by parental responsivity and supportive presence, so the transmission of risk occurs through parenting behavior. The results revealed a different susceptibility to family risks for girls and boys. A gender-sensitive approach in developmental psychopathology and in interventions of developmental child welfare services is recommended.
Chronic kidney disease (CKD) is one of the main causes of mortality worldwide. It affects more than 800 million patients globally, approximately 10% of the general population. The significant burden of the disease prompts healthcare systems to implement adequate preventive and therapeutic measures. This systematic review and meta-analysis aimed to provide a concise summary of the published findings on the influence of mobile health technology on the outcomes of patients with the disease. A comprehensive systematic literature search was conducted from inception until March 1st, 2023. All clinical trials that compared the efficacy of mobile app-based educational programs with that of more conventional educational treatment were included. Eleven papers representing 759 CKD patients were included in the analysis; 381 patients were randomly assigned to use the mobile apps, while 378 were assigned to the control group. The mean systolic blood pressure was considerably lower in the mobile app group (MD −4.86; 95% CI −9.60, −0.13; p = 0.04). Meanwhile, the mean level of satisfaction among patients who used the mobile app was considerably greater (MD 0.75; 95% CI 0.03, 1.46; p = 0.04). Additionally, the mean self-management scores in the mobile app groups were significantly higher (SMD 0.534; 95% CI 0.201, 0.867; p = 0.002). Mobile health applications are potentially valuable interventions for these patients: the technology improved self-management of the disease and reduced mean systolic blood pressure, with a high degree of patient satisfaction.
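The pooled mean differences (MD) with confidence intervals reported above come from inverse-variance meta-analysis; a fixed-effect sketch is shown below with invented study inputs, whereas a full analysis would typically also consider a random-effects model to allow for between-study heterogeneity:

```python
from math import sqrt

def pooled_md(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of mean differences.

    studies: list of (md, se) pairs, one per trial, where md is the
    trial's mean difference and se its standard error. Returns the
    pooled MD and its 95% confidence interval.
    """
    weights = [1 / se ** 2 for _, se in studies]      # precision weights
    total = sum(weights)
    md = sum(w * m for w, (m, _) in zip(weights, studies)) / total
    se_pooled = sqrt(1 / total)
    return md, (md - z * se_pooled, md + z * se_pooled)
```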
Powder bed-based additive manufacturing processes offer extended freedom in design and enable the processing of metals, ceramics, and polymers with a high level of relative density. The latter is a prevalent measure of process and component quality, which depends on various input variables. A key point in this context is the condition of the powder bed. To enhance comprehension of its particle-level formation and to facilitate process optimization, simulations based on the Discrete Element Method are increasingly employed in research. To generate qualitatively as well as quantitatively reliable simulation results, an adaptation of the contact model parameterization is necessary. However, current adaptation methods often require the implementation of models that significantly increase the computational effort, which limits their applicability. To overcome this obstacle, a formula-based adaptation and evaluation method is presented in this research. The developed method enables accelerated parameter determination with limited experimental effort. It thus represents an integrative component that supports further research based on the Discrete Element Method by significantly reducing the parameterization effort. The universal way in which the method is derived also allows its adaptation to similar parameterization problems and its implementation in other fields of research.
Aim:
To characterize palliative care patients and to estimate the incidence, prevalence, and 1-year all-cause mortality of patients in Germany who received palliative care treatment.
Subject and methods:
The study analyzed the InGef Research Database, which covers 4 million people insured in German statutory health insurance companies. Specific outpatient and inpatient reimbursement codes were used to capture cases with palliative conditions. The prevalence was ascertained for the year 2015. The incidence was calculated for patients without documented palliative care services in the year before the observation period. The Kaplan–Meier method was used to analyze the 1-year all-cause mortality.
Results:
The incidence rate of palliative conditions was 41.3 and 34.9 per 10,000 persons in women and men, respectively. The prevalence per 10,000 persons was 61.3 in women and 51.1 in men. The 1-year all-cause mortality among patients receiving their first palliative care treatment was 67.5%. Mortality was lower in patients receiving general outpatient palliative care treatment (AAPV; 60.8%) compared to patients receiving specialized outpatient palliative care treatment (SAPV; 86.1%) or inpatient palliative care treatment (90.6%). Within the first 30 days, mortality was particularly high (~43.0%).
Conclusions:
In Germany, more than 400,000 patients per year receive palliative care treatment, which is lower compared to estimates of the number of persons with a potential need for palliative care. This gap was observed particularly in younger to middle-aged individuals. The findings indicate a demand for methodologically sound studies to investigate the public health burden and to quantify the unmet need for palliative care in Germany.
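The Kaplan-Meier method named in the study can be sketched in a compact estimator; the survival-time data in the test are toy values, not the InGef registry data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times: follow-up time per patient; events: 1 if death observed,
    0 if censored. Returns [(t, S(t))] at each time with >= 1 death,
    multiplying the survival by (1 - deaths/at-risk) at every event
    time. Censored patients leave the risk set after their time.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    survival = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = n_here = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_here += 1
            i += 1
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= n_here
    return curve
```

The 1-year all-cause mortality reported in the abstract corresponds to 1 - S(365 days) read off such a curve.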
Background
Chronic obstructive pulmonary disease (COPD) causes significant morbidity and mortality worldwide. Estimation of incidence, prevalence and disease burden through routine insurance data is challenging because of under-diagnosis and under-treatment, particularly for early stage disease in health care systems where outpatient International Classification of Diseases (ICD) diagnoses are not collected. This poses the question of which criteria are commonly applied to identify COPD patients in claims datasets in the absence of ICD diagnoses, and which information can be used as a substitute. The aim of this systematic review is to summarize previously reported methodological approaches for the identification of COPD patients through routine data and to compile potential criteria for the identification of COPD patients if ICD codes are not available.
Methods
A systematic literature review was performed in Medline via PubMed and Google Scholar from January 2000 through October 2018, followed by a manual review of the included studies by at least two independent raters. Study characteristics and all identifying criteria used in the studies were systematically extracted from the publications, categorized, and compiled in evidence tables.
Results
In total, the systematic search yielded 151 publications. After title and abstract screening, 38 publications were included into the systematic assessment. In these studies, the most frequently used (22/38) criteria set to identify COPD patients included ICD codes, hospitalization, and ambulatory visits. Only four out of 38 studies used methods other than ICD coding. In a significant proportion of studies, the age range of the target population (33/38) and hospitalization (30/38) were provided. Ambulatory data were included in 24, physician claims in 22, and pharmaceutical data in 18 studies. Only five studies used spirometry, two used surgery and one used oxygen therapy.
Conclusions
A variety of criteria are used for the identification of COPD from routine data. The most promising criteria set in data environments lacking ambulatory diagnosis codes is the consideration of additional illness-related information, with special attention to pharmacotherapy data. Further health services research should focus on the application of more systematic internal and/or external validation approaches.
We report velocity-dependent internal energy distributions of nitric oxide molecules, NO, scattered off graphene supported on gold to further explore the dynamics of the collision process between NO radicals and graphene. These experiments were performed by directing a molecular beam of NO onto graphene in a surface-velocity map imaging setup, which allowed us to record internal energy distributions of the NO radicals as a function of their velocity. We do not observe bond formation but (1) major contributions from direct inelastic scattering and (2) a smaller trapping–desorption component in which some physisorbed NO molecules have residence times on the order of microseconds. This is in agreement with our classical molecular dynamics simulations, which also show a small proportion of two- and multi-bounce collision events as well as a small proportion of NO radicals trapped at the surface for the entire length of the simulations (a few picoseconds). Despite a collision energy of 0.31 eV, which would be sufficient to populate NO(v = 1), we do not detect vibrationally excited nitric oxide.
We performed classical molecular dynamics simulations to model the scattering process of nitric oxide, NO, off graphene supported on gold. This is motivated by our desire to probe the energy transfer in collisions with graphene. Since many collision systems comprising graphene and small molecules have been shown to scatter non-reactively, classical molecular dynamics appears to describe such systems sufficiently. We directed thousands of trajectories of NO molecules onto graphene along the surface normal, varying the impact position as well as the speed, orientation, and rotational excitation of the nitric oxide, and compared the results with experimental data. While experiment and theory do not match quantitatively, both agree that the relative amount of kinetic energy lost during the collision increases with increasing initial kinetic energy of the NO. Furthermore, at higher collision energies all NO molecules lose some energy and the vast majority are scattered back, whereas at low impact energies the fraction of nitric oxide molecules trapped at the surface increases, and some NO molecules even gain kinetic energy during the collision. The collision energy seems to go preferentially into the collective motion of the carbon atoms in the graphene sheet.
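The energy transfer described above can be sketched with a deliberately simplified toy model: a point projectile striking a single harmonic "surface" oscillator through an exponential repulsion, integrated with velocity Verlet. This is not the paper's actual graphene force field; masses, spring constant, and potential parameters are arbitrary reduced units chosen only to show that the projectile leaves with less kinetic energy than it arrived with.

```python
import math

def scatter(v0, m=1.0, M=3.0, k=4.0, A=50.0, alpha=3.0, dt=1e-4, z0=5.0):
    """1D velocity-Verlet trajectory of a projectile (mass m) hitting a
    harmonic surface oscillator (mass M, spring constant k) through a
    purely repulsive interaction V = A * exp(-alpha * (z - x)).
    Returns (initial, final) projectile kinetic energy in reduced units."""
    z, vz = z0, -abs(v0)      # projectile height and velocity
    x, vx = 0.0, 0.0          # surface-atom displacement and velocity

    def forces(z, x):
        f = A * alpha * math.exp(-alpha * (z - x))  # repulsive pair force
        return f, -f - k * x                        # on projectile, on surface atom

    fz, fx = forces(z, x)
    while not (z > z0 and vz > 0):                  # integrate until scattered back
        vz += 0.5 * dt * fz / m
        vx += 0.5 * dt * fx / M
        z += dt * vz
        x += dt * vx
        fz, fx = forces(z, x)
        vz += 0.5 * dt * fz / m
        vx += 0.5 * dt * fx / M
    return 0.5 * m * v0**2, 0.5 * m * vz**2
```

Because the oscillator starts at rest and retains some vibrational energy after the encounter, the scattered projectile is always slower than the incident one; the full simulations extend this picture to thousands of trajectories on an all-atom graphene sheet.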
The aim of this study was to examine the opinions of farmers on a consulting project established for organic dairy farms in Northern Germany, which involved different animal health experts participating in the meetings. Furthermore, the properties of measures that are decisive for implementation on the farms were identified in order to improve consultancy services for dairy farming. Once a year, the farmers met on a host farm in one of three groups consisting of five to nine farms, a facilitator and an expert. At each meeting, a host farm was visited and the analysed data of all participating farms from the previous year were presented to the group members. Each farmer had the opportunity to report on success stories and issues concerning his herd. During discussions, the farmers first proposed farm-specific measures to one another for improving herd health and animal welfare. Afterwards, the expert named possible interventions and commented on the measures given by the farmers. All measures were noted by the facilitator. At the end of each meeting, each farmer could choose which of the given measures he wanted to implement. Open group interviews as well as anonymous questionnaires for the farmers were used at the meetings in winter 2016/2017 to evaluate their perception of this consulting project and to determine which properties of measures were important for implementation on the farms. Based on the results of this study, the participating farmers were very positive towards this kind of consulting project. They favoured the participation of an expert during the meetings and the analysis of farm-specific data. Farmers mostly chose measures for implementation that were proposed by farmers and approved by the expert, followed by those proposed by the expert only. Measures were chosen when they were practical to implement, effective, efficient, and required little additional workload.
HOXA9 and MEIS1 are frequently upregulated in acute myeloid leukemia (AML), including AML with MLL rearrangement. Because of their pivotal role in hematopoiesis, HOXA9 and MEIS1 appear non-druggable. We therefore interrogated gene expression data of pre-leukemic (overexpressing Hoxa9) and leukemogenic (overexpressing Hoxa9 and Meis1; H9M) murine cell lines to identify cancer vulnerabilities. Through gene expression analysis and gene set enrichment analyses, we compiled a list of 15 candidates for functional validation. Using a novel lentiviral multiplexing approach, we selected and tested highly active sgRNAs to knock out candidate genes by CRISPR/Cas9, and subsequently identified a H9M cell growth dependency on the cytosolic phospholipase A2 (PLA2G4A). Similar results were obtained by shRNA-mediated suppression of Pla2g4a. Remarkably, pharmacologic inhibition of PLA2G4A with arachidonyl trifluoromethyl ketone (AACOCF3) accelerated the loss of H9M cells in bulk cultures. Additionally, AACOCF3 treatment of H9M cells reduced colony numbers and colony sizes in methylcellulose. Moreover, AACOCF3 was highly active in human AML with MLL rearrangement, in which PLA2G4A was expressed significantly higher than in AML patients without MLL rearrangement, and serves as an independent prognostic marker. Our work thus identifies PLA2G4A as a prognostic marker and potential therapeutic target for H9M-dependent AML with MLL rearrangement.
In recent years, the decrease in mineral oil supplies has stimulated research on the use of biomass as an alternative energy source. Climate change has brought problems such as increased drought and erratic rains. This, together with a rise in land degradation and concomitant loss of soil fertility, has inspired the scientific world to look for alternative bio-energy species. Euphorbia tirucalli L., a tree with C3/CAM metabolism in leaves/stem, can be cultivated on marginal, arid land and could be a good alternative source of biofuel.
We analyzed a broad variety of E. tirucalli plants collected from different countries for their genetic diversity using AFLP. Physiological responses to induced drought stress were determined in a number of genotypes by monitoring growth parameters and influence on photosynthesis. For future breeding of economically interesting genotypes, rubber content and biogas production were quantified.
Cluster analysis shows that the studied genotypes are divided into two groups, African and mostly non-African genotypes. Different genotypes respond significantly differently to varying levels of water availability. Malate measurements indicate that CAM is induced in leaves following drought stress. Rubber content varies strongly between genotypes. An investigation of the biogas production capacities of six E. tirucalli genotypes reveals biogas yields higher than those from rapeseed but lower than those from maize silage.
We present a small case study on citations of conference posters, using poster collections from both Figshare and Zenodo. The study covers the years 2016–2020 according to the dates of publication on the platforms. Citation data were taken from DataCite, Crossref and Dimensions. Primarily, we want to know to what extent scientific posters are being cited, and thereby which impact posters potentially have on the scholarly landscape and especially on academic publications. Our data-driven analysis reveals that posters are rarely cited: citations could be found for only 1% of the posters in our dataset. A limitation of this study, however, is that the impact of academic posters was not measured empirically but rather descriptively.
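The descriptive part of such an analysis reduces to a simple aggregation over per-DOI citation counts, as they could be retrieved from sources like DataCite, Crossref (which exposes an `is-referenced-by-count` field per work), or Dimensions. The sketch below is a hypothetical illustration of that aggregation step, not the study's actual pipeline.

```python
def citation_summary(counts):
    """Summarize citation coverage of a poster collection.

    counts: mapping of DOI -> number of citations found for that poster.
    Returns the collection size, the number of posters cited at least once,
    and the share of cited posters.
    """
    cited = sum(1 for c in counts.values() if c > 0)
    total = len(counts)
    return {
        "total": total,
        "cited": cited,
        "share_cited": cited / total if total else 0.0,
    }
```

Applied to the study's dataset, a `share_cited` of roughly 0.01 would correspond to the reported 1% of posters with at least one citation.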
A German university has developed a learning information system to improve information literacy among German students. The structure of this learning information system is described, an online tutorial based on it is illustrated, and the different learning styles it supports are indicated.
The transfer of historically grown monolithic software architectures into modern service-oriented architectures creates many loose coupling points. This can lead to unforeseen system behavior and can significantly impede such continuous modernization processes, since it is not clear where bottlenecks in a system arise. It is therefore necessary to accompany such modernization processes with an adaptive monitoring concept in order to correctly record and interpret unpredictable system dynamics. This contribution presents a generic QoS measurement framework for service-based systems. The framework consists of an XML-based specification of the measurement to be performed – the Information Model (IM) – and the QoS System, which provides an execution platform for the IM. The framework is applied to a standard business process of the German insurance industry, and the concepts of the IM and their mapping to artifacts of the QoS System are presented. Furthermore, the design and implementation of the QoS System's parser and generator module and the generated artifacts are explained in detail, e.g., the event model, agents, measurement module and analyzer module.
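To make the parser/generator idea concrete, the following sketch parses a miniature XML Information Model into measurement specifications that an execution platform could act on. The actual IM schema is not reproduced in the abstract, so every tag and attribute name below (including the services and metrics) is an invented assumption.

```python
import xml.etree.ElementTree as ET

# Hypothetical IM fragment; the real schema of the paper's Information
# Model is not given here, so this structure is purely illustrative.
IM_EXAMPLE = """
<informationModel>
  <measurement service="PartnerService" metric="responseTime"
               unit="ms" threshold="500"/>
  <measurement service="ContractService" metric="availability"
               unit="percent" threshold="99.5"/>
</informationModel>
"""

def parse_im(xml_text):
    """Turn an IM document into a list of measurement specs, i.e., the
    kind of artifact a QoS System's parser module would hand to its
    generator (agents, measurement module, analyzer module)."""
    root = ET.fromstring(xml_text)
    return [
        {
            "service": m.get("service"),
            "metric": m.get("metric"),
            "unit": m.get("unit"),
            "threshold": float(m.get("threshold")),
        }
        for m in root.iter("measurement")
    ]
```

Keeping the measurement definition declarative in XML is what makes the monitoring adaptive: changing what is measured requires editing the IM, not the monitoring code.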
Harmonisation of German Health Care Data Using the OMOP Common Data Model – A Practice Report (2023)
Data harmonization is an important step in large-scale data analysis and for generating evidence on real world data in healthcare. With the OMOP common data model, a relevant instrument for data harmonization is available that is being promoted by different networks and communities. At the Hannover Medical School (MHH) in Germany, an Enterprise Clinical Research Data Warehouse (ECRDW) is established and harmonization of that data source is the focus of this work. We present MHH’s first implementation of the OMOP common data model on top of the ECRDW data source and demonstrate the challenges concerning the mapping of German healthcare terminologies to a standardized format.
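The core of such a harmonization step is mapping local source codes (e.g., German ICD-10-GM diagnoses) to standard concepts and emitting OMOP CDM records. The sketch below uses the real column names of the OMOP `CONDITION_OCCURRENCE` table, but the lookup table and the concept ids in it are placeholders, not actual OMOP vocabulary entries, and this is not the MHH implementation.

```python
# Placeholder source-to-standard lookup; in practice this mapping comes
# from the OMOP vocabulary tables. The concept ids here are invented.
SOURCE_TO_STANDARD = {
    "ICD10GM:J44.9": 9990001,  # hypothetical standard concept_id
    "ICD10GM:I10":   9990002,  # hypothetical standard concept_id
}

def to_condition_occurrence(person_id, source_code, start_date):
    """Build a minimal OMOP CONDITION_OCCURRENCE record from a local
    diagnosis. Unmapped codes get concept_id 0 ("no matching concept"),
    the OMOP convention, while the original code is preserved in the
    condition_source_value field."""
    concept_id = SOURCE_TO_STANDARD.get(source_code, 0)
    return {
        "person_id": person_id,
        "condition_concept_id": concept_id,
        "condition_start_date": start_date,
        "condition_source_value": source_code,
    }
```

The challenge the report describes lies precisely in populating such a lookup: German healthcare terminologies are only partially covered by standard OMOP vocabularies, so many source codes initially map to concept_id 0.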