Background: After kidney transplantation, immunosuppressive therapy impairs cellular immune defense, leading to an increased risk of viral complications. Trough-level monitoring of immunosuppressants is insufficient to estimate the individual intensity of immunosuppression. We have already shown that virus-specific T cells (Tvis) correlate with control of virus replication as well as with the intensity of immunosuppression. The multicentre IVIST01-trial aims to demonstrate that additionally steering immunosuppressive and antiviral therapy by Tvis levels leads to better graft function by avoiding both over-immunosuppression (resulting, for example, in viral infections) and drug toxicity (for example, nephrotoxicity).
Methods/design: The IVIST-trial starts 4 weeks after transplantation. Sixty-four pediatric kidney recipients are randomized either to a non-intervention group that is treated conservatively or to an intervention group with additional monitoring by Tvis. The randomization is stratified by centre and cytomegalovirus (CMV) prophylaxis. In both groups the immunosuppressive medication (cyclosporine A and everolimus) is adjusted to the same target range of trough levels. In the non-intervention group the immunosuppressive therapy is steered by classical trough-level monitoring alone, and antiviral therapy of a CMV infection follows a standard protocol. In contrast, in the intervention group the dose of immunosuppressants is individually adapted according to Tvis levels, as a direct measure of the intensity of immunosuppression, in addition to classical trough-level monitoring. In case of CMV infection or reactivation, antiviral management is based on the individual CMV-specific immune defense assessed by the CMV-Tvis level. The primary endpoint of the study is the glomerular filtration rate 2 years after transplantation; secondary endpoints are the number and severity of viral infections and the incidence of side effects of immunosuppressive and antiviral drugs.
Discussion: This IVIST01-trial will answer the question of whether the new concept of steering immunosuppressive and antiviral therapy by Tvis levels leads to better long-term graft function. In terms of effect-related drug monitoring, the study design aims to personalize immunosuppressive and antiviral management after transplantation. Based on the IVIST01-trial, immunomonitoring by Tvis might be incorporated into routine care after kidney transplantation.
Background: In many research areas it is necessary to find differences between treatment groups with several variables. For example, studies of microarray data seek a significant difference in location parameters from zero, or from one for ratios thereof, for each variable. However, in some studies a significant deviation of the difference in locations from zero (or from one in terms of the ratio) is biologically meaningless. A relevant difference or ratio is sought in such cases.
Results: This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would allow straightforward solutions, the problems motivating the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space between these two limits has to be accepted as the null hypothesis.
Conclusion: The first procedure to be discussed uses a permutation algorithm and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes; in such cases the second procedure may be more appropriate, where multiplicity is corrected according to a concept of data-driven order of hypotheses.
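The familywise-error-controlling mechanism behind the first, permutation-based procedure can be illustrated with a max-statistic permutation test. This is a hedged sketch for a plain difference-in-means setting, not the relevance-shifted ratio test itself; all data, dimensions and the number of permutations are invented for illustration.

```python
# Sketch: permutation test with a max statistic over variables, which
# controls the familywise error rate in a multivariate two-sample design.
# Synthetic data; NOT the relevance-shifted ratio test from the article.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, p = 15, 15, 5
a = rng.normal(0.0, 1.0, size=(n1, p))
b = rng.normal(0.0, 1.0, size=(n2, p))
b[:, 0] += 2.0  # one truly shifted variable

obs = np.abs(a.mean(axis=0) - b.mean(axis=0))  # observed statistics
pooled = np.vstack([a, b])

n_perm = 2000
max_null = np.empty(n_perm)
for i in range(n_perm):
    idx = rng.permutation(n1 + n2)           # relabel group membership
    pa, pb = pooled[idx[:n1]], pooled[idx[n1:]]
    max_null[i] = np.abs(pa.mean(axis=0) - pb.mean(axis=0)).max()

# Adjusted p-value per variable: compare each observed statistic to the
# permutation distribution of the MAXIMUM over all variables.
p_adj = (1 + (max_null[None, :] >= obs[:, None]).sum(axis=1)) / (1 + n_perm)
print(p_adj)
```

With the fixed seed, the shifted first variable obtains a small adjusted p-value while the others do not; the max-statistic trick is what makes the correction exact for exchangeable groups.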
Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcome. Besides the study design, the correlation structures of the different outcomes of antibiotic resistance and structural zero measurements on the resistance outcome as well as on the exposure side are challenges for the epidemiological model building process. The use of appropriate regression models that acknowledge these complexities is essential to ensure valid epidemiological interpretations. The aims of this paper are (i) to explain the model building process comparing several competing models for count data (negative binomial model, quasi-Poisson model, zero-inflated model, and hurdle model) and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential to evaluate which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence and associated factors for the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistance in livestock. After model comparison based on evaluation of model predictions, Akaike information criterion, and Pearson residuals, here the hurdle model was judged to be the most appropriate model.
Background
In recent years, it has become apparent that health status and performance differ considerably between dairy farms in Northern Germany. To obtain clues about the possible causes of these differences, a case-control study was performed. Case farms, which showed signs of health and performance problems, and control farms, which had none of these signs, were compared. Risk factors from different areas such as health management, housing, hygiene and nutrition were investigated, as these are known to be highly influential. The aim of this study was to identify major factors within these areas that have the strongest association with health and performance problems of dairy herds in Northern Germany.
Results
In the final model, a lower energy density in the roughage fraction of the diet, more pens with dirty lying areas and a low ratio of cows per watering space were associated with a higher risk for herd health problems. Moreover, case farms were affected by infections with intestinal parasites, lungworms, liver flukes and Johne's disease numerically more often than control farms. Case farms more often had pens with raised cubicles compared to the deep-bedded stalls or straw yards found in control farms. In general, the hygiene of the floors and beddings was worse in case farms. Concerning nutrition, the microbiological and sensory quality of the provided silages was often insufficient, even in control farms. Less roughage was provided to early lactating cows, and the feed was pushed to the feeding fence less frequently in case farms than in control farms.
Conclusions
The results show that milk yield and health status were associated with various factors from different areas, stressing the importance of all aspects of management for good animal health and performance. Moreover, this study confirmed well-known risk factors for health problems and performance losses. These should be heeded more closely in herd health management.
The objective was to establish and standardise a broth microdilution susceptibility testing method for porcine Bordetella (B.) bronchiseptica. B. bronchiseptica isolates from different geographical regions and farms were genotyped by macrorestriction analysis and subsequent pulsed-field gel electrophoresis. One reference strain, one type strain and two field isolates of B. bronchiseptica were chosen to analyse growth curves in four different media: cation-adjusted Mueller-Hinton broth (CAMHB) with and without 2% lysed horse blood, Brain-Heart-Infusion (BHI), and Caso broth. The growth rate of each test strain in each medium was determined by culture enumeration, and the suitability of CAMHB was confirmed by comparative statistical analysis. Thereafter, the reference and type strains and eight epidemiologically unrelated field isolates of B. bronchiseptica were used to test the suitability of a broth microdilution susceptibility testing method following CLSI-approved performance standards given in document VET01-A4. Susceptibility tests, using 20 antimicrobial agents, were performed in five replicates, and data were collected after 20 and 24 hours of incubation and statistically analysed. Due to the low growth rate of B. bronchiseptica, an incubation time of 24 hours resulted in significantly more homogeneous minimum inhibitory concentrations after five replications compared to a 20-hour incubation. An interlaboratory comparison trial including susceptibility testing of 24 antimicrobial agents revealed a high mean level of reproducibility (97.9%) of the modified method. Hence, for harmonized broth microdilution susceptibility testing of B. bronchiseptica, an incubation time of 24 hours in CAMHB at an incubation temperature of 35°C and an inoculum concentration of approximately 5 × 10⁵ cfu/ml was proposed.
Background: Epidemiological and experimental studies suggest that exposure to ultrafine particles (UFP) might aggravate the allergic inflammation of the lung in asthmatics.
Methods: We exposed 12 allergic asthmatics in two subgroups in a double-blinded randomized cross-over design, first to freshly generated ultrafine carbon particles (64 μg/m³; 6.1 ± 0.4 × 10⁵ particles/cm³ for 2 h) and then to filtered air, or vice versa, with a 28-day recovery period in between. Eighteen hours after each exposure, grass pollen was instilled into a lung lobe via bronchoscopy. Another 24 hours later, inflammatory cells were collected by means of bronchoalveolar lavage (BAL). (Trial registration: NCT00527462)
Results: For the entire study group, inhalation of UFP by itself had no significant effect on the allergen-induced inflammatory response measured by total cell count, as compared to exposure to filtered air (p = 0.188). However, the subgroup of subjects who inhaled UFP during the first exposure exhibited a significant increase in total BAL cells (p = 0.021), eosinophils (p = 0.031) and monocytes (p = 0.013) after filtered air exposure and subsequent allergen challenge 28 days later. Additionally, the potential of BAL cells to generate oxidant radicals was significantly elevated at that time point. The subgroup that was exposed first to filtered air and 28 days later to UFP did not reveal differences between sessions.
Conclusions: Our data demonstrate that exposure to UFP before allergen challenge had no acute effect on the allergic inflammation. However, the subgroup analysis leads to the speculation that inhaled UFP might have a long-term effect on the inflammatory course in asthmatic patients. This should be confirmed in further studies with an appropriate study design and a sufficient number of subjects.
Background: Maintenance of metal homeostasis is crucial in bacterial pathogenicity as metal starvation is the most important mechanism in the nutritional immunity strategy of host cells. Thus, pathogenic bacteria have evolved sensitive metal scavenging systems to overcome this particular host defence mechanism. The ruminant pathogen Mycobacterium avium ssp. paratuberculosis (MAP) displays a unique gut tropism and causes a chronic progressive intestinal inflammation. MAP possesses eight conserved lineage specific large sequence polymorphisms (LSP), which distinguish MAP from its ancestral M. avium ssp. hominissuis or other M. avium subspecies. LSP14 and LSP15 harbour many genes proposed to be involved in metal homeostasis and have been suggested to substitute for a MAP specific, impaired mycobactin synthesis.
Results: In the present study, we found that an LSP14-located putative IrtAB-like iron transporter encoded by mptABC was induced by zinc starvation but not by iron starvation. Heterologous reporter gene assays with the lacZ gene under control of the mptABC promoter in M. smegmatis (MSMEG) and in an MSMEGΔfurB deletion mutant revealed a zinc-dependent expression of mptABC, mediated by the metalloregulator FurB via a conserved mycobacterial FurB recognition site. Deep sequencing of RNA from MAP cultures treated with the zinc chelator TPEN revealed that 70 genes responded to zinc limitation. Remarkably, 45 of these genes were located on a large genomic island of approximately 90 kb which harboured LSP14 and LSP15. Thirty-five of these genes were predicted to be controlled by FurB, due to the presence of putative binding sites. This clustering of zinc-responsive genes was found exclusively in MAP and not in other mycobacteria.
Conclusions: Our data revealed a particular genomic signature for MAP given by a unique zinc specific locus, thereby suggesting an exceptional relevance of zinc for the metabolism of MAP. MAP seems to be well adapted to maintain zinc homeostasis which might contribute to the peculiarity of MAP pathogenicity.
Methods for standard meta-analysis of diagnostic test accuracy studies are well established and understood. For the more complex case in which studies report test accuracy across multiple thresholds, several approaches have recently been proposed. These are based on similar ideas, but make different assumptions. In this article, we apply four different approaches to data from a recent systematic review in the area of nephrology and compare the results. The four approaches use: a linear mixed effects model, a Bayesian multinomial random effects model, a time-to-event model and a nonparametric model, respectively. In the case study data, the accuracy of neutrophil gelatinase-associated lipocalin for the diagnosis of acute kidney injury was assessed in different scenarios, with sensitivity and specificity estimates available for three thresholds in each primary study. All approaches led to plausible and mostly similar summary results. However, we found considerable differences in results for some scenarios, for example, differences in the area under the receiver operating characteristic curve (AUC) of up to 0.13. The Bayesian approach tended to lead to the highest values of the AUC, and the nonparametric approach tended to produce the lowest values across the different scenarios. Though we recommend using these approaches, our findings motivate the need for a simulation study to explore optimal choice of method in various scenarios.
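The AUC summaries compared above are built, at the primary-study level, from sensitivity/specificity pairs at a few thresholds. The following is a minimal sketch of an empirical (trapezoidal) AUC computed from three such pairs; the numbers are invented and only illustrate the calculation, not any of the four modelling approaches.

```python
# Sketch: empirical (trapezoidal) AUC from sensitivity/specificity pairs
# at three thresholds, as available per primary study in the review.
# The numbers are invented for illustration.
import numpy as np

sens = np.array([0.95, 0.80, 0.60])   # sensitivity at three thresholds
spec = np.array([0.55, 0.75, 0.90])   # specificity at the same thresholds

# ROC points are (1 - specificity, sensitivity); anchor the curve at
# (0, 0) and (1, 1) and sort by false positive rate.
fpr = np.concatenate(([0.0], np.sort(1 - spec), [1.0]))
tpr = np.concatenate(([0.0], np.sort(sens), [1.0]))

# Trapezoidal rule over the ROC points.
auc = np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2)
print(f"empirical AUC: {auc:.3f}")
```

The model-based approaches in the article instead estimate a smooth summary ROC curve across studies, which is why their AUC values can differ from such a simple within-study interpolation.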
Background
The eResearch system "Prospective Monitoring and Management App (PIA)" allows researchers to implement questionnaires on any topic and to manage biosamples. Currently, we use PIA in the longitudinal study ZIFCO (Integrated DZIF Infection Cohort within the German National Cohort) in Hannover, Germany, to investigate, for example, associations between risk factors and infectious diseases. Our aim was to assess user acceptance and compliance in order to determine the suitability of PIA for epidemiological research on transient infectious diseases.
Methods
ZIFCO participants used PIA to answer weekly questionnaires on health status and to report spontaneous onset of symptoms. In case of symptoms of a respiratory infection, the app requested participants to self-sample a nasal swab for viral analysis. To assess user acceptance, we implemented the System Usability Scale (SUS) and fitted a linear regression model to the resulting score. To investigate compliance with submitting the weekly health questionnaires, we used a logistic regression model with binomial response.
Results
We analyzed data from 313 participants (median age 52.5 years, 52.4% women). An average SUS score of 72.0 indicates good acceptance of PIA. Participants with a higher technology readiness score at the beginning of study participation also reported higher user acceptance. Overall compliance with submitting the weekly health questionnaires had a median of 55.7%. Being female, being of younger age and being enrolled for a longer time decreased the odds of responding. However, women over 60 had a higher chance of responding than women under 60, while men under 40 had the highest chance of responding. Compliance with nasal swab self-sampling was 77.2%.
Discussion
Our findings show that PIA is suitable for use in epidemiological studies with regular short questionnaires. Still, we will focus on user engagement and gamification in the further development of PIA to help incentivize regular and long-term participation.
A semiparametric approach for meta-analysis of diagnostic accuracy studies with multiple cut-offs
(2022)
The accuracy of a diagnostic test is often expressed using a pair of measures: sensitivity (the proportion of test positives among all individuals with the target condition) and specificity (the proportion of test negatives among all individuals without the target condition). If the outcome of a diagnostic test is binary, results from different studies can easily be summarized in a meta-analysis. However, if the diagnostic test is based on a discrete or continuous measure (e.g., a biomarker), results are published for several cut-offs within one study as well as across different studies. Instead of taking the information from all cut-offs into account in the meta-analysis, a single cut-off per study is often selected arbitrarily for the analysis, even though statistical methods for incorporating several cut-offs exist. For these methods, distributional assumptions have to be met and/or the models may not converge for specific data structures. We propose a semiparametric approach to overcome both problems. Our simulation study shows that the approach underestimates diagnostic accuracy, although this underestimation in sensitivity and specificity is relatively small. The comparative approach of Steinhauser et al. is better in terms of coverage probability, but may lead to convergence problems. In addition to the simulation results, we illustrate the application of the semiparametric approach using a published meta-analysis of a diagnostic test differentiating between bacterial and viral meningitis in children.
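The two accuracy measures defined above follow directly from a 2x2 table of test result versus target condition. A minimal sketch with invented counts:

```python
# Sensitivity and specificity from a 2x2 table; counts are invented.
tp, fn = 80, 20   # individuals WITH the condition: test positive / negative
tn, fp = 90, 10   # individuals WITHOUT the condition: test negative / positive

sensitivity = tp / (tp + fn)  # proportion of test positives among diseased
specificity = tn / (tn + fp)  # proportion of test negatives among non-diseased
print(sensitivity, specificity)  # 0.8 0.9
```

Lowering the cut-off of a continuous biomarker trades specificity for sensitivity, which is exactly why multi-cut-off studies carry more information than a single arbitrarily chosen threshold.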