The German Corona Consensus (GECCO) established a uniform dataset in FHIR format for exchanging and sharing interoperable, COVID-19-specific patient data between the health information systems (HIS) of universities. To share COVID-19 information with other sites that use openEHR, the data must be converted into FHIR format. In this paper, we introduce our solution, a web tool named “openEHR-to-FHIR” that converts compositions from an openEHR repository and stores them in their respective GECCO FHIR profiles. The tool also provides a REST web service for ad hoc conversion of openEHR compositions to FHIR profiles.
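The core of such a converter is a mapping from openEHR composition fields to FHIR resource elements. The sketch below illustrates the idea with a deliberately simplified, flattened composition; the field names are invented stand-ins, not the actual GECCO profile or openEHR archetype structure:

```python
# Illustrative sketch only: mapping a (simplified, flattened) openEHR
# composition to a (simplified) FHIR Observation resource. Real GECCO
# profiles and openEHR archetypes are far more detailed.

def openehr_to_fhir_observation(composition: dict) -> dict:
    """Map a flattened openEHR composition to a FHIR-style Observation."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"text": composition["archetype_name"]},
        "subject": {"reference": f"Patient/{composition['subject_id']}"},
        "valueQuantity": {
            "value": composition["magnitude"],
            "unit": composition["units"],
        },
    }

composition = {
    "archetype_name": "Body temperature",
    "subject_id": "123",
    "magnitude": 38.2,
    "units": "Cel",
}
print(openehr_to_fhir_observation(composition)["valueQuantity"]["value"])
```

A REST wrapper around such a function is what allows the ad hoc conversion mentioned above.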
Techno-economic analyses that allocate costs to the energy flows of energy systems are helpful for understanding how costs form within processes and for increasing cost efficiency. For the economic evaluation, the usefulness, or quality, of the energy is of great importance. Exergy-based methods take this into account by allocating costs to exergy instead of energy. As exergy represents the ability to perform work, it is often called the useful part of energy. In contrast, the anergy, i.e. the part of energy that cannot perform work, is often assumed not to be useful.
However, heat flows as used, for example, in domestic heating are always a mixture of a relatively small portion of exergy and a large portion of anergy. Although of lower quality, the anergy is obviously useful for these applications. The question is whether it makes sense to differentiate between exergy and anergy and to take both properties into account in the economic evaluation.
To answer this question, a new methodical concept based on the definition of an anergy-exergy cost ratio is compared with the commonly applied approaches of considering either energy or exergy as the basis for the economic evaluation. These three approaches to the economic analysis of thermal energy systems are applied to an exemplary heating system with thermal storage units. It is shown that the results of the techno-economic analysis can be improved by giving anergy an economic value, and that the proposed anergy-exergy cost ratio allows a flexible adaptation of the evaluation depending on the economic constraints of a system.
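The difference between the three allocation bases can be sketched numerically. The temperatures, cost rate, and the value of the anergy-exergy cost ratio below are invented for illustration and are not taken from the paper:

```python
# Hedged sketch of the three allocation bases. The heat flow is split into
# exergy and anergy via the Carnot factor; the anergy-exergy cost ratio r
# (an assumed parameter, not the paper's exact definition) weights the
# specific cost of anergy relative to that of exergy.

T0 = 288.15   # ambient (dead-state) temperature in K
T = 343.15    # temperature of the delivered heat in K
Q = 10.0      # heat flow in kW
cost = 1.0    # total cost rate to be allocated, e.g. EUR/h

carnot = 1.0 - T0 / T          # exergetic quality of the heat
exergy = carnot * Q            # kW of exergy
anergy = Q - exergy            # kW of anergy

# (1) energy-based: every kW of heat carries the same specific cost
c_energy = cost / Q

# (2) exergy-based: all cost is allocated to the exergy, anergy is "free"
c_exergy = cost / exergy

# (3) mixed: anergy carries a fraction r of the specific exergy cost
r = 0.3
c_ex_mixed = cost / (exergy + r * anergy)
c_an_mixed = r * c_ex_mixed

print(round(c_energy, 4), round(c_exergy, 4), round(c_ex_mixed, 4))
```

By construction, the mixed allocation still recovers the full cost rate, so the ratio r only shifts how the cost is distributed between the two energy fractions.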
This research focuses on the fundamental ideas and underlying principles of E-Learning technology, as well as theoretical considerations for an optimal learning environment. This theoretical exploration was then used as a basis for the design and construction of a new, interactive Web-Based ESH-Training. The quality and effectiveness of this new course was then compared with that of the existing analog PDF-Training via a test with a diverse sample of employee learners. Learners were later surveyed to ascertain their views on both trainings in terms of the quality of the content, facilitator, resources, and length. Results clearly showed that regardless of demographic factors, most employee learners preferred the new, Web-Based ESH-Training to the analog PDF-Training.
Introduction
Atopic dermatitis (AD) is a common inflammatory skin disease. Many patients initiate systemic therapy if the disease is not adequately controlled by topical treatment alone. Currently, there is little real-world evidence on the AD-related medical care situation in Germany. This study analyzed patient characteristics, treatment patterns, healthcare resource utilization, and the costs associated with systemically treated AD for the German healthcare system.
Methods
In this descriptive, retrospective cohort study, aggregated anonymized German health claims data from the InGef research database were used. Within a representative sample of four million insured individuals, patients with AD and systemic drug therapy initiation (SDTI) in the index year 2017 were identified and included in the study cohort. Systemic drug therapy included dupilumab, systemic corticosteroids (SCS), and systemic immunosuppressants (SIS). Patients were observed for one year starting from the date of SDTI in 2017.
Results
9975 patients were included (57.8% female, mean age 39.6 years [SD 25.5]). In the one-year observation period, the most common systemic drug therapy was SCS (> 99.0%). Administrations of dupilumab (0.3%) or dispensations of SIS were rare (cyclosporine: 0.5%, azathioprine: 0.6%, methotrexate: 0.1%). Median treatment duration of SCS, cyclosporine and azathioprine was 27 days, 102 days, and 109 days, respectively. 2.8% of the patients received phototherapy; 41.6% used topical corticosteroids and/or topical calcineurin inhibitor. Average annual costs for medications amounted to € 1237 per patient. Outpatient services were used by 99.6% with associated mean annual costs of € 943; 25.4% had at least one hospitalization (mean annual costs: € 5836). 5.3% of adult patients received sickness benefits with associated mean annual costs of € 5026.
Conclusions
Despite an unfavorable risk–benefit profile, this study demonstrated that treatment with SCS is common, whereas other systemic drug therapy options were rarely used. Furthermore, the results suggest a substantial economic burden for patients with AD and SDTI.
Background and Objectives:
Drawing causal conclusions from real-world data (RWD) poses methodological challenges and risk of bias. We aimed to systematically assess the type and impact of potential biases that may occur when analyzing RWD using the case of progressive ovarian cancer.
Methods:
We retrospectively compared overall survival with and without second-line chemotherapy (LOT2) using electronic medical records. Potential biases were determined using directed acyclic graphs. We followed a stepwise analytic approach ranging from crude analysis and multivariable-adjusted Cox model up to a full causal analysis using a marginal structural Cox model with replicates emulating a reference randomized controlled trial (RCT). To assess biases, we compared effect estimates (hazard ratios [HRs]) of each approach to the HR of the reference trial.
Results:
The reference trial showed an HR for second-line vs. delayed therapy of 1.01 (95% confidence interval [95% CI]: 0.82–1.25). The corresponding HRs from the RWD analysis ranged from 0.51 for simple baseline adjustments to 1.41 (95% CI: 1.22–1.64) when accounting for immortal time bias with time-varying covariates. Causal trial emulation yielded an HR of 1.12 (95% CI: 0.96–1.28).
Conclusion:
Our study, using ovarian cancer as an example, shows the importance of a thorough causal design and analysis if RWD are expected to emulate clinical trial results.
Aim
Musculoskeletal disorders are a major public health problem in most developed countries. As a main cause of chronic pain, they have resulted in an increasing prescription of opioids worldwide. With regard to the situation in Germany, this study aimed at estimating the prevalence of musculoskeletal diseases such as chronic low back pain (CLBP) and hip/knee osteoarthritis (OA) and at depicting the applied treatment patterns.
Subject and methods
German claims data from the InGef Research Database were analyzed over a 6-year period (2011–2016). The dataset contains over 4 million people, enrolled in German statutory health insurances. Inpatient and outpatient diagnoses were considered for case identification of hip/knee OA and CLBP. The World Health Organization (WHO) analgesic ladder was applied to categorize patients according to their pain management interventions. Information on demographics, comorbidities, and adjuvant medication was collected.
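The categorization step described above amounts to assigning each patient the highest WHO analgesic-ladder class among their dispensed analgesics. The sketch below illustrates this; the drug-to-class mapping is a small invented excerpt, not the study's full coding scheme:

```python
# Sketch of WHO analgesic-ladder categorization: each patient receives the
# highest WHO class among their dispensed analgesics. The mapping below is
# an illustrative excerpt only.

WHO_CLASS = {
    "ibuprofen": 1, "metamizole": 1,      # WHO I: non-opioid analgesics
    "tramadol": 2, "tilidine": 2,         # WHO II: weak opioids
    "morphine": 3, "oxycodone": 3,        # WHO III: strong opioids
}

def highest_who_class(dispensed):
    classes = [WHO_CLASS[d] for d in dispensed if d in WHO_CLASS]
    return max(classes) if classes else 0  # 0: no analgesic on record

print(highest_who_class(["ibuprofen", "tramadol"]))  # highest class is WHO II
```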
Results
In 2016, n = 2,693,481 individuals (50.5% female, 49.5% male) were assigned to the study population; 62.5% of them were aged 18–60 years. In 2016, n = 146,443 patients (5.4%) with CLBP and n = 307,256 patients (11.4%) with hip/knee OA were identified. Of those with pre-specified pain management interventions (CLBP: 66.3%; hip/knee OA: 65.1%), most patients received WHO I class drugs (CLBP: 73.6%; hip/knee OA: 68.7%) as the highest level.
Conclusion
This study provides indications that CLBP and hip/knee OA are common chronic pain conditions in Germany, which are often subjected to pharmacological pain management. Compared to non-opioid analgesic prescriptions of the WHO I class, the dispensation of WHO class II and III opioids was markedly lower, though present to a considerable extent.
Even for the more traditional insurance industry, the Microservices Architecture (MSA) style plays an increasingly important role in provisioning insurance services. However, insurance businesses must operate legacy applications, enterprise software, and service-based applications in parallel for an extended transition period. The ultimate goal of our ongoing research is to design, in cooperation with our industry partners from the insurance domain, a microservice reference architecture that provides an approach for integrating applications from different architecture paradigms. In Germany, individual insurance services are classified as part of the critical infrastructure. Therefore, German insurance companies must comply with the requirements of the Federal Office for Information Security, which the Federal Supervisory Authority enforces. Additionally, insurance companies must comply with relevant laws, regulations, and standards as part of the business’s compliance requirements. Since Germany is regarded as relatively ‘tough’ with respect to privacy and security demands, fulfilling those demands may well be sufficient (if not ‘over-achieving’) for insurers in other countries as well. The question thus arises of how insurance services can be secured in an application landscape shaped by the MSA style so as to comply with the architectural and security requirements depicted above. This article highlights the specific regulations, laws, and standards with which the insurance industry must comply. We present initial architectural patterns addressing authentication and authorization in an MSA, tailored to the requirements of our insurance industry partners.
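Authentication between microservices typically relies on signed tokens that every service boundary can verify independently. The following is a deliberately simplified sketch of that idea; the token format and names are invented, and a production MSA would use established standards such as OAuth 2.0 and JWT via a vetted library:

```python
# Simplified sketch of token-based authentication between microservices:
# an HMAC-signed token is issued centrally and verified at each service
# boundary. Not a real JWT; for illustration only.

import hmac, hashlib, base64, json

SECRET = b"shared-gateway-secret"  # in practice: per-service keys from a vault

def issue_token(claims):
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token):
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # reject the request at the service boundary
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_token({"sub": "policy-service", "role": "read:contracts"})
print(verify_token(token)["role"])
```

Because verification needs only the shared key, each service can enforce authorization locally without calling back to a central session store.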
FID Civil Engineering, Architecture and Urbanism digital - A platform for science (BAUdigital)
(2022)
University Library Braunschweig (UB Braunschweig), University and State Library Darmstadt (ULB Darmstadt), TIB – Leibniz Information Centre for Technology and Natural Sciences and the Fraunhofer Information Centre for Planning and Building (Fraunhofer IRB) are jointly establishing a specialised information service (FID, "Fachinformationsdienst") for the disciplines of civil engineering, architecture and urbanism. The FID BAUdigital, which is funded by the German Research Foundation (DFG, "Deutsche Forschungsgemeinschaft"), will provide researchers working on digital design, planning and production methods in construction engineering with a joint information, networking and data exchange platform and support them with innovative services for documentation, archiving and publication in their data-based research.
To avoid the shortcomings of traditional monolithic applications, the Microservices Architecture (MSA) style plays an increasingly important role in providing business services. This is true even for the more conventional insurance industry with its highly heterogeneous application landscape and sophisticated cross-domain business processes. Therefore, the question arises of how workflows can be implemented so as to retain the required flexibility and agility on the one hand and, on the other, to exploit the potential of the MSA style. In this article, we present two different approaches: orchestration and choreography. Both concepts are discussed using an application scenario from the insurance domain. We introduce a pattern that outlines the mapping of a workflow to a choreography.
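The contrast between the two approaches can be sketched in a few lines. The service names and events below are invented for illustration:

```python
# Minimal sketch contrasting the two workflow styles for an insurance
# claims process. Names and events are invented.

log = []
subscribers = {}

def subscribe(event, handler):
    subscribers.setdefault(event, []).append(handler)

def publish(event, data):
    log.append(event)
    for handler in subscribers.get(event, []):
        handler(data)

# Choreography: each service reacts to events and emits follow-up events;
# the overall workflow emerges from these local rules.
subscribe("ClaimSubmitted", lambda d: publish("ClaimValidated", d))
subscribe("ClaimValidated", lambda d: publish("PaymentScheduled", d))
publish("ClaimSubmitted", {"claim_id": 42})
print(log)  # ['ClaimSubmitted', 'ClaimValidated', 'PaymentScheduled']

# Orchestration instead encodes the same flow in one central controller:
def validate(claim):
    log.append("validated")

def schedule_payment(claim):
    log.append("payment-scheduled")

def orchestrate(claim):
    validate(claim)          # explicit call order, visible in one place
    schedule_payment(claim)
```

Choreography decouples the services but scatters the workflow logic; orchestration keeps the flow explicit at the price of a central coordinator.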
With the use of an energy management system according to ISO 50001, an industrial company can achieve a step-by-step increase in energy efficiency. Realizing energy monitoring and load management functions requires programs on edge devices or PLCs to acquire the data, adapt the data types, or scale the values of the energy information. In addition, the energy information must be mapped to communication interfaces (e.g., based on OPC UA) in order to convey it to the energy management application. The development of these energy management programs involves a high engineering effort, because the field devices of the heterogeneous field level do not provide the energy information with standardized semantics. To mitigate this engineering effort, a universal energy data information model (UEIM) is developed and presented in this paper.
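The underlying normalization idea can be sketched as follows: raw values from heterogeneous field devices are scaled and renamed into one standardized representation before being exposed, e.g., over OPC UA. The tag names and scale factors below are invented examples, not the UEIM specification:

```python
# Hedged sketch of normalizing heterogeneous field-device data into a
# uniform energy information representation. Tags and scale factors are
# illustrative only.

from dataclasses import dataclass

@dataclass
class EnergyDatum:
    name: str        # standardized semantic name
    value: float     # value in the standardized unit
    unit: str

# per-device mapping: raw tag -> (standard name, unit, scale factor)
DEVICE_MAP = {
    "PWR_W":   ("ActivePower", "kW", 0.001),
    "E_TOT10": ("ActiveEnergy", "kWh", 10.0),
}

def normalize(raw):
    out = []
    for tag, raw_value in raw.items():
        name, unit, scale = DEVICE_MAP[tag]
        out.append(EnergyDatum(name, raw_value * scale, unit))
    return out

data = normalize({"PWR_W": 4200, "E_TOT10": 15.5})
print(data[0])
```

Once every device's values arrive in this shape, the mapping onto an OPC UA information model becomes a one-time modeling task instead of per-device programming.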
Wikidata and Wikibase as complementary research data management services for cultural heritage data
(2022)
The NFDI (German National Research Data Infrastructure) consortia are associations of various institutions within a specific research field, which work together to develop common data infrastructures, guidelines, best practices and tools that conform to the principles of FAIR data. Within the NFDI, a common question is: What is the potential of Wikidata to be used as an application for science and research? In this paper, we address this question by tracing current research use cases and applications for Wikidata, its relation to standalone Wikibase instances, and how the two can function as complementary services to meet a range of research needs. This paper builds on lessons learned through the development of open data projects and software services within the Open Science Lab at TIB, Hannover, in the context of NFDI4Culture – the consortium including participants across the broad spectrum of the digital libraries, archives, and museums field, and the digital humanities.
A new FOSS (free and open source software) toolchain and associated workflow is being developed in the context of NFDI4Culture, a German consortium of research and cultural heritage institutions working towards a shared infrastructure for research data that meets the needs of 21st-century data creators, maintainers and end users across the broad spectrum of the digital libraries and archives field, and the digital humanities. This short paper and demo present how the integrated toolchain connects: 1) OpenRefine - for data reconciliation and batch upload; 2) Wikibase - for linked open data (LOD) storage; and 3) Kompakkt - for rendering and annotating 3D models. The presentation is aimed at librarians, digital curators and data managers interested in learning how to manage research datasets containing 3D media, and how to make them available within an open data environment with 3D-rendering and collaborative annotation features.
Background: To improve interprofessional collaboration between registered nurses (RNs) and general practitioners (GPs) for nursing home residents (NHRs), the interprof ACT intervention package was developed. This complex intervention includes six components (e.g., shared goal setting, standardized procedures for GPs’ nursing home visits) that can be locally adapted. The cluster‑randomized interprof ACT trial evaluates the effects of this intervention on the cumulative incidence of hospital admissions (primary outcome) and secondary outcomes (e.g., length of hospital stays, utilization of emergency care services, and quality of life) within 12 months. It also includes a process evaluation, which is the subject of this protocol. The objectives of this evaluation are to assess the implementation of the interprof ACT intervention package and its downstream effects on nurse–physician collaboration, as well as the preconditions and prospects for subsequent implementation into routine care.
Methods: This study uses a mixed methods triangulation design involving all 34 participating nursing homes (clusters). The quantitative part comprises paper‑based surveys among RNs, GPs, NHRs, and nursing home directors at baseline and 12 months. In the intervention group (17 clusters), data on the implementation of preplanned implementation strategies (training and supervision of nominated IPAVs, interprofessional kick‑off meetings) and local implementation activities will be recorded. Major outcome domains are the dose, reach and fidelity of the implementation of the intervention package, changes in interprofessional collaboration, and contextual factors. The qualitative part will be conducted in a subsample of 8 nursing homes (4 per study group) and includes repeated non‑participating observations and semistructured interviews on the interaction between involved health professionals and their work processes. Quantitative and qualitative data will be descriptively analyzed and then triangulated by means of joint displays and mixed methods informed regression models.
Discussion: By integrating a variety of qualitative and quantitative data sources, this process evaluation will allow comprehensive assessment of the implementation of the interprof ACT intervention package, the changes induced in interprofessional collaboration, and the influence of contextual factors. These data will reveal expected and unexpected changes in the procedures of interprofessional care delivery and thus facilitate accurate conclusions for the further design of routine care services for NHRs.
We present a methodology based on mixed-integer nonlinear model predictive control for a real-time building energy management system in application to a single-family house with a combined heat and power (CHP) unit. The developed strategy successfully deals with the switching behavior of the system components as well as minimum admissible operating time constraints by use of a special switch-cost-aware rounding procedure. The quality of the presented solution is evaluated in comparison to the globally optimal dynamic programming method and conventional rule-based control strategy. Based on a real-world scenario, we show that our approach is more than real-time capable while maintaining high correspondence with the globally optimal solution. We achieve an average optimality gap of 2.5% compared to 20% for a conventional control approach, and are faster and more scalable than a dynamic programming approach.
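The role of the rounding step can be illustrated with a toy example. The greedy post-processing below is an invented illustration of handling minimum operating times when binarizing a relaxed control, not the paper's switch-cost-aware rounding algorithm:

```python
# Illustrative sketch (not the paper's algorithm): rounding a relaxed
# on/off trajectory for a CHP unit while respecting a minimum admissible
# operating time. Threshold rounding is post-processed so that every "on"
# block lasts at least `min_on` steps.

def round_with_min_on(relaxed, min_on):
    binary = [1 if x >= 0.5 else 0 for x in relaxed]
    i = 0
    while i < len(binary):
        if binary[i] == 1:
            j = i
            while j < len(binary) and binary[j] == 1:
                j += 1                         # find end of the "on" block
            if j - i < min_on:                 # block too short:
                for k in range(i, min(i + min_on, len(binary))):
                    binary[k] = 1              # extend it to min_on steps
                j = min(i + min_on, len(binary))
            i = j
        else:
            i += 1
    return binary

relaxed = [0.1, 0.6, 0.7, 0.2, 0.1, 0.9, 0.1, 0.1]
print(round_with_min_on(relaxed, 3))  # [0, 1, 1, 1, 0, 1, 1, 1]
```

Real switch-cost-aware rounding additionally penalizes the number of switches so the binarized trajectory stays close to the relaxed optimum.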
We present a novel long short-term memory (LSTM) approach for time-series prediction of the sand demand that arises from preparing the sand moulds for the iron casting process of a foundry. With our approach, we contribute to qualifying LSTM networks, combined with feedback-corrected optimal scheduling, for industrial processes.
The sand is produced in an energy intensive mixing process which is controlled by optimal scheduling. The optimal scheduling is solved for a fixed prediction horizon. One major influencing factor is the sand demand, which is highly disturbed, for example due to production interruptions. The causes of production interruptions are in general physically unknown. We assume that information about the future behavior of the sand demand is included in current and past process data. Therefore, we choose LSTM networks for predicting the time-series of the sand demand.
The sand demand prediction is performed by our multi-model approach, which outperforms the currently used naive estimation even when predicting far into the future. Our LSTM-based prediction approach can forecast the sand demand with a conformity of up to 38% and a mean value accuracy of approximately 99%. Simulating the optimal scheduling with sand demand prediction leads to an improvement in energy savings of approximately 1.1% compared to the naive estimation. The application of our novel approach at the real production plant of a foundry confirms the simulation results and verifies the capability of our approach.
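The gating mechanism that lets an LSTM carry information across such disturbed time series can be sketched in a few lines. The single-unit cell below uses fixed toy weights; a real model learns them from historical process data:

```python
# A single-unit LSTM cell in plain Python, illustrating the gating
# mechanism behind an LSTM-based demand predictor. Weights are fixed toy
# values, not trained parameters.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate state
    c = f * c + i * g                                   # new cell state
    h = o * math.tanh(c)                                # new hidden state
    return h, c

keys = ("wi","ui","bi","wf","uf","bf","wo","uo","bo","wg","ug","bg")
weights = {k: 0.5 for k in keys}

h = c = 0.0
for x in [0.2, 0.4, 0.8]:        # a short demand time series
    h, c = lstm_step(x, h, c, weights)
print(round(h, 3))               # last hidden state, fed to an output layer
```

The cell state c is what allows the network to remember, e.g., the onset of a production interruption over many time steps.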
The optimization of lubricated sealing systems with respect to the stick-slip effect requires a friction model that describes the complex friction behavior in the lubricated contact area. This paper presents an efficient dynamic friction model based on the Stribeck curve, which allows the influencing parameters to be investigated through finite element (FE) simulations. The simulation of a tribometer test using this friction model proves that the model correlates well with the tribometer test results. It is shown that the system stiffness has a significant influence on the stick-slip tendency of the system.
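A common parameterization of the Stribeck curve, which may differ from the exact form used in the paper, makes the velocity dependence concrete: boundary friction decays with sliding speed toward Coulomb friction, plus a viscous term. The negative slope in the transition region is what drives stick-slip. Parameter values below are illustrative:

```python
# A common Stribeck-curve parameterization (illustrative parameters; not
# necessarily the paper's model): static friction decays exponentially
# toward Coulomb friction, plus a viscous contribution.

import math

def stribeck(v, mu_s=0.30, mu_c=0.10, v_s=0.01, k=0.5):
    """Friction coefficient as a function of sliding velocity v (m/s)."""
    return mu_c + (mu_s - mu_c) * math.exp(-(v / v_s) ** 2) + k * v

# friction drops from the static level, passes a minimum, then rises
# viscously; the falling branch is the stick-slip-critical region
for v in (0.0, 0.01, 0.05, 0.2):
    print(round(stribeck(v), 4))
```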
Since textual user-generated content from social media platforms contains valuable information for decision support and especially for corporate credit risk analysis, automated approaches to text classification, such as the application of sentiment dictionaries and machine learning algorithms, have received great attention in recent research on user-generated content. While machine learning algorithms require individual training data sets for varying sources, sentiment dictionaries can be applied to texts immediately, whereby domain-specific dictionaries attain better results than domain-independent word lists. By means of a literature review, we evaluate how sentiment dictionaries can be constructed for specific domains and languages. We then construct nine versions of German sentiment dictionaries, relying on a process model that we developed based on the literature review. We apply the dictionaries to a manually classified German-language data set from Twitter in which hints of the financial (in)stability of companies have been proven. Based on their classification accuracy, we rank the dictionaries and verify the ranking using McNemar's test for significance. Our results indicate that the significantly best dictionary is based on the German-language dictionary SentiWortschatz and an extension approach using the lexical-semantic database GermaNet. It achieves a classification accuracy of 59.19% in the underlying three-class scenario, in which the tweets are labelled as negative, neutral or positive. A random classification would attain an accuracy of 33.3% in the same scenario; hence, automated coding by use of the sentiment dictionaries can reduce manual effort. Our process model can be adopted by other researchers when constructing sentiment dictionaries for other domains and languages.
Furthermore, our established dictionaries can be used by practitioners, especially in the domain of corporate credit risk analysis, for automated text classification, which to date has largely been conducted manually.
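The core of dictionary-based three-class coding is a simple word-count score. The tiny word lists below are invented stand-ins for a dictionary such as SentiWortschatz; a real application would also handle negation, inflection, and GermaNet-based extensions:

```python
# Minimal sketch of dictionary-based three-class sentiment coding. The
# word lists are invented stand-ins for a real German sentiment dictionary.

POSITIVE = {"gewinn", "stabil", "wachstum"}     # "profit", "stable", "growth"
NEGATIVE = {"insolvenz", "verlust", "risiko"}   # "insolvency", "loss", "risk"

def classify(tweet):
    tokens = tweet.lower().split()
    score = (sum(t in POSITIVE for t in tokens)
             - sum(t in NEGATIVE for t in tokens))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify("Insolvenz und Verlust bei hohem Risiko"))  # -> negative
```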
A semiparametric approach for meta-analysis of diagnostic accuracy studies with multiple cut-offs
(2022)
The accuracy of a diagnostic test is often expressed using a pair of measures: sensitivity (proportion of test positives among all individuals with the target condition) and specificity (proportion of test negatives among all individuals without the target condition). If the outcome of a diagnostic test is binary, results from different studies can easily be summarized in a meta-analysis. However, if the diagnostic test is based on a discrete or continuous measure (e.g., a biomarker), several cut-offs are published within one study as well as among different studies. Instead of taking the information from all cut-offs into account in the meta-analysis, a single cut-off per study is often selected arbitrarily for the analysis, even though statistical methods exist for incorporating several cut-offs. For these methods, distributional assumptions have to be met and/or the models may not converge for specific data structures. We propose a semiparametric approach to overcome both problems. Our simulation study shows that the diagnostic accuracy is underestimated, although this underestimation in sensitivity and specificity is relatively small. The comparative approach of Steinhauser et al. is better in terms of coverage probability but may lead to convergence problems. In addition to the simulation results, we illustrate the application of the semiparametric approach using a published meta-analysis of a diagnostic test differentiating between bacterial and viral meningitis in children.
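The cut-off dependence described above can be made concrete with a toy computation: for a continuous biomarker, each cut-off yields one (sensitivity, specificity) pair, which is why several pairs per study can enter a meta-analysis. The biomarker values below are invented:

```python
# Toy illustration: each cut-off on a continuous biomarker produces one
# (sensitivity, specificity) pair. Data are invented values.

def sens_spec(diseased, healthy, cutoff):
    """Test positive := biomarker value >= cutoff."""
    tp = sum(x >= cutoff for x in diseased)   # true positives
    tn = sum(x < cutoff for x in healthy)     # true negatives
    return tp / len(diseased), tn / len(healthy)

diseased = [3.1, 4.5, 5.2, 6.0, 2.9]
healthy  = [1.0, 2.2, 3.0, 1.8, 2.5]

for cutoff in (2.0, 3.0, 4.0):
    se, sp = sens_spec(diseased, healthy, cutoff)
    print(cutoff, round(se, 2), round(sp, 2))
```

Raising the cut-off trades sensitivity for specificity, so discarding all but one published cut-off throws away exactly this trade-off information.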
Objective
Cyberknife robotic radiosurgery (RRS) provides single-session high-dose radiotherapy of brain tumors with a steep dose gradient and precise real-time image-guided motion correction. Although RRS appears to cause more radiation necrosis (RN), the radiometabolic changes after RRS have not been fully clarified. 18F-FET-PET/CT is used to differentiate recurrent tumor (RT) from RN after radiosurgery when MRI findings are indecisive. We explored the usefulness of dynamic parameters derived from 18F-FET PET in differentiating RT from RN after Cyberknife treatment in a single-center study population.
Methods
We retrospectively identified brain tumor patients with static and dynamic 18F-FET-PET/CT for suspected RN after Cyberknife. Static (tumor-to-background ratio) and dynamic PET parameters (time-activity curve, time-to-peak) were quantified. Analyses were performed for all lesions taken together (TOTAL) and for brain metastases only (METS). Diagnostic accuracy of PET parameters (using mean tumor-to-background ratio >1.95 and time-to-peak of 20 min for RT as cut-offs) and their respective improvement of diagnostic probability were analyzed.
Results
Fourteen patients with 28 brain tumors were included in the quantitative analysis. Time-activity curves alone provided the highest sensitivities (TOTAL: 95%, METS: 100%) at the cost of specificity (TOTAL: 50%, METS: 57%). The combination of mean tumor-to-background ratio and time-activity curve had the highest specificities (TOTAL: 63%, METS: 71%) and led to the highest increase in diagnostic probability of up to 16 percentage points, versus 5 percentage points when only static parameters were used.
Conclusions
This preliminary study shows that combined dynamic and static 18F-FET PET/CT parameters can be used in differentiating RT from RN after RRS.
Aim:
The most suitable method for assessment of response to peptide receptor radionuclide therapy (PRRT) of neuroendocrine tumors (NET) is still under debate. In this study we aimed to compare size (RECIST 1.1), density (Choi), Standardized Uptake Value (SUV) and a newly defined ZP combined parameter derived from Somatostatin Receptor (SSR) PET/CT for prediction of both response to PRRT and overall survival (OS).
Material and Methods:
Thirty-four NET patients with progressive disease (F:M 23:11; mean age 61.2 y; SD ± 12) treated with PRRT using either Lu-177 DOTATOC or Lu-177 DOTATATE and imaged with Ga-68 SSR PET/CT approximately 10–12 weeks before and after each treatment cycle were retrospectively analyzed. The median duration of follow-up after the first cycle was 63.9 months (range 6.2–86.2). A total of 77 lesions (2–8 per patient) were analyzed. Response assessment was performed according to RECIST 1.1, Choi, and modified EORTC (MORE) criteria. In addition, a new parameter named ZP, the product of the Hounsfield unit (HU) and SUVmean of a tumor lesion, was tested. Further, SUV values (max and mean) of the tumor were normalized to the SUV of normal liver parenchyma. Tumor response was defined as CR, PR, or SD. The gold standard for comparing baseline parameters for predicting the response of individual target lesions to PRRT was the change in lesion size according to RECIST 1.1. For the prediction of overall survival, the response after the first and second PRRT cycles was tested.
Results:
Based on RECIST 1.1, Choi, MORE, and ZP, 85.3%, 64.7%, 61.8%, and 70.6% of patients achieved a response, whereas 14.7%, 35.3%, 38.2%, and 29.4% demonstrated progressive disease (PD), respectively. Baseline ZP and ZPnormalized were found to be the only parameters predictive of lesion progression after three PRRT cycles (AUC ZP 0.753; 95% CI 0.6–0.9; p = 0.037; AUC ZPnormalized 0.766; 95% CI 0.6–0.9; p = 0.029). Based on a cut-off value of 1201, ZP achieved a sensitivity of 86% and a specificity of 67%, while ZPnormalized reached a sensitivity of 86% and a specificity of 76% at a cut-off value of 198. Median OS in the total cohort was not reached. In univariate analysis among all parameters, only patients with progressive disease according to MORE after the second cycle of PRRT were found to have significantly shorter overall survival (median OS not reached in objective responders, 29.2 months in PD; p = 0.015). Patients progressive after two cycles of PRRT according to ZP had shorter OS compared with responders (median OS not reached for responders, 47.2 months for PD; p = 0.066).
Conclusions:
In this explorative study, we showed that Choi, RECIST 1.1, and SUVmax-based response evaluation varied significantly from each other. Only patients showing progressive disease after two PRRT cycles according to MORE criteria had a worse prognosis, while baseline ZP and ZPnormalized performed best in predicting lesion progression after three cycles of PRRT.
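The ZP parameter as defined above is simply the product of a lesion's Hounsfield unit and its SUVmean, optionally normalized to the SUV of normal liver parenchyma. The lesion values below are invented; the decision cut-offs are those reported in the text:

```python
# The ZP parameter from the abstract: HU * SUVmean, with an optional
# normalization to liver SUVmean. Lesion values are invented examples.

def zp(hu, suv_mean):
    return hu * suv_mean

def zp_normalized(hu, suv_mean, liver_suv_mean):
    return hu * (suv_mean / liver_suv_mean)

lesion_hu, lesion_suv, liver_suv = 65.0, 22.0, 6.0
print(zp(lesion_hu, lesion_suv))          # compare against the cut-off 1201
print(round(zp_normalized(lesion_hu, lesion_suv, liver_suv), 1))  # cut-off 198
```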