Training and evaluating deep learning models on road graphs for traffic prediction using SUMO
(2024)
The escalation of traffic volume in urban areas poses multifaceted challenges, including increased accident risks, congestion, and prolonged travel times. Traditional approaches such as expanding road infrastructure face limitations, including space constraints and the potential exacerbation of traffic issues.
Intelligent Transport Systems (ITS) present an alternative strategy to alleviate traffic problems by leveraging data-driven solutions. Central to ITS is traffic prediction, a process vital for applications like Traffic Management and Navigation Systems.
Traffic prediction has recently attracted a surge of interest, particularly in deep learning methods optimized for graph-based data processing, which are currently considered the most promising avenue.
These methods typically rely on real-life datasets containing traffic sensor data such as METR-LA and PeMS. However, the finite nature of real-life data prompts exploration into augmenting training and testing datasets with simulated traffic data.
This thesis explores the potential of utilizing traffic simulations, employing the microscopic traffic simulator SUMO, to train and test deep learning models for traffic prediction. A framework integrating PyTorch and SUMO is proposed for this purpose, aiming to elucidate the feasibility and effectiveness of using simulated traffic data for enhancing predictive models in traffic management systems.
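To make the proposed coupling concrete, here is a minimal sketch (not the thesis code) of how SUMO's TraCI interface can feed a PyTorch model: edge-level mean speeds are polled each simulation step and stacked into a tensor. The config path, step count, and windowing are placeholders.

```python
# Sketch: collect per-edge mean speeds from a running SUMO simulation via TraCI
# and stack them into a PyTorch tensor for a traffic-prediction model.
# Assumes SUMO is installed and "net.sumocfg" is a valid config (placeholder).
import traci
import torch

traci.start(["sumo", "-c", "net.sumocfg"])      # headless SUMO instance
edge_ids = traci.edge.getIDList()               # road-graph edges act as "sensors"

speeds = []
for _ in range(360):                            # placeholder horizon
    traci.simulationStep()
    speeds.append([traci.edge.getLastStepMeanSpeed(e) for e in edge_ids])
traci.close()

# Shape (time_steps, num_edges): sliding windows over this tensor can feed a
# graph-based deep learning model, analogous to METR-LA/PeMS-style datasets.
x = torch.tensor(speeds, dtype=torch.float32)
print(x.shape)
```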
Logical Observation Identifiers Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step toward reaching this goal. In this work we report our ongoing efforts in implementing LOINC in our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes, of which 209 are already available for routine laboratory data. In our experience, mapping local terms to LOINC is a largely manual and time-consuming process owing to language issues and the expert knowledge of local laboratory procedures it requires.
The German Corona Consensus (GECCO) established a uniform dataset in FHIR format for exchanging and sharing interoperable COVID-19 patient-specific data between the health information systems (HIS) of universities. To share COVID-19 information with other sites that use openEHR, the data have to be converted into FHIR format. In this paper, we introduce our solution, a web tool named “openEHR-to-FHIR” that converts compositions from an openEHR repository and stores them in their respective GECCO FHIR profiles. The tool provides a REST web service for ad hoc conversion of openEHR compositions to FHIR profiles.
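As an illustration only, an ad hoc REST conversion endpoint of this kind could be sketched with Flask as below; the mapping function and field names are hypothetical stand-ins for the GECCO profile logic, not the actual tool.

```python
# Hypothetical sketch of an ad hoc openEHR-to-FHIR conversion endpoint (Flask).
# The mapping and field names are illustrative placeholders, not the actual
# GECCO profile logic of the "openEHR-to-FHIR" tool described above.
from flask import Flask, request, jsonify

app = Flask(__name__)

def composition_to_fhir(composition: dict) -> dict:
    # Placeholder mapping: wrap one value from an openEHR composition
    # in a minimal FHIR Observation resource.
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"text": composition.get("name", "unknown")},
        "valueString": str(composition.get("value")),
    }

@app.route("/convert", methods=["POST"])
def convert():
    composition = request.get_json(force=True)
    return jsonify(composition_to_fhir(composition))

if __name__ == "__main__":
    app.run(port=8080)
```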
Renewable energy production is one of the fastest growing markets, and further strong growth can be anticipated due to the desire for increased sustainability in many parts of the world. With the rising adoption of renewable power production, such facilities are increasingly attractive targets for cyber attacks. At the same time, requirements for reliable production are rising. In this paper we propose a concept that improves the monitoring of renewable power plants by detecting anomalous behavior. The system not only detects an anomaly but also provides reasoning for it based on a specific mathematical model of the expected behavior, giving detailed information about the various influential factors causing the alert. The set of influential factors can be configured in the system before learning normal behavior. The concept is based on multidimensional analysis and has been implemented and successfully evaluated on actual data from different providers of wind power plants.
Purpose: Radiology reports mostly contain free-text, which makes it challenging to obtain structured data. Natural language processing (NLP) techniques transform free-text reports into machine-readable document vectors that are important for creating reliable, scalable methods for data analysis. The aim of this study is to classify unstructured radiograph reports according to fractures of the distal fibula and to find the best text mining method.
Materials & Methods: We established a novel German language report dataset: a designated search engine was used to identify radiographs of the ankle and the reports were manually labeled according to fractures of the distal fibula. This data was used to establish a machine learning pipeline, which implemented the text representation methods bag-of-words (BOW), term frequency-inverse document frequency (TF-IDF), principal component analysis (PCA), non-negative matrix factorization (NMF), latent Dirichlet allocation (LDA), and document embedding (doc2vec). The extracted document vectors were used to train neural networks (NN), support vector machines (SVM), and logistic regression (LR) to recognize distal fibula fractures. The results were compared via cross-tabulations of the accuracy (acc) and area under the curve (AUC).
Results: In total, 3268 radiograph reports were included, of which 1076 described a fracture of the distal fibula. Comparison of the text representation methods showed that BOW achieved the best results (AUC = 0.98; acc = 0.97), followed by TF-IDF (AUC = 0.97; acc = 0.96), NMF (AUC = 0.93; acc = 0.92), PCA (AUC = 0.92; acc = 0.9), LDA (AUC = 0.91; acc = 0.89) and doc2vec (AUC = 0.9; acc = 0.88). When comparing the different classifiers, NN (AUC = 0.91) proved to be superior to SVM (AUC = 0.87) and LR (AUC = 0.85).
Conclusion: An automated classification of unstructured reports of radiographs of the ankle can reliably detect findings of fractures of the distal fibula. A particularly suitable feature extraction method is the BOW model.
Key Points:
- The aim was to classify unstructured radiograph reports according to distal fibula fractures.
- Our automated classification system can reliably detect fractures of the distal fibula.
- A particularly suitable feature extraction method is the BOW model.
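To illustrate the best-performing combination reported above (BOW features with a classifier), a scikit-learn pipeline could look like the following sketch; the German report snippets and labels are toy placeholders, and the study's actual preprocessing is not reproduced.

```python
# Sketch: bag-of-words representation + classifier, mirroring the pipeline
# described above. Reports and labels are toy placeholders.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

reports = ["Fraktur der distalen Fibula", "Kein Nachweis einer Fraktur",
           "Distale Fibulafraktur, Weber B", "Unauffaelliger Befund"]
labels = [1, 0, 1, 0]  # 1 = distal fibula fracture described

clf = Pipeline([
    ("bow", CountVectorizer()),                 # BOW document vectors
    ("lr", LogisticRegression(max_iter=1000)),  # swap in SVM or a NN analogously
])

# With real data, AUC would be scored via scoring="roc_auc" as in the study.
print(cross_val_score(clf, reports, labels, cv=2).mean())
```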
The Wnt signaling pathway has been associated with many essential cell processes. This study aims to examine the effects of Wnt signaling on the proliferation of cultured HEK293T cells. Cells were incubated with Wnt3a, and the activation of the Wnt pathway was followed by analysis of the level of the β-catenin protein and of the expression levels of the target genes MYC and CCND1. The level of β-catenin protein increased up to fourfold. While the mRNA levels of c-Myc and cyclin D1 increased slightly, the protein levels increased up to a factor of 1.5. Remarkably, MTT and BrdU assays showed different results when measuring the proliferation rate of Wnt3a-stimulated HEK293T cells. In the BrdU assays, an increase in the proliferation rate could be detected, which correlated with the applied Wnt3a concentration. In contrast, this correlation could not be shown in the MTT assays. The MTT results, which are based on mitochondrial activity, were confirmed by analysis of the succinate dehydrogenase complex by immunofluorescence and by western blotting. Taken together, our study shows that Wnt3a activates proliferation of HEK293T cells. These effects can be detected by measuring DNA synthesis rather than by measuring changes in mitochondrial activity.
In industrial production facilities, technical Energy Management Systems are used to measure, monitor, and display energy-consumption-related information. The measurements take place at the field device level of the automation pyramid. The measured values are recorded and processed at the control level. The functionalities for monitoring and displaying energy data are located at the MES level of the automation pyramid. Thus, the energy data from all PLCs have to be aggregated, structured, and provided to higher-level systems. This contribution introduces a concept for an Energy Data Aggregation Layer, which provides the functionality described above. For the implementation of this Energy Data Aggregation Layer, a combination of AutomationML and OPC UA is used.
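As a generic illustration of the aggregation layer's OPC UA side (not the paper's actual implementation), the asyncua package can expose an aggregated energy value as a server variable for MES-level consumers; the endpoint, namespace URI, node names, and data source below are placeholders.

```python
# Illustrative sketch: an OPC UA server exposing an aggregated energy value to
# higher-level (MES) systems via the asyncua package. Endpoint, namespace and
# the aggregation source are placeholders, not the paper's implementation.
import asyncio
from asyncua import Server

async def main():
    server = Server()
    await server.init()
    server.set_endpoint("opc.tcp://0.0.0.0:4840/energy/")
    idx = await server.register_namespace("http://example.org/energy")

    energy = await server.nodes.objects.add_object(idx, "EnergyAggregation")
    total = await energy.add_variable(idx, "TotalActivePower_kW", 0.0)

    async with server:
        while True:
            plc_values = [12.4, 7.9, 3.3]        # placeholder for PLC readings
            await total.write_value(sum(plc_values))
            await asyncio.sleep(1)

asyncio.run(main())
```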
Integrated Risk and Opportunity Management (IROM) goes far beyond what is found in organizations today. However, it offers the best opportunity not only to keep pace with the VUCA world, but to actually profit from it. Accordingly, the introduction of opportunity-based thinking in addition to risk-based thinking is part of the design specification for ISO 9000 and ISO 9001. The prerequisite for the successful design of an IROM is the individual definition, control and integration of risk and opportunity management processes, considering eight success factors, the "8 C". Top management benefits directly from the result: better, coordinated decision memos enable faster and more appropriate decisions.
Harmonisation of German Health Care Data Using the OMOP Common Data Model – A Practice Report
(2023)
Data harmonization is an important step in large-scale data analysis and for generating evidence on real world data in healthcare. With the OMOP common data model, a relevant instrument for data harmonization is available that is being promoted by different networks and communities. At the Hannover Medical School (MHH) in Germany, an Enterprise Clinical Research Data Warehouse (ECRDW) is established and harmonization of that data source is the focus of this work. We present MHH’s first implementation of the OMOP common data model on top of the ECRDW data source and demonstrate the challenges concerning the mapping of German healthcare terminologies to a standardized format.
Autonomous and integrated passenger and freight transport (APFIT) is a promising approach to tackling both traffic- and last-mile-related issues such as environmental emissions, social and spatial conflicts, and operational inefficiencies. By conducting an agent-based simulation, we shed light on this widely unexplored research topic and provide first indications regarding influential target figures of such a system in the rural area of Sarstedt, Germany. Our results show that larger fleets entail inefficiencies due to suboptimal utilization of monetary and material resources and increase traffic volume, while higher numbers of unused vehicles may exacerbate spatial conflicts. Nevertheless, to fit the given demand within our study area, a comparatively large fleet of about 25 vehicles is necessary to provide reliable service, assuming maximum passenger waiting times of six minutes, at the expense of higher standby times, rebalancing effort, and higher costs for vehicle acquisition and maintenance.
The NOA project collects and stores images from open access publications and makes them findable and reusable. During the project a focus group workshop was held to determine whether the development is addressing researchers’ needs. This took place before the second half of the project so that the results could be considered for further development since addressing users’ needs is a big part of the project. The focus was to find out what content and functionality they expect from image repositories.
In a first step, participants were asked to fill out a survey about their image use. Secondly, they tested different use cases on the live system. The first finding is that users have a need to find scholarly images, but it is not a routine task and they often do not know any image repositories. This is another reason for repositories to become more open and reach users by integrating with other content providers. The second finding is that users paid attention to image licenses but struggled to find and interpret them, while also being unsure how to cite images. In general, there is a high demand for reusing scholarly images, but the existing infrastructure has room to improve.
Building a well-founded understanding of the concepts, tasks and limitations of IT in all areas of society is an essential prerequisite for future developments in business and research. This applies in particular to the healthcare sector and medical research, which are affected by the noticeable advances in digitization. In the transfer project “Zukunftslabor Gesundheit” (ZLG), a teaching framework was developed to support the development of further education online courses in order to teach heterogeneous groups of learners independent of location and prior knowledge. The study at hand describes the development and components of the framework.
Powder bed-based additive manufacturing processes offer an extended freedom in design and enable the processing of metals, ceramics, and polymers with a high level of relative density. The latter is a prevalent measure of process and component quality, which depends on various input variables. A key point in this context is the condition of powder beds. To enhance comprehension of their particle-level formation and facilitate process optimization, simulations based on the Discrete Element Method are increasingly employed in research. To generate qualitatively as well as quantitatively reliable simulation results, an adaptation of the contact model parameterization is necessary. However, current adaptation methods often require the implementation of models that significantly increase computational effort, therefore limiting their applicability. To counteract this obstacle, a sophisticated formula-based adaptation and evaluation method is presented in this research. Additionally, the developed method enables accelerated parameter determination with limited experimental effort. Thus, it represents an integrative component, which supports further research efforts based on the Discrete Element Method by significantly reducing the parameterization effort. The universal nature of deducting this method also allows its adaptation to similar parameterization problems and its implementation in other fields of research.
Pathologists need to identify abnormal changes in tissue. With advancing digitalization, tissue slides are increasingly stored digitally. This enables pathologists to annotate regions of interest with the support of software tools. PathoLearn is a web-based learning platform explicitly developed for the teacher-student scenario, where the goal is that students learn to identify potentially abnormal changes. Artificial intelligence (AI) and machine learning (ML) have become very important in medicine. Many health sectors already utilize AI and ML, and this will only increase in the future, also in the field of pathology. Therefore, it is important to teach students the fundamentals and concepts of AI and ML early in their studies. Additionally, creating and training AI generally requires knowledge of programming and technical details. This thesis evaluates how this barrier can be overcome by comparing existing end-to-end AI platforms and teaching tools for AI. It was shown that a visual programming editor offers a fitting abstraction for creating neural networks without programming. This was extended with real-time collaboration to enable students to work in groups. Additionally, an automatic training feature was implemented, removing the necessity of knowing technical details about training neural networks.
After kidney transplantation, graft rejection must be prevented. Therefore, a multitude of patient parameters is observed pre- and postoperatively. To support this process, the Screen Reject research project is developing a data warehouse optimized for kidney rejection diagnostics. In the course of this project it was discovered that important information is only available in the form of free text instead of structured data and can therefore not be processed by standard ETL tools, which is necessary to establish a digital expert system for rejection diagnostics. For this reason, data integration has been improved by combining methods from natural language processing with methods from image processing. Based on state-of-the-art data warehousing technologies (Microsoft SSIS), a generic data integration tool has been developed. The tool was evaluated by extracting the Banff classification from 218 pathology reports and extracting HLA mismatches from about 1700 PDF files, both written in German.
In this poster we present the ongoing development of an integrated free and open source toolchain for semantic annotation of digitised cultural heritage. The toolchain development involves the specification of a common data model that aims to increase interoperability across diverse datasets and to enable new collaborative research approaches.
This paper aims to provide a structured overview of four open, participatory formats that are particularly applicable in inquiry-based teaching and learning contexts: hackathons, book sprints, barcamps, and learning circles. Using examples, mostly from the work and experience context of the Open Science Lab at TIB Hannover, we address concrete processes, working methods, possible outcomes and challenges.
The compilation offers an introduction to the topic and is intended to provide tools for testing in practice.
Techno-economic analyses that allocate costs to the energy flows of energy systems are helpful for understanding the formation of costs within processes and for increasing cost efficiency. For the economic evaluation, the usefulness or quality of the energy is of great importance. In exergy-based methods, this is considered by allocating costs to the exergy instead of the energy. As exergy represents the ability to perform work, it is often called the useful part of energy. In contrast, the anergy, the part of energy that cannot perform work, is often assumed to be not useful.
However, heat flows as used, e.g., in domestic heating are always a mixture of a relatively small portion of exergy and a large portion of anergy. Although of lower quality, the anergy is obviously useful for these applications. The question is whether it makes sense to differentiate between exergy and anergy and take both properties into account for the economic evaluation.
To answer this question, a new methodical concept based on the definition of an anergy-exergy cost ratio is compared to the commonly applied approaches of considering either energy or exergy as the basis for economic evaluation. These three approaches to the economic analysis of thermal energy systems are applied to an exemplary heating system with thermal storages. It is shown that the results of the techno-economic analysis can be improved by giving anergy an economic value and that the proposed anergy-exergy cost ratio allows a flexible adaptation of the evaluation depending on the economic constraints of a system.
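To fix the quantities involved: for a heat flow $\dot{Q}$ delivered at temperature $T$ with ambient temperature $T_0$, the standard exergy/anergy split is given by the Carnot factor, and an anergy-exergy cost ratio can then weight the two parts in the cost allocation. The ratio $r$ below is our reading of the proposed concept; the paper's exact definition may differ.

```latex
% Standard exergy/anergy split of a heat flow, plus a sketched cost allocation
% with an anergy-exergy cost ratio r (exact definition assumed, not quoted).
\begin{align}
  \dot{E}_x &= \dot{Q}\left(1 - \frac{T_0}{T}\right), &
  \dot{A}_n &= \dot{Q}\,\frac{T_0}{T}, &
  \dot{Q} &= \dot{E}_x + \dot{A}_n, \\
  \dot{C} &= c_E\,\dot{E}_x + c_A\,\dot{A}_n
           = c_E\bigl(\dot{E}_x + r\,\dot{A}_n\bigr),
  \qquad r = \frac{c_A}{c_E}.
\end{align}
```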
Parametric study of piezoresistive structures in continuous fiber reinforced additive manufacturing
(2024)
Recent advancements in fiber reinforced additive manufacturing leverage the piezoresistivity of continuous carbon fibers. This effect enables the fabrication of structural components with inherent piezoresistive properties suitable for load measurement or structural monitoring. These are achieved without necessitating additional manufacturing or assembly procedures. However, there remain unexplored variables within the domain of continuous fiber-reinforced additive manufacturing. Crucially, the roles of fiber curvature radii and sensing fiber bundle counts have yet to be comprehensively addressed. Additionally, the compression-sensitive nature of printed carbon fiber-reinforced specimens remains a largely unexplored research area. To address these gaps, this study presents experimental analyses on tensile and three-point flexural specimens incorporating sensing carbon fiber strands. All specimens were fabricated with three distinct curvature radii. For the tensile specimens, the number of layers was also varied. Sensing fiber bundles were embedded on both tensile and compression sides of the flexural specimens. Mechanical testing revealed a linear-elastic behavior in the specimens. It was observed that carbon fibers supported the majority of the load, leading to brittle fractures. The resistance measurements showed a dependence on both the number of sensing layers and the radius of curvature, and exhibited a slight decreasing trend in the cyclic tests. Compared with the sensors subjected to tensile stress, the sensors embedded on the compression side showed a lower gauge factor.
This research focuses on the fundamental ideas and underlying principles of E-Learning technology, as well as theoretical considerations for an optimal learning environment. This theoretical exploration was then used as a basis for the design and construction of a new, interactive Web-Based ESH-Training. The quality and effectiveness of this new course was then compared with that of the existing analog PDF-Training via a test with a diverse sample of employee learners. Learners were later surveyed to ascertain their views on both trainings in terms of the quality of the content, facilitator, resources, and length. Results clearly showed that regardless of demographic factors, most employee learners preferred the new, Web-Based ESH-Training to the analog PDF-Training.
Compounds that exhibit the spin crossover effect are known to change their spin states in response to external stimuli. This reversible switching of spin states is accompanied by a change in the properties of the compound. Complexes, like iron(II)-triazole complexes, that exhibit this behavior at ambient temperature are often discussed for potential applications. In previous studies we synthesized iron(II)-triazole complexes and implemented them into electrospun nanofibers. In first studies, we used Mössbauer spectroscopy to prove a successful implementation while maintaining the spin crossover properties. Further studies of ours showed that different electrospinning methods make it possible either to implement the synthesized solid SCO material into the polymer nanofibers or to deposit it onto them. We have now used a solvent in which both the iron(II)-triazole complex [Fe(atrz)3](2 ns)2 and three different polymers (polyacrylonitrile, polymethylmethacrylate, and polyvinylpyrrolidone) are soluble. This should lead to a more homogeneous distribution of the complex along the nanofibers. Mössbauer spectroscopy and other measurements are therefore used to show a successful implementation without any significant changes to the complex.
Complexes like iron(II)-triazoles exhibit spin crossover behavior at ambient temperature and are often considered for possible applications. In previous studies, we implemented complexes of this type into polymer nanofibers and first polymer-based optical waveguide sensor systems. In our current study, we synthesized complexes of this type, implemented them into polymers, and obtained composites through drop casting and doctor blading. We show that a certain combination of polymer and complex can lead to composites with high potential for optical devices. For this purpose, we used two different complexes, [Fe(atrz)3](2 ns)2 and [Fe(atrz)3]Cl1.5(BF4)0.5, with different polymers for each composite. We show through transmission measurements and UV/VIS spectroscopy that the optical properties of these composite materials can reversibly change due to the spin crossover effect.
The increasing variety of combinations of different building technology components offers a high potential for energy and cost savings in today's buildings. However, in most cases, this potential is not yet fully exploited due to the lack of intelligent supervisory control systems that are required to manage the complexity of the resulting overall systems. In this article, we present the implementation of a mixed-integer nonlinear model predictive control approach as a smart real-time building energy management system. The presented methodology is based on a forward-looking optimization of the overall energy costs. It takes into account energy demand forecasts and varying electricity market prices. We achieve real-time capability of the controller by applying a decomposition approach, which approximates the optimal solution of the underlying mixed-integer optimal control problem by convexification and rounding of the relaxed solution. The quality of the suboptimal solution is evaluated by comparison with the globally optimal solution obtained by the dynamic programming method. Based on a real-world scenario, we demonstrate that utilization of the real-time capable mixed-integer nonlinear model predictive control approach in a building control system leads to savings of 16% in the total operating costs and 13% in primary energy compared to the state-of-the-art control strategy without any loss of comfort for the residents.
On November 30th, 2022, OpenAI released the large language model ChatGPT, an extension of GPT-3. The AI chatbot provides real-time communication in response to users’ requests. The quality of ChatGPT’s natural speaking answers marks a major shift in how we will use AI-generated information in our day-to-day lives. For a software engineering student, the use cases for ChatGPT are manifold: assessment preparation, translation, and creation of specified source code, to name a few. It can even handle more complex aspects of scientific writing, such as summarizing literature and paraphrasing text. Hence, this position paper addresses the need for discussion of potential approaches for integrating ChatGPT into higher education. Therefore, we focus on articles that address the effects of ChatGPT on higher education in the areas of software engineering and scientific writing. As ChatGPT was only recently released, there have been no peer-reviewed articles on the subject. Thus, we performed a structured grey literature review using Google Scholar to identify preprints of primary studies. In total, five out of 55 preprints are used for our analysis. Furthermore, we held informal discussions and talks with other lecturers and researchers and took into account the authors’ test results from using ChatGPT. We present five challenges and three opportunities for the higher education context that emerge from the release of ChatGPT. The main contribution of this paper is a proposal for how to integrate ChatGPT into higher education in four main areas.
The PROFINET protocol has been extended in the current version to include security functions. This allows flexible network architectures with the consideration of OT security requirements to be designed for PROFINET, which were not possible due to the network segmentation previously required. In addition to the manufacturers of the protocol stacks, component manufacturers are also required to provide a secure implementation in their devices. The necessary measures go beyond the use of a secure protocol stack. Using the example of an Ethernet-APL transmitter with PROFINET communication, this article shows which technical and organizational conditions will have to be considered by PROFINET device manufacturers in the future.
Purpose: The calculation of aggregated composite measures is a widely used strategy to reduce the amount of data on hospital report cards. Therefore, this study aims to elicit and compare preferences of both patients as well as referring physicians regarding publicly available hospital quality information.
Methods: Based on systematic literature reviews as well as qualitative analysis, two discrete choice experiments (DCEs) were applied to elicit patients’ and referring physicians’ preferences. The DCEs were conducted using a fractional factorial design. Statistical data analysis was performed using multinomial logit models.
Results: Apart from five identical attributes, one additional attribute specific to each study group was identified. Overall, 322 patients (mean age 68.99) and 187 referring physicians (mean age 53.60) were included. Our models displayed significant coefficients for all attributes (p < 0.001 each). Among patients, “Postoperative complication rate” (20.6%; level range of 1.164) was rated highest, followed by “Mobility at hospital discharge” (19.9%; level range of 1.127), and “The number of cases treated” (18.5%; level range of 1.045). In contrast, referring physicians valued the “One-year revision surgery rate” most highly (30.4%; level range of 1.989), followed by “The number of cases treated” (21.0%; level range of 1.372), and “Postoperative complication rate” (17.2%; level range of 1.123).
Conclusion: We determined considerable differences between both study groups when calculating the relative value of publicly available hospital quality information. This may have an impact when calculating aggregated composite measures based on consumer-based weighting.
Chronic kidney disease (CKD) is one of the main causes of mortality worldwide. It affects more than 800 million patients globally, accounting for approximately 10% of the general population. The significant burden of the disease prompts healthcare systems to implement adequate preventive and therapeutic measures. This systematic review and meta-analysis aimed to provide a concise summary of the findings published in the existing body of research on the influence that mobile health technology has on the outcomes of patients with the disease. A comprehensive systematic literature review was conducted from inception until March 1st, 2023. This systematic review and meta-analysis included all clinical trials that compared the efficacy of mobile app-based educational programs with that of more conventional educational treatment for these patients. Eleven papers were included in the current analysis, representing 759 CKD patients. 381 patients were randomly assigned to use the mobile apps, while 378 individuals were assigned to the control group. The mean systolic blood pressure was considerably lower in the mobile app group (MD -4.86; 95% CI -9.60, -0.13; p=0.04). Meanwhile, the mean level of satisfaction among patients who used the mobile app was considerably greater (MD 0.75; 95% CI 0.03, 1.46; p=0.04). Additionally, the mean self-management scores in the mobile app groups were significantly higher (SMD 0.534; 95% CI 0.201, 0.867; p=0.002). Mobile health applications are potentially valuable interventions for these patients. This technology improved the self-management of the disease, reducing the mean levels of systolic blood pressure with a high degree of patient satisfaction.
Background: In Germany, hospice and palliative care is well covered through inpatient, outpatient, and home-based care services. It is unknown if, and to what extent, there is a need for additional day care services to meet the specific needs of patients and caregivers.
Methods: Two day hospices and two palliative day care clinics were selected. In the first step, two managers from each facility (n = 8) were interviewed by telephone, using a semi-structured interview guide. In the second step, four focus groups were conducted, each with three to seven representatives of hospice and palliative care from the facilities’ hospice and palliative care networks. Interviews and focus groups were audio recorded, transcribed verbatim and analyzed using qualitative content analysis.
Results: The interviewed experts perceived day care services as providing additional patient and caregiver benefits. Specifically, the services were perceived to meet patient needs for social interaction and bundled treatments, especially for patients who did not fit into inpatient settings (due to, e.g., their young age or a lack of desire for inpatient admission). The services were also perceived to meet caregiver needs for support, providing short-term relief for the home care situation.
Conclusions: The results suggest that inpatient, outpatient, and home-based hospice and palliative care services do not meet the palliative care needs of all patients. Although the population that is most likely to benefit from day care services is assumed to be relatively small, such services may meet the needs of certain patient groups more effectively than other forms of care.
We present an approach towards a data acquisition system for digital twins that uses a 5G network for data transmission and localization. The current hardware setup, which utilizes stereo vision and LiDAR for 3D mapping, is explained together with two recorded point cloud data sets. Furthermore, a resulting digital twin comprised of voxelized point cloud data is shown. Ideas for future applications and challenges regarding the system are discussed and an outlook on further development is given.
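A minimal sketch of the voxelization step mentioned above, assuming point clouds arrive as N×3 arrays; the voxel size is an arbitrary placeholder.

```python
# Sketch: voxelize a point cloud by snapping points to a regular grid and
# keeping one representative per occupied voxel. Voxel size is a placeholder.
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.1) -> np.ndarray:
    """points: (N, 3) array; returns centers of occupied voxels."""
    indices = np.floor(points / voxel_size).astype(np.int64)  # voxel coordinates
    occupied = np.unique(indices, axis=0)                     # one entry per voxel
    return (occupied + 0.5) * voxel_size                      # voxel centers

cloud = np.random.rand(10000, 3) * 5.0   # placeholder for LiDAR/stereo data
print(voxelize(cloud).shape)
```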
As a result of a research semester in the summer of 2022, a bibliography on multimodality in technical communication (TC) is presented. Given that TC primarily involves the development of instructional information, this bibliography holds relevance for anyone interested in the use of multimodality in the communication of procedural knowledge. The bibliography is publicly accessible as a Zotero group library (https://bit.ly/multimodality_in_tc) and can be used and expanded.
After a description of the objectives and target group, the five disciplines from which the publications in the bibliography originate are presented. This is followed by information on the structure and search options of the Zotero group library, which are intended to support the search for publications on the respective research interest. The article concludes with some suggestions for collaborative efforts aimed at further enhancing and expanding the bibliography.
The author actively maintains the group library. Individuals seeking to contribute publications to the group library will receive the appropriate access rights from the author (claudia.villiger@hs-hannover.de). The author aspires to foster collaboration among researchers from diverse fields through this bibliography.
In this paper we describe methods to approximate functions and differential operators on adaptive sparse (dyadic) grids. We distinguish between several representations of a function on the sparse grid and we describe how finite difference (FD) operators can be applied to these representations. For general variable coefficient equations on sparse grids, genuine finite element (FE) discretizations are not feasible, and FD operators allow an easier operator evaluation than the adapted FE operators. However, the structure of the FD operators is complex. With the aim of constructing an efficient multigrid procedure, we analyze the structure of the discrete Laplacian in its hierarchical representation and show the relation between the full and the sparse grid case. The rather complex relations, expressed by scaling matrices for each separate coordinate direction, make us doubt the possibility of constructing efficient preconditioners that show spectral equivalence. Hence, we question the possibility of constructing a natural multigrid algorithm with optimal O(N) efficiency. We conjecture that for the efficient solution of a general class of adaptive grid problems it is better to accept an additional condition for the dyadic grids (condition L) and to apply adaptive hp-discretization.
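For orientation, the hierarchical representation referred to above assigns to each dyadic grid point its hierarchical surplus, i.e., the deviation of the nodal value from the linear interpolant of its two coarser-level neighbors (standard 1D definition shown; the paper's operators build on tensorization of this idea).

```latex
% 1D hierarchical surplus on a dyadic grid: nodal value minus the linear
% interpolant of the two hierarchical (coarser-level) neighbors of x_{l,i}.
\begin{equation}
  w_{l,i} = u(x_{l,i}) - \tfrac{1}{2}\bigl( u(x_{l,i-1}) + u(x_{l,i+1}) \bigr),
  \qquad x_{l,i} = i \cdot 2^{-l}, \quad i \text{ odd}.
\end{equation}
```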
The paper presents a comprehensive model of a banking system that integrates network effects, bankruptcy costs, fire sales, and cross-holdings. For the integrated financial market we prove the existence of a price-payment equilibrium and design an algorithm for the computation of the greatest and the least equilibrium. The number of defaults corresponding to the greatest price-payment equilibrium is analyzed in several comparative case studies. These illustrate the individual and joint impact of interbank liabilities, bankruptcy costs, fire sales and cross-holdings on systemic risk. We study policy implications and regulatory instruments, including central bank guarantees and quantitative easing, the significance of last wills of financial institutions, and capital requirements.
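For context, the clearing-payment fixed point that such models extend (here without fire sales, bankruptcy costs, or cross-holdings) can be computed by iterating from above, which converges to the greatest clearing vector; this is a textbook Eisenberg-Noe sketch, not the paper's full algorithm.

```python
# Textbook Eisenberg-Noe clearing sketch (no fire sales, bankruptcy costs or
# cross-holdings): iterate p <- min(pbar, e + Pi^T p) from p = pbar; the
# iteration decreases monotonically to the greatest clearing payment vector.
import numpy as np

pbar = np.array([10.0, 8.0, 6.0])   # total interbank liabilities (toy data)
e = np.array([2.0, 1.0, 4.0])       # external assets
# Pi[i, j]: share of bank i's liabilities owed to bank j
Pi = np.array([[0.0, 0.5, 0.5],
               [0.5, 0.0, 0.5],
               [0.5, 0.5, 0.0]])

p = pbar.copy()
for _ in range(1000):
    p_new = np.minimum(pbar, e + Pi.T @ p)   # pay in full or pay out everything
    if np.allclose(p, p_new):
        break
    p = p_new

print("clearing payments:", p)
print("defaults:", p < pbar - 1e-9)
```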
Conventional fluorescent tubes are increasingly being replaced with innovative light-emitting diodes (LEDs) for lighting poultry houses. However, little is known about whether the flicker frequencies of LED luminaires are potential stressors in poultry husbandry. The term “light flicker” describes the fluctuations in the brightness of an electrically operated light source caused by the design and/or control of the light source. In this context, the critical flicker frequency (CFF) characterizes the frequency at which a sequence of light flashes is perceived as continuous light. It is known that CFF in birds is higher than that in humans and that light flicker can affect behavioral patterns and stress levels in several bird species. As there is a lack of knowledge about the impact of flicker frequency on fattening turkeys, this study aimed to investigate the effects of flicker frequency on the behavior, performance, and stress response in male turkeys. In 3 trials, a total of 1,646 male day-old turkey poults of the strain B.U.T. 6 with intact beaks were reared for 20 wk in 12 barn compartments of 18 m² each. Each barn compartment was illuminated using 2 full-spectrum LED lamps. Flicker frequencies of 165 Hz, 500 Hz, and 16 kHz were set in the luminaires to illuminate the compartments. Analyses of feather corticosterone concentration were performed on fully grown third-generation primaries (P 3) of 5 turkeys from each compartment. No significant differences were found in the development of live weight, feed consumption, or prevalence of injured or killed turkeys by conspecifics reared under the above flicker frequencies. The flicker frequencies also did not significantly influence feather corticosterone concentrations in the primaries of the turkeys. In conclusion, the present results indicate that flicker frequencies of 165 Hz or higher have no detrimental effect on growth performance, injurious pecking, or endocrine stress response in male turkeys and, thus, may be suitable for use as animal-friendly lighting.
Background: Autism Spectrum Disorder (ASD) is characterized by impairments in social communication, restricted and repetitive behaviors, impaired language development, and narrow patterns of interests or activities. It comprises a group of complex neurodevelopmental syndromes with diverse phenotypes that reveal considerable etiological and clinical heterogeneity, and it is also considered one of the most heritable disorders (over 90%). Genetic, epigenetic, and environmental factors play a role in the development of ASD.
Aim: This study was designed to investigate the extent of DNA damage in parents of autistic children by treating peripheral blood mononuclear cells (PBMCs) with bleomycin and hydrogen peroxide (H2O2).
Methods: Peripheral blood mononuclear cells (PBMCs) were isolated by the Ficoll method and treated with a specific concentration of bleomycin and H2O2 for 30 min and 5 min, respectively. Then, the degree of DNA damage was analyzed by the alkaline comet assay or single cell gel electrophoresis (SCGE), an effective way to measure DNA fragmentation in eukaryotic cells.
Results: Our findings revealed a significant increase in DNA damage in parents with affected children compared to the control group, which may indicate an impaired DNA repair system. Furthermore, our study showed a significant association between fathers' occupations (with exposure to environmental factors), as well as marriage within the family, and ASD in offspring.
Conclusion: Our results suggested that the influence of environmental factors on parents of autistic children may affect the development of autistic disorder in their offspring. Subsequently, based on our results, investigating the effect of environmental factors on the amount of DNA damage in parents with affected children requires more studies.
The miniaturized Mössbauer spectrometer (MIMOS II), originally devised by Göstar Klingelhöfer, is being further developed by the Renz group at Leibniz University Hanover in cooperation with Hanover University of Applied Sciences and Arts. A new processing unit with two-dimensional (2D) data acquisition was developed by M. Jahns. The advantage of this data acquisition is that no thresholds need to be set before the measurement. The energy of each photon is determined and stored together with the velocity of the drive. After the measurement, the relevant area can be selected for the Mössbauer spectrum. We have now expanded the evaluation unit with a power supply for a MIMOS drive and a MIMOS PIN detector, resulting in a very compact MIMOS transmission measurement setup. With this setup it is possible to process the signals of two detectors serially. Currently we are working on parallel signal processing.
Mixed-integer NMPC for real-time supervisory energy management control in residential buildings
(2023)
In recent years, building energy supply and distribution systems have become more complex, with an increasing number of energy generators, stores, flows, and possible combinations of operating modes. This poses challenges for supervisory control, especially when balancing the conflicting goals of maximizing comfort while minimizing costs and emissions to contribute to global climate protection objectives. Mixed-integer nonlinear model predictive control is a promising approach for intelligent real-time control that is able to properly address the specific characteristics and restrictions of building energy systems. We present a strategy that utilizes a decomposition approach, combining partial outer convexification with the Switch-Cost Aware Rounding procedure to handle switching behavior and operating time constraints of building components in real-time. The efficacy is demonstrated through practical applications in a single-family home with a combined heat and power unit and in a multi-family apartment complex with 18 residential units. Simulation studies show high correspondence to globally optimal solutions with significant cost savings potential of around 19%.
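The rounding step of such a decomposition can be pictured with plain sum-up rounding, the simpler relative of the Switch-Cost Aware Rounding named above (which additionally penalizes switching and enforces operating-time constraints); the sketch below shows only the basic idea on a relaxed binary control.

```python
# Sketch of sum-up rounding: turn a relaxed control alpha(t) in [0, 1] into a
# 0/1 control whose accumulated activation tracks the relaxed one. The paper's
# Switch-Cost Aware Rounding adds switching costs and dwell times (omitted).
import numpy as np

def sum_up_rounding(alpha: np.ndarray, dt: float) -> np.ndarray:
    binary = np.zeros_like(alpha)
    gap = 0.0                          # activation owed by the binary control
    for k in range(len(alpha)):
        gap += alpha[k] * dt           # what the relaxed solution "wants"
        if gap >= 0.5 * dt:            # switch on when lagging behind
            binary[k] = 1.0
            gap -= dt
    return binary

alpha = np.clip(0.5 + 0.5 * np.sin(np.linspace(0, 6, 60)), 0, 1)  # toy input
print(sum_up_rounding(alpha, dt=0.1))
```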
Background:
Many patients with cardiovascular disease also show a high comorbidity of mental disorders, especially anxiety and depression. This is, in turn, associated with a decrease in quality of life. Psychocardiological treatment options are currently limited. Hence, there is a need for novel and accessible psychological help. Recently, we demonstrated that a brief face-to-face intervention based on metacognitive therapy (MCT) is promising in treating anxiety and depression. Here, we aim to translate the face-to-face approach into a digital application and explore the feasibility of this approach.
Methods:
We translated a validated brief psychocardiological intervention into a novel non-blended web app. The data of 18 patients suffering from various cardiac conditions but without diagnosed mental illness were analyzed after they used the web app over a two-week period in a feasibility trial. The aim was to determine whether a non-blended, web-app-based MCT approach is feasible for patients with cardiovascular disease.
Results:
Overall, patients were able to use the web app and rated it as satisfactory and beneficial. In addition, there was a first indication that using the app improved the cardiac patients' subjectively perceived health and reduced their anxiety. Therefore, the approach seems feasible for a future randomized controlled trial.
Conclusion:
Applying a brief metacognitive intervention via a non-blended web app seems to show good acceptance and feasibility in a small target group of patients with CVD. Future studies should further develop, improve and validate digital psychotherapy approaches, especially in patient groups with a lack of access to standard psychotherapeutic care.
In recent years, generative models have gained considerable public attention due to the high quality of the images they generate. In short, generative models learn a distribution from a finite number of samples and are then able to generate arbitrarily many new samples. This can be applied to image data. In the past, generative models were not able to generate realistic images, but nowadays the results are almost indistinguishable from real images.
This work provides a comparative study of three generative models: the Variational Autoencoder (VAE), the Generative Adversarial Network (GAN) and Diffusion Models (DM). The goal is not to provide a definitive ranking indicating which of them is the best, but to decide qualitatively, and where possible quantitatively, which model is good with respect to a given criterion. Such criteria include realism, generalization and diversity, sampling, training difficulty, parameter efficiency, interpolation and inpainting capabilities, semantic editing, as well as implementation difficulty. After a brief introduction to how each model works internally, the models are compared against each other. The provided images help to show the differences among the models with respect to each criterion.
To give a short outlook on the results of the comparison: DMs generate the most realistic images. They seem to generalize best and show high variation among the generated images. However, they are based on an iterative process, which makes them the slowest of the three models in terms of sample generation time. GANs and VAEs, on the other hand, generate their samples in a single forward pass. The images generated by GANs are comparable to those of the DM, while the images from VAEs are blurry, which makes them less desirable in comparison to GANs or DMs. However, both the VAE and the GAN stand out from the DMs with respect to interpolation and semantic editing, as they have a latent space, which makes latent-space walks possible, and the changes are not as chaotic as with DMs. Furthermore, concept vectors can be found that transform a given image along a given feature while leaving other features and structures mostly unchanged, which is difficult to achieve with DMs.
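The latent-space walks mentioned above amount to interpolating between two latent codes and decoding each intermediate point; spherical interpolation is a common choice for Gaussian latents. In this sketch, `decoder` stands in for a trained VAE decoder or GAN generator.

```python
# Sketch: spherical interpolation (slerp) between two latent vectors, as used
# for latent-space walks. `decoder` is a placeholder for a trained generator.
import numpy as np

def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:                      # nearly parallel: fall back to lerp
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) / so) * z0 + (np.sin(t * omega) / so) * z1

rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(128), rng.standard_normal(128)
frames = [slerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 8)]
# images = [decoder(z) for z in frames]   # placeholder decoding step
```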
There are many aspects of code quality, some of which are difficult to capture or to measure. Despite the importance of software quality, there is a lack of commonly accepted measures or indicators for code quality that can be linked to quality attributes. We investigate software developers’ perceptions of source code quality and the practices they recommend to achieve these qualities. We analyze data from semi-structured interviews with 34 professional software developers, programming teachers and students from Europe and the U.S. For the interviews, participants were asked to bring code examples to exemplify what they consider good and bad code, respectively. Readability and structure were used most commonly as defining properties for quality code. Together with documentation, they were also suggested as the most common target properties for quality improvement. When discussing actual code, developers focused on structure, comprehensibility and readability as quality properties. When analyzing relationships between properties, the most commonly talked about target property was comprehensibility. Documentation, structure and readability were named most frequently as source properties to achieve good comprehensibility. Some of the most important source code properties contributing to code quality as perceived by developers lack clear definitions and are difficult to capture. More research is therefore necessary to measure the structure, comprehensibility and readability of code in ways that matter for developers and to relate these measures of code structure, comprehensibility and readability to common software quality attributes.
The aim of this cross-sectional study was to investigate associated factors of the severity of clinical mastitis (CM). Milk samples of 249 cases of CM were microbiologically examined, of which 27.2% were mild, 38.5% moderate, and 34.3% severe mastitis. The samples were incubated aerobically and anaerobically to investigate the role of aerobic and anaerobic microorganisms. In addition, the pathogen shedding was quantitatively examined, and animal individual data, outside temperature and relative humidity, were collected to determine associated factors for the severity of CM. The pathogen isolated the most was Escherichia coli (35.2%), followed by Streptococcus spp. (16.4%). Non-aureus staphylococci (NaS) (15.4%) and other pathogens (e.g., Staphylococcus aureus, coryneforms) (15.4%) were the pathogens that were isolated the most for mild mastitis. Moderate mastitis was mostly caused by E. coli (38%). E. coli was also the most common pathogen in severe mastitis (50.6%), followed by Streptococcus spp. (16.4%), and Klebsiella spp. (10.3%). Obligate anaerobes (Clostridium spp.) were isolated in one case (0.4%) of moderate mastitis. The mortality rate (deceased or culled due to the mastitis in the following two weeks) was 34.5% for severe mastitis, 21.7% for moderate mastitis, and 4.4% for mild mastitis. The overall mortality rate of CM was 21.1%. The pathogen shedding (back logarithmized) was highest for severe mastitis (55,000 cfu/mL) and E. coli (91,200 cfu/mL). High pathogen shedding, low previous somatic cell count (SCC) before mastitis, high outside temperature, and high humidity were associated with severe courses of mastitis.
Appropriate data models are essential for the systematic collection, aggregation, and integration of health data and for subsequent analysis. However, recommendations for modeling health data are often not publicly available within specific projects. Therefore, the project Zukunftslabor Gesundheit investigates recommendations for modeling. Expert interviews with five experts were conducted and analyzed using qualitative content analysis. Based on the condensed categories “governance”, “modeling” and “standards”, the project team generated eight hypotheses for recommendations on health data modeling. In addition, relevant framework conditions such as different roles, international cooperation, education/training and political influence were identified. Although emerging from interviewing a small convenience sample of experts, the results help to plan more extensive data collections and to create recommendations for health data modeling.
Economic and political/governmental infrastructural factors are major contributors to the economic development and growth of all sectors of a country, including healthcare systems and clinical research as well as the pharmaceutical industry. But how do economic and political/governmental infrastructural factors interact with the development of healthcare systems, and especially with the performance of the pharmaceutical industry? Information from articles selected through a literature search of PubMed and Google Advanced Search led to the generation of five categories of infrastructural factors, which were filled with data from 41 African countries using the World Health Organization data repository. Median changes over time were reported and tested by the Wilcoxon signed-rank test and the Friedman test, respectively. Analysis of factors related to the availability of healthcare facilities showed that the numbers of physicians and pharmacies increased significantly, while the number of hospital beds decreased insignificantly. Healthcare financing by governments showed notable differences. Private health spending decreased significantly, unlike gross national income. Analysis of infrastructural factors showed that the stable supply of electricity and the associated use of the Internet improved significantly. The low level of data on the expansion of paved road networks suggests less developed medical services in remote rural areas. Healthcare systems in African countries have improved over the last two decades, but differences between individual countries still prevail and some countries cannot yet offer an attractive sales market for the products of pharmaceutical companies.
PROFINET Security: A Look on Selected Concepts for Secure Communication in the Automation Domain
(2023)
We provide a brief overview of the cryptographic security extensions for PROFINET, as defined and specified by PROFIBUS & PROFINET International (PI). These come in three hierarchically defined Security Classes, called Security Class 1, 2 and 3. Security Class 1 provides basic security improvements with moderate implementation impact on PROFINET components. Security Classes 2 and 3, in contrast, introduce an integrated cryptographic protection of PROFINET communication. We first highlight and discuss the security features that the PROFINET specification offers for future PROFINET products. Then, as our main focus, we take a closer look at some of the technical challenges that were faced during the conceptualization and design of Security Class 2 and 3 features. In particular, we elaborate on how secure application relations between PROFINET components are established and how a disruption-free availability of a secure communication channel is guaranteed despite the need to refresh cryptographic keys regularly. The authors are members of the PI Working Group CB/PG10 Security.
Context: Higher education is changing at an accelerating pace due to the widespread use of digital teaching and emerging technologies. In particular, AI assistants such as ChatGPT pose significant challenges for higher education institutions because they bring change to several areas, such as learning assessments or learning experiences.
Objective: Our objective is to discuss the impact of AI assistants in the context of higher education, outline possible changes to the context, and present recommendations for adapting to change.
Method: We review related work and develop a conceptual structure that visualizes the role of AI assistants in higher education.
Results: The conceptual structure distinguishes between humans, learning, organization, and disruptor, which guides our discussion regarding the implications of AI assistant usage in higher education. The discussion is based on evidence from related literature.
Conclusion: AI assistants will change the context of higher education in a disruptive manner, and the tipping point for this transformation has already been reached. It is in our hands to shape this transformation.
The aim of this cross-sectional study was to investigate the occurrence of bacteremia in severe mastitis cases of dairy cows. Milk and corresponding blood samples of 77 cases of severe mastitis were bacteriologically examined. All samples (milk and blood) were incubated aerobically and anaerobically to also investigate the role of obligate anaerobic microorganisms in addition to aerobic microorganisms in severe mastitis. Bacteremia occurred if identical bacterial strains were isolated from milk and blood samples of the same case. In addition, pathogen shedding was examined, and the data of animals and weather were collected to determine associated factors for the occurrence of bacteremia in severe mastitis. If Gram-negative bacteria were detected in milk samples, a Limulus test (detection of endotoxins) was also performed for corresponding blood samples without the growth of Gram-negative bacteria. In 74 cases (96.1%), microbial growth was detected in aerobically incubated milk samples. The most-frequently isolated bacteria in milk samples were Escherichia (E.) coli (48.9%), Streptococcus (S.) spp. (18.1%), and Klebsiella (K.) spp. (16%). Obligatory anaerobic microorganisms were not isolated. In 72 cases (93.5%) of the aerobically examined blood samples, microbial growth was detected. The most-frequently isolated pathogens in blood samples were non-aureus Staphylococci (NaS) (40.6%) and Bacillus spp. (12.3%). The Limulus test was positive for 60.5% of cases, which means a detection of endotoxins in most blood samples without the growth of Gram-negative bacteria. Bacteremia was confirmed in 12 cases (15.5%) for K. pneumoniae (5/12), E. coli (4/12), S. dysgalactiae (2/12), and S. uberis (1/12). The mortality rate (deceased or culled) was 66.6% for cases with bacteremia and 34.1% for cases without bacteremia. High pathogen shedding and high humidity were associated with the occurrence of bacteremia in severe mastitis.
Monitoring of clinical trials is a fundamental process required by regulatory agencies. It assures the compliance of a center with the required regulations and the trial protocol. Traditionally, monitoring teams relied on extensive on-site visits and source data verification. However, this is costly, and the outcome is limited. Thus, central statistical monitoring (CSM) is an additional approach, recently embraced by the International Council for Harmonisation (ICH), to detect problematic or erroneous data using visualizations and statistical control measures. Existing implementations have primarily focused on detecting inlier and outlier data. Other approaches include principal component analysis and the distribution of the data. Here we focus on comparing centers to the grand mean for different model types and assumptions for common data types, such as binomial, ordinal, and continuous response variables. We implement multiple comparisons of single centers to the grand mean of all centers. This approach is also available for various non-normal data types that are abundant in clinical trials. Furthermore, using confidence intervals, an assessment of equivalence to the grand mean can be applied. In a Monte Carlo simulation study, the applied statistical approaches were investigated for their ability to control the type I error rate and for their respective power in balanced and unbalanced designs, which are common in registry data and clinical trials. Data from the German Multiple Sclerosis Registry (GMSR), including proportions of missing data, adverse events, and disease severity scores, were used to verify the results on real-world data (RWD).
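As a hedged illustration of the idea, and not the authors' implementation, the following Python sketch compares each center's mean to the grand mean using t-based confidence intervals and a placeholder equivalence margin. It deliberately simplifies the method (for instance, it ignores that each center contributes to the grand mean), and all data are synthetic.

```python
# Simplified sketch: compare each center's mean to the grand mean; a center is
# flagged equivalent if its CI for the difference lies within +/- margin.
import numpy as np
from scipy import stats

def center_vs_grand_mean(values_by_center, alpha=0.05, margin=0.5):
    grand_mean = np.mean(np.concatenate(list(values_by_center.values())))
    results = {}
    for center, values in values_by_center.items():
        diff = np.mean(values) - grand_mean
        half = stats.t.ppf(1 - alpha / 2, df=len(values) - 1) * stats.sem(values)
        lo, hi = diff - half, diff + half
        results[center] = {"diff": round(diff, 3), "ci": (round(lo, 3), round(hi, 3)),
                           "equivalent": -margin < lo and hi < margin}
    return results

rng = np.random.default_rng(0)
data = {f"center_{i}": rng.normal(0, 1, 40) for i in range(5)}
data["center_5"] = rng.normal(1.5, 1, 40)  # one deliberately deviating center
print(center_vs_grand_mean(data))
```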
The digital transformation, with its new technologies and customer expectations, has a significant effect on the customer channels in the insurance industry. The objective of this study is the identification of enabling and hindering factors for the adoption of online claim notification services, which are an important part of the customer experience in insurance. For this purpose, we conducted a quantitative cross-sectional survey based on the exemplary scenario of car insurance in Germany and analyzed the data via structural equation modeling (SEM). The findings show that, besides classical technology acceptance factors such as perceived usefulness and ease of use, digital mindset and status quo behavior play a role: acceptance of digital innovations as well as a lack of endurance and a lack of frustration tolerance with the status quo lead to a higher intention to use. Moreover, the results are strongly moderated by the severity of the damage event, an insurance-specific factor that has received little consideration so far. The latter finding implies that customers prefer to choose a communication channel based on the individual circumstances of the claim.
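For readers unfamiliar with the approach, the following is a deliberately simplified stand-in for the SEM analysis: a single structural path estimated by ordinary least squares, with claim severity as a moderator via an interaction term. The variable names and simulated data are illustrative assumptions, not the study's actual items, model, or results.

```python
# Simplified moderation sketch (OLS, not full SEM): the usefulness:severity
# coefficient captures how claim severity moderates the usefulness effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "usefulness": rng.normal(0, 1, n),
    "ease_of_use": rng.normal(0, 1, n),
    "severity": rng.integers(0, 2, n),  # 0 = minor claim, 1 = severe claim
})
df["intention"] = (0.5 * df.usefulness + 0.3 * df.ease_of_use
                   - 0.4 * df.severity * df.usefulness + rng.normal(0, 1, n))

model = smf.ols("intention ~ usefulness + ease_of_use + usefulness:severity",
                data=df).fit()
print(model.summary())
```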
Introduction
Atopic dermatitis (AD) is a common inflammatory skin disease. Many patients initiate a systemic therapy if the disease is not adequately controlled with topical treatment alone. Currently, there is little real-world evidence on the AD-related medical care situation in Germany. This study analyzed patient characteristics, treatment patterns, healthcare resource utilization, and costs associated with systemically treated AD for the German healthcare system.
Methods
In this descriptive, retrospective cohort study, aggregated anonymized German health claims data from the InGef research database were used. Within a representative sample of four million insured individuals, patients with AD and systemic drug therapy initiation (SDTI) in the index year 2017 were identified and included in the study cohort. Systemic drug therapy included dupilumab, systemic corticosteroids (SCS), and systemic immunosuppressants (SIS). Patients were observed for one year starting from the date of SDTI in 2017.
Results
A total of 9975 patients were included (57.8% female, mean age 39.6 years [SD 25.5]). In the one-year observation period, the most common systemic drug therapy was SCS (> 99.0%). Administrations of dupilumab (0.3%) or dispensations of SIS were rare (cyclosporine: 0.5%, azathioprine: 0.6%, methotrexate: 0.1%). The median treatment durations of SCS, cyclosporine, and azathioprine were 27, 102, and 109 days, respectively. Phototherapy was received by 2.8% of the patients; 41.6% used topical corticosteroids and/or topical calcineurin inhibitors. Average annual costs for medications amounted to € 1237 per patient. Outpatient services were used by 99.6%, with associated mean annual costs of € 943; 25.4% had at least one hospitalization (mean annual costs: € 5836). Sickness benefits were received by 5.3% of adult patients, with associated mean annual costs of € 5026.
Conclusions
Despite the unfavorable risk–benefit profile of SCS, this study demonstrated that treatment with SCS was common, whereas other systemic drug therapy options were rarely used. Furthermore, the results suggest a substantial economic burden for patients with AD and SDTI.
Purpose
This study aims to determine the intention to use hospital report cards (HRCs) for hospital referral purposes in the presence or absence of patient-reported outcomes (PROs) as well as to explore the relevance of publicly available hospital performance information from the perspective of referring physicians.
Methods
We identified the most relevant information for hospital referral purposes based on a literature review and qualitative research. Primary survey data were collected (May–June 2021) from a sample of 591 referring orthopedists in Germany and analyzed using structural equation modeling. Participating orthopedists were recruited using a sequential mixed-mode strategy and randomly allocated to work with HRCs in the presence (intervention) or absence (control) of PROs.
Results
Overall, 420 orthopedists (mean age 53.48 years, SD 8.04) were included in the analysis. The presence of PROs on HRCs was not associated with an increased intention to use HRCs (p = 0.316). Performance expectancy was shown to be the most important determinant of using HRCs (path coefficient: 0.387, p < 0.001). However, referring physicians have doubts as to whether HRCs can help them. We identified “complication rate” and “the number of cases treated” as most important for hospital referral decision-making; PROs were rated as slightly less important.
Conclusions
This study underpins the purpose of HRCs, namely to support referring physicians in searching for a hospital. Nevertheless, only a minority would use HRCs in their current form for their next hospital search. We showed that presenting relevant information on HRCs did not increase the intention to use them.
The shift towards renewable energy sources (RES) introduces challenges related to power system stability due to the characteristics of inverter-based resources (IBRs) and the intermittent nature of renewable resources. This paper addresses these challenges by conducting comprehensive time- and frequency-domain simulations on the IEEE two-area benchmark power system with detailed type 4 wind turbine generators (WTGs), including turbines, generators, converters, filters, and controllers. The simulations analyse small-signal and transient stability, considering variations in active and reactive power, short-circuit events, and wind variations. Metrics such as the rate of change of frequency (RoCoF), frequency nadir, percentage of frequency variation, and probability density function (PDF) are used to evaluate system performance. The findings emphasise the importance of including detailed models of RES in stability analyses and demonstrate the impact of RES penetration on power system dynamics. This study contributes to a deeper understanding of RES integration challenges and provides insights for ensuring the reliable and secure operation of power systems with high levels of RES penetration.
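As a brief illustration of the frequency metrics named above, the following Python sketch computes RoCoF, frequency nadir, and percentage frequency variation from a synthetic frequency trace; the signal and the 50 Hz nominal frequency are placeholder assumptions, not results from the paper's simulations.

```python
# Sketch of common frequency-stability metrics on a synthetic trace.
import numpy as np

def frequency_metrics(t, f, f_nominal=50.0):
    rocof = np.gradient(f, t)                       # rate of change of frequency, Hz/s
    nadir = f.min()                                 # frequency nadir, Hz
    variation = 100 * (f - f_nominal) / f_nominal   # percentage deviation from nominal
    return {"max_abs_rocof_hz_per_s": np.abs(rocof).max(),
            "nadir_hz": nadir,
            "max_abs_variation_pct": np.abs(variation).max()}

t = np.linspace(0, 10, 1001)
# Synthetic post-disturbance response: a damped dip and recovery around 50 Hz.
f = 50.0 - 0.4 * np.exp(-0.5 * t) * np.sin(1.2 * t)
print(frequency_metrics(t, f))
```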