Fakultät III - Medien, Information und Design
Obesity and excess adiposity account for approximately 20% of all cancer cases; however, biomarkers of risk remain to be elucidated. While fibroblast growth factor-2 (FGF2) is emerging as an attractive candidate biomarker for visceral adipose tissue mass, the role of circulating FGF2 in malignant transformation remains unknown. Moreover, functional assays for biomarker discovery are limited. We sought to determine whether human serum could stimulate the 3D growth of a non-tumorigenic cell line. This type of anchorage-independent 3D growth in soft agar is a surrogate marker for acquired tumorigenicity of cell lines. We found that human serum from cancer-free men and women can stimulate soft-agar growth of non-tumorigenic epithelial JB6 P+ cells. In a pilot study of n = 33 men and women, we examined whether circulating FGF2 levels were associated with malignant transformation in vitro. Serum FGF2 levels were not associated with colony formation in epithelial cells (r = 0.05, p = 0.80); however, a fibroblast growth factor receptor-1 (FGFR1) selective inhibitor significantly blocked serum-stimulated transformation, suggesting that FGF2 activation of FGFR1 may be necessary, but not sufficient, for the transforming effects of human serum. This pilot study indicates that the FGF2/FGFR1 axis plays a role in JB6 P+ malignant transformation and describes an assay to identify critical serum factors that have the potential to promote tumorigenesis.
Logical Observation Identifiers Names and Codes (LOINC) is a common terminology for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Linking LOINC codes to the site-specific tests and measures is therefore one crucial step toward this goal. In this work we report our ongoing efforts in implementing LOINC in our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes, of which 209 are already in use for routine laboratory data. In our experience, mapping local terms to LOINC is a largely manual and time-consuming process, owing to language issues and the expert knowledge of local laboratory procedures it requires.
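The mapping step described above can be sketched as a curated lookup from local terms to LOINC codes. This is a minimal illustration only: the German local terms below are hypothetical placeholders, not the actual MHH mapping, though the LOINC codes shown are well-known laboratory codes.

```python
# Illustrative sketch of a local-term-to-LOINC mapping table.
# Local terms are placeholders; real mappings require expert review.
local_to_loinc = {
    "Natrium (Serum)": "2951-2",   # Sodium [Moles/volume] in Serum or Plasma
    "Kalium (Serum)": "2823-3",    # Potassium [Moles/volume] in Serum or Plasma
    "Kreatinin": "2160-0",         # Creatinine [Mass/volume] in Serum or Plasma
}

def map_local_term(term: str):
    """Return the LOINC code for a local term, or None if unmapped."""
    return local_to_loinc.get(term)
```

In practice, such a table is built and validated manually by laboratory experts, which is exactly the effort the report describes.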
The German Corona Consensus (GECCO) established a uniform dataset in FHIR format for exchanging and sharing interoperable COVID-19 patient-specific data between university health information systems (HIS). To share COVID-19 information with other sites that use openEHR, the data must be converted into FHIR format. In this paper, we introduce our solution, a web tool named “openEHR-to-FHIR” that converts compositions from an openEHR repository and stores them in their respective GECCO FHIR profiles. The tool provides a REST web service for ad hoc conversion of openEHR compositions to FHIR profiles.
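The core of such a conversion can be sketched as a structural mapping from one data model to the other. The following is a deliberately simplified, hypothetical example (the field paths are invented and far simpler than real openEHR compositions or GECCO profiles), not the tool's actual implementation:

```python
# Hypothetical sketch: map one laboratory value from a simplified
# openEHR-style composition into a FHIR Observation resource.
# Field paths are illustrative, not real openEHR archetype paths.
def composition_to_fhir_observation(composition: dict) -> dict:
    value = composition["content"]["value"]
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": composition["content"]["loinc"]}]},
        "valueQuantity": {"value": value["magnitude"],
                          "unit": value["units"]},
    }

comp = {"content": {"loinc": "2160-0",
                    "value": {"magnitude": 1.1, "units": "mg/dL"}}}
obs = composition_to_fhir_observation(comp)
```

A production converter must additionally handle templates, terminology bindings, and profile validation, which is what makes a dedicated tool necessary.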
Purpose: Radiology reports mostly contain free-text, which makes it challenging to obtain structured data. Natural language processing (NLP) techniques transform free-text reports into machine-readable document vectors that are important for creating reliable, scalable methods for data analysis. The aim of this study is to classify unstructured radiograph reports according to fractures of the distal fibula and to find the best text mining method.
Materials & Methods: We established a novel German language report dataset: a designated search engine was used to identify radiographs of the ankle and the reports were manually labeled according to fractures of the distal fibula. This data was used to establish a machine learning pipeline, which implemented the text representation methods bag-of-words (BOW), term frequency-inverse document frequency (TF-IDF), principal component analysis (PCA), non-negative matrix factorization (NMF), latent Dirichlet allocation (LDA), and document embedding (doc2vec). The extracted document vectors were used to train neural networks (NN), support vector machines (SVM), and logistic regression (LR) to recognize distal fibula fractures. The results were compared via cross-tabulations of the accuracy (acc) and area under the curve (AUC).
Results: In total, 3268 radiograph reports were included, of which 1076 described a fracture of the distal fibula. Comparison of the text representation methods showed that BOW achieved the best results (AUC = 0.98; acc = 0.97), followed by TF-IDF (AUC = 0.97; acc = 0.96), NMF (AUC = 0.93; acc = 0.92), PCA (AUC = 0.92; acc = 0.9), LDA (AUC = 0.91; acc = 0.89) and doc2vec (AUC = 0.9; acc = 0.88). When comparing the different classifiers, NN (AUC = 0.91) proved to be superior to SVM (AUC = 0.87) and LR (AUC = 0.85).
Conclusion: An automated classification of unstructured reports of radiographs of the ankle can reliably detect findings of fractures of the distal fibula. A particularly suitable feature extraction method is the BOW model.
Key Points:
- The aim was to classify unstructured radiograph reports according to distal fibula fractures.
- Our automated classification system can reliably detect fractures of the distal fibula.
- A particularly suitable feature extraction method is the BOW model.
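The bag-of-words representation that performed best in the study can be sketched in a few lines. This is a minimal library-free illustration of the technique (a real pipeline would use e.g. scikit-learn's CountVectorizer plus a classifier); the sample reports are invented:

```python
# Minimal bag-of-words sketch: build a shared vocabulary over all
# reports, then represent each report as a vector of token counts.
from collections import Counter

def bag_of_words(reports):
    """Return (vocabulary, count vectors) for a list of text reports."""
    vocab = sorted({tok for r in reports for tok in r.lower().split()})
    vectors = []
    for r in reports:
        counts = Counter(r.lower().split())
        vectors.append([counts.get(tok, 0) for tok in vocab])
    return vocab, vectors

# Invented sample reports for illustration only.
reports = ["Fraktur der distalen Fibula", "keine Fraktur nachweisbar"]
vocab, vectors = bag_of_words(reports)
```

The resulting count vectors are the "machine-readable document vectors" the abstract refers to; they can be fed directly into a classifier such as logistic regression.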
The Wnt signaling pathway has been associated with many essential cell processes. This study aims to examine the effects of Wnt signaling on the proliferation of cultured HEK293T cells. Cells were incubated with Wnt3a, and activation of the Wnt pathway was followed by analysis of β-catenin protein levels and of the expression levels of the target genes MYC and CCND1. The level of β-catenin protein increased up to fourfold. While the mRNA levels of c-Myc and cyclin D1 increased slightly, the protein levels increased up to 1.5-fold. Remarkably, MTT and BrdU assays showed different results when measuring the proliferation rate of Wnt3a-stimulated HEK293T cells. In the BrdU assays an increase in the proliferation rate could be detected, which correlated with the applied Wnt3a concentration. In contrast, this correlation could not be shown in the MTT assays. The MTT results, which are based on mitochondrial activity, were confirmed by analysis of the succinate dehydrogenase complex by immunofluorescence and by western blotting. Taken together, our study shows that Wnt3a activates proliferation of HEK293T cells. These effects can be detected by measuring DNA synthesis rather than by measuring changes in mitochondrial activity.
Harmonisation of German Health Care Data Using the OMOP Common Data Model – A Practice Report
(2023)
Data harmonization is an important step in large-scale data analysis and for generating evidence from real-world data in healthcare. With the OMOP common data model, a relevant instrument for data harmonization is available that is being promoted by different networks and communities. At the Hannover Medical School (MHH) in Germany, an Enterprise Clinical Research Data Warehouse (ECRDW) has been established, and harmonization of that data source is the focus of this work. We present MHH’s first implementation of the OMOP common data model on top of the ECRDW data source and demonstrate the challenges of mapping German healthcare terminologies to a standardized format.
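The terminology-mapping step can be sketched as a lookup from a local vocabulary code to an OMOP standard concept. The codes and concept IDs below are placeholders, not real OMOP vocabulary entries; only the convention that concept_id 0 denotes "no matching concept" follows the OMOP CDM:

```python
# Hypothetical sketch: resolve local German terminology codes
# (e.g. ICD-10-GM) against an OMOP concept lookup table.
# Concept IDs here are placeholders, not real OMOP vocabulary entries.
concept_table = {
    ("ICD10GM", "E11.9"): 10001,  # placeholder standard concept id
    ("ICD10GM", "I10.90"): 10002, # placeholder standard concept id
}

def to_omop_concept(vocabulary: str, code: str) -> int:
    """Return the mapped concept id, or 0 (OMOP's 'no matching concept')."""
    return concept_table.get((vocabulary, code), 0)
```

The practical difficulty the report describes is precisely that many German codes have no ready-made standard mapping, so such tables must be curated site by site.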
The NOA project collects and stores images from open access publications and makes them findable and reusable. During the project, a focus group workshop was held to determine whether the development is addressing researchers’ needs. It took place before the second half of the project so that the results could inform further development, since addressing users’ needs is a central part of the project. The focus was on finding out what content and functionality researchers expect from image repositories.
In a first step, participants were asked to fill out a survey about their image use. Secondly, they tested different use cases on the live system. The first finding is that users have a need to find scholarly images, but it is not a routine task and they often do not know any image repositories. This is another reason for repositories to become more open and reach users by integrating with other content providers. The second finding is that users paid attention to image licenses but struggled to find and interpret them, while also being unsure how to cite images. In general, there is high demand for reusing scholarly images, but the existing infrastructure has room for improvement.
Building a well-founded understanding of the concepts, tasks, and limitations of IT in all areas of society is an essential prerequisite for future developments in business and research. This applies in particular to the healthcare sector and medical research, which are affected by the noticeable advances in digitization. In the transfer project “Zukunftslabor Gesundheit” (ZLG), a teaching framework was developed to support the creation of online continuing-education courses that can teach heterogeneous groups of learners regardless of location and prior knowledge. This study describes the development and components of the framework.
After kidney transplantation, graft rejection must be prevented. Therefore, a multitude of patient parameters is monitored pre- and postoperatively. To support this process, the Screen Reject research project is developing a data warehouse optimized for kidney rejection diagnostics. In the course of this project it was discovered that important information is only available in the form of free text instead of structured data and can therefore not be processed by standard ETL tools, which is necessary to establish a digital expert system for rejection diagnostics. For this reason, data integration has been improved by combining methods from natural language processing with methods from image processing. Based on state-of-the-art data warehousing technologies (Microsoft SSIS), a generic data integration tool has been developed. The tool was evaluated by extracting the Banff classification from 218 pathology reports and extracting HLA mismatches from about 1700 PDF files, both written in German.
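The free-text extraction step can be illustrated with a simple rule-based sketch. The pattern and sample sentence below are invented for illustration; the project's actual extraction rules are necessarily more elaborate:

```python
# Simplified sketch: extract a Banff category number from German
# free-text pathology reports via a regular expression.
# Pattern and example text are illustrative only.
import re

BANFF_PATTERN = re.compile(
    r"Banff[\s-]*(?:Kategorie|category)?\s*([1-6])", re.IGNORECASE
)

def extract_banff(report: str):
    """Return the Banff category as a string, or None if not found."""
    match = BANFF_PATTERN.search(report)
    return match.group(1) if match else None
```

Such pattern-based extraction is one standard NLP building block for turning narrative reports into structured warehouse fields.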
In this poster we present the ongoing development of an integrated free and open source toolchain for semantic annotation of digitised cultural heritage. The toolchain development involves the specification of a common data model that aims to increase interoperability across diverse datasets and to enable new collaborative research approaches.