Library of Congress Subject Headings (LCSH) are popular for indexing library records. We studied the possibility of assigning LCSH automatically by training classifiers for terms used frequently in a large collection of abstracts of the literature at hand and by extracting headings from those abstracts. The resulting classifiers reach an acceptable level of precision but fail in terms of recall, partly because we could only train classifiers for a small number of LCSH. Extraction, i.e., matching headings literally in the text, produces better recall but extremely low precision. We found that combining both methods leads to a significant improvement in recall and a slight improvement in F1 score, with only a small decrease in precision.
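The combination described in the abstract can be illustrated with a minimal Python sketch. It assumes one trained binary classifier per heading (scikit-learn style) plus a plain substring match for extraction; all names and the union strategy shown here are illustrative assumptions, not the authors' actual implementation.

```python
def assign_headings(abstract, classifiers, vectorizer, all_headings):
    """Return the union of classifier-assigned and extracted LCSH."""
    # Classification: precise, but limited to the headings we could train.
    features = vectorizer.transform([abstract])
    classified = {
        heading for heading, clf in classifiers.items()
        if clf.predict(features)[0] == 1
    }
    # Extraction: match every known heading literally in the text.
    # High recall, low precision -- many spurious matches.
    text = abstract.lower()
    extracted = {h for h in all_headings if h.lower() in text}
    # Taking the union boosts recall at a small cost in precision.
    return classified | extracted
```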
Legal documents often have a complex layout with many different headings, headers and footers, side notes, etc. For further processing, it is important to extract these individual components correctly from a legally binding document, for example a signed PDF. A common approach is to classify each (text) region of a page using its geometric and textual features. This approach works well when the training and test data have a similar structure and when the documents of the collection to be analyzed have a rather uniform layout. We show that using global page properties can improve the accuracy of text element classification: we first classify each page into one of three layout types, and then train a separate text element classifier for each page type. This improves accuracy on a manually annotated collection of 70 legal documents comprising 20,938 text elements: splitting by page type raises accuracy from 0.95 to 0.98 for single-column pages with left marginalia and from 0.95 to 0.96 for double-column pages. We developed our own feature-based method for page layout detection, which we benchmark against a standard implementation of a CNN image classifier. The approach presented here is based on a corpus of freely available German contracts and general terms and conditions.
Both the corpus and all manual annotations are made freely available. The method is language agnostic.
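A minimal sketch of the two-stage idea, assuming scikit-learn and simple geometric/textual feature vectors per text element. The three page types are taken from the abstract; the helper structure and model choice are illustrative assumptions, not the authors' method.

```python
from sklearn.ensemble import RandomForestClassifier

PAGE_TYPES = ["single_column_marginalia", "single_column", "double_column"]

def train_two_stage(pages):
    """pages: list of (page_features, page_type, [(elem_features, elem_label), ...])."""
    # Stage 1: one classifier predicting the global page layout type.
    page_clf = RandomForestClassifier().fit(
        [feats for feats, _, _ in pages],
        [ptype for _, ptype, _ in pages],
    )
    # Stage 2: a separate element classifier per page type, so each model
    # only ever sees elements from pages with a uniform layout.
    elem_clfs = {}
    for ptype in PAGE_TYPES:
        X = [f for _, t, elems in pages if t == ptype for f, _ in elems]
        y = [l for _, t, elems in pages if t == ptype for _, l in elems]
        if X:
            elem_clfs[ptype] = RandomForestClassifier().fit(X, y)
    return page_clf, elem_clfs
```

At prediction time, the page classifier routes each page to the matching per-type element classifier, which is where the accuracy gain reported above would come from.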
Using openEHR Archetypes for Automated Extraction of Numerical Information from Clinical Narratives
(2019)
Up to 80% of medical information is documented as unstructured data, such as clinical reports written in natural language. Such data is called unstructured because the information it contains cannot be retrieved automatically as straightforwardly as from structured data. However, we assume that this flexible kind of documentation will remain a substantial part of a patient's medical record, so clinical information systems have to deal appropriately with this type of information. On the other hand, there are efforts to achieve semantic interoperability between clinical application systems through information modelling concepts like HL7 FHIR or openEHR. Considering this, we propose an approach to transform unstructured documented information into openEHR archetypes. Furthermore, we aim to support the field of clinical text mining by recognizing and publishing the connections between openEHR archetypes and heterogeneous phrasings. We evaluated our method by extracting the values for three openEHR archetypes from unstructured documents in English and German.
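To make the idea concrete, here is a hedged sketch of regex-based extraction of a numerical quantity mapped onto an archetype-like structure. The phrasing patterns, field names, and the blood-pressure example are illustrative assumptions; the paper's actual phrasings and archetype bindings are richer than this.

```python
import re

# One hypothetical phrasing pattern per language for blood pressure.
BP_PATTERNS = [
    re.compile(r"blood pressure\D{0,20}?(\d{2,3})\s*/\s*(\d{2,3})", re.I),
    re.compile(r"Blutdruck\D{0,20}?(\d{2,3})\s*/\s*(\d{2,3})", re.I),
]

def extract_blood_pressure(narrative):
    """Map a free-text mention to fields of an openEHR-style archetype."""
    for pattern in BP_PATTERNS:
        match = pattern.search(narrative)
        if match:
            return {
                "archetype": "openEHR-EHR-OBSERVATION.blood_pressure.v2",
                "systolic": {"magnitude": int(match.group(1)), "units": "mm[Hg]"},
                "diastolic": {"magnitude": int(match.group(2)), "units": "mm[Hg]"},
            }
    return None

# Works on English and German narratives alike:
print(extract_blood_pressure("Der Blutdruck lag bei 120/80 mmHg."))
```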