Refine
Document Type
- Conference Proceeding (50)
Has Fulltext
- yes (50)
Is part of the Bibliography
- no (50)
Keywords
- Text Mining (5)
- Concreteness (4)
- Semantik (4)
- Ausbildung (3)
- Bibliothek (3)
- German (3)
- Information Retrieval (3)
- Informationsmanagement (3)
- Klassifikation (3)
- Bibliothekswesen (2)
- Contract Analysis (2)
- Deutsch (2)
- Digitalisierung (2)
- Disambiguation (2)
- Distributional Semantics (2)
- E-Learning (2)
- Grader (2)
- Graja (2)
- Konkretum <Linguistik> (2)
- Kulturerbe (2)
- Machine Learning (2)
- Modellversuch BID (2)
- Open Access (2)
- Programmieraufgabe (2)
- Rechtswissenschaften (2)
- Sachtext (2)
- Sprachnorm (2)
- Vergleich (2)
- Vertrag (2)
- Wikibase (2)
- Wikidata (2)
- Ähnlichkeit (2)
- 3D data (1)
- Abbreviations (1)
- Abkürzung (1)
- Acronyms (1)
- Akronym (1)
- Algorithmus (1)
- Ambiguität (1)
- Annotation (1)
- Autobewerter (1)
- Automatische Klassifikation (1)
- Automatische Sprachanalyse (1)
- Automatisierte Programmbewertung (1)
- Azyklischer gerichteter Graph (1)
- Benutzererlebnis (1)
- Bewertungsaspekt (1)
- Bewertungsmaßstab (1)
- Bibliothekar (1)
- Bilderkennung (1)
- Bildersprache (1)
- Bildersuchmaschine (1)
- Bildmaterial (1)
- Bildverarbeitung (1)
- Book of Abstract (1)
- Citizens (1)
- Classification (1)
- Computerlinguistik (1)
- Constructive Alignment (1)
- Corpus construction (1)
- Data Science (1)
- Data-Warehouse-Konzept (1)
- Datenaufbereitung (1)
- Decision Support Systems, Clinical (1)
- Deep Convolutional Networks (1)
- Dewey-Dezimalklassifikation (1)
- Didactic (1)
- Digital Wellbeing (1)
- Digitalization (1)
- Digitization (1)
- Disambiguierung (1)
- Dokumentanalyse (1)
- E - Assessment (1)
- Fassung (1)
- Feature and Text Extraction (1)
- Figurative Language (1)
- Focus Group (1)
- Formelhafte Textabschnitte (1)
- Forschungsdaten (1)
- Gesundheitsfürsorge (1)
- Graph-based Text Representations (1)
- Grappa (1)
- Gruppeninterview (1)
- Health IT (1)
- Hochschule (1)
- Home Care (1)
- Hybrid Conference (1)
- Image Recognition (1)
- Image Retrieval (1)
- Imagery (1)
- Images (1)
- Information Dissemination (1)
- Information Extraction (1)
- Information Management (1)
- Information Science (1)
- Java <Programmiersprache> (1)
- Keyword Extraction (1)
- Knowledge Maps (1)
- Kompakkt (1)
- Kompetenz (1)
- Korpus <Linguistik> (1)
- Krankenhaus (1)
- LIG (1)
- Latent Semantic Analysis (1)
- Layout Detection (1)
- Legal Documents (1)
- Legal Writings (1)
- Legende <Bild> (1)
- Lemmatization (1)
- Lernmotivation (1)
- Lexical Semantics (1)
- Linear Indexed Grammars (1)
- Linked Data (1)
- Linked Open Data (1)
- Liver Transplantation (1)
- Markov Models (1)
- Maschinelles Lernen (1)
- Media Didactic Concept (1)
- Mediendidaktik (1)
- Medizinische Bibliothek (1)
- Middleware (1)
- Motivation (1)
- NFDI (1)
- NFDI4Culture – Konsortium für Forschungsdaten materieller und immaterieller Kulturgüter (1)
- NLP (1)
- Nierentransplantation (1)
- Notation <Klassifikation> (1)
- Open Repositories (1)
- Open Science (1)
- Open Source (1)
- OpenRefine (1)
- PDF <Dateiformat> (1)
- PDF Document Analysis (1)
- POS Tagging (1)
- Paraphrase (1)
- Paraphrase Similarity (1)
- Patient empowerment (1)
- Phraseologie (1)
- Physics (1)
- Physik (1)
- Plugin (1)
- ProFormA-Aufgabenformat (1)
- Qualifikation (1)
- Rechtsdokumente (1)
- Reduction of Complexity (1)
- Regional Development (1)
- Regional Innovation Systems (1)
- Regional Policy (1)
- Repository <Informatik> (1)
- Schlagwortkatalog (1)
- Schlagwortnormdatei (1)
- Scientific image search (1)
- Selbstgesteuertes Lernen (1)
- Self-directed Learning (1)
- Semantics (1)
- Semantisches Datenmodell (1)
- Similarity Measures (1)
- Spezialbibliothekar (1)
- Standardised formulation (1)
- Standardisierung (1)
- Statistical Methods (1)
- Statistische Methoden (1)
- Structural Analysis (1)
- Systems Librarian, Data Librarian, Job advertisement analysis, Job profiles, New competencies (1)
- Territorial Intelligence (1)
- Text Similarity (1)
- Text annotation (1)
- Textbooks (1)
- Thesaurus (1)
- Title Matching (1)
- Transplantatabstoßung (1)
- Verbal Idioms (1)
- Versicherungsvertrag (1)
- Vertragsklausel (1)
- Wikimedia Commons (1)
- Wikipedia categories (1)
- Wissenschaftliche Bibliothek (1)
- Word Norms (1)
- Wort (1)
- XML (1)
- Zweiwortsatz (1)
- abstractness (1)
- concreteness (1)
- context vectors (1)
- cultural heritage (1)
- data warehouse (1)
- distributional semantics (1)
- e-Assessment (1)
- eLearning (1)
- education (1)
- fall prediction (1)
- fall prevention (1)
- fall risk (1)
- graft rejection (1)
- high-quality Learning Formats (1)
- image processing (1)
- information extraction (1)
- kidney transplant (1)
- library and information science (1)
- linked data (1)
- research data management (1)
- research information (1)
- sensor-based assessment (1)
- supervised machine learning (1)
- thesauri (1)
- wearable sensors (1)
- web crawling (1)
- word embedding space (1)
- Öffentliche Bibliothek (1)
- Überwachtes Lernen (1)
Institute
- Fakultät III - Medien, Information und Design (50)
The NOA project collects and stores images from open access publications and makes them findable and reusable. During the project, a focus group workshop was held to determine whether the development addresses researchers' needs. It took place before the second half of the project so that the results could inform further development, since meeting users' needs is a central part of the project. The focus was on finding out what content and functionality researchers expect from image repositories.
In a first step, participants were asked to fill out a survey about their image use. Secondly, they tested different use cases on the live system. The first finding is that users have a need to find scholarly images, but it is not a routine task and they often do not know any image repositories. This is another reason for repositories to become more open and to reach users by integrating with other content providers. The second finding is that users paid attention to image licenses but struggled to find and interpret them, and were also unsure how to cite images. In general, there is a high demand for reusing scholarly images, but the existing infrastructure has room for improvement.
Building a well-founded understanding of the concepts, tasks and limitations of IT in all areas of society is an essential prerequisite for future developments in business and research. This applies in particular to the healthcare sector and medical research, which are noticeably affected by advances in digitization. In the transfer project "Zukunftslabor Gesundheit" (ZLG), a teaching framework was developed to support the creation of online continuing-education courses that address heterogeneous groups of learners independently of location and prior knowledge. The study at hand describes the development and the components of this framework.
After kidney transplantation, graft rejection must be prevented, so a multitude of patient parameters is monitored pre- and postoperatively. To support this process, the Screen Reject research project is developing a data warehouse optimized for kidney rejection diagnostics. In the course of the project it became apparent that important information is available only as free text rather than as structured data and therefore cannot be processed by standard ETL tools, although structured processing is necessary to establish a digital expert system for rejection diagnostics. For this reason, data integration was improved by combining methods from natural language processing with methods from image processing. Based on state-of-the-art data warehousing technologies (Microsoft SSIS), a generic data integration tool has been developed. The tool was evaluated by extracting the Banff classification from 218 pathology reports and HLA mismatches from about 1,700 PDF files, both written in German.
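As a rough illustration of the free-text extraction step: the project's tool is built on Microsoft SSIS, but the NLP component can be approximated in a few lines of Python that pull Banff lesion scores out of a report. The score pattern and the sample snippet below are assumptions, not taken from the actual reports.

    import re

    # Hypothetical Banff lesion-score pattern (e.g. "i2", "t1", "v0");
    # the vocabulary and notation in the real reports may differ.
    BANFF_SCORE = re.compile(r"\b(g|i|t|v|ci|ct|cv|cg|ah|ptc|mm)\s*([0-3])\b")

    def extract_banff_scores(report_text: str) -> dict:
        """Collect Banff lesion scores found in a free-text pathology report."""
        return {lesion: int(score) for lesion, score in BANFF_SCORE.findall(report_text)}

    # Invented snippet in the style of a German pathology report.
    sample = "Nierenbiopsie: Borderline-Veraenderungen, i1 t1 v0 g0, ci1 ct1."
    print(extract_banff_scores(sample))  # {'i': 1, 't': 1, 'v': 0, 'g': 0, 'ci': 1, 'ct': 1}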
In this poster we present the ongoing development of an integrated free and open source toolchain for semantic annotation of digitised cultural heritage. The toolchain development involves the specification of a common data model that aims to increase interoperability across diverse datasets and to enable new collaborative research approaches.
To learn a subject, acquiring the associated technical language is important. Despite the widely accepted importance of learning the technical language, hardly any published studies describe the characteristics of the technical languages students are supposed to learn. This might largely be due to the absence of specialized text corpora for studying such languages at the lexical, syntactic and textual levels. In the present paper we describe a corpus of German physics texts that can be used to study the language of physics. A large and a small variant were compiled. The small version of the corpus consists of 5.3 million words and is available on request.
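Since such a corpus is meant for studying a language at the lexical level, basic corpus statistics are the natural starting point. The sketch below assumes the corpus is available as plain-text files; the directory name and tokenization are placeholders, not the authors' actual setup.

    import re
    from collections import Counter
    from pathlib import Path

    TOKEN = re.compile(r"\w+", re.UNICODE)

    def corpus_stats(corpus_dir: str) -> dict:
        """Token count, vocabulary size and type-token ratio of a plain-text corpus."""
        counts = Counter()
        for path in Path(corpus_dir).glob("*.txt"):
            counts.update(TOKEN.findall(path.read_text(encoding="utf-8").lower()))
        tokens = sum(counts.values())
        return {"tokens": tokens, "types": len(counts),
                "type_token_ratio": len(counts) / tokens if tokens else 0.0}

    # e.g. corpus_stats("physics_corpus/")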
Self-directed learning is an essential basis for lifelong learning and requires constantly changing, target-group-specific and personalized conditions in order to motivate people to engage with modern learning content, not to overburden them, and yet to convey complex contexts adequately. Current challenges in dealing with digital resources, such as information overload, reduction of complexity and focus, motivation to learn, self-control and psychological wellbeing, are taken up in the conception of learning settings within our QpLuS IM project for the study programs Information Management and Information Management extra-occupational (IM) at the University of Applied Sciences and Arts Hannover. As a practical example of a high-quality, focused media-based self-learning format, we present an interactive video on how search engines work, produced methodically in line with our agile media-didactic process and stage model of complexity levels.
Wikidata and Wikibase as complementary research data management services for cultural heritage data
(2022)
The NFDI (German National Research Data Infrastructure) consortia are associations of various institutions within a specific research field that work together to develop common data infrastructures, guidelines, best practices and tools conforming to the FAIR data principles. Within the NFDI, a common question is: what is the potential of Wikidata as an application for science and research? In this paper, we address this question by tracing current research use cases and applications of Wikidata, its relation to standalone Wikibase instances, and how the two can function as complementary services to meet a range of research needs. The paper builds on lessons learned through the development of open data projects and software services within the Open Science Lab at TIB, Hannover, in the context of NFDI4Culture, the consortium whose participants span the broad spectrum of the digital libraries, archives and museums field, and the digital humanities.
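A typical research use case of the kind traced here is querying Wikidata's public SPARQL endpoint. The minimal sketch below (the query and item class are chosen arbitrarily for illustration) retrieves five museum items:

    import requests

    ENDPOINT = "https://query.wikidata.org/sparql"

    # Small illustrative query: five items that are instances of "museum" (Q33506).
    QUERY = """
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q33506 .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 5
    """

    resp = requests.get(ENDPOINT,
                        params={"query": QUERY, "format": "json"},
                        headers={"User-Agent": "wikidata-demo/0.1 (example)"})
    resp.raise_for_status()
    for row in resp.json()["results"]["bindings"]:
        print(row["item"]["value"], row["itemLabel"]["value"])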
A new FOSS (free and open source software) toolchain and associated workflow is being developed in the context of NFDI4Culture, a German consortium of research and cultural heritage institutions working towards a shared infrastructure for research data that meets the needs of 21st-century data creators, maintainers and end users across the broad spectrum of the digital libraries and archives field, and the digital humanities. This short paper and demo present how the integrated toolchain connects 1) OpenRefine, for data reconciliation and batch upload; 2) Wikibase, for linked open data (LOD) storage; and 3) Kompakkt, for rendering and annotating 3D models. The presentation is aimed at librarians, digital curators and data managers interested in learning how to manage research datasets containing 3D media and how to make them available within an open data environment with 3D rendering and collaborative annotation features.
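The reconciliation step that OpenRefine performs can be pictured outside OpenRefine as well. As a minimal stand-in (the local object names are invented), the MediaWiki wbsearchentities action returns candidate Wikidata entities for a local name:

    import requests

    API = "https://www.wikidata.org/w/api.php"

    def reconcile(name: str, language: str = "en") -> list:
        """Return candidate Wikidata entities (id, description) for a local name."""
        resp = requests.get(API, params={
            "action": "wbsearchentities",
            "search": name,
            "language": language,
            "format": "json",
        }, headers={"User-Agent": "reconcile-demo/0.1 (example)"})
        resp.raise_for_status()
        return [(hit["id"], hit.get("description", "")) for hit in resp.json()["search"]]

    # Invented local dataset entries to be matched against Wikidata items.
    for local_name in ["Pergamon Altar", "Nefertiti Bust"]:
        print(local_name, "->", reconcile(local_name)[:3])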
Image captions in scientific papers are usually complementary to the images. Consequently, the captions contain many terms that do not refer to concepts visible in the image. We conjecture that it is possible to distinguish between these two types of terms in an image caption by analysing the text alone. To examine this, we evaluated different features. The dataset we used to compute tf.idf values, word embeddings and concreteness values contains over 700,000 scientific papers with over 4.6 million images. The evaluation was done on a manually annotated subset of 329 images. Additionally, we trained a support vector machine to predict whether a term is likely visible or not. We show that the concreteness of terms is a very important feature for identifying terms in captions and surrounding context that refer to concepts visible in images.
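The classification setup can be pictured with a small scikit-learn sketch. The feature values and labels below are invented toy data standing in for the paper's tf.idf, embedding and concreteness features:

    import numpy as np
    from sklearn.svm import SVC

    # Toy data: one row per caption term, features [concreteness, tf.idf];
    # label 1 = term refers to a concept visible in the image.
    X = np.array([[4.8, 0.12], [4.5, 0.30], [1.9, 0.45],
                  [2.1, 0.08], [4.2, 0.22], [1.5, 0.50]])
    y = np.array([1, 1, 0, 0, 1, 0])

    clf = SVC(kernel="rbf").fit(X, y)
    # Predict visibility for two unseen terms (toy feature values).
    print(clf.predict([[4.6, 0.20], [1.7, 0.40]]))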
Generalized legal documents, in which the positions of a contract's individual specifics within the text are known, can be used, first, to support the approval process for new contracts in an automated way and, second, to serve as a contract generator that provides pre-selected new legal documents. In this contribution, known legal texts are used to show how formulaic text passages can be identified and how frequent individual specifics can be classified so that they can be used as template sections. Areas of application are presented and existing potential for legal tech applications is pointed out.
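As a minimal illustration of separating formulaic wording from individual specifics (the contract snippets are invented; the paper's actual method works on known legal texts at scale), Python's difflib finds the word sequences two contract versions share:

    from difflib import SequenceMatcher

    # Two invented contract snippets: shared wording is the formulaic part,
    # the differing spans are the individual specifics (here, the rent).
    a = "The tenant pays a monthly rent of 800 euros. Notice period: three months."
    b = "The tenant pays a monthly rent of 1200 euros. Notice period: three months."

    matcher = SequenceMatcher(None, a.split(), b.split())
    for block in matcher.get_matching_blocks():
        if block.size > 2:  # keep only longer shared word sequences
            print(" ".join(a.split()[block.a:block.a + block.size]))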