On November 30, 2022, OpenAI released the large language model ChatGPT, an extension of GPT-3. The AI chatbot responds to users' requests in real time. The quality of ChatGPT's natural-sounding answers marks a major shift in how we will use AI-generated information in our day-to-day lives. For a software engineering student, the use cases for ChatGPT are manifold: assessment preparation, translation, and generation of source code to a given specification, to name a few. It can even handle more complex aspects of scientific writing, such as summarizing literature and paraphrasing text. This position paper therefore addresses the need to discuss potential approaches for integrating ChatGPT into higher education. We focus on articles that address the effects of ChatGPT on higher education in the areas of software engineering and scientific writing. As ChatGPT had only recently been released, there were no peer-reviewed articles on the subject. Thus, we performed a structured grey literature review using Google Scholar to identify preprints of primary studies. In total, five out of 55 preprints are used for our analysis. Furthermore, we held informal discussions and talks with other lecturers and researchers and took into account the authors' own test results from using ChatGPT. We present five challenges and three opportunities for higher education that emerge from the release of ChatGPT. The main contribution of this paper is a proposal for how to integrate ChatGPT into higher education in four main areas.
Calls to better protect non-personal data as well are increasing. This also applies to agriculture. Farmers confidently demand "my data belongs to me" and want to be adequately compensated for providing their farm data. There is, however, good reason to believe that most of the data collected has hardly any economic value. This article systematically examines which types of data exist and what market value they presumably have. Since data are digital goods, the same peculiarities apply to them as to other digital content, such as easy copying and modification. The analysis concludes that most data in agriculture is probably of only minor value, which justifies neither commercialization nor costly legal protection. Only through data aggregation and skillful analysis of this raw data is useful information created, effectively in a refinement stage. Presumably, however, it would be best to keep as much data as possible publicly accessible, so that value is created by innovative business models built on this public data.
This article presents strategic foresight as a method of futures research. The scenarios developed can help decision-makers be better prepared for future developments. The GIL could serve as a platform for identifying trends in agriculture and agricultural informatics.
The research project "Herbar Digital" was started in 2007 with the aim of digitizing 3.5 million dried plants on paper sheets belonging to the Botanic Museum Berlin in Germany. Frequently the collector of a plant is unknown, so a procedure had to be developed to determine the writer of the handwriting on the sheet. In the present work, the static character was transformed into a dynamic form. This was done with the model of an inert ball that was rolled along the written character. For this off-line writer recognition, different mathematical procedures were used, such as the reconstruction of the writing line of individual characters by Legendre polynomials. When only one character was used, a recognition rate of about 40% was obtained. By combining multiple characters, the recognition rate rose considerably and reached 98.7% with 13 characters and 93 writers (chosen randomly from the international IAM database [3]). A global statistical approach using the whole handwritten text resulted in a similar recognition rate. By combining local and global methods, a recognition rate of 99.5% was achieved.
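The abstract does not spell out the implementation, but the core idea of approximating a character's reconstructed writing line by Legendre polynomials can be sketched as follows (a toy illustration with numpy; the trajectory, sampling, and polynomial degree are assumptions, not the authors' actual parameters):

```python
import numpy as np

def legendre_features(coords, degree=8):
    """Fit a Legendre polynomial series to a 1-D coordinate sequence
    (e.g. the x-trajectory produced by rolling the inert ball along a
    character) and return the coefficients as a fixed-length feature
    vector for writer recognition."""
    t = np.linspace(-1.0, 1.0, len(coords))  # Legendre domain [-1, 1]
    return np.polynomial.legendre.legfit(t, coords, degree)

# Toy trajectory standing in for one character stroke
t = np.linspace(-1.0, 1.0, 200)
stroke_x = np.sin(np.pi * t) + 0.1 * t
feats = legendre_features(stroke_x, degree=8)    # 9 coefficients
recon = np.polynomial.legendre.legval(t, feats)  # reconstructed writing line
max_err = float(np.max(np.abs(recon - stroke_x)))
```

Each character would then contribute one such coefficient vector; concatenating the vectors of several characters is one plausible way to obtain the combined features behind the higher recognition rates reported above.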
The methods developed in the research project "Herbar Digital" are intended to help plant taxonomists master the large amount of material of about 3.5 million dried plants on paper sheets belonging to the Botanic Museum Berlin in Germany. Frequently the collector of a plant is unknown, so a procedure had to be developed to determine the writer of the handwriting on the sheet. In the present work, the static character is transformed into a dynamic form. This is done with the model of an inert ball that is rolled through the written character. For this off-line writer recognition, different mathematical procedures are used, such as the reconstruction of the writing line of individual characters by Legendre polynomials. When only one character is used, a recognition rate of about 40% is obtained. By combining multiple characters, the recognition rate rises considerably and reaches 98.7% with 13 characters and 93 writers (chosen randomly from the international IAM database [3]). Another approach tries to identify the writer by handwritten words: the word is cut out, transformed into a 6-dimensional time series, and compared, e.g., by means of DTW methods. A global statistical approach using whole handwritten sentences results in a similar recognition rate of more than 98%. By combining the methods, a recognition rate of 99.5% is achieved.
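The word-based approach compares variable-length feature sequences, for which dynamic time warping (DTW) is a natural fit. A minimal sketch of such a comparison (the 6-dimensional sequences here are random placeholders, not the actual word features):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two multivariate time
    series a (n x d) and b (m x d), with Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

rng = np.random.default_rng(0)
word = rng.normal(size=(40, 6))         # a 6-dimensional word time series
stretched = np.repeat(word, 2, axis=0)  # the same word, written half as fast
other = rng.normal(size=(40, 6))        # an unrelated word
```

Because the warping path absorbs tempo differences, `dtw_distance(word, stretched)` is zero while the distance to an unrelated sequence stays large; in writer recognition, a query word would be matched this way against reference words of known writers.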
Wikidata and Wikibase as complementary research data management services for cultural heritage data
(2022)
The NFDI (German National Research Data Infrastructure) consortia are associations of various institutions within a specific research field, which work together to develop common data infrastructures, guidelines, best practices, and tools that conform to the FAIR data principles. Within the NFDI, a common question is: what is the potential of Wikidata to be used as an application for science and research? In this paper, we address this question by tracing current research use cases and applications for Wikidata, its relation to standalone Wikibase instances, and how the two can function as complementary services to meet a range of research needs. This paper builds on lessons learned through the development of open data projects and software services within the Open Science Lab at TIB, Hannover, in the context of NFDI4Culture, the consortium whose participants span the broad spectrum of the digital libraries, archives, and museums field, as well as the digital humanities.
By applying the ISO 50001 standard and introducing the accompanying energy management system (EnMS), a successive increase in energy efficiency can be achieved. To implement energy monitoring or standby management functionality, energy data must be provided at the field level and, on edge devices or PLCs, adapted in data format where necessary, scaled, and mapped onto an established communication interface (e.g. based on OPC UA or MQTT) by an energy management program. Creating these energy management programs involves considerable engineering effort, because the field devices of the heterogeneous field level do not provide their energy data with standardized semantics. To counter this engineering effort, a concept for a universal energy data information model (UEDIM) is presented. The concept provides for supplying energy data to the EnMS in a semantically standardized form. To further develop the UEDIM, this paper examines in more detail in what form energy data can be provided at the field level and what requirements the UEDIM must meet.
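The concrete UEDIM schema is not given in the abstract; purely as an illustration of the mapping step it describes, here is a hypothetical sketch that normalizes vendor-specific field-device readings into a uniform record suitable as, e.g., an MQTT payload (all key names, vendors, and scale factors are invented):

```python
import json

# Hypothetical raw readings from two heterogeneous field devices:
# vendor A reports active power in kW, vendor B as a raw count with
# a device-specific scale factor.
RAW = [
    {"device": "drive-01", "vendor": "A", "P_kW": 1.25},
    {"device": "pump-07",  "vendor": "B", "power_raw": 3125, "scale": 0.4},
]

def to_uedim(reading):
    """Map a vendor-specific reading onto a (hypothetical) uniform
    energy-data record: SI unit watts, fixed key names."""
    if reading["vendor"] == "A":
        watts = reading["P_kW"] * 1000.0
    elif reading["vendor"] == "B":
        watts = reading["power_raw"] * reading["scale"]
    else:
        raise ValueError("no mapping for this vendor")
    return {"deviceId": reading["device"],
            "quantity": "ActivePower",
            "unit": "W",
            "value": watts}

# Serialized records, e.g. as MQTT message bodies for the EnMS
payloads = [json.dumps(to_uedim(r)) for r in RAW]
```

The point of such a model is that the EnMS side consumes one fixed semantics, while the per-vendor adaptation logic is written once per device type instead of once per energy management program.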
The NOA project collects and stores images from open access publications and makes them findable and reusable. During the project, a focus group workshop was held to determine whether the development addresses researchers' needs. It took place before the second half of the project so that the results could inform further development, since addressing users' needs is a central part of the project. The focus was to find out what content and functionality users expect from image repositories.

In a first step, participants were asked to fill out a survey about their use of images. Secondly, they tested different use cases on the live system. The first finding is that users have a need to find scholarly images, but it is not a routine task, and they often do not know any image repositories. This is another reason for repositories to become more open and to reach users by integrating with other content providers. The second finding is that users paid attention to image licenses but struggled to find and interpret them, while also being unsure how to cite images. In general, there is high demand for reusing scholarly images, but the existing infrastructure has room for improvement.
For anomaly-based intrusion detection in computer networks, data cubes can be used to build a model of the normal behavior of each cell. During inference, an anomaly score is calculated from the deviation of cell metrics from the corresponding normality model. A visualization approach is presented that combines different types of diagrams and charts with linked user interaction for filtering the data.
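The abstract leaves the scoring function open; one common choice, shown here as an assumed sketch rather than the authors' method, is a per-metric z-score against the cell's normality model, aggregated into a single anomaly score:

```python
import numpy as np

def anomaly_score(observed, mean, std, eps=1e-9):
    """Score a cell's metric vector by its deviation from the cell's
    normality model (per-metric mean/std), aggregated as the mean
    absolute z-score."""
    z = np.abs((observed - mean) / (std + eps))
    return float(np.mean(z))

# Normality model learned from historical traffic of one cube cell
# (hypothetical metrics: flows/min, TCP share, mean bytes per packet)
mean = np.array([120.0, 0.8, 35.0])
std  = np.array([15.0, 0.05, 5.0])

normal_obs = np.array([118.0, 0.82, 36.0])  # close to the model
anomalous  = np.array([480.0, 0.30, 90.0])  # strong deviation
```

Cells whose score exceeds a threshold would then be highlighted in the linked diagrams and charts for further filtering and inspection.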
Visual effects and elements in video games and interactive virtual environments can be applied to transfer (or delegate) non-visual perceptions (e.g. proprioception, presence, pain) to players and users, thus increasing perceptual diversity via the visual modality. Such elements or effects are referred to as visual delegates (VDs). Current findings on the experiences that VDs can elicit relate to specific VDs, not to VDs in general. Deductive and comprehensive VD evaluation frameworks are lacking. We analyzed VDs in video games to generalize VDs in terms of their visual properties. We conducted a systematic paper analysis to explore player and user experiences observed in association with specific VDs in user studies. We conducted semi-structured interviews with expert players to determine their preferences and the impact of VD properties. The resulting VD framework (VD-frame) contributes to a more strategic approach to identifying the impact of VDs on player and user experiences.