Network convergence is an increasing trend in the automation domain. More and more plant owners strive for a unification of networks in their plants. This yields a seamless network structure, simplified supervision, and reduced training effort for the personnel, as only one unified network technology needs to be handled. Ethernet-APL is one piece of the puzzle for such a converged network, supporting various real-time protocols such as PROFINET, EtherNet/IP and HART-IP as well as the middleware protocol OPC UA. This paper gives an overview of the impact of Ethernet-APL field devices on OT security and proposes how to ensure OT security for them.
This document concerns IT security in production facilities. It is intended for small and medium-sized enterprises that are looking for a simple procedural model for ensuring IT security in production areas.
In order to raise readers’ awareness of IT security in production facilities, security incidents are presented in section 2. It is clear that cyber attacks on production facilities in this day and age are not random, but are instead based on a targeted process.
An overview of the most important standards and recommendations on the topic of “IT security in production” then follows in section 3.
Section 4 develops a concept for setting up an IT security system for small and medium-sized enterprises (SMEs) on the basis of a ten-point plan. The focus is not only on technical measures, but also in particular on the most frequently neglected organizational measures.
Section 5 then describes the outlook for future requirements and solutions in the context of Industry 4.0.
The topic of electromagnetic compatibility (EMC) remains an important aspect during the planning, installation and operation of automation systems. Communication networks, such as PROFIBUS and PROFINET, are known to be robust and reliable transmission systems. Nevertheless, a number of fundamental principles need to be observed to ensure fault-free operation over a long plant lifetime. This paper first describes a number of principles of EMC. On the basis of these principles, six recommendations for action are then developed which are to be observed during the planning of an automation system for use in the manufacturing industry. Finally, an overview is provided of future work for systems in the process industry.
The Ethernet-APL Engineering Process - A brief look at the Ethernet-APL engineering guideline
(2021)
The vision of an “Industrial Ethernet down to the sensors and actuators” has become reality. Ethernet-APL was introduced at the Achema fair in June 2021. This technology is based on a 2-wire Ethernet that conveys information as well as energy to the sensors and actuators of the automation system. Ethernet-APL is based on the 2-wire Ethernet standard IEEE 802.3cg running at 10 Mbit/s. An additional specification, the Ethernet-APL Port Profile Specification, defines additional parameters for the use of the technology in the process industry, especially in areas with potentially explosive atmospheres. As a next step, potential users need to become familiar with the engineering process of Ethernet-APL networks. For this purpose, the Ethernet-APL project provides the Ethernet-APL Engineering Guideline, which covers the main areas of planning, installation and acceptance testing.
The impact of vertical and horizontal integration in the context of Industry 4.0 requires new concepts for the security of industrial Ethernet protocols. The defense-in-depth concept, based on the combination of several measures, especially separation and segmentation, needs to be complemented by integrated protection measures for industrial real-time protocols. To cover this challenge, existing protocols need to be equipped with additional functionality to ensure the integrity and availability of network communication, even in environments where possible attackers may be present. In order to show a possible way to upgrade an existing protocol, this paper describes a security concept for the industrial Ethernet protocol PROFINET.
On November 30th, 2022, OpenAI released the large language model ChatGPT, an extension of GPT-3. The AI chatbot provides real-time communication in response to users’ requests. The quality of ChatGPT’s natural-sounding answers marks a major shift in how we will use AI-generated information in our day-to-day lives. For a software engineering student, the use cases for ChatGPT are manifold: assessment preparation, translation, and creation of specified source code, to name a few. It can even handle more complex aspects of scientific writing, such as summarizing literature and paraphrasing text. Hence, this position paper addresses the need for discussion of potential approaches for integrating ChatGPT into higher education. Therefore, we focus on articles that address the effects of ChatGPT on higher education in the areas of software engineering and scientific writing. As ChatGPT was only recently released, there have been no peer-reviewed articles on the subject. Thus, we performed a structured grey literature review using Google Scholar to identify preprints of primary studies. In total, five out of 55 preprints are used for our analysis. Furthermore, we held informal discussions and talks with other lecturers and researchers and took into account the authors’ test results from using ChatGPT. We present five challenges and three opportunities for the higher education context that emerge from the release of ChatGPT. The main contribution of this paper is a proposal for how to integrate ChatGPT into higher education in four main areas.
Social skills are essential for a successful understanding of agile methods in software development. Several studies highlight the opportunities and advantages of integrating real-world projects and problems, in collaboration with companies, into higher education using agile methods. This integration comes with several opportunities and advantages for both the students and the company. The students are able to interact with real-world software development teams, analyze and understand their challenges, and identify possible measures to tackle them. However, the integration of real-world problems and companies is complex and may come with a high effort in terms of coordination and preparation of the course. The challenges related to interaction and communication with students were increased by virtual distance teaching during the Covid-19 pandemic, as direct contact with students was missing. Also, we do not know how problem-based learning in virtual distance teaching is valued by the students. This paper presents our adapted eduScrum approach and the learning outcomes of integrating experiments with real-world software development teams from two companies into a Master of Science course organized as virtual distance teaching. The evaluation shows that students value analyzing real-world problems using agile methods. They highlight the interaction with real-world software development teams. Also, the students appreciate the organization of the course using an iterative approach with eduScrum. Based on our findings, we present four recommendations for the integration of agile methods and real-world problems into higher education in virtual distance teaching settings. The results of our paper contribute to the practitioner and researcher/lecturer community, as we provide valuable insights into how to fill the gap between practice and higher education in virtual distance settings.
Companies worldwide have enabled their employees to work remotely as a consequence of the Covid-19 pandemic. Software development is a human-centered discipline and thrives on teamwork. Agile methods focus on several social aspects of software development. Software development teams in Germany were mainly co-located before the pandemic. This paper aims to validate the findings of existing studies by expanding on an existing multiple-case study. Therefore, we collected data by conducting semi-structured interviews, observing agile practices, and viewing project documents in three cases. Based on the results, we can confirm the following findings: 1) the teams rapidly adapted the agile practices and roles, 2) communication within the teams is more objective, 3) social exchange between team members decreased, 4) a combined approach of remote and onsite work is expected after the pandemic, 5) (perceived) performance remained stable or increased, and 6) the well-being of team members remained stable or increased.
In 2020, the world changed due to the Covid-19 pandemic. Containment measures to reduce the spread of the virus were planned and implemented by many countries and companies. Worldwide, companies sent their employees to work from home. This change has led to significant challenges in teams that were co-located before the pandemic. Agile software development teams were affected by this switch, as agile methods focus on communication and collaboration. Research results have already been published on the challenges of switching to remote work and the effects on agile software development teams. This article presents a systematic literature review. We identified 12 relevant papers for our study and analyzed them in detail. The results provide an overview of how agile software development teams reacted to the switch to remote work, e.g., which agile practices they adapted. We also gained insights into the changes in the performance of agile software development teams and the social effects on them during the pandemic.
This Innovative Practice Full Paper presents our learnings from the process of running a Master of Science class with eduScrum, integrating real-world problems as projects. We prepared, performed, and evaluated an agile educational concept for the new Master of Science program Digital Transformation, organized and provided by the department of business computing at the University of Applied Sciences and Arts - Hochschule Hannover in Germany. The course deals with innovative methodologies of agile project management and is attended by 25 students. We taught the class during the summer terms of 2019 and 2020 as a teaching pair. The eduScrum method has been used in different educational contexts, including higher education. During the preparation of the approach, we decided to use challenges, problems, or questions from industry. Thus, we recruited four companies and, in coordination with them, prepared dedicated project descriptions. Each project description was refined in the form of a backlog (list of requirements). We divided the class into four eduScrum teams, one team for each project. The subdivision of the class was done randomly.
Since we wanted to integrate realistic projects carried out with industry partners, we decided to adapt the eduScrum approach. The eduScrum teams were challenged with different projects, e.g., analyzing a dedicated phenomenon in a real project or creating a theoretical model for a company’s new project management approach. We present our experiences of the whole process of preparing, performing and evaluating an agile educational approach combined with projects from practice. We found that the students value the agile method using real-world problems. While the results are mainly based on the summer term of 2019, this paper also includes our learnings from virtual distance teaching during the Covid-19 pandemic in the summer term of 2020. The paper contributes to the distribution of methods for higher education teaching in the classroom and in distance learning.
Agile methods require constant optimization of one’s approach, leading to the adaptation of agile practices. These practices are also adapted when they are introduced to companies and their software development teams due to organizational constraints. As a consequence of the widespread use of agile methods, we notice a high variety of their elements: practices, roles, and artifacts. This multitude of agile practices, artifacts, and roles results in an unsystematic mixture. It leads to several questions: When is a practice a practice, and when is it a method or technique? This paper presents the tree of agile elements, a taxonomy of agile methods, based on the literature and guidelines of widely used agile methods. We describe a taxonomy of agile methods using terms and concepts of software engineering, in particular software process models. We aim to enable agile elements to be delimited, which should help companies, agile teams, and the research community gain a basic understanding of the interrelationships and dependencies of individual components of agile methods.
Context: Companies have been adapting agile methods, practices, and artifacts for use in practice for more than two decades. These adaptations result in a wide variety of described agile practices. For instance, the Agile Alliance lists 75 different practices in its Agile Glossary. This situation may lead to misunderstandings, as agile practices with similar names can be interpreted and used differently.
Objective: This paper synthesizes an integrated list of agile practices from both primary and secondary sources.
Method: We performed a tertiary study to identify existing overviews and lists of agile practices in the literature. We identified 876 studies, of which 37 were included.
Results: The results of our paper show that certain agile practices are listed and used more often than others in existing studies. Our integrated list of agile practices comprises 38 entries structured in five categories. Conclusion: The number of agile practices, and thus their variety, has increased steadily over the past decades due to the adaptation of agile methods. Based on our findings, we present a comprehensive overview of agile practices. The research community benefits from our integrated list of agile practices as a potential basis for future research. Practitioners also benefit from our findings, as the structured overview of agile practices provides the opportunity to select or adapt practices for their specific needs.
Pathologists need to identify abnormal changes in tissue. With the developing digitalization, the used tissue slides are stored digitally. This enables pathologists to annotate the region of interest with the support of software tools. PathoLearn is a web-based learning platform explicitly developed for the teacher-student scenario, where the goal is that students learn to identify potential abnormal changes. Artificial intelligence (AI) and machine learning (ML) have become very important in medicine. Many health sectors already utilize AI and ML. This will only increase in the future, also in the field of pathology. Therefore, it is important to teach students the fundamentals and concepts of AI and ML early in their studies. Additionally, creating and training AI generally requires knowledge of programming and technical details. This thesis evaluates how this boundary can be overcome by comparing existing end-to-end AI platforms and teaching tools for AI. It was shown that a visual programming editor offers a fitting abstraction for creating neural networks without programming. This was extended with real-time collaboration to enable students to work in groups. Additionally, an automatic training feature was implemented, removing the necessity to know technical details about training neural networks.
Antimicrobials are widely used to cure intramammary infections (IMI) in dairy cows during the dry period (DP). Nevertheless, the IMI cure is influenced by many factors, and not all quarters benefit from antimicrobial dry cow treatment (DCT). To evaluate the true effect of antibiotic DCT compared to self-cure and the role of causative pathogens in the IMI cure, a retrospective cross-sectional study was performed. The analysis included 2987 quarters infected at dry-off (DO). Information on DCT, causative pathogens, somatic cell count, milk yield, amount of lactation, Body Condition Score, and season and year of DO was combined into categorical variables. A generalized linear mixed model with random cow, farm and year effects and the binary outcome of bacteriological cure of IMI during the DP was fitted. In the final model, a significant effect (p < 0.05) on DP cure was seen for the DO season and the category of causative pathogens (categories being: Staphylococcus aureus, non-aureus staphylococci, streptococci, coliforms, ‘other Gram-negative bacteria’, ‘other Gram-positive bacteria’, non-bacterial infections and mixed infections), while antibiotic DCT (vs. non-antibiotic DCT) only showed a significant effect in combination with the pathogen categories streptococci and ‘other Gram-positive bacteria’.
Background
In Germany, up to 50% of nursing home residents are admitted to a hospital at least once a year. It is often unclear whether this is beneficial or even harmful. Successful interprofessional collaboration and communication involving general practitioners (GPs) and nurses may improve medical care of nursing home residents. In the previous interprof study, the six-component intervention package interprof ACT was developed to facilitate collaboration of GPs and nurses in nursing homes. The aim of this study is to evaluate the effectiveness of the interprof ACT intervention.
Methods
This multicentre, cluster randomised controlled trial compares nursing homes receiving the interprof ACT intervention package for a duration of 12 months (e.g. comprising the appointment of mutual contact persons, shared goal setting, and standardised GP home visits) with a control group (care as usual). A total of 34 nursing homes are randomised, and overall 680 residents are recruited. The intervention package is presented in a kick-off meeting to GPs, nurses, and residents/relatives or their representatives. Nursing home nurses act as change agents to support local adaptation and implementation of the intervention measures. The primary outcome is the cumulative incidence of hospitalisation within 12 months. Secondary outcomes include admissions to hospital, days admitted to hospital, use of other medical services, prevalence of potentially inappropriate medication, and quality of life. Additionally, a health economic evaluation and a mixed-methods process evaluation will be performed.
Discussion
This study investigates a complex intervention tailored to local needs of nursing homes. Outcomes reflect the healthcare and health of nursing home residents, as well as the feasibility of the intervention package and its impact on interprofessional communication and collaboration. Because of its systematic development and its flexible nature, interprof ACT is expected to be viable for large-scale implementation in routine care services regardless of local organisational conditions and resources available for medical care for nursing home residents on a regular basis. Recommendations will be made for an improved organisation of primary care for nursing home residents. In addition, the results may provide important knowledge and data for the development and evaluation of further strategies to improve outpatient care for elderly care-receivers.
Background: Given both the forecast increase in the number of nursing home residents and the challenges of current interprofessional interactions, we developed and tested measures to improve collaboration and communication between nurses and general practitioners (GPs) in this setting. Our multicentre study was funded by the German Federal Ministry of Education and Research (FK 01GY1124).
Methods: The measures were developed iteratively in a continuous process, which is the focus of this article. In part 1, “exploration of the situation”, interviews were conducted with GPs, nurses, nursing home residents and their relatives, focusing on interprofessional interactions and medical care. They were analysed qualitatively. Based on these results, in part 2, “development of measures to improve collaboration”, ideas for improvement were developed in nine focus groups with GPs and nurses. These ideas were revisited in a final expert workshop. We analysed the focus groups and the expert workshop using mind-mapping methods, and finally drew up the compilation of measures. In an exploratory pilot study (study part 3), four nursing homes chose the measures they wanted to adopt. These were tested for three months. Feasibility and acceptance of the measures were evaluated via guideline-based interviews with the stakeholders, which were analysed by content analysis.
Results: Six measures were generated: meetings to establish common goals, a main contact person, standardised pro re nata medication, the introduction of name badges, improved availability of nurses/GPs, and a standardised scheduling/procedure for nursing home visits. In the pilot study, the measures were implemented in four nursing homes. GPs and nurses rated five measures as feasible and acceptable; only the designation of a “main contact person” was not considered an improvement.
Conclusions: Six measures to improve collaboration and communication could be compiled in a multistep qualitative process respecting the perspectives of the involved stakeholders. Five of the six measures were positively assessed in an exploratory pilot study. They could easily be transferred into the daily routine of other nursing homes, as no special models have to exist in advance. The impact of the measures on patient-oriented outcomes should be examined in further research.
Trial registration: Not applicable.
Staphylococcus aureus is recognized worldwide as one of the major agents of dairy cow intra-mammary infections. This microorganism can express a wide spectrum of pathogenic factors used to attach, colonize, invade and infect the host. The present study evaluated 120 isolates from eight different countries that were genotyped by RS-PCR and investigated for 26 different virulence factors to increase the knowledge of the genetic lineages circulating among the cow population with mastitis. New genotypes were observed for South African strains, while for all the other countries new variants of existing genotypes were detected. For each country, a specific genotypic pattern was found. Among the virulence factors, the fmtB, cna, clfA and leucocidin genes were the most frequent. The sea and sei genes were present in seven out of eight countries; seh showed a high frequency in South American countries (Brazil, Colombia, Argentina), while sel was harboured especially in one Mediterranean country (Tunisia). The etb, seb and see genes were not detected in any of the isolates, while only two isolates were MRSA (Germany and Italy), confirming the low prevalence of methicillin-resistant microorganisms among bovine mastitis isolates. This work demonstrated the wide variety of S. aureus genotypes found in dairy cattle worldwide. This suggests that considering the region of interest might help to formulate strategies for reducing the spread of infection.
Visual effects and elements in video games and interactive virtual environments can be applied to transfer (or delegate) non-visual perceptions (e.g. proprioception, presence, pain) to players and users, thus increasing perceptual diversity via the visual modality. Such elements or effects are referred to as visual delegates (VDs). Current findings on the experiences that VDs can elicit relate to specific VDs, not to VDs in general. Deductive and comprehensive VD evaluation frameworks are lacking. We analyzed VDs in video games to generalize VDs in terms of their visual properties. We conducted a systematic paper analysis to explore player and user experiences observed in association with specific VDs in user studies. We conducted semi-structured interviews with expert players to determine their preferences and the impact of VD properties. The resulting VD framework (VD-frame) contributes to a more strategic approach to identifying the impact of VDs on player and user experiences.
Research into new forms of care for complex chronic diseases requires substantial efforts in the collection, storage, and analysis of medical data. Additionally, practical support is necessary for those who coordinate the actual care management process within a diversified network of regional service providers, for instance stroke units, rehabilitation partners, ambulatory actors, as well as health insurance funds. In this paper, we propose the concept of comprehensive and practical receiver-oriented encryption (ROE) as a guiding principle for such data-intensive, research-oriented case management systems, and illustrate our concept with the example of the IT infrastructure of the project STROKE OWL.
Recent progress that has been made towards understanding the dynamics of collisions at the gas–liquid interface is summarized briefly. We describe in this context a promising new approach to the experimental study of gas–liquid interfacial reactions that we have introduced. This is based on laser-photolytic production of reactive gas-phase atoms above the liquid surface and laser-spectroscopic probing of the resulting nascent products. This technique is illustrated for reaction of O(³P) atoms at the surface of the long-chain liquid hydrocarbon squalane (2,6,10,15,19,23-hexamethyltetracosane). Laser-induced fluorescence detection of the nascent OH has revealed mechanistically diagnostic correlations between its internal and translational energy distributions. Vibrationally excited OH molecules are able to escape the surface. At least two contributions to the product rotational distributions are identified, confirming and extending previous hypotheses of the participation of both direct and trapping-desorption mechanisms. We speculate briefly on future experimental and theoretical developments that might be necessary to address the many currently unanswered mechanistic questions for this, and other, classes of gas–liquid interfacial reaction.
Techno-economic analyses that allocate costs to the energy flows of energy systems are helpful for understanding the formation of costs within processes and for increasing cost efficiency. For the economic evaluation, the usefulness or quality of the energy is of great importance. In exergy-based methods, this is considered by allocating costs to the exergy instead of the energy. As exergy represents the ability to perform work, it is often called the useful part of energy. In contrast, anergy, the part of energy that cannot perform work, is often assumed not to be useful.
However, heat flows, as used e.g. in domestic heating, are always a mixture of a relatively small portion of exergy and a large portion of anergy. Although of lower quality, the anergy is obviously useful for these applications. The question is whether it makes sense to differentiate between exergy and anergy and to take both properties into account for the economic evaluation.
To answer this question, a new methodical concept based on the definition of an anergy-exergy cost ratio is compared to the commonly applied approaches of considering either energy or exergy as the basis for economic evaluation. These three approaches for the economic analysis of thermal energy systems are applied to an exemplary heating system with thermal storages. It is shown that the results of the techno-economic analysis can be improved by giving anergy an economic value and that the proposed anergy-exergy cost ratio allows a flexible adaptation of the evaluation depending on the economic constraints of a system.
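To make the idea of an anergy-exergy cost ratio concrete, a minimal worked form is sketched below; the notation and the specific cost-allocation expression are illustrative assumptions and not necessarily the formulation used in the paper.

```latex
% Illustrative only: a heat flow \dot{Q} delivered at temperature T against an
% ambient temperature T_0 splits into an exergy and an anergy part:
\[
  \dot{E}_{x} = \dot{Q}\left(1 - \frac{T_0}{T}\right), \qquad
  \dot{E}_{a} = \dot{Q}\,\frac{T_0}{T}, \qquad
  \dot{Q} = \dot{E}_{x} + \dot{E}_{a}.
\]
% Assigning a specific cost c_x to exergy and c_a = r\,c_x to anergy, with r
% denoting the (assumed) anergy-exergy cost ratio, the cost rate of the heat flow becomes
\[
  \dot{C} = c_{x}\,\dot{E}_{x} + c_{a}\,\dot{E}_{a}
          = c_{x}\left(\dot{E}_{x} + r\,\dot{E}_{a}\right),
\]
% so that r = 0 recovers a purely exergy-based costing and r = 1 a purely
% energy-based costing.
```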
Objectives
Quality of care largely depends on successful teamwork, which in turn needs effective communication between health professionals. To communicate successfully in a team, health professionals need to strive for the same goals. However, it has been left largely unaddressed which goals professionals consider to be important. In this study, we aim to identify these goals and analyse whether differences between (1) personal and organisational goals, (2) different professions and (3) hierarchical levels exist in neonatal intensive care units (NICUs).
Design
Goals were identified based on a literature review and a workshop with health professionals and tested in a pilot study. Subsequently, in the main study, a cross-sectional employee survey was undertaken.
Setting and participants
1489 nurses and 537 physicians from 66 German NICUs completed the questionnaire regarding personal and organisational goal importance between May and July 2013. Answers were given based on a 7-point Likert scale varying between none and exceptionally high importance.
Results
Results show that the goals can be subdivided into three main goal dimensions: patients, parents and staff. Furthermore, our results reveal significant differences between professions and hierarchical levels: physicians rated patient goals with a mean (95% CI) importance of 6.37 (3.32 to 6.43), which is significantly higher than nurses with a mean (95% CI) importance of 6.15 (6.12 to 6.19) (p<0.01). Conversely, nurses classified parental goals as more important (p<0.01). Furthermore, professionals in leading positions rate patient goals significantly higher than professionals who are not in leading positions (6.36 (3.28 to 6.44) vs 6.19 (6.15 to 6.22), p<0.01).
Conclusions
Different employee goals need to be considered in decision-making processes to enhance employee motivation and the effectiveness of teamwork.
During the Corona pandemic, information traditionally used for corporate credit risk analysis (e.g. from the analysis of balance sheets and payment behavior) became less valuable because it represents only past circumstances. Therefore, the use of currently published data from social media platforms, which has been shown to contain valuable information regarding the financial stability of companies, should be evaluated. This data may, for example, contain additional information from disappointed employees or customers. In order to analyze to what extent this data can improve the information base for corporate credit risk assessment, Twitter data regarding the ten largest insolvencies of German companies in 2020 and solvent counterparts is analyzed in this paper. The results from t-tests show that sentiment before the insolvencies is significantly worse than in the comparison group, which is in alignment with previously conducted research. Furthermore, companies can be classified as prospectively solvent or insolvent with up to 70% accuracy by applying the k-nearest-neighbor algorithm to monthly aggregated sentiment scores. No significant differences in the number of Tweets between the two groups can be proven, which is in contrast to findings from studies conducted before the Corona pandemic. The results can be utilized by practitioners and scientists in order to improve decision support systems in the domain of corporate credit risk analysis. From a scientific point of view, the results show that the information asymmetry between lenders and borrowers in credit relationships, which are principals and agents according to principal-agent theory, can be reduced based on user-generated content from social media platforms. In future studies, it should be evaluated to what extent the data can be integrated into established processes for credit decision making. Furthermore, additional social media platforms as well as samples of companies should be analyzed. Lastly, the authenticity of user-generated content should be taken into account in order to ensure that credit decisions rely on truthful information only.
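As a rough illustration of the classification step described above, the sketch below applies k-nearest neighbours to monthly aggregated sentiment scores; the data are synthetic stand-ins, and the feature layout and choice of k are assumptions, not taken from the study.

```python
# Hypothetical sketch (not the paper's data or exact setup): classify companies
# as prospectively solvent/insolvent from monthly aggregated sentiment scores
# using k-nearest neighbours, evaluated with leave-one-out cross-validation.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the real feature matrix: one row per company with
# twelve monthly mean sentiment scores. Insolvent companies are simulated with
# slightly more negative sentiment, mirroring the t-test result in the abstract.
n_per_group, n_months = 10, 12
solvent = rng.normal(loc=0.10, scale=0.15, size=(n_per_group, n_months))
insolvent = rng.normal(loc=-0.10, scale=0.15, size=(n_per_group, n_months))
X = np.vstack([solvent, insolvent])
y = np.array([0] * n_per_group + [1] * n_per_group)  # 1 = later insolvent

clf = KNeighborsClassifier(n_neighbors=3)
acc = cross_val_score(clf, X, y, cv=LeaveOneOut(), scoring="accuracy").mean()
print(f"Leave-one-out accuracy on synthetic data: {acc:.2f}")
```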
Since textual user-generated content from social media platforms contains valuable information for decision support and especially corporate credit risk analysis, automated approaches for text classification, such as the application of sentiment dictionaries and machine learning algorithms, have received great attention in recent research on user-generated content. While machine learning algorithms require individual training data sets for varying sources, sentiment dictionaries can be applied to texts immediately, whereby domain-specific dictionaries attain better results than domain-independent word lists. We evaluate by means of a literature review how sentiment dictionaries can be constructed for specific domains and languages. Then, we construct nine versions of German sentiment dictionaries relying on a process model which we developed based on the literature review. We apply the dictionaries to a manually classified German-language data set from Twitter in which hints of the financial (in)stability of companies have been proven. Based on their classification accuracy, we rank the dictionaries and verify their ranking by utilizing McNemar’s test for significance. Our results indicate that the best-performing dictionary is based on the German-language dictionary SentiWortschatz and an extension approach using the lexical-semantic database GermaNet. It achieves a classification accuracy of 59.19% in the underlying three-class scenario, in which the tweets are labelled as negative, neutral or positive. A random classification would attain an accuracy of 33.3% in the same scenario, and hence automated coding by use of the sentiment dictionaries can lead to a reduction of manual effort. Our process model can be adopted by other researchers when constructing sentiment dictionaries for various domains and languages. Furthermore, our established dictionaries can be used by practitioners, especially in the domain of corporate credit risk analysis, for automated text classification, which to date has largely been conducted manually.
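The core of dictionary-based classification can be sketched in a few lines; the tiny word lists below are invented stand-ins for a full German sentiment dictionary such as SentiWortschatz and do not reflect the dictionaries constructed in the study.

```python
# Minimal sketch of dictionary-based sentiment classification of tweets.
# The word lists are illustrative placeholders, not part of the study.
POSITIVE = {"gewinn", "wachstum", "stabil", "erfolgreich"}
NEGATIVE = {"insolvenz", "verlust", "entlassungen", "pleite"}

def classify(tweet: str) -> str:
    """Label a tweet as positive, negative or neutral by counting dictionary hits."""
    tokens = tweet.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("Gerüchte über Insolvenz und Verlust belasten das Unternehmen"))  # -> negative
```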
Introduction: Renal cell carcinoma (RCC), an immunogenic tumor, is the most common form of kidney cancer worldwide. Immune checkpoint inhibitors (ICIs) play an important role in the treatment of metastatic RCC. Programmed death-ligand 1 (PD-L1) has already been proposed as a possible prognosticator of ICI effectiveness. To elucidate the potential role of ICIs in neoadjuvant settings, we have assessed the most common PD-L1 expression modalities [tumor proportion score (TPS), combined positivity score (CPS) and inflammatory cell (IC) score] in primary tumors (PTs) and venous tumor thrombi (VTT) in first diagnosed, previously untreated RCC patients with accompanying VTT.
Methods: Between January 1999 and December 2016, 71 patients with a first diagnosed, untreated, locally advanced RCC (aRCC) (≥ pT3a) underwent surgery in Hanover Medical School (MHH). PD-L1 expression was examined separately in PTs and VTT using the CPS, IC score and TPS. We also considered the age at the time of the initial surgery and gender as probable influencing factors. By using a cutoff value of 1 (1%), PD-L1 expression levels in PTs and VTT were assessed to enable the determination of any frequency differences.
Results: Positive scores for PTs were shown by 54 (CPS), 53 (IC score) and 34 (TPS) patients, whereas in VTT, positive scores were evaluated for a total of 50 (CPS), 47 (IC score) and 36 (TPS) patients. No statistically significant differences were obtained between the PD-L1 expression immunoscores for PTs and VTT. The covariates age at the time of the initial surgery and gender could not be statistically proven to influence the differences in PD-L1 expression between the VTT and PTs.
Conclusion: To the best of our knowledge, this research is the largest study to investigate PD-L1 expression in PTs and VTT in 71 cases. It could have relevance for the future development of neoadjuvant immunotherapy options, particularly in aRCC with VTT.
Editorial for the 17th European Networked Knowledge Organization Systems Workshop (NKOS 2017)
(2017)
Knowledge Organization Systems (KOS), in the form of classification systems, thesauri, lexical databases, ontologies, and taxonomies, play a crucial role in digital information management and applications generally. Carrying semantics in a well-controlled and documented way, Knowledge Organization Systems serve a variety of important functions: tools for the representation and indexing of information and documents, knowledge-based support for information searchers, semantic road maps to domains and disciplines, communication tools providing a conceptual framework, and a conceptual basis for knowledge-based systems, e.g. automated classification systems. New networked KOS (NKOS) services and applications are emerging, and we have reached a stage where many KOS standards exist and the integration of linked services is no longer just a future scenario. This editorial describes the workshop outline and gives an overview of the papers presented at the 17th European Networked Knowledge Organization Systems Workshop (NKOS 2017), which was held during the TPDL 2017 Conference in Thessaloniki, Greece.
Editorial for the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016)
(2016)
Knowledge Organization Systems (KOS), in the form of classification systems, thesauri, lexical databases, ontologies, and taxonomies, play a crucial role in digital information management and applications generally. Carrying semantics in a well-controlled and documented way, Knowledge Organisation Systems serve a variety of important functions: tools for the representation and indexing of information and documents, knowledge-based support for information searchers, semantic road maps to domains and disciplines, communication tools providing a conceptual framework, and a conceptual basis for knowledge-based systems, e.g. automated classification systems. New networked KOS (NKOS) services and applications are emerging, and we have reached a stage where many KOS standards exist and the integration of linked services is no longer just a future scenario. This editorial describes the workshop outline and gives an overview of the papers presented at the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) in Hannover, Germany.
Fall events and their severe consequences represent not only a threatening problem for the affected individual, but also cause a significant burden for health care systems. Our research work aims to elucidate some of the prospects and problems of current sensor-based fall risk assessment approaches. Selected results of a questionnaire-based survey given to experts during topical workshops at international conferences are presented. The majority of domain experts confirmed that fall risk assessment could potentially be valuable for the community and that prediction is deemed possible, though limited. We conclude with a discussion of practical issues concerning adequate outcome parameters for clinical studies and data sharing within the research community. All participants agreed that sensor-based fall risk assessment is a promising and valuable approach, but that more prospective clinical studies with clearly defined outcome measures are necessary.
Background: Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data.
Methods: In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients’ fall risk. In a second, follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and the team scores are compared. Furthermore, two automatically induced logistic regression models, one based on conventional clinical and assessment data (CONV) and one based on sensor data (SENSOR), are compared.
Results: Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores.
Conclusions: Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model’s performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
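As a rough illustration of this kind of model, the sketch below fits a logistic regression fall-risk classifier on synthetic accelerometer-style features and reports sensitivity, specificity and AUC; the features, thresholds and data are assumptions for illustration, not the study's cohort or variables.

```python
# Hypothetical sketch: logistic regression fall-risk model on synthetic
# sensor-derived features, reporting the metrics used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 200
# Illustrative features: TUG duration [s], gait speed [m/s], sway magnitude.
X = np.column_stack([
    rng.normal(14, 4, n),     # TUG time
    rng.normal(0.9, 0.2, n),  # gait speed
    rng.normal(1.0, 0.3, n),  # sway
])
# Synthetic labels: slower TUG and lower gait speed raise fall probability.
logit = 0.25 * (X[:, 0] - 14) - 3.0 * (X[:, 1] - 0.9) - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"sensitivity={tp / (tp + fn):.2f}  specificity={tn / (tn + fp):.2f}  "
      f"AUC={roc_auc_score(y_te, proba):.2f}")
```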
Mining geriatric assessment data for in-patient fall prediction models and high-risk subgroups
(2012)
Background: Hospital in-patient falls constitute a prominent problem in terms of costs and consequences. Geriatric institutions are most often affected, and common screening tools cannot predict in-patient falls consistently. Our objectives are to derive comprehensible fall risk classification models from a large data set of geriatric in-patients’ assessment data and to evaluate their predictive performance (aim#1), and to identify high-risk subgroups from the data (aim#2).
Methods: A data set of n = 5,176 single in-patient episodes covering 1.5 years of admissions to a geriatric hospital was extracted from the hospital’s database and matched with fall incident reports (n = 493). A classification tree model was induced using the C4.5 algorithm, as well as a logistic regression model, and their predictive performance was evaluated. Furthermore, high-risk subgroups were identified from extracted classification rules with a support of more than 100 instances.
Results: The classification tree model showed an overall classification accuracy of 66%, with a sensitivity of 55.4%, a specificity of 67.1%, and positive and negative predictive values of 15% and 93.5%, respectively. Five high-risk groups were identified, defined by high age, low Barthel index, cognitive impairment, multi-medication and co-morbidity.
Conclusions: Our results show that a little more than half of the fallers may be identified correctly by our model, but the positive predictive value is too low to be applicable. Non-fallers, on the other hand, may be sorted out with the model quite well. The high-risk subgroups and the risk factors identified (age, low ADL score, cognitive impairment, institutionalization, polypharmacy and co-morbidity) reflect domain knowledge and may be used to screen certain subgroups of patients with a high risk of falling. Classification models derived from a large data set using data mining methods can compete with current dedicated fall risk screening tools, yet lack diagnostic precision. High-risk subgroups may be identified automatically from existing geriatric assessment data, especially when combined with domain knowledge in a hybrid classification model. Further work is necessary to validate our approach in a controlled prospective setting.
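To make the modelling step concrete, the sketch below induces a small entropy-based decision tree (used here as a rough stand-in for C4.5) on invented assessment-style data and derives the metrics reported above; feature names, coefficients and values are illustrative only.

```python
# Hypothetical sketch: induce a shallow decision tree from synthetic
# assessment-style features and compute sensitivity, specificity, PPV and NPV.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 1000
age = rng.integers(65, 100, n)
barthel = rng.integers(0, 101, n)            # Barthel index (ADL score)
cognitive_impairment = rng.integers(0, 2, n)
multimedication = rng.integers(0, 2, n)

# Synthetic fall labels loosely following the risk factors named in the study.
risk = (0.03 * (age - 65) - 0.02 * (barthel / 10)
        + 0.8 * cognitive_impairment + 0.5 * multimedication - 2.5)
fall = rng.binomial(1, 1 / (1 + np.exp(-risk)))

X = np.column_stack([age, barthel, cognitive_impairment, multimedication])
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3,
                              class_weight="balanced").fit(X, fall)

tn, fp, fn, tp = confusion_matrix(fall, tree.predict(X)).ravel()
print(f"sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f} "
      f"PPV={tp/(tp+fp):.2f} NPV={tn/(tn+fn):.2f}")
print(export_text(tree, feature_names=["age", "barthel", "cog_impair", "multimed"]))
```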
Wearable sensors in healthcare and sensor-enhanced health information systems: all our tomorrows?
(2012)
Wearable sensor systems which allow for remote or self-monitoring of health-related parameters are regarded as one means to alleviate the consequences of demographic change. This paper aims to summarize current research in wearable sensors as well as in sensor-enhanced health information systems. Wearable sensor technologies are already advanced in terms of their technical capabilities and are frequently used for cardio-vascular monitoring. Epidemiologic predictions suggest that neuro-psychiatric diseases will have a growing impact on our health systems and thus should be addressed more intensively. Two current project examples demonstrate the benefit of wearable sensor technologies: long-term, objective measurement under daily-life, unsupervised conditions. Finally, up-to-date approaches for the implementation of sensor-enhanced health information systems are outlined. Wearable sensors are an integral part of future pervasive, ubiquitous and person-centered health care delivery. Future challenges include their integration into sensor-enhanced health information systems and sound evaluation studies involving measures of workload reduction and costs.
A nonblinded, positively controlled, noninferiority trial was conducted to evaluate the efficacy of an alternative, nonantibiotic therapy with Masti Veyxym® to reduce ineffective antibiotic usage in the treatment of nonsevere clinical mastitis (CM) in cows with longer lasting udder diseases. The solely intramammary treatment with Masti Veyxym® (three applications, 12 hr apart) and the combined treatment with Masti Veyxym® and antibiotics as usual on the farm according to label of the respective product were compared with the reference treatment of solely antibiotic therapy. The matched field study was conducted on eight free-stall dairy farms located in Eastern Germany. Cases of mild-to-moderate CM in cows with longer lasting high somatic cell counts in preceding dairy herd improvement test days and with previous CM cases in current lactation were randomly allocated to one of the three treatment groups. A foremilk sample of the affected quarter was taken before treatment and again approximately 14 days and 21 days after the end of therapy for cyto-bacteriological examination. Primary outcomes were clinical cure (CC) and no CM recurrence within 60 days after the end of treatment (no R60). Bacteriological cure (BC) and quarter somatic cell count (QSCC) cure were chosen as secondary outcomes although low probabilities of BC and QSCC cure for selected cows were expected. The study resulted in the following findings: the pathogens mostly cultured from pretreatment samples were Streptococcus uberis, followed by Staphylococcus aureus and coagulase-negative staphylococci. There were no significant differences between the two test treatments in comparison with the reference treatment regarding all outcome variables. The sole therapy with Masti Veyxym® resulted in a numerically lower likelihood of BC without significant differences to the reference treatment. The combined therapy group showed a numerically higher nonrecurrence rate than the two other treatment groups and noninferiority compared to the reference treatment was proven. Having regard to the selection criteria of cows in this study, the findings indicated that sole treatment with Masti Veyxym® in nonsevere CM cases may constitute an alternative therapy to reduce antibiotics. However, noninferiority evaluations were mostly inconclusive. Further investigations with a larger sample size are required to confirm the results and to make a clear statement on noninferiority.
Background:
Hereditary angioedema (HAE) is a rare genetic disease and characterized by clinical features such as paroxysmal, recurrent angioedema of the skin, the gastrointestinal tract, and the upper airways. Swelling of the skin occurs primarily in the face, extremities and genitals. Gastrointestinal attacks are accompanied by painful abdominal cramps, vomiting and diarrhea. Due to the low prevalence and the fact that HAE patients often present with rather unspecific symptoms such as abdominal cramps, the final diagnosis is often made after a long delay. The aim of this German-wide survey was to characterize the period between occurrence of first symptoms and final diagnosis regarding self-perceived health, symptom burden and false diagnoses for patients with HAE.
Results:
Overall, 81 patients with HAE were included and participated in the telephone-based survey. Of those, the majority reported their current health status as “good” (47.5%) or “very good” (13.8%), which was observed to be a clear improvement compared to the year before the final diagnosis (“good” (16.3%), “very good” (11.3%)). Edema in the extremities (85.2%) and in the gastrointestinal tract (81.5%) were the most frequently reported current symptoms and occurred earlier than other reported symptoms (mean age at onset 18.1 and 17.8 years, respectively). Misdiagnoses were observed in 50.6% of participating HAE patients, with appendicitis and allergy being the most frequently reported misdiagnoses (40.0% and 30.0% of those with a misdiagnosis, respectively). Patients with a misdiagnosis often received mistreatment (80.0%), with pharmaceuticals and surgical interventions being the most frequently carried out mistreatments (65.6% and 56.3% of those with mistreatment, respectively). The mean observed diagnostic delay was 18.1 years (median 15.0 years). The diagnostic delay was higher in older patients and index patients.
Conclusions:
This study showed that the self-perceived health status of patients is much better once the final correct diagnosis has been made and specific treatment is available. A further challenge will be to increase awareness of HAE, especially in settings that patients typically approach when first symptoms occur, in order to ensure early referral to specialists and thereby increase the likelihood of an early diagnosis.
The number of papers published each year has been increasing for decades. Libraries need to make these resources accessible and available, with classification being an important part of this process. This paper analyzes prerequisites and possibilities of automatic classification of medical literature. We explain the selection, preprocessing and analysis of data consisting of catalogue datasets from the library of the Hanover Medical School, Lower Saxony, Germany. In the present study, 19,348 documents, represented by notations of library classification systems such as the Dewey Decimal Classification (DDC), were classified into 514 different classes of the National Library of Medicine (NLM) classification system. The algorithm used was k-nearest neighbours (kNN). A correct classification rate of 55.7% was achieved. To the best of our knowledge, this is not only the first research on the use of the NLM classification in automatic classification but also the first approach that exclusively considers already assigned notations from other classification systems for this purpose.
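A minimal sketch of the described setup is shown below, assuming documents are represented solely by their already assigned notations; the handful of records and the DDC-to-NLM pairings are invented for illustration and are not taken from the catalogue data.

```python
# Hypothetical sketch: predict an NLM class from already assigned notations of
# other schemes (e.g. DDC) using k-nearest neighbours. Records are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Each document is represented by its assigned notations; the target is its NLM class.
notations = ["616.12 616.1", "616.12", "617.4", "617.41 617.4", "610.73", "610.73 362.17"]
nlm_class = ["WG 200",       "WG 200", "WO 500", "WO 500",       "WY 100", "WY 100"]

model = make_pipeline(
    CountVectorizer(token_pattern=r"[\d.]+"),  # treat each notation as one token
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(notations, nlm_class)
print(model.predict(["616.1 616.12"]))  # expected to land in the WG 200 neighbourhood
```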
Corynebacterium spp. are frequently detected in bovine quarter milk samples, yet their impact on udder health has not been determined completely. In this longitudinal study, we collected quarter milk samples from a dairy herd of approximately 200 cows, ten times at 14 d intervals. Bacteriologically, catalase-positive and Gram-positive rods were detected in 22.7% of the samples. For further species diagnosis, colonies were analyzed by MALDI-TOF MS. Corynebacterium bovis, C. amycolatum, C. xerosis and 10 other Corynebacterium spp. were detected. The three aforementioned species accounted for 88.4%, 8.65% and 0.94% of all cultured Corynebacterium spp., respectively. For further evaluation of infection dynamics, the following three infection definitions were applied: A (two of three consecutive samples positive for the same species), B (≥1000 cfu/mL in one sample), C (isolated from a clinical mastitis case). Infections according to definition B occurred most frequently, and clinical mastitis with Corynebacterium spp. occurred once during sampling. Life tables were used to determine the duration of infection. According to infection definition A, infection durations of 111 d and 98 d were obtained for C. bovis and C. amycolatum, respectively. As an example, longer-lasting infections were examined for their strain diversity by RAPD PCR. A low strain diversity was found in the individual quarters, which indicates a longer colonization of the udder parenchyma by C. bovis and C. amycolatum.
In this species differentiation study of Corynebacterium spp. (C. spp.), quarter foremilk samples from 48 farms were included. These were obtained from both clinically healthy cows and those with clinical mastitis. First, all samples were examined cyto-microbiologically and all catalase-positive rods were differentiated using the direct transfer method in MALDI-TOF MS. C. bovis, C. amycolatum, C. xerosis, and five other species were identified with proportions of 90.1%, 7.7%, and 0.8% for the named species, respectively, and 1.4% for the remaining unnamed species. In addition, somatic cell count (SCC) was determined by flow cytometry. Based on this, the isolates were classified into four udder health groups: “latent infection”, “subclinical mastitis”, “clinical mastitis” and “others”. Approximately 90% of isolates of C. bovis and C. amycolatum were from latently and subclinically infected quarters. Of the C. bovis isolates, 5.8% were obtained from milk samples from clinical mastitis, whereas C. amycolatum was not present in clinical mastitis. The distribution of groups in these two species differed significantly. The geometric mean SCC of all species combined was 76,000 SCC/mL, almost the same as the SCC of C. bovis. With 50,000 SCC/mL, the SCC of C. amycolatum was slightly below the SCC of C. bovis. Through the species-level detection and consideration of SCC performed here, it is apparent that individual species differ in terms of their pathogenicity. Overall, their classification as minor pathogens with an SCC increase is confirmed.
In this paper, we consider the route coordination problem in the emergency evacuation of large smart buildings. The building evacuation time is crucial in saving lives in emergency situations caused by imminent natural or man-made threats and disasters. Conventional approaches to evacuation route coordination are static and predefined. They rely on evacuation plans present only at a limited number of building locations and possibly on trained evacuation personnel to resolve unexpected contingencies. Smart buildings today are equipped with sensory infrastructure that can be used for autonomous, situation-aware evacuation guidance optimized in real time. A system providing such guidance can help avoid additional evacuation casualties due to the flaws of conventional evacuation approaches. Such a system should be robust and scalable to dynamically adapt to the number of evacuees and the size and safety conditions of a building. In this respect, we propose a distributed route recommender architecture for situation-aware evacuation guidance in smart buildings and describe its key modules in detail. We illustrate its dynamics with an example use case.
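At its core, such a recommender has to pick routes over a building graph whose edge costs reflect current sensor readings. The sketch below shows one plausible reduction to a hazard-weighted shortest-path search with Dijkstra's algorithm; the floor plan, sensor values and weighting are invented, and this is not the architecture or algorithm proposed in the paper.

```python
# Hypothetical sketch: pick an evacuation route by running Dijkstra on a building
# graph whose edge costs combine walking distance with a sensed hazard penalty.
import heapq

# (node, distance_m) adjacency for a tiny floor plan; "EXIT" nodes are safe.
corridors = {
    "room_101": [("hall_A", 5.0)],
    "hall_A": [("hall_B", 10.0), ("EXIT_west", 8.0)],
    "hall_B": [("EXIT_east", 6.0)],
}
hazard = {"hall_A": 0.1, "hall_B": 0.9, "EXIT_west": 0.0, "EXIT_east": 0.0}  # 0..1 from sensors

def edge_cost(dist_m: float, node: str, hazard_weight: float = 50.0) -> float:
    """Distance plus a penalty that grows with the destination node's hazard level."""
    return dist_m + hazard_weight * hazard.get(node, 0.0)

def safest_route(start: str) -> list[str]:
    """Dijkstra over hazard-weighted edges; returns the cheapest path to any exit."""
    queue = [(0.0, start, [start])]
    best = {start: 0.0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node.startswith("EXIT"):
            return path
        for nxt, dist in corridors.get(node, []):
            new_cost = cost + edge_cost(dist, nxt)
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
    return []

print(safest_route("room_101"))  # ['room_101', 'hall_A', 'EXIT_west'] with these readings
```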
Immunization is the most cost-effective intervention for infectious diseases, which are the major cause of morbidity and mortality worldwide. Vaccines not only protect the individual who is vaccinated but also reduce the burden of infectious vaccine-preventable diseases for the entire community.1 Adult vaccination is very important given that >25% of mortality is due to infectious diseases.2 There is a scarcity of information on the vaccination status of young adults and on the role of socioeconomic conditions in India.
The World Health Organization defines a musculoskeletal disorder (MSD) as “a disorder of muscles, tendons, peripheral vascular system not directly resulting from an acute or instantaneous event”.1 Work-related MSDs are one of the most important occupational hazards.1 Among many other occupations, dentistry is a highly demanding profession that requires good visual acuity, hearing, depth perception, psychomotor skills, manual dexterity, and the ability to maintain occupational postures over long periods.
Nanotechnology is emerging as one of the key technologies of the 21st century and is expected to enable developments across a wide range of sectors that can benefit citizens. Nanomedicine is the application of nanotechnology in healthcare, covering disease diagnosis, treatment and prevention. Nanomedicines pose problems of nanotoxicity related to factors such as size, shape, specific surface area, surface morphology, and crystallinity. Currently, nanomedicines are regulated as medicinal products or as medical devices, and there is no specific regulatory framework for nanotechnology-based products in either the EU or the USA. This review presents a scheme for the classification and regulatory approval process for nanotechnology-based medicines.
Medical devices are health care products distinguished from drugs for regulatory purposes in most countries on the basis of their mechanism of action. Unlike drugs, medical devices operate via physical or mechanical means and do not depend on metabolism to accomplish their primary intended effect. Developing new medical devices requires clinical investigations, and the approval process is similar to that for drugs. Medical device approvals in the period 2010 to 2014 were retrieved from the USFDA website, and disease burden data for the same period were obtained from the Centers for Disease Control and Prevention website. The collected data were analyzed to determine the number of approved devices, the top therapy areas, and the mechanisms of action of these devices. Out of a total of 200 medical device approvals in 2010 to 2014, the largest number of devices (51; 25.5%) was approved in 2011, and cardiovascular (78; 39%) was the top therapy area. Most approved devices (180; 90%) belonged to category III, and the largest share (73; 36.5%) had a “mechanical” mechanism of action. The top three causes of death in the USA during 2010 to 2014 were heart disease, cancer, and respiratory infection. The medical device approvals matched the top two diseases in the USA, i.e. heart disease and cancer; with respect to respiratory infections and ailments, the third leading cause of death, only one device was approved out of the 200 approvals in total.
Background: Antimicrobial resistance has become a serious global problem, and a potential post-antibiotic era threatens present and future medical advances. In Pakistan, antibiotic use is unnecessarily high, and overexposure to these drugs is driving bacterial resistance. It is necessary to improve public awareness of the rational use of antibiotics in order to change consumer behaviour. Therefore, the present study was undertaken to assess the existing knowledge, attitudes and practices related to antibiotic use among university students.
Methods: A cross-sectional study was carried out among university students from Karachi, Pakistan during May-June 2018. 200 students were approached to participate in the study, of whom 159 agreed to participate (males: 70, females: 89). A pretested questionnaire was distributed to the study subjects, and the collected data were analyzed using IBM SPSS version 23.
Results: A substantial proportion of participants (33% and 50%, respectively) were unaware of the difference between antibiotics and anti-inflammatory drugs and between antibiotics and antipyretics. 29% of the participants thought it acceptable to stop antibiotics based solely on symptomatic improvement. Thirty-nine percent and eighty-three percent of participants believed that antibiotics should always be prescribed to treat flu-like symptoms and pneumonia, respectively.
Conclusions: Participants demonstrated average knowledge about antibiotics, and their attitudes and practices toward antibiotic use were associated with misconceptions. An educational intervention is necessary to make them aware of the rational use of antibiotics.
Background: Oral cancers (OC) are malignant lesions of the oral cavity that include squamous cell carcinomas (SCC), salivary gland neoplasms and odontogenic neoplasms. Although it is the eighth most common malignancy globally, in Pakistan it is the second most common type of cancer. Lack of awareness of the ill effects of preventable risk factors of oral cancer increases the burden of disease through the associated high cost of treatment, permanent impairment and high mortality. Awareness can therefore be very helpful in the prevention, control and early diagnosis of oral cancer.
Methods: A cross-sectional study was carried out among university students from Karachi, Pakistan during April to May 2018. Three hundred students were approached to participate in the study, of whom 277 agreed to participate. A pretested questionnaire was distributed, and the collected data were analysed using IBM SPSS version 23.
Results: There were 125 (45%) males and 152 (55%) females in the study, and the response rate was 94%. Sixty-one percent (154/250) of respondents correctly identified smoking and tobacco chewing as possible causes of oral cancer. Almost three quarters (74%; 184/250) of respondents correctly responded that oral cancer does not spread from person to person through touch or speaking. Sixty-six percent (164/250) of respondents believed that oral cancer is curable. The mean knowledge score was higher in females (61%) than in males (53%). A significantly higher proportion of female than male participants answered correctly the questions regarding the cause of oral cancer, the spread of the disease, and the occurrence of oral cancer in AIDS patients.
Conclusions: Participants showed poor knowledge about oral cancer, with female participants showing better knowledge than their male counterparts. Details about oral cancer should be incorporated in the university curriculum, and periodic awareness programs should be organized for students.
Background: Diabetes is fast gaining the status of a potential epidemic in India, with >62 million individuals currently diagnosed with the disease. India currently faces an uncertain future in relation to the potential burden that diabetes may impose on the country. An estimated US$ 2.2 billion would be needed to sufficiently treat all cases of type 2 diabetes mellitus (T2DM) in India. Many interventions can reduce the burden of this disease. However, health care resources are limited; thus, interventions for diabetes treatment should be prioritized. The present study assesses the cost-effectiveness of antidiabetic drugs in patients with T2DM from Mumbai, India.
Methods: A prospective cross-sectional study was performed to assess the cost-effectiveness of antidiabetic drugs in patients with T2DM. Face-to-face interviews were conducted using a validated questionnaire with a total of 152 patients with T2DM (76 males, 76 females) from F-North Ward, Mumbai, India. Cost-effectiveness was determined on the basis of the cost of the antidiabetic drug(s), efficacy, adverse drug reactions, safety of administration, frequency of administration, and bioavailability.
Results: For the treatment of T2DM in non-obese participants, Glimepiride + Pioglitazone cost the least (₹3.7) per unit of effectiveness, followed by Glimepiride (₹6.6), Gliclazide (₹8.1), Repaglinide (₹24.5), and Vildagliptin (₹45.2). For the treatment of T2DM in obese participants, Metformin cost the least (₹6.7) per unit of effectiveness, followed by Glimepiride + Metformin (₹5.9) and Repaglinide (₹24.5).
Conclusions: For non-obese participants, the prescribed treatments did not match the most cost-effective options, whereas for obese participants the prescribed treatments were in line with cost-effectiveness.
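For orientation, the cost per unit of effectiveness reported above is an average cost-effectiveness ratio (cost of a regimen divided by its effectiveness). The sketch below shows this calculation under the assumption that effectiveness is expressed as a single composite score; the regimen names, costs and scores are placeholders, not the study's data.

```python
# Illustrative average cost-effectiveness ratio (ACER) calculation:
# ACER = cost of regimen / effectiveness score.
# All figures below are invented placeholders, not values from the study.

regimens = {
    # regimen: (daily cost in INR, composite effectiveness score 0-10)
    "Regimen A": (24.0, 8.0),
    "Regimen B": (35.0, 7.0),
    "Regimen C": (54.0, 6.0),
}

# Rank regimens from most to least cost-effective (lowest ACER first)
for name, (cost, effectiveness) in sorted(
        regimens.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    acer = cost / effectiveness
    print(f"{name}: INR {acer:.1f} per unit of effectiveness")
```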
Background: Pharmacovigilance (PV), also known as drug safety surveillance, is the science of enhancing patient care and patient safety in the use of medicines by collecting, monitoring, assessing, and evaluating information from healthcare providers and patients. Pharmacists are pivotal players in adverse drug event (ADE) monitoring and reporting. However, most pharmacists are unaware of, or not knowledgeable about, the guidelines used by their respective countries' drug regulatory bodies. There is therefore an urgent need to train pharmacy students in pharmacovigilance.
Methods: A cross-sectional study was carried out among pharmacy students from Mumbai University, India during May-June 2017. On the basis of the eligibility criteria, 352 students were selected for the present study. Four hundred students were approached to participate in the study, of whom 201 agreed to participate (males: 179; females: 173). A pretested questionnaire was distributed, and the collected data were analyzed using IBM SPSS version 23.
Results: Overall pharmacovigilance knowledge (44%) and perception (58%) were low among the participants of the present study. Seventy-four percent of the participants felt that adverse drug reaction (ADR) reporting should be made compulsory for healthcare professionals, and only 21% agreed that the topic of pharmacovigilance is well covered in the pharmacy curriculum.
Conclusions: The Pharmacy Council of India, pharmacy teachers' associations and the respective pharmacy colleges should take the necessary steps to increase knowledge and create awareness of pharmacovigilance and adverse drug reaction reporting among pharmacy students.
Background: Self-medication, practiced globally, is an important public health problem. Research studies have indicated that inappropriate self-medication results in adverse drug reactions, disease masking, antibiotic resistance and wastage of healthcare resources. The objectives of the study were to explore the overall prevalence of self-medication and of antibiotic self-medication among university students in Karachi, Pakistan, along with probable reasons, indications, and sources of advice for self-medication. Methods: A descriptive, cross-sectional, questionnaire-based study was carried out among students from the University of Karachi, Pakistan during September to November 2016. A pretested questionnaire was distributed to 320 students, and the collected data were analyzed using IBM SPSS version 24. Results: Of the 320 students, 311 (83 male and 228 female) participated in the study, giving a response rate of 97%. The prevalence of self-medication was 66%. Belonging to a higher monthly family income group was associated with a greater likelihood of self-medication. The prevalence of antibiotic self-medication was 39%. Lack of time (39%) and an old prescription (35%) were the main reasons for self-medication. A pharmacy shop (75%) was the main source for self-medication. In the case of antibiotics, 44% of students changed the dosage of the antibiotic and 50% stopped the antibiotic after the disappearance of symptoms. Conclusions: Antibiotic self-medication (39%) and self-medication with other drugs among university students in Karachi is a worrisome problem. Our findings highlight the need to plan interventions promoting the judicious use of general medicines as well as antibiotics.
Background: The discovery of antibiotics has helped to manage devastating diseases. At present, the antibiotic era is threatened by the emergence of high levels of antibiotic resistance in important pathogens. Misuse of antibiotics poses a serious risk to infectious disease control, and it is necessary to improve public awareness in order to change consumer behavior. Therefore, the present study was undertaken to assess the existing knowledge, attitudes and practices related to antibiotic use among university students.
Methods: A cross-sectional study was carried out among students from Mumbai University, India during May-June 2017. 300 students were approached to participate in the study, of whom 250 agreed to participate (males: 117; females: 133). A pretested questionnaire was distributed, and the collected data were analyzed using IBM SPSS version 23.
Results: A substantial proportion of participants (33% and 40%, respectively) were unaware of the difference between antibiotics and anti-inflammatory drugs and between antibiotics and antipyretics. 28% of the participants thought it acceptable to stop antibiotics based solely on symptomatic improvement. Sixty-eight percent and seventy-nine percent of participants believed that antibiotics should always be prescribed to treat flu-like symptoms and pneumonia, respectively.
Conclusions: Participants demonstrated poor knowledge about antibiotics, and their attitudes and practices toward antibiotic use were associated with misconceptions. An educational intervention could be introduced to make them aware of rational antibiotic practices.
Knowledge and attitude towards voluntary blood donation among students from Mumbai University
(2018)
Background: Blood is scarce; its demand far outweighs the supply. In addition to limited supply, safety, especially with regard to the risk of transfusion-transmissible infections, is a matter of utmost concern, particularly in developing countries. Blood transfusion services in India have gained special significance in recent years and form a vital part of the national health care system. Voluntary Non-Remunerated Blood Donation (VNRBD) is the safest of all types of blood donation. One potential source that can be tapped for blood donation is the young and physically fit students at educational institutions across India. Methods: A cross-sectional study was carried out among students from Mumbai University, India during May–June 2017. Two hundred and fifty students were approached to participate in the study, of whom 201 agreed to participate (males: 104; females: 97). A pretested questionnaire was distributed, and the collected data were analyzed using IBM SPSS version 23. Results: A high proportion of participants agreed that the general public should be encouraged toward voluntary blood donation (96%; 193/201) and that awareness of voluntary blood donation among the general public is lacking (82%; 164/201). However, not a single participant answered the knowledge part of the questionnaire with 100% accuracy. Almost all participants had correct knowledge about blood groups (98%; 196/201) and the need for blood matching (97%; 195/201). Conclusions: Participants showed a good attitude but demonstrated poor knowledge about voluntary blood donation. Details about blood donation should be incorporated in the undergraduate curriculum, and periodic awareness programs should be organized for students.