This paper presents a possibility to extend the formalism of linear indexed grammars. The extension is based on the use of tuples of pushdowns, instead of a single pushdown, to store indices during a derivation. If a restriction on the accessibility of the pushdowns is imposed, it can be shown that the resulting formalisms give rise to a hierarchy of languages that is equivalent to a hierarchy defined by Weir. For this equivalence, which was already known for a slightly different formalism, this paper gives a new proof. Since all languages of Weir's hierarchy are known to be mildly context-sensitive, the proposed extensions of LIGs become comparable with extensions of tree adjoining grammars and head grammars.
The subject of this work is the investigation of universal scaling laws observed in coupled chaotic systems. Progress is made by replacing the chaotic fluctuations in the perturbation dynamics by stochastic processes.
First, a continuous-time stochastic model for weakly coupled chaotic systems is introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck equation, scaling relations are derived, which are confirmed by results of numerical simulations.
Next, the new effect of avoided crossing of Lyapunov exponents of weakly coupled disordered chaotic systems is described, which is qualitatively similar to the energy level repulsion in quantum systems. Using the scaling relations obtained for the coupling sensitivity of chaos, an asymptotic expression for the distribution function of small spacings between Lyapunov exponents is derived and compared with results of numerical simulations.
Finally, the synchronization transition in strongly coupled spatially extended chaotic systems is shown to resemble a continuous phase transition, with the coupling strength and the synchronization error as control and order parameter, respectively. Using results of numerical simulations and theoretical considerations in terms of a multiplicative noise partial differential equation, the universality classes of the two observed types of transition are determined (Kardar-Parisi-Zhang equation with saturating term, directed percolation).
In this paper we describe methods to approximate functions and differential operators on adaptive sparse (dyadic) grids. We distinguish between several representations of a function on the sparse grid and we describe how finite difference (FD) operators can be applied to these representations. For general variable-coefficient equations on sparse grids, genuine finite element (FE) discretizations are not feasible, and FD operators allow an easier operator evaluation than the adapted FE operators. However, the structure of the FD operators is complex. With the aim of constructing an efficient multigrid procedure, we analyze the structure of the discrete Laplacian in its hierarchical representation and show the relation between the full and the sparse grid case. The rather complex relations, which are expressed by scaling matrices for each separate coordinate direction, make us doubt the possibility of constructing efficient preconditioners that show spectral equivalence. Hence, we question the possibility of constructing a natural multigrid algorithm with optimal O(N) efficiency. We conjecture that for the efficient solution of a general class of adaptive grid problems it is better to accept an additional condition on the dyadic grids (condition L) and to apply adaptive hp-discretization.
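The hierarchical representation referred to above can be illustrated in one dimension. The following sketch (our own illustration, not code from the paper) converts nodal values on a dyadic grid with 2^n - 1 interior points into hierarchical surpluses for the piecewise-linear hat basis, assuming zero boundary values:

```python
def hierarchize_1d(u):
    """Convert nodal values on a dyadic grid (2**n - 1 interior points,
    zero boundary values) into hierarchical surpluses for the hat basis.

    The surplus at a point is its nodal value minus the linear
    interpolant from its two hierarchical neighbours on coarser levels."""
    m = len(u)                        # m = 2**n - 1 interior points
    n = (m + 1).bit_length() - 1      # finest level n
    v = [0.0] * m
    for i in range(1, m + 1):         # 1-based grid index, x_i = i / 2**n
        k = (i & -i).bit_length() - 1  # i = odd * 2**k
        s = 1 << k                     # stride to hierarchical neighbours
        left = u[i - s - 1] if i - s >= 1 else 0.0
        right = u[i + s - 1] if i + s <= m else 0.0
        v[i - 1] = u[i - 1] - 0.5 * (left + right)
    return v
```

For u(x) = x(1 - x) sampled at x = 1/4, 1/2, 3/4, the surpluses are 1/16, 1/4, 1/16, reproducing the well-known 4^(-l) decay of hierarchical coefficients for smooth functions, which is what makes sparse-grid truncation effective.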
Autonomous mobile six-legged robots are able to demonstrate the potential of intelligent control systems based on recurrent neural networks. The robots evaluate only two forward- and two backward-looking infrared sensor signals. Fast-converging genetic training algorithms are applied to train the robots to move straight in six directions. The robots performed successfully within an obstacle environment, and a useful interaction between the individual robots could be observed that had never been trained. The paper describes the robot systems and presents the test results. Video clips can be downloaded at www.inform.fh-hannover.de/download/lechner.php. Presented at the IFAC International Conference on Intelligent Control Systems and Signal Processing (ICONS 2003, April 2003, Portugal).
This assignment is about the development of a general strategic marketing plan for academic libraries in Germany and can be used as a guideline for libraries that want to develop concrete marketing strategies for several products and services. Two examples of marketing projects are presented at the end to link theoretical approaches to practice. Finally, the development of a dedicated marketing strategy for “information literacy” forms the last part of the assignment.
We describe an experimental approach to the determination of the nascent internal state distribution of gas-phase products of a gas–liquid interfacial reaction. The system chosen for study is O(³P) atoms with the surface of liquid deuterated squalane, a partially branched long-chain saturated hydrocarbon, C₃₀D₆₂. The nascent OD products are detected by laser-induced fluorescence. Both OD (v′=0) and (v′=1) were observed in significant yield. The rotational distributions in both vibrational levels are essentially the same, and are characteristic of a Boltzmann distribution at a temperature close to that of the liquid surface. This contrasts with the distributions in the corresponding homogeneous gas-phase reactions. We propose a preliminary interpretation in terms of a dominant trapping-desorption mechanism, in which the OD molecules are retained at the surface sufficiently long to cause rotational equilibration but not complete vibrational relaxation. The significant yield of vibrationally excited OD also suggests that the surface is not composed entirely of –CD₃ endgroups, but that secondary and/or tertiary units along the backbone are exposed.
Report of a research project of the Fachhochschule Hannover, University of Applied Sciences and Arts, Department of Information Technologies. Automatic face recognition increases the security standards at public places and border checkpoints. The picture inside identification documents can differ widely from the face that is scanned under random lighting conditions and unknown poses. The paper describes an optimal combination of three key object recognition algorithms that are able to perform in real time. The camera scan is processed by a recurrent neural network, by an Eigenfaces (PCA) method and by a least squares matching algorithm. Several examples demonstrate the achieved robustness and high recognition rate.
The effects of surface temperature on the gas-liquid interfacial reaction dynamics of O(³P)+squalane
(2005)
OH/OD product state distributions arising from the reaction of gas-phase O(³P) atoms at the surface of the liquid hydrocarbon squalane C₃₀H₆₂/C₃₀D₆₂ have been measured. The O(³P) atoms were generated by 355 nm laser photolysis of NO₂ at a low pressure above the continually refreshed liquid. It has been shown unambiguously that the hydroxyl radicals detected by laser-induced fluorescence originate from the squalane surface. The gas-phase OH/OD rotational populations are found to be partially sensitive to the liquid temperature, but do not adapt to it completely. In addition, rotational temperatures for OH/OD(v′=1) are consistently colder (by 34±5 K) than those for OH/OD(v′=0). This is reminiscent of, but less pronounced than, a similar effect in the well-studied homogeneous gas-phase reaction of O(³P) with smaller hydrocarbons. We conclude that the rotational distributions are composed of two different components. One originates from a direct abstraction mechanism with product characteristics similar to those in the gas phase. The other is a trapping-desorption process yielding a thermal, Boltzmann-like distribution close to the surface temperature. This conclusion is consistent with that reached previously from independent measurements of OH product velocity distributions in complementary molecular-beam scattering experiments. It is further supported by the temporal profiles of OH/OD laser-induced fluorescence signals as a function of distance from the surface observed in the current experiments. The vibrational branching ratios for (v′=1)/(v′=0) for OH and OD have been found to be (0.07±0.02) and (0.30±0.10), respectively. The detection of vibrationally excited hydroxyl radicals suggests that secondary and/or tertiary hydrogen atoms may be accessible to the attacking oxygen atoms.
A German university has developed a learning information system (Lerninformationssystem) to improve information literacy among German students. The structure of this learning information system is described, an online tutorial based on it is illustrated, and the different learning styles that it supports are indicated.
This document describes the work done during the Research Semester in Summer 2006 of Prof. Dr. Stefan Wohlfeil. It is about Security Management tasks and how these tasks might be supported by Open Source software tools. I begin with a short discussion of general management tasks and describe some additional, security-related management tasks. These security-related tasks should then be added to a software tool which already provides the general tasks. Nagios is such a tool. It is extended to perform some of the security-related management tasks as well. I describe the new checking scripts and how Nagios needs to be configured to use them. The work has been done in cooperation with colleagues from the Polytechnic of Namibia in Windhoek, Namibia. This opportunity was also used to establish a partnership between the Department of Computer Science at FH Hannover and the Department of Information Technology at the Polytechnic. A first Memorandum of Agreement lays the groundwork for future staff and student exchange.
In this article, we present the software architecture of a new generation of advisory systems using Intelligent Agent and Semantic Web technologies. Multi-agent systems provide a well-suited paradigm to implement negotiation processes in a consultancy situation. Software agents act as clients and advisors, using their knowledge to assist human users. In the presented architecture, the domain knowledge is modeled semantically by means of XML-based ontology languages such as OWL. Using an inference engine, the agents reason over their knowledge to make decisions or proposals. The agent knowledge consists of different types of data: on the one hand, private data, which has to be protected against unauthorized access; on the other hand, publicly accessible knowledge spread over different Web sites. As in a real consultancy, an agent only reveals sensitive private data if it is indispensable for finding a solution. In addition, depending on the actual consultancy situation, each agent dynamically expands its knowledge base by accessing OWL knowledge sources from the Internet. Due to the standardization of OWL, knowledge models can easily be shared and accessed via the Internet. The usefulness of our approach is demonstrated by the implementation of an advisory system in the Semantic E-learning Agent (SEA) project, whose objective is to develop virtual student advisers that support university students in successfully organizing and performing their studies.
Recent progress that has been made towards understanding the dynamics of collisions at the gas–liquid interface is summarized briefly. We describe in this context a promising new approach to the experimental study of gas–liquid interfacial reactions that we have introduced. This is based on laser-photolytic production of reactive gas-phase atoms above the liquid surface and laser-spectroscopic probing of the resulting nascent products. This technique is illustrated for reaction of O(³P) atoms at the surface of the long-chain liquid hydrocarbon squalane (2,6,10,15,19,23-hexamethyltetracosane). Laser-induced fluorescence detection of the nascent OH has revealed mechanistically diagnostic correlations between its internal and translational energy distributions. Vibrationally excited OH molecules are able to escape the surface. At least two contributions to the product rotational distributions are identified, confirming and extending previous hypotheses of the participation of both direct and trapping-desorption mechanisms. We speculate briefly on future experimental and theoretical developments that might be necessary to address the many currently unanswered mechanistic questions for this, and other, classes of gas–liquid interfacial reaction.
All of us are aware of the changes in the information field during the last years. We all see the paradigm shift coming and have some idea how it will challenge our profession in the future. But what will the road to excellence in the future education of information specialists look like? There are different models (new and old ones) for reorganising the structure of education: integration, specialisation, a step-by-step model, a module system, and a network system / combination model. The paper will present the actual level of discussion on building up a new curriculum at the Department of Information and Communication (IK) at the FH Hannover. Based on the mission statement of the department, »Education of information professionals is a part of the dynamic evolution of knowledge society«, the direction of change and the main goals will be presented. The different reorganisation models will be explained with their objectives, opportunities and forms of implementation. Some examples will show the ideas and tools for a first draft of a reconstruction plan to become fit for the future. This talk was held at the German-Dutch University Conference »Information Specialists for the 21st Century« at the Fachhochschule Hannover - University of Applied Sciences, Department of Information and Communication, October 14-15, 1999 in Hannover, Germany.
On April 23rd, 2007, a series of postings started on Infobib.de, in which guest authors from all over the world introduced the library and library-related blogs of their own countries. This book is a collection of 30 revised LibWorld articles, accompanied by a foreword by Walt Crawford. Included are articles about the blogosphere of: Argentina, Australia, Austria, Belarus, Belgium, Brazil, Canada, Denmark, Finland, France, Greece, Hungary, Iran, Italy, Japan, Latvia, Malawi, Netherlands, New Zealand, Norway, Peru, Puerto Rico, Russia, Singapore, Spain, Sweden, Switzerland, Trinidad & Tobago, USA.
Our research project, "Rationalizing the virtualization of botanical document material and its usage by process optimization and automation (Herbar-Digital)", started on July 1, 2007 and will last until 2012. Its long-term aim is the digitization of the more than 3.5 million specimens in the Berlin Herbarium. The University of Applied Sciences and Arts in Hannover collaborates with the department of Biodiversity Informatics at the BGBM (Botanic Garden and Botanical Museum Berlin-Dahlem) headed by Walter Berendsohn. The part of Herbar-Digital presented here deals with the analysis of the generated high-resolution images (10,400 × 7,500 pixels).
Background: In many research areas it is necessary to find differences between treatment groups with respect to several variables. For example, studies of microarray data seek, for each variable, a significant difference of location parameters from zero, or from one for ratios thereof. However, in some studies a significant deviation of the difference in locations from zero (or of the ratio from one) is biologically meaningless. A relevant difference or ratio is sought in such cases.
Results: This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straightforward solutions, the problems that motivate the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space between these two limits has to be accepted as the null hypothesis.
Conclusion: The first algorithm to be discussed uses a permutation algorithm, and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes. Then the second procedure might be more appropriate, where multiplicity is corrected according to a concept of data-driven order of hypotheses.
Heterogeneity has to be taken into account when integrating a set of existing information sources into a distributed information system, which nowadays is often based on Service-Oriented Architectures (SOA). This applies particularly to distributed services such as event monitoring, which are useful in the context of Event-Driven Architectures (EDA) and Complex Event Processing (CEP). Web services deal with this heterogeneity at a technical level, but provide little support for event processing. Our central thesis is that such a fully generic solution cannot provide complete support for event monitoring; instead, source-specific semantics, such as certain event types or support for certain event monitoring techniques, have to be taken into account. Our core result is the design of a configurable event monitoring (Web) service that allows us to trade genericity for the exploitation of source-specific characteristics. It thus delivers results for the areas of SOA, Web services, CEP and EDA.
Background
To perform a systematic review of the effect of using clinical pathways on length of stay (LOS), hospital costs and patient outcomes. To provide a framework for local healthcare organisations considering the effectiveness of clinical pathways as a patient management strategy.
Methods
As participants, we considered hospitalized children and adults of every age and indication whose treatment involved the management strategy "clinical pathways". We included only randomised controlled trials (RCT) and controlled clinical trials (CCT), not restricted by language or country of publication. Single measures of continuous and dichotomous study outcomes were extracted from each study. Separate analyses were done to compare effects of clinical pathways on length of stay (LOS), hospital costs and patient outcomes. A random-effects meta-analysis was performed with untransformed and log-transformed outcomes.
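The random-effects pooling step mentioned above can be sketched with the standard DerSimonian-Laird estimator. The paper does not state which between-study variance estimator was used, so this is one common choice, shown here for illustration only:

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis.

    effects   : per-study effect estimates (e.g. mean LOS differences)
    variances : per-study sampling variances
    Returns (pooled effect, its standard error, tau^2)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # re-weight each study by its total (within + between) variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

When the studies are heterogeneous (Q exceeds its degrees of freedom), tau^2 becomes positive and the pooled confidence interval widens, which is why the substantial heterogeneity reported in the Results tempers the LOS and cost findings.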
Results
In total, 17 trials met the inclusion criteria, representing 4,070 patients. The quality of the included studies was moderate, and studies reporting economic data had a very limited scope of evaluation. In general, the majority of studies reporting economic data (LOS and hospital costs) showed a positive impact. Out of 16 studies reporting effects on LOS, 12 found a significant shortening. Furthermore, in a subgroup analysis, clinical pathways for invasive procedures showed a stronger LOS reduction (weighted mean difference (WMD) -2.5 days versus -0.8 days).
There was no evidence of differences in readmission to hospitals or in-hospital complications. The overall Odds Ratio (OR) for re-admission was 1.1 (95% CI: 0.57 to 2.08) and for in-hospital complications, the overall OR was 0.7 (95% CI: 0.49 to 1.0). Six studies examined costs, and four showed significantly lower costs for the pathway group. However, heterogeneity between studies reporting on LOS and cost effects was substantial.
Conclusion
As a result of the relatively small number of studies meeting inclusion criteria, this evidence base is not conclusive enough to provide a replicable framework for all pathway strategies. Considering the clinical areas for implementation, clinical pathways seem to be effective especially for invasive care. When implementing clinical pathways, the decision makers need to consider the benefits and costs under different circumstances (e.g. market forces).
The methods developed in the research project "Herbar Digital" are to help plant taxonomists master the great amount of material of about 3.5 million dried plants on paper sheets belonging to the Botanic Museum Berlin in Germany. Frequently the collector of the plant is unknown, so a procedure had to be developed to determine the writer of the handwriting on the sheet. In the present work, the static character is transformed into a dynamic form. This is done with the model of an inert ball which is rolled through the written character. During this off-line writer recognition, different mathematical procedures are used, such as the reproduction of the written line of individual characters by Legendre polynomials. When only one character is used, a recognition rate of about 40% is obtained. By combining multiple characters, the recognition rate rises considerably and reaches 98.7% with 13 characters and 93 writers (chosen randomly from the international IAM database [3]). Another approach tries to identify the writer by handwritten words. The word is cut out, transformed into a 6-dimensional time series and compared, e.g., by means of DTW methods. A global statistical approach using whole handwritten sentences results in a similar recognition rate of more than 98%. By combining the methods, a recognition rate of 99.5% is achieved.
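The DTW comparison of word trajectories mentioned above follows the classic dynamic-programming recurrence. A minimal sketch for multivariate time series (our own illustration; the paper's actual feature extraction and cost function are not specified here):

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two multivariate time
    series, given as sequences of equal-length feature vectors
    (e.g. the 6-dimensional word trajectories described above).
    The local cost is the Euclidean distance between feature vectors."""
    n, m = len(a), len(b)
    inf = float("inf")
    # d[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch a
                                 d[i][j - 1],      # stretch b
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Writer identification would then assign a query word to the known writer whose reference samples minimize this distance; the elastic alignment absorbs the speed variations inherent in handwriting.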
The research project "Herbar Digital" was started in 2007 with the aim of digitizing 3.5 million dried plants on paper sheets belonging to the Botanic Museum Berlin in Germany. Frequently the collector of the plant is unknown, so a procedure had to be developed to determine the writer of the handwriting on the sheet. In the present work, the static character was transformed into a dynamic form. This was done with the model of an inert ball which was rolled along the written character. During this off-line writer recognition, different mathematical procedures were used, such as the reproduction of the written line of individual characters by Legendre polynomials. When only one character was used, a recognition rate of about 40% was obtained. By combining multiple characters, the recognition rate rose considerably and reached 98.7% with 13 characters and 93 writers (chosen randomly from the international IAM database [3]). A global statistical approach using the whole handwritten text resulted in a similar recognition rate. By combining local and global methods, a recognition rate of 99.5% was achieved.
The objective of this student project was for the students to develop, conduct, and supervise a training course for basic workplace applications (word processing and business graphics). Students were responsible for planning, organizing and teaching the course. Underprivileged adolescents took part in order to learn the handling of IT applications and thereby improve their job skills and their chances of getting into employment. The adolescents thus took the role of trainees in the course. Our students worked with a population that is continually overlooked by the field.
As a result, the students practised designing and implementing training courses, exercised project management, and increased their social responsibility and awareness concerning the way of life and living conditions of other young people. The underprivileged adolescents learned to use important business applications and increased their job skills and job chances. The overall design of our concept required extensive resources to supervise and steer the students and the adolescents. The lecturers had to teach and counsel the students and had to be on “stand-by” in case they were needed to solve critical situations between the two groups of young people.
During intraoperative radiograph generation with mobile image intensifier systems (C-arm), most of the radiation exposure for the patient, the surgeon and the operating room personnel is caused by scattered radiation. The intensity and propagation of scattered radiation depend on different parameters, e.g. the intensity of the primary radiation and the positioning of the mobile image intensifier. Exposure through scattered radiation can be minimized when all these parameters are adjusted correctly. Because radiation is potentially dangerous and cannot be perceived by any human sense, the current education on the correct adjustment of a C-arm is very theoretical. This paper presents an approach to the calculation and visualization of scattered radiation, embedded in a computer-based training system for mobile image intensifier systems called virtX. With this extension, the virtX training system should enrich the current radiation protection training with visual and practical training aspects.
We have combined the velocity map imaging technique with time-of-flight measurements to study the surface photochemistry of KBr single crystals. This approach yields 3-dimensional velocity distributions of Br atoms resulting from 193 nm photodesorption. The velocity distributions indicate that at least two non-thermal mechanisms contribute to the photodesorption dynamics. Our experimental geometry also allows us to measure the Br(²P₃⁄₂):Br(²P₁⁄₂) branching ratio, which is found to be 24:1.
Influence of persistence and adherence with oral bisphosphonates on fracture rates in osteoporosis
(2009)
Background and Aim:
Oral bisphosphonates have been shown to reduce the risk of fractures in patients with osteoporosis. It can be assumed that the clinical effectiveness of oral bisphosphonates depends on persistence with therapy.
Methods:
The influence of persistence with and adherence to oral bisphosphonates on fracture risk in a real-life setting was investigated. Data from 4451 patients with a defined index prescription of bisphosphonates were included. Fracture rates within 180, 360, and 720 days after index prescription were compared between persistent and non-persistent patients. In an extended Cox regression model applying multiple event analysis, the influence of adherence was analyzed. Persistence was defined as the duration of continuous therapy; adherence was measured in terms of the medication possession ratio (MPR).
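The MPR used above is simply the fraction of the observation period covered by dispensed medication. A minimal sketch (illustrative only; the study's exact windowing and overlap handling are not described in the abstract):

```python
from datetime import date

def medication_possession_ratio(prescriptions, period_days):
    """MPR: total days of medication supplied divided by the length of
    the observation period. prescriptions is a list of
    (fill_date, days_supplied) tuples; fills are assumed to lie within
    the observation period. The ratio is commonly capped at 1."""
    supplied = sum(days for _, days in prescriptions)
    return min(supplied / period_days, 1.0)

# Hypothetical example: three 90-day supplies over 360 days of follow-up
rx = [(date(2008, 1, 1), 90), (date(2008, 4, 15), 90), (date(2008, 8, 1), 90)]
mpr = medication_possession_ratio(rx, 360)
adherent = mpr >= 0.8  # the adherence threshold used in the study
```

In this hypothetical case the patient reaches an MPR of 0.75 and would fall just below the MPR >= 0.8 cutoff that the Results associate with a reduced fracture risk.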
Results:
In patients with a fracture before index prescription, fracture rates were reduced by 29% (p = 0.025) comparing persistent and non-persistent patients within 180 days after the index prescription and by 45% (p < 0.001) within 360 days. The extended Cox regression model showed that good adherence (MPR ≥ 0.8) reduced fracture risk by about 39% (HR 0.61, 95% CI 0.47–0.78; p < 0.01).
Conclusions:
In patients with osteoporosis-related fractures, good persistence and adherence to oral bisphosphonates reduced fracture risk significantly.
The authors describe the application of a combination of velocity map imaging and time-of-flight (TOF) techniques to obtain three-dimensional velocity distributions for surface photodesorption. They have established a systematic alignment procedure to achieve correct and reproducible experimental conditions. It includes four steps: (1) optimization of the velocity map imaging ion optics’ voltages to achieve optimum velocity map imaging conditions; (2) alignment of the surface normal with the symmetry axis (ion flight axis) of the ion optics; (3) determination of TOF distance between the surface and the ionizing laser beam; (4) alignment of the position of the ionizing laser beam with respect to the ion optics. They applied this set of alignment procedures and then measured Br(²P₃/₂) (Br) and Br(²P₁/₂) (Br∗) atoms photodesorbing from a single crystal of KBr after exposure to 193 nm light. They analyzed the velocity flux and energy flux distributions for motion normal to the surface. The Br∗ normal energy distribution shows two clearly resolved peaks at approximately 0.017 and 0.39 eV, respectively. The former is slightly faster than expected for thermal desorption at the surface temperature and the latter is hyperthermal. The Br normal energy distribution shows a single broad peak that is likely composed of two hyperthermal components. The capability that surface three-dimensional velocity map imaging provides for measuring state-specific velocity distributions in all three dimensions separately and simultaneously for the products of surface photodesorption or surface reactions holds great promise to contribute to our understanding of these processes.
Primary data is an important source of information for Competitive Intelligence. Traditionally, it has been collected from interviews with stakeholders, talks at conferences and other means of direct interpersonal communication. The role of the Internet in the data collection – if it was used at all – was that of a provider of supplementary secondary data. Here, this approach is challenged and, using three examples of Social Media, it is shown that the Internet can and does provide valuable primary information to the Competitive Intelligence professional. Accordingly, a case is made for a shift of focus in the data collection process.
A speed control system for a cost-effective, high-precision drive concept is presented. The drive concept consists of two drives working in parallel and is an alternative to direct drives. One big advantage is the use of standard gear boxes with economical components. This paper deals with the control of the drive system, which consists of two parts: one drive produces the power for the machine, while the other makes the motion precise and dynamic. Both drives are combined into one double drive by a control system. The drive system is useful for printing machines and other machines with high power consumption at a nearly constant speed and high accuracy requirements. The calculation for a 37 kW drive system shows that the control drive has to supply only about 20% of the total torque and power needed to compensate the errors of the power drive. The stability of the system is shown by a simulation of the double drive.
Background
Maternal postpartum depression has an impact on mother-infant interaction. Mothers with depression display less positive affect and sensitivity in interaction with their infants compared to non-depressed mothers. Depressed women also show more signs of distress and difficulties adjusting to their role as mothers than non-depressed women. In addition, depressive mothers are reported to be affectively more negative with their sons than with daughters.
Methods
A non-clinical sample of 106 mother-infant dyads at psychosocial risk (poverty, alcohol or drug abuse, lack of social support, teenage mothers and maternal psychiatric disorder) was investigated with the EPDS (maternal postpartum depressive symptoms), the CARE-Index (maternal sensitivity in a dyadic context) and the PSI-SF (maternal distress). The baseline data were collected when the babies had reached 19 weeks of age.
Results
A hierarchical regression analysis yielded a highly significant relation between the PSI-SF subscale "parental distress" and the EPDS total score, accounting for 55% of the variance in the EPDS. The other variables did not significantly predict the severity of depressive symptoms. A two-way ANOVA with "infant gender" and "maternal postpartum depressive symptoms" showed no interaction effect on maternal sensitivity.
Conclusions
Depressive symptoms and maternal sensitivity were not linked. It is likely that we could not find any relation between the two variables because of the different measurement methods used (self-report and observation). Maternal distress was strongly related to maternal depressive symptoms, probably due to the generally increased burden in the sample, and accounted for 55% of the variance of postpartum depressive symptoms.
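The phrase "accounting for 55% of the variance" refers to the R² of a regression step. As an illustration only, not the authors' analysis (which was a hierarchical regression on the full study data), the variance in an outcome explained by a single predictor can be sketched as follows:

```python
import numpy as np

def r_squared(x, y):
    """Proportion of variance in y explained by predictor x,
    via ordinary least squares with an intercept term."""
    X = np.column_stack([np.ones_like(x), x])          # design matrix [1, x]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # fit intercept and slope
    resid = y - X @ beta
    ss_res = float(resid @ resid)                      # residual sum of squares
    ss_tot = float(((y - y.mean()) ** 2).sum())        # total sum of squares
    return 1.0 - ss_res / ss_tot
```

In a hierarchical regression, predictors are entered in blocks and the change in R² between blocks is reported; the sketch above shows only the single-predictor case.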
Background: Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data.
Methods: In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients’ fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models, one based on conventional clinical and assessment data (CONV) and one on sensor data (SENSOR), are compared.
Results: Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores.
Conclusions: Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model’s performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
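The performance figures reported above (sensitivity, specificity, positive likelihood ratio) all derive from a standard confusion matrix. A minimal Python sketch, assuming binary fall/no-fall labels and predictions coded as 1/0 (this is not the authors' code):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity and positive likelihood ratio (+LR)
    from binary true labels and predictions (1 = faller, 0 = non-faller)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    pos_lr = sensitivity / (1 - specificity)   # +LR = sens / (1 - spec)
    return sensitivity, specificity, pos_lr
```

Note that the reported +LR values are consistent with this definition, e.g. 0.58 / (1 - 0.78) ≈ 2.64 for the SENSOR model.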
The automated transfer of flight logbook information from aircraft into aircraft maintenance systems leads to reduced ground and maintenance time and is thus desirable from an economic point of view. Until recently, flight logbooks were not managed electronically on board, or the data transfer from aircraft to ground maintenance system was executed manually. The latest aircraft types such as the Airbus A380 or the Boeing 787 do support an electronic logbook and thus make an automated transfer possible. A generic flight logbook transfer system must deal with different data formats on the input side, due to different aircraft makes and models, as well as with different, distributed aircraft maintenance systems for different airlines as aircraft operators. This article contributes the concept and top-level distributed system architecture of such a generic system for automated flight log data transfer. It has been developed within a joint industry and applied research project. The architecture has already been successfully evaluated in a prototypical implementation.
Decision support systems for traffic management systems have to cope with a high volume of events continuously generated by sensors. Conventional software architectures do not explicitly target the efficient processing of continuous event streams. Recently, event-driven architectures (EDA) have been proposed as a new paradigm for event-based applications. In this paper we propose a reference architecture for event-driven traffic management systems, which enables the analysis and processing of complex event streams in real-time and is therefore well-suited for decision support in sensor-based traffic control systems. We will illustrate our approach in the domain of road traffic management. In particular, we will report on the redesign of an intelligent transportation management system (ITMS) prototype for the high-capacity road network in Bilbao, Spain.
We compare the effect of different text segmentation strategies on speech-based passage retrieval of video. Passage retrieval has mainly been studied to improve document retrieval and to enable question answering. In these domains the best results were obtained using passages defined by the paragraph structure of the source documents or by using arbitrary overlapping passages. For the retrieval of relevant passages in a video, using speech transcripts, no author-defined segmentation is available. We compare retrieval results from four different types of segments based on the speech channel of the video: fixed-length segments, a sliding window, semantically coherent segments and prosodic segments. We evaluated the methods on the corpus of the MediaEval 2011 Rich Speech Retrieval task. Our main conclusion is that the retrieval results depend strongly on the right choice of segment length. However, results using the segmentation into semantically coherent parts depend much less on the segment length. In particular, the quality of fixed-length and sliding-window segmentation drops fast as the segment length increases, while the quality of the semantically coherent segments is much more stable. Thus, if coherent segments are defined, longer segments can be used and consequently fewer segments have to be considered at retrieval time.
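The two simplest strategies mentioned, fixed-length segments and a sliding window, can be sketched over a tokenized transcript. The parameter names `seg_len` and `step` are illustrative, not taken from the paper:

```python
def fixed_length_segments(words, seg_len):
    """Split a transcript (list of words) into consecutive,
    non-overlapping segments of seg_len words each."""
    return [words[i:i + seg_len] for i in range(0, len(words), seg_len)]

def sliding_window_segments(words, seg_len, step):
    """Overlapping segments: each window of seg_len words starts
    `step` words after the previous one."""
    return [words[i:i + seg_len]
            for i in range(0, max(len(words) - seg_len, 0) + 1, step)]
```

A smaller `step` yields more overlapping candidate passages, which is exactly the retrieval-time cost the abstract's conclusion refers to.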
Automatic classification of scientific records using the German Subject Heading Authority File (SWD)
(2012)
The following paper deals with an automatic text classification method which does not require training documents. For this method the German Subject Heading Authority File (SWD), provided by the linked data service of the German National Library, is used. Recently the SWD was enriched with notations of the Dewey Decimal Classification (DDC). As a consequence, it became possible to utilize the subject headings as textual representations for the notations of the DDC. Basically, we derive the classification of a text from the classification of the words in the text given by the thesaurus. The method was tested by classifying 3826 OAI records from 7 different repositories. Mean reciprocal rank and recall were chosen as evaluation measures. A direct comparison to a machine learning method has shown that this method is definitely competitive. Thus we can conclude that the enriched version of the SWD provides high-quality information with a broad coverage for the classification of German scientific articles.
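The mean reciprocal rank used for evaluation is a standard measure: for each document, take the reciprocal of the rank at which the correct class appears in the predicted ranking, then average. A minimal sketch (not the authors' evaluation code):

```python
def mean_reciprocal_rank(ranked_results, correct):
    """ranked_results: one ranked list of predicted labels per document;
    correct: the true label per document.  The reciprocal rank is 0
    when the true label does not appear in the ranking at all."""
    total = 0.0
    for ranking, truth in zip(ranked_results, correct):
        if truth in ranking:
            total += 1.0 / (ranking.index(truth) + 1)  # rank is 1-based
    return total / len(correct)
```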
In service-oriented architectures the management of services is a crucial task during all stages of IT operations. Based on a case study performed for a group of finance companies the different aspects of service management are presented. First, the paper discusses how services must be described for management purposes. In particular, a special emphasis is placed on the integration of legacy/non web services. Secondly, the service lifecycle that underlies service management is presented. Especially, the relation to SOA governance and an appropriate tool support by registry repositories is outlined.
Mining geriatric assessment data for in-patient fall prediction models and high-risk subgroups
(2012)
Background: Hospital in-patient falls constitute a prominent problem in terms of costs and consequences. Geriatric institutions are most often affected, and common screening tools cannot predict in-patient falls consistently. Our objectives are to derive comprehensible fall risk classification models from a large data set of geriatric in-patients’ assessment data and to evaluate their predictive performance (aim#1), and to identify high-risk subgroups from the data (aim#2).
Methods: A data set of n = 5,176 single in-patient episodes covering 1.5 years of admissions to a geriatric hospital were extracted from the hospital’s data base and matched with fall incident reports (n = 493). A classification tree model was induced using the C4.5 algorithm as well as a logistic regression model, and their predictive performance was evaluated. Furthermore, high-risk subgroups were identified from extracted classification rules with a support of more than 100 instances.
Results: The classification tree model showed an overall classification accuracy of 66%, with a sensitivity of 55.4%, a specificity of 67.1%, positive and negative predictive values of 15% resp. 93.5%. Five high-risk groups were identified, defined by high age, low Barthel index, cognitive impairment, multi-medication and co-morbidity.
Conclusions: Our results show that a little more than half of the fallers may be identified correctly by our model, but the positive predictive value is too low to be applicable. Non-fallers, on the other hand, may be sorted out with the model quite well. The high-risk subgroups and the risk factors identified (age, low ADL score, cognitive impairment, institutionalization, polypharmacy and co-morbidity) reflect domain knowledge and may be used to screen certain subgroups of patients with a high risk of falling. Classification models derived from a large data set using data mining methods can compete with current dedicated fall risk screening tools, yet lack diagnostic precision. High-risk subgroups may be identified automatically from existing geriatric assessment data, especially when combined with domain knowledge in a hybrid classification model. Further work is necessary to validate our approach in a controlled prospective setting.
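The high-risk subgroups were derived from extracted classification rules with a support of more than 100 instances. The idea of testing a rule's support against patient records can be sketched as follows; the rule shown and its thresholds are hypothetical, chosen only to mirror the reported risk factors (high age, low Barthel index, cognitive impairment):

```python
def rule_support(patients, rule):
    """Count how many patient records satisfy a classification rule
    (a boolean predicate); in the study, subgroups were retained only
    if their support exceeded 100 instances."""
    return sum(1 for p in patients if rule(p))

# Hypothetical example rule (thresholds are illustrative, not from the paper)
high_risk = lambda p: (p["age"] >= 80
                       and p["barthel"] <= 35
                       and p["cognitive_impairment"])
```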
Wearable sensors in healthcare and sensor-enhanced health information systems: all our tomorrows?
(2012)
Wearable sensor systems which allow for remote or self-monitoring of health-related parameters are regarded as one means to alleviate the consequences of demographic change. This paper aims to summarize current research in wearable sensors as well as in sensor-enhanced health information systems. Wearable sensor technologies are already advanced in terms of their technical capabilities and are frequently used for cardio-vascular monitoring. Epidemiologic predictions suggest that neuro-psychiatric diseases will have a growing impact on our health systems and thus should be addressed more intensively. Two current project examples demonstrate the benefit of wearable sensor technologies: long-term, objective measurement under daily-life, unsupervised conditions. Finally, up-to-date approaches for the implementation of sensor-enhanced health information systems are outlined. Wearable sensors are an integral part of future pervasive, ubiquitous and person-centered health care delivery. Future challenges include their integration into sensor-enhanced health information systems and sound evaluation studies involving measures of workload reduction and costs.
The present investigation was conducted to assess the in-vitro antioxidant activity of an ethanolic extract of the roots of Centaurea behens using DPPH, nitric oxide, hydrogen peroxide and hydroxyl radical scavenging assays. The results suggest that the extract possesses significant antioxidant activity compared to the standard, ascorbic acid. Further in-vivo investigation is therefore required to evaluate the medicinal significance of the extract and to assess its possible therapeutic importance.
An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from a series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images. Then the reconstructed objects no longer truly represent the original, and inside the volumes the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic handling of the measured X-rays, as well as the inclusion of a priori knowledge about the materials the examined object consists of. For each voxel, the proposed method outputs a numerical value that represents the probability that a predefined material exists at the position of the voxel during the X-ray examination. Such a probabilistic quality measure has been lacking so far. In our experiment, falsely reconstructed areas are detected by their low probability, while in exactly reconstructed areas a high probability predominates. Receiver operating characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction.
In recent years, multiple efforts for reducing energy usage have been proposed. Especially buildings offer high potentials for energy savings. In this paper, we present a novel approach for intelligent energy control that combines a simple infrastructure using low cost sensors with the reasoning capabilities of Complex Event Processing. The key issues of the approach are a sophisticated semantic domain model and a multi-staged event processing architecture leading to an intelligent, situation-aware energy management system.
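The multi-staged idea, deriving simple events from raw sensor readings and then detecting higher-level situations from patterns over those events, can be illustrated with a toy sketch. All names, thresholds and the rule itself are hypothetical; a real CEP engine would express such rules declaratively:

```python
def detect_situations(readings, threshold=24.0, run_len=3):
    """Stage 1: turn raw sensor readings into simple 'high' events.
    Stage 2: raise a complex event when run_len consecutive readings
    exceed the threshold (e.g. 'room overheating despite open window')."""
    events = [r > threshold for r in readings]     # stage 1: simple events
    situations = []
    streak = 0
    for i, high in enumerate(events):              # stage 2: pattern matching
        streak = streak + 1 if high else 0
        if streak == run_len:
            situations.append(i)                   # index where the pattern completes
    return situations
```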
In huge warehouses or stockrooms, it is often very difficult to find a certain item, because it has been misplaced and is therefore not at its assumed position. This position paper presents an approach on how to coordinate mobile RFID agents using a blackboard architecture based on Complex Event Processing.
Enterprise apps on mobile devices typically need to communicate with other system components by consuming web services. Since most current mobile device platforms (such as Android) do not provide built-in features for consuming SOAP services, extensions have to be designed. Additionally, in order to accommodate the typically enhanced security requirements of enterprise apps, it is important to be able to deal with SOAP web service security extensions on the client side. In this article we show that neither the built-in SOAP capabilities of Android web service clients are sufficient for enterprise apps, nor are the necessary security features supported by the platform as is. After discussing existing extensions that make Android devices SOAP-capable, we explain why none of them is really satisfactory in an enterprise context. We then present our own solution, which accommodates not only SOAP but also the WS-Security features on top of SOAP. Our solution relies heavily on code generation in order to keep the flexibility benefits of SOAP on the one hand while still keeping the development effort manageable on the other. Our approach also provides a good foundation for the implementation of other SOAP extensions apart from security on the Android platform. In addition, our solution, based on the gSOAP framework, may be used for other mobile platforms in a similar manner.
OSGi is a popular Java-based platform, which has its roots in the area of embedded systems. However, nowadays it is used more and more in enterprise systems. To fit this new application area, OSGi has recently been extended with the Remote Services specification. This specification enables distribution, which OSGi was previously lacking. However, the specification provides means for synchronous communication only and leaves out asynchronous communication. As an attempt to fill a gap in this field, we propose, implement and evaluate an approach for the integration of asynchronous messaging into OSGi.
Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer.
Distributional semantics tries to characterize the meaning of words by the contexts in which they occur. Similarity of words hence can be derived from the similarity of contexts. Contexts of a word are usually vectors of words appearing near to that word in a corpus. It was observed in previous research that similarity measures for the context vectors of two words depend on the frequency of these words. In the present paper we investigate this dependency in more detail for one similarity measure, the Jensen-Shannon divergence. We give an empirical model of this dependency and propose the deviation of the observed Jensen-Shannon divergence from the divergence expected on the basis of the frequencies of the words as an alternative similarity measure. We show that this new similarity measure is superior to both the Jensen-Shannon divergence and the cosine similarity in a task, in which pairs of words, taken from Wordnet, have to be classified as being synonyms or not.
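The Jensen-Shannon divergence between two context distributions is the average Kullback-Leibler divergence of each distribution from their midpoint. A minimal sketch using base-2 logarithms (so the result is in bits and bounded by 1); this shows the standard definition, not the paper's frequency-corrected variant:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits; zero-probability
    terms of p are skipped by convention (0 * log 0 = 0)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two context word distributions
    p and q (lists of probabilities summing to 1)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]   # midpoint distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike KL, the JSD is symmetric and always finite, which is why it is a popular similarity measure for sparse context vectors.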
Background:
This study examined the extent to which regulatory problems in infants at 4 and 6 months influence childhood development at 12 months. The second aim of the study was to examine the influence maternal distress has on 4-month-old children’s subsequent development as well as gender differences with regard to regulatory problems and development.
Methods:
153 mother-child dyads enrolled in the family support research project “Nobody slips through the net” constituted the comparison group. These families faced psychosocial risks (e.g. poverty, excessive demands on the mother, and mental health disorders of the mother, measured with the risk screening instrument Heidelberger Belastungsskala - HBS) and maternal stress, determined with the Parental Stress Index (PSI-SF). The children’s developmental levels and possible early regulatory problems were evaluated by means of the Ages and Stages Questionnaires (ASQ) and a German questionnaire assessing problems of excessive crying along with sleeping and feeding difficulties (SFS).
Results:
A statistically significant but weak inverse association between excessive crying, whining and sleep problems at 4 and 6 months and the social development of one-year-olds (accounting for 5% and 8% of the variance, respectively) was found. Feeding problems had no effect on development. Although regulatory problems in infants were accompanied by an increased maternal stress level, this did not serve as a predictor of the child’s social development at 12 months. One-year-old girls reached a higher level of development in social and fine motor skills. No gender differences were found with regard to regulatory problems, nor any moderating effect of gender on the relation between regulatory problems and level of development.
Conclusions:
Our results reinforce existing knowledge pertaining to the transactional association between regulatory problems in infants, maternal distress and dysfunctionality of mother-child interactions. They also provide evidence of a slight but distinct negative influence of crying and sleeping problems on children’s subsequent social development. Easily accessible support services provided by family health visitors (particularly to the so-called “at-risk families”) are strongly recommended to help prevent the broadening of children’s early regulatory problems into other areas of behavior.
Regional Innovation Systems describe the relations between actors, structures and infrastructures in a region in order to stimulate innovation and regional development. For these systems the collection and organization of information is crucial. In the present paper we investigate the possibilities to extract information from websites of companies. First we describe regional innovation systems and the information types that are necessary to create them. Then we discuss the possibilities of text mining and keyword extraction techniques to extract this information from company websites. Finally, we describe a small scale experiment in which keywords related to economic sectors and commodities are extracted from the websites of over 200 companies. This experiment shows what the main challenges are for information extraction from websites for regional innovation systems.
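A baseline for the keyword extraction step described above is simple frequency ranking over the website text after stopword filtering; the actual techniques evaluated in the paper may be more sophisticated. A minimal sketch:

```python
import re
from collections import Counter

def extract_keywords(text, stopwords, top_n=5):
    """Naive frequency-based keyword extraction from website text:
    lowercase, tokenize, drop stopwords and very short tokens,
    then rank the remaining tokens by raw count."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens
                     if t not in stopwords and len(t) > 3)
    return [word for word, _ in counts.most_common(top_n)]
```

In practice, sector- and commodity-related keywords would additionally be matched against a controlled vocabulary rather than ranked by frequency alone.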
Complications may occur after a liver transplantation; therefore, proper monitoring and care in the post-operative phase play a very important role. Sometimes, monitoring and care for patients from abroad is difficult for a variety of reasons, e.g., different care facilities. The objective of our research for this paper is to design, implement and evaluate a home monitoring and decision support infrastructure for international children who underwent a liver transplant operation. A point-of-care device and the PedsQL questionnaire were used in the patients’ home environment to measure blood parameters and assess quality of life. Using a tablet PC and specially developed software, the measured results were transmitted to the health care providers via the internet. So far, the developed infrastructure has been evaluated with four international patients/families, transferring 38 records of blood tests. The evaluation showed that the home monitoring and decision support infrastructure is technically feasible, is able to give a timely alarm in case of abnormal situations, and may increase parents’ feeling of safety for their children.
Fall events and their severe consequences represent not only a threatening problem for the affected individual, but also cause a significant burden for health care systems. Our research work aims to elucidate some of the prospects and problems of current sensor-based fall risk assessment approaches. Selected results of a questionnaire-based survey given to experts during topical workshops at international conferences are presented. The majority of domain experts confirmed that fall risk assessment could potentially be valuable for the community and that prediction is deemed possible, though limited. We conclude with a discussion of practical issues concerning adequate outcome parameters for clinical studies and data sharing within the research community. All participants agreed that sensor-based fall risk assessment is a promising and valuable approach, but that more prospective clinical studies with clearly defined outcome measures are necessary.