The miniaturized Mössbauer spectrometer (MIMOS II), originally devised by Göstar Klingelhöfer, is being further developed by the Renz group at Leibniz University Hannover in cooperation with the Hannover University of Applied Sciences and Arts. A new processing unit with two-dimensional (2D) data acquisition was developed by M. Jahns. The advantage of this acquisition scheme is that no thresholds need to be set before the measurement: the energy of each photon is determined and stored together with the velocity of the drive, and the relevant energy region can be selected for the Mössbauer spectrum after the measurement. We have now extended the evaluation unit with a power supply for a MIMOS drive and a MIMOS PIN detector, resulting in a very compact MIMOS transmission measurement setup. With this setup it is possible to process the signals of two detectors serially; parallel signal processing is currently under development.
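The 2D acquisition scheme can be pictured as filling a histogram over (velocity channel, photon energy) for every detected event and only afterwards projecting an energy window onto the velocity axis. Below is a minimal Python sketch under that reading; the channel counts, the event stream, and the energy window are invented for illustration and do not come from the instrument.

```python
import numpy as np

# Assumed channel counts (illustrative, not instrument specifications).
N_VEL_CHANNELS = 512       # velocity channels of the Doppler drive
N_ENERGY_CHANNELS = 1024   # ADC channels for the photon energy

# Synthetic event stream: one (velocity channel, energy channel) pair per photon.
rng = np.random.default_rng(0)
velocity_ch = rng.integers(0, N_VEL_CHANNELS, size=100_000)
energy_ch = rng.integers(0, N_ENERGY_CHANNELS, size=100_000)

# 2D acquisition: accumulate every event, no energy threshold set beforehand.
hist2d = np.zeros((N_VEL_CHANNELS, N_ENERGY_CHANNELS), dtype=np.int64)
np.add.at(hist2d, (velocity_ch, energy_ch), 1)

# After the measurement, select the relevant energy window (e.g. around the
# 14.4 keV line of 57Fe) and project onto the velocity axis.
e_lo, e_hi = 300, 360      # assumed window in ADC channels
moessbauer_spectrum = hist2d[:, e_lo:e_hi].sum(axis=1)
```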
In recent years, generative models have attracted broad public attention due to the high quality of the images they generate. In short, a generative model learns a distribution from a finite number of samples and can then generate arbitrarily many new samples. Applied to image data, generative models were long unable to produce realistic images, but today the results are almost indistinguishable from real photographs.
This work provides a comparative study of three generative models: the Variational Autoencoder (VAE), the Generative Adversarial Network (GAN), and Diffusion Models (DM). The goal is not to provide a definitive ranking of the three, but to decide qualitatively, and where possible quantitatively, how well each model performs with respect to a given criterion. Such criteria include realism, generalization and diversity, sampling, training difficulty, parameter efficiency, interpolation and inpainting capabilities, semantic editing, as well as implementation difficulty. After a brief introduction to how each model works internally, the models are compared against each other. The provided images help to illustrate the differences among the models with respect to each criterion.
To give a short outlook on the results of the comparison: DMs generate the most realistic images; they seem to generalize best and show high variation among the generated images. However, they are based on an iterative process, which makes them the slowest of the three models in terms of sample generation time. GANs and VAEs, on the other hand, generate their samples in a single forward pass. The images generated by GANs are comparable to those of DMs, while the images from VAEs are blurry, which makes them less desirable in comparison. However, both the VAE and the GAN stand out from the DMs with respect to interpolation and semantic editing, as they have a latent space: this makes latent-space walks possible, and the changes are not as chaotic as with DMs. Furthermore, concept vectors can be found that transform a given image along a given feature while leaving other features and structures mostly unchanged, which is difficult to achieve with DMs.
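The latent-space operations named above (interpolation and concept vectors) can be sketched compactly. The snippet below assumes some trained decoder or generator `decode(z)`; it is a placeholder, not code from the study, and the latent dimension and scaling factors are arbitrary.

```python
import numpy as np

def decode(z: np.ndarray):
    """Stand-in for a trained VAE decoder or GAN generator."""
    ...  # would return an image for the latent code z

z_a = np.random.randn(128)   # latent code of image A
z_b = np.random.randn(128)   # latent code of image B

# Linear interpolation: a smooth latent-space walk between two images.
frames = [decode((1 - t) * z_a + t * z_b) for t in np.linspace(0.0, 1.0, 8)]

# Concept vector: difference of latent means of two groups of images,
# e.g. with and without some attribute (synthetic latents here).
z_with = np.random.randn(100, 128)
z_without = np.random.randn(100, 128)
concept = z_with.mean(axis=0) - z_without.mean(axis=0)

# Moving along the concept vector changes that feature while leaving
# other features and structures mostly unchanged.
edited = decode(z_a + 1.5 * concept)
```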
The shift towards renewable energy sources (RES) introduces challenges related to power system stability due to the characteristics of inverter-based resources (IBRs) and the intermittent nature of renewable resources. This paper addresses these challenges by conducting comprehensive time- and frequency-domain simulations on the IEEE two-area benchmark power system with detailed type 4 wind turbine generators (WTGs), including turbines, generators, converters, filters, and controllers. The simulations analyse small-signal and transient stability, considering variations in active and reactive power, short-circuit events, and wind variations. Metrics such as rate of change of frequency (RoCoF), frequency nadir, percentage of frequency variation, and the probability density function (PDF) are used to evaluate system performance. The findings emphasise the importance of including detailed models of RES in stability analyses and demonstrate the impact of RES penetration on power system dynamics. This study contributes to a deeper understanding of RES integration challenges and provides insights for ensuring the reliable and secure operation of power systems with high levels of RES penetration.
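To make the frequency metrics concrete, the following Python sketch computes RoCoF, frequency nadir, and the percentage of frequency variation from a frequency trace; the trace is synthetic and does not reproduce any result of the paper.

```python
import numpy as np

# Synthetic post-disturbance frequency trace f(t) around a 50 Hz nominal.
t = np.linspace(0.0, 10.0, 1001)                  # time in s
f = 50.0 - 0.4 * np.exp(-0.5 * t) * np.sin(t)     # frequency in Hz (assumed)

f_nominal = 50.0
rocof = np.gradient(f, t)                         # rate of change of frequency, Hz/s

freq_nadir = f.min()                              # lowest frequency reached
max_rocof = np.abs(rocof).max()                   # worst-case RoCoF
pct_variation = 100.0 * (f_nominal - freq_nadir) / f_nominal

print(f"nadir = {freq_nadir:.3f} Hz, max RoCoF = {max_rocof:.3f} Hz/s, "
      f"variation = {pct_variation:.2f} %")
```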
Ability of Black-Box Optimisation to Efficiently Perform Simulation Studies in Power Engineering
(2023)
In this study, the potential of so-called black-box optimisation (BBO) to increase the efficiency of simulation studies in power engineering is evaluated. Three algorithms ("Multilevel Coordinate Search" (MCS) and "Stable Noisy Optimization by Branch and Fit" (SNOBFIT) by Huyer and Neumaier, and "blackbox: A Procedure for Parallel Optimization of Expensive Black-box Functions" (blackbox) by Knysh and Korkolis) are implemented in MATLAB and compared on two use cases: the analysis of the maximum rotational speed of a gas turbine after a load rejection, and the identification of transfer function parameters from measurements. The first use case has a high computational cost, whereas the second is computationally cheap. For each run of the algorithms, the accuracy of the found solution, the number of simulations or function evaluations needed to determine the optimum, and the overall runtime are used to assess the potential of the algorithms in comparison to currently used methods. All methods provide solutions for potential optima that are at least 99.8% accurate compared to the reference methods. The number of evaluations of the objective functions differs significantly but cannot be compared directly, as only the SNOBFIT algorithm stops when the found solution no longer improves, whereas the other algorithms use a predefined number of function evaluations. Accordingly, SNOBFIT has the shortest runtime for both examples. For computationally expensive simulations, it is shown that parallelisation of the function evaluations (SNOBFIT and blackbox) and quantisation of the input variables (SNOBFIT) are essential for algorithmic performance. For the gas turbine overspeed analysis, only SNOBFIT can compete with the reference procedure in terms of runtime. Further studies will have to investigate whether the quantisation of input variables can be applied to other algorithms and whether BBO algorithms can outperform the reference methods on problems of higher dimensionality.
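Two of the techniques highlighted above, quantisation of the input variables and parallel evaluation of an expensive objective, can be illustrated generically. The Python sketch below uses a plain random search on an invented objective; it is not MCS, SNOBFIT, or blackbox, and every name and number is illustrative.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def expensive_simulation(x):
    """Stand-in for a costly simulator run (e.g. a turbine model)."""
    return (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2

def quantize(x, step=0.05):
    """Snap candidates to a grid so near-identical runs are avoided."""
    return np.round(x / step) * step

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    candidates = quantize(rng.uniform(-1.0, 1.0, size=(64, 2)))
    with ProcessPoolExecutor() as pool:        # evaluate candidates in parallel
        values = list(pool.map(expensive_simulation, candidates))
    best = candidates[int(np.argmin(values))]
    print("best candidate:", best, "objective:", min(values))
```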
The digital transformation, with its new technologies and changed customer expectations, has a significant effect on the customer channels in the insurance industry. The objective of this study is to identify enabling and hindering factors for the adoption of online claim notification services, which are an important part of the customer experience in insurance. For this purpose, we conducted a quantitative cross-sectional survey based on the exemplary scenario of car insurance in Germany and analyzed the data via structural equation modeling (SEM). The findings show that, besides classical technology acceptance factors such as perceived usefulness and ease of use, digital mindset and status quo behavior play a role: acceptance of digital innovations, lacking endurance, as well as lacking frustration tolerance with the status quo lead to a higher intention to use. Moreover, the results are strongly moderated by the severity of the damage event, an insurance-specific factor that has been sparsely considered so far. The latter discovery implies that customers prefer to choose a communication channel based on the individual circumstances of the claim.
During the coronavirus pandemic, information traditionally used for corporate credit risk analysis (e.g., from the analysis of balance sheets and payment behavior) became less valuable because it represents only past circumstances. Therefore, the use of currently published data from social media platforms, which have been shown to contain valuable information regarding the financial stability of companies, should be evaluated. This data may contain, for example, additional information from disappointed employees or customers. In order to analyze to what extent this data can improve the information base for corporate credit risk assessment, this paper analyzes Twitter data regarding the ten largest insolvencies of German companies in 2020 and solvent counterparts. The results from t-tests show that sentiment before the insolvencies is significantly worse than in the comparison group, which is in line with previously conducted research. Furthermore, companies can be classified as prospectively solvent or insolvent with up to 70% accuracy by applying the k-nearest-neighbor algorithm to monthly aggregated sentiment scores. No significant differences in the number of Tweets between the two groups can be proven, which is in contrast to findings from studies conducted before the pandemic. The results can be used by practitioners and scientists to improve decision support systems in the domain of corporate credit risk analysis. From a scientific point of view, the results show that the information asymmetry between lenders and borrowers in credit relationships, which are principals and agents according to principal-agent theory, can be reduced based on user-generated content from social media platforms. Future studies should evaluate to what extent this data can be integrated into established processes for credit decision making. Furthermore, additional social media platforms as well as other samples of companies should be analyzed. Lastly, the authenticity of user-generated content should be taken into account in order to ensure that credit decisions rely on truthful information only.
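The classification step described above can be sketched compactly. The following Python snippet applies a k-nearest-neighbor classifier to monthly aggregated sentiment scores; the scores are synthetic stand-ins, not the Twitter data analyzed in the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic monthly sentiment scores: one row per company, one column per month.
rng = np.random.default_rng(42)
n_months = 12
insolvent = rng.normal(-0.2, 0.3, size=(10, n_months))  # worse sentiment on average
solvent = rng.normal(0.1, 0.3, size=(10, n_months))

X = np.vstack([insolvent, solvent])
y = np.array([1] * 10 + [0] * 10)                       # 1 = later insolvent

knn = KNeighborsClassifier(n_neighbors=3)
acc = cross_val_score(knn, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```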
Monitoring of clinical trials is a fundamental process required by regulatory agencies. It assures the compliance of a center with the required regulations and the trial protocol. Traditionally, monitoring teams relied on extensive on-site visits and source data verification. However, this is costly, and the outcome is limited. Thus, central statistical monitoring (CSM) is an additional approach, recently embraced by the International Council for Harmonisation (ICH), to detect problematic or erroneous data using visualizations and statistical control measures. Existing implementations have primarily focused on detecting inlier and outlier data; other approaches include principal component analysis and the distribution of the data. Here, we focus on comparisons of centers to the grand mean for different model types and assumptions, covering common data types such as binomial, ordinal, and continuous response variables. We implement multiple comparisons of single centers to the grand mean of all centers. This approach is also available for various non-normal data types that are abundant in clinical trials. Further, using confidence intervals, an assessment of equivalence to the grand mean can be applied. In a Monte Carlo simulation study, the applied statistical approaches were investigated for their ability to control the type I error, and their respective power was assessed for balanced and unbalanced designs, which are common in registry data and clinical trials. Data from the German Multiple Sclerosis Registry (GMSR), including proportions of missing data, adverse events, and disease severity scores, were used to verify the results on real-world data (RWD).
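As a simple illustration of the comparison idea, the Python sketch below compares each center's mean to the grand mean of all centers using t-based confidence intervals. The data are synthetic and continuous; the models in the paper also cover binomial and ordinal responses and proper multiplicity adjustment.

```python
import numpy as np
from scipy import stats

# Synthetic centers: center_3 deviates from the others by construction.
rng = np.random.default_rng(7)
centers = {
    f"center_{i}": rng.normal(10.0 + (0.8 if i == 3 else 0.0), 2.0,
                              size=int(rng.integers(20, 60)))
    for i in range(8)
}

grand_mean = np.mean(np.concatenate(list(centers.values())))

for name, x in centers.items():
    se = x.std(ddof=1) / np.sqrt(len(x))
    lo, hi = stats.t.interval(0.95, df=len(x) - 1, loc=x.mean(), scale=se)
    flag = "" if lo <= grand_mean <= hi else "  <-- deviates from grand mean"
    print(f"{name}: mean={x.mean():.2f}, 95% CI=({lo:.2f}, {hi:.2f}){flag}")
```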
Freedom of the press, freedom of information, and freedom of expression are interconnected fundamental rights. The right to inform oneself freely safeguards the right to free expression and thus a free press. The changing press situations around the world pose challenges not only for press freedom. The ability to process the information one receives enables and secures the information literacy of the population, and exercising this competence is, in turn, the basis for active participation in democratic processes. The aim of this thesis is to examine, on the basis of the press freedom ranking of Reporter ohne Grenzen (Reporters Without Borders), the freedom-of-information data of the DA2I dashboard, and literature-based criteria, how the state of press and information freedom affects the library services of public libraries. The results show that these conditions do not affect the media collection alone: the state of the aforementioned fundamental rights in a country also influences libraries' event programming.
Purpose: Radiology reports mostly contain free-text, which makes it challenging to obtain structured data. Natural language processing (NLP) techniques transform free-text reports into machine-readable document vectors that are important for creating reliable, scalable methods for data analysis. The aim of this study is to classify unstructured radiograph reports according to fractures of the distal fibula and to find the best text mining method.
Materials & Methods: We established a novel German language report dataset: a designated search engine was used to identify radiographs of the ankle and the reports were manually labeled according to fractures of the distal fibula. This data was used to establish a machine learning pipeline, which implemented the text representation methods bag-of-words (BOW), term frequency-inverse document frequency (TF-IDF), principal component analysis (PCA), non-negative matrix factorization (NMF), latent Dirichlet allocation (LDA), and document embedding (doc2vec). The extracted document vectors were used to train neural networks (NN), support vector machines (SVM), and logistic regression (LR) to recognize distal fibula fractures. The results were compared via cross-tabulations of the accuracy (acc) and area under the curve (AUC).
Results: In total, 3268 radiograph reports were included, of which 1076 described a fracture of the distal fibula. Comparison of the text representation methods showed that BOW achieved the best results (AUC = 0.98; acc = 0.97), followed by TF-IDF (AUC = 0.97; acc = 0.96), NMF (AUC = 0.93; acc = 0.92), PCA (AUC = 0.92; acc = 0.9), LDA (AUC = 0.91; acc = 0.89), and doc2vec (AUC = 0.9; acc = 0.88). When comparing the different classifiers, NN (AUC = 0.91) proved to be superior to SVM (AUC = 0.87) and LR (AUC = 0.85).
Conclusion: An automated classification of unstructured reports of radiographs of the ankle can reliably detect findings of fractures of the distal fibula. A particularly suitable feature extraction method is the BOW model.
Key Points:
- The aim was to classify unstructured radiograph reports according to distal fibula fractures.
- Our automated classification system can reliably detect fractures of the distal fibula.
- A particularly suitable feature extraction method is the BOW model.
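For readers unfamiliar with the pipeline, the best-performing combination of ideas (bag-of-words features feeding a simple classifier) can be sketched in a few lines with scikit-learn. The two example reports and their labels are invented; the study itself used 3268 German-language ankle radiograph reports and also evaluated neural networks and SVMs.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented mini-corpus: 1 = report describes a distal fibula fracture.
reports = [
    "Fraktur der distalen Fibula, deutliche Dislokation.",
    "Kein Anhalt fuer eine ossaere Laesion, regelrechte Artikulation.",
]
labels = [1, 0]

# Bag-of-words (BOW) feature extraction followed by logistic regression.
pipe = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
pipe.fit(reports, labels)
print(pipe.predict(["Distale Fibulafraktur ohne Dislokation."]))
```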
The aim of this cross-sectional study was to investigate the occurrence of bacteremia in severe mastitis cases of dairy cows. Milk and corresponding blood samples from 77 cases of severe mastitis were examined bacteriologically. All samples (milk and blood) were incubated both aerobically and anaerobically in order to investigate the role of obligate anaerobic microorganisms, in addition to aerobic microorganisms, in severe mastitis. Bacteremia was defined as the isolation of identical bacterial strains from the milk and blood samples of the same case. In addition, pathogen shedding was examined, and animal and weather data were collected to determine factors associated with the occurrence of bacteremia in severe mastitis. If Gram-negative bacteria were detected in a milk sample, a Limulus test (detection of endotoxins) was also performed on the corresponding blood sample if it showed no growth of Gram-negative bacteria. In 74 cases (96.1%), microbial growth was detected in aerobically incubated milk samples. The bacteria most frequently isolated from milk samples were Escherichia (E.) coli (48.9%), Streptococcus (S.) spp. (18.1%), and Klebsiella (K.) spp. (16%). Obligate anaerobic microorganisms were not isolated. In 72 cases (93.5%), microbial growth was detected in the aerobically examined blood samples. The pathogens most frequently isolated from blood samples were non-aureus staphylococci (NaS) (40.6%) and Bacillus spp. (12.3%). The Limulus test was positive in 60.5% of cases, i.e., endotoxins were detected in most blood samples without growth of Gram-negative bacteria. Bacteremia was confirmed in 12 cases (15.5%), involving K. pneumoniae (5/12), E. coli (4/12), S. dysgalactiae (2/12), and S. uberis (1/12). The mortality rate (deceased or culled) was 66.6% for cases with bacteremia and 34.1% for cases without bacteremia. High pathogen shedding and high humidity were associated with the occurrence of bacteremia in severe mastitis.
Theoretical background: In 2022, 45.5 million people regularly spent time in businesses or companies. Besides providing an income and psychosocial resources, a workplace can also mean stress and a burden on health. At the same time, the world of work offers good conditions for applying preventive measures to maintain health.
Objective: The aim of this thesis is to examine the operational situation and company attitudes towards occupational health management (Betriebliches Gesundheitsmanagement, BGM) and to identify the motives for participating in interventions of workplace health promotion (Betriebliche Gesundheitsförderung, BGF).
Methods: Five BGM staff members from companies in various industries and different regions of Germany were interviewed using a semi-structured guide. In addition, employees were asked about their participation behavior in an online survey. The qualitative data were analyzed using Mayring's content analysis. For the quantitative results, descriptive statistics were compiled and correlation analyses performed.
Results: Even when the BGF measure on offer matches expectations, employee participation varies between 15.4% and 100.0% depending on the offer. Besides the topic itself, it is considered important, for example, that BGF offers take place during working hours, that the distance to be covered is as short as possible, that the costs are fully covered, and that the offers are advertised through as many channels as possible. An employer's commitment to destigmatizing mental health issues and help-seeking is rated positively, while the influence of the attitudes of managers and colleagues is considered less strong. Significant differences could be identified.
Conclusion: Advancing BGM within a company requires deeper involvement of management levels as well as a reflection on the understanding of BGM and the intentions associated with it. Large companies in particular must become aware of the complexity of their workforce and its needs. Smaller companies should design targeted offers in direct consultation with their employees.
One of the main concerns of this publication is to furnish a more rational basis for discussing bioplastics and to promote fact-based arguments in the public discourse. Furthermore, "Biopolymers – facts and statistics" aims to provide specific, qualified answers easily and quickly, in particular for decision-makers from public administration and the industrial sector. To this end, the publication is structured like a set of rules and standards and largely foregoes running text. It offers extensive market-relevant and technical facts presented in graphs and charts, which makes the information much easier to grasp. The reader can expect comparative market figures for various materials, regions, applications, process routes, agricultural land use, water use and resource consumption, production capacities, geographic distribution, and more.
Bluetooth is a widely used wireless transmission protocol found in many mobile devices such as tablets, headphones, and smartwatches. Bluetooth-enabled devices broadcast public advertisements several times per minute, which contain, among other things, the device's unique MAC address. Recording these advertisements with Bluetooth loggers makes it possible to analyze the movements of the devices and thus draw conclusions about the movements of their owners.
To protect privacy, randomly generated MAC addresses have been used in advertisements since 2014. A so-called randomized MAC address remains valid for an average of 15 minutes and is then replaced by a new random address, so a device's location at a later point in time cannot be determined. Nevertheless, a device's transition from one Bluetooth logger to another within these 15 minutes can be detected, and a movement of the device can thus be inferred.
Through contact-tracing apps such as the Corona-Warn-App (CWA), even supposedly inactive smartphones send Bluetooth advertisements. Accounting for about a quarter of the recordings, the CWA supports the analyses of this experimental work.
To demonstrate practical applicability, the Erlebniszoo Hannover was used as a test site. The evaluation of the data collected over seven weeks enabled the analysis of peak times, heavily visited locations, and visitor flows.
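The core detection step, inferring a movement when the same randomized MAC address is seen by two different loggers while the address is still valid, can be sketched as follows. The MAC address, logger names, and timestamps are invented example data.

```python
from datetime import datetime, timedelta

# Invented sightings: (MAC address, logger ID, timestamp).
sightings = [
    ("4A:BB:CC:11:22:33", "logger_entrance", datetime(2023, 6, 1, 10, 0)),
    ("4A:BB:CC:11:22:33", "logger_aviary",   datetime(2023, 6, 1, 10, 9)),
]

VALIDITY = timedelta(minutes=15)   # average lifetime of a randomized MAC

def movements(records):
    """Yield inferred movements: same MAC at two loggers within VALIDITY."""
    last_seen = {}
    for mac, logger, ts in sorted(records, key=lambda r: r[2]):
        prev = last_seen.get(mac)
        if prev and prev[0] != logger and ts - prev[1] <= VALIDITY:
            yield mac, prev[0], logger, ts - prev[1]
        last_seen[mac] = (logger, ts)

for mac, src, dst, dt in movements(sightings):
    print(f"{mac}: {src} -> {dst} within {dt}")
```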
Social comparison theories suggest that ingroups are strengthened whenever important outgroups are weakened (e.g., by losing status or power). It follows that ingroups have little reason to help outgroups facing an existential threat. We challenge this notion by showing that ingroups can also be weakened when relevant comparison outgroups are weakened, which can motivate ingroups to strategically offer help to ensure the outgroups' survival as a highly relevant comparison target. In three preregistered studies, we showed that an existential threat to an outgroup with high (vs. low) identity relevance affected strategic outgroup helping via two opposing mechanisms. The potential demise of a highly relevant outgroup increased participants’ perceptions of ingroup identity threat, which was positively related to helping. At the same time, the outgroup’s misery evoked schadenfreude, which was negatively related to helping. Our research exemplifies a group's secret desire for strong outgroups by underlining their importance for identity formation.
Clio-Guide: Bibliotheken
(2023)
This chapter defines the term library, explains the most important tasks and services of libraries, and presents the key elements of the German library system. In addition, the most important types of library information resources and systems are presented typologically. The examples discussed take particular account of the needs of research and teaching in the historically oriented disciplines.
In this paper, we describe the selection of a modern build automation tool for an industry research partner of ours, an insurance company. Build automation has become increasingly important over the years and is today one of the central concepts in topics such as cloud-native development based on microservices and DevOps. Since more and more products for build automation have entered the market and existing tools have changed their functional scope, a large number of tools are now available that differ greatly in functionality. Based on requirements from our partner company, a build server analysis was conducted. This paper presents our analysis requirements, a detailed look at one of the examined tools, and a summary of our comparison of the three tools from the final comparison round.
This bachelor's thesis addresses the question of how a CSR strategy can be integrated into the identity-based brand management of trade fair company brands. In particular, the formation and design of the brand identity was examined.
To connect the brand identity of a trade fair umbrella brand with social responsibility and to show the points of contact and options for integration, the model of responsible brand management by Schmidt (2019) and the overall model of trade fair umbrella brand identity by Jung (2010) were used and linked both conceptually and visually.
The resulting model offers an approach to integrating CSR strategies into the identity-based brand management of trade fair companies. It is fundamental that characteristics of social responsibility are part of the entire process of identity creation and should therefore be anchored at the core of the identity.
Das Gesundheitsdatennutzungsgesetz – Potenzial für eine bessere Forschung und Gesundheitsversorgung
(2023)
The coalition agreement of the "Ampel" coalition announces a Health Data Use Act (Gesundheitsdatennutzungsgesetz, GDNG) for the current legislative period. This law is intended to lead "to better scientific use in line with the GDPR". It is well known that our health system faces major challenges (demographics, digitalization, skills shortage, climate crisis, regional disparities, etc.) and is already the most expensive in Europe while delivering mediocre performance. These challenges can be met more efficiently and in an evidence-guided manner if, as envisaged in the planned GDNG, the data resources are used optimally for evaluating and further developing the health system and health care. In the following, prerequisites and desiderata for an optimal design of the law are formulated from the perspective of health services researchers. The paper was prepared by the German Network for Health Services Research (Deutsches Netzwerk Versorgungsforschung, DNVF) and the working group on the Collection and Use of Secondary Data (AGENS) of the German Society for Social Medicine and Prevention (DGSMP) and the German Society for Epidemiology (DGEpi), and is supported by the signatory professional societies. This position paper and the demands made here were formulated before publication of the draft bill for the GDNG and thus without knowledge of its contents.
Companies that engage seriously with sustainability must provide evidence that they generate positive effects for society. This makes holistic impact measurement indispensable. Social enterprises should serve as role models for such impact measurement. However, a scientific study based on the so-called "Ergebnispyramide" (results pyramid) concludes that even they have so far hardly measured their impact holistically.
There are many aspects of code quality, some of which are difficult to capture or to measure. Despite the importance of software quality, there is a lack of commonly accepted measures or indicators for code quality that can be linked to quality attributes. We investigate software developers' perceptions of source code quality and the practices they recommend to achieve these qualities. We analyze data from semi-structured interviews with 34 professional software developers, programming teachers, and students from Europe and the U.S. For the interviews, participants were asked to bring code examples to exemplify what they consider good and bad code, respectively. Readability and structure were most commonly used as defining properties of quality code. Together with documentation, they were also suggested as the most common target properties for quality improvement. When discussing actual code, developers focused on structure, comprehensibility, and readability as quality properties. When analyzing relationships between properties, the most frequently discussed target property was comprehensibility; documentation, structure, and readability were named most frequently as source properties for achieving good comprehensibility. Some of the most important source code properties contributing to code quality as perceived by developers lack clear definitions and are difficult to capture. More research is therefore necessary to measure the structure, comprehensibility, and readability of code in ways that matter for developers, and to relate these measures to common software quality attributes.