Purpose: The calculation of aggregated composite measures is a widely used strategy to reduce the amount of data on hospital report cards. This study therefore aims to elicit and compare the preferences of patients and referring physicians regarding publicly available hospital quality information.
Methods: Based on systematic literature reviews and qualitative analyses, two discrete choice experiments (DCEs) were conducted to elicit patients’ and referring physicians’ preferences. Both DCEs used a fractional factorial design, and the data were analyzed using multinomial logit models.
Results: In addition to five attributes shared by both groups, one group-specific attribute was identified for each study group. Overall, 322 patients (mean age 68.99) and 187 referring physicians (mean age 53.60) were included. Our models yielded significant coefficients for all attributes (p < 0.001 each). Among patients, “Postoperative complication rate” was rated highest (20.6%; level range 1.164), followed by “Mobility at hospital discharge” (19.9%; level range 1.127) and “The number of cases treated” (18.5%; level range 1.045). In contrast, referring physicians valued “One-year revision surgery rate” most highly (30.4%; level range 1.989), followed by “The number of cases treated” (21.0%; level range 1.372) and “Postoperative complication rate” (17.2%; level range 1.123).
Conclusion: We found considerable differences between the two study groups in the relative value assigned to publicly available hospital quality information. This may have an impact when aggregated composite measures are calculated on the basis of consumer-based weighting.
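The relative-importance percentages in the abstract above follow directly from the level ranges: each attribute's share is its level range (the spread between its highest and lowest part-worth utility) divided by the sum of all attributes' level ranges. A minimal sketch of that calculation, using the three patient-side values reported above; since the abstract lists only three of the six attributes, the implied total (about 5.65) is backed out from the reported 20.6% share rather than summed:

```python
def relative_importance(level_ranges, total=None):
    """Each attribute's share of the summed level ranges (part-worth spreads)."""
    if total is None:
        total = sum(level_ranges.values())
    return {attr: rng / total for attr, rng in level_ranges.items()}

# Patient-side level ranges reported in the abstract (three of the six
# attributes; the implied total is recovered from the 20.6% share).
patient_ranges = {
    "Postoperative complication rate": 1.164,
    "Mobility at hospital discharge": 1.127,
    "The number of cases treated": 1.045,
}
implied_total = 1.164 / 0.206  # ~5.65
shares = relative_importance(patient_ranges, total=implied_total)
for attr, share in shares.items():
    print(f"{attr}: {share:.1%}")  # 20.6%, 19.9%, 18.5%
```

The same normalization reproduces the physicians' figures (e.g. 1.989 / (1.989 / 0.304) = 30.4%), which is why the two groups' percentages are only comparable within, not across, groups.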
Purpose
This study aims to determine the intention to use hospital report cards (HRCs) for hospital referral purposes in the presence or absence of patient-reported outcomes (PROs) as well as to explore the relevance of publicly available hospital performance information from the perspective of referring physicians.
Methods
We identified the most relevant information for hospital referral purposes based on a literature review and qualitative research. Primary survey data were collected (May–June 2021) on a sample of 591 referring orthopedists in Germany and analyzed using structural equation modeling. Participating orthopedists were recruited using a sequential mixed-mode strategy and randomly allocated to work with HRCs in the presence (intervention) or absence (control) of PROs.
Results
Overall, 420 orthopedists (mean age 53.48, SD 8.04) were included in the analysis. The presence of PROs on HRCs was not associated with an increased intention to use HRCs (p = 0.316). Performance expectancy was shown to be the most important determinant of HRC use (path coefficient: 0.387, p < 0.001). However, referring physicians have doubts as to whether HRCs can help them. We identified “complication rate” and “the number of cases treated” as the most important factors for hospital referral decision making; PROs were rated slightly less important.
Conclusions
This study underpins the purpose of HRCs, namely to support referring physicians in searching for a hospital. Nevertheless, only a minority would support the use of HRCs in their current form for the next hospital search. We showed that presenting relevant information on HRCs did not increase the intention to use them.
Objective: To evaluate the impact of different dissemination channels on the awareness and usage of hospital performance reports among referring physicians, as well as the usefulness of such reports from the referring physicians’ perspective.
Data sources/Study setting: Primary data collected from a survey of 277 referring physicians (response rate = 26.2%) in Nuremberg, Germany (March–June 2016).
Study design: Cluster-randomised controlled trial at the practice level. Physician practices were randomly assigned to one of two conditions: (1) physicians in the control arm could become aware of the performance reports via mass media channels (Mass Media; 132 practices, 147 physicians); (2) physicians in the intervention arm additionally received a printed version of the report by mail (Mass and Special Media; 117 practices, 130 physicians).
Principal findings: Overall, 68% of respondents recalled hospital performance reports and 21% used them for referral decisions. Physicians from the Mass and Special Media group were more likely to be aware of the performance reports (OR 4.16; 95% CI 2.16–8.00, p < .001) but not more likely to be influenced when referring patients to hospitals (OR 1.73; 95% CI 0.72–4.12, p > .05). On a 1 (very good) to 6 (insufficient) scale, the usefulness of the performance reports was rated 3.67 (±1.40). Aggregated presentation formats were rated as more helpful than detailed hospital quality information.
Conclusions: Hospital quality reports have limited impact on referral practices. To increase that impact, the concerns raised by referring physicians must be given more weight. These concerns principally relate to the underlying data, the design of the reports, and the lack of important information.
Background: Physician-rating websites have become a popular tool for creating more transparency about the quality of health care providers. So far, it remains unknown whether online rating websites have the potential to contribute to a better standard of care.
Objective: Our goal was to examine which health care providers use online rating websites and for what purposes, and whether health care providers use online patient ratings to improve patient care.
Methods: We conducted an online cross-sectional study by surveying 2360 physicians and other health care providers (September 2015). In addition to descriptive statistics, we performed multilevel logistic regression to ascertain the effects of providers' demographics as well as report-card-related variables on the likelihood that providers implement measures to improve patient care.
Results: Overall, more than half of the responding providers (54.66%, 1290/2360) used online ratings to derive measures to improve patient care (implemented measures: mean 3.06, SD 2.29). Ophthalmologists (68%, 40/59) and gynecologists (65.4%, 123/188) were most likely to implement any measures. The most widely implemented quality measures related to communication with patients (28.77%, 679/2360), the appointment scheduling process (23.60%, 557/2360), and office workflow (21.23%, 501/2360). Scaled-survey results had a greater impact on deriving measures than narrative comments. Multilevel logistic regression revealed medical specialty, the frequency of report card use, and the appraisal of the trustworthiness of scaled-survey ratings to be significant predictors of implementing measures to improve patient care because of online ratings.
Conclusions: Our results suggest that online ratings displayed on physician-rating websites have an impact on patient care. Despite the limitations of our study and the unintended consequences of physician-rating websites, they may still have the potential to improve patient care.