Orthopedics

Feature Article 

Provider-Initiated Patient Satisfaction Reporting Yields Improved Physician Ratings Relative to Online Rating Websites

Benjamin F. Ricciardi, MD; Brad S. Waddell, MD; Scott R. Nodzo, MD; Jeffrey Lange, MD; Allina A. Nocon, MPH; Spencer Amundsen, MD; T. David Tarity, MD; Alexander S. McLawhorn, MD, MBA

Abstract

Recently, providers have begun to publicly report the results of patient satisfaction surveys from their practices. However, these outcomes have never been compared with the findings of commercial online physician rating websites. The goals of the current study were to (1) compare overall patient satisfaction ratings for orthopedic surgeons derived from provider-based third-party surveys with existing commercial physician rating websites and (2) determine the association between patient ratings and provider characteristics. The authors identified 12 institutions that provided publicly available patient satisfaction outcomes derived from third-party surveys for their orthopedic surgeons as of August 2016. Orthopedic surgeons at these institutions were eligible for inclusion (N=340 surgeons). Provider characteristics were recorded from publicly available data. Four high-traffic commercial online physician rating websites were identified: Healthgrades.com, UCompareHealthCare.com, Vitals.com, and RateMDs.com. For each surgeon, overall ratings (on a scale of 1–5), total number of ratings, and percentage of negative ratings were compared between provider-initiated internal ratings and each commercial online website. Associations between baseline factors and overall physician ratings and negative ratings were assessed. Provider-initiated internal patient satisfaction ratings showed a greater number of overall patient ratings, higher overall patient satisfaction ratings, and a lower percentage of negative comments compared with commercial online physician rating websites. A greater number of years in practice had a weak association with lower internal ratings, and an academic practice setting and a location in the Northeast were protective factors for negative physician ratings. Compared with commercial online physician rating websites, provider-initiated patient satisfaction ratings of orthopedic surgeons appear to be more favorable, with greater numbers of responses. [Orthopedics. 2017; 40(5):304–310.]

A current focus of health care reform in the United States is improved transparency of the quality of care at the provider level. For hospitals and individual physicians, financial incentives are increasingly linked to the quality of patient care, and physician engagement in the evaluation of these measures is critical. A controversial area of quality reporting is patient satisfaction.1–10 Currently, patient satisfaction outcomes from the Hospital Consumer Assessment of Healthcare Providers and Systems survey are a component of the total performance score linked to hospital value-based purchasing reimbursement from Medicare, suggesting that, at a payer level, patient satisfaction measures are considered a valuable measure of quality. Proponents of using patient satisfaction as a measure of the quality of health care cite its association with patient outcomes and its role in empowering patients to seek better care.2,7,8,11,12 The use of patient satisfaction as a proxy for quality of care is controversial, however, and public reporting of patient satisfaction at the physician level is limited.3–8,10,13

The most common source of publicly available patient satisfaction data for physicians is commercial online physician rating websites. Patient awareness of these websites is substantial, and their ratings may affect patient health care decision-making.14–16 A limited number of orthopedic studies have assessed the quality of these online ratings and found weak correlations among the different websites for reported patient satisfaction scores, suggesting significant heterogeneity.17–21 Despite the recent proliferation of these websites, they have significant limitations, including a low number of patient ratings, a bias toward dissatisfied patients, and a lack of subjective feedback to explain the basis of these ratings.12 To address these limitations, a number of health care networks have begun to provide publicly available ratings of their physicians derived from patient satisfaction surveys administered by third parties. A potential benefit of these internal surveys compared with commercial online websites is a higher number of patient ratings, resulting in more complete information. No study has examined the outcomes of provider-initiated surveys vs online physician rating websites.

The goals of the current study were to (1) compare overall patient satisfaction ratings for orthopedic surgeons derived from provider-based third-party surveys vs commercial physician rating websites and (2) determine the association between patient ratings and provider characteristics. The authors hypothesized that public reporting of provider-initiated patient satisfaction ratings of orthopedic surgeons would benefit patients by providing greater numbers of responses and higher overall ratings of individual surgeons.

Materials and Methods

Provider-Based Patient Satisfaction Surveys

The FREIDA online database from the American Medical Association was used to identify accredited orthopedic surgery residency and fellowship programs (N=172). Each residency-affiliated hospital was screened for health care systems that provided publicly available patient satisfaction surveys on orthopedic physicians as of August 2016. This search yielded 12 institutions that provided these data publicly for their affiliated physicians: Duke University, Midwest Orthopedics at Rush, Vanderbilt University, Northwell Health, Cleveland Clinic, University of Utah, University of Arkansas, Stanford University, University of Pittsburgh, Wake Forest University, Southern California Orthopedic Institute, and Piedmont Healthcare. At these institutions, orthopedic providers with publicly available patient satisfaction ratings were identified (N=415 surgeons). Excluded from the study were physicians who practiced within the orthopedic division but did not perform surgery, neurosurgeons who had academic appointments in the orthopedic department, and physicians who had no publicly available internal ratings (n=74 excluded). Public reporting of patient satisfaction data began in 2012 at the University of Utah. By August 2016, all of these institutions were reporting patient satisfaction data. Patient satisfaction surveys were obtained from outpatient clinics and administered at the time of the office visit by electronic or paper survey or were sent to randomly selected patients by email or regular mail. Response rates were not publicly available for all hospitals, but ranged from 18% to 30% for all specialties when reported. Independent third parties experienced in the creation and evaluation of health care surveys were used by most hospitals for survey administration (Press Ganey, n=9; National Research Corporation, n=1; Universal Research Solutions, LLC, n=1; not listed, n=1). The number of individual survey questions ranged from 6 to 20, depending on the institution, and most surveys included domains such as communication, trust in provider decision-making, time spent with the patient, and willingness to recommend the physician to others. At all hospitals, an overall physician rating was reported on a scale of 1 to 5, with 5 being the highest rating. Reporting of individual subscales was not consistent across hospitals. Most institutions required a minimum of 30 ratings within the past 12 to 18 months for the physician's ratings to be made public. Anonymous patient comments were also published when provided. Most institutions had a stated policy of publishing all physician ratings, including negative ratings and comments, as long as (1) no offensive, profane, or slanderous remarks were made and (2) there was no violation of patient privacy or confidentiality. Based on these inclusion criteria, in the current study, more than 95% of patient comments were reported without censorship from the limited number of institutions that reported this statistic.

Commercial Online Patient Satisfaction Surveys

Commercial physician rating websites were identified and chosen based on accessibility, web traffic, and search engine optimization within the Google algorithm. According to these criteria, the following 4 websites were chosen for inclusion: Healthgrades.com, Vitals.com, UCompareHealthCare.com, and RateMDs.com.18 Since approximately 2008, Healthgrades.com has provided a patient survey consisting of 8 questions about trust in provider decision-making, office cleanliness, staff friendliness, provider listening and explaining, and appropriate time spent with patients. These categories were ranked from 1 to 5 stars, with 5 as the highest rating. An overall rating derived from the likelihood of recommending the provider to a friend was rated from 1 to 5 stars. Vitals.com and UCompareHealthCare.com began to provide ratings of physicians in approximately 2009. These sites used a similar set of questions as well as a rating system of 1 to 5 stars, with 5 as the highest rating. The websites also provided an overall rating of 1 to 5 stars based on individual responses. RateMDs.com began to provide ratings for physicians in approximately 2006. Ratings were based on a scale of 1 to 5 stars, with 5 as the highest rating, and they included 4 domains: staff, punctuality, helpfulness, and knowledge. This site provided an overall rating based on a scale of 1 to 5 stars. For all of these websites, patients had the opportunity to leave anonymous comments.

Data Collection

Physician rating measures were obtained from each individual website between August 1 and September 30, 2016. This study was exempt from institutional review board review because of the use of publicly available data with no protected subject information. For orthopedic surgeons at the included institutions with a provider-based patient satisfaction rating, overall ratings were recorded from each website along with the total number of patient reviews for each surgeon. Ratings were based on a scale of 1 to 5, with 5 being the highest rating. Because of the variability in reporting of question subgroups by each website, these data were not analyzed and only aggregate overall scores were used. Negative comments were defined as a rating of 1 or 2 of a possible 5 stars accompanied by a written comment. For each surgeon, information on institution type (academic vs private), sex, number of years in practice, geographic location (Northeast, West, South, Midwest), and subspecialty training was recorded.
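
For illustration only, the following Python sketch shows one way the per-surgeon measures described above (number of ratings, aggregate 1-5 score, and percentage of negative ratings) could be tabulated from a raw review export. The file name, column names, and the use of a simple star average as the overall score are assumptions, not the authors' actual workflow.

    # Illustrative aggregation of per-review data into the per-surgeon measures used here.
    # The file name and column layout (surgeon_id, source, stars, comment) are hypothetical.
    import pandas as pd

    reviews = pd.read_csv("reviews.csv")  # one row per patient rating, stars on a 1-5 scale

    # The article defines a negative comment as a 1- or 2-star rating with a written comment
    reviews["negative_comment"] = (reviews["stars"] <= 2) & reviews["comment"].notna()

    summary = (
        reviews.groupby(["surgeon_id", "source"])
        .agg(
            n_ratings=("stars", "size"),
            overall_rating=("stars", "mean"),  # simplification: each site publishes its own overall score
            n_negative=("negative_comment", "sum"),
        )
        .assign(pct_negative=lambda d: 100 * d["n_negative"] / d["n_ratings"])
        .reset_index()
    )
    print(summary.head())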

Statistical Analysis

Statistical analysis was performed by an author (A.A.N.) with advanced training in biostatistics. Analysis was performed with SAS software, version 9.3 (SAS Institute, Cary, North Carolina). Categorical variables were reported as frequencies and percentages. Normally distributed continuous variables were expressed as means with standard deviations or ranges. Non-normally distributed continuous variables were expressed as medians and interquartile ranges (first through third quartiles). The Kruskal–Wallis test was used to evaluate differences between internal ratings and the varied external ratings. Pairwise comparisons were completed for all significant differences. A Bonferroni correction was performed to adjust for multiple comparisons. To examine the association between internal ratings and several variables of interest (eg, institution type, number of years in practice, institution location), linear regression was used. Finally, logistic regression was used to examine the association between the previously mentioned variables and negative comments.
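
The analysis itself was run in SAS; purely as an illustrative sketch, the Python code below walks through the same sequence of steps under assumed file and column names. The use of Mann-Whitney U tests for the pairwise comparisons is also an assumption, because the article does not name the pairwise procedure applied after the Kruskal-Wallis test.

    # Rough re-creation, in Python, of the analysis steps described above (the study used SAS 9.3).
    # File names, column names, and the pairwise Mann-Whitney U procedure are assumptions.
    from itertools import combinations

    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import kruskal, mannwhitneyu

    ratings = pd.read_csv("overall_ratings_long.csv")      # columns: surgeon_id, source, overall_rating
    surgeons = pd.read_csv("surgeon_characteristics.csv")  # columns: internal_rating, male, academic,
                                                            # region, subspecialty, years_in_practice,
                                                            # any_negative_rating

    # Kruskal-Wallis test across the internal rating and the four commercial rating websites
    groups = [g["overall_rating"].dropna() for _, g in ratings.groupby("source")]
    print(kruskal(*groups))

    # Pairwise comparisons with a Bonferroni correction for multiple testing
    sources = sorted(ratings["source"].unique())
    pairs = list(combinations(sources, 2))
    for a, b in pairs:
        _, p = mannwhitneyu(
            ratings.loc[ratings["source"] == a, "overall_rating"].dropna(),
            ratings.loc[ratings["source"] == b, "overall_rating"].dropna(),
        )
        print(a, "vs", b, "Bonferroni-adjusted p =", min(p * len(pairs), 1.0))

    # Linear regression of provider characteristics on the internal rating
    ols = smf.ols(
        "internal_rating ~ male + academic + C(region, Treatment('Northeast'))"
        " + subspecialty + years_in_practice",
        data=surgeons,
    ).fit()
    print(ols.summary())

    # Logistic regression of provider characteristics on having any negative online rating
    logit = smf.logit(
        "any_negative_rating ~ male + academic + C(region, Treatment('Northeast'))"
        " + subspecialty + years_in_practice",
        data=surgeons,
    ).fit()
    print(logit.summary())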

Demographic Characteristics of Physicians

Demographic characteristics of the orthopedic surgeons in the study cohort are shown in Table 1. Subjects were overwhelmingly male (93%), had practiced orthopedics for a mean of 16 years, and most worked at academic institutions (62%) and had subspecialty training.

Table 1: Demographic Characteristics of the Physician Population (N=340)

Results

Comparison of Provider-Based Ratings and Commercial Online Physician Rating Websites

Internal ratings of physicians were higher than those on each commercial online physician rating website (median [range], 4.7 [3.5–5] internal rating vs 4.3 [1–5] Healthgrades.com, 4.0 [1–5] Vitals.com, 4.5 [1–5] UCompareHealthCare.com, and 4.0 [1–5] RateMDs.com; P<.001 for each comparison) (Table 2). The overall number of ratings was higher for internal ratings vs each of the commercial online physician rating websites (median [interquartile range; first through third quartiles], 168 [88–264] internal rating vs 13 [7–23] Healthgrades.com, 15 [6–28] Vitals.com, 5 [2–9] UCompareHealthCare.com, and 2 [1–5] RateMDs.com; P<.001 for each comparison) (Table 2). Healthgrades.com had a higher number of physician ratings compared with UCompareHealthCare.com and RateMDs.com (13 [7–23] Healthgrades.com vs 5 [2–9] UCompareHealthCare.com and 2 [1–5] RateMDs.com; P<.001 for each comparison) (Table 2). Vitals.com had a higher number of physician ratings compared with UCompareHealthCare.com and RateMDs.com (median [interquartile range], 15 [6–28] Vitals.com vs 5 [2–9] UCompareHealthCare.com and 2 [1–5] RateMDs.com; P<.001 for each comparison) (Table 2). The number of overall negative comments did not differ among websites (Table 3). The percentage of negative comments relative to the total rating was lower for internal ratings vs commercial websites (mean percentage of negative comments [standard deviation], 1% [3] internal rating vs 12% [17] Vitals.com, 17% [23] UCompareHealthCare.com, and 28% [33] RateMDs.com; P<.01) (Table 3). Healthgrades.com had a lower percentage of negative comments than the other commercial websites (mean percentage of negative comments [standard deviation], 6% [14] Healthgrades.com vs 12% [17] Vitals.com, 17% [23] UCompareHealthCare.com, and 28% [33] RateMDs.com; P<.01) (Table 3).

Table 2: Median Number of Responses and Overall Rating From Physician Rating Websites

Table 3: Mean Number and Percentage of Negative Comments From Physician Rating Websites

Demographic Factors Associated With Overall Ratings and Negative Ratings

Number of years in practice had a negative association with overall internal ratings, suggesting that surgeons who were in practice for a longer time had lower overall ratings. However, the effect size was small (estimate [standard error], −0.002 [0.001]; 95% confidence interval, −0.004 to −0.0003; P=.02). Sex, geographic distribution, practice setting, and subspecialty training were not associated with overall internal ratings (Table 4). The incidence of a negative rating on any online rating website for a given physician was negatively associated with an academic practice setting (odds ratio [95% confidence interval], 0.2 [0.10–0.60]; P=.01) and practice settings in the South (odds ratio [95% confidence interval], 0.3 [0.10–1.00]; P=.04), Midwest (0.1 [0.03–0.50]; P=.01), and West (0.2 [0.04–0.70]; P=.02) compared with practice settings in the Northeast (Table 5). A greater number of years in practice was associated with a negative rating on any online website (odds ratio [95% confidence interval], 1.1 [1.00–1.10]; P<.01) (Table 5).
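
For readers unfamiliar with how the odds ratios and confidence intervals in Table 5 are derived from a logistic model, the short sketch below shows the standard exponentiation of a coefficient and its standard error; the numbers are hypothetical, not the study's estimates.

    # How an odds ratio and 95% confidence interval (as in Table 5) follow from a
    # logistic-regression coefficient; beta and se below are made-up illustrative values.
    import math

    beta, se = -1.6, 0.55                   # hypothetical coefficient and standard error
    odds_ratio = math.exp(beta)             # odds ratio = exp(coefficient)
    ci_low = math.exp(beta - 1.96 * se)     # lower 95% confidence bound
    ci_high = math.exp(beta + 1.96 * se)    # upper 95% confidence bound
    print(f"OR = {odds_ratio:.1f} (95% CI, {ci_low:.2f}-{ci_high:.2f})")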

Table 4: Linear Regression for Predictive Factors Associated With Internal Physician Ratings

Table 5: Logistic Regression for Predictive Factors for a Negative Rating in Any Single Online Rating System

Discussion

The use of patient satisfaction ratings by Medicare as a pay-for-performance measure suggests that government payers believe that these metrics have substantial value. Although most physicians currently are not reimbursed based on patient satisfaction, physician involvement in assessing these outcomes is critical, given their importance at the hospital level. In the current study, provider-initiated internal ratings for orthopedic surgeons resulted in an increased number of overall patient ratings, higher patient satisfaction scores, and a lower percentage of negative comments. A greater number of years in practice had a weak association with lower internal ratings. Increasingly, patient satisfaction may be an element of pay for performance, and for orthopedic surgeons, provider-initiated data appear to be more favorable than data available on commercial online websites.

Currently, the primary source of publicly available physician satisfaction ratings is commercial online physician rating websites; however, these sites may have significant limitations, including a low number of patient responses, a bias toward unhappy patients, inability to confirm patient identities, and lack of clinical validation. The online presence of orthopedic surgeons has grown substantially, and most surgeons have been rated by their patients on at least one independent website.17,19,21 Although most overall ratings and comments are positive, the number of ratings across all of these websites is low, consistent with the current findings.17,19,21 Despite the variable quality of these ratings, patient awareness of these websites has grown during the past decade.14–16 A survey of US patients found that 65% were aware of physician ratings, 35% of those patients had chosen a physician based on good ratings, and 37% had avoided a physician because of negative ratings.16 The current findings showed a greater number of responses, improved overall satisfaction scores, and a lower percentage of negative ratings with publicly available internal data from physicians' practices compared with commercial online physician rating websites. This finding suggests that provider-initiated data may have less bias when describing overall patient care compared with commercial online physician rating websites. Despite the increased reporting of patient satisfaction data, its validity as a proxy for quality of care remains controversial.6,8,14,22,23 At the hospital level, Kennedy et al23 found that high Hospital Consumer Assessment of Healthcare Providers and Systems scores correlated with lower mortality rates, surgical volume, and hospital size. Sacks et al24 found correlations between the findings of patient satisfaction surveys and surgical outcomes, such as mortality and minor complication rates, in a surgically treated Medicare population. However, some studies of surgical oncology and general medical populations have not supported these associations, emphasizing the controversial nature of patient satisfaction as a proxy for quality.3,10 At the individual physician level, Okuda et al25 found that patient satisfaction correlated with 36-Item Short Form Health Survey and Japanese Orthopedic Association scores after posterior lumbar interbody fusion for spondylolisthesis. In contrast, Godil et al4 found no correlation between patient satisfaction and various 90-day outcomes of surgical quality in a population treated surgically for degenerative lumbar spine disease. Improvements in the quality of patient satisfaction survey data, including higher response rates, validated surveys, and larger studies, may help to reduce the heterogeneity of these studies.

The current study found a weak association between a greater number of years in practice and lower overall internal ratings. In addition, an academic practice and a location in the Northeast showed an association with fewer negative comments across all physician rating websites. Frost and Mesfin17 found that midcareer orthopedic surgeons had higher overall online patient ratings compared with early-career (0–5 years in practice) or late-career surgeons (>21 years in practice). However, a different study did not show a correlation between surgeon factors and satisfaction scores.21 Bias related to underlying physician demographics does not appear to be a major factor in determining satisfaction ratings. It is important to mention the effect of nonresponse bias on patient satisfaction surveys. Previous studies of Press Ganey surveys found a nonresponse bias for outpatient orthopedic patients. Tyser et al26 found that men, patients covered by Medicaid, and those treated for trauma were less likely to respond to surveys compared with older women. One disadvantage of current patient rating websites is lack of knowledge of the underlying characteristics of the patients who complete the surveys, which makes it difficult to adjust for demographic biases. Previous studies of nonmodifiable patient-level risk factors in patients undergoing elective orthopedic surgery found reduced provider satisfaction among men, younger patients, those with less education, smokers, and those with workers' compensation claims.27,28 Complications during hospitalization did not affect patient satisfaction scores at the hospital level after orthopedic surgery.29 Patient demographic data are not available for online physician rating websites and are not publicly available for internal ratings. In the future, improved transparency of data on the underlying patient cohorts answering these surveys may improve the quality of data.

Limitations

This study had several limitations. No standardized patient satisfaction survey was used by all hospitals, and although most surveys were very similar, the differences did not allow the comparison of ratings across individual domains of questions. The authors were limited to publicly available information, and websites do not consistently report response rates or the underlying demographics of patients who respond to surveys, creating the likelihood of a nonresponse bias. Despite these limitations, patient satisfaction is a quality metric of interest for payers, and physician involvement with developing and validating these outcomes is critical.

Conclusion

Publicly available internal patient satisfaction ratings provided a higher number of ratings, fewer negative comments, and better overall ratings compared with commercial online physician rating websites. In addition, these ratings had a limited association with available underlying physician demographic characteristics. These findings suggest that expansion of provider-initiated reporting of patient satisfaction may benefit the orthopedic community.

References

  1. Browne K, Roseman D, Shaller D, Edgman-Levitan S. Analysis & commentary: measuring patient experience as a strategy for improving primary care. Health Aff (Millwood). 2010; 29(5):921–925. doi:10.1377/hlthaff.2010.0238 [CrossRef]
  2. Emmert M, Adelhardt T, Sander U, Wambach V, Lindenthal J. A cross-sectional study assessing the association between online ratings and structural and quality of care measures: results from two German physician rating websites. BMC Health Serv Res. 2015; 15:414. doi:10.1186/s12913-015-1051-5 [CrossRef]
  3. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012; 172(5):405–411. doi:10.1001/archinternmed.2011.1662 [CrossRef]
  4. Godil SS, Parker SL, Zuckerman SL, et al. Determining the quality and effectiveness of surgical spine care: patient satisfaction is not a valid proxy. Spine J. 2013; 13(9):1006–1012. doi:10.1016/j.spinee.2013.04.008 [CrossRef]
  5. Graham B, Green A, James M, Katz J, Swiontkowski M. Measuring patient satisfaction in orthopaedic surgery. J Bone Joint Surg Am. 2015; 97(1):80–84. doi:10.2106/JBJS.N.00811 [CrossRef]
  6. Gray BM, Vandergrift JL, Gao GG, McCullough JS, Lipner RS. Website ratings of physicians and their quality of care. JAMA Intern Med. 2015; 175(2):291–293. doi:10.1001/jamainternmed.2014.6291 [CrossRef]
  7. Greaves F, Pape UJ, Lee H, et al. Patients' ratings of family physician practices on the internet: usage and associations with conventional measures of quality in the English National Health Service. J Med Internet Res. 2012; 14(5):e146. doi:10.2196/jmir.2280 [CrossRef]
  8. Sacks GD, Lawson EH, Dawes AJ, et al. Relationship between hospital performance on a patient satisfaction survey and surgical quality. JAMA Surg. 2015; 150(9):858–864. doi:10.1001/jamasurg.2015.1108 [CrossRef]
  9. Shirley ED, Sanders JO. Measuring quality of care with patient satisfaction scores. J Bone Joint Surg Am. 2016; 98(19):e83. doi:10.2106/JBJS.15.01216 [CrossRef]
  10. Wright JD, Tergas AI, Ananth CV, et al. Relationship between surgical oncologic outcomes and publically reported hospital quality and satisfaction measures. J Natl Cancer Inst. 2015; 107(3):409. doi:10.1093/jnci/dju409 [CrossRef]
  11. Glickman SW, Boulding W, Manary M, et al. Patient satisfaction and its relationship with clinical quality and inpatient mortality in acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2010; 3(2):188–195. doi:10.1161/CIRCOUTCOMES.109.900597 [CrossRef]
  12. Lagu T, Lindenauer PK. Putting the public back in public reporting of health care quality. JAMA. 2010; 304(15):1711–1712. doi:10.1001/jama.2010.1499 [CrossRef]
  13. Lee DS, Tu JV, Chong A, Alter DA. Patient satisfaction and its relationship with quality and outcomes of care after acute myocardial infarction. Circulation. 2008; 118(19):1938–1945. doi:10.1161/CIRCULATIONAHA.108.792713 [CrossRef]
  14. Emmert M, Meier F, Pisch F, Sander U. Physician choice making and characteristics associated with using physician-rating websites: cross-sectional study. J Med Internet Res. 2013; 15(8):e187. doi:10.2196/jmir.2702 [CrossRef]
  15. Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients' online ratings of their physicians over a 5-year period. J Med Internet Res. 2012; 14(1):e38. doi:10.2196/jmir.2003 [CrossRef]
  16. Hanauer DA, Zheng K, Singer DC, Gebremariam A, Davis MM. Public awareness, perception, and use of online physician rating sites. JAMA. 2014; 311(7):734–735. doi:10.1001/jama.2013.283194 [CrossRef]
  17. Frost C, Mesfin A. Online reviews of orthopedic surgeons: an emerging trend. Orthopedics. 2015; 38(4):e257–e262. doi:10.3928/01477447-20150402-52 [CrossRef]
  18. Bakhsh W, Mesfin A. Online ratings of orthopedic surgeons: analysis of 2185 reviews. Am J Orthop (Belle Mead NJ). 2014; 43(8):359–363.
  19. Nwachukwu BU, Adjei J, Trehan SK, et al. Rating a sports medicine surgeon's “quality” in the modern era: an analysis of popular physician online rating websites. HSS J. 2016; 12(3):272–277. doi:10.1007/s11420-016-9520-x [CrossRef]
  20. Ryan T, Specht J, Smith S, DelGaudio JM. Does the Press Ganey survey correlate to online health grades for a major academic otolaryngology department? Otolaryngol Head Neck Surg. 2016; 155(3):411–415. doi:10.1177/0194599816652386 [CrossRef]
  21. Trehan SK, DeFrancesco CJ, Nguyen JT, Charalel RA, Daluiski A. Online patient ratings of hand surgeons. J Hand Surg Am. 2016; 41(1):98–103. doi:10.1016/j.jhsa.2015.10.006 [CrossRef]
  22. Etier BE Jr, Orr SP, Antonetti J, Thomas SB, Theiss SM. Factors impacting Press Ganey patient satisfaction scores in orthopedic surgery spine clinic. Spine J. 2016; 16(11):1285–1289. doi:10.1016/j.spinee.2016.04.007 [CrossRef]
  23. Kennedy GD, Tevis SE, Kent KC. Is there a relationship between patient satisfaction and favorable outcomes? Ann Surg. 2014; 260(4):592–598. doi:10.1097/SLA.0000000000000932 [CrossRef]
  24. Sacks GD, Lawson EH, Dawes AJ, et al. Relationship between hospital performance on a patient satisfaction survey and surgical quality. JAMA Surg. 2015; 150(9):858–864. doi:10.1001/jamasurg.2015.1108 [CrossRef]
  25. Okuda S, Fujimori T, Oda T, et al. Patient-based surgical outcomes of posterior lumbar interbody fusion: patient satisfaction analysis. Spine (Phila Pa 1976). 2016; 41(3):E148–E154. doi:10.1097/BRS.0000000000001188 [CrossRef]
  26. Tyser AR, Abtahi AM, McFadden M, Presson AP. Evidence of non-response bias in the Press-Ganey patient satisfaction survey. BMC Health Serv Res. 2016; 16:350. doi:10.1186/s12913-016-1595-z [CrossRef]
  27. Abtahi AM, Presson AP, Zhang C, Saltzman CL, Tyser AR. Association between orthopaedic outpatient satisfaction and non-modifiable patient factors. J Bone Joint Surg Am. 2015; 97(13):1041–1048. doi:10.2106/JBJS.N.00950 [CrossRef]
  28. Bible JE, Kay HF, Shau DN, O'Neill KR, Segebarth PB, Devin CJ. What patient characteristics could potentially affect patient satisfaction scores during spine clinic? Spine (Phila Pa 1976). 2015; 40(13):1039–1044. doi:10.1097/BRS.0000000000000912 [CrossRef]
  29. Day MS, Hutzler LH, Karia R, Vangsness K, Setia N, Bosco JA III. Hospital-acquired conditions after orthopedic surgery do not affect patient satisfaction scores. J Healthc Qual. 2014; 36(6):33–40. doi:10.1111/jhq.12031 [CrossRef]

Table 1: Demographic Characteristics of the Physician Population (N=340)

Variable                               Value
Sex, No.
  Male                                 316 (93%)
  Female                               24 (7%)
Years in practice, mean (range), y     16 (1–45)
Institution type, No.
  Academic                             210 (62%)
  Private                              130 (38%)
Geographic location, No.
  South                                139 (41%)
  West                                 83 (24%)
  Midwest                              78 (23%)
  Northeast                            40 (12%)
Subspecialty training, No.
  Sports                               91 (27%)
  Adult reconstructive                 59 (17%)
  Spine                                41 (12%)
  Hand                                 38 (11%)
  None                                 29 (8%)
  Foot and ankle                       26 (8%)
  Trauma                               23 (7%)
  Pediatrics                           16 (5%)
  Oncology                             17 (5%)

Table 2: Median Number of Responses and Overall Rating From Physician Rating Websites

Rating Website              Median No. of Responses (Interquartile Range)a    Median Overall Rating (Range)
Internal rating             168 (88–264)b                                     4.7 (3.5–5)c
Healthgrades.com            13 (7–23)d                                        4.3 (1–5)
Vitals.com                  15 (6–28)e                                        4.0 (1–5)
UCompareHealthCare.com      5 (2–9)                                           4.5 (1–5)f
RateMDs.com                 2 (1–5)                                           4.0 (1–5)
P                           <.001                                             <.001

Table 3: Mean Number and Percentage of Negative Comments From Physician Rating Websites

Rating Website              No. of Negative Comments, Mean (SD)    Percentage of Negative Comments, Mean (SD)
Internal rating             3 (6)                                  1 (3)a
Healthgrades.com            1 (4)                                  6 (14)b
Vitals.com                  3 (7)                                  12 (17)
UCompareHealthCare.com      1 (2)                                  17 (23)
RateMDs.com                 1 (2)                                  28 (33)
P                           .89                                    <.01

Table 4: Linear Regression for Predictive Factors Associated With Internal Physician Ratings

Factor                                        Coefficient Estimate    Standard Error    95% Confidence Interval    P
Sex (reference: female)                       0.033                   0.038             −0.042 to 0.11             .42
Institution (reference: academic)             −0.035                  0.043             −0.12 to 0.05              .39
Geographic location (reference: Northeast)
  South                                       −0.002                  0.045             −0.09 to 0.085             .96
  Midwest                                     −0.032                  0.061             −0.15 to 0.089             .60
  West                                        −0.056                  0.064             −0.18 to 0.070             .39
Subspecialty training (reference: none)       −0.022                  0.043             −0.11 to 0.062             .61
Years in practice                             −0.002                  0.001             −0.004 to −0.0003          .02a

Table 5: Logistic Regression for Predictive Factors for a Negative Rating in Any Single Online Rating System

Factor                                        Odds Ratio    95% Confidence Interval    P
Sex (reference: female)                       1.6           0.70–4.00                  .31
Institution (reference: academic)             0.2           0.10–0.60                  .01a
Geographic location (reference: Northeast)
  South                                       0.3           0.10–1.00                  .04a
  Midwest                                     0.1           0.03–0.50                  .01a
  West                                        0.2           0.04–0.70                  .02a
Subspecialty training (reference: none)       0.8           0.30–2.30                  .68
Years in practice                             1.1           1.00–1.10                  <.01a

Authors

The authors are from the Adult Reconstruction and Joint Replacement Service (BFR, BSW, SRN, JL, SA, TDT, ASM) and the Complex Joint Reconstruction Center (AAN), Hospital for Special Surgery, New York, New York.

The authors have no relevant financial relationships to disclose.

Correspondence should be addressed to: Benjamin F. Ricciardi, MD, Adult Reconstruction and Joint Replacement Service, Hospital for Special Surgery, 535 E 70th St, New York, NY 10021 (ricciardib1111@gmail.com).

Received: April 09, 2017
Accepted: July 10, 2017
Posted Online: August 18, 2017

10.3928/01477447-20170810-03
