Crowdsourcing Health Care: Is the Customer Always Right?

When investigating dining options in a new city or trendy neighborhood, many people first search Google Reviews or Yelp to see the reviews of previous patrons, while parents often consult the hive mind of the Facebook community for information about a new teacher in their district. And, as it turns out, an increasing number of patients are likewise turning to these social media sites to determine which hospital should administer their care or even which surgical center should perform their procedure.

Consumers directing consumers toward the most satisfying experience is the essence of the crowdsourcing ethos, and when the potential patient is evaluating a facility for easy parking or a friendly receptionist, this system can work. For more serious medical needs, however, there can be consequences. Hospitals in good standing online can sometimes perform poorly when it comes to hard endpoints, namely patient outcomes. This raises questions about the reliability of crowdsourced information in health care, including whether that information is driving more patients to certain centers and whether the criteria used to evaluate one hospital over another include optimal outcomes.

Researchers like Alexander McLawhorn, MD, assistant attending surgeon at the Hospital for Special Surgery, have taken notice of these trends. In a study published in Orthopedics, McLawhorn and colleagues compared overall patient satisfaction ratings for orthopedic surgeons from provider-based surveys with those found online. The analysis included data for 340 surgeons at 12 institutions that used a third-party organization to survey patients. Those findings were then compared with Healthgrades.com, UCompareHealthCare.com, Vitals.com and RateMDs.com to determine correlations between baseline factors and overall physician ratings. The researchers found that provider-initiated surveys had a higher number of patient ratings, and higher satisfaction rates, than those found online.


“The superficially observed measures like bedside manner are convenient to capture in surveys and are readily attributable to the individual provider being evaluated,” McLawhorn said in an interview. “Current objective quality of care metrics are often more difficult to parse; they are complex and the result of interactions between provider-specific, patient-specific and hospital-specific variables.”

One of the more complicated outcomes seldom captured in surveys is joint infection following arthroplasty, McLawhorn noted. “Surveys tend to capture the processes and outcomes of the entire care team and care environment, and they can be contingent on patient variables, such as comorbidities and patient behavior,” he said. “However, these outcomes are often not adequately risk-adjusted when they are reported.”

Gathering information

Complicating an already intricate quality rating system issued by the government, crowdsourced ratings on social media sites also seem to lack consistency in what determines hospital quality. In their recent study in Health Services Research, Victoria A. Perez, PhD, assistant professor in the School of Public and Environmental Affairs at Indiana University, and colleagues scoured Facebook, Google and Yelp for hospital ratings and reviews, and compared those findings with the federal government’s Hospital Compare measures of hospital quality. The researchers found that only 50% to 60% of the hospitals ranked highest on the crowdsourcing sites matched the highest-quality hospitals on Hospital Compare in terms of overall and patient experience ratings. Moreover, 20% of the hospitals ranked highest by the crowdsourced sites were ranked worst by Hospital Compare.
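The kind of market-level agreement check described above — how often a market's top-rated hospital on a crowdsourced site is also the top (or, conversely, the worst) hospital on Hospital Compare — can be sketched as follows. All hospital names and scores here are invented for illustration and do not come from the Perez study.

```python
# Hypothetical ratings for hospitals in three markets. "crowd" stands in for a
# crowdsourced star rating; "gov" stands in for a Hospital Compare quality score.
markets = {
    "market_a": {"crowd": {"H1": 4.6, "H2": 3.9, "H3": 2.8},
                 "gov":   {"H1": 5,   "H2": 3,   "H3": 2}},
    "market_b": {"crowd": {"H4": 4.8, "H5": 4.1, "H6": 3.0},
                 "gov":   {"H4": 3,   "H5": 4,   "H6": 2}},
    "market_c": {"crowd": {"H7": 4.9, "H8": 3.5, "H9": 3.2},
                 "gov":   {"H7": 1,   "H8": 4,   "H9": 3}},
}

def top(ratings):
    """Hospital with the highest score in a market."""
    return max(ratings, key=ratings.get)

def bottom(ratings):
    """Hospital with the lowest score in a market."""
    return min(ratings, key=ratings.get)

# How often does the crowdsourced favorite match the government favorite?
agree = sum(top(m["crowd"]) == top(m["gov"]) for m in markets.values())

# How often is the crowdsourced favorite actually the government's worst?
mismatch = sum(top(m["crowd"]) == bottom(m["gov"]) for m in markets.values())

print(f"crowd top matches gov top:   {agree}/{len(markets)} markets")
print(f"crowd top is gov worst:      {mismatch}/{len(markets)} markets")
```

In this toy data, the crowdsourced favorite agrees with the government favorite in one market and is the government's worst hospital in another, mirroring the pattern of partial agreement the study reports at much larger scale.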

“We compared crowdsourced ratings with government-issued hospital ratings related to overall quality, patient experience, clinical quality and patient safety,” Perez told Healio Rheumatology. “For the latter two categories, these scores are risk-adjusted to account for differences in patient health status. We found that crowdsourced ratings, which can reflect the experiences of patients, patient families, or people who considered becoming patients — but maybe left due to high wait times — are most closely correlated with patient experience ratings from Hospital Compare.” 

The safety issue is of particular concern to Perez, and why she chose to investigate the topic. “CMS produces dozens of clinical quality and patient safety indicators for a mix of conditions,” she said. “While these condition-specific indicators are more quantitative in nature than patient experience, they may also be very abstract for patients who don’t know their risk for developing the measured infections reported by CMS or who are looking for care for other types of conditions. These clinical quality measures are not correlated with one another and some hospitals do not have scores across all available measures due to limited observed cases.”

Additional findings from McLawhorn and colleagues demonstrated fewer negative comments on the provider-based surveys. The researchers observed a non-significant association between a greater number of years in practice and lower internal ratings. Academic practice settings and health centers located in the Northeast United States were protective factors against negative physician ratings.

Bradford Waddell, MD, assistant attending surgeon at the Hospital for Special Surgery and a coauthor of the study, noted that government ratings provide a more comprehensive data set than crowdsourced ones. “Health care provider ratings are an anonymous survey given to all patients after care with the physician and are required by the government,” he said. “This automatically includes a larger and more robust representation of the physician’s patients. This will give a more accurate representation of the care provided by the physician.”

For McLawhorn, further study will shed light on important parameters associated with crowdsourced care. “Patient-reported outcomes measures, or PROMs, like the Knee injury and Osteoarthritis Outcome Score are other potential candidate measures of quality, but we do not yet know how best to analyze these outcome scores so that they can be interpreted reliably as quality measures,” he said. “Certainly, to use them as quality measures at this time would be premature and ahead of the science as well as our understanding of the links between PROMs and quality of care.”

Disadvantages of Crowdsourced Information

For Waddell, there are advantages and disadvantages to all rating systems. “In my opinion, however, the downsides of crowdsourced collection outweigh the upsides,” he said. “The bias toward extremely positive and extremely negative, along with limited numbers and potential rogue responses from non-patients, limit the validity of the online systems.”

To the point of rogue responses, Waddell noted that some reviews may not even be from patients the physician has actually treated. “Recently, a Texas physician was quoted in a very poorly received article in a newspaper, and his Vitals.com and Healthgrades.com scores and comments immediately fell to extremely low levels,” he said. “Many of the online reviews that followed the newspaper article said nothing about his care as a physician.”

Perez suggested that crowdsourced ratings can misidentify hospitals as the best or worst in a market in terms of patient safety or clinical quality as measured by Hospital Compare indicators. “Patients see these ratings and think they are going to get quality care,” she said. “However, you are not going to see much correlation between crowdsourced ratings and CMS ratings. Patients may not be interpreting the signals correctly.”

For Waddell, government-mandated, anonymous surveys given to every patient are more inclusive of all patients. “Further, the questions are vetted by the governmental committees that enforce them,” he said. “This provides a better resource for patients to use when determining their care provider. As with everything, there is the possibility of bias in the health care provider reporting, and this should be properly guarded against by the care system to provide the most accurate results.”

Potential Advantages

Perez said that an important finding of her study was that the social media sites correlated well with Hospital Compare in terms of parameters dealing with the personal patient experience, including food, friendliness of staff and amenities of the facility. “If a potential patient is shopping for a hospital based on patient experience, the online rating systems might not be so bad,” she said. “In fact, the correlation might be close to the ratings from CMS. The superficial experience is important to patients and their families, and so it might not be such a bad idea to crowdsource a review.”

Waddell agreed. “Bedside manner is one of the many important aspects to patient care and is often the reason for or against a review seen online,” he said. “Each patient should decide how important it is based on their personality. Ratings systems can be biased toward bedside manner, leading the physician to appear less competent in the overall rating. The best situation is for a physician to provide excellent medical care and have an excellent bedside manner.”  

Both Perez and Waddell emphasized using as many sources as possible to make a choice. “Patients should be aware that they can check a source like Hospital Compare for more in-depth information,” Perez said.

Waddell noted that many online rating systems and provider-initiated surveys cover both superficial patient experience findings and safety outcomes in their questions. “Rating systems should have a clear delineation of both subjective and objective outcomes, allowing the patient to make the best physician choice based on both medical care and bedside manner,” he said. – by Rob Volansky

For more information:

Victoria Perez, PhD, can be reached at 1315 E 10th St, Bloomington, IN 47401; email: vieperez@indiana.edu.

Alexander S. McLawhorn, MD, MBA, and Bradford S. Waddell, MD, can be reached at 535 East 70th Street, New York, NY 10021; email: FrankR@HSS.EDU.

References:

Perez V, Freedman S. Health Services Research. 2018;doi:10.1111/1475-6773.13026

Ricciardi BF, et al. Orthopedics. 2017;doi:10.3928/01477447-20170810-03

Disclosures: McLawhorn reports consulting for Ethicon and Intellijoint, and being on the editorial board of HSS Journal. Perez reports no relevant financial disclosures. Waddell reports being chairman of the Young Arthroplasty Committee of the American Association of Hip and Knee Surgeons; consulting for Orthalign; receiving research support from Stryker; and being on the editorial board of Current Reviews in Musculoskeletal Medicine.