May 11, 2015

Finding consistency in Medicare’s quality ratings for dialysis providers



As of January 2015, the Centers for Medicare & Medicaid Services (CMS) publishes two sets of quality rankings for dialysis facilities: performance ratings tied to the Quality Incentive Program (QIP), and “star ratings” published on the Dialysis Facility Compare website. We sought to compare the two ratings and see how consistent they are for a matched set of dialysis facilities that have ratings on both programs.

While the two quality rating programs have somewhat different objectives, both ratings are made available to patients to assist them in understanding the quality of care they can expect to receive from a given dialysis facility. Consistency of the ratings will be important to their acceptance by both patients and providers as valid measures of relative quality.


Until recently, dialysis patients seeking summary information on the comparative quality of their care could find it in the Quality Incentive Program (QIP) Performance Score Certificate posted at their facility at the beginning of each year. This certificate shows the facility’s Total Performance Score (TPS) compared to the national average, as well as scores on each clinical quality measure scored by the program (see Figure 1). Other quality metrics have also been available on CMS’ Dialysis Facility Compare (DFC) website,1 but without any overall summary measure like that found in the QIP certificate.

However, beginning in January of this year, the DFC carries a “star rating” for each facility. Using a familiar five-star rubric of the kind U.S. consumers know from movie and restaurant reviews and from Consumer Reports rankings of automobiles and appliances, CMS has created a summary quality measure from the array of clinical quality metrics available on the DFC, in an attempt to “increase transparency” and “provide an easily recognizable way to compare facilities.”2

To better understand the relative consistency or inconsistency of these quality ratings, we compared the two ratings for a matched set of dialysis facilities that have ratings on both programs.

What is the impact of the star ratings on dialysis clinics?

Facilities doing well under the QIP but receiving low star ratings may have been unpleasantly surprised when their Five Star preview reports came out. We interviewed the CEO of a small dialysis organization in the Midwest whose facilities got only 1 or 2 stars for his reaction to the star ratings. His comments are summarized below.

Question: Were you surprised at how your facility was rated?

Answer: Yes, given our QIP scores, the low star ratings were a big surprise. We expected 3 or even 4 stars. Data entry errors hurt our ratings. Problems with CrownWeb downtime prevented us from checking the data after entry. Dialysis adequacy (Kt/V) for our home program should not be calculated the same as for in-center patients. The star ratings are based heavily on events outside the dialysis center’s control. We had a spike in mortality that hurt us. I’ve heard no feedback from our patients; I have yet to talk to a patient or their family about DFC information. In general, I think the industry will downplay the star ratings in our communications with patients.


We downloaded the new Five Star ratings for each facility from the January 2015 DFC website update.3 2015 QIP TPS scores were obtained from the QIP Payment Year 2015 Public Reporting Data File,4 also released in January. Facilities were matched between the two files on CMS Certification Number. To compare QIP TPS scores (which range from 0-100) to the five star categories, we created five TPS categories corresponding to a standard scholastic grading system: A (90-100), B (80-89), C (70-79), D (60-69) and F (<60). These TPS categories, or “grades,” were selected after review of the distribution of TPS raw scores, which showed a distribution similar to a typical scholastic grading curve. Thus, intuitively, a 5-star rating should be understood to be comparable to an “A,” a 1-star rating to an “F,” and so on.
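The matching and grade-binning steps described above can be sketched in a few lines of pandas. This is a minimal illustration under stated assumptions, not the authors’ actual code: the CCN values, file contents and column names below are made up for the example.

```python
import pandas as pd

def tps_to_grade(tps):
    """Map a 0-100 QIP Total Performance Score to the scholastic grades
    used in the article: A (90-100), B (80-89), C (70-79), D (60-69), F (<60)."""
    if tps >= 90:
        return "A"
    if tps >= 80:
        return "B"
    if tps >= 70:
        return "C"
    if tps >= 60:
        return "D"
    return "F"

# Illustrative records keyed by CMS Certification Number (CCN);
# the CCN values and column names here are invented for the sketch.
stars = pd.DataFrame({"ccn": ["012345", "023456", "034567"],
                      "star_rating": [5, 3, 1]})
qip = pd.DataFrame({"ccn": ["012345", "023456", "045678"],
                    "tps": [100, 92, 55]})

# An inner join keeps only facilities rated by both programs,
# mirroring the matched set of 5,481 facilities in the analysis.
matched = stars.merge(qip, on="ccn", how="inner")
matched["tps_grade"] = matched["tps"].apply(tps_to_grade)
```

With real extracts in place of the toy frames, `matched` would hold one row per facility with both a star rating and a TPS grade, ready for the cross-tabulations reported below.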


There were 6,225 dialysis providers listed in the January extract from the DFC, of which 5,580 (90%) had star ratings. There were 6,138 providers listed in the QIP 2015 Public Reporting Data File, of which 5,650 (92%) had a Total Performance Score. Some facilities did not receive a star rating or a QIP TPS (or both) due to missing or invalid data, or because the facility was not open for the full reporting period.


Of the 5,580 providers with star ratings and the 5,650 with 2015 QIP TPS scores, we found 5,481 facilities with both a star rating and a QIP TPS. This represents 98% of facilities with star ratings and 97% of facilities with QIP TPS scores.

A comparison of the overall distributions of star ratings and TPS grades is shown in Figure 2. While the star ratings follow a roughly normal “bell curve” distribution, the TPS grades are concentrated heavily at the high end. That is, while about 30% of facilities received 4 or 5 stars, over 60% of facilities received a TPS grade of “A” or “B.” Conversely, 30% of facilities received 1 or 2 stars, but only about 18% received a TPS grade of “D” or “F.”


Significant inconsistencies were found in ratings at all levels. For example, Table 1 shows the distribution of star ratings for facilities receiving a TPS grade of “A” (TPS 90+) and for facilities receiving perfect TPS scores of 100. As this table shows, scoring very high on the QIP TPS did not necessarily result in a good star rating, with some of these highest-rated facilities (on QIP) receiving only 1 or 2 stars and more receiving 3 stars than 5 stars (30% vs. 28%). Similarly, Table 2 shows the mean, min and max star ratings for each QIP TPS grade. Although the data are on average directionally consistent, there are inconsistencies. Regardless of TPS grade, facilities within each grade received between 1 and 5 stars.


Consistency in quality ratings was found for 32% of facilities; that is, a 1-star rating paired with a TPS grade of “F,” a 5-star rating with a TPS grade of “A,” and so on. Inconsistency, defined as scores at least two categories apart (e.g., a 3-star facility with a TPS grade of “A” or “F”), was found for 19% of facilities. The other 49% of facilities could be characterized as “nearly consistent,” with grades off by only one category between the two rankings.
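The three-way classification above amounts to measuring the gap between the two ratings on a common 1-5 scale. The grade-to-number mapping below is our own illustrative assumption (consistent with the article’s A-through-F scheme), not a published CMS or author methodology.

```python
# Map letter grades onto the same 1-5 scale as the star ratings so the
# gap between the two systems can be counted in categories.
GRADE_TO_LEVEL = {"F": 1, "D": 2, "C": 3, "B": 4, "A": 5}

def classify(star_rating, tps_grade):
    """Return 'consistent' (same category), 'nearly consistent'
    (one category apart), or 'inconsistent' (two or more apart)."""
    gap = abs(star_rating - GRADE_TO_LEVEL[tps_grade])
    if gap == 0:
        return "consistent"
    if gap == 1:
        return "nearly consistent"
    return "inconsistent"
```

For example, a 5-star facility with a TPS grade of “A” is classified as consistent, while a 3-star facility with an “A” is inconsistent, matching the definitions used for the 32%/49%/19% breakdown.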



This comparative analysis of the 2015 quality rankings for Medicare-certified dialysis facilities showed that only 32% of facilities have consistent rankings between the QIP and the star rating program when both systems are scored on a 5-point scale. Dialysis facilities typically scored higher on the QIP TPS than on the star ratings, but major inconsistencies abound: nearly one-fifth of facilities have ratings that are at least two grades apart. While there is considerable overlap in the quality metrics and time periods used to score dialysis providers on each program, there are fundamental differences in scoring methodology that contribute to the disconnects. The 2015 QIP scores are based on six clinical process measures and four reporting measures, while the star ratings are based on six clinical process measures (five of which overlap with QIP) and three standardized outcomes measures. However, scores on individual measures used in the star ratings are “normalized” (i.e., transformed so that the distribution of scores follows a bell curve), and the two programs use different weighting schemes to calculate composite scores.

Public/patient access to inconsistent quality rankings from two Medicare-sponsored programs for the same providers may cause confusion, as the understanding and interpretation of each program’s scoring is complex. In fact, concerns and criticism of the star ratings from industry and policy advocates began as soon as the program was announced last June.5,6 Among the concerns were that the star ratings would be based on incomplete or erroneous data, would include measures outside the control of the dialysis center, and would be based on a forced distribution of nationwide provider scores rather than on any absolute measures of quality (see sidebar). Even the Medicare Payment Advisory Commission (MedPAC), in its comment letter on the 2015 proposed rule for the renal payment program, expressed a number of concerns with the planned rollout of the star ratings, urging CMS to delay it until a formal proposal on the program could be submitted for public comment. MedPAC expressed concern that, “Beneficiaries and their families might be confused if a facility’s star and QIP scores diverge,” and “the Commission believes the quality measurement process needs greater simplicity and clarity. Moving to two systems creates greater uncertainty.”7

Despite these protests, CMS went ahead with the program as originally designed, after a short delay in making the scores public from October 2014 to January 2015. Almost immediately, the new star ratings started making headlines in the trade and lay press, usually highlighting “poor-performing facilities.”8-10 Perhaps better late than never, CMS started a process in February to assemble a Technical Expert Panel (TEP), which will be charged with reviewing and updating the star methodology later this year. In the meantime, the star ratings may be adding more confusion than clarity to the understanding of quality of care in Medicare’s dialysis program.

-by Mark Stephens, BS


1.     Centers for Medicare & Medicaid Services (US). Dialysis Facility Compare. 2015. Accessed January 21, 2015.

2.     Centers for Medicare & Medicaid Services (US). Dialysis Facility Compare and the New Star Ratings - Special Open Door Forum. 2014.

3.     Centers for Medicare & Medicaid Services (US). Dialysis Facility Compare Listing by Facility. 2015. n7w9/rows.csv?accessType=DOWNLOAD. Accessed January 21, 2015.

4.     Centers for Medicare & Medicaid Services (US). ESRD QIP Payment Year 2015 Public Reporting Data File. 2015. Instruments/ESRDQIP/Downloads/ Accessed February 2, 2015.

5.     Krishnan M. What’s wrong with the 5-star rating system for the renal community. Nephrol News Issues. 2014. Accessed July 23, 2014.

6.     Dialysis Patient Citizens. Addition of Star Ratings to Dialysis Facility Compare. 2014. Accessed February 26, 2015.

7.     Medicare Payment Advisory Commission. MedPAC comment on CMS’s proposed rule entitled: Medicare Program; End-Stage Renal Disease Prospective Payment System, Quality Incentive Program, and Durable Medical Equipment, Prosthetics, Orthotics, and Supplies. 2014. Accessed February 6, 2015.

8.     Fletcher H. No Nashville-area dialysis centers get top ranking. The Tennessean. 2015. Accessed February 2, 2015.

9.     Rice S. Fresenius operates half of Medicare’s lowest-rated dialysis facilities. Mod Healthc. 2015. Accessed February 2, 2015.

10.     Rice S. CMS dialysis clinic ratings won’t help patients, critics say. Mod Healthc. 2015. Accessed February 2, 2015.