#### Abstract

Schools of nursing across the country are implementing progression policies that prohibit students from graduating or from taking the nursing licensure examination, sometimes based solely on a single predictive test score. Yet little empirical evidence exists to support progression policies as effective in increasing a school’s NCLEX-RN^{®} pass rates. This article reports on a study conducted when one school did not achieve the results it expected after implementing a progression policy. Using logistic regression, diagnostic indexes, and other methods, reasons for the disparity between expected and observed NCLEX-RN pass rates were examined. Results revealed that the Health Education Systems, Inc. (HESI) Exit Exam was not able to accurately predict NCLEX-RN outcomes for graduates and, further, that progression policies allowing repeated retesting to achieve a minimum score on the HESI Exit Exam are not supported empirically. Conclusions and suggestions for schools using or considering progression policies are provided.

Mr. Spurlock is Assistant Professor, Mount Carmel College of Nursing, and Dr. Hunt is Senior Quality Manager, Ohio State University Medical Center, Columbus, Ohio. At the time this article was written, Mr. Spurlock was Instructor of Nursing, and Dr. Hunt was Associate Professor, Mount Carmel College of Nursing, Columbus, Ohio.

Address correspondence to Darrell Spurlock, Jr., MS, MSN, RN, Assistant Professor, Mount Carmel College of Nursing, 127 S. Davis Avenue, Columbus, OH 43222; e-mail: dspurlock@mccn.com.

Much emphasis is placed on the performance of nursing graduates on the National Council Licensure Examination for Registered Nurses (NCLEX-RN^{®}) by nursing school administrators, educators, and graduates, as well as by prospective students and their parents. To ensure an acceptable pass rate on the NCLEX-RN, many schools of nursing have implemented progression policies based on a single end-of-program predictive test. Although the research literature suggests that predicting NCLEX-RN outcomes using any measure is marginal at best—even in schools with high NCLEX-RN pass rates—the Health Education Systems, Inc. (HESI) Exit Exam is frequently used by nursing programs as a sole predictor of students’ NCLEX-RN outcomes. The primary purpose of this study was to evaluate, using several measures, how well the HESI Exit Exam predicted NCLEX-RN outcomes; a secondary purpose was to explore possible explanations for the disparity between HESI Exit Exam predictions and actual NCLEX-RN performance.

### Background

Morrison, Free, and Newman (2002) found that programs of nursing are using comprehensive examinations, such as the HESI Exit Exam, to assess students’ preparedness for the NCLEX-RN. Morrison et al. (2002) also reported that within 2 years of implementing progression and remediation policies, NCLEX-RN pass rates had increased from 9% to 41% in the schools studied. However, these results may be misleading because of multiple methodological issues in the reporting of these figures, not the least of which is the minimal sample size (*N* = 5). As a cornerstone of the progression policies that are becoming more common in schools that use the HESI Exit Exam, HESI has recommended that students who, after at least three attempts at the Exit Exam, do not achieve the minimum HESI predictability score set by their school not be allowed to take the NCLEX-RN (Morrison, 2000). Further information about the HESI Exit Exam can be found in the study by Spurlock and Hanks (2004).

In light of a decline in the NCLEX-RN pass rate in 2002, our college of nursing implemented a progression policy based solely on HESI Exit Exam scores. This policy followed many of the recommendations proposed by Morrison (2000). Initially, the policy required all graduating seniors to take the HESI Exit Exam for the first time 8 to 16 weeks prior to graduation and to achieve a score of 850. If a student did not achieve a score of 850 on the first attempt and the second HESI Exit Exam score was also below the minimum passing score, the student was required to complete an NCLEX-RN review course. The college would then certify the student’s completion of the program to the State Board of Nursing after receiving validation of completion of the review course.

The next year, the college’s NCLEX-RN pass rate did not improve, so a more stringent progression policy was implemented. This policy required students to retake the HESI Exit Exam multiple times until they achieved a minimum passing score, which the school set at 850 under the guidance of HESI; Nibert, Young, and Britt (2003) reported that 850 is the most commonly used Exit Exam cutoff score for allowing students to progress to the NCLEX-RN. Until this minimum score was obtained, students were to be denied eligibility for graduation and given a grade of incomplete in the capstone course, and the college would withhold approval for the student to sit for the NCLEX-RN.

On the basis of research published in the literature (Nibert, Young, & Adamson, 2002), faculty expected the NCLEX-RN pass rate to improve for our institution as a consequence of this more aggressive progression policy; however, this did not occur. In reality, there was a substantial difference between the predicted NCLEX-RN pass rate (approximately 94%) for the institution based on the HESI Exit Exam and the actual NCLEX-RN pass rate (approximately 87%). Because every student was achieving the minimum passing score, albeit after multiple test attempts, students should have been expected to perform better on the NCLEX-RN. Partly on the basis of the study by Spurlock and Hanks (2004), our faculty began to question the usefulness of the HESI Exit Exam in accurately predicting the NCLEX-RN pass rate for our students.

### Purpose

To investigate the reasons for the disparity between our actual NCLEX-RN pass rate and the rate we expected based on our students’ Exit Exam scores, we conducted a study. The study’s main purpose was to explain why the actual NCLEX-RN pass rate was considerably lower than the expected pass rate for the nursing program. More specifically, the investigators attempted to answer the following research questions:

- What is the relationship between students’ first HESI Exit Exam scores and NCLEX-RN outcomes versus students’ final HESI Exit Exam scores and NCLEX-RN outcomes?
- Do HESI Exit Exam scores statistically significantly predict NCLEX-RN outcomes? What is the accuracy of the classification from this logistic model?
- What cutoff scores for HESI Exit Exam scores yield the most accurate classification of students as predicted to fail and predicted to pass?
- Do descriptors of HESI Exit Exam score categories actually reflect the real probability of a student failing the NCLEX-RN?

### Method

#### Design

This study used a retrospective, descriptive, correlational design. Logistic regression analysis was used to predict NCLEX-RN failure from Exit Exam scores. The data for this study were gathered from student records after students had already taken the HESI Exit Exam and the NCLEX-RN. The setting was a large, single-purpose college of nursing in a large midwestern city. Institutional human subjects protection approval was obtained, although the research was conducted retrospectively and no identifying information is reported here. No demographic or other academic factors were considered in this study, for two reasons: data were collected retrospectively from student academic records, so demographic data collection was not possible, and the HESI Exit Exam was the sole predictor of NCLEX-RN outcomes used in the progression policy at the institution under study, so variables not considered in that policy were not studied. The measures used in this study were students’ HESI Exit Exam predictability scores, which are interval-level data; the number of times students had to take the Exit Exam to achieve a score of 850; and NCLEX-RN outcomes, which are dichotomous (i.e., pass or fail).

#### Procedures

After human subjects protection approval was obtained, data were extracted from student records for those students graduating from January 2004 to July 2005; this encompassed students from two May graduations (as well as students from interceding graduations), which are the largest of several held yearly at the institution from which data were obtained. Because the progression policy used at the institution in this study allowed students to retest nearly indefinitely, each HESI Exit Exam score recorded for students during this time period was used. NCLEX-RN outcomes from the first examination attempt for each student were assessed using two methods. First, results could be obtained from a quarterly summary that is sent from the state board of nursing. The second method developed as a new online licensure verification system became available in the state. License status was checked using this site, yielding more timely results than those in the quarterly board of nursing reports. Students registered to take the NCLEX-RN would appear on the site as “exam pending.” Students who passed the NCLEX-RN were issued a license number, which would appear on the site; students who failed the NCLEX-RN had their names removed from the site. This procedure required vigilance in checking the site daily, but results were available much faster than the quarterly report. Data for a small number of students who took the NCLEX-RN in another state were unavailable.

Data were collected on a form that contained only the students’ names, their HESI Exit Exam scores, the number of Exit Exam attempts required for a score of 850, and their associated NCLEX-RN outcomes from the first examination attempt. Names were collected only for accuracy checking. After data collection, data were entered into a data file constructed in SPSS version 13.0, where data analysis occurred.

#### Data Analysis

Descriptive statistics were calculated to characterize the sample’s scores on the Exit Exam. Continuous data were visually examined before statistical analysis to evaluate them against the assumptions of each statistical test. Mean HESI Exit Exam scores were computed from two distributions: one assembled from the first scores achieved by students on the HESI Exit Exam (hereafter termed *first Exit Exam scores*) and one assembled from the final scores on the HESI Exit Exam (hereafter termed *final Exit Exam scores*). First and final scores were analyzed because students sometimes had to take the HESI Exit Exam multiple times before achieving the minimum score (850) required by the school’s progression policy. For some students, the first and final scores were the same (if they achieved 850 or higher on the first attempt); for others, multiple examination attempts were required to achieve the minimum score. Using NCLEX-RN outcome (pass or fail) as the grouping variable, one-way ANOVA was used to assess for differences in first and final HESI Exit Exam scores between students who passed and those who failed the NCLEX-RN.
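The group comparison described above can be sketched briefly in Python. The scores below are hypothetical illustrations, not the study’s data; with two groups, the one-way ANOVA *F* test is equivalent to a squared independent-samples *t* test.

```python
# Sketch: one-way ANOVA comparing mean Exit Exam scores between students who
# passed and students who failed the NCLEX-RN. All scores are hypothetical.
from scipy import stats

passed = [910, 875, 1020, 860, 940, 885, 990, 870]  # hypothetical scores, NCLEX-RN pass
failed = [720, 805, 760, 830, 790]                  # hypothetical scores, NCLEX-RN fail

f_stat, p_value = stats.f_oneway(passed, failed)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant *F* here would indicate that mean Exit Exam scores differ between the two outcome groups.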

There were four key research questions in this study, each of which was answered using different quantitative methods. The first question was about the relationship between HESI Exit Exam scores and NCLEX-RN outcomes. Specifically, we wanted to assess the relationship between students’ first Exit Exam scores, final Exit Exam scores, and NCLEX-RN outcomes. To examine these relationships, point-biserial correlation coefficients were calculated. The point-biserial correlation coefficient, denoted *r*_{pb}, was used because a continuous variable (HESI Exit Exam score) was being examined against a dichotomous, categorical variable (NCLEX-RN outcomes). According to Polit (1996), the point-biserial analysis is the most appropriate measure for these kinds of data.
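A point-biserial correlation of this kind can be computed with SciPy’s `pointbiserialr`; the data below are hypothetical, not the study’s records.

```python
# Sketch: point-biserial correlation between a continuous Exit Exam score and
# a dichotomous NCLEX-RN outcome (1 = pass, 0 = fail). Data are hypothetical.
from scipy import stats

scores   = [910, 875, 1020, 860, 940, 720, 805, 760, 990, 830]
outcomes = [1,   1,   1,    1,   1,   0,   0,   0,   1,   0]

r_pb, p_value = stats.pointbiserialr(outcomes, scores)
print(f"r_pb = {r_pb:.3f}, p = {p_value:.4f}")
```

The sign of *r*_{pb} depends on how the dichotomy is coded; with pass coded as 1, a positive coefficient indicates that higher scores accompany passing.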

To answer the second research question, whether HESI Exit Exam scores can statistically significantly predict NCLEX-RN outcomes, a binary logistic regression model was constructed with HESI Exit Exam scores as the predictor variable and NCLEX-RN outcomes as the dichotomous outcome variable. Model fit was assessed using various statistical methods, described in the Results section. Spurlock and Hanks (2004) posited that when evaluating how well a test predicts NCLEX-RN outcomes, what nurse educators most need to know is which students will fail the NCLEX-RN. Therefore, accuracy of classification using the logistic model was assessed primarily by evaluating how many students were correctly predicted to fail the NCLEX-RN. Discriminant function analysis would also have been an option in this research, but because of the need to examine odds and questions about the normality of final Exit Exam scores, logistic regression was the most appropriate analysis (Polit, 1996).
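A minimal sketch of such a single-predictor logistic model, using scikit-learn with hypothetical scores (a very large `C` approximates unpenalized maximum-likelihood estimation, which is what most statistical packages report):

```python
# Sketch: binary logistic regression predicting NCLEX-RN failure (1 = fail)
# from Exit Exam scores. Data are hypothetical, not the study's records.
import numpy as np
from sklearn.linear_model import LogisticRegression

scores = np.array([910, 875, 1020, 800, 940, 720, 880, 760, 990, 830]).reshape(-1, 1)
failed = np.array([0, 0, 0, 0, 0, 1, 1, 1, 0, 1])

# C set very large so the fit approximates unpenalized maximum likelihood
model = LogisticRegression(C=1e6, max_iter=1000).fit(scores, failed)

b1 = float(model.coef_[0][0])
odds_ratio = float(np.exp(b1))  # change in odds of failure per 1-point score increase
print(f"B1 = {b1:.4f}, odds ratio = {odds_ratio:.4f}")
```

An odds ratio just below 1, as the study reports for its model, means each additional Exit Exam point lowers the odds of failure only slightly.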

The third research question asked which HESI Exit Exam cutoff score would be the most accurate in predicting actual NCLEX-RN failure, if the current cutoff score of 850 did not provide acceptable accuracy. With a cutoff score of 850, students scoring 850 or higher are predicted to pass and students scoring less than 850 are predicted to fail. To assess which score, retrospectively, would have been the most accurate cutoff, the cutoff score was first set at 900, and each of the test qualities of sensitivity, specificity, positive predictive value, negative predictive value, odds ratio (OR), and overall accuracy was calculated. Spurlock and Hanks (2004) used this model to reanalyze published HESI Exit Exam data on the overall ability of the Exam to predict NCLEX-RN outcomes. Cutoff scores were then tested in 25-point increments down to a minimum score of 550. In sum, students were retrospectively classified as predicted to fail if they scored lower than 900, lower than 875, lower than 850, lower than 825, and so on, down to 550. Each of these cutoff scores was tested against actual student data, and accuracy of prediction was assessed by calculating the above-mentioned test parameters.
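The cutoff sweep just described can be sketched as follows. The scores and outcomes are hypothetical; “predicted to fail” means scoring below the cutoff.

```python
# Sketch: sweeping HESI Exit Exam cutoff scores and computing the diagnostic
# indexes named above. Data are hypothetical, not the study's records.

def diagnostics(scores, failed, cutoff):
    # 2 x 2 counts: true positive = predicted to fail (score < cutoff) and failed
    tp = sum(1 for s, f in zip(scores, failed) if s < cutoff and f)
    fp = sum(1 for s, f in zip(scores, failed) if s < cutoff and not f)
    fn = sum(1 for s, f in zip(scores, failed) if s >= cutoff and f)
    tn = sum(1 for s, f in zip(scores, failed) if s >= cutoff and not f)
    return {
        "sens": tp / (tp + fn) if tp + fn else 0.0,  # failures caught, retrospectively
        "spec": tn / (tn + fp) if tn + fp else 0.0,
        "ppv":  tp / (tp + fp) if tp + fp else 0.0,  # P(fail | predicted to fail)
        "npv":  tn / (tn + fn) if tn + fn else 0.0,
        "or":   (tp * tn) / (fp * fn) if fp and fn else float("inf"),
        "acc":  (tp + tn) / len(scores),
    }

scores = [910, 875, 1020, 800, 940, 720, 880, 760, 990, 830]
failed = [False, False, False, False, False, True, True, True, False, True]

for cutoff in range(900, 549, -25):  # 900, 875, ..., 550
    d = diagnostics(scores, failed, cutoff)
    print(cutoff, {k: round(v, 2) for k, v in d.items()})
```

Printing the full sweep makes the trade-off visible: lowering the cutoff raises specificity and PPV while sensitivity falls.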

The final research question sought to examine whether the predicted probabilities of failures, derived from the logistic regression model constructed in this analysis, seemed congruent with the descriptive indicators of risk of failure on the NCLEX-RN assigned by HESI to the various scoring categories. These labels appear on HESI Exit Exam summary reports available to schools of nursing that subscribe to and use the HESI Exit Exam. A sample report can be found on the HESI Web site (HESI, 2002).

### Results

#### Descriptive Statistics

A total sample size of 184 was available to the researchers, of which 5 cases were missing NCLEX-RN outcome data because the student had not yet taken the examination at the time of data collection or had taken the examination out of state. This left a final sample size of 179 for the purposes of this analysis.

First Exit Exam scores yielded a group mean of 832.69 (*SD* = 116.07); scores ranged from 473 to 1251. Final Exit Exam scores yielded a group mean of 917.27 (*SD* = 76.12); scores ranged from 648 to 1251. Final Exit Exam scores include the final scores of students who took the examination up to five times to achieve a score of 850. Overall, the mean number of examination attempts needed to achieve a score of 850 was 2.13 (*SD* = 1.16), with 38.6% (*n* = 69) needing to take the examination three or more times to achieve the score of 850 required to graduate.

On the basis of their final Exit Exam scores, students scoring 850 or greater were predicted to pass the NCLEX-RN. In this sample, 167 students were predicted to pass the NCLEX-RN and 12 were expected to fail (i.e., these students scored lower than 850 but were allowed to graduate after the faculty voted to rescind the progression policy). Of the 167 students expected to pass, 22 failed the NCLEX-RN. Of the 12 expected to fail, 10 passed the NCLEX-RN, meaning only 2 of those expected to fail actually failed. Because the NCLEX-RN pass rate of students scoring in HESI Exit Exam Categories A through C (i.e., students scoring 850 or greater) ranges from 94.1% to 98.3% according to Nibert, Young, and Adamson (2002), an approximate 94% NCLEX-RN pass rate could be assumed for the school, considering that only 2 of the 12 students predicted to fail actually failed. For the total sample, the NCLEX-RN pass rate was 86.6% (*n* = 155); the actual NCLEX-RN pass rate was thus considerably lower than the expected pass rate for the school. To examine the HESI Exit Exam scores of students who passed versus those who failed the NCLEX-RN, both first Exit Exam scores and final Exit Exam scores were examined using cross-tabulation. Results are presented in Table 1. To assess for differences in HESI Exit Exam scores with NCLEX-RN outcome as the grouping variable, one-way ANOVAs were calculated and are also presented in Table 1.

Table 1: HESI Exit Exam Score Descriptive Statistics by NCLEX-RN Outcome and ANOVA Comparison

#### Research Questions

*Research Question 1: What Is the Relationship Between Students’ First Exit Exam Scores and NCLEX-RN Outcomes Versus Students’ Final Exit Exam Scores and NCLEX-RN Outcomes?* To answer this question, point-biserial correlation coefficients were calculated with first Exit Exam scores and then final Exit Exam scores as the continuous variable and NCLEX-RN outcome as the dichotomous variable. Results revealed a statistically significant relationship between first Exit Exam scores (*r*_{pb} = −0.275, *p* ≤ 0.005) and NCLEX-RN outcomes. There was no statistically significant relationship between final Exit Exam scores and NCLEX-RN outcomes (*r*_{pb} = 0.026, *p* = 0.733). It appears that when students were allowed to retake the Exit Exam multiple times to achieve the minimum score of 850 required to graduate, the relationship between Exit Exam scores and NCLEX-RN outcomes nearly disappeared. Allowing students indefinite attempts to pass the Exit Exam apparently introduces error, in the form of spurious Exit Exam scores, into the relationship, which decreases its strength.

*Research Question 2: Do HESI Exit Exam Scores Statistically Significantly Predict NCLEX-RN Outcomes, and What Is the Accuracy of the Classification from this Logistic Model?* To examine whether HESI Exit Exam scores can significantly predict NCLEX-RN outcomes, binary logistic regression analysis was conducted. Two separate analyses were conducted: the first used students’ first Exit Exam scores; the second used students’ final Exit Exam scores. Each analysis was run separately using the ENTER method; because only one predictor was used in each analysis, the model fit statistics compare the model containing the predictor to a constant-only model to assess whether the predictor statistically significantly improves prediction. Results of the logistic regression analyses are presented in Table 2.

Table 2: Logistic Regression of HESI Exit Exam Scores Predicting NCLEX-RN Outcomes

According to Tabachnick and Fidell (2001), logistic regression models should be evaluated on two fronts: first, how well the model fits the data, and second, how well the predictor classifies individual cases. There was good model fit for first Exit Exam scores, as evidenced by a significant Wald statistic (12.230, *p* ≤ 0.005) and omnibus chi-square (χ^{2}[1] = 14.299, *p* ≤ 0.005); this indicates that first Exit Exam scores distinguish between who will pass and who will fail the NCLEX-RN at a better-than-chance rate. Using the range provided by the Cox and Snell *R*^{2} and Nagelkerke *R*^{2} as estimates of the proportion of variance in NCLEX-RN outcomes accounted for by first Exit Exam scores, 7.7% to 14.1% of the variance in NCLEX-RN outcomes can be accounted for by first Exit Exam scores. However, the model performed poorly in predicting NCLEX-RN failure, with none of the NCLEX-RN failures being accurately classified. The weakness of this prediction can be further seen in the model’s OR of only 0.992; this OR is near 1 and therefore indicates little change in the likelihood of NCLEX-RN failure for a one-unit change in first Exit Exam score.

To visually examine how well first Exit Exam scores predict NCLEX-RN outcomes, a Receiver Operating Characteristics (ROC) curve was calculated (Figure 1). Spurlock and Hanks (2004) used this method of showing the tradeoff between sensitivity and specificity. According to Norusis (2005), once the curve is calculated, the area under the curve (AUC) is known as the *c* statistic. Children’s Mercy Hospitals & Clinics (Simon, 2003) provides this guide for interpreting *c* values: 0.50 to 0.75 = *fair*, 0.75 to 0.92 = *good*, 0.92 to 0.97 = *very good*, and 0.97 to 1.00 = *excellent*. The *c* value for this model was 0.739 (*p* < 0.001), which classifies first Exit Exam scores as “fair” predictors of NCLEX-RN outcomes, the poorest class of predictors.
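The *c* statistic can be computed directly from scores and outcomes; the sketch below uses hypothetical data, with scores negated so that higher values indicate a greater likelihood of failure.

```python
# Sketch: area under the ROC curve (the c statistic) for Exit Exam scores
# predicting NCLEX-RN failure. Data are hypothetical, not the study's records.
from sklearn.metrics import roc_auc_score

scores = [910, 875, 1020, 800, 940, 720, 880, 760, 990, 830]
failed = [0, 0, 0, 0, 0, 1, 1, 1, 0, 1]  # 1 = failed NCLEX-RN

# Negate scores so that a higher value means "more likely to fail"
auc = roc_auc_score(failed, [-s for s in scores])
print(f"c statistic (AUC) = {auc:.3f}")
```

The AUC equals the probability that a randomly chosen student who failed scored lower than a randomly chosen student who passed.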

Model fit for final Exit Exam scores was neither impressive nor statistically significant, as indicated by a nonsignificant Wald statistic and omnibus chi-square (χ^{2}[1] = 2.751, *p* = 0.097). Using the range provided by the Cox and Snell *R*^{2} and Nagelkerke *R*^{2}, only 1.5% to 2.8% of the variance in NCLEX-RN outcomes can be accounted for by final Exit Exam scores. These results are not unexpected in light of the weak relationship reported earlier between final Exit Exam scores and NCLEX-RN outcomes and the nonsignificant *F* for differences in final Exit Exam mean scores between those who failed and those who passed the NCLEX-RN. Classification using this model was poor as well, with no NCLEX-RN failures being accurately classified. The 95% confidence interval for the OR in this model includes 1, further indicating that final Exit Exam scores do not significantly predict NCLEX-RN outcomes. An ROC curve was also calculated for this model (Figure 2); the results again reveal that final Exit Exam scores are poor, nonsignificant predictors of NCLEX-RN outcomes.

Figure 2: Receiver Operating Characteristics (ROC) Curve for Final HESI Exit Exam Scores, with Area Under the Curve (AUC)

*Research Question 3: What Cutoff Scores for the HESI Exit Exam Yield the Most Accurate Classification of Students as Predicted to Fail and Predicted to Pass?* The logistic regression model can be used to calculate predicted probabilities of an event (in this case, NCLEX-RN failure), but to easily and more accurately demonstrate how setting different HESI Exit Exam cutoff scores affects the overall ability of the Exit Exam to classify students as predicted to pass or fail the NCLEX-RN, we used the model presented by Spurlock and Hanks (2004). This model, which is useful for evaluating how well a test can classify or diagnose a case (NCLEX-RN pass or fail), was applied to various cutoff scores. As Spurlock and Hanks demonstrated, only students who achieve a score of 900 or above are classified as predicted to pass by the HESI Exit Exam. Students achieving scores lower than 900 fall into other descriptive categories, discussed further under Research Question 4. At the institution from which data for this study were collected, for example, students had to achieve a score of 850 to progress to graduation, so in this case the prediction to pass was set at an Exit Exam score of 850 and above. Students who could not achieve 850 or more on the Exit Exam, even after multiple attempts, would technically, according to this institution’s progression policy, have been prohibited from graduating (although that did not happen).

There are several predictive test characteristics that are important to nursing faculty. These test characteristics are summarized in **Table 3**, which also provides a clinical-educational comparison (for a more detailed explanation, see Spurlock and Hanks, 2004). These values are calculated by inserting actual student performance data into a 2 × 2 contingency table (Table 4). By changing the cutoff scores that determine when students are predicted to pass and, therefore, when they are predicted to fail, the overall performance of the various cutoff scores can be assessed. As Spurlock and Hanks (2004) noted, the most important characteristics for nurse educators to consider are the sensitivity and the positive predictive value (PPV). These values tell nurse educators what they need to know: how well the test predicts NCLEX-RN failure, both retrospectively (sensitivity) and prospectively (PPV).

Table 4: Contingency Table Used to Figure Predictive Test Parameters

Table 3: Predictive Test Characteristics of Importance to Nurse Educators with Clinical-Educational Comparison

Students from this sample were classified according to HESI Exit Exam cutoff score as either predicted to pass or predicted to fail. Only the first Exit Exam scores were used because they were the only scores found to be worthwhile as predictors. The starting cutoff score was 900 and the lowest was 550; for the 900 cutoff, students scoring less than 900 were predicted to fail and students scoring 900 or above were predicted to pass. This procedure was repeated in 25-point increments down to the lowest cutoff score. Results for each of the calculated test characteristics are provided in Table 5.

Table 5: Performance of Different Cutoff Scores for Predictions to Pass or Fail on First Exit Exam Scores

An evaluation of the data in Table 5 reveals that the best cutoff score for the students in this sample was 650. A HESI Exit Exam score of 650 on the first attempt yields the best classification of students: PPV and overall accuracy are the highest in the table, and the OR increases greatly, to 7.89, meaning that students predicted to fail (those scoring lower than 650) are 7.89-fold more likely to fail than those scoring 650 or above. The overall accuracy of the test is also highest at this cutoff, at 87%. A point of caution is due here regarding the calculation of positive predictive values. As Smith, Winkler, and Fryback (2000) noted, calculating PPVs in populations with low prevalence can be troublesome. Essentially, when the prevalence of the problem (failing the NCLEX-RN) is low (e.g., approximately 10%), the PPV of a test will tend to be low as well. One can, as Smith et al. (2000) did, use Bayesian methods to account for the pre-prediction prevalence of the problem, but this would likely prove too unwieldy for nurse educators in diverse academic settings. A key point to remember is this: because NCLEX-RN failures are relatively rare at most schools, predicting them with a diagnostic or predictive test can be challenging, as has been demonstrated here. Difficulty in predicting NCLEX-RN failure has also been reported by Spurlock and Hanks (2004) and Seldomridge and DiBartolo (2004), and further research will no doubt bolster this claim.
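The dependence of PPV on prevalence can be shown with a few lines of Bayes’ theorem; the sensitivity and specificity values below are arbitrary illustrations.

```python
# Sketch: how PPV varies with prevalence for fixed sensitivity and specificity,
# illustrating why PPV is low when NCLEX-RN failure is rare. Values are hypothetical.
def ppv(sens, spec, prev):
    # Bayes' theorem: P(fail | predicted fail)
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

for prev in (0.05, 0.10, 0.25, 0.50):
    print(f"prevalence {prev:.2f} -> PPV {ppv(0.80, 0.80, prev):.2f}")
```

Even a test with 80% sensitivity and 80% specificity yields a PPV well under 0.50 when only 1 in 10 students fails.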

*Research Question 4: Do Descriptors of HESI Exit Exam Score Categories Actually Reflect the Real Probability of a Student Failing the NCLEX-RN?* To answer this question, we return to the binary logistic regression model for first Exit Exam scores. One benefit of logistic regression is that the equation constructed from the fitted model can be used to predict the probability of an event, in this study, NCLEX-RN failure (Cohen, Cohen, West, & Aiken, 2002; Norusis, 2005; Tabachnick & Fidell, 2001). According to Norusis (2005), the logistic regression model applicable here is:

Prob(failure) = e^{B_{0} + B_{1}X} / (1 + e^{B_{0} + B_{1}X})

where *B*_{0} and *B*_{1} are coefficients estimated from the data, *X* is the value of the independent variable (here, the first Exit Exam score), and *e* is the base of the natural logarithms, approximately 2.718. In this analysis, the equation was applied using the coefficients estimated for first Exit Exam scores (Table 2).

Using this equation, predicted probabilities of NCLEX-RN failure were calculated for each of the scoring categories reported by HESI. Those values, along with the assigned descriptors of risk and the definitions of the categories as reported by HESI (2002), are presented in Table 6. As can be seen in Table 6, the descriptors for Categories A and B seem correct; students scoring very highly on their first Exit Exam do in fact have little chance of failing the NCLEX-RN, based on the Exit Exam as a sole predictor (*p* = 0.03 to 0.05). Category C, which indicates an average probability of passing, is described incorrectly for this sample: according to the National Council of State Boards of Nursing (2005a, 2005b), the national pass rate for the NCLEX-RN was 85.3% in 2004 and 89.2% in 2005. The average failure rate is therefore actually much higher, at approximately *p* = 0.11 to 0.15, not the 0.05 to 0.08 found for Category C in this study. Looking at the lowest scoring categories, Categories G and H, HESI describes students in these categories as at “grave risk of failing” and “poor performance expected” (Evolve Reach, Powered by HESI, 2008, p. 2). The actual predicted probabilities of NCLEX-RN failure for this group are estimated to range from 0.22 to 0.29 (the lower the score in Category H, the greater the risk), which suggests increased risk, certainly, but not grave danger.
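Calculating a predicted probability from a fitted logistic equation is a one-line computation; the coefficients below are illustrative placeholders, not the study’s estimates.

```python
# Sketch: predicted probability of NCLEX-RN failure from a logistic equation,
# Prob(failure) = e^(B0 + B1*X) / (1 + e^(B0 + B1*X)).
# The coefficients b0 and b1 are hypothetical, not the study's fitted values.
import math

def prob_fail(score, b0, b1):
    z = b0 + b1 * score
    return math.exp(z) / (1 + math.exp(z))

# Hypothetical coefficients: failure odds decline as the Exit Exam score rises
b0, b1 = 4.5, -0.008
for score in (650, 750, 850, 950):
    print(score, round(prob_fail(score, b0, b1), 3))
```

With a negative *B*_{1}, the predicted probability of failure falls smoothly as the Exit Exam score increases, which is how the category-level probabilities in Table 6 were derived.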

Table 6: HESI Exit Exam Categories, Descriptors of Risk, and Calculated Probabilities of Failure for Each Category

### Discussion

Schools of nursing across the country are implementing progression policies (Nibert, Young, & Britt, 2003) that seek either to raise their NCLEX-RN pass rates or to protect the school from a decline in pass rates. Because NCLEX-RN outcomes are so important for accreditation and certification by state boards of nursing, schools are rightly concerned with their NCLEX-RN pass rates. Because little empirical research has shown the effectiveness of progression policies that use NCLEX-RN predictive tests as their basis, this study sought to analyze how well the HESI Exit Exam predicted NCLEX-RN outcomes for a large group of students at one school. This study was especially important because the institution had a progression policy in place during the time data were collected; despite this policy, NCLEX-RN pass rates were not what the school would have expected based on students’ final Exit Exam scores before graduation.

For the institution where data for this study were collected, NCLEX-RN pass rates remain acceptably above the national average and state requirements but are inconsistent with what should be expected based on how students scored on the HESI Exit Exam. Faculty expected higher NCLEX-RN pass rates for the institution based on how students who score higher than 850 on the Exit Exam should perform on the NCLEX-RN. Results from this analysis provide insight on several important factors related to the use of the HESI Exit Exam and progression policies.

First, Morrison (2000) recommended a testing process that allows students multiple chances to pass the HESI Exit Exam if they do not achieve the minimum score set by the school on their first attempt. As this study has found, only students’ first scores on the Exit Exam are statistically significantly related to NCLEX-RN outcomes. Students who repeat the Exit Exam until they are successful dilute the relationship between Exit Exam scores and NCLEX-RN outcomes because these students are more likely to fail the NCLEX-RN after three, four, or five attempts at the HESI Exit Exam than are students who achieve high scores on the first attempt. This poses a major problem for schools of nursing: schools may allow students to graduate after multiple attempts at passing the HESI Exit Exam when in fact those students are little better prepared for the NCLEX-RN than they were when they first took the test. This can give schools a false sense of security about their NCLEX-RN pass rates. Schools may assume that because all students achieved the minimum Exit Exam score, NCLEX-RN pass rates will be high, when in fact it is the first Exit Exam scores that count, not a final set of scores contaminated by retest scores that weaken the relationship between Exit Exam scores and NCLEX-RN outcomes.

Second, although only first Exit Exam scores are statistically significant predictors of NCLEX-RN success, these scores alone do not perform well as a sole predictor of NCLEX-RN failure. This is evidenced by the weak OR for first-attempt Exit Exam scores (OR = 0.992 per point). Little can be validly inferred from a test that predicts NCLEX-RN failure as poorly as the HESI Exit Exam does. To increase the predictive accuracy of the test, a cutoff score lower than those recommended by HESI must be used. In this sample of students, a cutoff score of 650 would have yielded the most accurate results, yet even this cutoff correctly classified only 5 of the 24 students who failed the NCLEX-RN as predicted to fail and misclassified 5 students as predicted to fail who went on to pass the NCLEX-RN. These results indicate that accurate prediction of NCLEX-RN failure, which is essential if educators are going to adopt progression policies based on a student’s likelihood of NCLEX-RN failure, cannot be achieved using the HESI Exit Exam as the sole predictor.
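To see why a per-point odds ratio so close to 1 is hard to interpret, the logistic regression coefficient for first Exit Exam scores (β = −0.008, from the regression table) can be rescaled to a larger score difference. The sketch below is our illustration of this arithmetic; the 100-point rescaling is not a figure reported in the study.

```python
import math

# Logistic regression coefficient for first Exit Exam score
# (per 1-point increase), as reported in the regression table
beta = -0.008

# Odds ratio for a 1-point increase in score: exp(beta)
or_per_point = math.exp(beta)

# Rescaled to a 100-point increase: exp(100 * beta)
or_per_100_points = math.exp(100 * beta)

print(round(or_per_point, 3))       # matches the reported OR of 0.992
print(round(or_per_100_points, 2))  # a 100-point gain roughly halves the odds of failure
```

The per-point OR of 0.992 looks unremarkable only because a single Exit Exam point is a tiny increment; over 100 points, the same coefficient implies the odds of failure fall by more than half.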

Third, the descriptors HESI assigns to Exit Exam scoring categories can mislead nursing faculty, especially in the lowest scoring categories. In this study, students who had as little as a 22% chance of failing the NCLEX-RN were classified as in grave danger of failing the NCLEX-RN. This is a serious problem that magnifies the overarching problem with using a single predictor, such as the HESI Exit Exam (or any other single predictive test), to make an important educational decision: when a single predictor is used alone, valuable information outside the test is not considered, and serious errors can occur (Spurlock, 2006). These errors could affect not only the school’s NCLEX-RN pass rate but, more importantly, students’ lives. Erroneously prohibiting a student from graduating from a nursing program on the basis of a single predictive test score not only flies in the face of recommendations from almost every major national testing and measurement organization (Spurlock, 2006), but also could have dreadful, long-term, and real effects on students’ lives.

### Limitations

There are some limitations to this study. First, no demographic data were collected, which could limit the generalizability of the findings. However, it is worth noting that no demographic data were reported in any of the four yearly studies on the HESI Exit Exam either (Lauchner, Newman, & Britt, 1999; Newman, Britt, & Lauchner, 2000; Nibert & Young, 2001; Nibert, Young, & Adamson, 2002), so our results are at least as generalizable as the findings of those studies. A second limitation is that all students were from a single site. Although this may limit generalizability, it is a benefit in that all students were exposed to the same progression policy and testing schema. This yields a more homogeneous sample in which real effects can be observed; a sample in which subjects were exposed to vastly different progression and testing schemas could introduce significant intra-sample variability.

### Conclusion

What Campbell (1976) wrote so many years ago appears to have application here: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor” (p. 49). The more importance nurse educators put on a single quantitative indicator, like an end-of-program predictive test, the more pressure there is on students to pass the examination. This could, in fact, be detrimental to the students’ end-of-program learning, which is no doubt important for nursing practice and NCLEX-RN performance. Focusing on studying for an exit examination that has little use in predicting NCLEX-RN failure seems a poor use of end-of-program students’ time.

Setting forth a single, clinically oriented standardized predictive examination as the sole measure of a student’s readiness to graduate also speaks volumes about the importance educators place on the rest of a student’s education. It is well understood that the NCLEX-RN is a mostly clinical examination, with elements of extra-clinical professional nursing practice included, but students’ educations are rarely so limited. When we say to students, “You must pass this clinical exam,” what do they hear about the need to know about nursing research and evidence-based practice, leadership and change in complex health systems, the science of nursing practice, or community and public health nursing (on which there is little NCLEX-RN content)? Using a single, clinical-only indicator to represent students’ readiness for graduation devalues the rest of their education, whether it occurred in a community college, diploma school, or university setting.

Finally, schools of nursing, and faculty members individually, must be more skeptical of processes and methods that seem too easy. As the adage goes, “If it looks too good to be true, then it probably is.” Vendors of testing products, especially vendors of tests used to make high-stakes decisions, must be honest and ethical in presenting the facts about their products, including clearly denoting both the strengths and the weaknesses of the test (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999). However, the American Educational Research Association et al. (1999) also noted that test users (i.e., nursing faculty) have an equal responsibility to ensure that they are not only using the most appropriate tests, but also using them in the proper way. This means not making an important decision, such as whether a student will graduate, on the basis of a single test score.

### References

- American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). *The standards for educational and psychological testing*. Washington, DC: American Educational Research Association.
- Campbell, D. T. (1976). *Assessing the impact of planned social change*. Retrieved October 7, 2005, from The Center for Evaluation, Western Michigan University Web site: http://www.wmich.edu/evalctr/pubs/ops/ops08.pdf
- Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2002). *Applied multiple regression/correlation analysis for the behavioral sciences* (3rd ed.). Mahwah, NJ: Erlbaum.
- Evolve Reach, Powered by HESI. (2008). *Summary analysis report for faculty*. Retrieved March 2, 2008, from http://evolve.elsevier.com/pdfs/reach/instructor_report.pdf
- Health Education Systems, Inc. (2002). *HESI Exit Exam summary of findings*. Retrieved February 6, 2005, from http://www.hesitest.com/testing/reports/exitexam/exitexam.asp
- Lauchner, K. A., Newman, M., & Britt, R. B. (1999). Predicting licensure success with a computerized comprehensive nursing exam: The HESI Exit Exam. *Computers in Nursing*, *17*, 120–125.
- Morrison, S. (2000, July 13). *Recommendations for improving NCLEX-RN pass rates*. Retrieved March 21, 2004, from http://www.hesitest.com/shownewsletter.asp?imageid=6
- Morrison, S., Free, K. W., & Newman, M. (2002). Do progression and remediation policies improve NCLEX-RN pass rates? *Nurse Educator*, *27*, 94–96. doi:10.1097/00006223-200203000-00014
- National Council of State Boards of Nursing. (2005a). *2004: Number of candidates taking NCLEX examination and percent passing, by type of candidate*. Retrieved October 7, 2005, from http://www.ncsbn.org/pdfs/Table_of_Pass_Rates_2004.pdf
- National Council of State Boards of Nursing. (2005b). *2005: Number of candidates taking NCLEX examination and percent passing, by type of candidate*. Retrieved October 7, 2005, from http://www.ncsbn.org/pdfs/Table_of_Pass_Rates_2005.pdf
- Newman, M., Britt, R. B., & Lauchner, K. A. (2000). Predictive accuracy of the HESI Exit Exam: A follow-up study. *Computers in Nursing*, *18*, 132–136.
- Nibert, A. T., & Young, A. (2001). A third study on predicting NCLEX success with the HESI Exit Exam. *Computers in Nursing*, *19*, 172–178.
- Nibert, A. T., Young, A., & Adamson, C. (2002). Predicting NCLEX success with the HESI Exit Exam: Fourth annual validity survey. *Computers, Informatics, Nursing*, *20*, 261–267. doi:10.1097/00024665-200211000-00013
- Nibert, A. T., Young, A., & Britt, R. (2003). The HESI Exit Exam: Progression benchmark and remediation guide. *Nurse Educator*, *28*, 141–145. doi:10.1097/00006223-200305000-00009
- Norusis, M. J. (2005). *SPSS 13.0 advanced statistical procedures companion*. Upper Saddle River, NJ: Prentice Hall.
- Polit, D. F. (1996). *Data analysis and statistics for nursing research: Application manual*. Upper Saddle River, NJ: Prentice Hall.
- Seldomridge, L. A., & DiBartolo, M. C. (2004). Can success and failure be predicted for baccalaureate graduates on the computerized NCLEX-RN? *Journal of Professional Nursing*, *20*, 361–368. doi:10.1016/j.profnurs.2004.08.005
- Simon, S. (2000). *ROC*. Retrieved October 1, 2005, from the Children’s Mercy Hospitals & Clinics Web site: http://www.cmh.edu/stats/ask/roc.asp
- Smith, J. E., Winkler, R. L., & Fryback, D. G. (2000). The first positive: Computing positive predictive value at the extremes. *Annals of Internal Medicine*, *132*, 804–809.
- Spurlock, D., Jr. (2006). Do no harm: Progression policies and high-stakes testing in nursing education. *Journal of Nursing Education*, *45*, 297–302.
- Spurlock, D. R., Jr., & Hanks, C. (2004). Establishing progression policies with the HESI Exit Examination: A review of the evidence. *Journal of Nursing Education*, *43*, 539–545.
- Tabachnick, B. G., & Fidell, L. S. (2001). *Using multivariate statistics* (4th ed.). Needham Heights, MA: Allyn & Bacon.

HESI Exit Exam Score Descriptive Statistics by NCLEX-RN Outcomes and ANOVA Comparison

| HESI Score | NCLEX-RN Pass | NCLEX-RN Fail | ANOVA, F (p) |
|---|---|---|---|
| First Exit Exam mean score (SD) | 845.22 (114.12) | 751.82 (95.81) | 14.47 (0.00) |
| Final Exit Exam mean score (SD) | 920.96 (75.34) | 893.46 (48.47) | 2.74 (0.10) |

Logistic Regression of HESI Exit Exam Scores Predicting NCLEX-RN Outcomes

| Predictor | β | SE | OR | 95% CI for OR | Wald Statistic |
|---|---|---|---|---|---|
| First Exit Exam | −0.008 | 0.002 | 0.992 | 0.988 to 0.997 | 12.320 |
| Final Exit Exam | −0.005 | 0.003 | 0.995 | 0.989 to 1.001 | 2.722 |

Predictive Test Characteristics of Importance to Nurse Educators with Clinical-Educational Comparison

| Test Characteristic | Clinical | Educational | Formula to Calculate |
|---|---|---|---|
| Sensitivity (probability of testing positive in those who are actually positive) | Probability of a positive mammogram, given the presence of breast cancer | Probability that those who failed were “predicted to fail” | a/(a + c) |
| Specificity (probability of testing negative in those who are actually negative) | Probability of a negative mammogram, given the absence of breast cancer | Probability that those who passed were “predicted to pass” | d/(b + d) |
| Positive predictive value (probability of a true positive in those with a positive test) | Probability of having breast cancer, given a positive mammogram | Probability of failing, given a prediction of failure | a/(a + b) |
| Negative predictive value (probability of a true negative in those with a negative test) | Probability of being disease free, given a negative mammogram | Probability of passing, given a prediction of passing | d/(c + d) |
| Accuracy (probability that either prediction [diseased versus nondiseased] was correct) | Combined probability that those with breast cancer and those without breast cancer were correctly diagnosed | Probability that students who passed or failed the NCLEX-RN were correctly “predicted to pass” or “predicted to fail” | (a + d)/(a + b + c + d) |
| Odds ratio (the odds of an event occurring in one group versus another group) | The odds of having breast cancer in diseased versus nondiseased state | The odds of failing the NCLEX-RN in those who are predicted to fail versus those who are predicted to pass | (a/c)/(b/d) |

Contingency Table Used to Figure Predictive Test Parameters

| HESI Exit Exam Prediction | NCLEX-RN Outcome: Fail | NCLEX-RN Outcome: Pass | Total No. of Predictions |
|---|---|---|---|
| Fail | a | b | a + b |
| Pass | c | d | c + d |
| Total NCLEX-RN outcomes | a + c | b + d | a + b + c + d |
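The diagnostic indexes defined above can be computed directly from the contingency table cells. The following Python sketch implements the table's formulas; the example counts are hypothetical and for illustration only, not data from this study.

```python
def diagnostic_indexes(a, b, c, d):
    """Compute predictive test characteristics from contingency table cells:
    a = predicted fail & failed NCLEX-RN, b = predicted fail & passed,
    c = predicted pass & failed,          d = predicted pass & passed."""
    return {
        "sensitivity": a / (a + c),
        "specificity": d / (b + d),
        "ppv": a / (a + b),
        "npv": d / (c + d),
        "accuracy": (a + d) / (a + b + c + d),
        "odds_ratio": (a / c) / (b / d),
    }

# Hypothetical counts for illustration only (n = 100)
results = diagnostic_indexes(a=8, b=4, c=2, d=86)
print({k: round(v, 2) for k, v in results.items()})
```

Note how the odds ratio can look impressive even when PPV is modest: with these hypothetical counts, only 8 of 12 students predicted to fail actually did, despite a large OR, which mirrors the article's caution about inferring too much from any single index.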

Performance of Different Cutoff Scores for Predictions to Pass or Fail on First Exit Exam Scores

| Cutoff Score | Sensitivity | Specificity | PPV | NPV | Overall Accuracy | OR |
|---|---|---|---|---|---|---|
| 900 | 0.92 | 0.30 | 0.17 | 0.95 | 0.39 | 4.78 |
| 875 | 0.88 | 0.37 | 0.18 | 0.95 | 0.44 | 4.19 |
| 850 | 0.83 | 0.46 | 0.19 | 0.95 | 0.51 | 4.33 |
| 825 | 0.79 | 0.58 | 0.22 | 0.95 | 0.61 | 5.26 |
| 800 | 0.71 | 0.68 | 0.25 | 0.94 | 0.68 | 5.10 |
| 775 | 0.58 | 0.74 | 0.25 | 0.92 | 0.72 | 3.89 |
| 750 | 0.54 | 0.83 | 0.33 | 0.92 | 0.79 | 5.86 |
| 725 | 0.25 | 0.88 | 0.24 | 0.88 | 0.79 | 2.38 |
| 700 | 0.20 | 0.91 | 0.26 | 0.88 | 0.82 | 2.65 |
| 675 | 0.21 | 0.95 | 0.38 | 0.89 | 0.85 | 4.83 |
| 650 | 0.21 | 0.97 | 0.50 | 0.89 | 0.87 | 7.89 |
| 625 | 0.17 | 0.97 | 0.50 | 0.88 | 0.87 | 7.55 |
| 600 | 0.08 | 0.98 | 0.40 | 0.87 | 0.86 | 4.61 |
| 575 | 0.04 | 0.99 | 0.33 | 0.87 | 0.86 | 3.33 |
| 550 | 0.00 | 0.99 | 0.00 | 0.87 | 0.86 | 0.00 |
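A cutoff table like the one above can be produced by sweeping candidate cutoffs over first-attempt scores and outcomes. The sketch below shows the mechanics in Python using a small, entirely hypothetical data set (the study's raw scores are not reproduced here); "predicted to fail" means a first Exit Exam score below the cutoff.

```python
def sweep_cutoffs(scores, failed, cutoffs):
    """For each cutoff, treat a first Exit Exam score below the cutoff as
    'predicted to fail' and compute sensitivity and specificity."""
    rows = []
    for cut in cutoffs:
        tp = sum(s < cut and f for s, f in zip(scores, failed))        # predicted fail, failed
        fp = sum(s < cut and not f for s, f in zip(scores, failed))    # predicted fail, passed
        fn = sum(s >= cut and f for s, f in zip(scores, failed))       # predicted pass, failed
        tn = sum(s >= cut and not f for s, f in zip(scores, failed))   # predicted pass, passed
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        rows.append((cut, round(sens, 2), round(spec, 2)))
    return rows

# Hypothetical first-attempt scores and NCLEX-RN failure indicators
scores = [620, 680, 720, 760, 810, 860, 910, 960]
failed = [True, True, True, False, False, False, False, False]
for row in sweep_cutoffs(scores, failed, [650, 750, 850]):
    print(row)  # (cutoff, sensitivity, specificity)
```

As the real table shows, raising the cutoff trades specificity for sensitivity; the "best" cutoff depends on which classification errors a school considers most costly.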

HESI Exit Exam Categories, Descriptors of Risk, and Calculated Probabilities of Failure for Each Category

| Scoring Category | Scoring Interval | Expectations for Student Performance | Predicted Probability of NCLEX-RN Failure |
|---|---|---|---|
| A | ≥ 950 | Outstanding probability of passing | < 0.03 |
| B | 900 to 949.9 | Excellent probability of passing | 0.03 to 0.05 |
| C | 850 to 899.9 | Average probability of passing | 0.05 to 0.08 |
| D | 800 to 849.9 | Below average probability of passing | 0.08 to 0.11 |
| E | 750 to 799.9 | Additional preparation needed | 0.11 to 0.16 |
| F | 700 to 749.9 | Serious preparation needed | 0.16 to 0.22 |
| G | 650 to 699.9 | Grave danger of failing | 0.22 to 0.29 |
| H | < 650 | Poor performance expected | ≥ 0.29 |