Journal of Nursing Education

The Relationship Among Multiple Assessments of Nursing Education Outcomes

Bessie Marquis, MSN, RN, CNAA; Charles C Worth, PhD

Abstract

Three internal and three external outcome measures generated data for this longitudinal study. The internal outcome measures were nursing GPA, nonnursing GPA, and clinical evaluations of students during eight clinical rotations. The external evaluations were NCLEX scores, the graduates' self-rating of their competency in meeting program objectives, and their immediate supervisors' ratings on the identical competency rating scale.

Data analyzed using Pearson correlation coefficients revealed significant correlations between all internal measures of academic outcomes and one external measure, the NCLEX score. There was no correlation between any nursing measure of outcome and the supervisors' ratings, although there was a modest correlation between the supervisors' ratings and the nonnursing GPA. The faculty clinical evaluation was the only measure that correlated significantly with the alumni ratings. There was no correlation between the supervisors' and the graduates' ratings.

Introduction

Recent standards (National Association of State Universities and Land Grant Colleges, 1988) call for the use of education outcome assessment measures. Both the National League for Nursing and the California Board of Registered Nursing require educational evaluation as part of the accreditation process. An emerging view of student outcome assessment calls for the measurement of educational effectiveness rather than measuring institutional reputation or resources (Astin, 1985). Indeed, many colleges and universities now require an examination of institutional effectiveness as part of the accreditation process (Western Association of Schools and Colleges, 1988).

Recommendations for the measurement of educational outcomes advise that outcome measures be developed locally and be multivariate in nature (National Association of State Universities and Land Grant Colleges, 1988).

Conducting educational evaluations with outcome measurements presents a small school of nursing with a paradox: how to reconcile the inevitable small samples, imperfect participation, and nonstandardized measurements of locally developed assessment tools with the need to have reliable and valid information upon which to base curriculum decisions. Can nurse educators use outcome assessment as a reliable tool to make decisions?

Related Literature Review

General comments

During the 1960s and 1970s, the majority of educational outcome studies focused on student grade point averages, program completion, or success on the state/national licensure examination. These studies used a single-predictor, single-outcome (univariate) approach rather than a multivariate one. An excellent review can be found in Schwirian (1977).

Multivariate approaches as a method of studying nursing education outcomes and/or predictors of success became more prevalent in the 1980s (Higgs, 1984; Johnson, 1988). Hechenberger (1988) reports that educational outcome studies continue to be underrepresented in nursing research, and this view is supported by Strickland and Waltz (1988).

Key issues

There are several themes in the studies available. Cognitive domain predictors, such as GPA, frequently do not correlate with clinical performance in nursing school (Soffer & Soffer, 1972). This finding was substantiated by Schwirian and Gortner (1979). Many studies, however, have shown a positive correlation between nursing grades and/or overall academic achievement and subsequent success on the NCLEX licensing examination (Outtz, 1979; Payne & Duffey, 1986; Quick, Krupa, & Whitley, 1985). Indeed, the majority of educational outcome assessment studies have focused on the relationship of academic achievement to NCLEX scores.

TABLE 1: Population Demographics

TABLE 2: Differences Between First (Fall 1984) and Last (Spring 1986) Cohort

Fewer studies have examined the relationship of academic success and licensure examination scores to later perceived competency in postgraduation nursing practice. The studies that have correlated academic achievement with later success in nursing practice have shown that academic success has little predictive ability in determining success in postgraduation nursing practice (Burgess, Duffy, & Temple, 1972; Dubs, 1975; Seither, 1980; Soffer & Soffer, 1972).

Framework for the Study

Nursing faculty at California State University (CSU), Chico School of Nursing consider six criteria pertinent for measuring the effectiveness of nursing education. Three of these measurements are internal: the GPA in nursing courses, the nonnursing GPA, and ratings by faculty on the faculty clinical evaluation (FCE). Three are external: the licensure examination (NCLEX) scores, the Competency Rating Scale rated by the graduate (CRS-G), and the Competency Rating Scale rated by the graduate's immediate supervisor (CRS-S). Both CRS ratings were obtained approximately two to three years after graduation.

The research questions addressed in this study are:

* How are these six education outcome measurements alike and how do they differ?

* What are their reliability and validity?

Four successive nursing classes were selected for this study. The four classes had 134 nursing graduates (graduating Fall 1986 through Spring 1988) from an entering group of 155 nursing students. This represented an attrition rate of 14%. The curriculum remained unchanged during the study. Although there were some minor changes, the school of nursing faculty was stable (Table 1).

Over the course of the study, the four classes became progressively less selective as the number of applications declined. The first graduating class represented a more homogeneous group of students than the last graduating class with respect to their grades in the seven prerequisite courses and their nonnursing GPA. This resulted in a lower GPA and nonnursing GPA for the last class in the study, as shown in Table 2.

Data Collection

Internal outcome measures

Two of the three internal outcome measures were composed of academic grades. One of these, nursing GPA, consisted of the grades acquired by the graduates in the 60-unit nursing curriculum (N = 134). The second, nonnursing GPA, was the total GPA of all other college work taken at CSU, Chico (N = 134).

The third outcome measure was an average of faculty ratings of the student's clinical competency in nursing clinical courses, using an evaluation tool developed by two of the nursing faculty (Figure). The evaluation tool had been developed because variables other than clinical competency influenced the student's clinical grade; for example, student absences or the paper component of a clinical course influence the grade in the clinical practicum. An evaluation tool that focused only on clinical competency would more accurately measure that component. The FCE was completed by each clinical instructor immediately following a student's clinical rotation. Over the five-semester curriculum sequence, students were enrolled in eight clinical courses. Using factor analysis, this rating scale was reduced to one factor (clinical competency), and the eight scores were subsequently combined into a global composite score. This instrument, previously reported in Wold and Worth (1990), was shown to have reliability and face, construct, and concurrent validity. These data were obtained on 97% of the subjects (n = 132).
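The reduction of the eight clinical-rotation ratings to a single competency composite can be sketched as follows. This is a hypothetical illustration only: the data, variable names, and use of scikit-learn's FactorAnalysis are assumptions, not the authors' original procedure.

# Hedged sketch (not the authors' code): reduce eight per-rotation FCE ratings
# to a single "clinical competency" composite via a one-factor model.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical data: 132 students x 8 clinical-rotation FCE ratings.
fce = rng.integers(1, 5, size=(132, 8)).astype(float)

# Fit a one-factor model; the single factor is interpreted as clinical competency.
fa = FactorAnalysis(n_components=1, random_state=0)
competency = fa.fit_transform(fce).ravel()  # one global composite score per student

print("item loadings:", fa.components_.ravel().round(2))
print("first five composite scores:", competency[:5].round(2))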

External outcome data

The fourth outcome measurement, the licensing exam (NCLEX) score, was collected via two methods. Graduates who released their scores prior to taking the examination had their scores reported to the school of nursing. Graduates who had not released their scores were contacted by faculty for permission to use their scores. In this manner scores were obtained on 96% (n = 131) of the graduates.

The fifth and sixth measures of educational outcomes, the CRS-G and CRS-S, rated nursing practice and were assessed postgraduation.

The CSU, Chico School of Nursing faculty consider competency in the workplace an important and valid outcome measurement. The faculty also believe that in order for workplace competency to be considered an educational outcome, it must relate to the educational objectives of the curriculum. This view is supported by Johnson (1988).

The end-of-program objectives were used as the basis for the development of a rating scale in a 1982 outcome study conducted by the school of nursing. The end-of-program objectives were delineated with statements reflecting behaviors that the respondents used to describe achievement of specific objectives.

The 10 program objectives were:

1. To base nursing practice on relevant principles, concepts and theories.

2. To use the nursing process to plan and to implement creative nursing interventions in situations with unpredictable outcomes that do not respond to routine solutions.

3. To apply the nursing process in meeting the health needs of individuals, families, defined population groups, and communities (focus depends on the nurse's setting).

4. To use a research approach to problems in nursing practice.

5. To assume ethical and legal accountability for one's nursing practice.

6. To use the leadership process in nursing practice.

7. To function in collegial relationships with other health professions in the delivery of health care.

FIGURE: Faculty Clinical Evaluation Tool*

8. To foster change to facilitate optimal wellness for patients/clients.

9. To serve as a patient/client advocate to facilitate access to, and continuity of, quality health care.

10. To assume responsibility for continued personal and professional development.

Behaviors are complex; Bloom's taxonomy recognizes cognitive, psychomotor, and affective components. The rating scale incorporated all three dimensions. Instructions for use of the rating scale were provided.*

Ratings were obtained from the alumni and their immediate supervisors. The results of the 1982 study provided the faculty with information upon which to base some curriculum decisions. Although no substantive changes were made, some incremental change occurred as a result of the 1982 study.

Factor analysis of the 1982 data revealed that one general factor, competency, accounted for most of the variance among the 10 items. Therefore, the 10 items were pooled to form a single composite competency score for each rating: one for the graduate evaluation and one for the supervisor evaluation.

Last, a reliability analysis was carried out to determine the internal consistency of the CRS. The reliability coefficient (KR-20) was .90 and indicated a high degree of reliability for our purposes.
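As an illustration of that reliability analysis, the sketch below computes Cronbach's alpha, which generalizes KR-20 to items that are not scored dichotomously. The 70-respondent, 10-item data matrix is simulated and the code is only meant to make the computation concrete, not to reproduce the original analysis.

# Hedged sketch: internal consistency of a 10-item rating scale such as the CRS.
# Cronbach's alpha reduces to KR-20 when items are dichotomous; the data below
# are simulated for illustration only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of ratings."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
crs = rng.integers(1, 5, size=(70, 10)).astype(float)  # 70 respondents, 10 objectives
print(f"alpha = {cronbach_alpha(crs):.2f}")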

The minor changes made in the curriculum as a result of the 1982 study had not altered the end-of-program objectives, and the same CRS was used for the 1989 study.

Procedure

Twenty months after the last class in this study graduated (36 months after the first class graduated), current addresses were located for 82% (n = 120) of the subjects. In November 1989, 120 questionnaires were sent to these alumni and to their supervisors. Responses were obtained from 70 alumni (58%) and 73 supervisors (62%); there were 54 matched pairs (45%). These response rates are higher than the 52% supervisor return and 27% graduate return reported by Knowles and colleagues (1985).

TABLE 3: Descriptive Statistics

Results

Descriptive statistics of the six outcome measures

The mean and standard deviation for each outcome measure are shown in Table 3.

Findings of graduate/supervisor ratings

The findings of the 1989 survey closely matched the 1982 survey. Both the nursing school alumni and their supervisors were satisfied with the competency of the nursing school graduates in the practice setting. With a rating from (1) completely satisfied to (4) not satisfied, the mean was 1.38 for the graduate rankings and 1.36 for supervisors, indicating a very satisfied ranking (Table 3). The majority of respondents ranked most competencies as a 3 or 4.

A large number of respondents (n = 59) completed the open-ended comment section, and overall their comments were very positive. This held true for both supervisors (44% wrote comments) and alumni (39% commented).

Pearson's correlations

In order to accomplish the purpose of this study, it was necessary to examine relationships among the six measures of educational outcomes. Pearson correlation coefficients were calculated to determine these relationships and are shown in Table 4.
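A minimal sketch of this step is shown below. The column names and simulated data are assumptions for illustration; scipy.stats.pearsonr is used here to obtain each coefficient and its p-value, although the software used in the original analysis is not specified in the article.

# Hedged sketch: pairwise Pearson correlations (with p-values) among the six
# outcome measures, computed on cases with complete data.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
measures = ["nursing_gpa", "nonnursing_gpa", "fce", "nclex", "crs_g", "crs_s"]
data = pd.DataFrame(rng.normal(size=(51, 6)), columns=measures)  # hypothetical data

for i, a in enumerate(measures):
    for b in measures[i + 1:]:
        r, p = pearsonr(data[a], data[b])
        print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f}")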

The strongest correlation occurred between the FCE and the nursing GPA (r = .74, p < .001). This finding is not surprising since the same faculty not only assigned overall grades for both theory and clinical courses but rated the students on the FCE as well. Other outcomes that correlated with the FCE were NCLEX scores, showing a moderate correlation (r = .40, p < .001), and two more modest correlations, the nonnursing GPA (r = .25, p < .002) and the CRS-G (r = .23, p < .03).

The CRS-S did not correlate significantly with most other measures; the supervisors' ratings correlated only with the nonnursing GPA (r = .25, p < .02). The significant correlations found between the various academic measurements and the NCLEX are supported in the literature and were not surprising. All three internal measures, the faculty clinical evaluation (r = .40), the nonnursing GPA (r = .41), and the nursing GPA (r = .62), were significant at p < .001. As mentioned previously, there was a modest correlation between the FCE and the CRS-G; thus, students who were rated by faculty as more clinically competent later rated their own job performance more highly. This was the only outcome measure that correlated with the CRS-G. Most noteworthy was the absence of a correlation between the graduates' self-ratings and the supervisors' ratings. Seither (1980) also reported this finding.

In addition to the correlation coefficients, a factor analysis was performed on the 51 cases with complete data to determine whether the six measures of educational outcome could be reduced to fewer central factors. The results of a principal factor analysis with orthogonal rotation are shown in Table 5. There was a group of closely related variables, which we chose to call the academic performance factor, composed of the FCE, the nursing GPA, the nonnursing GPA, and the NCLEX scores. The latter two most closely defined the factor.

An attempt to isolate a second factor was less successful, with the nursing GPA and the FCE as the only loading variables, and these two are important for defining the first factor as well. Perhaps the best definition of the second factor, if there is one, is the graduates' ratings of their own competency. Its loading is not high, but it is important because it does not correlate with the first factor; this suggests a need for more data and research. Clearly, there was no postgraduation factor; the factor analysis indicates that the graduates' and the supervisors' perceptions of competency are independent of each other and of the other measures.
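The kind of two-factor, varimax-rotated solution summarized in Table 5 could be reproduced along the following lines. This is a sketch under stated assumptions: the data are simulated, and scikit-learn's FactorAnalysis (which supports rotation="varimax" in recent versions) stands in for whatever software the original study used.

# Hedged sketch: two-factor solution with varimax (orthogonal) rotation over the
# six outcome measures; loadings per measure are printed, as in Table 5.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
measures = ["nursing_gpa", "nonnursing_gpa", "fce", "nclex", "crs_g", "crs_s"]
data = pd.DataFrame(rng.normal(size=(51, 6)), columns=measures)  # hypothetical data

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(data)
loadings = pd.DataFrame(fa.components_.T, index=measures,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))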

Discussion

This study supports the recommendations of the National Association of State Universities and Land Grant Colleges that multiple measures be used to assess learning outcomes. In this study, the product of nursing education (the graduates), the provider of the educational process (the college and its faculty), and the employer of the product (the health organization's supervisors) all held somewhat differing views and thus contributed some new information on the outcome of the educational process.

It may not be necessary to perform all six outcome assessments, as some appear to measure similar components. More data and work are needed, however, before a compelling recommendation can be made. At the least, the lack of a significant correlation between alumni and their supervisors suggests that the school of nursing should continue to periodically assess both alumni and their supervisors regarding competency in the workplace.

TABLE 4: Pearson Correlation Coefficients

The limited number of matched sets for all six measurements of outcome (n = 51) requires that the findings of the study be interpreted cautiously. Additionally, although the CRS is reliable and has both face and content validity, we found only two small relationships between the CRS and the other variables. This finding, together with the absence of a correlation between supervisors and graduates, indicates that they interpreted the criteria for competency differently.

It is noteworthy that there was no correlation between nursing academic standing and perceived competency in the practice setting. Although both alumni and supervisors rated the program highly, there was no relationship between the degree of success in nursing school and later perceived competency in the workplace.

It is also noteworthy that the supervisors' only correlation occurred with the nonnursing GPA. It would be interesting to determine if the correlation would be stronger for certain university courses. Wold and Worth (1990) found prerequisite science courses correlated with success in nursing school. Perhaps if these courses were separated from other nonnursing grades, the supervisors' competency ratings would be more strongly correlated with specific academic achievement.

The lack of correlation of the supervisors' and graduates' rankings with most of the academic measures, and with each other, has several possible explanations. Perhaps the nursing curriculum produces a satisfactory product as a result of the educational process, even though during that process a particular individual may not perform at a high academic level. If this is true, it has many implications for nursing faculty, specifically for decisions regarding admission and retention of students. This finding also could be due to the influence of the work environment. For whatever reasons, the study did show that the high achiever in academe was not consistently ranked higher in the workplace.

TABLE 5: Varimax Rotated Factor Matrix After Rotation With Kaiser Normalization

Recommendations for further study

The results of this study cannot be generalized readily to other nursing curricula, but the findings do suggest that further research be carried out to compare academic performance with nursing practice competency. During the recent decline in nursing school enrollment, the makeup of many schools of nursing changed from a homogeneous group of high achievers to a heterogeneous group of students with a wide variety of learning needs and academic capabilities. This study suggests that academic standing in nursing school may not correlate with later perceived practice competency, by either the alumni or their immediate supervisors. Additional research in this area is necessary to determine if the findings in this study will hold true with a variety of students as well as a variety of schools of nursing.

References

  • Astin, A. W. (1985). Achieving institutional excellence. San Francisco: Jossey-Bass.
  • Burgess, M., Duffy, M., & Temple, F. (1972). Two studies of prediction of success in a collegiate program of nursing. Nursing Research, 21, 357-366.
  • Dubs, R. (1975). Comparison of student achievement with performance ratings of graduates and state board examination scores. Nursing Research, 24, 59-62.
  • Hechenberger, N. (1988). Future directions for improving the quality of the measurement of outcomes for education and research in nursing. In O.L. Strickland & C.F. Waltz (Eds.), Measurements of nursing outcomes. New York: Springer.
  • Higgs, Z. R. (1984). Predicting success in nursing: From prototype to pragmatics. Western Journal of Nursing Research, 6(1), 77-95.
  • Johnson, J. (1988). Assessing students in relation to curriculum objectives. In O.L. Strickland & C.F. Waltz (Eds.), Measurements of nursing outcomes. New York: Springer.
  • Knowles, L., Strozier, G.R., Wilson, J.M., Bodo, T.L., Greene, D.B., & Saver, V.T. (1985). Evaluation of a baccalaureate nursing program by alumni and of alumni by their supervisors. Journal of Nursing Education, 24, 261-264.
  • National Association of State Universities and Land Grant Colleges, Council on Academic Affairs (1988, November). Statement of principles of student outcomes assessment. Washington, DC: Author.
  • Outtz, J. (1979). Predicting the success of state board exams for blacks. Journal of Nursing Education, 18, 35-40.
  • Payne, M., & Duffey, M. (1986). An investigation of the predictability of NCLEX scores of BSN graduates using academic predictors. Journal of Professional Nursing, 2, 326-333.
  • Quick, M., Krupa, K., & Whitley, T. (1985). Using admission data to predict success on NCLEX-RN in a baccalaureate program. Journal of Professional Nursing, 1, 98-103.
  • Schwirian, P.M. (1977). Prediction of successful nursing performances: Parts I and II, (DHEW Publication No. 7727). Washington, DC: U.S. Government Printing Office.
  • Schwirian, P., & Gortner, S. (1979). How nursing schools predict their successful graduates. Nursing Outlook, 27, 352-358.
  • Seither, F. (1980). Prediction of achievement in baccalaureate nursing education. Journal of Nursing Education, 19(9), 28-36.
  • Soffer, J., & Soffer, L. (1972). Academic record as a predictor of future job performance of nurses. Nursing Research, 21, 28-36.
  • Strickland, O.L., & Waltz, C.F. (Eds.). (1988). Measurements of nursing outcomes: Practice, education, and research. New York: Springer.
  • Western Association of Schools and Colleges. (1988). Handbook of accreditation: Accrediting commission for senior colleges and universities (p. 16). Aptos, Calif: Author.
  • Wold, J., & Worth, C. (1990). Baccalaureate student nurse success prediction: A replication. Journal of Nursing Education, 29, 84-89.


10.3928/0148-4834-19920101-09
