Journal of Nursing Education

EDUCATIONAL INNOVATIONS 

Clinical Performance Appraisal: Renewing Graded Clinical Experiences

Lynn L. Wiles, MSN, RN, CEN; Joanne F. Bishop, MSN, RN, OCN

Abstract

Effective evaluation of student clinical performance has been a recurrent problem for nurse educators. Difficulty in clearly defining the purpose and functions inherent to nursing occurs secondarily to a lack of objective measurement criteria. Traditionally, the common denominator for measuring minimal student performance has been safety. However, merely saying that a student is safe does not begin to address the multitude of other evaluation issues. These issues include the impact of evaluation on student motivation and the learning process, faculty/student satisfaction with the evaluation process, and development and implementation of reliable and valid evaluation tools.

The faculty at our school of nursing chose to explore available options for clinical evaluations in an attempt to address faculty and student dissatisfaction with the current pass/fail clinical evaluation system. The faculty believed that the pass/fail system did not adequately reflect clinical performance, and distinctly did not reward high achievers. The faculty also believed that the tools lacked sufficient details to relate discrete clinical behaviors to course and curriculum objectives. Therefore, the goal of improving the correlation of clinical behaviors specific to course objectives was established.

REVIEW OF LITERATURE

Review of the literature revealed a lack of recent articles relating to the evaluation process. Literature addressing issues of fairness and faculty/student satisfaction with the evaluation process is reflected in discussions of graded systems versus pass/fail systems. Gennaro et al. (1982) indicated that graded systems are entirely subjective and difficult to defend, and found grading criteria difficult to observe and measure. The authors suggested that graded systems inhibit the learning process by discouraging students from seeking guidance, which leads to increased faculty/student dissatisfaction with the learning process. Inherent subjectivity of assigning letter grades leads to grading inconsistency and a focus on grades instead of on learning.

Gross and Eckhart (1986) also agreed that graded clinical evaluations are too subjective by nature, leading to inconsistencies in evaluation, negative instructor-student relationships, and student focus on grades instead of learning. These concerns led Gross and Eckhart to develop a modified contractual grading system, with contract agreements based on student attainment of preset standards accounting for 60% of the clinical grade. The remaining 40% of the grade was earned from a criterion-referenced written examination. Students earned letter grades based on a combination of written examination scores and completion of chosen contracts. Identified strengths included improved instructor-student relationships, but contract grading was not found to be acceptable for all clinical areas (Gross & Eckhart, 1986).

Tower and Majewski (1987) identified faculty dissatisfaction with the need for lengthy anecdotal notes as the reason their school changed from a traditional graded system to a pass/fail format. Their experience at an associate degree nursing program identified the need for a simple, concise tool that could be used by multiple evaluators in multiple settings, and also meet the time constraints of a compressed two-year nursing curriculum.

The Objective Structured Clinical Examination (OSCE) is a graded evaluation system that was developed for use in medicine (Harden, 1975) to evaluate medical student performance in physical assessment skills. The OSCE was adapted by McKnight et al. (1987) to study the use of a simulated clinical situation to evaluate nursing student clinical performance. The tool was found to decrease instructor subjectivity and reduce faculty time spent on the evaluation process in a simulated setting, but was not tested in a clinical environment. Ross et al. (1988) also adapted and tested the OSCE for use in evaluating nursing student competence in clinical skill performance. They found that the OSCE decreased student anxiety, but the authors indicated that more study is needed.

A descriptive survey by Karns and Nowotny (1991) found that concerns about fairness and subjectivity have led a majority of baccalaureate schools to use a pass/fail format. However, the pass/fail option may inhibit student motivation and lead to student dissatisfaction with the evaluation process because achievement is not specifically rewarded. Other authors have indicated that pass/fail formats are less stressful for faculty and help to focus the evaluation process on learning (Thompson et al., 1991).

There is support in the literature for using objective, written clinical expectations to increase student satisfaction, reduce subjectivity (Carlson, Lubiejewski, & Polaski, 1987), and promote learning (Booth, 1987). Objectivity is further enhanced when written objectives use a criterion-referenced model (Novak, 1988; Pavlish, 1987; Cottrell et al., 1986; Bondy, 1984). The literature supports a shift in evaluation methodology from general categories to specific objectives, and this desire for specificity has led to strong support for the use of specific criterion-referenced tools (Krichbaum, 1994; Karns & Nowotny, 1991).

Research by Bondy (1984) supports the use of criterion-referenced rating scales as a method to reward students who achieve high levels of performance. The use of criterion-referenced rating scales was found to have a significant positive effect on the accuracy and reliability of student evaluation scores. Using measurable, objective behaviors as criterion-referenced standards for clinical evaluation has been computerized by one program (Cottrell et al., 1986) and found to be an effective strategy for promoting formative evaluation, but lacking in interrater reliability and content validity.

THE PROCESS OF CHANGE

After making the decision to develop a graded clinical performance appraisal (CPA), the faculty participated in a series of clinical evaluation workshops. Under the direction of Dr. Kathleen Bondy, the faculty devised a template for the CPA based on Bondy's criterion-referenced rating scales (1983). The first step in this implementation process was to adopt a "grid" to identify criteria scale labels and define limits, including weights of scores and passing values. Each clinical course coordinator was then asked to convert the respective evaluation tool to this new format. This conversion required removing quality indicators, decreasing wordiness, and grouping stems into similar categories.
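The article does not reproduce the grid itself, so the following is only a minimal sketch, written in Python for illustration, of what such a structure might look like. The scale labels follow the five-tier scale described later in the article; the numeric weights and the passing value are hypothetical assumptions, not the faculty's actual figures.

    # Hypothetical sketch of a CPA grid: each scale label is paired with a
    # numeric weight, and a minimum passing value is defined. The labels follow
    # the article's five-tier scale; the weights and passing value below are
    # illustrative assumptions only.
    CPA_GRID = {
        "independent": 5,
        "supervised": 4,
        "assisted": 3,
        "marginal": 2,
        "dependent": 1,
    }

    # Hypothetical minimum average rating required to pass the clinical course.
    PASSING_VALUE = 3  # "assisted" or better, for illustration only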

A CPA committee was formed by selecting a faculty representative from each level of the pre-licensure undergraduate program. This committee served as mentors for the remaining faculty to ease the transition to graded clinical experiences. In addition to acting as a liaison with Dr. Bondy, the members encouraged revisions, assisted with troubleshooting, provided resources, conducted workshops and orientations, and developed the CPA protocol. An additional role of the committee was to study the effect of the new tool on student clinical grades, faculty and student perceptions of care provided, and quality of nursing care delivered.

Full-time faculty orientation was accomplished by viewing videotapes (Bondy, 1981) that presented students performing nursing skills at different levels of competency. The faculty viewed tapes depicting the skills sequentially and randomly, scored each interaction independently, and then discussed the findings. After the faculty viewed and scored several vignettes under the supervision of Dr. Bondy, the consensus was that interrater reliability had been established. Individual course coordinators provided adjunct clinical faculty orientation following the same format as that of full-time faculty.

Students were also oriented to the new CPA. Students observed the same videos as the faculty to gain an understanding of the behaviors faculty believed constituted a specific scoring level. Clinical faculty provided an in-depth review of the scoring criteria required for the specific course and described how students could meet these criteria.

After the first semester of using the new CPA format, faculty gathered to evaluate the CPA and to make any needed changes. The consistent recommendation was to consolidate. Most CPA tools underwent a revision that reduced the number of stems by approximately 50%. Clinical behaviors that were difficult to measure were evaluated for appropriateness and reworded or eliminated. The "not appropriate" category was eliminated from the grid with the belief that if a behavior was inappropriate, it should not be included in the tool. The "not observed" category was retained. Students do not receive credit for nonobserved skills unless they forward the information to faculty in daily clinical logs or clinical conferences.

The process of CPA revision is ongoing. Some courses are taught each semester, thereby hastening tool refinement. The faculty is working to maintain consistency between the tools and to provide support and direction to each other, as well as to the students.

CHALLENGES AND LESSONS LEARNED

Coping With Change

The first challenge in revising our method of clinical evaluation was to change the perception of faculty and students that everyone would receive an "A" in clinical courses. Historically, clinical course grades had been based on satisfactory clinical performance and graded case studies, care plans, and oral presentations. The letter grade received reflected only the student's written work. Learner maturity and individual motivation were factors in facilitating this change to reward the above-average student.

A five-tier scale with the labels independent, supervised, assisted, marginal, and dependent was implemented. Many students perceived themselves as inadequate if they did not score "independent" in each competency. Students reported that they were dissatisfied with "supervised" ratings and wished that they had received more timely formative evaluation indicating that they were not meeting the expectations of the instructor. This concern was addressed by having students complete a self-evaluation at midterm to compare with the instructor's evaluation. Differences in opinion and clarification of expectations were addressed while there was still time to improve behaviors.

In response to students' need to feel independent rather than supervised, the faculty has considered changing the scale labels. Additionally, students voiced concern that if they asked questions, the faculty would perceive them as dependent. Students were reassured that asking questions demonstrates accountability and independence by identifying one's own learning needs, and they were encouraged to investigate possible solutions to their problems.

Inconsistency in Tool Development

To successfully complete any clinical course, students were expected to meet all competencies of previous courses, as well as the new competencies for each course. Inconsistencies resulted when each course coordinator independently developed a tool. In retrospect, it would have been easier to develop the competencies by working in faculty groups, beginning with initial, sophomore clinical experiences and increasing the complexity of competencies as deemed appropriate for advanced level courses.

Defining competencies proved to be a significant challenge. The faculty revised competencies to reflect end-of-program behaviors rather than a skills checklist. The previous tools contained repetitious, wordy, and vague competencies. In the CPA, competencies were redefined without descriptive terms and screened for objectivity and relevance to clinical focus.

CPA Scoring

Traditionally, faculty provided students with a summative evaluation on the last day of the clinical experience. For the CPA to be effective, the faculty believed that students should be scored after each clinical experience. This required the faculty to document the progression of the student throughout the clinical rotation, promoting timely formative feedback. Faculty agreed to sacrifice efficiency to promote quality of the evaluation process.

Scoring was accomplished by summing the total points earned by the student and dividing that sum by the total possible points. Students received individual section scores corresponding to end-of-program behaviors, as well as a final score. The purpose of individual section scoring is to support the school's evaluation plan. Section scores enabled the faculty to identify common deficiencies or trends and helped students focus additional energy on areas that required improvement.
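As an illustration of that arithmetic only, a minimal Python sketch follows. It assumes the hypothetical five-point weights sketched earlier, assumes that "not observed" competencies are simply excluded from the possible points (the article states only that undocumented, unobserved skills earn no credit), and uses made-up section names; none of these details are taken from the actual CPA.

    # Hypothetical sketch of CPA scoring: sum the points earned across rated
    # competencies and divide by the total possible points, both per section
    # (end-of-program behavior) and overall. How unobserved items affect the
    # denominator is an assumption here, not a detail given in the article.

    MAX_POINTS = 5  # weight of the highest ("independent") rating in this sketch

    def score_section(ratings):
        """ratings: point values earned, with None marking 'not observed'."""
        observed = [r for r in ratings if r is not None]
        if not observed:
            return None  # nothing observed or documented in this section
        return 100 * sum(observed) / (len(observed) * MAX_POINTS)

    def score_cpa(sections):
        """sections: dict mapping section name to a list of ratings."""
        section_scores = {name: score_section(r) for name, r in sections.items()}
        observed = [r for rs in sections.values() for r in rs if r is not None]
        overall = 100 * sum(observed) / (len(observed) * MAX_POINTS)
        return section_scores, overall

    # Illustrative use with hypothetical section names and daily ratings:
    sections = {
        "Communication": [5, 4, 5],
        "Safe practice": [4, 4, None, 5],  # one competency not observed
    }
    print(score_cpa(sections))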

CONCLUSION

The adaptation of Bondy's (1984) criterion-referenced, weighted clinical grading allowed students who excel to be recognized for their accomplishments. Faculty received an overwhelmingly positive response from the students who excelled clinically. All students were provided more structured feedback, which strengthened formative evaluation and improved student performance.

Students performing at the supervised and assisted levels are now driven by the desire to make "As" and thus perform at a higher level to receive recognition for their accomplishments. The higher level of student performance positively affects the quality of care that patients receive. Developing the CPA has been a tedious but rewarding project. When undertaking such an enormous project, teamwork, a sense of humor, and insightful revisions have been our keys to success.

REFERENCES

  • Bondy, K. (1981). Evaluation of clinical performance. Criteria for scale descriptors (R. Hyams, Ed.). Madison: University of Wisconsin.
  • Bondy, K. (1983). Criterion-referenced definitions for rating scales in nursing education. Journal of Nursing Education, 22, 376-382.
  • Bondy, K. (1984). Clinical evaluation of student performance: The effect of criteria on accuracy and reliability. Research in Nursing and Health, 7, 25-33.
  • Booth, D. (1987). Clinical evaluation assessment form. Nurse Educator, 12(2), 40.
  • Carlson, D., Lubiejewski, M., & Polaski, A. (1987). Communicating leveled clinical expectations in nursing students. Journal of Nursing Education, 26, 194-196.
  • Cottrell, B., Ritchie, P., Cox, B., Rumph, E., Kelsey, S., & Shannahan, M.A. (1986). Clinical evaluation tool for nursing students based on the nursing process. Journal of Nursing Education, 25, 270-274.
  • Gennaro, S., Theilen, P., Chapman, N., Martin, J., & Barnett, D. (1982). The birth, life, and times of the clinical evaluation tool. Nurse Educator, 7, 27-32.
  • Gross, J., & Eckhart, D. (1986). Modified contractual grading. Nursing Outlook, 34, 184-187.
  • Harden, R.M., Stevenson, M., Downie, W., & Wilson, G.M. (1975). Assessment of clinical competence using the objective structured examination. British Medical Journal, 1, 447-451.
  • Karns, P., & Nowotny, M. (1991). Clinical structure and evaluation in baccalaureate schools of nursing. Journal of Nursing Education, 30, 207-211.
  • Krichbaum, K. (1994). Clinical teaching effectiveness described in relation to learning outcomes of baccalaureate nursing students. Journal of Nursing Education, 33, 306-316.
  • McKnight, J., Rideout, E., Brown, B., Ciliska, D., Patton, D., Rankin, J., et al. (1987). The objective structured clinical examination: An alternative to assessing student clinical performance. Journal of Nursing Education, 26, 39-41.
  • Novak, S. (1988). An effective clinical evaluation tool. Journal of Nursing Education, 27, 83-84.
  • Pavlish, C. (1987). A model for clinical performance evaluation. Journal of Nursing Education, 26, 338-339.
  • Ross, M., Carroll, G., Knight, J., Chamberlain, M., Fothergill-Bourbonnais, F., & Linton, J. (1988). Using the OSCE to measure clinical skills performance in nursing. Journal of Advanced Nursing, 13, 45-56.
  • Thompson, P., Lord, J., Powell, J., Devine, M., & Coleman, E. (1991). Graded versus pass fail evaluation of clinical courses. Nursing and Health Care, 12, 480-482.
  • Tower, B., & Majewski, T.V. (1987). Behaviorally based clinical evaluation. Journal of Nursing Education, 26, 120-123.

10.3928/0148-4834-20010101-09
