The implementation of a new curriculum in an upper-division baccalaureate nursing program created a problem in the clinical evaluation of student performance. The new second-semester clinical course required assignment of an end-of-course letter grade reflecting performance in four different clinical settings: medical-surgical, community health, maternity, and pediatric nursing. The paper-and-pencil evaluation tool used in the old curriculum lacked both the discrimination needed for letter grades and the identification of competencies common to all the clinical settings. In addition, students had difficulty determining their level of performance during the semester.
Over the next few months, the six faculty members teaching the course met to resolve these problems. The roles of the baccalaureate nurse (National League for Nursing, 1977) and the program's conceptual framework and end-of-course objectives were reviewed. Three of the five roles, caregiving, health promotion, and teaching, are emphasized in the second semester. These roles, as well as the Nursing Process, became the organizing framework for tool development.
Problems associated with the development and use of clinical evaluation tools have been well documented (Wood, 1972, 1982; Woolley, 1977). Primary among these was the subjectivity inherent in observation of student performance. To minimize this problem, the faculty decided to develop a criterion-referenced tool in which student performance was measured against specified behaviors. In addition to increasing faculty objectivity, a criterion-referenced measure would facilitate learning by making explicit the behaviors at each level of performance (Bower, 1974; DeMers, 1978; Gennaro, Thielen, Chapman, Martin & Barnett, 1982; Krumme, 1975, 1977; Sommerfeld & Accola, 1978).
CLINICAL EVALUATION TOOL
Faculty began to meet weekly to identify student competencies common to all four clinical settings. The four major categories established for grouping of behaviors were the phases of the Nursing Process. Subcategories under each phase emphasized the caregiving, health promotion, and teaching roles of the baccalaureate nurse. A fifth category, professional behavior, was added. Since distinctions among three letter grades (A, B, and C) were needed, the group decided to develop a rating scale. Although rating scales have disadvantages (DeMers, 1978; Gronlund, 1971), they are a commonly used observational technique (DeMers, 1978; Gennaro et al., 1982; Sommerfeld & Accola, 1978; Woolley, 1977) capable of use with large groups of students. They also allow for "pooled ratings" (DeMers, 1978, p. 103), i.e., the averaging of ratings by several instructors, thus decreasing the influence of individual instructor bias.
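The pooled-ratings idea is simply an average across raters. A minimal sketch, with hypothetical instructor names and ratings, might look like this:

```python
def pooled_rating(ratings_by_instructor):
    """Average the ratings several instructors gave for one behavior,
    diluting the influence of any single rater's bias.
    The instructor names and values used are illustrative only."""
    ratings = list(ratings_by_instructor.values())
    return sum(ratings) / len(ratings)
```

For example, if one instructor rates a behavior 4 and another rates it 3, the pooled rating is 3.5, and that pooled value rather than either individual judgment enters the student's record.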
Entry-level behaviors for each subcategory were first identified. Then, guided by Gagné's learning-hierarchy theory, specific behavioral criteria essential for satisfactory performance at each grade level were developed on a scale from 1 to 4. A 2 was the minimal acceptable behavior and a 4 was the maximum expected behavior; a 1 indicated that the minimal behavioral criterion was not met. Numbers on the rating scale were assigned letter grades as follows: A = 4, B = 3, and C = 2. If a behavior was not observed, the evaluator was to assign the item an "X." A statistician was consulted to assist the faculty in weighting categories and rotations in a mathematically sound manner. If a rotation was four weeks long, it was weighted twice as heavily as a two-week rotation. The assessment and planning phases of the nursing process are emphasized in the second semester; therefore, the tool was weighted more heavily in these categories than in implementation and evaluation.
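The rotation-length weighting described above amounts to a weighted average in which each rotation counts in proportion to its weeks. The following sketch illustrates the idea; the rotation lengths shown are assumptions for illustration, not the program's actual schedule, and "X" (not observed) items are simply skipped:

```python
# Rating scale from the tool: 4 = A (maximum expected), 3 = B,
# 2 = C (minimal acceptable), 1 = minimal criterion not met,
# "X" = behavior not observed.
GRADE_FOR_RATING = {4: "A", 3: "B", 2: "C"}

# Illustrative rotation lengths: a four-week rotation counts twice
# as heavily as a two-week rotation.
ROTATION_WEEKS = {"medical-surgical": 4, "community health": 2,
                  "maternity": 2, "pediatric": 4}

def semester_category_score(ratings_by_rotation, category):
    """Weighted average of one category's ratings across rotations,
    weighting each rotation by its length in weeks and skipping
    'X' (not observed) items."""
    total, weight_sum = 0.0, 0
    for rotation, categories in ratings_by_rotation.items():
        observed = [r for r in categories[category] if r != "X"]
        if not observed:
            continue
        weeks = ROTATION_WEEKS[rotation]
        total += weeks * (sum(observed) / len(observed))
        weight_sum += weeks
    return total / weight_sum
```

Under this scheme, straight 4s in a four-week rotation and straight 2s in a two-week rotation pool to (4·4 + 2·2)/(4 + 2) ≈ 3.33, i.e., closer to the longer rotation's performance.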
When the tool was completed, faculty wanted to use it for formative and summative evaluation and to provide accessible feedback to students throughout the semester. First, the behaviors specific to each grade level of a subcategory (Figure 1) were included in the course syllabus purchased by students, thus making faculty expectations explicit. Next, a decision was made to take advantage of the University's PLATO (Programmed Logic for Automatic Teaching Operations, Control Data Corporation) computer system to computerize the grid (Figure 2) on which scores were to be placed for computations. A University systems software designer programmed the tool for several operations. The grid could be accessed by students and faculty via terminals located at multiple sites on campus. Faculty would grade the student on the computer grid at the end of each rotation. The computer would then weight and calculate the assigned scores and provide a summary score, e.g., 90%, for each category and for overall performance in the rotation. The scores would be available to students after they performed a self-evaluation, thus fostering formative evaluation on their part. At the end of the semester, an average for each category and for overall performance would be calculated by the computer.
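The article does not specify the exact formula the PLATO program used to turn ratings into percentages; one plausible reading, sketched below under that assumption, expresses each category's mean rating as a percent of the maximum rating of 4 and then combines categories by their weights:

```python
def summary_percent(ratings):
    """Summary score for one category in one rotation, as a percent
    of the maximum rating of 4. The conversion (mean / 4 * 100) is an
    assumption; 'X' (not observed) items are skipped."""
    observed = [r for r in ratings if r != "X"]
    return 100.0 * sum(observed) / (4 * len(observed))

def overall_percent(category_ratings, category_weights):
    """Weighted overall rotation score from per-category summaries,
    with assessment and planning weighted more heavily (the weights
    passed in are illustrative, not the program's actual values)."""
    total = sum(category_weights[c] * summary_percent(rs)
                for c, rs in category_ratings.items())
    return total / sum(category_weights.values())
```

For instance, all 3s in a category yields a 75% summary score, and a heavily weighted category pulls the overall percentage toward its own score.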
The new tool has now been in use for three semesters, with approximately 75 students evaluated by four faculty members each semester. Response to the tool has been favorable. Students receive immediate feedback about performance and can identify areas where improvement is needed. Trends in students' performance can be identified easily. Faculty paperwork is reduced, and computer printouts summarize each student's evaluation. A bonus has been the increased familiarity and comfort with computers that both students and faculty have experienced. At present, faculty are in the process of establishing interrater reliability and content validity of the tool.
- Bower, F.L. (1974). Normative- or criterion-referenced evaluation? Nursing Outlook, 22, 499-502.
- DeMers, J.L. (1978). Observational assessment of performance. In M.K. Morgan & D.M. Irby (Eds.), Evaluating Clinical Competence in the Health Professions (pp. 89-115). St. Louis: C.V. Mosby Company.
- Gennaro, S., Thielen, P., Chapman, N., Martin, J., & Barnett, D.C. (1982). The birth, life, and times of a clinical evaluation tool. Nurse Educator, 7(1), 27-32.
- Gronlund, N.E. (1971). Measurement and Evaluation in Teaching (2nd ed.). New York: Macmillan.
- Krumme, U.S. (1975). The case for criterion-referenced measurement. Nursing Outlook, 23, 764-770.
- Krumme, U.S. (1977). Criterion-referenced measurement for student evaluation. In Evaluation of Students in Baccalaureate Nursing Programs (pp. 25-66). New York: National League for Nursing.
- National League for Nursing. (1977). Evaluation of Students in Baccalaureate Nursing Programs. (Pub. No. 15-1684). New York: Author.
- Sommerfeld, D.P. & Accola, K.M. (1978). Evaluating students' performance. Nursing Outlook, 26, 432-436.
- Wood, V. (1972). Evaluation of student nurse clinical performance: A problem that won't go away. International Nursing Review, 19, 336-343.
- Wood, V. (1982). Evaluation of student nurse clinical performance: A continuing problem. International Nursing Review, 29, 11-18.
- Woolley, A.S. (1977). The long and tortured history of clinical evaluation. Nursing Outlook, 25, 308-315.