Journal of Nursing Education


Evaluating Clinical Fitness

Susan Searle Jackson, RN, MSN; Jane Ellen Mead, Lt; Jean Burley Moore, RN, PhD

How does a nursing instructor objectively evaluate a student's clinical performance when the emphasis is not upon technical skills but rather upon utilization of the nursing process? All too frequently the evaluation is subjective with clinical scores based more on personality than on performance. This method results in inconsistent scores that do not reflect the clinical fitness of the student. Furthermore, the student may see no progress in his/her performance and be confused about what action to take to improve. The following example illustrates the pitfalls of such evaluation:

Mary, a junior student, was very talkative and asked many questions during preclinical and post-clinical conferences. Although Mary's behavior was consistent, each instructor rated her clinical performance differently.

Instructor A rated Mary high, praising her for assertiveness in asking pertinent questions.

Instructor B rated Mary low, seeing her as unprepared, lacking confidence, and unable to form independent judgments.

Instructor C rated Mary somewhat low, viewing her continuous questions as condescending and with the purpose of making Mary appear superior to her peers.

Unfortunately, there is no simple answer to clinical evaluation. A variety of approaches have been tried over the years, each with shortcomings. Wooley (1977) summarized the various evaluation tools as character appraisals, efficiency records, checklists, rating scales, critical incidents, simulated testing situations, and criterion-referenced tools.

In general, there have been two broad approaches to evaluation, that is, the use of norm-referenced versus mastery learning evaluation. Norm-referenced evaluation involves a comparison of one student to another. The goal is to determine a student's relative status in relation to the performance of a group of students in his or her norm group (Ebel, 1978). The resulting score reflects a ranking of the student within a group. The traditional achievement test is one example of a norm-referenced evaluation tool (Nunnally, 1978).

More recently, mastery learning evaluation using criterion-referenced tools has gained popularity. The goal in mastery learning is that all students eventually master the material presented. Criterion-referenced tools measure a student's behavior against pre-specified performance criteria and not against other students' performances. In this approach, the student's absolute standing, as opposed to relative standing, is determined. The score awarded the student reflects whether he or she has met the prescribed criteria (Popham, 1978). The criterion-referenced evaluation tool may take many forms, and numerical grades may or may not be assigned.

There are difficulties in developing criterion-referenced tools. Common questions asked include: Is it possible for the majority of the students to meet the objectives in the time period allowed? Does the student's failure to meet the criteria represent lack of mastery or lack of opportunity to perform in that particular environment? Are the pre-specified behaviors observable and measurable? Are the evaluation items a representative sample of the content?

Developing a Clinical Evaluation Tool

Prior to the fall of 1975, faculty of the baccalaureate program at The Catholic University of America utilized norm-referenced tools for clinical evaluation. A letter grade for clinical performance was then derived. Faculty teaching at the junior and senior levels of the BSN program utilized different tools with little or no sharing of clinical evaluation feedback between levels. The junior clinical tool had very broad cue statements, while the senior clinical tool had very specific cue statements. The use of different tools for different levels was frustrating and confusing to both students and faculty. Therefore, at a 1975 spring workshop the faculty voted to develop a criterion-referenced tool and to grade clinical performance on a satisfactory/unsatisfactory basis.

During the next two years, faculty developed a new curriculum framework, the CUA Systems-Adaptation Model, to be implemented at the sophomore level beginning Fall 1977. In conjunction with the new framework, curriculum, course, and clinical objectives were revised. The faculty recommended that a new criterion-referenced clinical evaluation tool be developed to reflect the new conceptual framework.

Therefore, in the fall of 1977, the authors formed a clinical evaluation tool committee. Its objectives were to create an evaluation tool which would:

1. include pre-specified behavioral criteria reflecting the new CUA Systems-Adaptation Model.

2. be used at all levels and in all clinical areas for junior and senior students.

3. assist students to identify their own strengths and limitations in clinical performance.

4. promote student self-evaluation as well as joint evaluation with faculty.

5. eventually be used in establishing interrater reliability between faculty members.

Once the objectives were approved by the baccalaureate faculty, the committee designed the evaluation tool. First, the members established six categories according to the terminal curriculum objectives: personal and professional growth, communication, therapeutic use of self, nursing process, group interaction, and health-care-system interaction. Next, items were written for each category. Finally, five levels of behavior were specified for each item, with the following qualifications (a simple representation of this structure is sketched after the list):

Level 0 was designed for the student who did not satisfactorily perform the item.

Level 1 stated the minimally acceptable behaviors of a beginning junior.

Level 2 specified average behaviors for junior students.

Level 3 behaviors were expected of seniors.

Level 4 behaviors reflected the terminal curriculum objectives (Figure 1).
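
The structure just described (six categories, multiple items per category, and five criterion levels per item) lends itself to a simple programmatic representation. The following Python sketch is purely illustrative; the category name, item wording, and level descriptions are paraphrased from the text above, not taken verbatim from the CUA instrument.

```python
# Illustrative sketch only: one possible in-memory representation of a
# criterion-referenced item with five behavioral levels (0-4).
# Names and wording are hypothetical, not the actual CUA tool content.

from dataclasses import dataclass
from typing import Dict


@dataclass
class CriterionItem:
    category: str             # e.g., "Communication"
    behavior: str             # the behavior being rated
    levels: Dict[int, str]    # criterion level (0-4) -> behavioral description


sample_item = CriterionItem(
    category="Communication",
    behavior="Communicates with health team members",
    levels={
        0: "Does not satisfactorily perform the item",
        1: "Minimally acceptable behavior of a beginning junior",
        2: "Average behavior expected of junior students",
        3: "Behavior expected of senior students",
        4: "Behavior reflecting the terminal curriculum objectives",
    },
)


def describe(item: CriterionItem, level: int) -> str:
    """Return the criterion description for a rated level (0-4)."""
    return f"{item.category}: {item.behavior} -> {item.levels[level]}"


print(describe(sample_item, 2))
```
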

Even though the committee members were enthusiastic initially, the tedium of creating level behaviors for each item taxed their imaginations. In designing the levels, some committee members questioned whether level 0 would ever occur. One committee member asked, "How can a student not communicate with health team members in a clinical setting?" Someone else agreed. There was a long pause. "Maybe we should rewrite that one," someone volunteered. For example, "Student communicates with chart throughout clinical day."

FIGURE 1. SAMPLE ITEMS WITH CRITERION LEVELS FROM CLINICAL EVALUATION TOOL

FIGURE 2. CRITERION LEVELS FOR A VALUE BEHAVIOR

Although there were problems with the level 0 items, they later became the easiest items to write, since they were the exact opposite of the desired behavior. Distinctions between levels 1 and 2 and those between levels 3 and 4 were more difficult to make. Many hours were spent discussing the merit of the terms "occasionally," "inadequately," "inconsistently," and "with guidance" as qualifiers. Eventually, the committee of four broke into groups of two to write criterion levels for an entire category of items. Working in pairs proved to be more efficient for accomplishing the task.

As the committee worked, one potential problem was always in mind. How likely was it that 22 faculty members with their different backgrounds, not to mention their own evaluation tools, would adopt the same tool, the one the committee was working hard to create?

Marketing the Clinical Evaluation Tool

Several strategies for gaining faculty acceptance of the tool were discussed. They included:

1. Solicitation of ideas from other faculty members ("What do you think should be included in the tool?")

2. Design of a preliminary tool to illustrate the ideas ("Here it is. Hope you like it.")

3. Preparation of defense tactics ("Can you do better?")

4. Mass resignation of the committee ("We quit. It's all yours.")

Fortunately, the committee never went beyond tactics 1 and 2. When the tool was presented to the faculty, none of the anticipated resistance was encountered. Instead, the faculty was relieved: after years of discussion, a tool had finally been created. After explanation, interpretation, and minor revisions, the faculty adopted the tool. Plans for implementation included using the tool in a pilot test for two consecutive years at both the junior and senior levels without revision. In addition, faculty were asked to keep anecdotal notes on their use of the tool and to record each student's mean scores.

Utilizing the Clinical Evaluation Tool

To document the student's functioning in a clinical setting, a performance record was designed to accompany the criterion-referenced clinical evaluation tool. The performance record included spaces for entering absences, tardiness, professional appearance, clinical papers, and skills practice. In addition, the performance record included a matrix for recording numerical values for each item on the criterion-referenced tool. The matrix visually demonstrated the student's areas of limitation, strength, and improvement, as well as his or her overall mean scores.
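
As an illustration of the kind of bookkeeping the performance record supported, the sketch below (with hypothetical item names and scores) records criterion-level values for each item across successive evaluations and computes the per-item and overall means that the matrix made visible.

```python
# Minimal sketch, assuming hypothetical items and ratings: rows are criterion
# items, columns are successive clinical evaluations, cell values are the
# criterion levels 0-4 awarded at each evaluation.

scores = {
    "Personal and professional growth": [1, 2, 2],
    "Communication":                    [2, 2, 3],
    "Nursing process":                  [1, 1, 2],
}


def item_means(matrix):
    """Mean level per item across evaluations (highlights strengths and limitations)."""
    return {item: sum(vals) / len(vals) for item, vals in matrix.items()}


def overall_mean(matrix):
    """Overall mean across every recorded score."""
    all_vals = [v for vals in matrix.values() for v in vals]
    return sum(all_vals) / len(all_vals)


for item, mean in item_means(scores).items():
    print(f"{item}: {mean:.2f}")
print(f"Overall mean: {overall_mean(scores):.2f}")
```
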

Student performance was reviewed at each student's clinical evaluation conference. For this conference to be a meaningful learning experience, it was important that both the student and the instructor independently evaluate student performance prior to the conference. At the conference, both participants contributed equally when comparing evaluation scores. When scores did not agree, specific examples of student behaviors were discussed. For documentation of extremely low or high scores, specific examples were stated.
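
A minimal sketch of that comparison step, assuming hypothetical items and ratings: the student's independent self-evaluation and the instructor's independent evaluation are laid side by side, and any item on which the two scores differ is flagged for discussion of specific behavioral examples at the conference.

```python
# Illustrative only: flag items where the student's self-rating and the
# instructor's rating disagree. Item names and scores are hypothetical.

student_scores    = {"Communication": 3, "Nursing process": 2, "Group interaction": 2}
instructor_scores = {"Communication": 2, "Nursing process": 2, "Group interaction": 1}


def items_to_discuss(student, instructor):
    """Return items whose ratings differ, with both scores, for joint review."""
    return {
        item: (student[item], instructor[item])
        for item in student
        if student[item] != instructor[item]
    }


for item, (s, i) in items_to_discuss(student_scores, instructor_scores).items():
    print(f"{item}: student rated {s}, instructor rated {i} -- discuss examples")
```
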

To monitor student progress, the official performance record was maintained and transferred from one clinical instructor to the next. A personal copy was maintained by each student.

Revising the Clinical Evaluation Tool

During the two-year pilot study, the faculty were asked to contribute their collected data from anecdotal notes and student comments to the clinical evaluation committee. After analyzing the collected data, the evaluation committee posed several questions for discussion at faculty area meetings (medical-surgical, maternal-child, and psychiatric-community). The discussion guidelines were as follows:

1. List those criterion items most pertinent to your clinical area.

2. List those items appropriate to the junior and senior levels, respectively.

3. List items which are impossible to evaluate or achieve.

4. List behaviors that are important for your area that are not identified in the present tool.

5. Discuss interpretation of individual items to develop reliability among faculty members (a simple agreement check of the kind sketched after this list is one way to gauge such reliability).
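
As a rough gauge of the interrater reliability mentioned in item 5, faculty members could compare their independent ratings of the same student on the same items. The sketch below uses hypothetical raters, items, and scores and computes simple percent agreement, one of the most basic reliability indices.

```python
# Illustrative sketch: simple percent agreement between two faculty raters
# who independently scored the same student on the same criterion items.
# Raters, items, and scores are hypothetical.

rater_a = {"Communication": 2, "Nursing process": 1, "Group interaction": 2, "Therapeutic use of self": 3}
rater_b = {"Communication": 2, "Nursing process": 2, "Group interaction": 2, "Therapeutic use of self": 3}


def percent_agreement(a, b):
    """Proportion of shared items on which the two raters assigned the same level."""
    shared = a.keys() & b.keys()
    agreements = sum(1 for item in shared if a[item] == b[item])
    return agreements / len(shared)


print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.0%}")
```
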

Upon receiving recommendations from each faculty area group, the committee systematically revised the clinical evaluation tool. Attention was directed toward developing items concerning values systems (Figure 2) since this topic was absent from the tool.

Planning for the Future

The committee still has not completed its work. Mandatory clinical criterion items, which a student must pass in each specific clinical area, need to be identified. Yearly re-evaluation of the tool and its application continues, as does tabulation of mean scores for each clinical course at the end of each semester.

References

  • Ebel (1978).
  • Nunnally (1978).
  • Popham, W.J. (1972). Criterion-referenced measurement: An introduction. Englewood Cliffs, NJ: Educational Technology Publications.
  • Sommerfeld, D.P., & Accola, K.M. (1978, July). Evaluating students' performance. Nursing Outlook, 26, 432-436.
  • Wooley, A.S. (1977, May). The long and tortured history of clinical evaluation. Nursing Outlook, 25, 308-315.


DOI: 10.3928/0148-4834-19841001-15
