Journal of Nursing Education

BRIEFS 

An Effective Clinical Evaluation Tool

Sylvia Novak, MSN, RN

Abstract

Accurate instructor evaluation of students in clinical settings requires use of an effective evaluation tool. In our psychiatric nursing clinical course, we were having difficulty documenting student behaviors and completion of assignments because we lacked a form which listed activities we wished to monitor.

Evaluated Activities

Clinical objectives for our students reflect both written and behavioral activities. To demonstrate meeting these objectives, we ask students to keep weekly logs in which they record their reactions to the agency, interactions with clients, and events that take place. Also, they must write assessments of clients, the agency and themselves, develop "mini" care plans for both individuals and groups of clients, and write reports on visits to outside meetings (such as Alcoholics Anonymous). In addition, two process recording papers are required. Behaviorally, we expect students to observe and participate in team treatment planning, to interact with clients, and to lead one or more group sessions.

Inadequate Evaluation System

Students' "logs" are turned in each week, read by the instructor, and returned to the students with written feedback for the next week. What had been missing was a tool with which the instructor could document specifics of student behaviors and monitor their progress in each of the expected assignments or behaviors. We had been using an anecdotal record system, jotting down notes that summarized our impressions based on the written logs and the behaviors we were able to observe, but it was difficult to look through our anecdotal records at the end of the clinical rotation and make objective judgments regarding performance. We often sensed subtle differences in students, but had not sufficiently recorded these differences in order to have documentation supporting our conclusions that one student, for example, would receive a "B," and another one earn a "B+."

Finding a better way to monitor and also evaluate each student's progress and learning was a challenge. What we needed was a better evaluation and documentation form.

Literature Search

Searching the literature for information on clinical evaluation tools, I found discussions on philosophical choices concerning objectivity vs. subjectivity, learning vs. performance, and normative vs. competency-based models (Battenfield, 1986; Cottrell et al., 1986; Lewis, 1976; Litwack, 1976; Woolley, 1977).

In a move toward objectivity, nurse educators 10 to 15 years ago were proposing that we avoid subjectivity in evaluation by using specific criteria. However, Edith Lewis wrote about the difficulty of measuring qualities such as compassion. "If a quality cannot be itemized ... we will have denied the difference between the nurse who has only the procedural skills and the one who is also personally and socially perceptive" (Lewis, 1976). Perhaps I did not have to feel guilty over the nagging thought that our judgments of students may be somewhat subjective!

Despite the debates of the 1960s and 1970s, faculty of the 1980s generally agreed, as reflected by the increasing use of behavioral objectives, that competency-based or criterion-referenced models were the best (Battenfield, 1986). Now, with increasing use of Standards of Practice, this concept is generally accepted. I found ample support for using levels of competency for determining student grades; grading on a normative curve was discouraged.

And what about the choice of measuring learning vs. performance? Comments by Woolley about grading student learning rather than performance were heartening to me, for they gave support for my long-held belief that the laboratory is more a place to learn than to perform. And so, "seeking learning experiences" had support as an appropriate evaluation category for our form. In addition, awarding a student a grade of "A" at the end of the clinical experience could reflect terminal behaviors - how much the student learned - rather than an averaging of performance grades.

The literature search also suggested that more was written about clinical evaluation in earlier years, with fewer articles appearing in recent times. Cottrell's article is one exception, and it added the dimension of using a computer (Cottrell et al., 1986). In another article, Hillegas and Valentine (1986) developed excellent descriptive criteria for letter grades in medical/surgical areas, but we needed something more specific to meet our needs. I enjoyed a comment by Woolley that may explain the smaller number of recent articles:

"After a few years of struggling with the problem (how to evaluate), one usually develops a philosophy one can live with and accepts the fact that a real solution is still eluding us. One can find support in the literature for almost any way of doing this task. Small wonder that educators so frequently work out their own solutions, and that their evaluations are often subjective . . . There is no valid or reliable method of grading students in the clinical area in baccalaureate education. Kolstoe is probably accurate in his conclusion that whatever grading system you choose, it is probably bad, and that the best course is 'to select whatever grading system is least in line with what your colleagues use. That way, you emerge as creative, and that is a characteristic highly prized by students and faculty alike" (Kolstoe, 1975, cited in Woolley, 1977).

What a relief to know I was free to re-invent the wheel! No longer hesitant, I began developing a tool that would meet our needs.

I sought examples supporting my preference for using numbers that reflect grades. Cottrell's tool assigned numbers for grading levels of performance, but it was developed for use with the computer (Cottrell et al., 1986). We needed something less complex and more specific to our needs.

Designing a New Form

Until recently, determining the grade had been based on awarding points for logs, papers, and attainment of objectives at the end of the clinical experience. This method lacked documentation of ongoing progress. I created a form that allows documentation, on a weekly basis, of the written work, self-growth, group activities, meetings attended, and clinical behaviors. By having specific assignments and behaviors on the form, we maintain a more detailed assessment of each student. Not only is documentation easier, but built into the evaluation tool is a system for grading each activity. The point system used automatically becomes specific data for determining the final grade. The new form eliminates the former numerical guesswork that occurred when we attempted to score final achievement of objectives.

Behaviors and assignments are listed vertically in the left-hand column. Columns for each week run from left to right, with a "total" column on the right side of the form, and horizontal lines form boxes for recording scores. Of course, every activity need not be scored every week; some may be given only one score for the eight-week period, such as when evaluating students' personal goals, while other categories may be assessed each week.

Regardless of the number of times an activity is assessed, a final number is written in the "total" column at the end of the rotation. This final number is not always an average of the eight weeks' scores for each activity, but may reflect the student's level of learning at the end of the rotation. For example, a beginning student may do poorly with "client assessment," but as time passes, the student becomes very proficient and may very well end up with the maximum point value as a final score. The choice of method depends on the instructor's emphasis on performance vs. learning.
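A minimal sketch can make the two options concrete. The activity and the weekly scores below are invented for illustration only and are not taken from the actual form.

```python
# Illustrative only: one activity row ("client assessment") scored 1-5 over an
# eight-week rotation. The final "total" can be derived two ways, depending on
# whether the instructor emphasizes performance or learning.

weekly_scores = [2, 3, 3, 4, 4, 5, 5, 5]      # invented weekly scores

performance_total = sum(weekly_scores) / len(weekly_scores)   # average of all weeks
learning_total = weekly_scores[-1]                            # level reached by week 8

print(f"Performance emphasis (average): {performance_total:.1f}")   # 3.9
print(f"Learning emphasis (final week): {learning_total}")          # 5
```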

Consistency Between Weekly Scores and Final Grades

An additional benefit of this evaluation form is the ease with which the grade is determined. A key in the upper right-hand corner helps the instructor identify the weekly scores to award the student. (We also have an Instructor Guide, which explicitly describes criteria for these scores. See Footnote, end of article). The point range we use, from 1 to 5, is not simply a randomly picked range of numbers, but is composed of numbers which in themselves reflect a percentage grade for each item being scored. That is, the form is designed to include a total of 15 "units," each one worth five possible points, plus another five units, or 25 points, for papers. Thus, 10 items in the log earn a possible 50 points, five behavioral activities earn 25 points, and the two papers, another 25 points, adding up to a total of 100 points. Dividing 100 points by 20 units gives us the maximum point value of 5 for scoring each category.

For example, a score of 3.5 (x 20) equals 70%, or a "C-," and reflects minimal performance. A score of 4 equates with 80%, which is a "B." The 4.5 represents 90%, which is an "A-," and the 5 of course represents 100%, or an "A." Thus, if a student earns five points in each of the 15 categories, and 25 points for papers, then 100 points, or 100%, has been earned. Obviously, lesser points immediately reflect lower percentages, with no need for further mathematical calculation to determine the student's grade. For instance, a total of 84 points, or 84%, automatically converts to a "B."
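The arithmetic above can be summarized in a brief sketch. The category scores and paper points below are invented, and the letter-grade cutoffs are assumptions drawn only from the examples given in the text.

```python
# Sketch of the grading arithmetic: 15 units scored 1-5 (up to 75 points) plus up
# to 25 points for papers gives a maximum of 100, so the point total is itself the
# percentage earned.

unit_scores = [5, 4.5, 4, 5, 4, 4.5, 5, 4, 4.5, 5, 4, 5, 4.5, 4, 5]  # invented scores
paper_points = 21                                                     # out of 25

total = sum(unit_scores) + paper_points   # 89 points = 89%

def letter(pct):
    # Assumed cutoffs, consistent with the article's examples (70% = C-, 80% = B,
    # 90% = A-, 100% = A).
    if pct >= 100: return "A"
    if pct >= 90: return "A-"
    if pct >= 80: return "B"
    if pct >= 70: return "C-"
    return "below C-"

# A single category score also converts directly: score x 20 = percentage.
print({s: s * 20 for s in (3.5, 4, 4.5, 5)})    # {3.5: 70.0, 4: 80, 4.5: 90.0, 5: 100}
print(f"{total:.0f} points = {total:.0f}% = {letter(total)}")   # 89 points = 89% = B
```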

Flexibility of the Form

To adapt this form to your clinical course, you can change the number of weekly columns and/or the activity items themselves to suit different needs, but you need to maintain the same number of activities so that the points add up to a potential 100%. On our form, we have 15 activities for 75 possible points, plus 25 points for written papers. If needed, eliminating points for papers could free up five more spaces for categories. Of course, the form can be used in clinical areas other than psychiatric nursing; there are many possibilities.
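One way to keep that constraint explicit when modifying the form is a quick point-budget check. The function below is a hypothetical sketch, not part of the published materials.

```python
# Hypothetical check that an adapted form still adds up to a potential 100 points,
# so that point totals remain equal to percentages.

def budget_ok(n_categories, points_per_category=5, paper_points=0):
    return n_categories * points_per_category + paper_points == 100

print(budget_ok(15, paper_points=25))   # True  - the layout described above
print(budget_ok(20))                    # True  - papers dropped, five extra categories
print(budget_ok(18, paper_points=25))   # False - 115 possible points; percentages skew
```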

Results

The first semester we implemented this new system, we found that we occasionally wished to write a comment about a student. We met this need by simply jotting the comment on the back of the form, a practice that allowed us to preserve the beneficial aspects of anecdotal records. Our evaluations of students have become more accurate because we have weekly records of levels of achievement. Students respond positively to seeing that we are "keeping track" of their achievements, and they know we are available to discuss their progress with them. We have also developed finer discrimination because the form provides more objective data documenting the subtle differences among students.

In summary, an inadequate clinical evaluation system, plus uncertainty over which was the best evaluation method, prompted me to survey the literature for clinical evaluation tools.

A new form was designed for the documentation of students' behaviors and completed assignments. The form accurately monitors their progress and automatically determines each student's grade at the end of the clinical experience. Both faculty and students are finding the new system satisfying and effective.

References

  • Battenfield, B.L. (1986). Designing clinical evaluation tools: The state of the art. National League for Nursing.
  • Cottrell, B.H., Cox, B.H., Kelsey, S.J., Ritchie, P.J., Humph, E.A., & Shannahan, M.K. (1986). A clinical evaluation tool for nursing students based on the nursing process. Journal of Nursing Education, 25, 270-274.
  • Hillegas, K.B., & Valentine, S. (1986). Development and evaluation of a summative clinical grading tool. Journal of Nursing Education, 25, 218-220.
  • Lewis, E.P. (1976). Quantifying the unquantifiable (editorial). Nursing Outlook, 24, 147.
  • Litwack, L. (1976). A system for evaluation. Nursing Outlook, 24, 45-48.
  • Woolley, A.S. (1977). The long and tortured history of clinical evaluation. Nursing Outlook, 25, 308-315.
  • Copies of the evaluation form and the Instructor's Guidelines may be obtained by writing to the author, enclosing $1.00 for postage.

10.3928/0148-4834-19880201-09
