Journal of Nursing Education

Evaluating Clinical Performance Using Simulated Patients

B Joan McDowell, MPH, ANP, RN, C; Donna L Nardini, MSW, MPH; Shirley A Negley, MNEd, ANP, RN, C; Joyce E White, DrPH, FNP, RN, C

As faculty of a primary health care nursing program, we are engaged in the evaluation of student nurse practitioners' use of the nursing process in health care delivery. Over several years, we became dissatisfied with our approach to clinical evaluation and have developed what we consider to be a superior system using simulated patients.

Before instituting the simulated patient approach, we administered clinical exams by observing students in ambulatory clinical settings, using actual patients scheduled for the clinics. Detailed rating scales based on the nursing process were used to enhance objective measurement. Nurse practitioner faculty, assisted by the agencies' clinical staff, selected appropriate patients for the examinations. Despite the effort and cooperation of all concerned, we found evaluation encounters with actual patients not sufficiently reproducible or predictable to be reliable. Some patients did not keep their appointments, and substitute, less suitable patients had to be recruited. At times patients were either too "difficult" or too "easy" for the level of knowledge and skill being evaluated. At other times patients were too sick or unable to communicate freely with students. The anxiety exhibited by many students was overwhelming and made it difficult to view each student's performance objectively. Faculty found themselves interjecting questions, adding to the physical examination, and arranging for follow-up care as their concern that patients receive needed care took priority over the primary objective of student evaluation.

Also, actual patient encounters were not as useful as we would have liked in pinpointing specific areas of strength and weakness, limiting the ability of students to learn from them. In some cases, faculty members found themselves unable to separate student performance from other variables in the situation and were not always comfortable with the accuracy of their rating of the students.

The Simulated Patient in Teaching and Evaluation

In spite of our dissatisfaction with our clinical evaluation efforts, we believed that some method of judging clinical performance was necessary; oral and written examinations do not reveal how students elicit data from patients, their skill in observation and interpretation, or the adequacy of their interviewing and physical exam techniques. The availability of a simulated patient program within our university community and our use of this approach in the teaching of interviewing led to our first attempt to use it for clinical evaluation. Two years' experience with this method has only served to reinforce our belief that it provides for a criterion-referenced system which decreases the problem of subjectivity so often noted in the evaluation of clinical skills (Frisbie, 1979; Dewyer & Schmitt, 1969).

We have found that the use of simulated patients in performance evaluation increases objectivity, allows for the comparison of each student's performance to identical criterion measures, and provides for feedback which students can use to strengthen their clinical skills.

Patient simulators (healthy individuals trained to enact the role of the patient by presenting the desired historical information, mimicking physical findings, and participating in the management plan) are readily available at any time or place, and their symptoms and physical signs remain stable over time. This allows for repetition of the same patient data to evaluate a number of different students or the same student over time. Further, the level of difficulty and the amount of information presented by the simulated patient can be correlated with the skill level of the student.

Patient simulators have been used to teach and evaluate interviewing and interpersonal skills (Werner & Schneider, 1974), assessment of physical signs (Barrows & Abrahamson, 1964; Stillman, Ruggill, & Sabers, 1978), and patient education techniques (Callaway, Bosshart, & O'Donnell, 1977). Simulated patients have also been trained to act as instructors to teach pelvic examination skills (Livingston & Ostrow, 1978). Thus simulated patients have been used extensively to evaluate students' skill in performing component parts of the problem-solving process as it is applied to the delivery of health care. However, a review of the literature fails to reveal their use in evaluating skill in implementing the entire process. It is the evaluation of students' ability to use this process in delivering comprehensive health care to individuals that we address in this article.

The Simulated Patient Examination

Our first simulated patient clinical examination was used during the term of study that focused on the care of patients presenting with episodic illnesses; the case was designed to evaluate students' skill in caring for a previously healthy patient who presented with mononucleosis. From the problems suggested by the faculty, we chose one that students were likely to encounter in their actual patient experiences.

The student's ability to obtain the appropriate history and physical examination, to use laboratory data to reach a correct diagnosis, and to institute a plan of care emphasizing patient education and self-care strategies was evaluated.

While the simulated patient supplied the historical data, physical examination findings were given to students on index cards prepared by the faculty. For example, the student who examined the oropharynx received a card stating, "Examination of the oropharynx reveals moderate inflammation and exudative tonsillitis." Similarly, diagnostic tests were not actually performed; cards were used to provide test results whenever such results would be immediately available in an actual patient encounter. Thus, students received immediate feedback via the index cards as each portion of the examination was performed and each diagnostic test was requested, and they could use this feedback to aid in hypothesis generation and testing. Physical findings and results of diagnostic tests were not made available for any part of the examination or any test the student chose not to include in the assessment.
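
In essence, the index-card mechanism is a lookup keyed to the maneuvers the student performs and the tests the student orders: findings are revealed only for components actually included in the assessment. The sketch below (Python) is a minimal illustration of that idea; apart from the oropharynx card quoted above and the occipital lymph node finding mentioned later, the card texts and component names are hypothetical.

```python
# Illustrative sketch of the index-card mechanism: a finding is revealed only for
# maneuvers the student performs or tests the student orders. Except for the
# oropharynx card quoted in the article, the entries below are hypothetical.

FINDING_CARDS = {
    "oropharynx examination": (
        "Examination of the oropharynx reveals moderate inflammation "
        "and exudative tonsillitis."
    ),
    "occipital lymph node palpation": (
        "Patient reports discomfort on palpation of the occipital lymph nodes."
    ),
    "heterophile antibody test": "Heterophile antibody test: positive.",
}

def cards_for(performed: set) -> dict:
    """Return only the cards for components the student actually performed or
    ordered; findings for anything omitted from the assessment stay hidden."""
    return {c: FINDING_CARDS[c] for c in performed if c in FINDING_CARDS}

if __name__ == "__main__":
    performed = {"oropharynx examination", "heterophile antibody test"}
    for component, card in cards_for(performed).items():
        print(f"{component}: {card}")
```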

A rating scale was used to document the student's activities in each area as the faculty member observed her interacting with the simulated patient. The rating scale, outlined in the Figure, consisted of five major parts: subjective data, objective data, assessment, plan of care, and the written SOAP note. Data were weighted according to their importance to the diagnosis and management of the illness. Weighting also took into account whether the simulated patient had been instructed to volunteer the data or to wait to be asked for it, and whether the student volunteered correct information about the management plan or offered it only in response to the simulated patient's question.
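
As a rough illustration of how such weighted checklist scoring might be tallied, the sketch below (Python) assigns each item a weight and a flag for whether the simulated patient volunteers the datum. The items, weights, and the half-credit rule are assumptions for illustration only, not the scale actually used.

```python
# A minimal sketch of weighted checklist scoring. The items, weights, and the
# half-credit rule for volunteered data are illustrative assumptions, not the
# rating scale used in the study.
from typing import NamedTuple

class Item(NamedTuple):
    description: str
    weight: float      # heavier = more important to diagnosis and management
    volunteered: bool  # True if the simulated patient was instructed to volunteer it

SUBJECTIVE_ITEMS = [
    Item("characterizes the sore throat (onset, duration, severity)", 3.0, True),
    Item("asks about fever, fatigue, and malaise", 2.0, False),
    Item("asks about exposure to similarly ill contacts", 1.0, False),
]

def subjective_score(covered: set) -> float:
    """Sum weights for covered items; here we assume (arbitrarily) that data the
    patient volunteers earns half the credit of data the student must elicit."""
    total = 0.0
    for item in SUBJECTIVE_ITEMS:
        if item.description in covered:
            total += item.weight * (0.5 if item.volunteered else 1.0)
    return total

if __name__ == "__main__":
    covered = {
        "characterizes the sore throat (onset, duration, severity)",
        "asks about fever, fatigue, and malaise",
    }
    print(subjective_score(covered))  # 3.0 * 0.5 + 2.0 = 3.5
```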

The Faculty's Role

While only two faculty members actually administered the clinical exams, which took 45 minutes per student, the entire faculty was involved in the activities that led to the development and administration of the exams. Faculty identified the relevant historical information (including which data the simulated patient would volunteer and which should be provided only upon questioning); the pertinent physical examination (which portions should be performed and what constituted an acceptable examination); and an appropriate management plan for the patient, including diagnostic, therapeutic, and patient education components. Faculty agreed that simulated patients would be instructed to ask about any parts of the management plan the student did not voluntarily include. These discussions provided an open forum for considering faculty expectations of students as well as the faculty's own approaches to patient care.

Training of the simulated patients was carried out by the coordinator of the patient simulation program. The patient simulators were instructed regarding symptom description, the historical data to volunteer, and the data to provide only when asked. Expectations with respect to the physical examination were reviewed, including how to mimic certain physical findings, e.g., discomfort on palpation of the occipital lymph nodes. They were also instructed on the management plan, including asking questions about portions of the plan not mentioned by students.

After the simulated patients were trained, the rating scale was validated by having two practicing adult nurse practitioners from the community interview them. Two faculty members simultaneously observed and rated the patient/adult nurse practitioner interactions; inter-rater reliability was found to be high.
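
The article does not specify which reliability statistic was used. As one plausible approach, the sketch below (Python) computes simple item-by-item percent agreement between the two observers; the scores shown are hypothetical.

```python
# Hypothetical illustration of inter-rater agreement; the article does not name
# the statistic, so item-by-item percent agreement is assumed here.
from typing import Sequence

def percent_agreement(rater_a: Sequence[int], rater_b: Sequence[int]) -> float:
    """Fraction of rating-scale items on which the two observers gave the same score."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("Raters must score the same, non-empty set of items.")
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

if __name__ == "__main__":
    # Hypothetical scores for ten items from one simulated-patient encounter.
    faculty_1 = [2, 1, 0, 2, 2, 1, 2, 0, 1, 2]
    faculty_2 = [2, 1, 0, 2, 1, 1, 2, 0, 1, 2]
    print(f"Agreement: {percent_agreement(faculty_1, faculty_2):.0%}")  # 90%
```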

Two weeks prior to the examination, students were given a guide to the clinical examination with specific information about the examination procedure. The patient encounter was timed, with a maximum of 45 minutes allowed. It was suggested that this time be used as follows: 20 minutes for data gathering (history and physical), 10 minutes for discussing management with the patient, and the remaining 15 minutes for formulating the plan of care. Students had access to common reference books during the examination, and rooms were set up to resemble actual exam rooms.

Student/Faculty Perceptions

While the use of the simulated patient for clinical examinations is not a panacea, both faculty and students rated it highly. No difference in the rating of the procedure was found between students who failed and those who passed the examination. All students perceived the experience to be similar to an actual patient encounter.

Barrows and Tamblyn (1977) found that student performance in simulation can be assumed to closely parallel their performance in a real-life situation. We found that students' performance on clinical examinations very closely correlated with class standing and ratings by clinical preceptors.

The faculty's enthusiasm was evident in their decision to prepare simulated patient clinical exams for all four terms of the program, and all students have been evaluated with this system for the last two years. Advantages include the increased objectivity of the approach and the ability to focus solely on evaluating student performance without attending to the care needs of actual patients. Most importantly, the rating scale permitted ready identification of students' strengths and weaknesses in implementing the nursing process, facilitating more directive student education.

In spite of the general enthusiasm for this approach, there are factors which make it less than perfect. While there is less expenditure of faculty time during the actual administration of the exam, preparation time is extensive. Approximately 50 hours of faculty time were used to prepare the first examination. The development of subsequent examinations has taken less time, but remains time consuming. Employing individuals as simulated patients is an additional expense, especially since ours were paid for the time spent in training as well as in actual evaluations. Another disadvantage is the limited range of physical signs which can be replicated.

FIGURE. Outline of Rating Scale.

Summary

In summary, we believe that nurse practitioners use the nursing process in giving primary health care to patients and that their skill in using that process must be evaluated periodically as they move through their educational program. Further, we believe that consumers have the right to expect a beginning level of competence from graduates of nursing programs. Based on our experience, we believe that the simulated patient, now used primarily in teaching, should also be used in clinical performance evaluation, where it can be of valuable assistance.

References

  • Barrows, H., & Abrahamson, S. (1964). The programmed patient: A technique for appraising student performance in clinical neurology. Journal of Medical Education, 39, 802-805.
  • Barrows, H.S., & Tamblyn, R. (1977). The simulated patient. Hamilton, Ontario, Canada: Programme for Educational Development, Faculty of Health Sciences, McMaster University.
  • Callaway, S., Bosshart, D., & O'Donnell, A. (1977). Patient simulators in teaching patient education skills to family practice residents. Journal of Family Practice, 4, 709-712.
  • Dewyer, J.M., & Schmitt, J.A. (1969). Using the computer to evaluate clinical performance. Nursing Forum, 8(3), 255-275.
  • Frisbie, D. (1979). Evaluating student achievement: Principle trends and problems (NLN Pub. No. 23-1766). New York: National League for Nursing.
  • Livingston, R.A., & Ostrow, D.N. (1978). Professional patient-instructors in the teaching of the pelvic examination. American Journal of Obstetrics and Gynecology, 132, 64-67.
  • Stillman, P., Ruggill, J., & Sabers, D. (1978). The use of practical instructors to evaluate a complete physical examination. Evaluation and the Health Professions, 1, 78.
  • Werner, A., & Schneider, J. (1974). Teaching medical students interactional skills. New England Journal of Medicine, 290, 1232-1237.

10.3928/0148-4834-19840101-11
