Dr. McWilliam is Assistant Professor of Nursing, University of New Hampshire, Durham, New Hampshire; and Dr. Botwinski is Assistant Professor of Nursing, University of Tampa, Tampa, Florida.
The authors would like to thank Judith Ann Sullivan, EdD, RN, and Thomas J. Thompson, PhD, MPH.
Address correspondence to Paula McWilliam, EdD, ARNP, NNP, Assistant Professor of Nursing, University of New Hampshire, Durham, NH 03824; e-mail: email@example.com.
One of the major challenges in nursing education is evaluating clinical competencies, because clinical competence reflects students’ ability to select the necessary elements from their own repertoire of resources and apply them to unique situations (Bergevin, 1999). Clinical assessment in nursing has advanced from the pre-1971, classroom-based exhibition of practical skill, which was highly criticized due to concerns about transferability to real clinical situations, to the use of behavioral checklists in the 1970s, to continuous clinical assessment in the 1980s (Redfern, Norman, Calman, Watson, & Murrells, 2002).
The evaluation of nursing students’ clinical competencies in a wide array of situations is essential to the educational process because students are exposed to various patient health issues in the clinical area. However, students’ clinical experiences are not equivalent because each student will have exposure to different patients, making it difficult to measure individual and program outcomes (Rentschler, Eaton, Cappiello, McNally, & McWilliam, 2007).
The Objective Structured Clinical Examination (OSCE) is a performance-based examination where students are observed demonstrating various clinical behaviors. The main objective of the OSCE method of evaluating clinical performance is to assess students’ transfer of classroom and laboratory learning experiences into simulated clinical practice. Case scenarios are developed by faculty to reflect the aspects of the curriculum to be tested.
The OSCE method is a form of clinical performance evaluation that does not rely on manikin-based simulation. Like high-fidelity manikin-based simulation, however, it allows students to experience a realistic situation without real-world risks. The OSCE method includes a series of workstations that simulate different health care scenarios. The workstations’ activities may include use of standardized patients, videotaping stations, and student role-playing. Students rotate through each station, and their clinical performance can be assessed using tools such as checklists or rating scales. These tools reflect the desired behaviors the student is expected to demonstrate in each scenario. Success in the examination is based on the number of desired behaviors the student was observed completing correctly (Bartfay, Rombough, Howse, & Leblanc, 2004; Bradley & Humphris, 1999; Evans, Morales, & Robb, 1995; Mavis, 2000). Based on student performance at the individual OSCE stations, the faculty may then adjust the curriculum to strengthen aspects in which students showed deficits during testing.
The OSCE was developed by Harden to evaluate medical students’ clinical competencies; it first appeared in a report in the British Medical Journal in 1975 (Redfern et al., 2002). Since then, the OSCE has been adapted and applied to other health care disciplines and is a nationally recognized model for evaluating clinical competencies in medicine and nursing (Alinier, 2003). In 2004, an OSCE program was implemented to evaluate senior undergraduate nursing students’ clinical performance at a large, public, research-oriented university in New England.
This nursing program offers basic and registered nurse completion undergraduate programs and graduate nursing programs that are nationally accredited by the National League for Nursing. The university’s baccalaureate program focuses on the health needs of patients at all age levels in a variety of health care settings. The nursing courses emphasize caring, critical thinking, problem solving, decision making, and clinical skills. Clinical experience is provided in area hospitals and community health care agencies. Second-semester seniors conclude with a clinical immersion experience (practicum) in which they apply classroom theory to a clinical setting of their choice. Before beginning the senior practicum rotation, students are required to participate in the nursing OSCE developed for them, which provides a formative evaluation of their clinical competencies.
The purpose of the study was to examine key aspects of the nursing OSCE used to develop a performance-based evaluation of the nursing students’ selected clinical competencies at this university. This evaluation would be beneficial to the nursing faculty to determine where modifications or changes should be made to the existing curriculum.
Nurses are increasingly expected to exercise autonomy in clinical practice, placing greater accountability on them for the quality of their professional activities. At the same time, nursing students are experiencing a decrease in institutional clinical learning opportunities due to the downsizing of health care institutions, the shift to community-based patient care, and competition with growing educational programs. These events are challenging nursing programs to explore alternative approaches to clinical training (Bartfay et al., 2004).
Developing OSCE Case Scenarios
When developing an OSCE case scenario for evaluating nursing clinical competencies, it is important that each case scenario incorporate a specific range of nursing knowledge and skill. This allows students to demonstrate assessing, planning, implementing, or evaluating care in response to a single patient encounter (Eva, Rosenfeld, Reiter, & Norman, 2004; Martin, Stark, & Jolly, 2000; O’Neill & McCall, 1996).
The OSCE case scenario should include a detailed patient profile that contains the patient’s chief complaint, history of present illness, pertinent past medical history, family and social history, and patient affect and behavior. Areas of focus in the history of the present illness include the onset, duration, and progression of symptoms; the frequency, location, description, radiation, quality, quantity, and intensity of symptoms; alleviating or aggravating factors and associated symptoms; current medications; and precipitating events of the illness (Evans et al., 1995; Vessey & Huss, 2002).
The Role and Training of Standardized Patients
A standardized patient is an individual hired to portray a patient in a health care scenario with an assigned or actual health condition. A standardized patient is selected for a particular case scenario (e.g., matched by characteristics such as age, gender, previous surgeries, past medical history, educational background, or bilingual abilities). The standardized patient is also trained to depict details of what a patient would say and do during an encounter with a health care professional (Adamo, 2003; MacDonald, 2004; Martin, Reznick, Rothman, Tamblyn, & Regehr, 1996; Zraick, Allen, & Johnson, 2003).
In addition, the standardized patient is trained to provide objective feedback via checklists and rating scales, as well as descriptive evaluations of students’ behavior. This provides a basis for constructive verbal or written postencounter feedback to students by the standardized patient. Observation of a student’s performance by the standardized patient during the OSCE provides important feedback indicating whether performance-based outcome behaviors reflect educational objectives (Adamo, 2003; Ebbert & Connors, 2004; MacDonald, 2004; Martin et al., 1996; Zraick et al., 2003).
There is no consensus in the literature regarding how much training a standardized patient should have before participating in an OSCE program; however, models and standards for standardized patient selection and training do exist in practice. MacDonald (2004), a standardized patient training expert, recommends that standardized patients receive a minimum of 10 hours of training per case. Recruiting and training standardized patients are the cornerstones of successful OSCE programs. Each institution determines how much standardized patient training is appropriate; training varies according to the experience of the standardized patient and the complexity of the scenario. One way to determine whether standardized patients have received enough training is to observe them by videotape to see whether they maintain a desired level of accuracy in performance and reporting across multiple, consecutive encounters with students (Adamo, 2003).
To assess the effectiveness of building a nursing OSCE, questions in three areas of OSCE development are presented here:
- How should case scenarios be developed?
- What constitutes the role, training, and evaluation of the standardized patient?
- What are students’ perceptions of the OSCE nursing experience?
The evaluative method for investigating the nursing OSCE program was an assessment design (Isaac & Michael, 1997). An assessment design differs from traditional testing designs in that it increases validity by directly testing performance rather than using indirect methods such as classroom testing. According to Linn, Baker, and Dunbar (1991):
This method has been increasingly used with the need for complex, performance-based assessments. . . in high stakes contexts or to make wise decisions about the curriculum.
The setting used for this assessment was a same-day surgery unit at a local community hospital. It was accessible to the university on weekends and included space and facilities for the standardized patient and students.
There were three groups of participants in this study: students, faculty, and standardized patients. The students were 60 White, senior-level undergraduate nursing students from the graduating class of 2005 (56 women and 4 men) between the ages of 21 and 23. The faculty members included two university faculty members and a clinical nursing consultant who had experience with the OSCE in nursing education. They were interviewed to determine how the written case scenarios used for the nursing OSCE workstations were developed, evaluated, and used at this university to measure students’ clinical competencies. This group is hereafter referred to as “faculty.” Three standardized patients were interviewed to determine their impressions of the specific training they received in preparation for their roles as standardized patients and evaluators. The standardized patients who participated in the study were a nursing faculty member, a nursing graduate student, and a student from a non-nursing major.
Data Collection and Instruments
To elicit information from the faculty on the process of developing and implementing the nursing OSCE case scenarios, the researcher developed a set of nine questions that each faculty member was asked (Table 1).
Table 1: Interview Guide for Faculty
To obtain information from standardized patients, the researcher developed a set of nine questions that were asked of each standardized patient (Table 2). The questions addressed the training the standardized patients received in preparation for their role in the nursing OSCE and training they received for providing feedback to students.
Table 2: Interview Guide for Standardized Patients
At the end of the nursing OSCE, students completed a postencounter Likert-style questionnaire. This form (Table 3) has been used since the nursing OSCE was first implemented at this university. To evaluate the students’ perceptions of the nursing OSCE experience, the researcher reviewed responses from this form.
Table 3: Postencounter Evaluation Questionnaire
For the purpose of this article, findings from the three research questions explored are addressed here:
- The development of the nursing OSCE case scenarios and updates
- Standardized patient role, training, and evaluation
- Students’ perceptions of the nursing OSCE experience.
Developing Nursing OSCE Case Scenarios and Updates
Interviews with program faculty revealed that case scenarios must be developed by faculty with expertise in the content area to test topics relevant to the competencies being assessed. If content from more than one specialty is to be tested, working with content experts from all nursing disciplines is important to ensure that current content and clinical procedures are followed during case development. Our faculty expressed the opinion that, when writing nursing OSCE case scenarios, drawing from one’s personal experiences is valuable.
The nursing OSCE challenges students to apply clinical reasoning skills to both direct information and cases with a hidden agenda—for example, a scenario could be constructed where a patient complains of abdominal pain but also exhibits bruising on several areas of her back, suggesting abuse.
Faculty also recommended developing multiple OSCE scenarios for each course topic to offer several ways to test the knowledge of nursing content. Our faculty developed 10 case scenarios, and each student was randomly assigned 5. The more OSCE stations a student performs, the less likely there will be error due to content specificity and the more likely assessors will be measuring the student’s “true” competence in this area. Ideally, these case scenarios would be tested for interrater reliability and content validity (Eva et al., 2004).
To ensure content validity, the course faculty who facilitate the OSCE program at the institution are responsible for updating OSCE scenarios. Our faculty found the need for annual updates, which included reviewing case scenarios used in prior years to retain those with promise for continued OSCE use. The OSCE case review determines the range of situations available for case performance, whether the cases depict current nursing standards of care, and whether gaps exist in the nursing curriculum. This review provides an opportunity for faculty to strengthen course content in these areas, closing a critical feedback loop.
Faculty must also review the associated checklists, questionnaires and surveys, and all other methods of faculty observation in the OSCE process to assure each step in the process reflects the intended content to be tested.
The program faculty use three tools to assess students’ clinical competencies during the implementation of the OSCE:

- The Nursing Interview Interaction Scale (NIIS), a modified version of the Arizona Clinical Interviewing Rating Scale (ACIRS), which is a global rating scale for assessing clinical interviewing skills with patients.
- The Specific Case Content Checklist (SCCC), which assesses students’ ability to perform specific clinical skills.
- The postencounter paperwork, which examines students’ knowledge of the underlying pathophysiology present in the case scenarios.

Neither the SCCC nor the postencounter paperwork used during the 2005 nursing OSCE had been psychometrically tested. Comments on these tools are reported in a separate article (McWilliam & Botwinski, 2009). The university’s institutional review board approved this study.
Standardized Patient Roles, Training, and Evaluation
The role of the standardized patient is to portray the patient, complete the assessments of the students (NIIS and SCCC), and provide feedback to students about their performance. Interviews with standardized patients in our program revealed that minimal training was given to standardized patients who were nurses; instead, they were expected to rely on their own or other nurses’ experiences to guide them in understanding and delivering the case scenarios. The standardized patients’ responses revealed that their portrayals of the case scenarios were uneven due to differences in their knowledge and prior experiences. We concluded that the use of professional nurses as standardized patients could weaken the OSCE format because nurses tend to provide more information than a case scenario calls for, thus guiding students to the correct answer as opposed to providing a standard performance, which is an objective of the nursing OSCE. The literature suggests that the preferred individuals to hire as standardized patients are nonmedically trained actors because they can deliver case scenarios as written and provide feedback without compromising the reliability and coaching outcomes of the nursing OSCE process (Adamo, 2003; Evans et al., 1995; Heine, Garman, Wallace, Bartos, & Richards, 2003; MacDonald, 2004).
Training the standardized patients to portray a patient role requires establishing a specific plan and time frame, depending on the level of complexity of the OSCE case. Our standardized patients called for practice sessions with a facilitator who could critique the standardized patient portraying the patient role. To assure that the standardized patient is competent in assessing students’ clinical competencies and providing feedback to students, random faculty observation and evaluation of standardized patients during the nursing OSCE was useful. When feasible, videotaping the training is useful to provide direct feedback to the standardized patient, facilitator, and faculty. If videotaping is included, the services of an educational media technologist should be sought to ensure the recording’s quality. With each new OSCE session, standardized patients should undergo repeat training, especially when new scenarios are introduced.
In terms of giving feedback to students, there was consensus among the three standardized patients that giving positive feedback to students about their performance was clearly understood; however, giving constructive criticism to students was more difficult. MacDonald (2004) offered many suggestions for giving students feedback. For example, constructive criticism should be sandwiched between two strengths. A standardized patient’s feedback should be delivered to the student in the third person, meaning that it is important for the standardized patient to break out of the character he or she is portraying in the OSCE scenario and use statements such as “The patient felt embarrassed when you began your examination without saying hello” (MacDonald, 2004). More direction on this topic should be included in the training.
All standardized patient participants stated that participating in the OSCE for an 8-hour day caused fatigue and confusion, especially in scoring the NIIS and SCCC, handling interactions with students, and giving feedback. Shorter sessions involving standardized patients are recommended, along with breaks between sessions. Eva et al. (2004) suggested three OSCE sessions per day, with 40 minutes between sessions.
Students’ Perception of the Nursing OSCE Experience
Questions from the Postencounter Evaluation Questionnaire (PEQ) were reviewed to answer whether students thought the nursing OSCE experience was valuable and educational, whether it was realistic, and whether the feedback given by the standardized patient was constructive. Because of an inadvertent omission, PEQ data are available, and are therefore reported, for only 30 of the 60 students.
The results from these questions were tabulated, with mean scores ranging from 4.4 to 4.9 on a scale of 5 (representing 88% to 98% affirmative responses). These results indicated that the students appreciated the authenticity and value of the OSCE experience to their education.
Discussion and Recommendations
Testing the transfer of classroom learning into clinical performance is essential for program faculty to assess students’ clinical competencies, as well as to identify gaps and strengths in curriculum. Specific protocols for developing case scenarios and standardized patient training are needed. When possible, standardized patients should not be health care professionals, but should be trained for their specific roles and evaluated to assure consistency and accuracy.
When an OSCE is used as a summative measure to determine progression in a nursing major, the utmost attention must be given to the psychometric properties of the evaluation. In such cases, unlike with an OSCE used for assessment purposes only, reliability and validity concerns are foremost because of the definitive role the OSCE plays in determining whether a student may progress to the next level of nursing education. Few, if any, baccalaureate nursing programs have adopted this plan. For any program considering it, a formal remediation plan, specific to the identified areas of weakness, must be in place so students can prepare for allowable retakes of the examination. Building a remediation plan into the OSCE process gives faculty time to focus on student learning.
However, for programs using OSCE as an assessment tool only, such as ours, remediation is optional. This assessment form of OSCE would focus more on the student receiving formative feedback with or without the consequence of a subsequent examination. This approach places greater importance on the feedback given to the student by the standardized patient.
Poor student performance on the nursing OSCE can be attributed to a number of nonstudent-related factors, including poorly designed OSCE stations or case scenarios, inadequately trained standardized patients, and gaps or deficits in curricular programming. All such factors need to be considered before the remediation process is designed. Careful consideration of these factors, as well as of remediation strategies for students performing poorly on the nursing OSCE, is warranted. A question could be added to the student PEQ asking whether students thought the course content was adequately tested.
Optimally, when a program is ready to implement OSCE use, mini-OSCEs might be run within or after each course to test content and to familiarize the students with the method. By weaving the OSCE process throughout the nursing curriculum, students would be better prepared to participate than when an OSCE is given as a one-shot prerequisite for admission into the final preceptorship.
From our experience, a department must take time to determine whether the nursing OSCE is a feasible option for the program. The faculty need adequate time to write a variety of case scenarios and update them and to recruit and supervise the standardized patients’ training. They must also monitor data collection, provide for the videotaping to be done (including finding space for the equipment and someone dedicated to film the sessions), and arrange for an optimal setting for the OSCE sessions. In addition, there needs to be program support for data collection, entry, analysis, and interpretation. These latter tasks could either be divided among faculty or conducted by a designated program evaluator. Without time for these important activities, the intention of offering a variety of case scenarios for the student experience and providing testing objectivity by using the OSCE format is likely to fall short of expectations.
There were several limitations to this study. First, the results were tailored to the specific nursing OSCE process at this university and, therefore, may not be appropriate for other institutions using an OSCE format. Second, this study was specific to the undergraduate program; the process may differ for review of the nursing OSCE widely used in graduate nursing programs. Finally, the scarcity of nursing programs in the United States reporting an OSCE-type process limited the amount of information available for comparing programs and provided little guidance in adapting the OSCE method to this student population.
Use of the OSCE format, for either formative or summative purposes, is wise for any program if resources allow. With attention to the time involved in developing case scenarios that reflect curriculum content, adequate training and evaluation of standardized patients, and accurate measurement of student performance, the OSCE can provide students with an opportunity to experience many more clinical situations than would be available in the natural clinical setting and to receive feedback (positive and negative) about their clinical performance, with a review of their strengths and weaknesses. In an OSCE, even constructive criticism can be given in a more relaxed and neutral way than on a unit. Sandwiched between positive comments, such criticism leaves students reporting a growing sense of competence in their clinical skills.
Although the use of the OSCE is in its infancy in nursing education, it has much to offer in evaluating clinical competencies because it reflects real-life tasks that nurses will face in the clinical arena. With careful development, as well as a plan for remediation and program support, the nursing OSCE can increase the validity, reliability, feasibility, and objectivity of student evaluation (Bartfay et al., 2004).
- Adamo, G. (2003). Simulated and standardized patients in OSCEs: Achievements and challenges. Medical Teacher, 25(3), 262–270. doi:10.1080/0142159031000100300 [CrossRef]
- Alinier, G. (2003). Nursing students’ and lecturers’ perspectives of objective structured clinical examination incorporating simulation. Nurse Education Today, 23, 419–426. doi:10.1016/S0260-6917(03)00044-3 [CrossRef]
- Bartfay, W., Rombough, R., Howse, E. & Leblanc, R. (2004). The OSCE approach in nursing education. Canadian Nurse, 100(3), 19–23.
- Bergevin, L. (1999). The complex problem of evaluating competency. Retrieved March 2, 2002, from http://www.otrq.qc.ca/english/inspection/default.htm
- Bradley, P. & Humphris, G. (1999). Assessing the ability of medical students to apply evidence in practice: The potential of the OSCE. Medical Education, 33, 815–817. doi:10.1046/j.1365-2923.1999.00466.x [CrossRef]
- Ebbert, D. & Connors, H. (2004). Standardized patient experiences: Evaluation of clinical performance and nurse practitioner student satisfaction. Nursing Education Perspectives, 25(1), 12–15.
- Eva, K. W., Rosenfeld, J., Reiter, H. & Norman, G. (2004). An admissions OSCE: The multiple mini-interview. Medical Education, 38, 314–326. doi:10.1046/j.1365-2923.2004.01776.x [CrossRef]
- Evans, J., Morales, D. & Robb, A. (1995). Running an OSCE. Unpublished manuscript.
- Heine, N., Garman, K., Wallace, P., Bartos, R. & Richards, A. (2003). An analysis of standardized patient checklist errors and their effect on student scores. Medical Education, 37, 99–104. doi:10.1046/j.1365-2923.2003.01416.x [CrossRef]
- Isaac, S. & Michael, W. B. (1997). Research and evaluation (3rd ed.). San Diego, CA: Educational and Industrial Testing Services.
- Linn, R. L., Baker, E. L. & Dunbar, S. B. (1991). Complex, performance-based assessment: Expectations and validation criteria. Educational Researcher, 20, 15–21.
- MacDonald, W. (2004, January). Standardized patients: Training and program issues. Paper presented at the Standardized Patient Training meeting of the University of New Hampshire, Durham.
- Martin, I., Stark, P. & Jolly, B. (2000). Benefiting from clinical experience: The influence of learning style and clinical experience on performance in an undergraduate objective structured clinical examination. Medical Education, 34, 530–534. doi:10.1046/j.1365-2923.2000.00489.x [CrossRef]
- Martin, J. A., Reznick, R. K., Rothman, A., Tamblyn, R. M. & Regehr, G. (1996). Who should rate candidates in an objective structured clinical examination? Academic Medicine, 71, 170–175. doi:10.1097/00001888-199602000-00025 [CrossRef]
- Mavis, B. (2000). Does studying an objective structured clinical examination make a difference? Medical Education, 34, 808–812. doi:10.1046/j.1365-2923.2000.00687.x [CrossRef]
- McWilliam, P. & Botwinski, C. (2009). Implementing the nursing OSCE: Addressing challenges to validity and reliability. Unpublished manuscript, University of New Hampshire, Durham.
- O’Neill, A. & McCall, J.M. (1996). Objectively assessing nursing practices: A curricular development. Nurse Education Today, 16, 121–126. doi:10.1016/S0260-6917(96)80068-2 [CrossRef]
- Redfern, S., Norman, I., Calman, L., Watson, R. & Murrells, T. (2002). Assessing competence to practice in nursing: A review of the literature. Research Papers in Education, 17(1), 51–77. doi:10.1080/02671520110058714 [CrossRef]
- Rentschler, D., Eaton, J., Cappiello, J., McNally, S. & McWilliam, P. (2007). Evaluation of undergraduate students using objective structured clinical evaluation. Journal of Nursing Education, 46, 135–139.
- Vessey, J. & Huss, K. (2002). Using standardized patients in advanced practice nursing education. Journal of Professional Nursing, 18(1), 29–35. doi:10.1053/jpnu.2002.30898 [CrossRef]
- Zraick, R.I., Allen, R.M. & Johnson, S.B. (2003). The use of standardized patients to teach and test interpersonal and communication skills with students in speech-language pathology. Advances in Health Sciences Education, 8, 237–248. doi:10.1023/A:1026015430376 [CrossRef]
Interview Guide for Faculty
|How did you formulate the case scenarios for the nursing objective structured clinical examination (OSCE) to determine that the student nurses display clinical competence?|
|How did you determine which case scenarios to portray for the nursing OSCE?|
|What has your clinical experience been to allow you to write the case scenarios for nursing OSCE?|
|What sources do you use to determine that the case studies depicted in the nursing OSCE accurately display current state of the art information?|
|Who determines when the case scenarios are updated?|
|How often are the case scenarios for the nursing OSCE updated and when was the last time they were revised?|
|How was it determined what aspects of a specific case study the standardized patients were to portray?|
|How do you determine that the standardized patient is competent to portray the case scenario assigned?|
|How do you determine that the standardized patient is competent to assess the student in the assigned nursing OSCE station?|
Interview Guide for Standardized Patients
|What was your training process at the university in becoming a standardized patient?|
|Are you always given the same case scenario to portray?|
|Do you serve as a standardized patient at other institutions?|
|Do you feel that the training you receive at the university in order to be a standardized patient is appropriate to assess the students?|
|Were you given specific information on how to provide feedback to the students?|
|If a student’s performance is unsatisfactory, do you feel comfortable with the training you received at the university to give the constructive criticism to the student?|
|Were there areas in the standardized patient training that were not covered or that you feel need to be expanded since you have been participating in nursing objective structured clinical examination (OSCE)?|
|Are you provided with feedback on your performance as a standardized patient?|
|What do you feel can be done to improve the nursing OSCE process for the standardized patient and/or the student at the university?|
Postencounter Evaluation Questionnaire
|Using the following scale, please circle the answer you feel accurately reflects your impression of the OSCE experience.|
|5- Strongly Agree|
|1- Strongly Disagree|
|1. The environment was realistic.||5 4 3 2 1|
|2. The situation was similar to an actual case.||5 4 3 2 1|
|3. This is a worthwhile experience for students.||5 4 3 2 1|
|Comments or suggestions:|