Journal of Nursing Education


The Objective Structured Clinical Examination: An Alternative Approach to Assessing Student Clinical Performance

Janet McKnight, MHSc, RN; Elizabeth Rideout, MHSc, RN; Barbara Brown, MScN, RN; Donna Ciliska, MN, RN; Diane Patton, MHSc, MEd, RN; Jean Rankin, MHSc, RN; Christel Woodward, PhD

Background and Rationale

Evaluation of the clinical performance of nursing students continues to be a source of debate and difficulty for teachers of nursing. Most clinical performance is evaluated by direct observation, a subjective method that has been declared lacking (Wood, 1982). Wood identifies four problems with this method:

1. the subjectivity and differences in perception that result in variability from instructor to instructor;

2. the ratio of students to instructor is generally such that only a small sample of behaviors is used for evaluation purposes;

3. the behaviors observed vary greatly from student to student, depending on the clinical settings and the patients within that setting; and

4. evaluation is often done while learning is in progress.

It was in an effort to address these problems that the faculty of the School of Nursing at McMaster University decided to assess the usefulness of the Objective Structured Clinical Examination (OSCE). This examination is defined as an objective method of assessing a student's clinical competence in which the areas tested and the evaluation criteria are determined in advance from course content and objectives. The medical education literature contains several reports of its use (Adeyemi, Omo-Dare & Rao, 1984; Harden, Stevenson, Downie & Wilson, 1975; Kirby & Curry, 1982). However, only one report of its implementation with nursing students was identified. Van Niekerk and Lombard (1982) described its use on an experimental basis with a group of ten nursing students. Their experience confirmed the objectivity of the test. They found it time consuming but felt the benefits outweighed the problems, recommended it to others, and adopted it for their preregistration examination.

During an OSCE, students rotate around a series of stations. At some stations, they are asked to take a focused history or perform some aspects of physical examination. These stations, where an observer is asked to score a student's performance, are called examiner stations. At other stations, students may be asked to answer short questions, to interpret patient data, or to record findings. Subsequent marking is required at these marker stations. Rating forms and scoring systems for each station are prepared in advance by the consensus of program planners and teachers.
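As an illustration of how a station's predetermined rating form might be applied, the sketch below (in Python) represents one station's checklist as a simple data structure and totals the marks a student earns. The station, criteria, and weights are hypothetical; the article does not describe the actual rating forms used.

```python
# Minimal sketch of checklist-style scoring for one OSCE station.
# The station name, criteria, and weights are hypothetical; the article
# does not describe the actual rating forms used.
from dataclasses import dataclass

@dataclass
class Station:
    number: int
    kind: str        # "examiner" (scored by an observer) or "marker" (marked afterward)
    criteria: dict   # criterion description -> marks available

def score_station(station: Station, observed: set):
    """Return (marks earned, marks available) given the criteria the student met."""
    earned = sum(marks for item, marks in station.criteria.items() if item in observed)
    return earned, sum(station.criteria.values())

# Hypothetical examiner station: a focused history.
station_4 = Station(
    number=4,
    kind="examiner",
    criteria={
        "asks about onset and duration": 2,
        "asks about associated symptoms": 2,
        "asks about relieving and aggravating factors": 2,
        "summarizes findings to the patient": 2,
    },
)

earned, available = score_station(
    station_4, {"asks about onset and duration", "asks about associated symptoms"}
)
print(f"Station {station_4.number}: {earned}/{available} ({100 * earned / available:.0f}%)")
# -> Station 4: 4/8 (50%)
```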

The OSCE, therefore, should minimize or eliminate some of the problems described by Wood. First, instructor variability is reduced, since one examiner stays at a particular station and examines the students as they rotate through. A larger sample of behaviors can be used for evaluation purposes, and these are clearly removed from the learning situation. Use of the same simulated patients for examiner stations controls the complexity and allows for very comparable evaluation situations for each student.

The OSCE was implemented in the Bachelor of Science in Nursing program, specifically within the Primary Care component of the Level III clinical course, where the traditional direct observation method of evaluation had previously been used. Within this component, students learn the historical and current perspectives of the nurse's role in Primary Care; that is, the role of the nurse practitioner. Emphasis is placed on developing health assessment skills, which include history taking about current health status and physical examination techniques. Process and content related to patient education are taught, and students apply knowledge of the counseling process to problems frequently encountered in primary care. Clinical practice is obtained in the office of a community physician, and settings where a nurse practitioner is employed are preferred.

A study was conducted to evaluate the usefulness of this method of evaluation in our Bachelor of Science in Nursing program. The purposes of the study were:

1. to determine the usefulness of the OSCE as an evaluation measure;

2. to determine its acceptability to both students and faculty;

3. to assess the differences in time spent by faculty between the traditional direct observation method and the OSCE;

4. to assess the reliability of the OSCE; and

5. to assess its validity.

This article will describe our experience in developing and implementing the OSCE and address the first three purposes as stated above. The reliability and validity of the OSCE will be addressed in a further paper.

Method

Subjects: All the students who were enrolled in the Primary Care component of the clinical course were asked to participate. This comprised two distinct groups: the majority (81 students) were enrolled in Level III of the basic BScN program, and an additional 25 students were diploma-prepared registered nurses enrolled in the first year of the post-diploma BScN program. Since we were assessing the usefulness of the OSCE, participation was optional for students and did not contribute to the final grade. Of the 106 students enrolled in the course, 77 participated in the OSCE; 72% of the basic students and 73% of the post-diploma students.

Development of Examination Stations: Stations were developed to represent the course objectives and course content. The areas identified were: management, history taking, physical examination, data analysis, teaching, and interpersonal skills. The number, type, and focus of the stations were determined by a group of faculty from Level III. Faculty volunteered to develop one or more stations, which were then critiqued and finalized by other members of the group. The format of the circuit was designed so there would be equal representation of examiner and marker stations (Table 1).

Organization of the Examination: Two circuits were organized; each had 20 stations, including a rest station. In order to accommodate the number of students involved, one circuit was run three times and the other twice. Students were randomly assigned to a starting station and a circuit. It took two and a half hours for each student to complete a circuit. In addition, the students participated in a 20-minute orientation session and a 30-minute debriefing session. In the debriefing session, students were shown the marking criteria for each station and asked to comment on the relevance, fairness, and clarity of each station.
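As a rough sketch of the logistics just described, the following Python fragment assigns each participant at random to one of the two circuits and to a starting station, and derives an approximate time per station from the two-and-a-half-hour circuit length. It assumes equal time at every station; the actual per-station timings are not reported in the article, and the student names are placeholders.

```python
# Rough scheduling sketch for the OSCE circuits described above.
# Assumes equal time per station; actual per-station timings are not reported.
import random

N_STATIONS = 20          # stations per circuit, including one rest station
CIRCUIT_MINUTES = 150    # two and a half hours per circuit
MINUTES_PER_STATION = CIRCUIT_MINUTES / N_STATIONS   # 7.5 minutes

def assign(students, n_circuits=2, seed=1):
    """Randomly assign each student a circuit and a starting station."""
    rng = random.Random(seed)
    return {
        name: {
            "circuit": rng.randrange(1, n_circuits + 1),
            "start_station": rng.randrange(1, N_STATIONS + 1),
        }
        for name in students
    }

students = [f"student_{i:02d}" for i in range(1, 78)]   # the 77 participants
schedule = assign(students)
print(f"{MINUTES_PER_STATION:.1f} minutes per station")
print(schedule["student_01"])   # e.g. {'circuit': 1, 'start_station': 5}
```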

TABLE 1: STATIONS BY CONTENT AND METHOD

TABLE 2: STUDENT SCORES BY CONTENT AREA

Results

Data were compiled from student scores and feedback from students and faculty.

Student Scores: Student scores ranged from 38.3% to 59.6%, with a mean of 50.8% and a standard deviation of 5.3.
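For readers who wish to reproduce such summary statistics from raw marks, the short Python sketch below computes the range, mean, and standard deviation of a set of percentage scores. The scores listed are hypothetical; the study's individual marks are not published, and the article does not state whether the sample or population standard deviation was reported.

```python
# Descriptive statistics for a set of OSCE percentage scores.
# The scores below are hypothetical; only the summary values are reported in the article.
import statistics

scores = [38.3, 44.0, 47.5, 50.1, 52.6, 55.9, 59.6]   # hypothetical marks (%)

print(f"range: {min(scores)}% to {max(scores)}%")
print(f"mean: {statistics.mean(scores):.1f}%")
# stdev() is the sample standard deviation; pstdev() would give the population value.
print(f"standard deviation: {statistics.stdev(scores):.1f}")
```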

Student Scores by Content Area: Students performed best on the history taking and physical examination stations, and less well on the stations assessing management and interpersonal skills (Table 2).

Student Feedback: Sixty-seven students (93%) completed a questionnaire at the end of the circuit. The questionnaire was designed to elicit information regarding the fairness of the OSCE in terms of the Primary Care objectives, the opportunity to learn the skills being tested in the Primary Care course, and the fairness of the criteria used to score each station. Students were also asked to comment on the amount of time available to complete the stations, the available response options, and the clarity of the objectives of each station, and to add any other comments they wished to make. Student feedback and mean scores on the stations are summarized in Table 3.

Faculty Evaluation: Faculty who participated as examiners were asked to evaluate each examiner station in terms of time allowed to complete the station, the appropriateness of the content used to test the station, and the clarity of criteria developed. In addition, they were asked for general comments on the OSCE. No formal evaluation of the marker stations was done.

Evaluators felt that time was sufficient and criteria were realistic for stations 2, 4, 9, 10, 12, 15, and 16. Some minor modifications were suggested for other stations. Time was too short and criteria too difficult for station 14, indicating a need for extensive revision; moreover, student performance on that station was poor. Criteria for station 6 were not sufficiently clear to differentiate between good and poor students.

Overall, faculty response to the OSCE was positive, and its usefulness as an evaluation method was supported.

Discussion

The OSCE was instituted on a trial basis in response to two concerns of faculty: a desire to implement an objective means of assessing clinical skills, and a desire to reduce the amount of faculty time spent in evaluation. In addition, students were concerned about the subjectivity and perceived inconsistency of clinical evaluation within and among faculty.

TABLE 3: STUDENT FEEDBACK

Student response to the OSCE was positive. Although 50% of the students reported it to be more stressful than the usual method of evaluation, over 50% reported it to be more objective than other methods and 36% felt it was about the same.

Its adoption as a formal evaluation method was favored by 40% of the students with 20% undecided. In addition to its objectivity, the method was acceptable to students because it tested important knowledge and skill, and because each student was evaluated at the same time on the same objectives.

Faculty response to the OSCE experiment was also positive. In particular, faculty identified its usefulness in pointing out consistent problems in student knowledge, possible omissions of specific content from the program, and gaps in student skills. They favored its inclusion as an evaluation method for primary care and suggested its adoption in other areas of the program.

Evaluation of performance has been based primarily on observation in the clinical practice setting. In addition to the problems of subjectivity and lack of uniformity, this method has been time consuming, involving travel throughout the city and surrounding areas to the physician practices where students were placed. Faculty time, even including time spent preparing for this first OSCE, was less than that spent on the traditional method. After a bank of stations has been developed, faculty will be required only at specific times to set up, be present at, and mark selected stations, which will require far fewer faculty hours. In addition, tests of interrater reliability confirmed that marker stations could be marked as accurately and consistently by secretarial staff as by faculty.
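The article does not report which agreement statistic was used to compare secretarial and faculty marking, so the sketch below shows one common choice, Cohen's kappa, computed on hypothetical pass/fail decisions for a single marker station.

```python
# One way to quantify interrater reliability between two markers of the same
# marker-station papers: Cohen's kappa on categorical (pass/fail) decisions.
# All ratings below are hypothetical; the statistic actually used is not reported.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

faculty   = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
secretary = ["pass", "pass", "fail", "pass", "pass", "pass", "pass", "fail"]

print(f"kappa = {cohens_kappa(faculty, secretary):.2f}")   # 0.71 for these ratings
```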

Students' scores on the OSCE had a mean of 50% (or D), much lower than their scores under the traditional method, where the mean was A-. Further details on the reliability and validity of the OSCE, including comparison with other measures of clinical and academic performance, are presented in a further paper.

In examining reasons for the performance level of students, we looked at scores by content area and at scores in relation to students' perception of the appropriateness of each station and their preparation for performance on it. We also considered faculty evaluation of the criteria and timing of stations. Students' mean scores were best on stations involving patient education, history taking, data analysis, and physical examination. They found the stations testing management and interpersonal skills more troublesome. Few students felt they had an opportunity to learn the skills tested at the interpersonal skills stations (numbers 18 and 20), or that the criteria for these stations were appropriate. For these two stations, videotaped vignettes were used and the students were asked to answer questions related to the interpersonal styles depicted. More rigorous critiquing of stations before using them in a testing situation is indicated, particularly in this rather complex area. Students also scored poorly on a management station (#17) that tapped content students felt was inappropriate and that they had had limited opportunity to learn. Faculty agreed and again noted the need for pretesting and critiquing of stations at the time of preparing the OSCE.

Overall, the students performed well on those stations where the content had been presented through class and clinical experience, and where the criteria were clear and specific. Pretesting of stations to ensure that they reflect the content as well as the objectives of the course should result in accurate and fair evaluation of students. Faculty time spent on the OSCE is substantially less than that required by the traditional method.

Continued use and refinement of this very promising method of student evaluation is indicated. Our intent is to use it again for evaluation of student performance in primary care.

References

  • Adeyemi, S.D., Omo-Dare, P., & Rao, C.R. (1984). A comparative study of the traditional long case with the objective structured clinical examination in Lagos, Nigeria. Medical Education, 18, 106-109.
  • Harden, R.M., Stevenson, M., Downie, W., & Wilson, G.M. (1975). Assessment of clinical competence using the objective structured examination. British Medical Journal, 1, 447-451.
  • Jackson, F. (1983). Problems in assessing nursing students. Nursing Times, 79(23), 33-34.
  • Kirby, R.L., & Curry, L. (1982). Introduction of an objective structured clinical examination (OSCE) to an undergraduate clinical skills program. Medical Education, 16, 362-364.
  • Van Niekerk, J.G.E., & Lombard, S.A. (1982). The OSCE experiment at MEDUNSA. Curationis, 5(1), 44-48.
  • Wood, V. (1982, Jan.-Feb.). Evaluation of student nurse clinical performance. International Nursing Review, 29(1), 11-18.


DOI: 10.3928/0148-4834-19870101-10
