Clinical experience for nursing students comes in settings as diverse as the patients and their medical problems. A given student may never experience an appropriate range of patient problems. Stillman, Regan, and Swanson (1987) say, "Each . . . student, in a very real sense, has a different clinical curriculum that diverges in unknown ways from the training that faculty would like to provide" (p. 1981).
Nurse practitioner (NP) education prepares mid-level health care providers who furnish broad services to patients and their families. An implicit question is how best to prepare competent clinicians. The nature of clinical opportunities does not allow for reproducible or even similar learning experiences, and evaluation of student performance during faculty site visits may not be valid or reliable. Given the variety of clinical situations, some students miss formative feedback that would enhance cognitive and clinical performance.
The Objective Structured Clinical Assessment (OSCA), a form of simulated clinical learning, "is a method of assessing a student's clinical competence which is objective rather than subjective, and in which the areas tested are carefully planned by the examiners" (Harden & Gleeson, 1979, p. 42). The OSCA consists of clinical and static stations. Clinical stations assess a student's ability to take a focused patient history or perform a limited physical examination in the presence of an examiner who scores the student's performance based on predetermined criteria. At static stations, students answer multiple-choice or short-answer questions based on the previous station. Again, students' scores are based on answers determined previously.
While the OSCA is acknowledged to be a valuable tool for formative and summative clinical evaluation of medical students, little is reported in nursing literature. This study investigated the effect of participation in OSCA simulations on cognitive and clinical performance of NP students.
The theoretical framework that supported this research is transfer of learning (Ellis, 1965). Ellis maintains that transfer of learning occurs when "experience or performance on one task influences performance on some subsequent task" (p. 3). Transfer of learning can be: (a) positive, when performance on one task facilitates performance on a second task; (b) negative, when performance on one task interferes with or inhibits performance on a second task; or (c) neutral, when performance on the first task has no effect on performance of the second.
Transfer of learning is influenced by several factors. One is the similarity between the original task and the transfer task: the greater the similarity between the two tasks, the greater the likelihood of positive transfer. Another is practice, with positive transfer increasing as practice opportunities increase (Ellis, 1965).
A basic design for transfer of learning studies includes the use of control and experimental groups. Positive transfer of learning takes place if the experimental group, after having been exposed to the planned learning experience (i.e., the OSCA simulations), performs better on the subsequent tasks (i.e., performance on written examinations and clinical evaluations) than does the control group.
Simulations were first introduced when Barrows and Abrahamson (1964) described the use of a "programmed patient" for assessing the performance of medical students in clinical neurology. Since that time simulations have become an integral part of medical education and their use in nursing education, first introduced by Frejlach and Corcoran (1971), is increasing.
Critical to development of simulated learning experiences is the maintenance of "psychological fidelity" (Tinning, 1975). Psychological fidelity occurs when the simulation is created in such a way that it represents reality for the learner; this, according to Tinning, is the key to positive transfer of learning.
Clinical simulations using simulated patients can be an ideal way to promote student learning. Simulated patients may be programmed in such a way that they present patient problems that otherwise might be embarrassing or distressing for a real patient (Harden & Gleeson, 1979). With a simulated patient, learning can often proceed in a more relaxed environment; in addition, risks to patients because of student inexperience are avoided. Once a simulated patient format is developed, it may be repeated so long as it is relevant to teaching needs. This is cost-effective and also permits comparison of students over time.
Simulations maximize student learning. According to McDonald (1987), students state that "they learn more in the simulated clinical laboratory than during comparable time in the actual clinical experience" (p. 291). In addition, because simulations allow for immediate reinforcement of learning, students' confidence regarding their own decision-making abilities can increase.
Simulations take various forms. Computer assisted instruction (CAI), patient management problems (PMP), standardized patients (SP), and the OSCA, are all examples of simulations.
The OSCA is a method of educating and evaluating students that is objective rather than subjective. It is designed to assess a student's clinical competence, with clinical skills to be tested broken down into their various components. Because performance standards have been predetermined, the examination is more likely to be objective. Students are assessed by rotating through clinical and static testing stations.
To maintain the objectivity of the OSCA, two of the three variables encountered in clinical examinations, the patient and the examiner, are controlled (Harden, Stevenson, Downie, & Wilson, 1975). Rather than using real patients, patient simulators (healthy individuals trained to enact a role) are used. Simulated patients can be instructed to present desired historical information, mimic physical findings, or participate in a management plan. Because they are simulated, patients can portray the same scenario consistently and repeatedly over time (McDowell, Nardini, Negley, & White, 1984).
The actions of the examiners are also controlled in OSCA simulations. Examiners are trained to assess student performance based on a checklist. Items included on the checklist are predetermined and denote what will be evaluated by the examiner. Only checklist items are used by the examiners. This minimizes examiner bias (Soeter, Scherpblier, & van Lunsen, 1987).
The OSCA stations provide the clinical situations by which students are assessed. At each clinical station, the student is presented with a short patient scenario and is requested to demonstrate a clinical skill. The format of each station is tailored to testing one aspect of clinical competence (Swanson & Norcini, 1989). Testing stations, both clinical and static, are allocated equal time. The advantage of having two different types of stations is that it diminishes the effect of queuing. Additionally, multiple stations allow for the examination of a greater number of students. The number of stations included in an OSCA varies depending upon content, the time allotted for the examination, and the number of students.
Students find the OSCA to be a stimulating and effective form of assessment (Agardh, 1987) and, when given the opportunity, recommend that it be continued in subsequent years (Hoole, Kowolowitz, McGaghie, Sloane, & Colindres, 1987). The value of OSCA participation may be that students see the OSCA as a format for assisting them in learning clinical skills (Agardh, 1987). It may also be that the OSCA helps students see their performance ability clearly (Lavelle & Harden, 1987) and recognize "the need for thinking as opposed to memorization" (p. 527).
Faculty respond positively to their involvement in the OSCA and find it to be a stimulating experience (Agardh, 1987). Faculty are able to obtain detailed information on each student's strengths and weaknesses (Barrows, Williams, & Moy, 1987), and information gained can guide educational planning. Petrusa et al. (1987) also indicate that participation in the OSCA provides faculty members "with better data about their own instructional efficacy as well as better information on which to base decisions about students' clinical development" (p. 41).
OSCA Use in Medical Education
Use of the OSCA in medical education began with a study by Harden et al. (1975). Harden et al. studied medical students by dividing them into three groups, two of which were evaluated using traditional clinical examinations, while the experimental group was evaluated by the OSCA. The performance of all three groups of students was compared to their scores in a multiple-choice question (MCQ) examination. Although the two control groups showed no correlation between scores on the traditional clinical examination and the MCQ examination, a highly significant correlation was found between scores on the OSCA and the MCQ examination. The researchers concluded that the OSCA has value as an assessment method and the feedback provided to students can be useful to them in directing their studies.
Medical students' attitudes toward the OSCA and conventional methods of assessment were studied by Lazarus and Kent (1983). Positive attitudes toward the OSCA as a method of assessment were reported. The OSCA was also selected as the preferred method of clinical evaluation. Lazarus and Kent concluded that the OSCA is an excellent alternative to the traditional oral examination.
Reports that medical students are often inadequately supervised by attending physicians during their clinical rotations led Hoole et al. (1987) to implement the OSCA to assess clinical skills of second- and third-year medical students. While the researchers were interested in the evaluation characteristics of the OSCA, they were also interested in the merits of the OSCA as an educational method. Data gathered on students showed that they enthusiastically supported the OSCA as a method of assessing clinical performance. Hoole et al. found their experience with the OSCA to be "unequivocally positive." They further reported that the OSCA "as an educational endeavor is more than we ever envisioned" (p. 466) and believe its role as an educational method is emerging. The OSCA, conclude Hoole et al., is a reliable evaluation and teaching tool.
Simulations in Nursing Education
Kolb and Shugart (1984) recommend the use of simulations in nursing education because of their value in student self-evaluation and performance appraisal. The authors suggest that until nursing educators become familiar with this form of assessment, simulations be used for formative rather than summative evaluation.
Using a clinical simulation for the purposes of formative evaluation was introduced by McDonald (1987). McDonald provided senior nursing students with an emergency room simulation to focus on nursing intervention in patient situations that require rapid decision-making skills. Students participated in the simulation by taking turns being the patient. Feedback on each nursing student's ability was provided by peer sharing in the simulation and through viewing of taped video recordings. McDonald concludes that the educational value of the simulations was twofold: (a) students can be provided with experiences that faculty believe will be necessary for their future practice, and (b) learning time can be maximized. Ross (1988) also used a lab simulation to assist senior nursing students in their management and decision-making skills.
In an application of the OSCA to an undergraduate nursing program, van Niekert and Lombard (1982) used stations to evaluate first-year nursing students' clinical skills. Experiences included stations where: (a) procedures were performed with demonstration dolls or simulated patients in the presence of an examiner; (b) procedures were performed without an examiner in attendance; (c) an oral examination was conducted by an examiner; and (d) students answered questions based on selected topics. While the researchers did not report the reaction of students to participation in the OSCA, the researchers' initial experience with this type of clinical evaluation led them to recommend it as a valuable tool in nursing education.
Ross et al. (1988) introduced the OSCA to evaluate the performance of clinical skills of third-year nursing students. Students were randomly assigned to one of two groups with the experimental group participating in an OSCA designed to assess skills in neurology. Students' responses to the OSCA were positive; they perceived it to be relevant and a motivating factor for learning skills.
Simulations are not well documented in graduate nursing education. Sherman, Miller, Farrand, and Holzemer (1979) reported an early study describing their use of simulations. Students were introduced to a simulated patient encounter using a written PMP; the PMP, however, was used only for course evaluation.
A simulated patient was used to evaluate the clinical performance of NP students in a study conducted by McDowell et al. (1984). The simulation, based on a problem the researchers thought students were likely to encounter in their actual patient experiences, used only one patient problem and reported only historical information. McDowell et al. used the simulation for the purposes of summative rather than formative evaluation and concluded that student performances on this clinical simulation were "closely correlated with class standings and ratings by clinical preceptors" (p. 38).
The most recent work on use of simulations in NP education was reported by Wilbur, Miller, and Talashek (1991), who introduced standardized patients in an attempt to standardize the evaluation process of NP students' clinical performance. Information regarding the acceptability and effectiveness of these standardized patients has yet to be reported.
The use of clinical simulations in nursing education is supported by the literature. Clinical simulations have been implemented in both undergraduate and graduate nursing education, although it appears that the OSCA is currently used in undergraduate nursing programs only. It seems appropriate that the OSCA be studied as a method for the formative evaluation of NP students and for improving their cognitive and clinical competency.
Pearson Product-Moment Correlation Matrix of Covariate and Dependent Variables
1. Nurse practitioner students educated with OSCAs will score higher on subsections of midterm examinations than NP students who are not educated with OSCAs.
2. Nurse practitioner students educated with OSCAs will demonstrate better clinical skills as evaluated by clinical preceptors than NP students who are not educated with OSCAs.
A quasiexperimental design was used in this study because it is the preferred method when subjects cannot be assigned randomly to groups (Cook & Campbell, 1979; Ross et al., 1988). Subjects in this study were assigned to the control or experimental group based on the semester in which they were beginning NP program clinical course work and, therefore, could not be randomized.
There were two dependent variables in the study: (a) cognitive learning, as measured by scores on subsections of midterm examinations; and (b) clinical competency, as measured by preceptor evaluation of student clinical performance. The independent variable was OSCA introduction.
The subjects for this study were graduate nursing students enrolled in an NP program at a large state university. Subjects in the control group (n = 18) were taught using the traditional lecture/discussion method only. Subjects in the experimental group (n = 11) were taught using the traditional lecture/discussion method plus the OSCA. To avoid contamination between students in the control and experimental groups, the groups were studied a year apart, with the control group preceding the experimental group.
Twenty-seven female and two male NP students participated in this study. Of the total population sampled, the mean age of subjects was 36 years, with subjects ranging in age from 24 to 50 years. No significant age difference between the control and experimental groups was found.
The nursing experience of subjects ranged from a minimum of two years to a maximum of 28 years. Of the total population sampled, the mean work experience was 11 years. While subjects in the experimental group had less work experience, no significant differences existed between the two groups.
A Pearson product-moment correlation coefficient was computed on the covariates of age, years in nursing, clinical pretest, and grade point average (GPA), and the dependent variables of cognitive learning and clinical competency. Two significant correlations were found at the p < .05 level: age of the student and years in nursing showed a .651 correlation, and a .500 correlation was found between GPA and scores on cognitive learning (Table 1).
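As an illustration of the correlation analysis described above, a Pearson product-moment correlation matrix of covariates and dependent variables can be computed as follows. The data here are hypothetical, invented solely for the sketch; they are not the study's data, and the variable ordering is an assumption based on the covariates named in the text.

```python
import numpy as np

# Hypothetical data for six subjects (NOT the study's data).
# Columns: age, years in nursing, clinical pretest, GPA,
# cognitive score, clinical competency score.
data = np.array([
    [24,  2, 70, 3.2, 82, 3.1],
    [31,  8, 75, 3.6, 88, 3.4],
    [36, 11, 72, 3.4, 85, 3.3],
    [42, 17, 78, 3.8, 91, 3.6],
    [50, 28, 74, 3.1, 80, 3.0],
    [29,  6, 77, 3.9, 93, 3.7],
])

# Pearson product-moment correlations between all pairs of variables;
# rows and columns follow the column order above.
r = np.corrcoef(data, rowvar=False)
print(np.round(r, 3))
```

The resulting symmetric matrix, with ones on the diagonal, corresponds in form to the correlation matrix reported in Table 1.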
Cognitive measures used in this study were the scores on four subsections of two midterm examinations that corresponded to the four OSCAs presented to the experimental group. The purpose of these scores was to assess the knowledge that students had acquired as a result of the different teaching strategies. Test items were taken from a test bank of previously used items found to be valid and reliable in testing former students on these subjects. These items were identical for both groups.
Each subsection tested factual knowledge of hypertension, urinary tract infections (UTIs), cardiology, and diabetes. Each question was assigned a value of one point, and the range of possible scores varied depending on the number of questions in each subsection. Each section consisted of objective multiple-choice, true/false, and short-answer questions.
Clinical competency was determined by averaging the scores obtained on an eight-section clinical evaluation tool for student performance. The purpose of this instrument was to measure the clinical skills of the NP student in the following areas: communication, subjective history-taking, objective physical examination, assessment, management planning, oral presentation, record keeping, and professional role. The rating scale for this evaluation tool ranged from zero (poor performance) to four (strong performance). Each section was evaluated separately according to guidelines that clearly identify the performance that corresponds to each rating on the rating scale. The clinical evaluation tool was scored by clinical preceptors who had participated in the NP program for two to 15 years and were familiar with the goals and objectives of the program. Clinical preceptors were not told of the study to avoid contaminating their scoring from one year to another.
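The scoring described above reduces to a simple average of the eight section ratings. A minimal sketch, with hypothetical preceptor ratings (the section names follow the text; the numeric values are invented):

```python
# Hypothetical preceptor ratings on the 0 (poor) to 4 (strong) scale
# for the eight sections of the clinical evaluation tool.
ratings = {
    "communication": 3, "history-taking": 4, "physical examination": 3,
    "assessment": 2, "management planning": 3, "oral presentation": 4,
    "record keeping": 3, "professional role": 4,
}

# Clinical competency is the mean of the eight section scores.
competency = sum(ratings.values()) / len(ratings)
print(round(competency, 2))  # 3.25
```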
Four OSCAs were used in this study: hypertension, UTI, chest pain, and diabetes. The rationale for selecting these four was that they represented typical patient problems encountered by NPs in providing primary health care. Only medical conditions were selected because this was the focus of the course students were participating in. The OSCAs used were either developed by the investigator or adapted from previously developed OSCAs.
In order to implement the OSCA, examiners were needed to assess student performance at each of the clinical stations. Prior to their participation, examiners attended a training session. To support the training session and their role as examiners, each examiner was given a packet of information describing the examiner role and the feedback process. Additionally, the training packet contained a copy of the clinical station the examiner was to assess, including the presenting situation and evaluation checklist, as well as guidelines for scoring the clinical station.
Simulated patients were recruited and trained. All simulated patients were provided a detailed description of the case they would be portraying. Simulated patients were encouraged to assimilate symptoms, physical findings, personality, and the life situations and problems of the patient rather than memorize the details of the case. Simulated patients used for each of the four OSCAs were standardized so that the same patient scenario was presented to all students in the same way.
The method of instruction for both the experimental and control groups was the same. Both groups received lectures on hypertension, UTIs, cardiopulmonary disease, and diabetes during their regularly scheduled lecture from lecturers who used traditional lecture/discussion format and regularly lectured to the NP students on these subjects. Objectives for each of the four lectures remained the same for both control and experimental groups, as did the lecturer presenting the content, the time that class was offered, and textbooks used.
Students in the experimental group participated in the OSCA simulations in the week following the lecture on the disease entity the OSCA was portraying. For example, students participated in the clinical and static OSCA stations that addressed hypertension the week after the lecture on this subject.
Students were scheduled for each OSCA as part of their required three-hour seminar. Students were assigned an examination room with a simulated patient and examiner. Prior to entering the room, the student read the OSCA clinical scenario that concluded by directing the student to either take a focused history, perform a physical examination related to the presenting problem, or provide patient education. Students were given 15 minutes to interact with the simulated patient and an additional five minutes for feedback from both the examiner and patient. Students then moved to the 15-minute static station to answer written questions relating to their OSCA clinical experience. As students completed the static station, their answers were reviewed with the investigator. When all students had completed both clinical and static OSCA stations, a debriefing and a review of the OSCA occurred.
Multiple regression was used to analyze the data. Each hypothesis was tested by means of an F test (significance level p < .05) for differences between groups on the dependent variables. In this study one predictor variable (introduction of the OSCA) and two criterion variables (cognitive learning and clinical competency) were used. The covariates of age, years in nursing, clinical pretest score, and GPA upon entrance into the graduate nursing program were introduced into the equation.
Pearson product-moment correlation was used to determine significant associations between variables. Analysis of data also included the t test, which was used to look for differences between the control and experimental group in their subjective evaluation of the clinical course work. Finally, qualitative data from the experimental group regarding their impression of each OSCA and their overall summary of the OSCA experience were reviewed.
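The regression analysis described above, testing the group effect after adjusting for covariates, can be sketched as a nested-model F test (analysis of covariance by regression). This is a minimal illustration under stated assumptions: the function name and the randomly generated data are hypothetical, not the study's.

```python
import numpy as np
from scipy import stats

def group_effect_f_test(y, covariates, group):
    """F test for the group indicator after adjusting for covariates,
    comparing a reduced model (covariates only) with a full model
    (covariates plus group), as in analysis of covariance."""
    n = len(y)
    X_red = np.column_stack([np.ones(n), covariates])
    X_full = np.column_stack([X_red, group])

    def rss(X):
        # Residual sum of squares from an ordinary least squares fit.
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    rss_red, rss_full = rss(X_red), rss(X_full)
    df_num = X_full.shape[1] - X_red.shape[1]   # one predictor added
    df_den = n - X_full.shape[1]
    f = ((rss_red - rss_full) / df_num) / (rss_full / df_den)
    p = stats.f.sf(f, df_num, df_den)           # upper-tail p value
    return f, p

# Hypothetical data: 12 subjects, 2 covariates, group indicator 0/1.
rng = np.random.default_rng(0)
covs = rng.normal(size=(12, 2))
group = np.repeat([0.0, 1.0], 6)
y = covs @ np.array([0.5, -0.3]) + rng.normal(size=12)
f, p = group_effect_f_test(y, covs, group)
print(f"F = {f:.3f}, p = {p:.3f}")
```

A nonsignificant p from such a test, as in the results reported below, indicates that the group indicator adds no explanatory power beyond the covariates.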
Descriptive data of subjects' performance on the four subsections of the two midterm examinations showed that the experimental group performed slightly better than the control group on the hypertension, cardiology, and diabetes test sections. However, when data relating to the variables in this hypothesis were analyzed using multiple regression, no significant difference (p = .626) between the groups was found, indicating that participation in the OSCA did not lead to better performance on subsections of the midterm examinations. Of the covariates included in the regression equation, only GPA was strongly related to the dependent variables and showed significance with a p value of .026 (Table 2).
Data were gathered on each subject's clinical performance as determined by their clinical preceptors. Multiple regression was used to analyze these data. After adjusting for the effects of the covariates, no significant difference (p = .110) between the two groups was found, indicating that OSCA participation did not lead to better clinical performance. The only covariate significantly related to student clinical performance was GPA, which was significant at p = .012 (Table 3).
Regression of Objective Structured Clinical Assessment and Covariates on Cognitive Performance
Regression of Objective Structured Clinical Assessment and Covariates on Clinical Competency
Other Findings of Interest
After each of the four OSCAs, students evaluated their experience. In each instance the majority of students strongly agreed or agreed that the OSCAs had been valuable learning experiences, improved their clinical competence, reinforced the objectives of the lectures, and provided valuable feedback. Students were also asked what they liked best or least about participating in the OSCAs. Students described the value of being placed "in a situation that felt real" and the challenge of "thinking on my feet." Students also mentioned having the opportunity to learn and "take the risk before I got into the clinical setting." Overall, the OSCAs were perceived as a useful learning experience that enhanced clinical practice.
The predominant theme in what the students liked least about the OSCAs was anxiety. Students mentioned that it was "scary," "threatening," "nerve-wracking," and anxiety-provoking. One student who wrote of the anxiety said, "This [anxiety] declined and, by the final experience, I saw the OSCA as a strongly welcomed opportunity."
Support for the first hypothesis of interest was not shown, indicating that participation in the OSCAs as implemented in this study did not lead to a significant difference in learning between the two groups. The only finding of significance was that, regardless of the group, students with higher GPAs upon entering the graduate program performed better on tests of cognitive learning - a finding that is not surprising given what is known about the predictive value of GPAs.
One reason for lack of a significant difference between the two groups on cognitive learning may be that the predictor variable selected, introduction of the OSCA, was not an appropriate predictor of cognitive knowledge. It could also be that significance was not shown because what was measured in the OSCAs and what was measured in the midterm examination subsections were not the same. Stillman and Gillers (1986) maintain that written examinations do not always measure the skills required for clinical practice.
Transfer of knowledge could also have been affected by the amount of information provided in each OSCA. The OSCAs implemented in this study assessed limited areas within each subspecialty and, although learning occurred, it may be that the cognitive areas tested required the student to transfer more knowledge than was provided by either the static or clinical stations. It could be that students would have performed better had more than one clinical and one static station been implemented with each subspecialty area.
Although the results of multiple regression fell short of significance, and therefore the second hypothesis of interest could not be supported, a trend in the predicted direction was shown. It may be that lack of power in the significance test was due solely to the small sample size, a constraint of the experimental situation. The trend in the predicted direction suggests that OSCA implementation may lead to better clinical performance. One explanation for this trend is that the original task, participation in the clinical stations of the OSCAs, and the criterion task, clinical performance, were more similar than that which was measured in the first hypothesis. Ellis (1965) maintains that the similarity between the original task and transfer task is a major factor in influencing the degree of transfer.
An important consideration regarding this hypothesis is that the OSCA and other forms of clinical evaluation may measure different aspects of clinical performance (Roberts & Norman, 1989). The clinical evaluation tool used to measure clinical performance in this study, while measuring a gestalt of clinical practice, may not be measuring what was assessed in the OSCAs implemented.
Finally, another possible reason that significance was not achieved could be related to the number of OSCAs implemented and the variability of students' clinical experiences. Studies have shown that increasing the number of OSCA stations not only increases the range of competencies that can be assessed but also improves the OSCA's reliability as a method of student evaluation (Ross, Syal, Hutcheon, & Cohen, 1987).
All students agreed or strongly agreed that OSCA participation was a worthwhile method for improving clinical competence. While this finding was not supported in the quantitative analysis of this research, it is a qualitative response worth noting, as previous studies in medical education have supported the value of the OSCA for learning clinical skills (Agardh, 1987; Hoole et al., 1987). A similar finding has been described in a study of nursing students (Ross, 1988). These positive responses reflect students' subjective recognition of the OSCA's worth; the perceived value of their OSCA participation may have been the realization that the skills they learned would transfer to their clinical practice.
Although the original purpose of the study was not to identify students' subjective evaluations of their OSCA experiences, those evaluations were favorable, a finding supported by others within medical education (Agardh, 1987; Harden, 1987; Ross et al., 1987) and one that lends support to continued investigation of the value of OSCA implementation in nursing education and NP programs.
Because OSCAs are a new experience for nursing students and educators and are different from traditional measures, nursing faculty must carefully consider ways in which OSCAs are implemented and used. Of utmost importance is the development of each OSCA clinical and static station, the training of examiners and simulated patients, and the preparation of students for their participation in the OSCA experiences. Additionally, consideration must be given to the OSCA's use in evaluating students, with formative assessments used to enhance nursing students' clinical and cognitive performance and summative evaluations used to measure these same student attributes.
- Agardh, C.D. (1987). An objective structured clinical examination in the assessment of clinical skills after the introductory course to clinical medicine. In I.R. Hart, & R.M. Harden (Eds.), Further Developments in Assessing Clinical Competence (pp. 648-651). Montreal: Can-Heal Publications.
- Barrows, H.S., & Abrahamson, S. (1964). The programmed patient: A technique for appraising student performance in clinical neurology. Journal of Medical Education, 39, 802-805.
- Barrows, H.S., Williams, R.G., & Moy, R.H. (1987). A comprehensive performance based assessment of fourth-year students' clinical skills. Journal of Medical Education, 62, 805-809.
- Cook, T.D., & Campbell, D.T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
- Ellis, H.C. (1965). The transfer of learning. New York: Macmillan.
- Frejlach, G., & Corcoran, S. (1971). Measuring clinical performance. Nursing Outlook, 19, 270-271.
- Harden, R.M. (1987). The objective structured clinical examination (OSCE). In I.R. Hart, & R.M. Harden (Eds.), Further Developments in Assessing Clinical Competence (pp. 99-104). Montreal: Can-Heal Publications.
- Harden, R.M., & Gleeson, F.A. (1979). Assessment of clinical competence using an objective structured clinical examination (OSCE). Medical Education, 13, 41-54.
- Harden, R.M., Stevenson, M., Downie, W.W., & Wilson, G.M. (1975). Assessment of clinical competence using objective structured examination. British Medical Journal, 1, 447-451.
- Hoole, A.J., Kowolowitz, V., McGaghie, W.C., Sloane, P.D., & Colindres, R.E. (1987). Using the objective structured clinical examination at the University of North Carolina Medical School. North Carolina Medical Journal, 48, 463-467.
- Kolb, S.E., & Shugart, E.B. (1984). Evaluation: Is simulation the answer? Journal of Nursing Education, 23, 84-86.
- Lavelle, S.M., & Harden, R.M. (1987). Sex differences and stress in a problem-solving oriented OSCE. In I.R. Hart, & R.M. Harden (Eds.), Further Developments in Assessing Clinical Competence (pp. 524-527). Montreal: Can-Heal Publications.
- Lazarus, J., & Kent, A.P. (1983). Student attitudes towards the objective structured clinical examination (OSCE) and conventional methods of assessment. South African Medical Journal, 64, 390-394.
- McDonald, G.F. (1987). The simulated clinical laboratory. Nursing Outlook, 35, 37-39.
- McDowell, B.J., Nardini, D.L., Negley, S.A., & White, J.E. (1984). Evaluating clinical performance using simulated patients. Journal of Nursing Education, 23, 37-39.
- Petrusa, E.R., Blackwell, T.A., Rogers, L.P., Saydjari, C., Parcel, S., & Guckian, J.C. (1987). An objective measure of clinical performance. The American Journal of Medicine, 83, 34-41.
- Roberts, J., & Norman, G.R. (1989). Reliability and learning from the OSCE. Proceedings of the Twenty-eighth Annual Conference on Research in Medical Education, 121-125.
- Ross, J.R., Syal, S., Hutcheon, M.A., & Cohen, R. (1987). Second-year students' score improvement during an objective structured clinical examination. Journal of Medical Education, 62, 857-858.
- Ross, M., Carroll, G., Knight, J., Chamberlain, M., Fothergill-Bourbonnaise, F., & Linton, J. (1988). Using the OSCE to measure clinical skills performance in nursing. Journal of Advanced Nursing, 13, 45-56.
- Ross, N. (1988). The use of an extended simulation in ward management training; I: Rationale for development and design criteria. Nurse Education, 8, 4-8.
- Sherman, J.E., Miller, A.G., Farrand, L.L., & Holzemer, W.L. (1979). A simulated patient encounter for the family nurse practitioner. Journal of Nursing Education, 18, 5-15.
- Soeter, D., Scherpblier, A.J.J.A., & van Lunsen, H.W. (1987). Assessing students' performances or ... examination peculiarities. In I.R. Hart, & R.M. Harden (Eds.), Further Developments in Assessing Clinical Competence (pp. 517-521). Montreal: Can-Heal Publications.
- Stillman, P.L., & Gillers, M.A. (1986). Clinical performance evaluation in medicine and law. In R.A. Beck (Ed.), Performance Assessment: Methods and Applications. Baltimore: Johns Hopkins University Press.
- Stillman, P.L., Regan, M.S., & Swanson, D.B. (1987). A diagnostic fourth-year performance assessment. Archives of Internal Medicine, 147, 1981-1985.
- Swanson, D.B., & Norcini, J.J. (1989). Factors influencing reproducibility of tests using standardized patients. Teaching and Learning in Medicine, 1, 158-166.
- Tinning, F.C. (1975). Simulation in medical education. East Lansing, MI: Michigan State University Press.
- van Niekert, J.G.P., & Lombard, S.A. (1982). The OSCE experiment at Medunsa. Curationis, 5, 44-48.
- Wilbur, J., Miller, A., & Talashek, M. (1991, April). The use of the standardized patient in evaluating NP student clinical performance [Summary]. Poster session presented at the meeting of the National Organization of Nurse Practitioner Faculties, San Diego, California.