Journal of Nursing Education

Research Briefs 

Equivalence Testing of Traditional and Simulated Clinical Experiences: Undergraduate Nursing Students’ Knowledge Acquisition

Maura C. Schlairet, EdD, RN; Jane W. Pollock, MSN, RN

Abstract

Although simulated clinical experience is being used increasingly in nursing education, vital evidence related to knowledge acquisition associated with simulated clinical experience does not exist. This intervention study used a 2×2 crossover design and equivalence testing to explore the effects of simulated clinical experiences on undergraduate students’ (n = 74) knowledge acquisition in a fundamentals of nursing course. Following random assignment, students participated in laboratory-based simulated clinical experiences with high-fidelity human patient simulators and traditional clinical experiences and completed knowledge pretests and posttests. Analysis identified significant knowledge gain associated with both simulated and traditional clinical experiences, with the groups’ knowledge scores being statistically equivalent. A priori equivalence bounds around the difference between the groups were set at ±5 points. Simulated clinical experience was found to be as effective as traditional clinical experience in promoting students’ knowledge acquisition.

Dr. Schlairet is Assistant Professor, and Ms. Pollock is Laboratory Coordinator, Valdosta State University, College of Nursing, Valdosta, Georgia.

Address correspondence to Maura C. Schlairet, EdD, RN, Assistant Professor, Valdosta State University, College of Nursing, 1300 N. Patterson St., Valdosta, GA 31698-0130; e-mail: mcschlai@valdosta.edu.

Received: January 10, 2008
Accepted: October 06, 2008
Posted Online: January 04, 2010

To provide students with ample exposure to a variety of clinical experiences during undergraduate education, nursing programs across the United States are incorporating simulated clinical experiences to promote student learning. Factors driving the adoption of this technology in nursing education include shortages of clinical sites and preceptors, changes in patient acuity levels and admission patterns, shortages of nurse faculty, and patient safety considerations (Bradley, 2006; Feingold, Calaluce, & Kallen, 2004). The literature describes a variety of positive outcomes associated with the use of simulation in clinical nursing education (Lasater, 2007; Schoening, Sittner, & Todd, 2006), and the National Council of State Boards of Nursing and other leadership organizations are recognizing simulation as an essential element of nursing education (Nehring, 2008).

Unfortunately, an absence of research specifically addressing student knowledge outcomes related to the use of simulated clinical experience has been recognized (Issenberg, McGaghie, Petrusa, Lee Gordon, & Scalese, 2005; Jeffries & Rizzolo, 2006; Nehring & Lashley, 2004; Seropian, Brown, Gavilanes, & Driggers, 2004). Studies contrasting knowledge gains related to simulated clinical experiences with knowledge outcomes associated with traditional clinical experiences are scarce, and research focusing on learning through clinical simulation has been requested (Radhakrishnan, Roche, & Cunningham, 2007; Schoening et al., 2006).

Literature Review

Does simulation as an educational technique work? A recent systematic review (McGaghie, Issenberg, Petrusa, & Scalese, 2006) showed a strong positive correlation between simulator practice hours and standardized outcomes in medical education. Unfortunately, the effect of clinical simulation on undergraduate nursing students’ knowledge acquisition has yet to be validated. Although studies on a variety of student and faculty perceptions associated with the use of clinical simulation are accumulating, vital evidence related to knowledge acquisition through clinical simulation simply does not exist.

Thus, the current study tested the hypothesis that clinical simulation in an undergraduate fundamentals of nursing course teaches basic nursing care concepts as well as traditional clinical experiences do. In addition, the authors tested the hypothesis that simulated clinical experiences followed by traditional clinical experiences, as an intervention sequence, teach basic nursing care concepts as well as the reverse sequence does. Basic nursing care concepts, as operationalized in this study, were drawn from the course objectives and topical outline of a fundamentals of nursing course, which in turn were developed according to the American Association of Colleges of Nursing’s (AACN, 1998) The Essentials of Baccalaureate Education for Professional Nursing Practice.

Method

This intervention study explored student knowledge acquisition associated with simulated clinical experiences using a 2×2 crossover design with two interventions (i.e., simulated and traditional clinical experiences) and two intervention periods (i.e., 2-week exposure to each intervention). Each participant acted as his or her own control. The dependent variable was students’ knowledge test scores. Independent variables were clinical experience intervention (i.e., simulated or traditional) and time of testing (i.e., pretest, posttest 1, and posttest 2). Baccalaureate students enrolled in a nursing fundamentals course during two consecutive semesters were invited to participate in the study. Students were randomly assigned to an intervention sequence (simulated-traditional or traditional-simulated). Approval for this study was granted by the university’s institutional review board.
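To make the crossover design concrete, the following is a minimal sketch of random assignment to the two intervention sequences. The roster, seed, and split are hypothetical illustrations and do not reproduce the authors’ actual assignment procedure, which is not described beyond “random assignment.”

```python
import random

# Hypothetical student roster (n = 74); names are placeholders.
students = [f"student_{i:02d}" for i in range(1, 75)]

random.seed(2008)          # fixed seed so the illustration is reproducible
random.shuffle(students)

# Split the shuffled roster into the two crossover sequences; each
# student receives both interventions, in the assigned order.
half = len(students) // 2
sequences = {
    "simulated-traditional": students[:half],
    "traditional-simulated": students[half:],
}

for sequence, group in sequences.items():
    print(sequence, len(group))
```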

The simulated clinical experience intervention used high-fidelity human patient simulators in a skills laboratory setting and was developed according to educational best practices, as described in the literature (Chickering & Gamson, 1987; Jeffries & Rizzolo, 2006; Seropian et al., 2004). These best practices included selection of simulated clinical experience scenarios that reflected clinical diversity and increasing complexity, faculty cues and feedback to promote refinement of theoretical knowledge through reflection on practice in simulated clinical experiences, and other pedagogic techniques to promote nursing knowledge acquisition. Traditional clinical experiences occurred in a skilled long-term care setting and consisted of traditional assignment of students to individual patients for the provision of holistic nursing care.

Equivalence testing was planned, with a priori equivalence bounds around the difference between the groups set at ±5 points on a 100-point scale. Equivalence testing is useful when the goal is to establish that two methods or treatments are equal to each other (Mecklin, 2003). This kind of analysis was deemed appropriate given the dearth of evidence on knowledge outcomes related to simulated clinical experiences. Alpha was set at 0.05 (two-sided). A power analysis for the proposed statistics, with an anticipated medium effect size, indicated a minimum required sample size of 33 per group (Cohen, 1992).
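For readers who wish to reproduce this style of analysis, the following is a minimal sketch of the two one-sided tests (TOST) form of equivalence testing against ±5-point bounds for two independent groups. The score arrays are hypothetical stand-ins, not the study’s data.

```python
import numpy as np
from scipy import stats

# Hypothetical posttest scores (0-100 scale) for two independent groups.
simulated = np.array([62, 58, 65, 70, 61, 59, 66, 63, 60, 64], dtype=float)
traditional = np.array([61, 60, 63, 68, 62, 57, 65, 64, 59, 66], dtype=float)

low, upp = -5.0, 5.0  # a priori equivalence bounds (±5 points)

n1, n2 = len(simulated), len(traditional)
diff = simulated.mean() - traditional.mean()

# Pooled standard error for an independent-samples comparison
sp2 = ((n1 - 1) * simulated.var(ddof=1)
       + (n2 - 1) * traditional.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

# Two one-sided tests: H0a: diff <= low; H0b: diff >= upp
p_lower = 1 - stats.t.cdf((diff - low) / se, df)
p_upper = stats.t.cdf((diff - upp) / se, df)
p_tost = max(p_lower, p_upper)

print(f"difference = {diff:.2f}, TOST p = {p_tost:.3f}")
```

If the larger of the two one-sided p values falls below .05, the null hypothesis of nonequivalence is rejected and the groups are declared equivalent within the ±5-point bounds.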

To capture quantitative data, a knowledge test scored on a 100-point scale was created, consisting of 25 multiple-choice questions randomly chosen from appropriate sections of an NCLEX-RN® study book (Silvestri, 2005). Questions had similar difficulty levels and represented content that was as likely to be covered in the simulated clinical experience as in the traditional clinical experience. Internal consistency reliability coefficients (KR-20) were within an acceptable range across all administrations of the knowledge test.
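The KR-20 coefficient mentioned above can be computed directly from a matrix of dichotomously scored items. The following is a minimal sketch with a hypothetical response matrix; the study’s item-level data are not reproduced here.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item scores.

    responses: examinees x items matrix of 0/1 values.
    """
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Hypothetical 0/1 scores for 6 examinees on 5 items (not study data)
scores = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 1, 1, 1],
])
print(f"KR-20 = {kr20(scores):.2f}")
```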

To begin, students participated in an orientation to the study and completed the knowledge pretest. During the primary intervention (weeks 1 and 2), students participated in traditional clinical experiences (i.e., in the nursing home) or simulated clinical experiences (i.e., in the simulation laboratory) based on random assignment. After each experience, students in the simulated clinical experiences participated in faculty-guided debriefings, whereas students in traditional clinical experiences participated in traditional postconferencing. All students then completed knowledge posttest 1 and subsequently crossed over into the opposite intervention arm. Following the second intervention, faculty debriefed the entire group and students completed posttest 2, reflective journals, and National League for Nursing and Laerdal (2007) instruments.

To promote consistency across interventions, simulated and traditional clinical experiences focused on course objectives and a fundamental nursing skill set. Students were exposed to the same faculty members throughout the study and the time on task was comparable for both groups. In addition, student groups used identical patient care scenarios during simulated clinical experiences, cared for nursing home clients with similar levels of acuity during traditional clinical experiences, and completed similar kinds of clinical paperwork.

Results

All 74 students from the two consecutive academic semesters participated in data collection. Participants ranged in age from 18 to 44 years; most were women (86%), Caucasian (68%), and classified as traditional students (i.e., younger than 25 years with no prior education beyond high school) (88%). After the data were screened, three participants were excluded from the analysis due to missing data or extreme scores.

Among the remaining students, chi-square analyses of demographic variables revealed no significant differences between semester groups or intervention groups (i.e., simulated-traditional and traditional-simulated). In addition, t tests showed no statistically significant differences in knowledge pretest scores, course midterm grades, or course final grades by semester or intervention group.
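As an illustration of such a baseline comparison, the following is a minimal sketch of a chi-square test on an intervention group-by-gender contingency table; the counts are invented for illustration and do not reproduce the study’s demographic tables.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = intervention group, columns = gender.
table = np.array([
    [31, 5],   # simulated-traditional: women, men
    [30, 5],   # traditional-simulated: women, men
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```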

Paired t tests revealed significant knowledge score differences from pretest (mean = 60.05, SD = 9.30) to posttest 1 (mean = 62.68, SD = 8.54; t = −2.48, p = 0.015, df = 70); from posttest 1 (mean = 62.68, SD = 8.54) to posttest 2 (mean = 64.78, SD = 9.35; t = −2.24, p = 0.028, df = 70); and from pretest (mean = 60.11, SD = 9.32) to posttest 2 (mean = 64.61, SD = 9.39; t = −3.54, p = 0.001, df = 69). Significant knowledge gain was observed following both simulated and traditional clinical experiences, as primary interventions and as sequenced interventions, although effect sizes were small per Cohen’s (1988) criteria.
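These within-subject comparisons correspond to paired t tests. The following is a minimal sketch with hypothetical pretest and posttest arrays (not study data), including Cohen’s d for paired scores as an effect size.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical paired scores (0-100 scale) for the same students; the
# actual data are summarized only by the means and SDs reported above.
pretest = np.array([55, 60, 58, 64, 52, 61, 66, 59, 57, 63], dtype=float)
posttest1 = np.array([58, 61, 62, 66, 55, 63, 67, 60, 61, 64], dtype=float)

t_stat, p_value = ttest_rel(pretest, posttest1)

# Cohen's d for paired data: mean difference / SD of the differences
differences = posttest1 - pretest
d = differences.mean() / differences.std(ddof=1)

print(f"t({len(pretest) - 1}) = {t_stat:.2f}, p = {p_value:.3f}, d = {d:.2f}")
```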

The observed difference between simulated and traditional clinical experiences as a primary or single intervention on the groups’ posttest 1 knowledge scores was 0.49 (95% confidence interval [CI] = −3.58 to 4.56). Because the 95% CI for the difference fell entirely within the ±5-point equivalence bounds, the knowledge scores of the simulated and traditional clinical experience groups were determined to be statistically equivalent. Exploring the intervention sequences, the observed difference between the simulated-traditional group and the traditional-simulated group on posttest 2 knowledge scores was −0.33 (95% CI = −4.77 to 4.11). Therefore, the scores for the intervention sequences (simulated-traditional and traditional-simulated) were also determined to be statistically equivalent. Means for knowledge scores by intervention group at all three testing times were plotted (Figure).
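The CI-based decision rule used here can be stated compactly: equivalence is concluded when the entire 95% CI for the group difference lies within the ±5-point bounds. The following sketch applies that rule to the CIs reported above.

```python
def equivalent(ci_lower: float, ci_upper: float, bound: float = 5.0) -> bool:
    """Conclude equivalence when the whole CI lies inside [-bound, +bound]."""
    return -bound < ci_lower and ci_upper < bound

# Reported 95% CIs for the group differences (from the Results above)
print(equivalent(-3.58, 4.56))  # posttest 1: simulated vs. traditional -> True
print(equivalent(-4.77, 4.11))  # posttest 2: sequence comparison -> True
```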

Figure. Mean knowledge scores (100-point scale) by intervention group at each testing time (N = 71). Note. SE = simulated clinical experience group; TE = traditional clinical experience group. Students in the SE-TE sequence were initially exposed to the simulated clinical experience, whereas students in the TE-SE sequence experienced the traditional clinical setting as the initial intervention.

Discussion

Results from this sample suggest that simulated clinical experiences promoted undergraduate nursing students’ knowledge acquisition in a fundamentals of nursing course as effectively as traditional clinical experiences did. These findings are consistent with, and add to, a growing body of research that demonstrates positive learning outcomes associated with simulated clinical experiences (Alinier, Hunt, Gordon, & Harwood, 2006; Bradley, 2006; Foster, Sheriff, & Cheney, 2008; Jarzemsky & McGrath, 2008; McGaghie et al., 2006; Morton, 1997; Nehring, Ellis, & Lashley, 2001).

Sample demographics suggested a diverse sample that paralleled characteristics of undergraduate nursing students at the national level (AACN, 2006). This parallel with national data may support generalizing these findings to other undergraduate nursing students in similar educational settings that possess the resources necessary for simulation pedagogy.

Although the simulated-traditional group had slightly lower pretest knowledge scores, their knowledge scores after the simulated clinical experience (the primary intervention) showed a steeper positive slope than those of the group that received traditional clinical experience as the first intervention. The effect of intervention timing or sequencing on knowledge acquisition through clinical simulation has not been well studied; however, some have suggested that simulated clinical experience is best used to prepare learners for real patient contact (Henneman & Cunningham, 2005; Issenberg et al., 2005).

Often, studies exploring outcomes associated with simulation for health care professional education use superiority designs to compare this innovative pedagogy with more standard clinical learning experiences (Bogacki, Best, & Abbey, 2004). Rather than testing whether the simulated experience was better than the traditional experience for promoting knowledge acquisition in beginning nursing students, the current approach clearly identified knowledge gains related to simulated clinical experiences that were as robust as gains related to traditional clinical experiences. Although small effect sizes were observed, this is typical when conditions are equivalent (Mecklin, 2003).

Limitations

Several limitations must be considered when interpreting this work. Due to enrollment caps in the fundamentals course, the resultant sample size was modest. Although additional sampling was desired, significant changes to the undergraduate curriculum and the fundamentals of nursing course were anticipated in the following semester. Anticipating the methodological fidelity issues that curricular revision would create, the authors did not attempt to accrue data during an additional semester.

Another concern was the low knowledge scores at pretesting and posttesting associated with both interventions. Low scores may have resulted from the relatively short intervention phase (Kazdin, 2003) and from students’ lack of familiarity with both the simulated and traditional clinical environments. Because a single version of the knowledge test was used, practice effects or interaction effects must be considered; however, students were not given access to the test answer key or to individual test scores.

Implications for Research and Education

This study offers new insights on students’ knowledge acquisition related to the use of simulated clinical experiences not previously described in the literature. Although more research on knowledge gain related to simulated clinical experience should be conducted, these findings can serve as a beginning. Additional understanding will be possible as remaining data from this study are analyzed (i.e., National League for Nursing and Laerdal [2007] instruments and students’ reflective journals).

On the basis of these findings, educators might ask whether simulated clinical experiences should be used as an adjunct to traditional clinical teaching (Nehring, 2008) or as a replacement for those experiences (Bogacki et al., 2004; Johnson, Zerwic, & Theis, 1999). Others may explore whether the sequence or timing of simulation contributes to effective learning (Issenberg et al., 2005).

Should there be a focus of instructional hours on simulated, rather than traditional, clinical experiences? On the basis of the knowledge gain slope for the 2-week simulated clinical intervention, the authors can only speculate about the slope a 4-week simulated clinical intervention might produce. Extrapolating from the means plot (Figure), knowledge outcomes of a simulated-simulated sequence might surpass the gains achieved in either the simulated-traditional or the traditional-simulated format. Such a result would support focusing instructional hours on simulated clinical experiences, as opposed to traditional clinical experiences alone or some combination of the two, and would address the substantial difficulty of securing adequate traditional clinical experiences.

Alternatively, these findings may suggest ratios of simulated to traditional clinical experiences other than one-to-one. This notion fits with McGaghie et al.’s (2006) finding of a dose-response relationship between hours of clinical simulation and learning outcomes. Research exploring optimal ratios of simulated to traditional clinical experiences has been suggested (Nehring, 2008).

Conclusion

New formats for clinical learning, such as simulated clinical experiences, may ameliorate problems in student access to meaningful patient care experiences. Robust evidence of students’ knowledge acquisition associated with simulated clinical experiences augments existing findings of simulation benefits and will allow nurse educators to plan for and use simulated clinical experiences with confidence. This study, which found simulated clinical experiences to be as effective as traditional clinical experiences for knowledge acquisition and which supported the early placement of clinical simulation as an educational intervention, appears to be one of the first of its kind.

Use of simulation in education is gaining momentum across the United States and elsewhere (Wilford & Doyle, 2006), and clinical experiences for undergraduate students in the form of simulation represent a real opportunity for both learners and educators. New formats for clinical learning such as simulation can address a variety of problems that conspire to limit student access to traditional patient care settings and essential learning experiences. Additional systematic evaluation and documentation of knowledge acquisition related to simulated clinical experiences will be necessary before educators can optimally integrate simulated clinical experiences into the undergraduate nursing education learning environment.

References

  • Alinier, G., Hunt, B., Gordon, R. & Harwood, C. (2006). Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education. Journal of Advanced Nursing, 54, 359–369. doi:10.1111/j.1365-2648.2006.03810.x [CrossRef]
  • American Association of Colleges of Nursing. (1998). The essentials of baccalaureate education for professional nursing practice. Washington, DC: Author.
  • American Association of Colleges of Nursing. (2006). Advancing higher education in nursing. Retrieved June 25, 2007, from http://www.aacn.nche.edu/2006AnnualReport.pdf
  • Bogacki, R.E., Best, A. & Abbey, L.M. (2004). Equivalence study of a dental anatomy computer-assisted learning program. Journal of Dental Education, 68, 867–871.
  • Bradley, P. (2006). The history of simulation in medical education and possible future directions. Medical Education, 40, 254–262. doi:10.1111/j.1365-2929.2006.02394.x [CrossRef]
  • Chickering, A.W. & Gamson, Z.F. (1987). Seven principles for good practice in undergraduate education. Racine, WI: The Johnson Foundation.
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
  • Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159. doi:10.1037/0033-2909.112.1.155 [CrossRef]
  • Feingold, C.E., Calaluce, M. & Kallen, M.A. (2004). Computerized patient model and simulated clinical experiences: Evaluation with baccalaureate nursing students. Journal of Nursing Education, 43, 156–163.
  • Foster, J.G., Sheriff, S. & Cheney, S. (2008). Using nonfaculty registered nurses to facilitate high-fidelity human patient simulation activities. Nurse Educator, 33, 137–141. doi:10.1097/01.NNE.0000312186.20895.50 [CrossRef]
  • Henneman, E.A. & Cunningham, H. (2005). Using clinical simulation to teach patient safety in an acute/critical care nursing course. Nurse Educator, 30, 172–177. doi:10.1097/00006223-200507000-00010 [CrossRef]
  • Issenberg, S.B., McGaghie, W.C., Petrusa, E.R., Lee Gordon, D. & Scalese, R.J. (2005). Features and uses of high fidelity medical simulations that lead to effective learning: A BEME systematic review. Medical Teacher, 27, 10–28. doi:10.1080/01421590500046924 [CrossRef]
  • Jarzemsky, P.A. & McGrath, J. (2008). Look before you leap: Lessons learned when introducing clinical simulation. Nurse Educator, 33(2), 90–95. doi:10.1097/01.NNE.0000299513.78270.99 [CrossRef]
  • Jeffries, P.R. & Rizzolo, M.A. (2006). Summary report. Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: A national, multi-site, multi-method study. Retrieved January 10, 2007, from http://www.nln.org/research/LaerdalReport.pdf
  • Johnson, J.H., Zerwic, J.J. & Theis, S.L. (1999). Clinical simulation laboratory: An adjunct to clinical teaching. Nurse Educator, 24(5), 37–41. doi:10.1097/00006223-199909000-00016 [CrossRef]
  • Kazdin, A.E. (Ed.). (2003). Methodological issues and strategies in clinical research (3rd ed.). Washington, DC: American Psychological Association.
  • Lasater, K. (2007). High-fidelity simulation and the development of clinical judgment: Students’ experiences. Journal of Nursing Education, 46, 269–276.
  • McGaghie, W.C., Issenberg, S.B., Petrusa, E.R. & Scalese, R.J. (2006). Effect of practice on standardized learning outcomes in simulation-based medical education. Medical Education, 40, 792–797. doi:10.1111/j.1365-2929.2006.02528.x [CrossRef]
  • Mecklin, C.J. (2003). A comparison of equivalence testing in combination with hypothesis testing and effect sizes. Journal of Modern Applied Statistical Methods, 2, 329–340.
  • Morton, P.G. (1997). Using a critical care simulation laboratory to teach students. Critical Care Nurse, 17(6), 66–69.
  • National League for Nursing, & Laerdal. (2007). Descriptions of available instruments. Retrieved May 11, 2009, from http://www.nln.org/research/nln_laerdal/instruments.htm
  • Nehring, W.M. (2008). U.S. boards of nursing and the use of high-fidelity patient simulators in nursing education. Journal of Professional Nursing, 24, 109–117. doi:10.1016/j.profnurs.2007.06.027 [CrossRef]
  • Nehring, W.M., Ellis, W.E. & Lashley, F.R. (2001). Human patient simulators in nursing education: An overview. Simulation and Gaming, 32, 194–204. doi:10.1177/104687810103200207 [CrossRef]
  • Nehring, W.M. & Lashley, F.R. (2004). Current use and opinions regarding human patient simulators in nursing education: An international survey. Nursing Education Perspectives, 25, 244–248.
  • Radhakrishnan, K., Roche, J.P. & Cunningham, H. (2007). Measuring clinical practice parameters with human patient simulation: A pilot study. International Journal of Nursing Education Scholarship, 4(1). Retrieved February 27, 2007, from http://www.bepress.com/ijnes/vol4/iss1/art8 doi:10.2202/1548-923X.1307 [CrossRef]
  • Schoening, A.M., Sittner, B.J. & Todd, M.J. (2006). Simulated clinical experience: Nursing students’ perceptions and the educators’ role. Nurse Educator, 31, 253–258. doi:10.1097/00006223-200611000-00008 [CrossRef]
  • Seropian, M.A., Brown, K., Gavilanes, J.S. & Driggers, B. (2004). Simulation: Not just a manikin. Journal of Nursing Education, 43, 164–169.
  • Silvestri, L.A. (2005). Saunders comprehensive review for the NCLEX-RN examination (3rd ed.). St. Louis, MO: Saunders.
  • Wilford, A. & Doyle, T.J. (2006). Integrating simulation training into the nursing curriculum. British Journal of Nursing, 15, 926–930.

10.3928/01484834-20090918-08
