Dr. Simonelli is Assistant Chair and Assistant Clinical Professor, and Ms. Paskausky is an MS/PhD student and Research Fellow, William F. Connell School of Nursing, Boston College, Chestnut Hill, Massachusetts.
The authors have disclosed no potential conflicts of interest, financial or otherwise.
The authors thank Boston College for supporting this research. They also thank Amy Smith, DNP, RN, CNM, and Jeanie Foley, MS, for sharing their expertise, Dr. Ellen Mahoney for her support and expertise in the preparation of this manuscript, and the Boston College nursing students for participating in the simulation experiences for this project.
Address correspondence to Mary Colleen Simonelli, PhD, RN, Assistant Chair and Assistant Clinical Professor, William F. Connell School of Nursing, Boston College, 140 Commonwealth Avenue, Chestnut Hill, MA 02467; e-mail: firstname.lastname@example.org.
High-fidelity simulation in nursing education has increased dramatically in the past 10 years. This increase has been attributed to the nursing faculty shortage, the need to increase student enrollment, the lower cost of simulation equipment, and the acceptance of simulation as a useful tool to enhance clinical practice (Feingold, Calaluce, & Kallen, 2004; Seropian, Brown, Gavilanes, & Driggers, 2004). Simulation in nursing education has become ubiquitous and in many states is substituted for traditional clinical experience (Nehring, 2008). Although research regarding student perceptions and self-efficacy abounds, little research has addressed the effects of simulation in education on clinical competency and knowledge acquisition. The value of simulation, both in educational and financial terms, and the theoretical foundation for its use remain largely unaddressed (Landeen & Jeffries, 2008; Schiavenato, 2009). The purpose of this study was to examine the effects of simulation on student performance in an undergraduate childbearing clinical course and to compare knowledge and skill development of nursing students exposed to simulation as part of their curriculum with those whose curriculum did not include simulation.
The results of previous studies exploring the relationship between skills development and simulation have been mixed, and the conclusions reached by researchers have been varied. Blum, Borglund, and Parcells (2010) found no significant difference in the improvement of clinical competency measurements between a group of entry-level health assessment students receiving standard teaching and a group of entry-level health assessment students receiving additional simulation training, thus raising concerns as to whether the costly simulation equipment was effective or appropriate for entry-level nursing courses. When evaluating competency in motivational interviewing technique, researchers found no difference in scores on an objective instrument, the Behavior Change Counseling Index, on the basis of whether health care professionals used a simulated patient or another training method to learn the technique (Lane, Hood, & Rollnick, 2008).
Still, results from other studies suggest a positive relationship between simulation and clinical competence. A pilot study (Radhakrishnan, Roche, & Cunningham, 2007) of second-degree bachelor of nursing students (N = 12) found significant improvements in the use of correct patient identifiers and in vital sign assessment among those who had simulation experiences in addition to the usual clinical experiences. Although the small sample greatly limits the generalizability of the study, the results suggest a research pathway to explore. In addition, Shepherd, Kelly, Skene, and White (2007) noted a significant improvement in clinical assessment ability in a group provided with simulation training, whereas no improvement was evident in two other groups provided with either a self-directed learning package or a self-directed learning package plus two scenario-focused PowerPoint® workshops. All groups were tested postintervention on a low-fidelity manikin; however, prior exposure to the simulation manikin was not considered as a possible intervening variable in the group that improved significantly. Moreover, an experimental study used an objective structured clinical examination to measure clinical competency and found that the group receiving intermediate-fidelity simulation experiences in addition to the standard curriculum scored higher than the group receiving the standard curriculum alone (Alinier, Hunt, Gordon, & Harwood, 2006).
Evidence also exists that simulation improves individuals’ performance in clinical competency measures when presimulation and postsimulation exposure scores are compared. Results from a study in nursing crisis management indicated significantly higher scores on clinical performance measures in a group using simulation combined with a problem-based discussion than in a group using problem-based discussion alone (Liaw et al., 2010).
Many studies suggest that the specific and immediate feedback available through simulation may contribute to improvement in clinical competency (Domuracki, Moule, Owen, Kostandoff, & Plummer, 2009; Ford et al., 2010; Latif et al., 2009; Wilfong, Falsetti, McKinnon, Daniel, & Wan, 2011). In CPR training among nursing and medical students, the recommended amount of cricoid pressure was applied more often by a group who practiced with a cricoid pressure simulator providing force feedback than by a group without force feedback (Domuracki et al., 2009). A randomized, masked study found that simulation-based training in addition to didactic instruction produced better performance among certified registered nurse anesthetist students and medical residents learning aseptic ultrasound-guided central venous catheter insertion (Latif et al., 2009). Further, some evidence suggests that simulation can translate directly into improved clinical performance. A hospital-based study comparing medication errors among practicing nurses (Ford et al., 2010) indicated that nurses given simulation training made fewer errors than those given didactic training. Another hospital-based study (Wilfong et al., 2011) reported that simulation reduced the number of intravenous stick attempts and complications compared with the traditional “watch one, do one, teach one” approach.
Other studies have focused on knowledge acquisition. Hoffmann, O’Donnell, and Kim (2007) found improvement in basic critical care nursing knowledge in upper-level nursing students when simulation was used. However, the design was a pretest and posttest repeated measure, making it unclear whether the improvement was due to maturation, habituation to the testing instrument, traditional clinical experiences prior to the simulation experience, or a combination thereof (Hoffmann et al., 2007). Another study (Elfrink, Kirkpatrick, Nininger, & Schubert, 2010) used NCLEX®-style pretest and posttest questions to objectively assess knowledge acquisition before and after simulation experiences and found significant improvement in the knowledge of students. Although the study did not compare performance improvement with a group receiving comparable exercises in clinical thinking, it raised the question of whether the effect noted was simply from time spent immersed in clinical thinking.
Some evidence suggests that simulation can be equivalent to traditional clinical experiences for knowledge acquisition in undergraduate nursing students (Schlairet & Pollock, 2010). In this 2×2 crossover design, in which the intervention was simulation or traditional clinical experiences, groups exposed to traditional clinical experiences first and simulation second showed no statistically significant difference in performance on NCLEX-style questions compared with groups exposed to simulation first and traditional clinical experiences second (Schlairet & Pollock, 2010).
Graduate nurses are entering an increasingly complex working environment and must be prepared to care for critically ill patients. However, students are often relegated to observer roles in high-risk patient care situations. It is also widely accepted that we learn more from our mistakes than from our successes; in nursing education, of course, allowing students to make mistakes in clinical situations is not an option. Patient safety is paramount; consequently, human simulation can be a valuable teaching modality because it presents real-life situations and evokes actions and reactions from students without any real untoward effects on patients (Robertson, 2006). This research examined whether simulated experiences enhance knowledge and skill acquisition, whether those gains transfer to actual clinical performance, and whether simulated experiences serve best as adjuncts to traditional learning experiences or can replace some proportion of them with equal or better performance results.
Specific Aims and Study Objectives
The objectives of this study were:
- To evaluate the knowledge acquisition of students enrolled in a childbearing course who were exposed to simulation by comparing scores on presimulation and postsimulation tests.
- To compare the skill acquisition of students previously enrolled in a childbearing course who were not exposed to simulation with that of students for whom simulation had been incorporated.
- To compare the knowledge acquisition of students previously enrolled in a childbearing course who were not exposed to simulation with that of students for whom simulation had been incorporated.
Materials, Methods, and Analysis
This project was conducted at a 4-year private university in the northeastern United States. Human subject approval for this project was granted by the university institutional review board. All nursing students enrolled in the undergraduate childbearing clinical course from spring 2008 through spring 2009 constituted the nonsimulation participants (control group), and all students enrolled in the course from fall 2009 through fall 2010 constituted the simulation participants (experimental group). Nine men and 272 women were enrolled in this course during the study period, and the participation rate in this study was 100%. Included were 138 nonsimulation participants, of whom 133 were female, and 143 simulation participants, of whom 139 were female. Comparison of the students' grade point averages for nursing courses preceding the semester in which they enrolled in the childbearing course revealed no statistical difference between the control and experimental groups. All pretest and posttest grades, final examination grades, clinical performance grades, and final course grades were deidentified by the principal investigator (M.C.S.) prior to any use of the data for research analysis.
In the experimental group, students had two simulation experiences that replaced two of their clinical days in the hospitals. This included two 2-hour didactic sessions, held prior to the simulation experiences, in which students were oriented to the high-fidelity simulators, nursing care was discussed, and computerized “micro-sim” clinical scenarios were reviewed. The first simulation focused on normal obstetric nursing care, and the second on care of a high-risk obstetrical client. A 20-question NCLEX-style pretest was administered prior to each didactic session, and the same 20-question NCLEX-style examination was administered as a posttest following each simulation experience. For the simulation experience, six students participated in each session, during which three scenarios were run: one focused on care of the antepartum patient, one on the labor patient, and one on the postpartum patient. Each scenario lasted 15 to 30 minutes, continuing until the faculty judged that the objectives had been met. Students were assigned roles as a confederate (support person), an off-going nurse with information to share with the health care team, a nurse providing direct patient care, or an observer. The scenarios were followed by 45-minute debriefing sessions in which all students and faculty discussed the events from their perspectives as patient, support person, provider, or objective observer.
As is standard practice at the university, at the conclusion of each semester an evaluation of the students’ clinical performance was conducted by the clinical faculty providing supervision to groups of six to eight students. Grades for clinical performance were derived from a Likert scale evaluation tool assessing the students’ professionalism, communication skills, and clinical judgment in the care of the childbearing family. Clinical performance grades were used to measure clinical knowledge and skill acquisition. In addition, a written NCLEX-style final examination was administered to all students at the conclusion of the course. Historical data were available, including clinical performance grades and clinical final examination grades for students not exposed to simulation, and these were compared with students in the experimental group.
Using deidentified data about student performance in each group (nonsimulation control group and simulation experimental group), statistical analysis was performed with PASW Statistics version 18.0 software. The first objective of this study was to assess knowledge acquisition of the students exposed to simulation, measured by comparing pretest and posttest scores for each of the simulation experiences. Using a paired-sample t test, simulation was found to improve performance on both NCLEX-style tests: t = 18.754, df = 142 (first experience), and t = 4.809, df = 142 (second experience); p < 0.001 for both.
The second objective of this study was to compare skill acquisition in the nonsimulation and simulation groups. Skill acquisition was operationalized as the clinical performance grades given by clinical faculty. An independent-means t test found the difference between the clinical performance grades of the nonsimulation and simulation groups statistically significant, with the simulation group performing higher (mean grade, 91.67) than the nonsimulation group (mean grade, 89.75) (t = 4.504, df = 279; p < 0.001).
The third objective of this research was to compare knowledge acquisition in the nonsimulation and simulation groups, measured by final examination scores and final course grades; the final course grade included clinical performance grades, the final examination, and writing exercises on related material. An independent means t test found the differences in both final examination scores and final course grades between the nonsimulation and simulation groups statistically significant, with the simulation group performing higher, with a mean final examination score of 79.13 (t = 4.341, df = 279, p < 0.001) and a mean grade of 88.33 (t = 6.872, df = 279, p < 0.001), compared with the nonsimulation group, with a mean final examination score of 75.59 and a mean grade of 85.08.
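The independent-means comparisons above use the pooled-variance two-sample t statistic with df = n₁ + n₂ − 2, which matches the reported df of 279 for groups of 138 and 143. The sketch below illustrates the formula with hypothetical grades (not the study data), again using only the Python standard library:

```python
import math
from statistics import mean, variance

def independent_t(group1, group2):
    """Pooled-variance two-sample t statistic.

    Returns (t, df) with df = n1 + n2 - 2, the form that yields
    df = 138 + 143 - 2 = 279 for the study's two groups.
    """
    n1, n2 = len(group1), len(group2)
    # Pooled sample variance weights each group's variance by its df
    sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    t = (mean(group1) - mean(group2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical clinical performance grades (illustration only, not study data)
sim = [92, 90, 93, 91, 94, 90]
nonsim = [89, 88, 91, 90, 87]
t, df = independent_t(sim, nonsim)
print(f"t = {t:.3f}, df = {df}")  # prints: t = 2.735, df = 9
```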
The goal of this study was to compare the knowledge and skill development of students exposed to simulation with those whose curriculum did not expose them to simulation. On the basis of the outcome variables of clinical performance grades, final examination grades, and final grades, we found simulation to have a positive effect on both knowledge and skill development. The results of this study suggest that simulated experiences replacing a limited number of traditional clinical days, coupled with didactic teaching methods, improve clinical competency skills and knowledge development. These findings support the use of simulation as a valid teaching method that adds value in terms of skills and knowledge acquisition for undergraduate students.
Although a limitation of this study is the convenience sampling, the participation of the entire population of enrolled students (n = 138 and n = 143), as well as the similarity of the control and experimental groups in academic achievement prior to this course, suggests that the statistical analysis is robust to variations in the samples that may have been present due to the lack of random selection or assignment. Although it is not possible to generalize the findings of this study to all simulated experiences in nursing education, the data suggest that similar results could be expected in upper-level undergraduate childbearing courses in the United States, and the literature suggests that other courses may well show similar results from simulated experiences. Other specialties should be studied and compared to determine whether the effects of simulation on student knowledge and skill acquisition differ across types of nursing education. Further research should determine whether improvements in student skill and knowledge development transfer to courses other than those in which simulation is included. Determining appropriate doses of simulation, and whether gains plateau after a given amount of exposure, should be priorities for future study to help nursing programs weigh the cost-effectiveness of simulation. The simulation manikins used in this study were high fidelity, so comparing the effects of different levels of fidelity would help administrators optimize their curricula.
Simulation may have a more powerful effect in the childbearing specialty because nursing student labor and delivery experiences can be greatly limited by geographic, logistic, or legal factors, and simulation in childbearing courses offers students experiences not frequently obtained in clinical sites. During simulation, students are able to practice the independent nurse role in both normal and high-risk birth scenarios. The clinical placements for students in this study were in the greater Boston area, where many state-of-the-art, high-acuity hospitals are available for nursing student labor and delivery experiences, yet significant gains were still made through exposure to simulation. Therefore, in other areas of the country where opportunities are fewer, the effects of the simulation experience may be even greater.
The study findings offer empirical data about gains made through the use of simulated experiences in essential skills and knowledge for safe childbearing nursing practice, which complements the rich literature available about student perspectives on simulation and its utility in a nursing curriculum. Further research must examine the relationship between simulation and knowledge and skill acquisition across the nursing curriculum. Doing so would help construct a model of simulation in nursing education that encapsulates the value and cost in terms of time, financial resources, personnel, and improvements in objective measures such as knowledge, clinical skill, and error rates. This information would help nurse educators and administrators tailor nursing education to produce the most competent and safest practitioners possible with the resources available.
- Alinier, G., Hunt, B., Gordon, R. & Harwood, C. (2006). Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education. Journal of Advanced Nursing, 54, 359–369. doi:10.1111/j.1365-2648.2006.03810.x [CrossRef]
- Blum, C., Borglund, S. & Parcells, D. (2010). High-fidelity nursing simulation: Impact on student self-confidence and clinical competence. International Journal of Nursing Education Scholarship, 7(1), Article 18. doi:10.2202/1548-923X.2035 [CrossRef]
- Domuracki, K.J., Moule, C.J., Owen, H., Kostandoff, G. & Plummer, J.L. (2009). Learning on a simulator does transfer to clinical practice. Resuscitation, 80, 346–349. doi:10.1016/j.resuscitation.2008.10.036 [CrossRef]
- Elfrink, V.L., Kirkpatrick, B., Nininger, J. & Schubert, C. (2010). Using learning outcomes to inform teaching practices in human patient simulation. Nursing Education Perspectives, 31, 97–100.
- Feingold, C., Calaluce, M. & Kallen, M. (2004). Computerized patient model and simulated clinical experiences: Evaluation with baccalaureate nursing students. Journal of Nursing Education, 43, 156–163.
- Ford, D.G., Seybert, A.L., Smithburger, P.L., Kobulinsky, L.R., Samosky, J.T. & Kane-Gill, S.L. (2010). Impact of simulation-based learning on medication error rates in critically ill patients. Intensive Care Medicine, 36, 1526–1531. doi:10.1007/s00134-010-1860-2 [CrossRef]
- Hoffmann, R.L., O’Donnell, J.M. & Kim, Y. (2007). The effects of human patient simulators on basic knowledge in critical care nursing with undergraduate senior baccalaureate nursing students. Simulation in Healthcare, 2, 110–114. doi:10.1097/SIH.0b013e318033abb5 [CrossRef]
- Landeen, J. & Jeffries, P.R. (2008). Simulation. Journal of Nursing Education, 47, 487–488. doi:10.3928/01484834-20081101-03 [CrossRef]
- Lane, C., Hood, K. & Rollnick, S. (2008). Teaching motivational interviewing: Using role play is as effective as using simulated patients. Medical Education, 42, 637–644. doi:10.1111/j.1365-2923.2007.02990.x [CrossRef]
- Latif, R., Memon, S., Bautista, A., Smith, E., Ziegler, C. & Wadhwa, A. (2009). A randomized, blinded study to assess the effectiveness of simulation-based training for U/S-guided central venous access placement using aseptic technique. Critical Care Medicine, 37(12), A1.
- Liaw, S., Chen, F., Klainin, P., Brammer, J., O’Brien, A. & Samarasekera, D. (2010). Developing clinical competency in crisis event management: An integrated simulation problem-based learning activity. Advances in Health Sciences Education, 15, 403–413. doi:10.1007/s10459-009-9208-9 [CrossRef]
- Nehring, W. (2008). U.S. Boards of Nursing and the use of high-fidelity patient simulators in nursing education. Journal of Professional Nursing, 24, 109–117. doi:10.1016/j.profnurs.2007.06.027 [CrossRef]
- Radhakrishnan, K., Roche, J. & Cunningham, H. (2007). Measuring clinical practice parameters with human patient simulation: A pilot study. International Journal of Nursing Education Scholarship, 4(1), 1–11. doi:10.2202/1548-923X.1307 [CrossRef]
- Robertson, B. (2006). An obstetric simulation experience in an undergraduate nursing curriculum. Nurse Educator, 31, 74–78. doi:10.1097/00006223-200603000-00009 [CrossRef]
- Schiavenato, M. (2009). Reevaluating simulation in nursing education: Beyond the human patient simulator. Journal of Nursing Education, 48, 388–394. doi:10.3928/01484834-20090615-06 [CrossRef]
- Schlairet, M.C. & Pollock, J.W. (2010). Equivalence testing of traditional and simulated clinical experiences: Undergraduate nursing students’ knowledge acquisition. Journal of Nursing Education, 49, 43–47. doi:10.3928/01484834-20090918-08 [CrossRef]
- Seropian, M.A., Brown, K., Gavilanes, J.S. & Driggers, B. (2004). An approach to simulation program development. Journal of Nursing Education, 43, 170–174.
- Shepherd, I.A., Kelly, C.M., Skene, F.M. & White, K.T. (2007). Enhancing graduate nurses’ health assessment knowledge and skills using low-fidelity adult human simulation. Simulation in Healthcare, 2, 16–24. doi:10.1097/SIH.0b013e318030c8dd [CrossRef]
- Wilfong, D.N., Falsetti, D.J., McKinnon, J.L., Daniel, L.H. & Wan, Q. (2011). The effects of virtual intravenous and patient simulator training compared to the traditional approach of teaching nurses. Journal of Infusion Nursing, 34, 55–62. doi:10.1097/NAN.0b013e31820219e2 [CrossRef]