The use of simulation in its many forms has been an integral part of nursing education for decades, as evidenced by the use of manikins as skill trainers to teach basic psychomotor nursing skills (Gomez & Gomez, 1987). However, advances in computer technology over the past 10 years have resulted in the development of sophisticated human patient simulators (HPS), such as SimMan® and iStan®, that allow for immediate response to nursing interventions. This has led to the widespread acceptance and use of HPS in high-fidelity simulation laboratories by nursing education programs across the country (Jeffries, 2009; Nehring & Lashley, 2010). An example of this acceptance is that the California Board of Registered Nursing now allows up to 25% of student clinical learning to occur in simulation laboratories (California Board of Registered Nursing, 2011). Further, HPS use is expected only to increase as nursing programs struggle to find appropriate clinical sites for their students, to recruit faculty to teach their students, and, most importantly, to ensure quality of care and the safety of patients cared for by nursing students (Kohn, Corrigan, & Donaldson, 2000; Landeen & Jeffries, 2008; McCallum, 2007).
The benefits of using HPS in high-fidelity simulation experiences are many. High-fidelity simulations offer students an anatomically correct human substitute that can physiologically respond to nursing interventions (Seropian, Brown, Gavilanes, & Driggers, 2004b). Simulations allow students to observe the sequelae of the care they provide and decisions they make, both good and bad, without causing harm to an actual patient. In addition, high-fidelity simulations have the ability to present a patient’s progression from admission through discharge or death more quickly than is seen in a real-life situation, offering students a more complete picture of the nursing care involved for specific disease processes. Finally, high-fidelity patient simulation has the ability to standardize the types of patients and disease processes encountered by students, ensuring that they have similar experiences, something that cannot be guaranteed in the traditional clinical setting. For example, a course coordinator may lecture on the topic of congestive heart failure, followed by a congestive heart failure simulation in the laboratory, giving all students a hands-on opportunity to observe what they learned didactically.
However, simulation is not a panacea and does have some disadvantages. First, no matter how lifelike an HPS is, it is still not a real human being and has limitations, such as restricted communication with the student. Not being able to experience the full range of interaction with a patient makes the simulation less realistic and more ambiguous than an actual patient encounter. There is also the danger that instead of using high-fidelity simulation as a means to foster a student's clinical reasoning abilities, it will be used primarily for practicing basic psychomotor skill development (Seropian, Brown, Gavilanes, & Driggers, 2004a). Further, the costs of creating (approximately $876,485) and maintaining (approximately $361,245 per year) a high-fidelity simulation laboratory may be prohibitive to many nursing programs, given the budget constraints under which they operate in uncertain economic times (Gates, 2008; McIntosh, Macario, Flanagan, & Gaba, 2005).
Although the use of high-fidelity simulation offers many potential benefits to the nursing education programs that use them, the decision regarding the extent to which simulation is utilized by individual nursing programs is ultimately determined by the state board of nursing and the administrators at the university or college where the nursing program resides (Nehring, 2008). State boards of nursing have the regulatory authority to determine the maximum number of simulation hours that can be substituted for traditional clinical hours. Then, on the basis of these approved regulations, administrators at each university or college must decide to what extent simulation will be used in their nursing curriculum, taking into account such things as budget, clinical placement constraints, student needs, and faculty needs. Therefore, the important decisions that boards of nursing and college and university administrators have to make regarding simulation use in nursing programs must be supported by empirical evidence.
Unfortunately, there is little research evidence that demonstrates how well high-fidelity simulation assists students in acquiring and integrating knowledge, skills, and critical thinking into their knowledge base (Weaver, 2011). Instead, most research has focused on the confidence students gain after participating in a simulation scenario, student perceptions on whether the knowledge they gained in simulation was transferable to the clinical setting, or student perceptions regarding the realism of simulation (Abdo & Ravert, 2006; Bremner, Aduddell, Bennet, & VanGeest, 2006; Feingold, Calaluce, & Kallen, 2004).
Several studies that did evaluate knowledge acquisition captured students’ self-reports of an increase in knowledge after simulation instead of measuring knowledge more directly through other means, such as an examination (Alinier, Hunt, Gordon, & Harwood, 2006; Dillard, Sideras, Ryan, Carlton, Lasater, & Siktberg, 2009; Radhakrishnan, Roche, & Cunningham, 2007). Unfortunately, the rigor of studies that did explore the direct relationships between knowledge, critical thinking, and simulation participation was compromised by insufficient sample sizes or the use of convenience samples (Kardong-Edgren, Anderson, & Michaels, 2007; National Council of State Boards of Nursing, 2009; Schlairet & Pollock, 2010; Schumacher, 2004). Overall, review of the literature reveals that more research must be undertaken to build the evidence base necessary to determine to what extent high-fidelity simulation can be used as a substitute for traditional clinical experiences.
Our study was undertaken to examine the effects of high-fidelity simulation on nursing students’ knowledge acquisition as evidenced by their performance on content-specific examinations. Knowledge acquisition was selected as the outcome of interest because nursing students must pass the NCLEX-RN® to work as registered nurses. In addition, student pass rates on the NCLEX are used by accreditation agencies as a proxy to evaluate the health of a nursing program. The following hypothesis was tested: Students participating in a simulation experience will receive higher scores on an examination of course content covered in the simulation than students who did not participate in the simulation.
Institutional review board approval was obtained prior to the start of this experimental study. Nursing students enrolled in their first medical–surgical nursing course were randomly assigned, by clinical group, to participate in one of the following METI scenarios for their second simulation experience of the semester: pulmonary embolism (PE) or gastrointestinal (GI) bleed. These scenarios were chosen because they had been successfully pretested in this course in previous semesters and covered disease processes and nursing knowledge to which the students had already been exposed earlier in the semester in the form of course lectures, readings, case studies, and clinical experience. In addition, the two scenarios were chosen because they necessitated similar nursing actions. Further, the scenarios were scripted; that is, instructors followed a script that involved the same patient states (e.g., initial assessment, condition worsens, and transfer to intensive care unit) for both the PE and GI scenarios. Instructors were masked to the rationale underlying the randomized assignments of their clinical groups.
Because all students had to participate in 2 full days of simulation as part of the requirements for the course, for the purposes of this study, students in the PE group served as the control group for analyses of student examination performance on nursing knowledge for the GI bleed group. Likewise, the students who participated in the GI bleed simulation scenario served as the control group in the analyses examining student examination performance on PE nursing knowledge.
Prior to participating in the simulation scenario, students were expected to prepare in the usual fashion to care for a patient during a traditional clinical experience. Specifically, students were asked to demonstrate an understanding of the patient history, medications, laboratory results, assessment priorities, and potential complications by conducting a full presimulation workup for the simulated patient for whom they would care. Students were provided the necessary patient background information by the clinical faculty to perform this presimulation workup. In addition, students had to submit additional presimulation preparation questions provided by METI prior to starting the simulation scenario.
Each clinical group had between 7 and 10 students. Clinical faculty were asked to randomly assign their students into groups of 3, 4, or 5 and then randomly assign each student to one of the following nursing roles on the day of simulation: Primary RN; Charge RN; Secondary RN (role not used if group contained only 3 students); Recorder; and Observer (role not used if group contained only 4 students). The simulation began after students were given an end-of-shift report by their clinical faculty member regarding the patient for whom they would be caring during the simulation. Each of the simulation scenarios then unfolded until the patient was transferred to the intensive care unit and included changes in patient status and complications that necessitated calling the physician.
During the simulations, supervising clinical faculty were directed not to intervene as they would in an actual hospital environment if students omitted specific care or made flawed clinical decisions. At the conclusion of the simulation experience, clinical faculty held an approximately 1-hour structured debriefing session with the students. The clinical faculty were instructed to help students reflect on their simulation experiences in the debriefing session by encouraging them to explore what they did well and not so well and, more importantly, providing them with suggestions on how they can improve their nursing care and decision making abilities in future patient encounters. Specifically, each faculty member was expected to use the debriefing questions provided with the scripted METI scenarios.
The scheduling of the simulation scenarios was done in such a way that all clinical groups completed their simulations during the same week. Scheduling the simulations in this manner allowed faculty to schedule a content-specific 10-item examination during the next class meeting. In total, two 10-item content-specific examinations were given to all students during the semester, one on PE and the other on GI bleed course content.
All students were informed of the study during the first class lecture of the semester. At that time, students were informed that if they participated in the study they would receive extra credit of 1 percentage point added to their final grade. Consent to participate in the study and the awarding of the extra credit was achieved by having the student sign the consent form on the first page of each 10-question examination. Students were asked to remove the consent cover sheet and hand it in separately from their examination to protect their anonymity while still accounting for their participation. In addition, students were asked to create a self-generated identification code as outlined by Yurek, Vasey, and Havens (2008) so that examination scores could be linked. Finally, students were informed that if they did not wish to participate in the study, they could still earn the extra credit by completing an alternative assignment.
Setting and Sample
The sample consisted of baccalaureate nursing students enrolled in their second-semester medical–surgical course. This course represented the students’ first opportunity to provide direct care to patients in a medical–surgical environment. It was also the students’ first exposure to the use of high-fidelity simulation technologies as a patient care learning tool. All simulation experiences were conducted on campus at the simulation laboratory. Simulations were run by the clinical faculty under the supervision and guidance of the simulation laboratory coordinator and her staff. These clinical faculty members, as well as the simulation participants, were masked to the study’s objectives and hypotheses. The second and third authors (the simulation laboratory coordinator [M.B.P.] and her assistant coordinator [J.E.H.]) learned of the study’s hypotheses during data analysis. Although the first author [M.G.G.] was not masked to the conditions of this study, he was not involved in administration of the simulation experiences.
All 104 nursing students enrolled in the course agreed to participate in the study. The mean age of the student sample was 22.34 years, ranging from 19 to 37 years. The majority were female; 13% of the students were male. The students were part of 12 clinical groups, each of which was randomly assigned to either the GI bleed or PE simulation scenario. A total of 53 students took part in the PE simulation, whereas 51 students participated in the GI bleed simulation. Table 1 shows that the PE and GI bleed student groups share a demographic profile similar to that of the overall sample.
Table 1: Descriptive Statistics and ANOVAs of Study Variables by Simulation Group
Because all students were required to participate in 15 hours of high-fidelity simulated learning during the semester, a variable had to be devised to capture the effect of simulation participation on the outcome variables of interest (i.e., the 10-question examinations on GI bleed and PE). The approach used in this study was to create two distinct dummy variables, one representing PE simulation participation and one representing GI bleed simulation participation. The control and experimental groups could then be easily identified through these dummy variables.
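The dummy-coding scheme described above can be sketched briefly. The study's analyses were run in STATA 11, so this Python fragment is only an illustration, and the column names are hypothetical:

```python
import pandas as pd

# Hypothetical roster: each student belongs to one clinical group, and each
# clinical group ran exactly one of the two scenarios.
students = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "scenario": ["PE", "PE", "GI", "GI"],  # scenario run by the student's group
})

# Two distinct dummy variables: 1 = participated in that simulation, 0 = did not.
students["pe_sim"] = (students["scenario"] == "PE").astype(int)
students["gi_sim"] = (students["scenario"] == "GI").astype(int)
```

For the PE examination analysis, students with `pe_sim = 1` form the treatment group and the GI bleed participants serve as controls, and vice versa for the GI bleed examination.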
Prior to participation in the study, students had already completed two course examinations. These examinations covered the theoretical nursing knowledge necessary to care for patients who had a GI bleed or PE. Because the course material necessary for care of the patient encountered during the simulation was already covered in lecture and case study analyses and was evaluated through course examinations, students were asked to report the letter grade they achieved on their examinations during the administration of the first postsimulation examination. These grades were then averaged to create an overall average examination score that was used as a control variable in this study, representing baseline academic achievement. The examination variable was constructed using the traditional four-point grading scale, including pluses and minuses, where the highest score of a 4 represented an A grade. The overall examination average for both groups was a 2.64, with a standard deviation of 0.64.
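A minimal sketch of how the baseline control variable could be constructed, assuming conventional plus/minus increments on the four-point scale (the article does not state the exact numeric values, so the mapping below is an assumption):

```python
# Assumed letter-to-points mapping on a traditional 4-point scale;
# the plus/minus increments here are conventional values, not from the study.
GRADE_POINTS = {
    "A": 4.0, "A-": 3.7,
    "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7,
    "D+": 1.3, "D": 1.0, "F": 0.0,
}

def baseline_exam_average(grades):
    """Average the self-reported letter grades from the prior course examinations."""
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

# e.g., a student reporting B and B+ on the two prior examinations:
print(baseline_exam_average(["B", "B+"]))  # 3.15
```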
The two 10-item examinations were created specifically for this study by the course coordinator. NCLEX-type questions were designed such that it was not necessary to have participated in the actual simulation to answer the question. Once the questions were finalized, the course coordinator verified that all material covered on the examinations could be reinforced by participation in the simulation scenario but would also be found in the lecture notes, case studies, or assigned reading already covered in the course. Further, the clinical instructors and simulation staff were masked to the content of the examination questions. Each examination was scored on a 1 to 10 scale, with 10 being the highest score. The overall mean for both groups on the PE quiz was 6.89, with a standard deviation of 1.40; for the GI bleed quiz, the overall mean was 4.92, with a standard deviation of 1.45.
The hypothesis proposed in this study predicted that students who participated in a simulation experience would score higher on an examination of course content covered in the simulation than students who did not participate in that simulation. STATA® 11 software was used for all analyses. ANOVA was used to detect differences in means for the study variables. Hierarchical multiple regression techniques (different sets of variables were forced into the equation in sequential steps) were then utilized to evaluate the main study hypotheses. Specifically, GI bleed and PE examination scores were regressed first on the control variable “average course examination score” and then on the treatment variable—that is, whether or not a student participated in that specific simulation scenario. In addition, to control for the clustering of observations (e.g., nesting of individual students within clinical groups), all analyses utilized the STATA 11 cluster command, which provides robust estimates of variance when clustering occurs at the clinical group level (Wooldridge, 2009).
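The two-step specification and the cluster-robust correction were carried out in STATA 11; an analogous sketch in Python with statsmodels, using simulated stand-in data (all variable names and data-generating values here are hypothetical), might look like:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in data: 12 clinical groups of 9 students each.
n = 108
df = pd.DataFrame({
    "clin_group": np.repeat(np.arange(12), 9),
    "exam_avg": rng.normal(2.64, 0.64, n),        # baseline control variable
})
df["pe_sim"] = (df["clin_group"] < 6).astype(int)  # group-level assignment
df["pe_score"] = (5.5 + 0.5 * df["exam_avg"] + 0.8 * df["pe_sim"]
                  + rng.normal(0, 1.4, n))

# Step 1: control variable only.
step1 = smf.ols("pe_score ~ exam_avg", data=df).fit()

# Step 2: add the simulation dummy, with standard errors clustered at the
# clinical-group level (analogous to STATA's cluster() option).
step2 = smf.ols("pe_score ~ exam_avg + pe_sim", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["clin_group"]})

print(f"R2 step 1: {step1.rsquared:.3f}, R2 step 2: {step2.rsquared:.3f}")
print(f"PE simulation coefficient: {step2.params['pe_sim']:.2f}")
```

Because treatment is assigned at the clinical-group level, the cluster correction matters: with only 12 clusters, conventional standard errors would overstate precision.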
The power of a statistical test is the probability that it will yield a statistically significant result when the effect under study is truly present. According to Cohen (1988), the minimum acceptable level of power in an analysis is 0.80, which was used as the cutoff for this study. The sample size needed to detect a medium effect size of 0.3 at an alpha = 0.05 and a u = 1 (difference in means for two groups) is 44 students. Therefore, this study had sufficient power to capture the difference in means between the GI bleed (n = 51) and PE (n = 53) groups.
Of note in this study was the additional amount of variation in the outcome variable explained by the addition of the simulation participation dummy variable—that is, the increase in R² over and above what is explained by the overall average examination score control variable. Based on the power tables presented in Cohen (1988), with the significance level set at alpha = 0.05, u = 1 (the overall average examination score variable), w = 1 (the simulation participation variable), and a sample size of 104 students, the minimum effect size that can realistically be detected at a power of 0.80 is a change in R² of 0.07 or greater. According to Cohen (1988), an effect size of 0.07 falls between a small effect size, set at 0.02, and a medium effect size, set at 0.15. Therefore, with a sample size of 104 students, the current analysis had sufficient power to capture the direct effects of simulation participation.
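The minimum detectable change in R-squared can be checked directly against Cohen's (1988) framework by converting the R-squared change to f-squared, forming the noncentrality parameter lambda = f²(u + v + 1), and evaluating the noncentral F distribution. A sketch, assuming the added set accounts for essentially all of the explained variance (the total R-squared used in the conversion is an assumption):

```python
from scipy.stats import f as f_dist, ncf

def power_r2_change(delta_r2, r2_total, n, u, w, alpha=0.05):
    """Power for testing a change in R-squared from adding `w` predictors
    over `u` controls in a sample of `n`, following Cohen (1988)."""
    f2 = delta_r2 / (1.0 - r2_total)      # Cohen's f-squared effect size
    df_num = w                            # numerator df: the added set
    df_den = n - u - w - 1                # residual df
    lam = f2 * (df_num + df_den + 1)      # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
    return 1 - ncf.cdf(f_crit, df_num, df_den, lam)

# Change in R-squared of 0.07 with one control (u = 1), one added dummy
# (w = 1), and n = 104; total R-squared of 0.07 is a placeholder assumption.
print(power_r2_change(0.07, 0.07, 104, 1, 1))
```

The result lands near the 0.80 cutoff, consistent with the power-table value quoted above.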
Table 1 provides information concerning the demographic characteristics and major study variables of the overall sample, the PE simulation group, and the GI bleed simulation group. Students participating in the PE simulation had an average PE examination score of 6.89 (SD = 1.40). T tests indicated that this mean score was statistically different from the mean PE examination score obtained by the GI bleed simulation group, which was 6.08 (SD = 1.41). Similarly, the GI bleed mean examination score was significantly higher for those who participated in the GI bleed simulation (5.78; SD = 1.15) versus those who participated in the PE simulation (4.92; SD = 1.45).
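The group comparison can be reproduced from the reported summary statistics alone; the article does not state which t-test variant was used, so the Welch version below is an assumption:

```python
from scipy.stats import ttest_ind_from_stats

# PE examination: PE simulation group vs. GI bleed (control) group,
# using the means, SDs, and group sizes reported above.
t_stat, p_value = ttest_ind_from_stats(
    mean1=6.89, std1=1.40, nobs1=53,   # PE simulation participants
    mean2=6.08, std2=1.41, nobs2=51,   # GI bleed participants (controls)
    equal_var=False,                   # Welch's t test (an assumption)
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Note that this check treats students as independent observations and ignores the clustering of students within clinical groups, which the regression models reported next do account for.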
The hypothesis with regard to the PE model, which proposed that participation in a PE simulation would be positively related to a student’s score on an examination assessing nursing knowledge related to PE, was supported (Table 2). When the PE simulation variable was added, the R² increased by more than 8 percentage points (moving from 0.105 to 0.186). Further, the statistically significant beta coefficient of 0.81 indicates that, holding everything else constant, participation in the PE simulation will raise a student’s score on the PE examination by an average of 0.81 points, or 8.1 percentage points.
Table 2: Hierarchical Multiple Regression Analyses: The Effects of Simulation on Pulmonary Embolism (PE) and Gastrointestinal (GI) Bleed Examinations
The hypothesis with regard to the GI bleed model was also supported (Table 2). When the GI bleed simulation variable was added, the R² increased by 9.9 percentage points (moving from 0.042 to 0.141). Further, the statistically significant beta coefficient of 0.86 indicates that, holding everything else constant, participation in the GI bleed simulation will, on average, raise a student’s score on the GI bleed examination by 0.86 points, or 8.6 percentage points.
The findings from this study are encouraging in that they help to develop an evidence base indicating there are knowledge acquisition benefits to participation in high-fidelity simulations. Specifically, the results from both the PE and GI bleed models indicate more than an 8% increase in examination performance for students who participated in high-fidelity simulation compared with those who did not. Further, because all students had equal access to the information tested on the examinations—the simulation experience introduced no new information beyond what was already covered in course material—this 8% increase is equivalent to almost one full letter grade on a traditional grading scale.
Given that high-fidelity simulation is an integrative teaching and learning approach that assists students with their clinical reasoning skills, the results of this study do provide an evidence base suggesting that high-fidelity simulation can be used as a viable substitute for traditional clinical learning. Further, the findings support the use of high-fidelity simulation as an engaging pedagogy that should be included in the radical transformation of nursing education outlined by Benner, Sutphen, Leonard, and Day (2010) in their important work on educating nurses. In addition, the knowledge gains seen in this study are encouraging, especially to nursing programs that are facing clinical placement challenges or are trying to justify the large capital investments associated with high-fidelity simulation laboratories (McCallum, 2007; Seropian et al., 2004a, 2004b).
Although the findings in this study provide evidence to justify the use of high-fidelity simulation in nursing curricula, this study did not explore to what extent simulation should be used as a substitute for traditional clinical experiences within nursing programs. This presents a difficult challenge to state boards of nursing that feel pressure to regulate the use of simulation even without an adequate evidence base. Therefore, for nursing programs and boards of nursing to more effectively regulate the use of high-fidelity simulation in nursing programs, future research must examine how varying amounts of simulation affect student learning outcomes such as knowledge acquisition. To fill this research gap, the National Council of State Boards of Nursing (2011) is conducting a landmark, national, multisite, longitudinal study of simulation use in prelicensure nursing programs beginning in the fall of 2011. The key component of this study is that each of the 10 participating nursing programs will be randomly assigned to one of three groups that will substitute simulation for traditional clinical experience at the following percentages: 10%, 25%, and 50%. However, until the results of studies that use varying amounts of simulation are disseminated, state boards of nursing will need to be conservative in developing regulations that dictate the maximum percentage of traditional clinical experiences that can be substituted with simulation.
As with any research study, the findings also stimulated additional questions that need to be answered for simulation research to truly advance. One such question was: Does simulation participation affect lower and higher performing students differently? In this study, the PE and GI bleed groups were similar in terms of mean age, percentage of female students, and baseline average examination score. However, the group means may have hidden variations between low and high performers within each group. Therefore, post hoc analyses were conducted that assessed the GI bleed and PE examination scores of low-performing and high-performing students in each group. The mean scores for the low performers in the PE (n = 11) and GI bleed (n = 14) groups for the GI bleed examination were 5.2 versus 5.4, respectively, and for the PE examination, 6.7 versus 5.5, respectively. The mean scores for the high performers in the PE (n = 8) and GI bleed (n = 9) groups for the GI bleed examination were 6.3 versus 6.3, respectively, and for the PE examination, 7.8 versus 7.0, respectively. Unfortunately, the lack of statistical power when comparing groups of this size precludes any meaningful conclusion about whether lower-performing students gain a greater performance boost from simulation participation than higher-performing students.
An additional question that was raised during this study by members of the research team was: Would the results have been different if students were not required to prepare for the patient they would be taking care of during the simulation? Although this question could not be answered due to the design of the current study, the notion that simulation preparation could be viewed as an important confounder needs to be addressed in future studies investigating simulation and its links to student knowledge acquisition. The need for students to prepare for their clinical experiences, whether that experience is in a traditional or simulated environment, is not new. A strong body of research supports the idea that nursing students need to be exposed to and process knowledge, regardless of setting, multiple times before that knowledge becomes an integrated part of who they are as a nurse.
Further, future research must explore whether the increase in educational outcomes seen with simulation use are worth the large costs associated with creating and maintaining a simulation laboratory. Simulation laboratories place more demands on nursing faculty, who already have heavy workloads. Given that the goal of nursing schools is to achieve the best educational outcomes with the most efficient use of resources, research utilizing cost–benefit analyses should be included in the agenda of future simulation research.
Finally, future research must also consider the level of the student when exploring whether preparation for simulation matters, given that novice nursing students may have disparate preparation needs from the more seasoned nursing student who is about to graduate. It also would be beneficial for research to examine the most effective mix of simulation versus traditional clinical experience to incorporate within a course. Research that incorporates these design features will then be able to report best practices in terms of simulation administration and preparation.
Several limitations to this study should be noted. First, with 12 different clinical faculty members leading their clinical groups through the simulations, as well as the debriefing sessions, clinical groups may have had varying experiences due to differences in faculty knowledge, experience, and application of the scripted debriefing questions. In addition to statistically controlling for these potential differences among clinical faculty, as was done in this study, providing faculty with more training and more closely monitoring the content discussed in the debriefing sessions could decrease bias in the results.
Although the sample size was adequate to support the findings of this study, it can also be viewed as a limitation. A larger sample size would have allowed for the inclusion of additional confounding variables, most notably whether students had the opportunity in their clinical rotations to care for a patient who experienced a PE or GI bleed. In addition, increasing the sample size and expanding the study to include other nursing programs would allow for greater generalizability of the findings. Therefore, the current results should be generalized only to beginning medical–surgical nursing students.
The results of this study indicate that for beginning undergraduate medical–surgical nursing students, participation in high-fidelity simulation is positively related to knowledge acquisition, as evidenced by higher scores on content-specific examinations. These findings linking simulation participation to knowledge acquisition help build the evidence base required to support the use of high-fidelity simulation as a viable substitute for traditional clinical experiences and to justify the large capital expenditures associated with its implementation and use. More importantly, state boards of nursing that regulate nursing education programs can use these findings to develop future regulations regarding how simulation can be most effectively used in nursing programs. Finally, these results help lay the foundation for a research agenda in simulation that must be expanded so that the benefits of simulation are fully realized.
- Abdo, A. & Ravert, P. (2006). Student satisfaction with simulation experiences. Clinical Simulation in Nursing, 2(1), e13–e16. doi:10.1016/j.ecns.2009.05.009 [CrossRef]
- Alinier, G., Hunt, B., Gordon, R. & Harwood, C. (2006). Effectiveness of intermediate-fidelity simulation training in undergraduate nursing education. Journal of Advanced Nursing, 54, 359–369. doi:10.1111/j.1365-2648.2006.03810.x [CrossRef]
- Benner, P., Sutphen, M., Leonard, V. & Day, L. (2010). Educating nurses: A call for a radical transformation. San Francisco, CA: Jossey-Bass.
- Bremner, M., Aduddell, K., Bennet, D. & VanGeest, J. (2006). The use of human patient simulators: Best practices with novice nursing students. Nurse Educator, 31, 170–174. doi:10.1097/00006223-200607000-00011 [CrossRef]
- California Board of Registered Nursing. (2011). Approved Regulatory Language—Prelicensure Nursing Program with Documents Incorporated by Reference (Effective 10/21/10). Retrieved from http://www.rn.ca.gov/pdfs/regulations/approvedlang.pdf
- Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum.
- Dillard, N., Sideras, S., Ryan, M., Carlton, K., Lasater, K. & Siktberg, L. (2009). A collaborative project to apply and evaluate the clinical judgment model through simulation. Nursing Education Perspectives, 30, 99–104.
- Feingold, C., Calaluce, M. & Kallen, M. (2004). Computerized patient model and simulated clinical experiences: Evaluation with baccalaureate nursing students. Journal of Nursing Education, 43, 156–163.
- Gates, M. (2008, August). Developing a simulation center for your nursing school. Paper presented at the Technology Integration in Nursing Education and Practice Conference, Durham, NC.
- Gomez, G.E. & Gomez, E.A. (1987). Learning of psychomotor skill: Laboratory versus patient care setting. Journal of Nursing Education, 26, 20–24.
- Jeffries, P. (2009). Dreams for the future for clinical simulation. Nursing Education Perspectives, 30, 71.
- Kardong-Edgren, S., Anderson, M. & Michaels, J. (2007). Does simulation fidelity improve student test scores? Clinical Simulation in Nursing, 3(1), e21–e24. doi:10.1016/j.ecns.2009.05.035 [CrossRef]
- Kohn, L.T., Corrigan, J. & Donaldson, M.S. (2000). To err is human: Building a safer health system. Washington, DC: National Academy Press.
- Landeen, J. & Jeffries, P. (2008). Simulation. Journal of Nursing Education, 47, 487–488. doi:10.3928/01484834-20081101-03 [CrossRef]
- McCallum, J. (2007). The debate in favour of using simulation education in pre-registration adult nursing. Nurse Education Today, 27, 825–831. doi:10.1016/j.nedt.2006.10.014 [CrossRef]
- McIntosh, C., Macario, A., Flanagan, B. & Gaba, D.M. (2005, November). Simulation: What does it really cost? Poster session presented at the SimTecT 2005 Healthcare Symposium, Brisbane, Australia.
- National Council of State Boards of Nursing. (2009). The effect of high-fidelity simulation on nursing students’ knowledge and performance: A pilot study. Retrieved from https://www.ncsbn.org/09_SimulationStudy_Vol40_web_with_cover.pdf
- National Council of State Boards of Nursing. (2011). The national simulation study. Retrieved from https://www.ncsbn.org/2094.htm
- Nehring, W. (2008). U.S. boards of nursing and the use of high-fidelity patient simulators in nursing education. Journal of Professional Nursing, 24, 109–117. doi:10.1016/j.profnurs.2007.06.027 [CrossRef]
- Nehring, W. & Lashley, F. (2010). High-fidelity patient simulation in nursing education. Sudbury, MA: Jones and Bartlett.
- Radhakrishnan, J., Roche, J. & Cunningham, H. (2007). Measuring clinical practice parameters with human patient simulation: A pilot study. International Journal of Nursing Education Scholarship, 4(1), Article 8. Retrieved from http://www.bepress.com/ijnes/vol4/iss1/art8. doi:10.2202/1548-923X.1307 [CrossRef]
- Schlairet, M. & Pollock, J. (2010). Equivalence testing of traditional and simulated clinical experiences: Undergraduate nursing students’ knowledge acquisition. Journal of Nursing Education, 49, 43–47. doi:10.3928/01484834-20090918-08 [CrossRef]
- Schumacher, L. (2004). The impact of utilizing high-fidelity computer simulation on critical thinking abilities and learning outcomes in undergraduate nursing students (Unpublished doctoral dissertation). Duquesne University, Pittsburgh, PA.
- Seropian, M.A., Brown, K., Gavilanes, J.S. & Driggers, B. (2004a). An approach to simulation program development. Journal of Nursing Education, 43, 170–174.
- Seropian, M.A., Brown, K., Gavilanes, J.S. & Driggers, B. (2004b). Simulation: Not just a manikin. Journal of Nursing Education, 43, 164–169.
- Weaver, A. (2011). High-fidelity patient simulation in nursing education: An integrative review. Nursing Education Perspectives, 32, 37–40. doi:10.5480/1536-5026-32.1.37 [CrossRef]
- Wooldridge, J.M. (2009). Introductory econometrics: A modern approach (4th ed.). Cincinnati, OH: Southwestern College Publishing.
- Yurek, L.A., Vasey, J. & Havens, D.S. (2008). Use of self-generated identification codes in longitudinal research. Evaluation Review, 32, 435–452. doi:10.1177/0193841X08316676 [CrossRef]
Descriptive Statistics and ANOVAs of Study Variables by Simulation Group

| Variable | Spring 2009 Cohort | Spring 2009 Pulmonary Embolism Simulation Group | Spring 2009 Gastrointestinal Bleed Simulation Group |
|---|---|---|---|
| Age (SD) | 22.34 (2.84) | 22.32 (2.90) | 22.35 (2.84) |
| Pulmonary embolism examination score (SD) | 6.49 (1.41) | 6.89 (1.40)** | 6.08 (1.41) |
| Gastrointestinal bleed examination score (SD) | 5.35 (1.38) | 4.92 (1.45)** | 5.78 (1.15) |
| Course examination average (SD) | 2.64 (0.69) | 2.64 (0.65) | 2.64 (0.74) |
| Number of students | 104 | 53 | 51 |
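The table above compares the two simulation groups with one-way ANOVAs. As a minimal sketch of that comparison, the following computes the one-way ANOVA F statistic for two groups from first principles; the group sizes, means, and standard deviations are taken from the table, but the individual scores are randomly generated for illustration, so the resulting F value is not the study's result.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative scores drawn to match the table's group summaries (not the study's raw data)
pe_group = rng.normal(6.89, 1.40, 53)   # pulmonary embolism simulation group, n = 53
gi_group = rng.normal(6.08, 1.41, 51)   # gastrointestinal bleed simulation group, n = 51

def one_way_anova_f(*groups):
    """Return the one-way ANOVA F statistic for the given groups."""
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    # Between-groups sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: squared deviations from each group mean
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f_stat = one_way_anova_f(pe_group, gi_group)
print(f"F(1, {53 + 51 - 2}) = {f_stat:.2f}")
```

With only two groups, this F statistic is equivalent to the square of the independent-samples t statistic.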
Hierarchical Multiple Regression Analyses: The Effects of Simulation on Pulmonary Embolism (PE) and Gastrointestinal (GI) Bleed Examinations

| Variable | PE Examination Model: Step 1 | PE Examination Model: Step 2 | GI Bleed Examination Model: Step 1 | GI Bleed Examination Model: Step 2 |
|---|---|---|---|---|
| Course examination average (SD) | 0.66 (0.20)** | 0.66 (0.18)** | 0.41 (0.15)* | 0.41 (0.15)* |
| PE simulation (SD) | – | 0.81 (0.28)** | – | – |
| GI bleed simulation (SD) | – | – | – | 0.86 (0.27)** |
| Constant (SD) | 4.74 (0.59)** | 4.33 (0.49)** | 4.26 (0.46)** | 3.83 (0.49)** |
| Overall model F | 11.03** | 14.4** | 7.17* | 7.10** |
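The hierarchical regression reported above enters the course examination average alone in Step 1, then adds a simulation-group indicator in Step 2, so the improvement in fit attributable to simulation can be isolated. The following sketch reproduces that two-step logic with plain NumPy least squares on simulated data; the sample size and coefficient values used to generate the data echo the tables, but everything here is illustrative, not the study's data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104
course_avg = rng.normal(2.64, 0.69, n)        # covariate entered in Step 1
pe_sim = rng.integers(0, 2, n).astype(float)  # 1 = PE simulation group, added in Step 2
# Simulated outcome: exam score depends on both predictors plus noise
pe_exam = 4.3 + 0.66 * course_avg + 0.81 * pe_sim + rng.normal(0, 1, n)

def ols_r2(y, *predictors):
    """Fit OLS with an intercept via least squares and return R-squared."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

r2_step1 = ols_r2(pe_exam, course_avg)          # Step 1: covariate only
r2_step2 = ols_r2(pe_exam, course_avg, pe_sim)  # Step 2: covariate + simulation dummy
print(f"Step 1 R2 = {r2_step1:.3f}, Step 2 R2 = {r2_step2:.3f}, "
      f"Delta R2 = {r2_step2 - r2_step1:.3f}")
```

The change in R-squared between the steps (Delta R2) is the quantity a hierarchical analysis tests: in-sample R-squared can never decrease when a predictor is added, so the question is whether the increase is statistically significant.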