With the continued expansion of knowledge in health care and the development of new technology, the demand for graduates who can solve complex problems efficiently has also increased (Sangestani & Khatiban, 2012). Therefore, in practice-based health care professions, methods of teaching and learning focus on facilitating development of a requisite level of clinical skills (Yuan, Williams, Fang, & Ye, 2012). Simulation-based learning could be a key component in adequately preparing nursing students for the transition into the ever-changing health care environment (Norman, 2012). Simulation, an educational strategy, “replaces or amplifies experiences that replicate aspects of the real world in an interactive fashion” (Gaba, 2004, p. i2). Simulation comprises different modalities, including virtual reality, high-fidelity human simulators, and standardized patients (Decker, Sportsman, Puetz, & Billings, 2008). Of these modalities, high-fidelity patient simulation (HFPS; human patient simulation) refers to pre-developed patient scenarios using computerized manikins that respond to intervention by providing instant feedback (Weaver, 2011). Because fidelity refers to the realism of the simulation, high-fidelity human simulation (HFHS) is currently the highest level of realism offered with patient simulation (Luctkar-Flude, Wilson-Keates, & Larocque, 2012). In that context, use of HFPS as an educational tool is becoming increasingly prevalent in medical and nursing education. However, nurse educators considering adoption of an HFPS curriculum are confronted with inconsistent findings across previous studies.
Several systematic reviews have been conducted in recent years to investigate the use of HFPS in nursing education. Some of these reviews found that use of HFPS by nursing students improves knowledge acquisition (Cant & Cooper, 2009; Kim, Park, & Shin, 2013; Lapkin, Levett-Jones, Bellchambers, & Fernandez, 2010; Weaver, 2011; Yuan et al., 2012), critical thinking (Cant & Cooper, 2009; Lapkin et al., 2010), and psychomotor skills (Lapkin et al., 2010; Yuan et al., 2012), and enhances students’ satisfaction with learning (Cant & Cooper, 2009; Kim et al., 2013; Lapkin et al., 2010; Weaver, 2011). Findings were mixed in the area of student confidence (Cant & Cooper, 2009; Kim et al., 2013). Thus, several systematic reviews of HFPS use have suggested its effectiveness as an educational strategy in nursing education. However, meta-analyses regarding the effectiveness of HFPS use in nursing education are scarce. A recent meta-analysis (Shin, Park, & Kim, 2014) addressed these issues with 20 studies involving one of the following modalities: HFPS, standardized patients, and partial-task trainers. That analysis concluded that simulation education yielded psychomotor, affective, and cognitive outcomes of learning. However, the results showed statistical heterogeneity among study estimates, and meta-analysis applies well only if the heterogeneity is less than 50% (Higgins & Green, 2008). Thus, challenges exist in the evaluation of studies in this area. The current meta-analysis focused on HFPS applied to nursing students. Because of the small number of randomized controlled trials (RCTs) on this topic, non-RCT designs were included. The aim of this study was to evaluate the overall effectiveness of medium- to high-fidelity simulation using manikins in nursing education on cognitive, affective, and psychomotor outcomes of learning and to identify intervention moderators.
The current review followed the guidelines proposed in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement (Liberati et al., 2009).
Eligibility criteria are detailed in accordance with the PICOS (Participants, Interventions, Controls, Outcomes, and Studies) framework. Participants were undergraduate and graduate nursing students. Exclusion criteria included other technological interventions that were not high-fidelity simulation interventions, such as computer simulation and anatomic models. Both no treatment (usual learning) and active (attention, placebo) control conditions were considered. The outcomes were learning outcomes, such as knowledge acquisition and clinical skill performance. Both RCTs and non-RCTs were considered for inclusion. Studies without sufficient data enabling calculation of effect size were excluded.
Studies were identified by searching Cochrane Library CENTRAL, EMBASE™, MEDLINE®, CINAHL®, and several Korean databases (KMBASE, KOREAMED, RISS, KISS, and NANET). Searches were limited to articles written in Korean or English, and any study published from the first available year within each database to June 2014 was considered. The main search strategy combined terms indicating simulation learning (intervention), nursing students (participants), and study design. In addition, a search of the Google™ Scholar database and a manual review of the reference lists of identified studies were conducted.
Studies were primarily screened using titles and abstracts after duplicates were removed by the reference management database. The studies were then screened using the full text. Independent screening was performed by two authors (J.L., P.-J.O.) for each study in accordance with the defined inclusion criteria.
Studies were included if they met the following criteria: (a) involved undergraduate or graduate nursing students, (b) measured simulation learning using a medium to high-fidelity patient simulator, (c) measured learning outcomes (i.e., knowledge acquisition, clinical skill performance, or critical thinking), (d) used RCTs and non-RCTs, and (e) included sufficient data for calculation of effect sizes between the treatment and control groups. Studies using a low-fidelity patient simulator were excluded.
Data were extracted from the study based on a predesigned coding manual form. The extracted data included specific details: (a) authors and years of publication, (b) country and study design, (c) sample size and characteristics of participants, (d) intervention details (i.e., number of sessions, duration, and total time), and (e) measurement and outcomes. Pilot testing was performed on three studies before data extraction by two independent reviewers (J.L., P.-J.O.).
Risk of Bias Assessment
After conducting a pilot test on three studies, two authors (J.L., P.-J.O.) performed an independent review of each study for evaluation of methodological quality. Disagreements were resolved through discussion. The 7-item scale of RoB (Risk of Bias), developed by the Cochrane Bias Method Group (Higgins & Green, 2008), was used for evaluation of RCT studies. The Risk of Bias Assessment tool for Non-randomized Studies (RoBANS) with 8 items, developed by Kim, Park, Lee, et al. (2013), was used for non-RCTs. As in the Cochrane RoB tool, the bias types in RoBANS are selection, performance, detection, attrition, and reporting biases. However, the domains of selection and performance biases were modified to include the selection of participants, confounding variables, and the measurement of exposure. Similar to the RoB, RoBANS is an outcome-based checklist (Kim, Park, Lee, et al., 2013).
For studies with data of sufficient quality and those that were similar in simulation learning and outcome measures, we combined data in a meta-analysis using the RevMan 5.3.3 software program. To combine similar outcomes measured on different instruments, the standardized mean differences and their 95% confidence intervals (CIs) were calculated and pooled. A 95% CI for the standardized mean difference greater than zero indicates a significant effect, favoring the intervention. The magnitude of effect size can be interpreted using Cohen’s recommendation for small (0.20), medium (0.50), and large (0.80) effect (Cohen, 1988). Each effect size was weighed by its inverse variance weight in calculating mean effect sizes. The inverse variance approach gives more weight to studies with larger sample sizes and minimizes the imprecision (uncertainty) of the pooled effect estimate (Higgins & Green, 2008).
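The inverse-variance pooling described above can be sketched in plain Python. This is an illustrative sketch of the standard fixed-effect method, not the RevMan implementation, and all study values below are hypothetical.

```python
import math

def smd(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) between two groups,
    with its approximate sampling variance."""
    s_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    # Standard large-sample approximation to var(d)
    var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    return d, var

def pool_fixed(effects):
    """Inverse-variance (fixed-effect) pooled estimate and 95% CI
    for a list of (d, variance) pairs."""
    weights = [1.0 / v for _, v in effects]   # larger studies weigh more
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))        # SE of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical studies: (mean, SD, n) for intervention vs. control
studies = [smd(75, 10, 30, 68, 11, 30), smd(80, 8, 45, 74, 9, 40)]
d_pooled, (ci_lo, ci_hi) = pool_fixed(studies)
```

A pooled effect whose 95% CI excludes zero would be read as a significant effect favoring the intervention, with magnitude interpreted against Cohen's 0.20/0.50/0.80 benchmarks.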
A heterogeneity test was performed in each analysis using the I2 statistic, which measures the proportion of total observed variation attributable to true heterogeneity. An I2 of 25% was considered low, 50% moderate, and 75% high. I2 values greater than 50% were considered to indicate substantial heterogeneity, and the random-effects model was therefore applied for analysis of the data (Higgins & Green, 2008). Subsequently, subgroup analyses were performed according to categories of learning outcomes and potential moderating variables, such as grade level (sophomore, junior, senior) and class course (adult nursing, pediatric nursing, emergency/intensive care). Grade was chosen as a potential moderator because different grades were included in the meta-analysis and subgroup analysis by grade was considered important. Class course was chosen as a potential moderator to examine whether the effect on learning outcomes differs according to the course subject.
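The I2 statistic can be sketched from the same (effect, variance) pairs via Cochran's Q relative to its degrees of freedom. The inputs below are hypothetical; this is the standard Higgins formulation, not the exact RevMan code.

```python
def heterogeneity(effects):
    """Cochran's Q and Higgins' I^2 (%) for a list of (d, variance) pairs."""
    weights = [1.0 / v for _, v in effects]
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    # Q: weighted squared deviations of study effects from the pooled effect
    q = sum(w * (d - pooled) ** 2 for (d, _), w in zip(effects, weights))
    df = len(effects) - 1
    # I^2 is the excess of Q over its expectation under homogeneity, truncated at 0
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Two hypothetical, strongly conflicting studies yield a high I^2
q, i2 = heterogeneity([(0.0, 0.04), (1.0, 0.04)])
```

An I2 above 50% from such a calculation is what triggered the random-effects model and the subgroup analyses in this review.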
To test for publication bias, a funnel plot, which graphs the effect size of each study according to its respective standard error, was used. The existence of publication bias was assumed if there were no small studies with effect sizes favoring control groups (Higgins & Green, 2008). In addition, a test of statistical significance was performed using Egger’s linear regression asymmetry test (Egger, Smith, Schneider, & Minder, 1997). The authors then computed the fail-safe N (Oh, 2002) to address potential publication bias. The results of a meta-analysis are considered to be unbiased if the fail-safe N exceeds 5k+10 (Rosenthal, 1993). A two-tailed p value < 0.05 was considered significant.
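The fail-safe N decision rule can be illustrated with Rosenthal's formulation (the Oh, 2002 variant cited in the text may differ in detail); the z scores below are hypothetical.

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: how many unpublished null studies would be
    needed to render the combined one-tailed result nonsignificant."""
    k = len(z_scores)
    return (sum(z_scores) ** 2) / z_alpha ** 2 - k

def tolerably_unbiased(n_fs, k):
    """Rosenthal's tolerance criterion: results are considered unbiased
    only if the fail-safe N exceeds 5k + 10."""
    return n_fs > 5 * k + 10

# e.g., a fail-safe N of 33.5 with k = 11 studies fails the 5k + 10 = 65
# threshold, signaling that publication bias cannot be ruled out.
```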
The study selection process is illustrated in the Figure. A total of 9,735 studies were retrieved through databases and from other references. After eliminating the duplicates, 1,161 publications were identified for screening. From the screening of titles and abstracts, 120 potentially relevant studies were identified. On review of these studies, 94 studies failed to meet inclusion criteria for the following reasons: 27 studies had a one-group design, 24 studies had insufficient data, 20 were descriptive studies, 16 studies were nonrelevant interventions (i.e., low-fidelity simulator or problem-based learning), and seven studies had a posttest-only design. A total of 26 studies were included in the current meta-analysis.
Study selection process. DB = database; RCT = randomized controlled trial.
Overall Description of the Studies
The characteristics of the studies included in the review are summarized in Table A (available in the online version of this article).
Descriptive Summary of Included Studies (N = 26)
Of the 26 studies, 16 (61.5%) were conducted in Korea, eight (30.8%) in the United States, and one each (7.7% combined) in the United Kingdom and Jordan. Four studies (15.4%) used an RCT design; the remaining 22 (84.6%) were non-RCTs using convenience samples of nursing students. The sample size across the 26 included studies varied between 20 and 192, with a total of 2,031 participants. The participants were nursing students at varying levels of enrollment: sophomore (six studies), junior (11 studies), senior (three studies), graduate school (three studies), and unclear (three studies). The subject courses of HFPS were diverse, including medical–surgical nursing (n = 13), emergency/intensive care (n = 8), maternity care (n = 2), pediatric nursing (n = 2), and intensive comprehensive education for mechanically ventilated patients (n = 1). Among the 26 studies, the number of sessions varied from 1 to 10, with a mean of 2.7 sessions. Total hours of simulation learning were between 25 minutes and 24 hours, with a mean of 5.3 hours. Control groups receiving traditional nursing education were most common (84.6%); however, in four studies (15.4%), additional learning, such as a case study class, was included. The outcome measurements found in this study were categorized according to three domains of learning: cognitive, affective, and psychomotor outcomes.
Knowledge acquisition (k = 14), problem-solving competency (k = 5), critical thinking (k = 7), clinical judgment (k = 3), and communication skill (k = 2) were evaluated as cognitive domain outcomes, most often measured with investigator-designed scales. Self-efficacy/confidence (k = 12) and learning satisfaction (k = 2) were evaluated as affective domain outcomes. Clinical competence (k = 10) was tested as the psychomotor domain outcome and was likewise primarily measured with investigator-designed tools.
Quality of Studies
Three of the four RCT studies did not report adequate details on randomization sequence and allocation concealment. Regarding blinding of participants and providers of the intervention, three studies were judged to be at unclear risk of performance bias. In terms of blinding of outcome assessors, two (50%) of the studies reported blinding the outcome assessment. Attrition bias was rated as low risk in three (75%) of the studies. The pre-specified expected outcomes of interest were clearly reported; therefore, all of the studies were judged to be at low risk of reporting bias. Regarding other types of bias, use of an intervention manual and monitoring of the intervention procedure were considered essential for the risk of bias assessment in psychological intervention studies (Ranchor et al., 2012). Three studies (75%) provided an intervention manual (or scenario); however, it was not clear whether they monitored the intervention procedure.
Twenty-two non-RCT studies were assessed using RoBANS in relation to selection bias, performance bias, attrition bias, detection bias, and reporting bias. Overall, except for selection bias and detection bias, these studies were rated as low risk for those biases. In participant selection, three studies (13.6%) were rated as high risk due to inconsecutive selection of participants. Regarding detection bias, five (22.7%) of the studies blinded the outcome assessor, and six (27.3%) of the studies were rated as high risk because performance skill was measured by the investigators.
Effects of Simulation-Based Learning Using a High-Fidelity Patient Simulator
Effects of HFPS on Cognitive Domain of Learning
Major results of a combined analysis across 19 studies on the cognitive domain of learning are shown in Figure A (available in the online version of this article). The following cognitive domain outcomes were measured: knowledge acquisition (k = 14), problem-solving competency (k = 5), critical thinking (k = 7), clinical judgment (k = 3), and communication skills (k = 2). Despite significant heterogeneity (I2 = 89%), the current meta-analysis showed significant treatment effects on the cognitive domain of learning (d = −0.73, 95% CI [−1.00, −0.46], p < 0.001). The weighted average effect sizes across studies were −0.34 (95% CI [−0.67, 0.02], p = 0.04, I2 = 82%) for knowledge acquisition, −1.27 (95% CI [−2.21, −0.33], p = 0.008, I2 = 95%) for problem-solving competency, −0.75 (95% CI [−1.18, −0.32], p < 0.001, I2 = 81%) for critical thinking, and −1.72 (95% CI [−2.71, −0.73], p < 0.001, I2 = 90%) for clinical judgment. However, no significant treatment effect on communication skills (p = 0.20) was found. Because of the heterogeneity, subgroup analyses were conducted based on learning environment variables, such as student (grade) level and class course. Two studies involving junior students showed a significant effect on problem-solving competency (d = −0.97, 95% CI [−1.42, −0.52], p < 0.001), with homogeneity (I2 = 0%). Five studies involving an adult nursing course (d = −0.63, 95% CI [−0.90, −0.36], p < 0.001) or a junior class (d = −0.67, 95% CI [−0.90, −0.43], p < 0.001) showed a significant effect on critical thinking, with homogeneity (I2 = 0%). In addition, two studies involving an adult nursing course showed a significant effect on clinical judgment (d = −2.15, 95% CI [−2.58, −1.71], p < 0.001), with homogeneity (I2 = 0%). However, significant heterogeneity remained across the studies on knowledge acquisition in subgroup analysis (I2 = 51% to 85%).
Forest plot of effect size and 95% confidence interval (CI) by high-fidelity patient simulation on cognitive domain of learning and funnel plot of effect sizes by standard error (SE[SMD]). X axis indicates the effect size of each study.
In examination of publication bias, review of the funnel plot of effect sizes by standard errors showed even distribution of the cognitive domain of learning outcome (Figure A).
Effects of HFPS on the Affective Domain of Learning
Self-efficacy (k = 12) and learning satisfaction (k = 2) were measured in the affective domain of learning. The current meta-analysis showed a tendency suggesting that use of HFPS might be beneficial for self-efficacy (d = −0.49, 95% CI [−0.99, 0.00], p = 0.05), with large heterogeneity (I2 = 91%). In subgroup analyses according to student level (grade) and class course, significant heterogeneity was still found across the studies on self-efficacy (I2 = 0% to 95%). No significant effect on learning satisfaction (k = 2, p = 0.55) was found. In examination of publication bias, review of the funnel plot of effect sizes by standard errors showed a somewhat uneven distribution of outcomes.
Effects of HFPS on Psychomotor Domain of Learning
Results of a combined analysis across 11 studies (1,080 participants) that measured the psychomotor domain of learning (i.e., clinical competence) are shown in Figure B (available in the online version of this article). Despite significant heterogeneity (I2 = 89%), the current meta-analysis showed a significant treatment effect on the psychomotor domain of learning (d = −1.06, 95% CI [−1.46, −0.65], p < 0.001). In subgroup analyses according to class course, three studies of emergency/intensive care courses involving 299 participants showed a significant moderate effect on clinical competence (d = −0.81, 95% CI [−1.35, −0.27], p = 0.003, I2 = 76%). In examination of publication bias, a funnel plot of effect sizes by their standard errors showed an even distribution of studies, and the asymmetry test was not statistically significant (p = 0.186) (Figure B). However, the fail-safe N was 33.5, which did not exceed 5k + 10.
Forest plots of effect size and 95% confidence interval (CI) by high-fidelity patient simulation on clinical competence and funnel plot of effect sizes by standard error (SE[SMD]). X axis indicates the effect size of each study.
Despite the increasing prevalence of the use of HFPS as an educational tool in medical and nursing education (Yuan et al., 2012), little is known about its effectiveness. Much of what is claimed about simulation’s effectiveness rests on small studies, anecdotal reports, and expert opinion; high-quality RCTs are lacking. Thus, to examine the efficacy of HFPS on cognitive, affective, and psychomotor outcomes of learning, a meta-analysis was conducted across 26 studies with experimental designs, which included 2,031 nursing students.
The results of the meta-analyses indicated that HFPS might have beneficial effects on cognitive (d = −0.73, p < 0.001) and psychomotor domain of learning (d = −1.06, p < 0.001). The current results support that simulation-based learning enables students to critically analyze their technical skills; thus, students may repeat the scenario to enhance their skills and increase retention and application of knowledge (Gaba, 2004). However, prior to drawing a conclusion regarding the findings, more careful consideration of heterogeneity (I2 = 89%) among studies was needed. Inevitably, the studies brought together in a meta-analysis have different characteristics (i.e., intervention characteristics), which could affect the direction and magnitude of effects. Restricting the review to a well-defined subtype of HFPS would have resulted in an insufficient number of studies for a meta-analysis. Meta-analysis applies well only if the heterogeneity is less than 50% (Higgins & Green, 2008). Thus, subgroup analyses were performed according to categories of learning outcomes and potential moderating variables such as grade level and class courses.
The cognitive outcomes used in this analysis included knowledge acquisition, problem-solving competency, critical thinking, clinical judgment, and communication. These outcomes are in agreement with major categories of cognitive processes, which can be thought of as degrees of difficulty (Krathwohl, 2002). In the current subgroup analyses according to categories of the cognitive domain, use of HFPS for nursing students led to statistically significant enhancement of scores in problem-solving competency, critical thinking, and clinical judgment, based on two to four trials of junior students in an adult nursing course, with homogeneity (I2 = 0%). Therefore, based on these findings, HFPS, if integrated appropriately, can be a useful learning methodology in academic settings. Use of cognitive skills enables students to make clinical judgments based on the available information when facing ambiguous situations, unique cases, or unresolved problems not covered in textbooks (Oermann & Gaberson, 2009). Therefore, the development of high-level cognitive skills through the use of HFPS is meaningful. However, due to low power attributable to the small number of studies, further well-designed studies with large samples will be needed before a conclusion can be drawn in this area.
In the context of problem solving, the focus is to encourage students to develop critical thinking skills, as well as to impart knowledge (Lyons, 2008; Williams & Beattie 2008). Thus, a need exists for the evaluation of higher levels of the cognitive domain, such as application, analysis, and transfer (Shin & Kim, 2013).
In the case of knowledge acquisition, evidence of effectiveness of using HFPS was lacking in this meta-analysis. This result is not in agreement with the results of previous systematic reviews on the use of HFPS in nursing. Other systematic reviews reported that the use of HFPS in nursing students improves knowledge acquisition (Cant & Cooper, 2009; Kim, Park, & Shin, 2013; Lapkin et al., 2010; Weaver, 2011; Yuan et al., 2012).
The findings of the current study related to increased knowledge after simulation-based learning were nonsignificant, which was not surprising, as simulation-based learning is used to aid in the synthesis and application of knowledge rather than to gain new knowledge (Norman, 2012). Findings of the current meta-analysis supported the National Council of State Boards of Nursing’s national simulation study, a longitudinal, multisite study (Hayden, Smiley, Alexander, Kardong-Edgren, & Jeffries, 2014). That study found no significant differences among three study groups (simulation as a substitute for up to 25%, up to 50%, and less than 10% of traditional clinical time) regarding end-of-program nursing knowledge, and NCLEX® pass rates were statistically equivalent (Hayden et al., 2014). These findings suggest that the higher level learning assessed using a knowledge questionnaire was equivalent regardless of the teaching method used (Yuan et al., 2012).
The affective domain includes the manner in which learners deal with things emotionally, such as feelings, values, appreciation, enthusiasm, motivation, and attitudes (Krathwohl, 2002). In the current study, use of HFPS for nursing students showed no significant treatment effects on self-efficacy and learning satisfaction. The current results showing no significant effects on perceived self-efficacy are in accordance with those of previous systematic reviews (Cant & Cooper, 2009; Kim, Park, & Shin, 2013). In fact, the number of sessions and operating hours of the HFPS laboratory was small, with a mean of 3.7 sessions and 6.1 hours, which might account for the lack of significant effects on perceived self-efficacy and, in turn, on learning satisfaction. In addition, lower satisfaction with HFPS may be due to a combination of lack of previous experience with HFPS, perceived lack of realism in the interaction (Luctkar-Flude et al., 2012), and perhaps low power attributable to the small number of studies (k = 2). Learner satisfaction is important, as it may potentially enhance students’ engagement, thereby facilitating learning (Lapkin et al., 2010). Additional RCT studies with sufficient power will be needed before a more accurate conclusion can be made about learning satisfaction, as well as to test the effectiveness of feedback or debriefing on this outcome. Feedback or debriefing is the most important and frequently cited variable in the use of HFPS to promote effective learning (McGaghie, Issenberg, Petrusa, & Scalese, 2010) and learner satisfaction (Jeffries, 2005).
On the other hand, previous systematic reviews of HFPS (Kim, Park, & Shin, 2013; Lapkin et al., 2010; Weaver, 2011) reported significant effects on learning satisfaction, underscoring the need for further RCT studies on this outcome before an accurate conclusion can be reached.
In the current study, clinical competence (k = 11) in the psychomotor domain was evaluated as the second leading outcome variable of the HFPS educational approach. For nursing students, HFPS is used most effectively in the teaching and evaluation of clinical skills, such as emergency respiratory care and first aid. In the psychomotor domain of learning, HFPS had a significant effect on clinical competence (d = −1.06). Even though clinical competence was the only outcome variable of the psychomotor domain, significant heterogeneity was found (I2 = 89%). Thus, subgroup analyses were performed according to potential moderating variables, such as class course (i.e., adult nursing, pediatric nursing, emergency/intensive care), to examine whether the effect on learning outcomes differs according to the course subject. Three studies on emergency care courses found a significant moderate effect on clinical competence (I2 = 76%). A previous meta-analysis of problem-based learning also reported that courses including clinical education had larger effect sizes and better outcomes than those for adult health, maternal health, and other nursing courses (Shin & Kim, 2013). The current results support that HFPS use could enable students to learn and practice formative skills in a less threatening, controlled environment. This approach also focuses students’ attention on the acquisition of clinical skills and allows them to develop comfort in the management of sensitive patient issues. In addition, it allows students to receive immediate feedback on their performance and provides the opportunity to improve clinical skills (Becker, Rose, Berg, Park, & Shatzer, 2006). Core components of simulation include briefing, simulation, and debriefing exercises (Cant & Cooper, 2009). Feedback is essential and is perhaps the most important factor influencing learning (Issenberg, McGaghie, Petrusa, Gordon, & Scalese, 2005).
Korean studies constituted 61.5% (k = 16) of the 26 studies included in this review. Over the past few decades, nursing education in South Korea has faced many challenges: an increasing number of nursing programs competing for limited clinical sites, patient safety initiatives restricting nursing students’ activities to observing care, and a growing emphasis on competency-based nursing education. In response, the Korean Accreditation Board of Nursing issued an accreditation guideline allowing replacement of 10% of total traditional clinical hours with HFPS. Thus, since 2006, 75% of the 201 nursing programs in Korea have begun incorporating HFPS into their curricula, and simulation-based nursing education studies have been published in Korea (Kim, Park, & Shin, 2013), although half of them came from theses or dissertations. In general, although these Korean studies lacked randomization, they met the inclusion criteria and other methodological quality standards.
Many candidate studies failed to meet inclusion criteria because of one-group designs, insufficient data, descriptive designs, or nonrelevant interventions; ultimately, Korean studies constituted more than half of the studies included in this review. These facts prompted the authors to compare Korean and English studies on HFPS outcomes, and no differences were found between them regarding outcomes such as knowledge acquisition and self-efficacy. The findings of the current study are in agreement with previous results concluding that simulation education yielded psychomotor and cognitive treatment effects (Cant & Cooper, 2009; Lapkin et al., 2010; Yuan et al., 2012).
This review has some limitations. First, studies published in languages other than English or Korean were not included; therefore, some studies might have been missed, resulting in a small number of RCT studies and possible publication bias. Second, the variation in assessment instruments (mostly investigator-developed) may have caused problems of validity; based on the instruments used to evaluate outcomes in the reviewed studies, evidence for determining the best instrument is clearly inconsistent. Third, the methodological quality of the selected studies was limited: the reviewed RCTs lacked clarity about allocation concealment, and most of the reviewed studies had a nonequivalent control group design, which is at risk for selection bias. Finally, the use of small sample sizes in many studies resulted in insufficient power to detect effects of HFPS on the outcomes.
Despite some limitations of the current study, a tentative conclusion can be reached—the use of HFPS might have beneficial effects on cognitive outcomes (problem solving competency, critical thinking, and clinical judgment) and clinical skill acquisition. However, the effectiveness of using HFPS on affective outcomes (self-efficacy and learning satisfaction) appeared to be inconclusive. Conduct of further RCT studies with a larger sample will promote better understanding of whether HFPS has an effect on self-efficacy and learning satisfaction.
In nursing students, HFPS is used most effectively in teaching and evaluation of clinical skills, such as emergency respiratory care and first aid. In psychomotor domain of learning, HFPS had significant effects on clinical competence. The findings of the current study suggest that the use of HFPS could enable students to learn and practice formative skills in a less threatening and controlled environment. Therefore, based on the results of this study, the HFPS educational approach, if integrated appropriately, can be used in academic settings as an effective learning methodology.
- Becker, K.L., Rose, L.E., Berg, J.B., Park, H. & Shatzer, J.H. (2006). The teaching effectiveness of standardized patients. Journal of Nursing Education, 45, 103–111.
- Cant, R.P. & Cooper, S.J. (2009). Simulation-based learning in nurse education: systematic review. Journal of Advanced Nursing, 66, 3–15. Retrieved from http://dx.doi.org/10.1111/j.1365-2648.2009.05240.x doi:10.1111/j.1365-2648.2009.05240.x [CrossRef]
- Cohen, J. (1988). Statistical power analysis for the behavioral science (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
- Decker, S., Sportsman, S., Puetz, L. & Billings, L. (2008). The evolution of simulation and its contribution to competency. The Journal of Continuing Education in Nursing, 39, 74–80. doi:10.3928/00220124-20080201-06
- Egger, M., Smith, G.D., Schneider, M. & Minder, C. (1997). Bias in meta-analysis detected by a simple graphical test. British Medical Journal, 315, 629–634. doi:10.1136/bmj.315.7109.629
- Gaba, D. (2004). The future vision of simulation in health care. Quality and Safety in Health Care, 13(Suppl. 1), i2–i10. doi:10.1136/qshc.2004.009878
- Hayden, J.K., Smiley, R.A., Alexander, M., Kardong-Edgren, S. & Jeffries, P.R. (2014). NCSBN national simulation study: A longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. Journal of Nursing Regulation, 5(2), S1–S64. Retrieved from https://www.ncsbn.org/JNR_Simulation_Supplement.pdf
- Higgins, J.P. & Green, S. (2008). Cochrane handbook for systematic reviews of interventions, version 5.0.0. Chichester, England: The Cochrane Collaboration and John Wiley & Sons. doi:10.1002/9780470712184
- Issenberg, S.B., McGaghie, W.C., Petrusa, E.R., Gordon, D.L. & Scalese, R.J. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Medical Teacher, 27, 10–28. doi:10.1080/01421590500046924
- Jeffries, P.R. (2005). A framework for designing, implementing, and evaluating simulations used as teaching strategies in nursing. Nursing Education Perspectives, 26, 96–103.
- Kim, J.H., Park, I.H. & Shin, S.J. (2013). Systematic review of Korean studies on simulation within nursing education. The Journal of Korean Academic Society of Nursing Education, 19, 307–319. doi:10.5977/jkasne.2013.19.3.307
- Kim, S.Y., Park, J.E., Lee, Y.H., Seo, H.-J., Sheen, S.-S., Hahn, S.H. & Son, H.J. (2013). Testing a tool for assessing the risk of bias for non-randomized studies showed moderate reliability and promising validity. Journal of Clinical Epidemiology, 66, 408–414. doi:10.1016/j.jclinepi.2012.09.016
- Krathwohl, D.R. (2002). A revision of Bloom’s Taxonomy: An overview. Theory into Practice, 41, 212–218. doi:10.1207/s15430421tip4104_2
- Lapkin, S., Levett-Jones, T., Bellchambers, H. & Fernandez, R. (2010). Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: A systematic review. Clinical Simulation in Nursing, 6(6), e207–e222. doi:10.1016/j.ecns.2010.05.005
- Liberati, A., Altman, D.G., Tetzlaff, J., Mulrow, C., Gøtzsche, P.C., Ioannidis, J.P.A. & Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. BMJ, 339, b2700. doi:10.1136/bmj.b2700
- Luctkar-Flude, M., Wilson-Keates, B. & Larocque, M. (2012). Evaluating high-fidelity human simulators and standardized patients in an undergraduate nursing health assessment course. Nurse Education Today, 32, 448–452. doi:10.1016/j.nedt.2011.04.011
- Lyons, E.M. (2008). Examining the effects of problem-based learning and NCLEX-RN scores on the critical thinking skills of associate degree nursing students in a Southeastern Community College. International Journal of Nursing Education Scholarship, 5(1), 21. doi:10.2202/1548-923X.1524
- McGaghie, W.C., Issenberg, S.B., Petrusa, E.R. & Scalese, R.J. (2010). A critical review of simulation-based medical education research: 2003–2009. Medical Education, 44, 50–63. doi:10.1111/j.1365-2923.2009.03547.x
- Norman, J. (2012). Systematic review of the literature on simulation in nursing education. The ABNF Journal, 23(2), 24–28.
- Oermann, M.H. & Gaberson, K.B. (2009). Evaluation and testing in nursing education. New York, NY: Springer.
- Oh, S.S. (2002). Meta-analysis: Theory and practice. Seoul, Korea: Kon-Kuk University Press.
- Ranchor, A.V., Fleer, J., Sanderman, R., Van der Ploeg, K.M., Coyne, J.C. & Schroevers, M. (2012). Psychological interventions for cancer survivors and cancer patients in the palliative phase (Protocol). Cochrane Database of Systematic Reviews, 2012(1), Article CD009511. doi:10.1002/14651858.CD009511
- Rosenthal, R. (1993). Meta-analytic procedures for social research. Newbury Park, CA: Sage.
- Sangestani, G. & Khatiban, M. (2012). Comparison of problem-based learning and lecture-based learning in midwifery. Nurse Education Today, 33, 791–795. doi:10.1016/j.nedt.2012.03.010
- Shin, I.S. & Kim, J.H. (2013). The effect of problem-based learning in nursing education: A meta-analysis. Advances in Health Sciences Education, 18, 1103–1120. doi:10.1007/s10459-012-9436-2
- Shin, S.J., Park, G.H. & Kim, J.H. (2014). Effectiveness of patient simulation in nursing education: Meta-analysis. Nurse Education Today, 35, 176–182. doi:10.1016/j.nedt.2014.09.009
- Weaver, A. (2011). High-fidelity patient simulation in nursing education: An integrative review. Nursing Education Perspectives, 32, 37–40. doi:10.5480/1536-5026-32.1.37
- Williams, S.M. & Beattie, H.J. (2008). Problem-based learning in the clinical setting: A systematic review. Nurse Education Today, 28, 146–154. doi:10.1016/j.nedt.2007.03.007
- Yuan, H.B., Williams, B.A., Fang, J.B. & Ye, Q.H. (2012). A systematic review of selected evidence on improving knowledge and skills through high-fidelity simulation. Nurse Education Today, 32, 294–298. doi:10.1016/j.nedt.2011.07.010
References for Table A
- Alfes, C.M. (2011). Evaluating the use of simulation with beginning nursing students. Journal of Nursing Education, 50, 89–93. doi:10.3928/01484834-20101230-03
- Alinier, G., Hunt, B., Gordon, R. & Harwood, C. (2006). Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education. Journal of Advanced Nursing, 54, 359–369.
- Andrighetti, T.P., Knestrick, J.M., Marowitz, A., Martin, C. & Engstrom, J.L. (2012). Shoulder dystocia and postpartum hemorrhage simulations: Student confidence in managing these complications. Journal of Midwifery & Women’s Health, 57, 55–60. doi:10.1111/j.1542-2011.2011.00085.x
- Brannan, J.D., White, A. & Bezanson, J.L. (2008). Simulator effects on cognitive skills and confidence levels. Journal of Nursing Education, 47, 495–500.
- Chae, M.J. & Lee, E.J. (2012). The effect of a simulation-based education on post-operation patient care knowledge and clinical performance ability of nursing student. The Journal of Regional Studies and Development, 18, 290–305.
- Choi, E.H., Kwon, K.N. & Lee, E.J. (2013). Achievements of nursing students among simulation and traditional education of bleeding patients. Journal of Korean Academy Society of Nursing Education, 19, 52–59. doi:10.5977/jkasne.2013.19.1.52
- Corbridge, S.J., Robinson, F.P., Tiffen, J. & Corbridge, T.C.C. (2010). Online learning versus simulation for teaching principles of mechanical ventilation to nurse practitioner students. International Journal of Nursing Education Scholarship, 7(1), 1–9.
- Ha, Y.K. & Koh, C.K. (2012). The effects of mechanical ventilation simulation on the clinical judgment and self-confidence of nursing students. Perspectives in Nursing Science, 9, 119–126.
- Ham, Y.L. (2009). Development and evaluation of a simulation educational program using a high-fidelity patient simulator for undergraduate nursing students [dissertation]. Yonsei University, Seoul, Korea.
- Howard, V.M. (2007). A comparison of educational strategies for the acquisition of medical-surgical nursing knowledge and critical thinking skills: Human patient simulator vs. the interactive case study approach (EdD thesis). University of Pittsburgh, Pittsburgh, PA.
- Hur, H.K. & Roh, Y.S. (2013). Effects of a simulation based clinical reasoning practice program on clinical competence in nursing students. Korean Journal of Adult Nursing, 25, 574–584. doi:10.7475/kjan.2013.25.5.574
- Hur, H.K. & Park, S.M. (2012). Effects of simulation based education, for emergency care of patients with dyspnea, on knowledge and performance confidence of nursing students. Journal of Korean Academy Society of Nursing Education, 18, 110–118. doi:10.5977/jkasne.2012.18.1.110
- Im, K.J. (2014). Effects of simulation educational program for nursing students [dissertation]. Chonbuk National University, Jeonju, Korea.
- Kim, C.S. (2011). Development and effect of high fidelity patient simulation education program for nursing students [dissertation]. The Catholic University of Korea, Seoul, Korea.
- Kim, H.R. (2012). Development and effect of team based simulation learning program on undergraduate nursing students [dissertation]. Chosun University, Gwangju, Korea.
- Kim, D.H., Lee, Y.J., Hwang, M.S. & Park, J.H. (2012a). Effects of a Simulation-based Integrated Clinical Practice Program (SICPP) on the problem solving process, clinical competence and critical thinking in a nursing student. Journal of Korean Academy Society of Nursing Education, 18, 499–509. doi:10.5977/jkasne.2012.18.3.499
- Kim, S.A., Lee, S.K. & Chae, H.J. (2012b). Effects of clinical practice and simulation-based practice for obstetrical nursing. Korean Journal of Women’s Health Nursing, 18, 180–189. doi:10.4069/kjwhn.2012.18.3.180
- Kwon, H.S. (2013). Development and effects of nursing process simulation scenario [dissertation]. Kyungpook National University, Daegu, Korea.
- Kwon, M.S. (2009). The effects of simulation-based training for basic life support on the knowledge and skills of the nursing college students. Journal of the Korea Academia-Industrial Cooperation Society, 10, 3925–3930.
- Megel, M.E., Black, J., Clark, L., Carstens, P., Jenkins, L.D. & Promes, J. (2012). Effect of high-fidelity simulation on pediatric nursing students’ anxiety. Clinical Simulation in Nursing. doi:10.1016/j.ecns.2011.03.006
- Parker, R.A., McNeill, J.A., Pelayo, L.W., Goei, K.A., Howard, J. & Gunter, M.D. (2011). Pediatric clinical simulation: A pilot project. Journal of Nursing Education, 50, 105–111. doi:10.3928/01484834-20101230-05
- Scherer, Y.K., Bruce, S.A. & Runkawatt, V. (2007). A comparison of clinical simulation and case study presentation on nurse practitioner students’ knowledge and confidence in managing a cardiac event. International Journal of Nursing Education Scholarship, 4(1), 1–14.
- Tawalbeh, L.I. & Tubaishat, A. (2014). Effect of simulation on knowledge of advanced cardiac life support, knowledge retention, and confidence of nursing students in Jordan. Journal of Nursing Education, 53, 33–44. doi:10.3928/01484834-20131218-01
- Yang, J.J. (2008). Development and evaluation of a simulation-based education course for nursing students. Korean Journal of Adult Nursing, 20, 548–560.
- Yang, J.J. (2012). The effects of a simulation-based education on the knowledge and clinical competence for nursing students. Journal of Korean Academy Society of Nursing Education, 18, 14–24. doi:10.5977/jkasne.2012.18.1.014
- Yoo, S.Y. (2013). Development and effects of a simulation-based education program for newborn emergency care. Journal of Korean Academy of Nursing, 43, 468–477. doi:10.4040/jkan.2013.43.4.468
Descriptive Summary of Included Studies (N = 26)
Duration/No. of sessions
|Alinier et al. (2006)
||Intermediate-fidelity Adult nursing (intensive care)
||Traditional nursing course
||OSCE examination after 6 months
||OSCE performance (+) / Confidence (−) / Stress (−)
|Brannan et al. (2008)
||Adult nursing (acute myocardial infarction)
||2.1 hours (simulation 120 min + debriefing 10 min)
||Traditional classroom Lecture
||Investigator-designed scale / Madorin and Iwasiw’s Confidence Tool
||Cognitive Skill (+)/ Confidence (−)
|Corbridge et al. (2010)
||Emergency nursing (mechanical ventilation)
||Investigator-designed 12-item multiple choice / 5-item multiple choice
||Knowledge (−)/ Satisfaction (+)
||Adult Nursing (ACLS & acute ischemic stroke)
||2 hours (scenario 15 min + debriefing 45 min × 2 scenarios)
||Interactive case study educational intervention (2 hours)
||20 question exam created by the Health Education Systems, Incorporated examination
||Medical-surgical nursing knowledge(+)/ Critical thinking abilities(+)
|Megel et al. (2012)
||Pediatric Nursing course
||Low-fidelity learning experience (1 hour)
||State anxiety/ National League for Nursing student satisfaction & self-confidence
||State anxiety / self-confidence (post-only) (+) / satisfaction (post-only) (−)
|Choi et al. (2013)
||Simulation education (unclear)
Adult Nursing (care of bleeding patients)
||40 min (scenario 10 min + debriefing 30 min)
||C1: checklist + introduction to theory (10 min)
C3: no treatment
||Investigator-designed scale for bleeding patients
||Subjective knowledge (+) / skill & attitude (+) / Objective knowledge (+) / skill & attitude (−)
|Ha & Koh (2012)
||Intensive comprehensive education (mechanical ventilated patient)
||4.5 hours (simulation 10–15 min + debriefing 30 min/wk)
||Lecture & practice
||Investigator-designed scale (10 questions each)
||Clinical judgment (+) / self-confidence (+)
||Sophomore & Junior
||Adult patient care (nursing process)
||1.4 hours + nursing skills using simulator
||Case study (1.4 hours)
||Yoon’s (2004) scale / Woo’s (2000) scale / Investigator-designed scale
||Critical thinking disposition (partially +)
Problem solving ability (+) / knowledge achievement (−)
|Hur & Roh (2013)
||Adult Nursing (abdominal pain, change in mental status, dyspnea)
||1.5 hours (simulation 20 min + debriefing 60 min + observation)
||Traditional nursing education
||Yoon’s (2004) scale / rubric scale (clinical judgment & performance)
||Critical thinking (−) / clinical judgment (+) / clinical performance (+)
|Hur & Park (2012)
||Adult Nursing (emergency care of patients with dyspnea)
||1.16 hours (OT 10 min + performance 12 min + observation 24 min + debriefing 30 min)
||Lecture using audio-visual material
||Knowledge (+) / performance confidence (+)
||Adult Nursing (care of electrolyte imbalanced patient)
||Traditional lecture & practice
||Yoon’s (2004) scale/ Lasater’s (2007) scale
||Critical thinking disposition (+) / clinical performance ability (+)
||Adult Nursing (care of mechanical ventilated patient)
||Investigator-designed scale (problem-solving ability, knowledge, & skill performance) / 10cm visual analogue scale
||Problem-solving ability (+) knowledge (−) skill performance (−)/ self-confidence (−)
||Adult Nursing (respiratory)
||Case study & nursing skill practice (2.8 hours) + clinical practice (40 hours)
||Marshall’s (2003) scale / Lee et al.’s (2003) scale/ 20 questions by Korean Nurse Association work book (Academic achievement)
||Interpersonal understanding (+) / Problem-solving ability (+) / academic achievement (knowledge, clinical performance) (+)
|Kim et al. (2012a)
||Integrated clinical practice (L-tube feeding, suctioning, intravenous therapy, Foley catheterization)
||3 hours (scenario 1.5 hours + competence skill 1.5 hours)
||Traditional nursing course (lecture + practice)
||Lee et al.’s (2008) scale / Ahn’s (2000) scale / Yoon’s (2004) scale
||Problem solving process (−) / nursing clinical competence (+) / critical thinking (+)
|Kim et al. (2012b)
||E: 35 (2009)
C: 35 (2010)
||Maternity Nursing (delivery)
||Traditional nursing course (waiting list)
||Yoo’s (2001) scale/ Yang & Park’s (2004) scale
||Communication skill (+) / Clinical competence (+)
||Emergency Nursing (basic life support)
||Traditional nursing course (lecture + manikin)
||20 questions created by American Heart Association for basic life support
||Basic life support knowledge (−)/ Basic life support skill (+)
C: 90 (2006)
||Emergency & Intensive Nursing (COPD & MI)
||24 hours (scenario 2 hours + debriefing 2 hours/wk)
||Kim et al.’s (1998) scale/ Facione’s & Facione’s (1992) scale/Woo’s (2000) scale
||Clinical competence (+) / critical thinking disposition (−) / problem solving ability (+)
||E: 94 (2011)
C: 91 (2010)
||Adult Nursing (acute renal failure)
||Traditional lecture (2 hours)
||Investigator-designed scale / Lee et al.’s (1990) clinical competence scale
||Knowledge (−) / clinical competence (+)
||Pediatric Nursing (Newborn emergency care: preterm infant & meconium aspiration syndrome module)
||8 hours (lecture 3 hours + clinical training 3 hours + simulation 2 hours)
||Neonatal resuscitation lecture (3 hours)
||Newborn emergency care Knowledge (−) / newborn emergency care confidence (+)
|Tawalbeh & Tubaishat (2014)
||Simulation scenario (ACLS) + PowerPoint presentation + static manikin
||40 min (scenario 30 min + debriefing 10 min)
||PPT presentation + static manikin
||Investigator-designed scale / Arnold et al.’s (2009) confidence scale
||Knowledge (+) / confidence (+)
||Practice course (pain care)
||25 min (role play with SimMan 15 min + debriefing 10 min)
||Traditional clinical care
||National League for Nursing’s Student Satisfaction & Self Confidence scale
||Self-confidence (+) / Satisfaction (−)
|Parker et al. (2011)
||Child Health Course (simulated + traditional clinical experience)
||Traditional clinical experience
||Simulation practice (DM nursing process)
||Investigator-designed scale
||Nursing process performance (+)/self-confidence (+)
|Chae & Lee (2012)
||Simulation-based education (post-operative care)
||25 min (education 15 min + debriefing 10 min)
||Traditional lecture-based education
||Knowledge (+)/ clinical competence (+)
|Scherer et al. (2007)
||Adult Nursing (atrial fibrillation)
||Seminar using the same case scenario
||Morgan et al.’s (2002) Knowledge scale/ Investigator-designed confidence scale
||Knowledge (−) / Confidence (−)
|Andrighetti et al. (2012)
||Obstetric emergency (dystocia and postpartum hemorrhage)
||Unclear (simulation experience + debriefing)
||Standard teaching method
||The National League for Nursing Student Satisfaction and Self Confidence in Learning Instrument
||Self confidence (+)