Journal of Nursing Education

Major Article 

Validation of the Simulation Effectiveness Tool in Nursing Education

Hyunsook Shin, PhD, RN, CPNP; Hyojin Kim, RN; Dahae Rim, RN; Hyunhee Ma, RN; Soonyoung Shon, PhD, RN, FNP

Abstract

Background:

The Simulation Effectiveness Tool (SET) frequently is used to assess perceived learning and confidence in simulation. However, few studies have reported the validity of the tool. This study assessed the reliability and validity of the SET.

Method:

This retrospective analysis evaluated the tool using 568 cases conducted at three nursing schools.

Results:

A two-factor model showed reasonable fit indices. The fit statistics for the two-factor structure were: χ2, 152.98 (df = 53, p < .001); comparative fit index, 0.94; root mean square error of approximation, 0.05 (range, 0.04 to 0.06); and standardized root mean square residual, 0.04. In addition, weak convergence was identified between confidence in the SET and responding in the Lasater rubric.

Conclusion:

The psychometric properties of the study indicate the SET has demonstrated acceptable evidence of validity and reliability to measure simulation effectiveness in Korean nursing students. The use of this instrument for brief simulation education is recommended. [J Nurs Educ. 2020;59(4):186–193.]

Simulation methods such as human patient simulation are widely used in nursing education, and a wide range of strategies is available to evaluate the effectiveness of simulation (Foronda, Liu, & Bauman, 2013). Use of reliable and valid tools to measure simulation effectiveness is critical considering that the domains of simulation outcomes are linked to patient safety and quality of nursing care (Shin, Park, & Kim, 2015).

Simulation effectiveness can be evaluated on many levels, including cognitive, psychomotor, and affective achievement in Bloom's taxonomy (Anderson & Krathwohl, 2000), and participant reaction, learning, performance, and outcomes in the Kirkpatrick model (Kirkpatrick & Kirkpatrick, 2016). Participant reaction often is chosen as a first evaluation step, but few instruments measure the core elements of participant reaction related to simulation effectiveness. Considering that a positive participant reaction is essential for effective simulation outcomes (Østergaard, Dieckmann, & Lippert, 2011), participant reaction, often termed confidence and satisfaction with learning (Jeffries, 2012; Kirkpatrick & Kirkpatrick, 2016), typically is measured before integrative learning effects. Previous studies (Moser, Zumbach, & Deibl, 2017; Shin, Ma, Park, Ji, & Kim, 2015) reported that simulation experience significantly increased learners' metacognitive thinking as well as knowledge acquisition.

Researchers have used several instruments to evaluate student reactions to and learning from simulation. These include the Simulation Learning Effectiveness Inventory (Chen, Huang, Liao, & Liu, 2015), the Simulation Learning Effectiveness Scale for nursing students (Pai, 2016), the NLN Student Satisfaction and Self-Confidence in Learning Scale (Franklin, Burns, & Lee, 2014), the Satisfaction With Simulation Experience Scale (Levett-Jones et al., 2011), the General Perceived Self-Efficacy Scale (Schwarzer & Born, 1997), and the Simulation Effectiveness Tool (SET) (Cordi, Leighton, Ryan-Wenger, Doyle, & Ravert, 2012). Instruments measuring simulation effectiveness in previous research demonstrated good performance (Shin, Ma, et al., 2015), but they often contained an excessive number of items or complex structures that prevented simulation educators from using the instruments for frequent educational sessions (Facione, Facione, & Sanchez, 1994; Herm, Scott, & Copley, 2007).

The SET was developed in 2005 to evaluate simulation effectiveness at the level of participant reaction to nursing simulation (Cordi et al., 2012). The original SET was created when several nursing schools collaborated on a company-developed simulation curriculum (CAE Healthcare, 2012), and the tool was designed to evaluate students' perceptions of their simulation experiences. Using a content analysis method, common themes were identified across the instruments used in each simulation program. The three resulting concepts were learning (skills or knowledge), confidence (self-belief in increased skill or knowledge), and satisfaction (attitudes).

Cordi et al. (2012) reduced the SET from 20 to 13 items based on an exploratory factor analysis (EFA) that showed loadings onto two subscales: perceived learning and confidence. The response scale also was changed from a 5-point to a 3-point scale because the two showed no significant differences in reliability, and the 5-point scale did not yield greater stability or a greater capacity to distinguish among the subscales. The original study included 75 nursing students, and 200 prelicensure sophomore, junior, and senior nursing students participated in the psychometric evaluation of the SET. The revised instrument's reliability was reported as a Cronbach's alpha of .93 for the total scale, .88 for the confidence subscale, and .87 for the perceived learning subscale (Cordi et al., 2012).

Recently, Leighton, Ravert, Mudra, and Macintosh (2015) revised the SET to create the SET-M, increasing the number of items from 13 to 19. However, because the SET-M adds prebriefing and debriefing subscales to the original confidence and perceived learning subscales, the revised version suffers from an incoherent subscale classification: the added prebriefing and debriefing subscales concern simulation structure, whereas the original perceived learning and confidence subscales concern simulation outcomes.

Other studies (Leighton et al., 2015; Zhang, Ura, & Kaplan, 2014) have used the SET to evaluate the learning effect of simulation in nursing students. Previous studies reported the SET provided opportunities for students to reflect on their simulation experience and for faculty to understand students' experience while evaluating strengths and weaknesses of the curriculum and faculty performance (Hammer, Fox, & Hampton, 2014; Leighton et al., 2015).

Because the SET contains only 13 items in a convenient self-report form, it seems suitable for measuring participants' perceived learning effect in simulations with simple or introductory-level scenarios that allow limited evaluation time. Although situation-based instruments that correspond with individual scenarios are in high demand for recent, more complicated nursing simulations, instruments reflecting general simulation objectives, such as the SET, also are needed for evaluating simple or introductory simulations (Hammer et al., 2014). However, few studies have reported reliability and validity information for evaluation instruments such as the SET that are frequently used in nursing research and education. Therefore, this study aimed to assess the reliability and validity of the SET using retrospective data, examining its construct validity with confirmatory factor analysis (CFA) and its convergent validity against the Lasater Clinical Judgment Rubric (LCJR) (Lasater, 2007), a tool identified as able to provide comprehensive evaluation of simulation effectiveness (Kardong-Edgren, Adamson, & Fitzgerald, 2010). The goal was to determine whether this easy-to-use instrument is suitable for evaluating simulation effectiveness.

Method

Study Design

This study was a retrospective analysis that evaluated the reliability and validity of the SET using data generated in a simulation project conducted by the authors. Specifically, the study used data from a routine simulation practicum at three nursing schools in Seoul, South Korea. In total, data from 568 students were used (250 for EFA and 318 for CFA). These were acceptable sample sizes according to the rule of thumb from the Monte Carlo approach (Harrington, 2009), which states that the necessary sample size for nonnormal, complete data is greater than 315.

Data Collection

Data were collected during longitudinal research conducted from 2012 to 2014 for development of integrated pediatric nursing coursework using human patient simulation (Shin & Kim, 2014). Data were generated from participant use of two self-report instruments during a pediatric simulation. The simulation included a febrile context scenario set in a pediatric unit and an apnea context scenario in a neonatal intensive care unit. The simulation took approximately 1 hour including operation, self-analysis, and debriefing time. The main mechanism of simulation included the clinical judgment process, such as noticing, interpreting, responding, and reflecting.

Each simulation followed a protocol of pre-learning, orientation, simulation operation, reflective assessment, recommendation writing, self-evaluation, and debriefing. The self-report instruments were the SET and the LCJR. Under the inclusion criteria, data were selected from participants who completed both the SET and LCJR. The initial size of the participant sample was 581 students; however, data from 13 students were excluded due to incomplete data coding, resulting in the final sample size of 568 students.

General Data Characteristics

Data from 568 simulation participants were used to evaluate the SET's reliability and validity. The majority of participants were female (95%). Nearly two thirds (65%) of the participants were in their senior year, and the remaining participants (35%) were in their junior year. Mean age of participants was 22.06 ± 1.76 years. The data included samples pertaining to febrile care (62%) and apnea management (38%) scenarios.

Instruments

Simulation Effectiveness Tool. The SET (Cordi et al., 2012) is an instrument used to measure simulation effectiveness in nursing students. The original SET included two domains: perceived learning and confidence. This study used the SET containing two domains and 13 items. Translation and adaptation of the SET followed World Health Organization (WHO) (2016) guidelines. The forward translation of the original English version of the SET was completed by a nursing faculty member, and the tool was back-translated into English by an expert who was fluent in Korean and English. The content of the original and Korean versions then were compared, and a final version was developed (WHO, 2016). Each of the 13 items in the SET is scored on a scale ranging from 0 (do not agree) to 2 (strongly agree), with a maximum total score of 26. In a previous study, the reliability of the SET was calculated as a Cronbach's alpha of .93; the reliability of the SET's confidence and perceived learning domains was calculated as a Cronbach's alpha of .88 and .87, respectively (Zhang et al., 2014).

Lasater Clinical Judgment Rubric. The LCJR is a rubric used to measure clinical judgment in nursing (Shin, Park, & Shim, 2015). This rubric is based on Tanner's clinical judgment model (2006), which includes four subdomains of clinical judgment: noticing, interpreting, responding, and reflecting. The LCJR reflects these four subdomains of Tanner's model and consists of 11 items; each item is scored as exemplary, accomplished, developing, or beginning. The 3-item noticing phase consists of focusing on observation, recognizing deviation, and information seeking by nursing students, and the 2-item interpreting phase consists of prioritizing and interpreting data. The 4-item responding phase consists of focusing on mannerisms, communication skills, interventions/flexibility, and use of nursing skills (Victor-Chmil & Larew, 2013), and the 2-item reflecting phase consists of evaluation and planning for improvement.

A high value on the LCJR indicates a higher level of clinical judgment. The LCJR can be used as a faculty-rating rubric and as a self-rating rubric (the self-LCJR) to measure clinical judgment in simulation (Strickland, Cheshire, & March, 2016). The self-LCJR was chosen for convergent validity because clinical judgment primarily reflects the learner's competency, which corresponds with confidence and perceived learning in the SET. The reliability of the LCJR was calculated as a Cronbach's alpha of .82 in a previous study (Strickland et al., 2016).

Data Analysis

Factor Analysis. EFA was performed to determine the number of factors. Bartlett's test of sphericity was used to test whether factor analysis was appropriate, and the Kaiser-Meyer-Olkin measure of sampling adequacy was calculated. Correlation coefficients were analyzed by principal component analysis with varimax rotation, which identified two latent factors. Factors were extracted based on Kaiser's criterion of eigenvalues greater than or equal to 1. With all 13 items, the EFA yielded two factors, but item 4, "I developed a better understanding of the medications that were in the simulated clinical experience (SCE)," did not load cleanly on either. When item 4 was excluded, the EFA demonstrated two distinct factors.
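
For readers who want to reproduce this workflow, the following is a minimal sketch in Python using the factor_analyzer package; the study itself used standard statistical software, so this is illustrative only, and the data file and column names (E1 through E13) are hypothetical.

```python
# Minimal EFA sketch with the Python factor_analyzer package, mirroring the
# steps described above. "set_responses.csv" and columns E1-E13 are
# hypothetical stand-ins for the 13 SET item responses (coded 0-2).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

set_items = pd.read_csv("set_responses.csv")  # 13 SET items per student

# Suitability checks: Bartlett's test of sphericity and the KMO measure.
chi2, p = calculate_bartlett_sphericity(set_items)
_, kmo_total = calculate_kmo(set_items)
print(f"Bartlett chi2 = {chi2:.2f} (p = {p:.4f}); KMO = {kmo_total:.2f}")

# Kaiser's criterion: retain factors with eigenvalues >= 1.
fa_all = FactorAnalyzer(n_factors=set_items.shape[1], rotation=None,
                        method="principal")
fa_all.fit(set_items)
eigenvalues, _ = fa_all.get_eigenvalues()
n_factors = int((eigenvalues >= 1).sum())

# Principal component extraction with varimax rotation.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax",
                    method="principal")
fa.fit(set_items)
loadings = pd.DataFrame(fa.loadings_, index=set_items.columns,
                        columns=[f"Factor {i + 1}" for i in range(n_factors)])
print(loadings.round(2))  # weak loaders (e.g., item 4 here) are removal candidates
```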

Reliability and Validity. Internal consistency, measured as Cronbach's alpha, was used to evaluate the reliability of the SET, and CFA and convergent validity were used to assess its validity. In the first step, CFA was performed to validate the models proposed in the EFA results and in the original SET study. The structural equation modeling module of Stata® version 13.0 was used for the CFA (Acock, 2013). The following statistics were used to estimate overall model fit: the chi-square statistic and associated probability (p), the root mean square error of approximation (RMSEA), the standardized root mean square residual (SRMR), the comparative fit index (CFI), the coefficient of determination, the Tucker-Lewis index (TLI), Akaike's information criterion (AIC), and the Bayesian information criterion (BIC).
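
As a rough illustration of the two-factor CFA (the study used Stata's sem command), the following sketch uses the Python semopy package; the item-to-factor assignments follow Table 2, and `set_items` is the hypothetical DataFrame from the EFA sketch above.

```python
# Sketch of the two-factor CFA in Python with semopy; the study itself used
# Stata's structural equation modeling. `set_items` is the hypothetical
# DataFrame from the EFA sketch, with item E4 dropped for the 12-item model
# (model 3 in Table 3).
import semopy

model_desc = """
confidence =~ E2 + E3 + E5 + E6 + E7 + E8 + E9
learning   =~ E1 + E10 + E11 + E12 + E13
"""

model = semopy.Model(model_desc)
model.fit(set_items.drop(columns=["E4"]))

# Fit statistics: chi-square, CFI, TLI, RMSEA, AIC, BIC, and others.
print(semopy.calc_stats(model).T)
```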

The second step was to validate the instrument's constructs by analyzing convergence between the SET and the self-LCJR. To estimate construct validity by identifying convergence, Pearson's r was calculated; the values of the correlation coefficients indicated the degree of convergence between the two instruments. An absolute value of r less than .3 indicated a weak relationship, a value between .3 and .5 indicated a moderate relationship, and a value greater than .5 indicated a strong relationship (Grove & Cipher, 2016). SPSS® version 23.0 was used to evaluate the convergent validity of the SET.
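
A small sketch of this convergence step, using SciPy's Pearson correlation and the Grove and Cipher (2016) cutoffs quoted above; the score arrays here are hypothetical stand-ins for per-student totals, not the study data.

```python
# Convergent-validity sketch: Pearson's r between total SET and self-LCJR
# scores, labeled with the Grove and Cipher (2016) cutoffs quoted above.
# The score arrays are hypothetical stand-ins for per-student totals.
import numpy as np
from scipy import stats

def strength(r: float) -> str:
    """|r| < .3 weak; .3 to .5 moderate; > .5 strong."""
    if abs(r) < 0.3:
        return "weak"
    if abs(r) <= 0.5:
        return "moderate"
    return "strong"

rng = np.random.default_rng(7)
set_total = rng.integers(0, 25, size=100).astype(float)    # 12 items x 0-2
lcjr_total = 0.3 * set_total + rng.normal(0, 4, size=100)  # toy LCJR totals

r, p = stats.pearsonr(set_total, lcjr_total)
print(f"r = {r:.2f} (p = {p:.3f}); {strength(r)} relationship")
```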

Ethical Consideration

This study received exemption from the university's institutional review board.

Results

Factor Analysis

Table 1 shows the EFA results. The EFA indicated two factors comprising 12 items. Factor 1 comprised seven items with factor loadings ranging from .43 to .78, and factor 2 comprised five items with factor loadings ranging from .46 to .81. Items 3 (I developed a better understanding of the pathophysiology of the conditions in the SCE) and 7 (My assessment skills improved), originally classified under the perceived learning subscale, moved to the confidence subscale. Item 4 (I developed a better understanding of the medications that were in the SCE) was removed because of its weak loading, resulting in a modified version of the SET with two factors and 12 items. The modified version explained 48.8% of the variance.

Table 1: Factor Loadings for the Modified 13-Item and Final 12-Item SET

Reliability

Each item of the SET examined in this study is presented in Table 2. Cronbach's alpha coefficient was used to evaluate the internal consistency of the SET, and the coefficient for the 12-item total was .84. Cronbach's alpha values for the two SET domains were .70 for perceived learning and .83 for confidence. Overall, the findings supported the reliability of the SET used in this study.
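
For reference, Cronbach's alpha as used here can be computed directly from the item-score matrix; a minimal sketch follows, assuming a students-by-items NumPy array for one subscale.

```python
# Minimal Cronbach's alpha, assuming `items` is a (students x items) array of
# one subscale's responses; e.g., the seven confidence items here gave ~.83.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```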

Table 2: Mean and Reliability Values for Simulation Effectiveness Tool (N = 557)

Construct Validity

CFA was performed to confirm the two domains of the SET. Table 3 compares the model fit statistics. The recommendations of Hooper, Coughlan, and Mullen (2008) for fit statistics were applied: RMSEA values less than .07 indicated good fit; CFI values greater than .9 were the cutoff criterion, with .95 indicating good fit; SRMR values less than .08 were considered acceptable; and TLI values greater than .80 were the cutoff. Both the AIC and BIC were used to compare models, with smaller values preferred.
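
These acceptability checks can be summarized in a few lines; a sketch applying the cutoffs listed above to the model 3 values from Table 3.

```python
# Applying the fit cutoffs listed above (Hooper, Coughlan, & Mullen, 2008)
# to the model 3 indices reported in Table 3.
def fit_checks(cfi: float, rmsea: float, srmr: float, tli: float) -> dict:
    return {
        "CFI > .90 (good fit at .95)": cfi > 0.90,
        "RMSEA < .07": rmsea < 0.07,
        "SRMR < .08": srmr < 0.08,
        "TLI > .80": tli > 0.80,
    }

print(fit_checks(cfi=0.94, rmsea=0.05, srmr=0.04, tli=0.93))
# every criterion is met, consistent with model 3 being retained
```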

Table 3: Model Fit Statistics for the Original, Modified, and Final 12-Item Simulation Effectiveness Tool

In the CFA, model 3, composed of perceived learning and confidence with 12 items, was confirmed as having the best fit among the tested models, including model 1 from the original SET and model 2 from the modified SET with 13 items based on the EFA results. The fit statistics for the final model 3 included a chi-square of 152.98 (df = 53, p < .001), CFI value of .94, RMSEA of .05 (.04 to .06), SRMR of .04, AIC of 8518.78, and BIC of 8678.71.

Figure 1 illustrates the standardized SEM results for the 12-item SET model. The correlation coefficient between confidence and perceived learning was .65, indicating that these two latent variables were highly correlated. The factor loadings for the seven items of the latent variable of confidence ranged from .48 to .75, and those for the five items of the latent variable of perceived learning ranged from .40 to .69. Overall, the items of confidence and perceived learning showed strong loadings.

Figure 1. Confirmatory factor analysis results for the modified Simulation Effectiveness Tool.

Convergent Validity

Table 4 shows the correlation coefficients between SET and self-LCJR scores used to identify convergent validity. The overall SET and self-LCJR scores showed significant correlation (r = .29; p < .01), indicating a weak relationship. The highest correlation coefficient among domains was between confidence in the SET and responding in the self-LCJR (r = .32; p < .01), followed by that between confidence in the SET and noticing in the self-LCJR (r = .29; p < .01).

Table 4: Convergent Validity Between 12-Item SET and Self-LCJR

Discussion

The findings of this study indicate the SET has demonstrated acceptable evidence of validity and reliability for measuring simulation effectiveness in Korean nursing students. More specifically, the results obtained from reliability and validity tests showed that the two-factor model of the SET was appropriate for use by nursing students in assessing simulation effectiveness.

The SET was developed as a self-assessment tool for nursing students engaging in simulation education, and it has been viewed as one of several reliable instruments for nursing student assessment of simulation outcomes in terms of effectiveness. The internal consistency finding of this study, a Cronbach's alpha of .84, indicates the SET has strong consistency for measuring simulation effectiveness among Korean nursing students, and this finding is similar to the results of previous studies (Kim, 2016; Shin, Ma, et al., 2015) reporting strong consistency among nursing students.

In addition, the results of this study indicate the SET is a valid instrument for measuring simulation effectiveness among nursing students. The two-factor modified SET model (model 3), including the perceived learning and confidence domains, showed reasonable fit indices, specifically, a CFI of .94, an RMSEA of .05 (.04 to .06), and an SRMR of .04. Although the CFI is less than .95 according to Bentler's conservative recommendation, these fit values are in the range of a good model fit (Brown, 2015). When the modification indices of the two-factor SET model were examined, no significant error covariances were identified, meaning that the individual SET items were mutually exclusive.

Fit indices for the two-factor SET model restructured based on the EFA findings, without item 4, were better than those for the original model and the two-factor 13-item model that included item 4. Considering that item 4 addressed a better understanding of the medications, it captures actual knowledge acquisition, which may not be well suited to evaluating simulation effectiveness at the level of participant reaction. In analyzing simulation studies in accordance with Bloom's taxonomy of learning domains, Kim, Park, and Shin (2013) reported that knowledge had been measured mainly in the cognitive domain and that confidence had been confirmed as a main indicator in the affective domain. In addition, Cant and Cooper (2010) reported that the main effects of simulation in nursing education were learners' knowledge and satisfaction with the experience.

The SET was developed based on the essential concepts of perceived learning and confidence as the effects of nursing simulation, which correspond with the first two levels of the Kirkpatrick and Kirkpatrick (2016) four-level model of evaluation: reaction, learning, behavior, and outcome. The Kirkpatrick model suggests that learners should acquire lower-level outcomes before achieving higher ones. The first level of the Kirkpatrick model is reaction, which corresponds to the confidence subscale of the SET, and the model's second level is learning, which corresponds to the perceived learning subscale of the SET. In fact, considering that items on the perceived learning subscale are likely to capture metacognitive thinking, the learning measured here indicates metacognitive thinking rather than actual knowledge acquisition. Metacognition is the awareness and understanding of one's own thinking and cognitive processes (Mariani et al., 2013). Therefore, the SET is a tool that measures simulation effectiveness through confidence and metacognitive thinking in simulation.

As shown in Figure 1, the correlation coefficient between confidence and perceived learning was .65, suggesting that these two latent variables were correlated. The factor loadings for the seven items in the latent variable of confidence ranged from .48 to .75, and those for the five items in the latent variable of perceived learning ranged from .40 to .69. Based on the rule of thumb (Tabachnick & Fidell, 2007), most items in the confidence and perceived learning domains showed fair to excellent loadings.

Regarding convergent validity, some degree of convergence (r = .29) was identified between the overall SET and self-LCJR scores, and the highest correlation among subscales (r = .32) was between the confidence scale of the SET and the responding scale of the self-LCJR. The LCJR is based on Tanner's (2006) clinical judgment model, and confidence is one of the components of clinical judgment (Blum, Borglund, & Parcells, 2010). The confidence construct in the SET can thus be explained in terms of its correlation with responding in the LCJR. Given that confidence, learning, and clinical judgment are major learning effectiveness outcomes in nursing simulation (O'Donnell, Decker, Howard, Levett-Jones, & Miller, 2014), the correlations between the SET and the self-LCJR presented in this study are important in demonstrating the validity of the SET.

However, the perceived learning subscale of the SET correlated significantly only with the reflecting subscale of the LCJR and showed hardly any correlation with the other LCJR subscales. Given that metacognition is based on reflection on one's own knowledge and skills to improve perceived learning (Shelley, 2019), this correlation could be interpreted as underscoring the importance of reflection in simulation methods. The perceived learning domain of the SET is associated conceptually with the reflecting domain of the LCJR, one of the clinical judgment processes. However, because the perceived learning domain of the SET has a low correlation with the other LCJR domains, it appears to have a conceptual structure different from those domains. Another possible reason is that students scored highly on all items in the perceived learning scale, which probably resulted in low variance, making it difficult to find associations with other variables.

The overall statistical results indicate the SET is a reliable and valid instrument for evaluating simulation effectiveness among nursing students. The instrument consists of confidence and perceived learning domains. The simulation education process involves setting objectives from the three domains of Bloom's taxonomy, the learners' scenario experience, and the outcome evaluation process (O'Donnell et al., 2014). The objectives reflect desired effects of Bloom's psychomotor, cognitive, and affective domains on the learners' scenario experience, and the simulation's effectiveness can be identified using outcome evaluations provided by the learners.

Use of simulation in nursing education can help educators to establish objectives fulfilling all three domains of Bloom's taxonomy and can help learners achieve learning outcomes in all four levels of the Kirkpatrick model. Before student behaviors and results are evaluated based on the Kirkpatrick model, the SET can be used as a formative evaluation tool for confirming the effectiveness of simulation education. For an instrument to be used as a formative tool, brevity, simplicity, and ease of use are important (Shute, 2008). The modified SET is a simple and easy-to-use tool composed of 12 items and a three-level Likert scale, and given the results of the current study, the instrument is suitable for formative evaluation of nursing simulation.

Limitations

This study has some limitations related to its retrospective analysis design. First, the original SET was identified as a reliable and valid instrument suitable for formative evaluation in nursing simulation education. However, a newly revised instrument, the SET-M, includes two additional domains of prebriefing and debriefing (Leighton et al., 2015) along with the original two domains of perceived learning and confidence. This study used the original SET for its simplicity and to focus on the effectiveness of the simulation scenario. Considering that prebriefing and debriefing are procedural concepts in simulation education, the newly introduced domains of the SET-M (prebriefing, learning, confidence, and debriefing) need to be tested carefully in terms of the conceptual coherence of the domain classification. Therefore, the findings of this study should be interpreted with the understanding that it tested the original SET rather than the revised, as-yet-untested instrument.

In addition, although the relatively low Cronbach's alpha value for the perceived learning subscale in the suggested model 3 may reflect the small number of items, the items in the perceived learning subscale should be further explored and modified to provide clearly defined metacognitive thinking scales rather than measures of actual knowledge acquisition. Considering that the data in this study came from the SET and the self-LCJR, two instruments not thoroughly analyzed in previous studies, further investigation of the SET would likely yield meaningful information for its use in simulation education.

Conclusion

Reliable and valid evaluation tools are needed for nursing students to evaluate the effectiveness of simulation and thereby improve the overall quality of nursing education. In this study, the reliability and validity of the SET were established. The findings provide evidence supporting the SET's two-factor structure of confidence and perceived learning. Educators and researchers can efficiently use this instrument with simple scenarios and virtual simulations, with or without debriefing.

References

  • Acock, A.C. (2013). Discovering structural equation modeling using Stata. College Station, TX: Stata Press.
  • Anderson, L.W. & Krathwohl, D.R. (Eds.). (2000). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy. New York, NY: Longman.
  • Blum, C.A., Borglund, S. & Parcells, D. (2010). High-fidelity nursing simulation: Impact on student self-confidence and clinical competence. International Journal of Nursing Education Scholarship, 7(1), 1–14. doi:10.2202/1548-923X.2035 [CrossRef]
  • Brown, T.A. (2015). Confirmatory factor analysis for applied research (2nd ed.). New York, NY: Guilford.
  • CAE Healthcare. (2012). Program for nursing curriculum integration (PNCI) simulation effectiveness tool (SET). Retrieved from http://www.hpsn.com/documents/1348/simulation_effectiveness_tool.pdf
  • Cant, R.P. & Cooper, S.J. (2010). Simulation-based learning in nurse education: Systematic review. Journal of Advanced Nursing, 66(1), 3–15. doi:10.1111/j.1365-2648.2009.05240.x [CrossRef]
  • Chen, S.-L., Huang, T.-W., Liao, I.-C. & Liu, C. (2015). Development and validation of the Simulation Learning Effectiveness Inventory. Journal of Advanced Nursing, 71(10), 2444–2453. doi:10.1111/jan.12707 [CrossRef]
  • Cordi, V.L.E., Leighton, K., Ryan-Wenger, N., Doyle, T.J. & Ravert, P. (2012). History and development of the simulation effectiveness tool (SET). Clinical Simulation in Nursing, 8(6), e199–e210. doi:10.1016/j.ecns.2011.12.001 [CrossRef]
  • Facione, N.C., Facione, P.A. & Sanchez, C.A. (1994). Critical thinking disposition as a measure of competent clinical judgment: The development of the California Critical Thinking Disposition Inventory. Journal of Nursing Education, 33(8), 345–350.
  • Foronda, C., Liu, S. & Bauman, E.B. (2013). Evaluation of simulation in undergraduate nurse education: An integrative review. Clinical Simulation in Nursing, 9(10), e409–e416. doi:10.1016/j.ecns.2012.11.003 [CrossRef]
  • Franklin, A.E., Burns, P. & Lee, C.S. (2014). Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses. Nurse Education Today, 34(10), 1298–1304. doi:10.1016/j.nedt.2014.06.011 [CrossRef]
  • Grove, S.K. & Cipher, D.J. (2016). Statistics for nursing research: A workbook for evidence-based practice. St. Louis, MO: Elsevier.
  • Hammer, M., Fox, S. & Hampton, M.D. (2014). Use of a therapeutic communication simulation model in pre-licensure psychiatric mental health nursing: Enhancing strengths and transforming challenges. Nursing and Health, 2(1), 1–8.
  • Harrington, D. (2009). Confirmatory factor analysis. New York, NY: Oxford University Press.
  • Herm, S.M., Scott, K.A. & Copley, D.M. (2007). “Sim”sational revelations. Clinical Simulation in Nursing Education, 3(1), e25–e30.
  • Hooper, D., Coughlan, J. & Mullen, M.R. (2008). Structural equation modelling: Guidelines for determining model fit. The Electronic Journal of Business Research Methods, 6, 53–60.
  • Jeffries, P.R. (Ed.). (2012). Simulation in nursing education: From conceptualization to evaluation (2nd ed.). New York, NY: National League for Nursing.
  • Kardong-Edgren, S., Adamson, K.A. & Fitzgerald, C. (2010). A review of currently published evaluation instruments for human patient simulation. Clinical Simulation in Nursing, 6(1), 25–35. doi:10.1016/j.ecns.2009.08.004 [CrossRef]
  • Kim, A. (2016). Effects of maternity nursing simulation using high-fidelity patient simulator for undergraduate nursing students. Journal of the Korea Academia-Industrial Cooperation Society, 17(3), 177–189. doi:10.5762/kais.2016.17.3.177 [CrossRef]
  • Kim, J.-H., Park, I.-H. & Shin, S. (2013). Systematic review of Korean studies on simulation within nursing education. The Journal of Korean Academic Society of Nursing Education, 19(3), 307–319. doi:10.5977/jkasne.2013.19.3.307 [CrossRef]
  • Kirkpatrick, J.D. & Kirkpatrick, W.K. (2016). Kirkpatrick's four levels of training evaluation. Alexandria, VA: ATD Press.
  • Lasater, K. (2007). Clinical judgment development: Using simulation to create an assessment rubric. Journal of Nursing Education, 46(11), 496–503. doi:10.3928/01484834-20071101-04 [CrossRef]
  • Leighton, K., Ravert, P., Mudra, V. & Macintosh, C. (2015). Updating the simulation effectiveness tool: Item modifications and reevaluation of psychometric properties. Nursing Education Perspectives, 36(5), 317–323. doi:10.5480/15-1671 [CrossRef]
  • Levett-Jones, T., McCoy, M., Lapkin, S., Noble, D., Hoffman, K., Dempsey, J. & Roche, J. (2011). The development and psychometric testing of the Satisfaction With Simulation Experience Scale. Nurse Education Today, 31(7), 705–710. doi:10.1016/j.nedt.2011.01.004 [CrossRef]
  • Mariani, B., Cantrell, M.A., Meakim, C., Prieto, P. & Dreifuerst, K.T. (2013). Structured debriefing and students' clinical judgment abilities in simulation. Clinical Simulation in Nursing, 9(5), e147–e155. doi:10.1016/j.ecns.2011.11.009 [CrossRef]
  • Moser, S., Zumbach, J. & Deibl, I. (2017). The effect of metacognitive training and prompting on learning success in simulation-based physics learning. Science Education, 101(6), 944–967. doi:10.1002/sce.21295 [CrossRef]
  • O'Donnell, J.M., Decker, S., Howard, V., Levett-Jones, T. & Miller, C.W. (2014). NLN/Jeffries simulation framework state of the science project: Simulation learning outcomes. Clinical Simulation in Nursing, 10(7), 373–382. doi:10.1016/j.ecns.2014.06.004 [CrossRef]
  • Østergaard, D., Dieckmann, P. & Lippert, A. (2011). Simulation and CRM. Best Practice & Research Clinical Anaesthesiology, 25(2), 239–249.
  • Pai, H.-C. (2016). Development and validation of the Simulation Learning Effectiveness Scale for nursing students. Journal of Clinical Nursing, 25(21–22), 3373–3381. doi:10.1111/jocn.13463 [CrossRef]
  • Schwarzer, R. & Born, A. (1997). Optimistic self-beliefs: Assessment of general perceived self-efficacy in thirteen cultures. World Psychology, 3(1–2), 177–190.
  • Shelley, T.B. (2019). Metacognition: Importance of reflection in the pre-service teacher journey. In Mariano, G.J. & Figliano, F.J. (Eds.), Handbook of research on critical thinking strategies in pre-service learning environments (pp. 174–188). Hershey, PA: IGI Global. doi:10.4018/978-1-5225-7823-9.ch009 [CrossRef]
  • Shin, H. & Kim, M.J. (2014). Evaluation of an integrated simulation courseware in a pediatric nursing practicum. Journal of Nursing Education, 53(10), 589–594. doi:10.3928/01484834-20140922-05 [CrossRef]
  • Shin, H., Ma, H., Park, J., Ji, E.S. & Kim, D.H. (2015). The effect of simulation courseware on critical thinking in undergraduate nursing students: Multi-site pre-post study. Nurse Education Today, 35(4), 537–542. doi:10.1016/j.nedt.2014.12.004 [CrossRef]
  • Shin, H., Park, C.G. & Shim, K. (2015). The Korean version of the Lasater clinical judgment rubric: A validation study. Nurse Education Today, 35(1), 68–72. doi:10.1016/j.nedt.2014.06.009 [CrossRef]
  • Shin, S., Park, J.-H. & Kim, J.-H. (2015). Effectiveness of patient simulation in nursing education: Meta-analysis. Nurse Education Today, 35(1), 176–182. doi:10.1016/j.nedt.2014.09.009 [CrossRef]
  • Shute, V.J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. doi:10.3102/0034654307313795 [CrossRef]
  • Strickland, H., Cheshire, M.H. & March, A.L. (2016, August). Comparing student and faculty scores of clinical judgment during simulation. Paper presented at the International Nursing Association for Clinical Simulation and Learning Annual Conference 2016, Grapevine, TX. Retrieved from http://www.nursinglibrary.org/vhl/handle/10755/618273
  • Tabachnick, B. & Fidell, L. (2007). Multivariate analysis of variance and covariance. Using Multivariate Statistics, 3, 402–407.
  • Tanner, C.A. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. Journal of Nursing Education, 45(6), 204–211.
  • Victor-Chmil, J. & Larew, C. (2013). Psychometric properties of the Lasater clinical judgment rubric. International Journal of Nursing Education Scholarship, 10(1), 1–8.
  • World Health Organization. (2016). Process of translation and adaptation of instrument. Retrieved from http://www.who.int/substance_abuse/research_tools/translation/en/
  • Zhang, W., Ura, D. & Kaplan, B. (2014). A comparison study on integrating electronic health records into priority simulation in undergraduate nursing education. Journal of Nursing Education and Practice, 4(7), 123. doi:10.5430/jnep.v4n7p123 [CrossRef]

Table 1: Factor Loadings for the Modified 13-Item and Final 12-Item SET

No. | Item | 13-Item SET, Factor 1 | 13-Item SET, Factor 2 | 12-Item SET, Factor 1 | 12-Item SET, Factor 2
E5 | I feel more confident in my decision making skills. | .78 | | .78 |
E8 | I feel more confident that I will be able to recognize changes in my real patient's condition. | .73 | | .72 |
E7 | My assessment skills improved. | .71 | | .72 |
E6 | I am more confident in determining what to tell the health care provider. | .70 | | .70 |
E9 | I am able to better predict what changes may occur with my real patients. | .68 | | .68 |
E2 | I feel better prepared to care for real patients. | .67 | | .67 |
E3 | I developed a better understanding of the pathophysiology of the conditions in the SCE. | .43 | | .43 |
E4 | I developed a better understanding of the medications that were in the SCE. | .28 | .24 | (removed) | (removed)
E13 | Debriefing and group discussion were valuable. | | .80 | | .81
E12 | I learned as much from observing my peers as I did when I was actively involved in caring for the simulated patient. | | .69 | | .68
E11 | I was challenged in my thinking and decision making skills. | | .58 | | .59
E10 | Completing the SCE helped me understand classroom information better. | | .47 | | .45
E1 | The instructor's questions helped me to think critically. | | .46 | | .46
Total variance explained | | 28.7% | 17.3% | 30.5% | 18.3%

Table 2: Mean and Reliability Values for Simulation Effectiveness Tool (N = 557)

Domain | Item | M | SD | Cronbach's Alpha if Item Deleted | Cronbach's Alpha
Confidence | 2) I feel better prepared to care for real patients | 1.23 | .65 | .81 | .83
Confidence | 3) I developed a better understanding of the pathophysiology of the conditions in the SCE | 1.66 | .52 | .82 |
Confidence | 5) I feel more confident in my decision making skills | 1.31 | .61 | .78 |
Confidence | 6) I am more confident in determining what to tell the health care provider | 1.40 | .64 | .80 |
Confidence | 7) My assessment skills improved | 1.49 | .55 | .80 |
Confidence | 8) I feel more confident that I will be able to recognize changes in my real patient's condition | 1.46 | .57 | .79 |
Confidence | 9) I am able to better predict what changes may occur with my real patients | 1.46 | .58 | .81 |
Learning | 1) The instructor's questions helped me to think critically | 1.81 | .39 | .70 | .70
Learning | 10) Completing the SCE helped me understand classroom information better | 1.82 | .43 | .62 |
Learning | 11) I was challenged in my thinking and decision making skills | 1.77 | .47 | .63 |
Learning | 12) I learned as much from observing my peers as I did when I was actively involved in caring for the simulated patient | 1.85 | .44 | .66 |
Learning | 13) Debriefing and group discussion were valuable | 1.90 | .40 | .61 |
Total | | | | | .84

Table 3: Model Fit Statistics for the Original, Modified, and Final 12-Item Simulation Effectiveness Tool

Model | χ2 | df | p | CFI | RMSEA (range) | SRMR | TLI | AIC | BIC
Model 1: original version of SET | 274.59 | 64 | <.001 | 0.88 | 0.07 (0.06–0.08) | 0.05 | 0.86 | 9734.46 | 9907.36
Model 2: modified 13-item SET (E4 included) | 187.58 | 64 | <.001 | 0.93 | 0.05 (0.04–0.06) | 0.04 | 0.92 | 9647.45 | 9820.35
Model 3: final 12-item SET (E4 excluded) | 152.98 | 53 | <.001 | 0.94 | 0.05 (0.04–0.06) | 0.04 | 0.93 | 8518.78 | 8678.71

Table 4: Convergent Validity Between 12-Item SET and Self-LCJR

 | Learning | Confidence | Total SET | Noticing | Interpreting | Responding | Reflecting | Total LCJR
SET: Learning | 1.00 | | | | | | |
SET: Confidence | 0.51* | 1.00 | | | | | |
SET: Total SET | 0.76* | 0.95* | 1.00 | | | | |
LCJR: Noticing | 0.11 | 0.29* | 0.26* | 1.00 | | | |
LCJR: Interpreting | 0.01 | 0.22* | 0.17* | 0.69* | 1.00 | | |
LCJR: Responding | 0.10 | 0.32* | 0.28* | 0.71* | 0.68* | 1.00 | |
LCJR: Reflecting | 0.13* | 0.21* | 0.21* | 0.47* | 0.40* | 0.54* | 1.00 |
LCJR: Total LCJR | 0.11* | 0.33* | 0.29* | 0.88* | 0.82* | 0.92* | 0.69* | 1.00

Note. * p < .01.
Authors

Dr. Shin is Professor, Ms. Kim is Research Assistant and PhD candidate, Ms. Rim is Research Assistant and PhD candidate, Ms. Ma is Research Assistant and PhD candidate, and Dr. Shon is Postdoctoral Fellow, College of Nursing Science, Kyung Hee University, Dongdaemungu, Seoul, Republic of Korea.

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT, and Future Planning (KNRF # 2016R1A2B4010413). The authors thank Jon S. Mann, College Instructor, UIC Academic Center for Excellence, for editorial assistance.

The authors have disclosed no potential conflicts of interest, financial or otherwise.

Address correspondence to Hyunsook Shin, PhD, RN, CPNP, Professor, College of Nursing Science, Kyung Hee University, 26 Kyungheedaero, Dongdaemungu, Seoul, South Korea, 02447; e-mail: hsshin@khu.ac.kr.

Received: March 20, 2019
Accepted: November 25, 2019

doi:10.3928/01484834-20200323-03
