Many studies have reported on simulation as an alternative or supplementary strategy to traditional clinical practicum and have indicated that it enhanced students’ critical thinking competency and their clinical judgment ability in a safe environment (Goodstone et al., 2013; Jeffries & Rizzolo, 2006; Lasater, 2011; Schubert, 2012; Sullivan-Mann, Perron, & Fellner, 2009). Critical thinking and clinical judgment were identified as necessary outcomes of nursing education (American Association of Colleges of Nursing, 2008; Kim, Ahn, Kim, Jeong, & Lee, 2006). However, few studies have reported on the effects of simulation on critical thinking and clinical judgment, and a recent systematic review on simulation studies (Norman, 2012) showed inconsistent findings on simulation outcomes.
Today’s increasingly complex clinical environments require more competent nurses to act in diverse clinical situations, and this calls for integrating problem solving strategies into simulation scenarios (Shinnick & Woo, 2013). However, most nursing simulation studies included limited clinical case scenarios. In particular, the lack of pediatric clinical cases has hampered students’ learning of developmentally appropriate care knowledge and skills required in pediatric nursing (Bultas, 2011). Furthermore, limited validity and reliability of simulation tools and lack of theory-driven integrated courseware are major problems in simulation studies (Groom, Henderson, & Sittner, 2013).
Predictors of critical thinking and clinical judgment in nursing education included participant’s age, gender, prior simulation experience, clinical practicum experience, previous class exposure to critical thinking, academic level, and satisfaction with nursing curriculum (Cant & Cooper, 2009; Kaddoura, 2010; Kim, Moon, Kim, Kim, & Lee, 2014; Shinnick & Woo, 2013). Several studies on nursing simulation focused on evaluation of student satisfaction with innovative teaching and learning strategies, self-reported performance, or clinical judgment (Feingold, Calaluce, & Kallen, 2004; Jeffries & Rizzolo, 2006; Parr & Sweeney, 2006).
As the use of high-fidelity simulation increases in pediatric nursing education, it is important to have comprehensive and integrated nursing simulation scenarios that could serve as an alternative or a supplement to the pediatric clinical practicum. The authors developed an integrated pediatric nursing simulation courseware, using high-fidelity simulators and standardized patients with the theoretical frameworks of Jeffries (2006) and Tanner (2006). The courseware included major clinical scenarios and tools for evaluation of learning outcomes. The specific aim of the current study was to evaluate the effects of an integrated pediatric nursing simulation courseware in a pediatric practicum on students’ critical thinking and clinical judgment.
This study used a one-group, pretest and posttest design to evaluate the effectiveness of an integrated pediatric nursing simulation courseware in a pediatric practicum.
Ninety-five senior undergraduate nursing students were recruited from a university in Seoul, South Korea. Institutional review board approval was obtained prior to the study. Power analysis indicated that a sample size of 95 participants would allow detection of moderate (0.15) effect sizes on a paired t test and a one-way analysis of variance (ANOVA) at a p value of 0.05 and power of 0.80. Inclusion criteria were students enrolled in a 3-week pediatric nursing practicum between February and November 2012. The simulation took place as a required class activity, but the presurvey and postsurvey were done on a voluntary basis. The pediatric simulation was scheduled on the first day of weeks 1, 2, and 3 of the practicum.
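The paper does not state which effect-size metric the 0.15 refers to, and conventions differ across tests. As a hedged illustration (not the authors' actual calculation), a pure-Python normal-approximation sketch shows how the minimum detectable standardized difference (Cohen's d) for a two-sided paired t test can be derived from n = 95, alpha = 0.05, and power = 0.80:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    """Inverse standard normal CDF via bisection (adequate for power work)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def detectable_paired_d(n, alpha=0.05, power=0.80):
    """Smallest standardized difference (Cohen's d on the paired
    differences) detectable with n pairs, two-sided alpha, and the
    given power, using the usual normal approximation to the
    paired t test: d = (z_{1-alpha/2} + z_{power}) / sqrt(n)."""
    z_alpha = norm_ppf(1.0 - alpha / 2.0)   # ~1.96 for alpha = 0.05
    z_beta = norm_ppf(power)                # ~0.84 for power = 0.80
    return (z_alpha + z_beta) / math.sqrt(n)
```

With n = 95 this approximation gives a detectable d of roughly 0.29 for the paired comparison; the 0.15 in the text may refer to a different effect-size metric (e.g., one used for the ANOVA), which the paper does not specify.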
The simulation courseware was developed based on the major tenets of the Jeffries’ simulation framework (2006), with the major objective of enhancing participants’ clinical judgment according to Tanner’s clinical judgment model (2006). Jeffries (2006) introduced the importance of proper simulation design and appropriate organization of students for effective simulation. The simulation design of the courseware for this study included major sections proposed by Jeffries: clear objectives, fidelity, complexity, cues, and debriefing. Outcomes of simulation in this study included critical thinking, student satisfaction with self-confidence, and clinical judgment. The theoretical framework for simulation outcomes in this study was the clinical judgment model by Tanner (2006). Tanner’s interpretive model of clinical judgment suggested areas in which educators could provide feedback and coaching to help students to develop insight into their own critical thinking. Tanner also suggested specific clinical learning activities that might help to promote skills in clinical judgment. These activities included noticing, interpretation, responding, and reflecting.
In the current study, the scenarios and the assessment tools for each scenario were developed to enhance students’ clinical judgment. For example, the main mechanism of the simulation followed the clinical judgment process of noticing, interpreting, responding, and reflecting. Audio–video recording equipment was used for the students’ individual and group reflection and debriefing, a strategy intended to enhance students’ learning outcomes. Assessment tools used by instructors, and by students for self-evaluation, were part of the clinical judgment process. In addition, structured debriefing was used to enhance students’ clinical judgment, drawing on the Situation, Background, Assessment, and Recommendation (SBAR) nursing record written immediately after each simulation performance and on the self-evaluated clinical judgment rubric.
The courseware comprised four major components: a prelearning activity (a smartpad-based program, developed by the authors, for measuring and interpreting infant and child vital signs), simulation scenarios, evaluation tools, and scripts for standardized patients and students. The courseware had three major scenarios: rapport building (interaction among nurse, parent, and child), febrile infant care, and emergency measures for high-risk newborns presenting with apnea. All scenarios were developed from specific clinical cases and represented the most common clinical situations in pediatric nursing. Every scenario included elements of prioritizing care in a specific situation to facilitate students’ use of critical thinking, which students could then apply to other clinical situations. Simulation scenarios included scenario templates, a prelearning checklist and materials, forms for case presentation, a physician’s prescription, the student performance guide, the SBAR-format nursing record, and a clinical inference web (i.e., a reasoning web to map out and visually represent the relationships among nursing assessments and interventions). The scenario templates contained simulation objectives, case presentation, type of simulation, prelearning experience, evaluation criteria and tools, required equipment, anticipated time, and required nursing skills. Content validity of the scenarios was established by five experts in pediatric nursing and one pediatrician, who reached 100% agreement. The scenarios consisted of simple and complex pediatric nursing cases, as well as basic nursing assessments and interventions.
Basic nursing assessment and intervention included checking vital signs in infants; using respiratory interventions; interacting among nurses, children, and parents; applying fever management techniques; administering oxygen; prioritizing medications ordered by physicians; and monitoring oxygen saturation and blood pressure. Students’ achievement level of clinical judgment for each scenario was evaluated by the evaluation tools.
Each simulation during the 3-week practicum period proceeded with the following steps: (a) simulation courseware was integrated into the regular pediatric nursing practicum; (b) each simulation session using the courseware had uniform protocol (prelearning, orientation, simulation operation, SBAR writing, and watching the video clip of performance for self-evaluation and debriefing); and (c) learning outcomes were evaluated by the critical thinking disposition tool (Yoon, 2008), the Lasater Clinical Judgment Rubric (LCJR; Lasater, 2007), and the Simulation Effectiveness Tool (SET; Elfrink, Leighton, Ryan-Wenger, Doyle, & Ravert, 2012).
Researchers introduced information about the study to students during the practicum orientation session before starting the actual practicum. After the question-and-answer session, all students (n = 100) were asked to participate in the study. Five students declined; hence, 95 students signed informed consent and were informed that their participation was voluntary. After the informed consent process, all participating students were asked to complete a pre-critical thinking test and a demographic questionnaire. The 95 students were divided into six groups following the regular curricular procedure, resulting in approximately 15 to 20 students in each group. Each group was further divided into six to seven subgroups for the simulation activity (i.e., two to three students in each simulation subgroup). After completing the simulation activity, each group of 15 to 20 students participated in the debriefing session. Every group followed the established courseware schedule, as well as the regular pediatric nursing practicum schedule. The groups completed two simple simulation scenarios and one comprehensive scenario on the first day of the first, second, and third weeks of their practicum. Clinical judgment was measured in one simple simulation scenario (simulation I: neonatal apnea care) and one comprehensive scenario (simulation II: febrile infant care) by instructors while the simulation activity was in progress, and instructors also evaluated simulation learning using the video-recorded simulation episode. Instructors used the LCJR to evaluate students’ clinical judgment. Students completed a simulation satisfaction survey immediately after completing each simulation session. After completing the 3-week practicum, all students were asked to complete the post-critical thinking test. Instruments used to measure the key dependent variables are described below.
Demographic Questionnaire. A demographic profile, developed by the research team, included the participant’s age, gender, prior simulation experience, clinical practicum experience, previous class exposure related to critical thinking offered as an elective course, previous nursing internship experience, and cardiopulmonary resuscitation (CPR)-related training.
Clinical Judgment. The LCJR was used for assessment of clinical judgment. The LCJR was developed by Lasater (2007) to assess clinical judgment during nursing simulation practice. It is based on Tanner’s (2006) four stages of clinical judgment model: noticing, interpreting, responding, and reflecting. The 11 items in the four stages are rated by a 4-point Likert scale of expertise, ranging from 1 (beginning) to 4 (exemplary). Reliability of the tool was previously reported with a Cronbach’s alpha of 0.80 to 0.97 (Mariani, Cantrell, Meakim, Prieto, & Dreifuerst, 2012). In the current study, the reliability of the LCJR was reported as a Cronbach’s alpha of 0.884 for the total scale and 0.825, 0.750, 0.877, and 0.668 for the four subscales (noticing, interpreting, responding, and reflecting, respectively).
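The LCJR's structure can be made concrete with a small scoring sketch. The item counts per stage used below (3, 2, 4, and 2, summing to the rubric's 11 items and a 44-point maximum) are an assumption for illustration, consistent with the subscale values implied in the results:

```python
# Hypothetical LCJR scoring helper. Item counts per stage (3, 2, 4, 2)
# are assumed for illustration; they sum to the rubric's 11 items,
# each rated 1 (beginning) to 4 (exemplary), for a 11-44 total range.
STAGES = {"noticing": 3, "interpreting": 2, "responding": 4, "reflecting": 2}

def score_lcjr(ratings):
    """ratings: dict mapping each stage name to a list of item scores (1-4).
    Returns (subscale_totals, grand_total)."""
    for stage, items in ratings.items():
        if len(items) != STAGES[stage]:
            raise ValueError(f"{stage} expects {STAGES[stage]} item scores")
        if any(not 1 <= s <= 4 for s in items):
            raise ValueError("each item is rated 1 (beginning) to 4 (exemplary)")
    subscales = {stage: sum(items) for stage, items in ratings.items()}
    return subscales, sum(subscales.values())
```

A rating of 4 on every item yields the 44-point maximum referenced in the results; a rating of 1 on every item yields the 11-point minimum.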
Critical Thinking. A modified version of Yoon’s critical thinking tool (2008) was used to measure students’ pre- and post-critical thinking ability. It consisted of 27 items in a Likert scale, ranging from 1 (strong disagreement) to 5 (strong agreement). The seven subscales included objectivity, prudence, systematicity, intellectual eagerness/curiosity, intellectual fairness, healthy skepticism, and critical thinking self-confidence. The reliability using Cronbach’s alpha value for critical thinking in this study was 0.835.
Simulation Satisfaction. The Simulation Effectiveness Tool (Elfrink et al., 2012) was used to assess student satisfaction. SET is a 13-item Likert scale, ranging from 1 (do not agree) to 3 (strongly agree), that measures general satisfaction with simulation. Elfrink et al. (2012) reported the reliability of SET with a Cronbach’s alpha score of 0.93; Cronbach’s alpha value for the scales for simulation satisfaction in the current study was 0.743.
Data were analyzed using IBM SPSS® version 19.0 software. Descriptive data, including the frequencies, percentages, means, and standard deviations for the overall scales are presented in the Table. Paired t tests for the pre- and post-critical thinking scores and ANOVA for the critical thinking, clinical judgment, and satisfaction scores by the general characteristics were performed. Pearson’s correlation and chi-square analysis were performed to determine the relationship among variables.
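As a minimal sketch (not the authors' SPSS procedure), the paired t test used for the pre- and post-critical thinking comparison reduces to a t statistic on the within-student difference scores:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic on per-student (post - pre) differences.
    Returns the t value and degrees of freedom (n - 1); the two-sided
    p value then comes from the t distribution with those df."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)      # sample SD of the differences
    return mean_d / (sd_d / math.sqrt(n)), n - 1
```

Statistical packages such as SPSS report the corresponding p value directly; the hand computation above only illustrates where the t = 4.032 reported in the results would come from.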
Ninety-five participants completed all components of the study. Most participants were women (90%), and their mean age was 22 years. Average clinical experience was 705 hours (range = 500 to 900 of the 1,000 practicum clinical hours required to apply for the licensure examination in Korea). Half of the participants had prior simulation experience and had attended critical thinking classes. Most participants (85%) had CPR training, and some participants (16%) had CPR certification.
Total scores of critical thinking before and after the simulation using the courseware were 94.44 ± 15.34 and 100.71 ± 8.51, respectively. The critical thinking score significantly increased by 6.27 points (t = 4.032, p < 0.001). Five of the seven categories of critical thinking (intellectual eagerness, prudence, systematicity, intellectual fairness, and skepticism) increased significantly after the simulation using the courseware. The domains of objectivity and self-confidence showed a slight increasing trend, but the changes were not significant. Total scores of clinical judgment using the LCJR in simulations with a simple scenario (simulation I) and a complex scenario (simulation II) were 29.36 and 27.29 points, respectively (of 44 maximum possible points).
Most students either agreed or strongly agreed that they were satisfied with the simulation learning and were generally satisfied with the overall experience of the courseware. The average satisfaction score on the SET was 32.98 points of a maximum of 39 possible points. In addition, most students had positive comments regarding the overall courseware experience. Many stated that they felt the simulation activities made them think more deeply about the nurses’ general routines that they had observed during their clinical hours.
The Table shows the differences in critical thinking achievement between the upper and lower groups. To identify the characteristics of students who achieved higher critical thinking gains, a bivariate transformation of the pre- and post-critical thinking test scores was conducted for all participants: the mode of the critical thinking mean difference (3 points) was used to divide participants into an upper group (those above the mode) and a lower group (those below the mode). No significant relationships were found between pre- and post-critical thinking and variables such as age, hours of previous practicum, number of prior simulation courses, clinical judgment score, and student satisfaction scores. However, the t test results showed a positive relationship between critical thinking and clinical judgment as measured by the LCJR: participants in the upper group showed significantly higher clinical judgment scores than those in the lower group. Satisfaction scores and other variables, including general characteristics, did not differ significantly between the two groups.
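The grouping procedure described above can be sketched as follows. How students whose change score equals the mode were assigned is not stated in the text (which says only "above or below"); assigning them to the lower group here is an assumption:

```python
import statistics

def split_by_ct_change(pre, post):
    """Split students (by index) into upper/lower groups using the mode
    of their pre-to-post critical thinking change, as described in the
    study (the mode was reported as 3). Ties at the mode are assigned
    to the lower group -- an assumption for illustration."""
    changes = [b - a for a, b in zip(pre, post)]
    mode_change = statistics.mode(changes)
    upper = [i for i, c in enumerate(changes) if c > mode_change]
    lower = [i for i, c in enumerate(changes) if c <= mode_change]
    return upper, lower
```

An independent-samples t test on each outcome variable between the two index sets then yields the comparisons reported in the Table.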
Nursing education has begun to use simulation strategies as a means of reducing the gap between theory and clinical practice, which has been noted as one of the significant problems facing nursing education (Benner, Sutphen, Leonard, & Day, 2010). Simulation has been thought of as a strategy for improving nursing students’ critical thinking abilities. However, a systematic review of studies (Cant & Cooper, 2009) on critical thinking revealed unclear relationships with simulation.
The main findings of the current study revealed significant improvements in students’ critical thinking scores after completion of the simulation using this courseware. The study findings differ from those of other studies that reported no significant effect of simulation on student critical thinking improvement (Ravert, 2008; Shinnick & Woo, 2013). Cant and Cooper (2009), in a systematic review of simulation’s impact on critical thinking improvement, reported that half of the studies showed improved knowledge, critical thinking ability, satisfaction, or confidence, but they questioned the relevance of students’ perceptions of their critical thinking improvement. The current study found no significant predictor variables for critical thinking other than student clinical judgment, unlike the findings of Maneval et al. (2012) on the effects of simulation on critical thinking and clinical decision-making skills of new graduate nurses. Kim et al. (2014) also reported that academic level and satisfaction with nursing curriculum were significant predictors of students’ critical thinking abilities.
Despite the lack of significant predictors for the improvement in critical thinking in this study, the bivariate transformation of the upper and lower groups in their critical thinking achievement (difference between pre- and postmeasures) revealed a significant difference in the clinical judgment ability between the two groups. That is, those in the upper group of critical thinking achievement showed better clinical judgment. This supports the use of our simulation strategy, which focused more on advancing clinical judgment. Because students in both groups showed similar characteristics and their simulation satisfaction did not significantly differ, improving clinical judgment may have played an important role in improving critical thinking.
In the current study, simulation courseware included several components and multiple scenarios ranging from simple to complex clinical situations that could help students to improve their clinical judgment. Given the findings, we suggest that nursing schools may increase the extent of simulation learning in clinical practicum above the current recommended level of 10% in Korea (Korean Accreditation Board of Nursing Education, 2012). This would be in keeping with the recommendations of previous studies to add more simulation learning experience in nursing practicum (Ravert, 2008; Shin, Shim, & Lee, 2013; Shinnick & Woo, 2013; Sullivan-Mann, Perron, & Fellner, 2009).
The one-group design without a control group limits the generalization of the findings. However, it must be noted that having a control group without a simulation session was not feasible because it would have involved a major change in the curriculum. Hence, this study aimed to examine the effects of simulation sessions supplementary to the regularly scheduled clinical practicum. Cross-communication among the six student groups who rotated through the three simulation learning experiences may have influenced the learning outcomes. In addition, the small male sample size may limit the study findings. Hence, the findings of this study should be interpreted with these limitations in mind.
Simulation using this integrated courseware resulted in improved critical thinking. Clinical judgment may have played an important role in mediating improved critical thinking. Including this simulation courseware in the nursing practicum is recommended to improve students’ learning outcomes.
Using supplementary simulation courseware in the nursing practicum is an innovative learning strategy to improve students’ critical thinking and clinical judgment. Further studies are recommended that include a comparison group that uses traditional practicum only, perhaps in other schools.
- American Association of Colleges of Nursing. (2008). The essentials of baccalaureate education for professional nursing practice. Retrieved from http://www.aacn.nche.edu/education-resources/BaccEssentials08.pdf
- Benner, P., Sutphen, M., Leonard, V. & Day, L. (2010). Educating nurses: A call for radical transformation. San Francisco, CA: Jossey-Bass.
- Bultas, M.W. (2011). Enhancing the pediatric undergraduate nursing curriculum through simulation. Journal of Pediatric Nursing, 26, 224–229. doi:10.1016/j.pedn.2010.06.012 [CrossRef]
- Cant, R.P. & Cooper, S.J. (2009). Simulation-based learning in nursing education: Systematic review. Journal of Advanced Nursing, 66, 3–15. doi:10.1111/j.1365-2648.2009.05240.x [CrossRef]
- Elfrink, V.L., Leighton, K., Ryan-Wenger, N., Doyle, T.J. & Ravert, P. (2012). History and development of the simulation effectiveness tool (SET). Clinical Simulation in Nursing, 8, 199–210. doi:10.1016/j.ecns.2011.12.001 [CrossRef]
- Feingold, C., Calaluce, M. & Kallen, M. (2004). Computerized patient model and simulated clinical experiences: Evaluation with baccalaureate students. Journal of Nursing Education, 43, 156–163.
- Goodstone, L., Goodstone, M.S., Cino, K., Glaser, C.A., Kupferman, K. & Dember-Neal, T. (2013). Effect of simulation on the development of critical thinking in associate degree nursing students. Nursing Education Perspectives, 34, 159–162. doi:10.5480/1536-5026-34.3.159 [CrossRef]
- Groom, J.A., Henderson, D. & Sittner, B.J. (2013). NLN/Jeffries simulation framework state of the science project: Simulation design characteristics. Clinical Simulation in Nursing, 10, 337–344. doi:10.1016/j.ecns.2013.02.004 [CrossRef]
- Jeffries, P.R. (2006). A framework for designing, implementing, and evaluating simulations used as teaching strategies in nursing. Nursing Education Perspectives, 26, 96–103.
- Jeffries, P.R. & Rizzolo, M.A. (2006). Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: A national, multi-site, multi-method study. New York, NY: National League for Nursing.
- Kaddoura, M.A. (2010). New graduate nurses’ perceptions of the effects of clinical simulation on their critical thinking, learning, and confidence. The Journal of Continuing Education in Nursing, 41, 506–516. doi:10.3928/00220124-20100701-02 [CrossRef]
- Kim, C.J., Ahn, Y.H., Kim, M.W., Jeong, Y.O. & Lee, J.H. (2006). Development of standards and criteria for accreditation of a baccalaureate nursing education program: Reflections on the unique characteristics of the nursing profession. Journal of Korean Academy of Nursing, 36, 1002–1011.
- Kim, D.H., Moon, S., Kim, E.J., Kim, Y. & Lee, S. (2014). Nursing students’ critical thinking disposition according to academic level and satisfaction with nursing. Nurse Education Today, 34, 78–82. doi:10.1016/j.nedt.2013.03.012 [CrossRef]
- Korean Accreditation Board of Nursing Education. (2012). Nursing education accreditation. Retrieved from http://kabon.or.kr/eng/kabon02/index.php
- Lasater, K. (2007). Clinical judgment development: Using simulation to create an assessment rubric. Journal of Nursing Education, 46, 496–503.
- Lasater, K. (2011). Clinical judgment: The last frontier for evaluation. Nurse Education in Practice, 11, 86–92. doi:10.1016/j.nepr.2010.11.013 [CrossRef]
- Maneval, R., Fowler, K.A., Kays, J.A., Boyd, T.M., Shuey, J., Harne-Britner, S. & Mastrine, C. (2012). The effect of high-fidelity patient simulation on the critical thinking and clinical decision-making skills of new graduate nurses. The Journal of Continuing Education in Nursing, 43, 125–134. doi:10.3928/00220124-20111101-02 [CrossRef]
- Mariani, B., Cantrell, M.A., Meakim, C., Prieto, P. & Dreifuerst, K.T. (2012). Structured debriefing and students’ clinical judgment abilities in simulation. Clinical Simulation in Nursing, 9, e147–e155. doi:10.1016/j.ecns.2011.11.009 [CrossRef]
- Norman, J. (2012). Systematic review of the literature on simulation in nursing education. The ABNF Journal, 23(2), 24–28.
- Parr, M.B. & Sweeney, N.M. (2006). Use of human patient simulation in an undergraduate critical care course. Critical Care Nursing Quarterly, 29, 188–198. doi:10.1097/00002727-200607000-00003 [CrossRef]
- Ravert, P. (2008). Patient simulator sessions and critical thinking. Journal of Nursing Education, 47, 557–562. doi:10.3928/01484834-20081201-06 [CrossRef]
- Schubert, C.R. (2012). Effect of simulation on nursing knowledge and critical thinking in failure to rescue events. The Journal of Continuing Education in Nursing, 43, 467–471. doi:10.3928/00220124-20120904-27 [CrossRef]
- Shin, H., Shim, K.K. & Lee, Y.N. (2013). Nursing activities identified through pediatric nursing simulation. Child Health Nursing Research, 19, 111–119. doi:10.4094/chnr.2013.19.2.111 [CrossRef]
- Shinnick, M.A. & Woo, W.A. (2013). The effect of human patient simulation on critical thinking and its predictors in prelicensure nursing students. Nurse Education Today, 33, 1062–1067. doi:10.1016/j.nedt.2012.04.004 [CrossRef]
- Sullivan-Mann, J., Perron, C. & Fellner, A. (2009). The effects of simulation on nursing students’ critical thinking scores: A quantitative study. Newborn and Infant Nursing Reviews, 9, 111–116. doi:10.1053/j.nainr.2009.03.006 [CrossRef]
- Tanner, C.A. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. Journal of Nursing Education, 45, 204–211.
- Yoon, J. (2008). The degree of critical thinking disposition of nursing students and the factors influencing critical thinking disposition. Journal of Korean Academy of Nursing Administration, 14, 159–166.
Comparison of Variables Between Upper and Lower Groups by Critical Thinking Change

| Variable | Upper Group (M ± SD) | Lower Group (M ± SD) | t | p |
| --- | --- | --- | --- | --- |
| Age (years) | 22.41 ± 2.19 | 22.16 ± 1.51 | 0.629 | 0.531 |
| Previous CT course | | | | |
| **LCJR: Simulation I** | | | | |
| Noticing | 8.21 ± 1.56 | 7.93 ± 1.39 | 0.92 | 0.361 |
| Interpreting | 5.52 ± 1.11 | 5.37 ± 0.95 | 0.69 | 0.495 |
| Responding | 10.98 ± 2.14 | 10.77 ± 2.02 | 0.50 | 0.621 |
| Reflecting | 6.21 ± 1.02 | 6.23 ± 1.21 | –0.09 | 0.927 |
| Total LCJR | 30.92 ± 4.86 | 30.30 ± 4.78 | 0.62 | 0.534 |
| **LCJR: Simulation II** | | | | |
| Noticing | 7.71 ± 6.42 | 6.42 ± 2.23 | 3.01 | 0.003 |
| Interpreting | 5.06 ± 1.16 | 4.37 ± 1.59 | 2.43 | 0.017 |
| Responding | 10.65 ± 2.54 | 9.00 ± 2.63 | 3.09 | 0.003 |
| Reflecting | 5.58 ± 1.26 | 5.23 ± 1.62 | 1.17 | 0.246 |
| Total LCJR | 29.00 ± 5.96 | 25.17 ± 6.92 | 2.88 | 0.005 |
| SET 1 | 32.58 ± 4.22 | 32.44 ± 3.72 | 0.16 | 0.87 |
| SET 2 | 33.74 ± 3.34 | 33.05 ± 4.06 | 0.89 | 0.375 |
| Pre-CT | 90.98 ± 19.81 | 98.63 ± 4.23 | –2.48 | 0.015 |
| Post-CT | 104.29 ± 8.33 | 96.37 ± 6.52 | 5.07 | < 0.001 |

Note. CT = critical thinking; LCJR = Lasater Clinical Judgment Rubric; SET = Simulation Effectiveness Tool.