Journal of Nursing Education

Major Article 

Simulation and Clinical Competency in Undergraduate Nursing Programs: A Multisite Prospective Study

Mary E. Mancini, PhD, RN, NE-BC, FAHA, FSSH, ANEF, FAAN; Judy L. LeFlore, PhD, NNP, CPNP-AC&PC, ANEF, FAAN; Daisha Jane Cipher, PhD

Abstract

Background:

In prelicensure nursing education, there is a need to better understand the roles that simulation and traditional clinical instruction play in the development of clinical competence.

Method:

A prospective cohort study was conducted across four prelicensure nursing programs. For the intervention cohort, each program redesigned its use of simulation, redistributing clinical hours from traditional clinical instruction to standardized simulation experiences.

Results:

The final sample consisted of 271 control students and 315 intervention students who were assessed at the end of five clinical courses. There was no significant difference between the control and intervention groups in licensure examination pass rates, and no consistent pattern of differences in clinical competency emerged across courses.

Conclusion:

These findings suggest that the redistribution of clinical hours from traditional to simulation did not affect clinical competency or licensure examination results. Such redistributions have the potential to yield comparable results. [J Nurs Educ. 2019;58(10):561–568.]


The National Council of State Boards of Nursing (NCSBN) Transition to Practice study defined clinical competence as the ability to “observe and gather information, recognize deviations from expected patterns, prioritize data, make sense of data, maintain a professional response demeanor, provide clear communication, execute effective interventions, perform nursing skills correctly, evaluate nursing interventions, and self-reflect for performance improvement within a culture of safety” (Spector, 2011). Graduating competent RNs is predicated on the assumption that students have ample opportunity for meaningful encounters in which to apply and expand their knowledge and skills in hands-on clinical settings. Underpinning this assumption are the experiential learning theories developed by Carl Rogers, John Dewey, and especially Kolb (Kolb & Kolb, 2005), who explicated the importance of combining experience, perception, cognition, and behavior (Lisko & O'Dell, 2010).

The experiential learning necessary to prepare competent new graduate RNs is a combination of laboratory-based experiences, as well as directed and supervised clinical learning opportunities in a variety of health care settings. These settings include acute care hospitals, rehabilitation facilities, long-term care facilities, clinics, schools, or other community venues that provide students with a range of experiences across populations and the health–illness continuum (Ironside, McNelis, & Ebright, 2014).

Although there is agreement among educators and employers that clinical competence is the desired endpoint of an initial RN licensure program, there are no common operational definitions of the various characteristics of clinical competence, nor is there agreement on the type, quantity, and quality of clinical experiences necessary to produce a competent graduate. Additionally, accrediting agencies for nursing programs articulate in general terms the competencies new nursing graduates should achieve, but they do not provide guidelines on the different types of clinical experiences and settings. Rather, accrediting agencies reference the need for adequate clinical experiences in a variety of settings to achieve the clinical competencies described (Institute of Medicine, 2011). There is large variability across states regarding specific requirements for clinical experiences in prelicensure nursing programs. Some state boards of nursing require a specific number of clinical hours, and some monitor outcome measures of progression to graduation and first-time National Council Licensure Examination (NCLEX®) pass rates (NCSBN, 2019).

As noted by Ironside et al. (2014), clinical education is a “time- and resource-intensive” (p. 186) aspect of nursing programs, yet “little is known about if and how current clinical experiences contribute to students' learning and readiness to practice” (p. 185). Their observational study of student and faculty experiences in a medical–surgical clinical setting indicated that, despite intentions to the contrary, faculty and students continued to focus on task completion to the point that it often “overshadows the more complex aspects of learning nursing practice” (Ironside et al., 2014, p. 185).

The availability of specific clinical sites can also be a barrier in clinical education. Clinical placements for students have been reported to be most difficult in Pediatrics, Obstetrics, and Psychiatric/Mental Health rotations (Hayden, 2010; Hayden, Jeffries, Kardong-Edgren, & Spector, 2009), although difficulties have been reported in all clinical areas. From a program perspective, accessing and providing an adequate number and type of appropriate clinical learning opportunities across sites is challenging. Commonly cited factors are the increased demand for clinical sites as the number of schools increases, a focus limited to acute care sites, and a shortage of clinical faculty (Allan & Aldebron, 2008; Durham & Alden, 2008). In addition, traditional acute care sites are decreasing the number of students they are willing to accommodate due to increasing regulatory expectations for clearing students (i.e., accepting students for clinical placement), as well as the demands of high-acuity patients on the nursing staff (LeFlore, personal communication, 2019).

Even when nursing students are provided with access to a clinical site, there is no guarantee that each learner will be exposed to the specific desired clinical experiences, a phenomenon referred to as “learning by random opportunity” (LeFlore, Anderson, Michael, Engle, & Anderson, 2007, p. 170). Consequently, the use of simulated clinical scenarios (i.e., simulation) in nursing education has increased. This approach allows prelicensure students to practice to a level of confidence while allowing educators to present information and evaluate student competency in specific areas, even when direct patient care opportunities are inadequate or nonexistent (Shin, Park, & Kim, 2015). As a learning and assessment strategy, simulation can ensure that each student is exposed to a predetermined set of clinical encounters until a desired level of competency is demonstrated. This capability, along with reported benefits for team-building skills and patient safety, has driven the integration of structured simulation experiences into nursing curricula (Cooper et al., 2012; Decker, Sportsman, Puetz, & Billings, 2008; Lisko & O'Dell, 2010).

Unfortunately, as with clinical experiences, the introduction of this powerful learning approach termed simulation has been guided more by preferences, anecdotes, and available equipment, rather than by evidence and objective guidelines. Often, simulation instruction is conducted with little faculty preparation, operating under the assumption that nurse educators would intuitively know how to use this powerful learning technology (Jeffries, Dreifuerst, Kardong-Edgren, & Hayden, 2015). This has led to large variability in the use of simulation in prelicensure RN programs. In some nursing programs, simulation has been used in addition to clinical experiences, whereas in other programs it has been used as a replacement for clinical hours (Hayden, 2010).

Several studies have evaluated simulation (in its various forms) as an effective method of instruction. A systematic review by Lapkin, Levett-Jones, Bellchambers, and Fernandez (2010) focused on eight studies involving simulation in nursing education, particularly human patient simulation manikins (HPSM). The authors concluded that the use of HPSMs significantly improved three outcomes integral to clinical reasoning in undergraduate nursing students: critical thinking, knowledge acquisition, and the ability to identify deteriorating patients. Moreover, the review indicated high student satisfaction with simulation experiences.

A meta-analysis conducted by Kim, Park, and Shin (2016) included 40 published quantitative studies of simulation in nursing education. Analyses revealed that student satisfaction levels were high, and simulation was generally effective in yielding significant cognitive, affective, and psychomotor outcomes.

A pilot study conducted by Schlairet and Fenster (2012) tested differences in outcomes when students received no simulation, 30% simulation, 50% simulation, or 70% simulation in place of clinical experiences in a nursing fundamentals course. The researchers found no differences among the groups in assessments of nursing knowledge and critical thinking. However, the group that received 30% simulation exhibited significantly lower clinical judgment scores than students in the other groups (Schlairet & Fenster, 2012).

NCSBN's National Simulation Study was an attempt to address the concerns that decreasing the number of hours that students spend in direct clinical encounters would have a negative effect on student performance (Hayden, Smiley, Alexander, Kardong-Edgren, & Jeffries, 2014). The NCSBN National Simulation Study was a randomized controlled multisite study involving 10 nursing programs (five Associate Degrees in Nursing [ADN] and five Bachelor of Science in Nursing [BSN]). The overall aim was to examine whether time and activities in a simulation laboratory could effectively substitute for traditional clinical hours in the prelicensure nursing curriculum. Students were randomized to one of three groups:

  • Control: Students had traditional clinical experiences (no more than 10% of clinical hours could be spent in simulation).
  • 25% group: Students had 25% of their traditional clinical hours replaced by simulation.
  • 50% group: Students had 50% of their traditional clinical hours replaced by simulation.

Results revealed that there were no differences between the groups in the study outcomes. The use of simulation for up to 50% of clinical hours produced no significant differences in clinical competence, comprehensive nursing knowledge, or NCLEX pass rates. These landmark findings indicated that the substitution of simulation for up to half of traditional clinical hours produces comparable educational outcomes.

In summary, further quantitative exploration and expansion of the role of simulation in the foundation of clinical competence in nursing programs is needed. To better understand the roles that simulation and traditional clinical instruction play in the development of clinical competence, we undertook a prospective cohort study across four prelicensure nursing programs. This study was part of a larger research project addressing simulation, clinical competence, and clinical hours. Cohorts were compared over time, with students assessed for clinical competency after simulations in select clinical courses, as well as before, during, and after traditional clinical experiences in the same courses.

The research objectives of this study were as follows:

  • Compare the clinical competencies of a cohort receiving primarily traditional clinical instruction with a cohort receiving reduced traditional clinical instruction and increased simulation.
  • Identify the impact of simulation on clinical competence and the impact of simulation plus traditional clinical instruction on clinical competence.

This project was designed to fill the gap left by the NCSBN National Simulation Study (Hayden et al., 2014) by identifying the additive role of both simulation and clinical instruction for students to meet certain clinical objectives and become competent new graduate nurses.

Method

In this prospective cohort multisite study, two community college programs granting ADN degrees and two university programs granting BSN degrees were recruited to participate. Student cohorts immediately prior to the program changes served as the control group (Spring 2015 enrollees). The Fall 2015 enrollees served as the intervention group. Each cohort was followed through four semesters (Junior 1, Junior 2, Senior 1, and Senior 2). Student data collected throughout the study included demographics, clinical competence, and NCLEX pass/fail. Students were excluded from the study if they discontinued a course (dropped or withdrew), failed a course, dropped or withdrew from the program, changed majors, or opted out of study participation.

Simulation Scenarios

The researchers selected a standardized set of clinical simulations from the National League for Nursing (NLN) clinical simulation scenarios for use in this study. The adult Medical–Surgical scenario package contained learning objectives for 10 surgical and 10 medical scenarios and was designed to challenge students at different levels. Cases ranged from obtaining vital signs to recognizing and managing life-threatening complications. The Pediatric and Obstetric scenarios addressed learning objectives applicable to all types of undergraduate nursing programs.

The scenario selection for each clinical course (Foundations, Medical–Surgical, Pediatrics, Obstetrics, Critical Care, and Capstone) was determined based on the essential competencies described by subject matter experts from academia and service (task groups). These task group members were well versed in the expectations of new graduate nurses. The task groups consisted of academic faculty, as well as representatives from the major employers of new graduate nurses. A total of five task groups were empaneled to focus on the practice areas of foundations, medical–surgical, pediatrics, psychiatric, critical care, obstetrics, and capstone. The members of the task groups selected, modified, and/or created the standardized clinical scenarios that represented the essential patient encounters in each of the clinical areas. Because simulation scenarios were not available for psychiatric nursing, the psychiatric subject matter expert task group developed a standardized simulation scenario for this purpose.

The standardized simulated clinical experiences took place in both skills laboratories and simulation laboratories. All content, developed clinical cases, and hybrid simulations were available to all prelicensure nursing faculty across the study sites. The selected simulations were delivered at the beginning of each clinical course and used as a foundational experience before students in the intervention group entered their clinical rotations.

The reduction of traditional clinical hours and the corresponding redistribution to simulation hours are reported in Table 1. Only those hours common to both ADN and BSN programs were involved in this study; therefore, community and management courses were not included. Critical care hours were separate courses in the BSN programs but were blended into other courses in the ADN programs. The control students received their program's standard prescribed clinical instruction experiences and were assessed at the end of each clinical course using the same summative assessment tool as the intervention students. The intervention students experienced the newly developed (or modified) standardized NLN scenarios described above. The programs redistributed their clinical hours in the intervention cohorts by reducing the traditional clinical hours and increasing the simulation hours. The control cohorts' simulation hours ranged from 140 to 224, and the intervention cohorts' simulation hours ranged from 182 to 308.
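As a worked example from Table 1, ADN program 2 cut hands-on hours by 84 (540 to 456) and moved those same 84 hours into simulation (224 to 308), raising simulation's share of the 796 total clinical hours from 28.1% (224/796) to 38.7% (308/796).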

Table 1:

Clinical Hours by Program and Changes Over Time

Measure

Students were assessed for clinical competence with the Creighton Competency Evaluation Instrument (CCEI; Hayden, Keegan, Kardong-Edgren, & Smiley, 2014). The CCEI has evidenced high levels of content validity, internal consistency, and interrater reliability (Hayden, Keegan, Kardong-Edgren, & Smiley, 2014). The CCEI incorporates Quality and Safety Education for Nurses terminology and concepts (Altmiller & Hopkins-Pepe, 2019) and reflects the Essentials of Baccalaureate Education for Professional Nursing Practice of the American Association of Colleges of Nursing (2008). For this study, internal consistency (KR-20) coefficients were as follows: Medical–Surgical, 0.79; Obstetrics, 0.68; Pediatrics, 0.76; Psychiatric, 0.66; and Capstone, 0.88.
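For reference, KR-20 is the dichotomous-item form of Cronbach's alpha. With k scored items, p_i the proportion of students scored Yes on item i, q_i = 1 − p_i, and σ²_X the variance of the total scores, the coefficient is:

$$\mathrm{KR\text{-}20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i\, q_i}{\sigma_X^2}\right)$$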

Task Groups

The five task groups determined objective criteria for each CCEI item. The competency domains were Assessment, Communication, Clinical Judgment, and Patient Safety, with evaluation items nested within each domain. The CCEI is a checklist with possible answers of Yes, No, or Not Applicable (NA). The work of the task groups was to qualify each item under each competency domain so that an evaluator could objectively score a student's performance on an item as Yes, No, or NA. For example, Item #1, “Obtains pertinent data,” might be qualified by adding “as evidenced by one of the following: reviews the EMR, reviews physician's orders, asks for clarification, etc.” Each total score is continuous, with 0 as the lowest possible value and the highest possible value varying by the course evaluated. The CCEI was administered to assess competency in the following clinical courses: Medical–Surgical, Psychiatric, Pediatrics, Obstetrics, and Capstone, resulting in five separate CCEI scores. There were missing CCEI data due to technical issues, especially at the beginning of the study, which affected the first course of each cohort (Foundations).

The technical difficulties mainly involved video equipment failures and problems transferring video files. This missing data challenge prevented analysis of the Foundations CCEI data.
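To make the scoring arithmetic concrete, the following is a minimal sketch of totaling a Yes/No/NA checklist of this form. The item names are hypothetical, and the Yes = 1, No = 0, NA-excluded convention is an assumption for illustration; the actual CCEI items and scoring rules are defined by Creighton University.

```python
# Minimal sketch of totaling a Yes/No/NA competency checklist.
# Item names are hypothetical; Yes = 1, No = 0, and NA items are
# dropped, so 0 is the floor and the ceiling varies with how many
# items apply, consistent with the scoring described above.
from typing import Mapping

def ccei_total(ratings: Mapping[str, str]) -> int:
    """Sum Yes responses, ignoring items marked NA."""
    applicable = {k: v for k, v in ratings.items() if v != "NA"}
    return sum(1 for v in applicable.values() if v == "Yes")

ratings = {
    "Obtains pertinent data": "Yes",        # e.g., reviews the EMR
    "Performs evidence-based assessment": "Yes",
    "Communicates effectively with team": "No",
    "Delegates appropriately": "NA",        # not applicable to the scenario
}
print(ccei_total(ratings))  # 2
```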

Experts across the United States in the field of simulation were identified, contacted, and recruited to serve as blinded reviewers of all videotaped clinical scenarios. Eight experts agreed to participate. All reviewers were faculty members in university schools of nursing, and seven of the eight resided outside of this study's home state of Texas. All reviewers were trained on the CCEI using the training offered on the Creighton University website (https://nursing.creighton.edu/academics/competency-evaluation-instrument). Faculty were asked to visit the site, complete the training, and then notify the research team once they had done so.

Each student's CCEI assessment was videorecorded and placed on a secure cloud environment for download by blinded external reviewers. The cadre of eight external reviewers was blinded to school and group membership, and all were trained to criterion in the use of the CCEI for formative and summative assessment. The reviewers were each assigned a set of video-recorded CCEIs to score.

Randomization Within Intervention

Within each school, at the start of each clinical course, the intervention cohort was divided into two groups: group A and group B. Group A students received the standardized clinical simulations and were assessed using the aforementioned standardized tool (CCEI) at the end of the simulation experience but before their patient care clinical experiences. Group B students received the same standardized simulations at the beginning of each clinical course. They proceeded directly to their patient care clinical experiences and were assessed immediately after the patient care clinical experiences (Figure 1). All four participating schools were approved to conduct these research activities by their respective institutional review boards.

Figure 1.

Study design. Note. JR1 = Junior 1; JR2 = Junior 2; SR1 = Senior 1; SR2 = Senior 2.

Statistical Analysis

Continuous parameters are reported as mean ± standard deviation, and discrete parameters are reported as n and percent (%). Mann-Whitney U tests were computed for comparisons of continuous demographic variables, and Pearson chi-square tests were computed for nominal demographic variables. Analyses of covariance (ANCOVAs) were computed to compare CCEI scores of cohort A, cohort B, and the control group, controlling for school (as dummy-coded covariates). Pairwise comparisons were made using the Sidak adjustment for multiple comparisons. Cohen's f values were calculated to represent the effect sizes for group comparisons. Cohen's f represents the magnitude of differences among three or more groups, where a small effect = .10, moderate = .25, and large = .40 (Cohen, 1988; Grove & Cipher, 2017). Data were analyzed for patterns of missing data using pattern plots. The patterns of missing data were found to be arbitrary (nonmonotone) and assumed to be missing at random, and multiple imputation was subsequently implemented to account for missing CCEI data. Complete case analyses were also performed, and there were no differences in results; therefore, the findings reflect the analyses of the imputed data. Analyses were performed using SPSS® 25.0 for Windows.
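As an illustration of the per-course model, the sketch below expresses the ANCOVA and effect size calculation in Python (the study itself used SPSS 25.0); the data file and column names (ccei, group, school) are hypothetical. The Sidak-adjusted pairwise comparisons and the multiple imputation step are omitted for brevity.

```python
# Sketch of one per-course ANCOVA: CCEI score by group (A, B, control),
# controlling for school as dummy-coded covariates. Hypothetical data
# layout; the study itself used SPSS 25.0.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("ccei_medsurg.csv")  # assumed columns: ccei, group, school

# C() dummy-codes the categorical terms; school serves as the covariate.
model = ols("ccei ~ C(group) + C(school)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)

# One common convention: Cohen's f = sqrt(eta_p^2 / (1 - eta_p^2)),
# with partial eta squared = SS_group / (SS_group + SS_error).
ss_group = anova.loc["C(group)", "sum_sq"]
ss_error = anova.loc["Residual", "sum_sq"]
eta_p2 = ss_group / (ss_group + ss_error)
cohens_f = (eta_p2 / (1 - eta_p2)) ** 0.5

print(f"F = {anova.loc['C(group)', 'F']:.2f}, Cohen's f = {cohens_f:.3f}")
```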

Results

The final number of active study participants was 586, of whom 11.1% were from “ADN Program 1,” 18.9% from “BSN Program 1,” 31.4% from “ADN Program 2,” and 38.6% from “BSN Program 2.” The study sample consisted of 271 control students and 315 intervention students. The demographic characteristics of the sample are displayed in Table 2. Most of the students were female (88.2%), and the mean age was 26.5 ± 7.6 years. The ethnic breakdown of the sample was primarily White (52.7%), followed by Hispanic/Latino (17.9%), Black/African American (12.3%), and Asian (10.9%). There were no significant differences between the control and intervention students on age, gender, or ethnicity.

Table 2:

Study Demographic Characteristics

Cohort Differences on Clinical Competency

The A, B, and control cohorts were compared on each CCEI score using ANCOVAs, with school as the covariate. Table 3 displays the pooled adjusted mean CCEI scores for each clinical course. Analyses revealed significant differences between the groups on Medical–Surgical CCEI scores (p < .001; Cohen's f = .063). Pairwise comparisons indicated that students in group A scored significantly higher than students in the control group on the Medical–Surgical CCEI assessment. There were no significant differences between groups A and B, nor between group B and the control group.

Table 3:

Creighton Competency Evaluation Instrument Pooled Mean Scores, Adjusted for School

Analyses revealed significant differences between the groups on Pediatric CCEI scores (p < .001; Cohen's f = .079). Pairwise comparisons indicated that students in group B and the control group scored significantly higher than students in group A on the Pediatric CCEI assessment. There were no significant differences between group B and the control group.

Analyses revealed significant differences between the groups on Obstetrics CCEI scores (p < .001; Cohen's f = .09). Pairwise comparisons indicated that students in group B scored significantly higher than students in both group A and the control group on the Obstetrics CCEI assessment. There were no significant differences between group A and the control group.

Analyses revealed significant differences between the groups on Capstone CCEI scores (p < .001; Cohen's f = .072). Pairwise comparisons indicated that students in the control group scored significantly higher than students in both group A and group B on the Capstone CCEI assessment. There were no significant differences between groups A and B.

Analyses revealed no significant differences between the groups on Psychiatric CCEI scores (p = .11; Cohen's f = .039). When the groups were compared on NCLEX results (pass/fail), there were no significant differences between the control and intervention groups (Δ = 4.2%), χ²(1, N = 586) = 2.09, p = .15 (Table 4).
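For the pass/fail comparison, a minimal sketch of the equivalent Pearson chi-square test follows; the cell counts are hypothetical placeholders chosen only to match the reported pass rates, not the study's raw data.

```python
# Pearson chi-square test of independence on a 2 x 2 table of
# cohort (control, intervention) by NCLEX outcome (pass, fail).
# Counts below are hypothetical placeholders for illustration.
from scipy.stats import chi2_contingency

table = [[240, 31],   # control: pass, fail (hypothetical)
         [266, 49]]   # intervention: pass, fail (hypothetical)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```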

Table 4:

NCLEX Pass Rates by School and Cohort

Discussion

In this prospective multisite cohort study, a set of desired clinical competencies for nursing undergraduates was developed that included measurable knowledge, skills, abilities, and behaviors for specific clinical areas. Two ADN and two BSN programs redistributed clinical hours and implemented these new educational approaches into simulation experiences. Overall, these data confirm the findings of prior research that studied redistributions of traditional clinical instruction to simulation instruction. Similar to the findings of Schlairet and Fenster (2012) and the NCSBN National Simulation Study (Hayden et al., 2014), this study found negligible effects of the redistribution of clinical hours on clinical competency assessments. Effect sizes across comparisons were very small or small.

The first study aim was to compare the clinical competencies of a cohort receiving primarily traditional clinical instruction with a cohort receiving reduced traditional clinical instruction and increased simulation. Findings revealed that the intervention and control groups performed comparably on clinical competence assessments. In addition, the NCLEX pass rates for the intervention and control groups were comparable and did not significantly differ.

The second study aim was to identify the effects of simulation on clinical competence, and the impact of simulation plus traditional clinical instruction on clinical competence. Prelicensure nursing students were assessed for competence immediately after a course's simulation module, and those results were compared with those of students who were assessed for competence at the end of the course. Effect sizes for comparisons of the five assessments ranged from f = .039 (Psychiatric) to .090 (Obstetrics), revealing small effects even for the significant findings.

For the Medical–Surgical, Psychiatric, and Capstone courses, those assessed at the end of the course (cohort B) performed no better than those assessed immediately after simulation (cohort A). For the Pediatrics and Obstetrics courses, students in cohort B (assessed at the end of these courses) performed significantly better than those in cohort A (assessed immediately after simulation).

While not questioning the essential need for actual hands-on clinical experience in nursing education, the findings in the Medical–Surgical, Psychiatric, and Capstone courses lead one to question the unique contribution of traditional clinical experience to competence in these three areas when built on a base of carefully targeted simulation experiences. Beyond a reallocation of how clinical hours are spent, is there an opportunity to fundamentally rethink how we conceptualize hands-on clinical experiences when we build on a foundation of simulation?

The findings for Pediatrics and Obstetrics are consistent with the intuitive belief that the addition of traditional hands-on clinical experience has a measurable positive impact on competence. One must therefore ask why an additive effect of clinical experience appeared in these two courses but not in the other three. One possible explanation is the timing of the Pediatrics and Obstetrics courses in the nursing program: in three of the four participating programs, the Pediatrics and Obstetrics courses shared a semester and therefore may have produced a similar pattern of assessment performance. Another possible explanation involves the lack of prior exposure to Pediatric and Obstetric content. These unique populations have needs that are not routinely discussed in the care of typical adult patients. The primary focus for competency throughout a prelicensure nursing program is on the care of adults and, as a result, students may have more knowledge, experience, and preparation regarding adult care. Therefore, for rotations addressing adults, it may have been easier for students in group A to excel on the CCEI. In contrast, the novel experiences and content involved in the Pediatrics and Obstetrics courses may mean that students need maximum educational exposure before exhibiting high CCEI scores; in other words, at the end of the course as opposed to the end of simulation (as our study findings indicate). In summary, the unique nature of these specialty populations may require a level of hands-on experience different from that required for an adult population. This is an area ripe for future study.

Based on the large variability in the use of simulation in nursing education in the United States, a focus on how faculty use clinical time (both hands-on and simulation) and on the inclusion of best practices of simulation and clinical instruction (e.g., clinical debriefing) might be more informative than a singular focus on clinical hours. Although this and other studies noted the potential to replace a large number of traditional clinical hours with simulation, staffing, logistics, and resources (financial and otherwise) precluded all partners from reducing clinical hours to the level demonstrated in NCSBN's National Simulation Study.

Limitations of the study included the lack of full randomization: this was a prospective cohort study, and the only randomization occurred within the intervention group. The study involved only four programs in Texas, thereby potentially limiting the generalizability of results. There were missing CCEI data due to technical issues pertaining to video equipment failures and file transfers. Strengths of this study are the multisite participation and the involvement of both academic and clinical partners in the development of the standardized clinical scenarios.

The following website is hosted by the University of Texas at Arlington College of Nursing and Health Innovation to serve as an implementation toolkit. This toolkit includes operational definitions of this study's methodology, recommended implementation plan, patient information, flow of the scenario, and guided debriefing plans for use by other schools of nursing: http://www.uta.edu/conhi/smart-hospital/c-face/index.php.

Conclusion

These findings suggest that the redistribution of clinical hours from traditional settings to simulation did not substantially affect clinical competency or licensure examination results. Future research involving the redistribution of clinical hours should take several factors into consideration. First, the variability in faculty knowledge regarding the use of simulation should be noted. Although shifts from clinical to simulation hours may be planned, limitations on resources in the various simulation laboratories may become apparent, including space, time, technicians, and consumables. In this study, we found that the bottleneck shifted from access to clinical sites to access to the simulation facilities.

The lack of relationship between hour redistribution and outcome measures (CCEI and NCLEX) suggests that the number of hours in clinical or simulation settings is not the metric of concern; rather, it is what happens during those hours (e.g., the exposures the student encounters). This factor relates to the specifics of the clinical site and the patients available to interact with the student. Undoubtedly, clinical experiences with actual patients are required to acquire the competencies of an RN. Future efforts should focus on how faculty use clinical time (both hands-on and simulation) and on the inclusion of best practices of simulation and clinical instruction (e.g., clinical debriefing). Attention to these attributes of clinical education is more likely to be fruitful than a singular focus on the total number of clinical hours or the distribution of those hours between simulation and actual hands-on experiences.

References

  • Allan, J.D. & Aldebron, J. (2008). A systematic assessment of strategies to address the nursing faculty shortage, U.S. Nursing Outlook, 56, 286–297. doi:10.1016/j.outlook.2008.09.006
  • Altmiller, G. & Hopkins-Pepe, L. (2019). Why Quality and Safety Education for Nurses (QSEN) matters in practice. The Journal of Continuing Education in Nursing, 50, 199–200. doi:10.3928/00220124-20190416-04
  • American Association of Colleges of Nursing. (2008). The essentials of baccalaureate education for professional nursing practice. Washington, DC: Author.
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
  • Cooper, S., Beauchamp, A., Bogossian, F., Bucknall, T., Cant, R., Devries, B. & Young, S. (2012). Managing patient deterioration: A protocol for enhancing undergraduate nursing students' competence through web-based simulation and feedback techniques. BMC Nursing, 11, 18. doi:10.1186/1472-6955-11-18
  • Decker, S., Sportsman, S., Puetz, L. & Billings, L. (2008). The evolution of simulation and its contribution to competency. The Journal of Continuing Education in Nursing, 39, 74–80. doi:10.3928/00220124-20080201-06
  • Durham, C.F. & Alden, K.R. (2008). Enhancing patient safety in nursing education through patient simulation. In Hughes, R.G. (Ed.), Patient safety and quality: An evidence-based handbook for nurses. Rockville, MD: Agency for Healthcare Research and Quality.
  • Grove, S.K. & Cipher, D.J. (2017). Statistics for nursing research: A workbook for evidence-based practice (2nd ed.). St. Louis, MO: Elsevier.
  • Hayden, J. (2010). Use of simulation in nursing education: National survey results. Journal of Nursing Regulation, 1(3), 52–57. doi:10.1016/S2155-8256(15)30335-5
  • Hayden, J., Keegan, M., Kardong-Edgren, S. & Smiley, R.A. (2014). Reliability and validity testing of the Creighton competency evaluation instrument for use in the NCSBN national simulation study. Nursing Education Perspectives, 35, 244–252. doi:10.5480/13-1130.1
  • Hayden, J.K., Jeffries, P.J., Kardong-Edgren, S. & Spector, N. (2009). The national simulation study: Evaluating simulated clinical experiences in nursing education. Unpublished research protocol. Chicago, IL: National Council of State Boards of Nursing.
  • Hayden, J.K., Smiley, R.A., Alexander, M., Kardong-Edgren, S. & Jeffries, P.R. (2014). The NCSBN national simulation study: A longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education [Supplemental material]. Journal of Nursing Regulation, 5(2).
  • Institute of Medicine. (2011). Transforming education. In The future of nursing: Leading change, advancing health (pp. 163–220). Washington, DC: National Academies Press.
  • Ironside, P.M., McNelis, A.M. & Ebright, P. (2014). Clinical education in nursing: Rethinking learning in practice settings. Nursing Outlook, 62, 185–191. doi:10.1016/j.outlook.2013.12.004
  • Jeffries, P.R., Dreifuerst, K.T., Kardong-Edgren, S. & Hayden, J. (2015). Faculty development when initiating simulation programs: Lessons learned from the national simulation study. Journal of Nursing Regulation, 5(4), 17–23. doi:10.1016/S2155-8256(15)30037-5
  • Kim, J., Park, J.H. & Shin, S. (2016). Effectiveness of simulation-based nursing education depending on fidelity: A meta-analysis. BMC Medical Education, 16, 152. doi:10.1186/s12909-016-0672-7
  • Kolb, A.Y. & Kolb, D.A. (2005). Learning styles and learning spaces: Enhancing experiential learning in higher education. Academy of Management Learning & Education, 4, 193–212. doi:10.5465/amle.2005.17268566
  • Lapkin, S., Levett-Jones, T., Bellchambers, H. & Fernandez, R. (2010). Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: A systematic review. Clinical Simulation in Nursing, 6(6), e207–e222. doi:10.1016/j.ecns.2010.05.005
  • LeFlore, J.L., Anderson, M., Michael, J.L., Engle, W.D. & Anderson, J. (2007). Comparison of self-directed learning versus instructor-modeled learning during a simulated clinical experience. Simulation in Healthcare, 2, 170–177. doi:10.1097/SIH.0b013e31812dfb46
  • Lisko, S.A. & O'Dell, V. (2010). Integration of theory and practice: Experiential learning theory and nursing education. Nursing Education Perspectives, 31, 106–108.
  • National Council of State Boards of Nursing. (2019). Member board profiles. Retrieved from https://www.ncsbn.org/profiles.htm
  • Schlairet, M.C. & Fenster, M.J. (2012). Dose and sequence of simulation and direct care experiences among beginning nursing students: A pilot study. Journal of Nursing Education, 51, 668–675. doi:10.3928/01484834-20121005-03
  • Shin, S., Park, J.H. & Kim, J.H. (2015). Effectiveness of patient simulation in nursing education: Meta-analysis. Nurse Education Today, 35, 176–182. doi:10.1016/j.nedt.2014.09.009
  • Spector, N. (2011). The focus on competencies in nursing. Retrieved from https://www.bon.texas.gov/pdfs/education_innovation_pdfs/edudocs/dec-presentation.pdf

Clinical Hours by Program and Changes Over Time

Control Group^a              Total Hours     Hands-On     Simulation^b
ADN program 1 (n = 23)       896             684          140 (15.6%)
ADN program 2 (n = 84)       796             540          224 (28.1%)
BSN program 1 (n = 61)       585             344          164 (28%)
BSN program 2 (n = 103)      720             477          192 (26.1%)

Intervention Group^a         Total Hours^b   Hands-On^c   Simulation^d
ADN program 1 (n = 42)       896             642 (−42)    182 (20.3%)
ADN program 2 (n = 100)      796             456 (−84)    308 (38.7%)
BSN program 1 (n = 50)       585             328 (−16)    187 (31.9%)
BSN program 2 (n = 123)      688             393 (−84)    240 (34.9%)

Study Demographic Characteristics

Variable                    Control (n = 271)    Intervention (n = 315)    Whole Sample (N = 586)    p
                            x̄       SD           x̄       SD                x̄       SD
Age                         26.6    7.4          26.4    7.9               26.5    7.6               .25

Variable                    n       %            n       %                 N       %                 p
Male                        29      10.7         40      12.7              69      11.8              .52
Ethnicity^a
  Asian                     30      11.1         34      10.8              64      10.9              .92
  Black/African American    31      11.4         41      13.0              72      12.3              .56
  Hispanic/Latino           46      17.0         59      18.7              105     17.9              .58
  White                     147     54.2         162     51.4              309     52.7              .50

Creighton Competency Evaluation Instrument Pooled Mean Scores, Adjusted for School

Clinical Course and Group    Mean    SE     95% CI, Lower    95% CI, Upper    F Ratio^a    Effect Size (Cohen's f)
Medical–Surgical^b
  A                          26.8    0.5    25.8             27.9
  B                          25.9    0.8    24.0             27.8
  Control                    24.6    0.4    23.8             25.4             8.79         0.063
Psychiatric
  A                          9.6     0.3    8.9              10.3
  B                          10.3    0.3    9.6              11.0
  Control                    10.0    0.3    9.4              10.5             2.20         0.039
Pediatrics
  A                          18.1    0.6    16.8             19.4
  B                          20.2    0.5    19.1             21.3
  Control                    20.1    0.4    19.1             21.1             12.48        0.079
Obstetrics
  A                          12.2    0.3    11.6             12.9
  B                          13.4    0.3    12.7             14.1
  Control                    12.1    0.2    11.6             12.6             10.74        0.090
Capstone
  A                          32.1    0.8    30.4             33.7
  B                          32.3    0.7    31.0             33.7
  Control                    34.8    0.6    33.5             36.2             10.39        0.072

NCLEX Pass Rates by School and Cohort

School and Cohort           Control (n = 271)    Intervention (n = 315)    Whole Sample (N = 586)    Difference
ADN program 1^a (n = 65)    100%                 81%                       87.7%                     19%
ADN program 2 (n = 184)     82.1%                77%                       79.3%                     5.1%
BSN program 1 (n = 111)     83.6%                76%                       80.2%                     7.6%
BSN program 2 (n = 226)     94.2%                95.1%                     94.7%                     −0.9%
Total                       88.6%                84.4%                     88.5%                     4.2%^b
Authors

Dr. Mancini is Professor, Senior Associate Dean for Education Innovation, Baylor Professor for Healthcare Research, College of Nursing and Health Innovation, Dr. LeFlore is Professor Emeritus, and Dr. Cipher is Associate Professor, College of Nursing and Health Innovation, University of Texas at Arlington, Arlington, Texas.

This study was funded by a grant from Texas Higher Education Coordinating Board. The authors thank the DFW Hospital Council, Ms. Sally Williams, Dr. Mark Meyer, Dr. Elaine Evans, Dr. Michelle Aebersold, Dr. Ann Louise Butt, Dr. Desiree Diaz, Dr. Laura Gonzalez, Mr. Sooyun Kim, Dr. Rosemary Macy, Dr. Pat Thomas, and Dr. Janet Willhaus for their invaluable participation and support throughout this study.

The authors have disclosed no potential conflicts of interest, financial or otherwise.

Address correspondence to Daisha Jane Cipher, PhD, Associate Professor, College of Nursing and Health Innovation, University of Texas at Arlington, 701 S. Nedderman Drive, Arlington, TX 76019-0407; e-mail: cipher@uta.edu.

Received: April 09, 2019
Accepted: July 24, 2019

10.3928/01484834-20190923-02
