Journal of Nursing Education

Major Article 

Development and Evaluation of an Algorithm-Corresponding Instrument for Nursing Simulation

Yu-nah Lee, PhD, RN; Hyunsook Shin, PhD, RN, CPNP-PC; Dahae Rim, PhD, RN; Kaka Shim, PhD, RN

Abstract

Background:

This study aimed to develop and validate an assessment instrument for students engaging with an algorithm-based simulation scenario addressing emergency measures for high-risk newborns with apnea in a neonatal intensive care unit.

Method:

The study was conducted in two phases: development and evaluation of the algorithm-corresponding instrument. One hundred sixty-nine senior nursing students from two universities in South Korea were evaluated using the developed instrument.

Results:

The developed and validated instrument consisted of three dimensions (assessment points, nursing skills, and communication) measured through 13 items. The exploratory factor analysis revealed three factors of the instrument, and the confirmatory factor analysis demonstrated a better model fit for a three-factor instrument model than for other models.

Conclusion:

The developed algorithm-corresponding assessment instrument is suitable for assessing the clinical decision-making ability of nursing students in a simulation scenario. [J Nurs Educ. 2020;59(11):617–626.]

The goal of nursing education is to improve clinical competence. Clinical competence is the combination of knowledge, skills, attitudes, and performance that nurses need in order to promote patient health in a clinical context (Notarnicola et al., 2016). To improve clinical competence, nursing education should aim to improve the clinical judgment process and decision making.

Educating nursing students to provide adequate, on-time responses to emergency situations is critical to their education. Algorithm-based education can enhance students' clinical competence (Birkhoff & Donner, 2010). Currently, clinical simulation learning is largely based on common clinical protocols and specific algorithms reflecting patient situations (Roh et al., 2016), and present nursing simulation scenarios correspond to multiple stages of an algorithm. Given that algorithm-based simulation education contributes to students' clinical competence (Birkhoff & Donner, 2010), nurse educators need to develop more varied algorithm-based nursing simulation scenarios and assessment instruments that can measure whether nursing students achieve the learning goals of the scenarios.

In a neonatal care setting, nurses often confront deteriorating patient situations that can escalate into life-threatening emergencies; therefore, nurses' ability to respond to those emergencies promptly and correctly is critically important. In emergency situations, accurate clinical judgment and effective application of clinical skills following a step-by-step approach are directly related to patient outcomes. Therefore, nursing students should have learning experiences in algorithm-based clinical judgment and action processes for complex deteriorating situations.

Effective instruments for assessing nursing students' performance are essential to ensure the quality of nursing education. However, most instruments used in nursing simulation contain items that evaluate the learner's overall competence or individual skill performance rather than performance on critical points during each stage of the scenario. To capture performance at each stage, an algorithm-corresponding instrument must measure each stage transition, intervening moment, and embedded clinical judgment. Recent reviews of simulation assessment instruments (Adamson et al., 2013; Kardong-Edgren et al., 2010) found that existing instruments mostly measured simulation reactions and learning, which are recognized as low-level performance outcomes.

Most previous studies of simulation assessment instruments were limited in their ability to capture the clinical competence stages corresponding to specific algorithm-based simulation scenarios (Adamson et al., 2013; Kardong-Edgren et al., 2010). These gaps in previous studies and in educational practice have led to either inappropriate performance evaluation or ineffective assessment of clinical competence. Well-structured instruments that reflect the corresponding stages of scenario algorithms may contribute to nursing students' development of psychomotor and cognitive skills (Kassab & Kenner, 2011; Korhan et al., 2018).

In our previous study (Shin et al., 2015a), we developed a simulation scenario that addressed emergency measures for a high-risk newborn with apnea but mostly focused on nurses' clinical judgment. The objective of the scenario was for students to recognize deterioration in the condition of a high-risk newborn and to exercise proper clinical judgment and clinical core skills in responding to that situation. The scenario was based on neonatal emergency care resulting in an overall algorithm-based process that reflected a nurse's competence. To evaluate student achievement in the scenario objectives, it was necessary to develop a specific assessment instrument that reflected the clinical judgment process and the core skills applied during each algorithm-corresponding stage. Therefore, this study aimed to develop and validate an assessment instrument for measuring students' clinical decision making in response to the simulation scenario.

Method

Study Design

This study used a two-phase process to develop and validate a scenario-specific instrument that corresponded to a simulation scenario algorithm. In the first phase, a simulation algorithm-corresponding instrument (ACI) was developed to assess nursing students' response to a simulation that involved providing care for a high-risk newborn with apnea. In the second phase, the validity and reliability of the instrument were evaluated by assessing its psychometric properties.

Sample and Setting

A total of 200 undergraduate nursing students were recruited from two universities in Seoul, Korea. The inclusion criterion was that student participants in each school had to be enrolled in a pediatric nursing practicum between December 2013 and August 2014. Overall, the data for 169 of the students originally recruited were analyzed; 31 students were excluded for incomplete questionnaire responses.

Ethical Considerations

This study was approved by the institutional review board of a university in Korea. A researcher informed the participants about the study procedures and the potential risks and benefits of participation. Participants provided written, informed consent before beginning the study simulation.

Procedure

First Phase: Development of Simulation Algorithm and Instrument

The simulation scenario was designed for undergraduate students participating in a pediatric nursing practicum and was developed to provide clinical experience, enhance skill development, and improve the knowledge required to provide nursing care in a neonatal intensive care unit (NICU). The specific learning objectives were to recognize changes in the newborn's condition during clinical deterioration of the respiratory system and to make appropriate judgments and take appropriate actions based on the algorithm. In a cardiopulmonary resuscitation situation, the objectives were to implement accurate techniques and to communicate with a professional attitude.

The scenario involved an exacerbation situation in which a high-risk newborn vomited and then exhibited decreasing oxygen saturation and heart rate. Nursing students participating in the simulation gained experience in coping with this emergency as a NICU nurse. The learning objectives for this scenario included providing appropriate nursing care for a high-risk newborn with apnea in a NICU and performing effective resuscitation using the acquired core skills.

The algorithm used for this scenario was designed to challenge participants to choose among several care options within a framework and to enhance their clinical judgment based on Tanner's (2006) clinical judgment model. The main simulation mechanism included the key steps of neonatal resuscitation using the airway-breathing-circulation method, as well as noticing, interpreting, responding to, and reflecting on the situation as part of the clinical judgment process. We developed an algorithm that captured clinical judgment activities and required core competencies at each step of the process. The clinical expert validity of the scenario, algorithm frames, and transitions among frames was shown to be 96.3% in a previous study (Shin et al., 2015a). An ACI was developed according to the individual components of the simulation algorithm. The instrument addressed clinical judgments (e.g., noticing, interpreting, responding, and reflecting), resuscitation skills, and communication skills.

Second Phase: Psychometric Evaluation of the ACI

To validate the ACI, the instrument was applied during a pediatric nursing simulation practicum. In this practicum, student participants experienced a simulation involving apnea in a high-risk newborn in a NICU. The estimated simulation time for the scenario, including simulation operation, self-analysis, and debriefing, was approximately 1 hour. The simulation followed an established protocol (Rim & Shin, in press; Shin et al., 2015a; Shin et al., 2015b) that included prelearning, skills laboratory practice, simulation operation, and reflective debriefing. During prelearning, participants discussed the clinical symptoms of a high-risk newborn with respiratory deterioration, clinical judgment criteria based on the algorithm, and resuscitation techniques and professional attitude during emergency situations. In the skills laboratories, students practiced accurate techniques for maintaining airway position, chest compression, and bag-valve-mask ventilation. In each simulation session, two students participated in the scenario with a high-fidelity simulator and experienced the deteriorating situation for 7 to 10 minutes. Reflective debriefing included both individual and group debriefing. Individual debriefing consisted of concept mapping of the situation, SBAR (Situation, Background, Assessment, Recommendation) writing, and a reflective assessment of recorded self-performance using the instrument developed in this study. After individual debriefing, students participated in group debriefing sessions to reflect on their actions and to revisit key moments of the simulation session. The instructor evaluated students' simulation activities with the instrument, identifying the indicators that should appear as the simulated situation progressed and scoring each student pair in real time. Two researchers experienced in simulation operation and neonatal care rated the simulations to test interrater reliability; the observed interrater reliability was .70.

Analyses

Study data were analyzed using IBM SPSS® (version 20) and STATA®. SPSS was used to analyze the descriptive data for participant characteristics and to perform the exploratory factor analysis (EFA). Confirmatory factor analysis (CFA) was performed using STATA; CFA was used to estimate construct validity, and multigroup CFA was used to analyze the measurement invariance of the ACI.

Exploratory Factor Analysis

The EFA was used to examine the number of ACI factors and the relationships among its items to define the construct. To assess sampling adequacy, Bartlett's test of sphericity and the Kaiser-Meyer-Olkin (KMO) measure were performed. Principal component analysis (PCA) with varimax rotation was used as the factor extraction method.
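Both sampling-adequacy statistics follow directly from the item correlation matrix. As a reference point only (the study used SPSS, not the code below), a minimal NumPy sketch of Bartlett's test of sphericity and the overall KMO measure:

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: chi-square statistic and degrees of
    freedom for the null hypothesis that the correlation matrix is identity."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    Rinv = np.linalg.inv(R)
    # Partial correlations are derived from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    P = -Rinv / d
    np.fill_diagonal(R, 0)
    np.fill_diagonal(P, 0)
    return (R ** 2).sum() / ((R ** 2).sum() + (P ** 2).sum())
```

A KMO value above 0.60 and a significant Bartlett result, as reported in the Results, indicate that the item correlations are strong enough to warrant factoring.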

Construct Validity Evaluation

CFA was used to assess ACI construct validity and to estimate model fit. Model evaluation was based on a variety of fit measures: the chi-square test and its associated probability, root-mean-square error of approximation, standardized root-mean-square residual, comparative fit index, Akaike's information criterion, and Bayesian information criterion.
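Two of these measures have simple closed forms given the model chi-square; as an illustrative sketch (not the STATA routines used in the study):

```python
import math

def rmsea(chi2, df, n):
    """Root-mean-square error of approximation from the model chi-square,
    its degrees of freedom, and the sample size n."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index comparing the fitted model (m) against the
    baseline (independence) model (b)."""
    d_m = max(chi2_m - df_m, 0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0
```

Conventional cutoffs treat RMSEA below about .08 and CFI above about .90 as acceptable, which is why these indices are reported alongside the chi-square in the Results.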

Results

Initial Version of ACI

The ACI was developed to evaluate the performance of nursing students in simulated care of an apneic high-risk newborn. As explained earlier, the simulation scenario was designed to allow students to experience an emergency clinical situation in a NICU. The algorithm for the scenario comprised nine state-transition frames and 16 arrows indicating interpreting processes and intervening performances (Figure 1). Each frame transitions into the next according to correct or incorrect intervening performances. The frames represent the specific scenario situation, and the arrows indicate the clinical judgment activities and core skills that should be applied as the scenario situation changes.
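The frame-and-arrow logic can be thought of as a small state machine in which each observed performance selects the next frame. A minimal sketch with hypothetical frame and performance names (the actual nine frames and 16 arrows are those in Figure 1):

```python
# Illustrative state machine; frame and performance names here are
# hypothetical stand-ins, not the published algorithm's labels.
TRANSITIONS = {
    "vomiting": {"suction_on_time": "airway_clear",
                 "suction_missed": "apnea"},
    "apnea": {"stimulation_and_bagging": "breathing_restored",
              "no_response": "bradycardia"},
    "bradycardia": {"compressions_and_bagging": "recovery"},
}

def run(frame, performances):
    """Advance through frames given a sequence of observed performances;
    an unrecognized performance leaves the current frame unchanged."""
    path = [frame]
    for p in performances:
        frame = TRANSITIONS.get(frame, {}).get(p, frame)
        path.append(frame)
    return path
```

Scoring a student then amounts to recording which arrows (performances) were taken correctly at each frame, which is what the ACI items capture.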

Figure 1. Initial algorithm of algorithm-corresponding instrument. Note. BT = body temperature; HR = heart rate; RR = respiration rate; SpO2 = oxygen saturation.

We developed an ACI that could measure 16 corresponding points in the algorithm; consequently, the instrument initially developed contained 16 items (Table 1). Each item and component of the instrument reflected the flow of the algorithm and the arrows of state-transitioning, interpreting situations, and intervening performances in the algorithm for the apneic high-risk newborn care scenario. For each of the 16 items, scores ranged from none/beginning (0 points) to accomplished (2 points), based on the indicators described below. The instrument items proceeded in the order of the airway-breathing-circulation steps called for in the scenario. The first step, airway, consisted of four activities: (a) checking the patient's condition, (b) maintaining the airway position, (c) suctioning, and (d) evaluating the patient's status. The breathing step consisted of four activities: (a) noticing apnea, (b) applying tactile stimulation to promote breathing, (c) bagging, and (d) evaluating the patient's status at each intervention point. The third step, circulation, consisted of four activities: (a) noticing bradycardia, (b) chest compression, (c) bagging for resuscitation, and (d) evaluating the patient's status during resuscitation. In addition, communication was observed throughout the simulation in terms of fulfilling the team role, communication skills, and professionalism. At the end of the simulation, the item scores were summed. A high total score represented greater competency to solve the problem in the simulation. The reliability of the initial instrument was estimated as a Cronbach's alpha of .71.
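The summed scoring and the reported reliability can be reproduced mechanically; a minimal sketch, assuming scores are stored as a students-by-items matrix of 0/1/2 ratings:

```python
import numpy as np

def total_score(scores):
    """Sum the per-item ratings (0 = none/beginning, 2 = accomplished);
    a higher total indicates greater competency in the simulation."""
    return np.asarray(scores, dtype=float).sum(axis=1)

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_students, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

This is the computation behind the reported alpha of .71 for the initial 16-item instrument.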

Table 1: The Algorithm-Corresponding Contents and Evaluation Points of the Initially Developed Instrument

Exploratory Factor Analysis

The EFA was conducted to define the construct of the ACI. The KMO procedure and Bartlett's test of sphericity were used to confirm the adequacy of the sample. The KMO value was 0.768, which exceeded the recommended value of 0.60, and the result of Bartlett's test was significant (χ2 = 739.58, df = 78, p < .01).

PCA was used as the factor extraction method, and varimax was selected as the rotation method. In the first PCA of the 16 items, five factors with eigenvalues of 1 or more were extracted, accounting for 63.59% of the total variance. Three items were then eliminated from the factor pattern matrix; specifically, items a2, c10, and c11 were removed because their content bore little relation to that of the other items in their factors. A second PCA performed on the 13 remaining items extracted four factors with eigenvalues of 1 or more, accounting for 63.91% of the total variance. The first factor included six items (a4, b5, b6, b7, b9, and c13) with loadings between .544 and .798; these items focused on identifying or recognizing the patient's condition and responding after the nursing intervention. The second factor consisted of three items (cm1, cm2, and cm3) that focused on communication among nurses; all had high loadings, between .692 and .854. The third factor contained two items (b8 and c12) with loadings between .796 and .868; these items focused on bagging skills during the emergency situation. The fourth factor consisted of two items (a1 and a3) with loadings between .734 and .793. Item a1 focused on whether the nurse assessed the patient's condition at the start of the scenario, and item a3 focused on the nurse's psychomotor skill in suctioning at the proper time after the vomiting episode. Because the fourth factor showed no conceptual commonality between its items, items a1 and a3 were removed, and PCA was performed again with the remaining 11 items (Table 2). The three factors accounted for 63.75% of the total variance. On the basis of the factor analyses and item meanings, we hypothesized that the structure of the ACI consisted of three domains: assessment points, nursing skill proficiency, and communication. The eigenvalues and scree plot guided the decision on how many factors to retain, and the scree plot indicated three factors (Figure 2).

Table 2: Factor Analysis

Figure 2. The scree plot. Note. x-axis = component number; y-axis = eigenvalue.

Confirmatory Factor Analysis

The CFA was performed to evaluate the fit between solution types and to confirm the ACI's construct validity. In our study, CFA was applied to three models. The first was the one-factor solution for the entire simulation algorithm. The second model was the four-factor solution based on the results of factor analysis and the scenario algorithm flow. The third model was the three-factor solution, which consisted of assessment, nursing skills, and communication, according to the hypothesized structure and the scree plot results.

The CFA results are shown in Table 3. The one-factor and four-factor models were both significant (p < .001). For the three-factor model, the statistical values were χ2 = 190.73, root-mean-square error of approximation = 0.120, standardized root-mean-square residual = 0.077, comparative fit index = 0.847, Akaike's information criterion = 3,363.45, and Bayesian information criterion = 3,494.91. The three-factor model exhibited better fit indices than the one-factor and four-factor models. However, the standardized root-mean-square residual of the three-factor model was not satisfactory, as it exceeded the acceptable value of 0.05.

Table 3: Model Fit Statistics for Confirmatory Factor Analysis

In summary, the three-factor model had the best fit indices. Standardized factor loadings and residuals for the three-factor model are presented in Figure 3. Two items, item a1 of the cognitive domain subscale and item a3 of the psychomotor domain subscale, had relatively lower loadings on their designated factors than all other items.

Figure 3. Standardized factor loadings in confirmatory factor analysis. Note. Cognitive = assessment point; Psychomotor = nursing skills; Affective = communication.

Final ACI

The original 16 items of the ACI were reduced to 13 items based on the results of the EFA and CFA. Figure 4 and Table 4 include the final 10 algorithm-corresponding points of the scenario and the three items of the communication domain observed throughout the scenario. In the revised instrument, four items corresponding to patient assessment were merged into two items according to algorithm stage, and two items related to chest compression and bagging were combined into one item, resulting in the exclusion of three items in total. Items related to nursing skills included two attributes: timeliness ("on time") and skill proficiency.

Figure 4. Final algorithm of algorithm-corresponding instrument. Note. BT = body temperature; HR = heart rate; RR = respiration rate; SpO2 = oxygen saturation.

Table 4: Final Assessment Instrument

The final instrument comprised three domains: assessment point, nursing skill proficiency, and communication. The cognitive domain (assessment point) contained six items—1, 2, 4, 6, 8, and 10—representing the cognitive processes of assessing, noticing, and recognizing the patient's condition. The psychomotor domain (nursing skills) had four items—3, 5, 7, and 9—related to the core techniques that should be used in the emergency. The affective domain (communication) consisted of three items—overall 1, overall 2, and overall 3—addressing overall attitude, professionalism, and communication.

Discussion

Given that it is difficult to validate instruments that evaluate nursing students' performance, such as clinical judgment rubrics or clinical reasoning and decision-making checklists, a well-structured instrument can help evaluators accurately measure student performance and can foster clinical judgment development in students. To develop an algorithm-corresponding assessment instrument for a nursing simulation scenario, we visualized the simulation scenario flow with frames and matched evaluation indicators (arrows) to the nursing skills that students should acquire as simulation objectives. Each component of the ACI matches a stage of clinical judgment. In addition, the instrument includes items that reflect measurable indicators (arrows) in the algorithm, including state transitioning, interpreting situations, and intervening performances. Because our development process visualized the algorithm and captured the required clinical competencies, we were able to produce a comprehensive instrument for appraising clinical nursing competency. This development process may help educators recognize students' behavioral indicators in clinical simulations of the type used in our study.

Clinical competence evaluation is defined as an integrated means of combining knowledge, understanding, problem solving, technical skills, attitudes, and communication skills in the evaluation process. Evaluation instruments currently used in nursing education are intended to evaluate nursing competence, including clinical judgment and the knowledge, skills, and attitudes needed for nursing students to enter practice (Shipman et al., 2012). Objective structured clinical examination (OSCE) instruments have been used to assess clinical practice in nursing simulation education. Although OSCEs are validated and useful tools (Adamson et al., 2013), they cannot measure contextual processes such as the clinical judgment process (Phelan et al., 2014). Several instruments have been developed to evaluate clinical judgment processes and critical thinking, such as the Lasater Clinical Judgment Rubric and the California Critical Thinking Disposition Inventory. Only the Clinical Simulation Evaluation Tool (Radhakrishnan et al., 2007) and the Creighton Simulation Evaluation Instrument (Todd et al., 2008) were developed to simultaneously measure students' skill competency and clinical judgment process; however, these tools do not assess specific skills as the OSCEs do. Our study introduced an evaluation instrument that simultaneously assesses nursing skill competency, such as suctioning, tactile stimulation, bag-mask ventilation, and cardiac compression, and a clinical judgment process in which students observe data on patients' symptoms and condition changes and identify nursing actions on a contextual basis.

Our study's EFA results revealed three factors among the ACI items, which were validated by the CFA results. The three ACI factors, assessment point, skill proficiency, and communication, are theoretically sound. The major findings show that the ACI measures nursing competency according to the learning objectives of three domains in Bloom's taxonomy. Bloom (1956) described the three domains of educational objectives as cognitive, psychomotor, and affective, and these learning objectives are crucial in nursing simulation education. The newborn care apnea simulation has high complexity; consequently, instructors can expect higher learning outcome levels for nursing students participating in the simulation. To assess higher learning outcome levels, such as integrated nursing competency, assessment instruments should be based on the learning objectives and three domains (cognitive, psychomotor, affective) of nursing competency (Leigh et al., 2016). Previous studies have suggested that the validity of simulation-based assessment instruments must be ensured (Adamson et al., 2013; Kardong-Edgren et al., 2010; Leigh et al., 2016). Our findings indicated that an instrument reflecting the three objectives of the simulation scenario had acceptable validity for the newborn-with-apnea simulation.

Although the CFA results supported the three-factor model, some fit indices for the four-factor model were better than those for the three-factor model. This was likely because some items, such as a1 and a3, were ambiguous. Item a1 measures students' behavior at the first stage, and it was difficult for raters to distinguish between "just looking at the patient" and "closely observing the patient." In addition, the infant simulator used in the apnea scenario did not visualize vomiting well. As a result, it was difficult to identify whether students noticed the airway obstruction, which affected item a3 (suctioning). The suctioning item included both interpretive and skill aspects, but raters might have been unable to rate suctioning properly because the item lacked sensitivity and clarity. Items a1 and a3 both decreased the tool's overall validity. However, given that a1 (checking patient condition) was appropriate for measuring nursing competency at the beginning of the simulation and a3 (suctioning) was a key nursing intervention in the scenario, both items were retained in the final ACI. Ensuring rating reliability is important when evaluating nursing students during simulated clinical interventions (Garner et al., 2017), and rater sensitivity and consistency are essential to achieving it (Kardong-Edgren et al., 2017). To increase the reliability of our instrument, we added concrete behavior indicators to item a1 and modified the simulation environment to improve the perception of vomiting for item a3.

The ACI was developed to measure integrated performance, including clinical judgment, and was evaluated using checklist-based multilateral criteria. However, the scoring system poses a study limitation, and we recommend that it be improved in future studies. In further research, we plan to expand the simulation program to maintain the same practice standards between RNs and nursing students.

Conclusion

To maximize learning outcomes in nursing simulations, the simulated scenario should include concrete learning objectives that are assessed through a validated instrument that can measure corresponding points in the scenario. We developed the 13-item ACI to evaluate comprehensive nursing competency, including clinical judgment, clinical skills, and communication, during a neonatal emergency care situation. This scenario-specific ACI provides both nursing educators and students with objective evaluation criteria. In addition, the specific scenario and the scenario-based instrument represent a simulation process that includes both application and evaluation, which can facilitate easy and efficient implementation of clinical simulations in nursing education. Finally, an integrated scenario and assessment instrument will advance the use of simulation in nursing education and lead to improved student learning outcomes.

References

  • Adamson, K. A., Kardong-Edgren, S. & Willhaus, J. (2013). An updated review of published simulation evaluation instruments. Clinical Simulation in Nursing. doi:10.1016/j.ecns.2012.09.004 [CrossRef]
  • Birkhoff, S. D. & Donner, C. (2010). Enhancing pediatric clinical competency with high-fidelity simulation. The Journal of Continuing Education in Nursing, 41(9), 418–423 doi:10.3928/00220124-20100503-03 [CrossRef]
  • Bloom, B. S. (1956). Taxonomy of educational objectives: Cognitive domain (Vol. 1). McKay.
  • Garner, S. L., Killingsworth, E. & Raj, L. (2017). Partnering to establish and study simulation in international nursing education. Nurse Educator, 42(3), 151–154. doi:10.1097/NNE.0000000000000333
  • Kardong-Edgren, S., Adamson, K. A. & Fitzgerald, C. (2010). A review of currently published evaluation instruments for human patient simulation. Clinical Simulation in Nursing. doi:10.1016/j.ecns.2009.08.004
  • Kardong-Edgren, S., Oermann, M. H., Rizzolo, M. A. & Odom-Maryon, T. (2017). Establishing inter- and intrarater reliability for high-stakes testing using simulation. Nursing Education Perspectives, 38(2), 63–68. doi:10.1097/01.NEP.0000000000000114
  • Kassab, M. & Kenner, C. (2011). Simulation and neonatal nursing education. Newborn and Infant Nursing Reviews, 11(1), 8–9. doi:10.1053/j.nainr.2010.12.006
  • Korhan, E., Yilmaz, D., Celik, G., Haci, D. & Bayasan, A. (2018). The effects of simulation on nursing students' psychomotor skills. International Journal of Clinical Skills, 12(1). https://www.ijocs.org/abstract/the-effects-of-simulation-on-nursing-students-psychomotor-skills-12438.html
  • Leigh, G., Stueben, F., Harrington, D. & Hetherman, S. (2016). Making the case for simulation-based assessments to overcome the challenges in evaluating clinical competency. International Journal of Nursing Education Scholarship, 13(1), 27–34. doi:10.1515/ijnes-2015-0048
  • Notarnicola, I., Petrucci, C., Barbosa, M. R. D. J., Giorgi, F., Stievano, A. & Lancia, L. (2016). Clinical competence in nursing: A concept analysis. Professioni Infermieristiche, 69(3). http://www.profinf.net/pro3/index.php/IN/article/view/279
  • Phelan, A., O'Connell, R., Murphy, M., McLoughlin, G. & Long, O. (2014). A contextual clinical assessment for student midwives in Ireland. Nurse Education Today, 34(3), 292–294. doi:10.1016/j.nedt.2013.10.016
  • Radhakrishnan, K., Roche, J. P. & Cunningham, H. (2007). Measuring clinical practice parameters with human patient simulation: A pilot study. International Journal of Nursing Education Scholarship, 4(1). doi:10.2202/1548-923X.1307 PMID:17402934
  • Rim, D. & Shin, H. (in press). Effective instructional design template for virtual simulations in nursing education. Nurse Education Today.
  • Roh, Y. S., Lim, E. J. & Barry Issenberg, S. (2016). Effects of an integrated simulation-based resuscitation skills training with clinical practicum on mastery learning and self-efficacy in nursing students. Collegian (Royal College of Nursing, Australia), 23(1), 53–59. doi:10.1016/j.colegn.2014.10.002
  • Shin, H., Lee, Y. N. & Rim, D. H. (2015a). Evaluation of algorithm-based simulation scenario for emergency measures with high-risk newborns presenting with apnea. Child Health Nursing Research, 21(2), 98–106. doi:10.4094/chnr.2015.21.2.98
  • Shin, H., Ma, H., Park, J., Ji, E. S. & Kim, D. H. (2015b). The effect of simulation courseware on critical thinking in undergraduate nursing students: Multi-site pre-post study. Nurse Education Today, 35(4), 537–542. doi:10.1016/j.nedt.2014.12.004 PMID:25549985
  • Shipman, D., Roa, M., Hooten, J. & Wang, Z. J. (2012). Using the analytic rubric as an evaluation tool in nursing education: The positive and the negative. Nurse Education Today, 32(3), 246–249. doi:10.1016/j.nedt.2011.04.007
  • Tanner, C. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. Journal of Nursing Education, 45(6), 204–211. doi:10.3928/01484834-20060601-04
  • Todd, M., Manz, J. A., Hawkins, K. S., Parsons, M. E. & Hercinger, M. (2008). The development of a quantitative evaluation tool for simulations in nursing education. International Journal of Nursing Education Scholarship, 5(1), 41. doi:10.2202/1548-923X.1705 PMID:19049492

Algorithm-Corresponding Contents and Evaluation Points of the Initially Developed Instrument

| Measured Indicator (Arrow Number) | Item | Corresponding Content | Evaluation Points (Indicators) |
| --- | --- | --- | --- |
| 1 | a1 | Checking patient condition | Checking chest movement or respiratory pattern, monitoring vital signs, and watching the patient |
| 2 | a2 | Noticing vomiting episode | Action of noticing: position change, notify coworkers |
| 3 | a3 | Suctioning | Timely action, use of lubricant, pressure, insertion depth |
| 4 | a4 | Patient evaluation: Airway phase | Rechecking the position, monitoring respiratory pattern, skin color |
| 5 | b5 | Noticing apnea status | Assessing respiratory rate and pattern, skin color, and chest movement |
| 6 | b6 | Tactile stimulation | Tactile site and intensity |
| 7 | b7 | Patient evaluation: Breathing phase 1 | Checking chest movement or respiratory patterns, vital sign monitoring, assessing skin color, auscultation |
| 8 | b8 | Bagging | Mask sealing, C-E technique, frequency, proper chest movement |
| 9 | b9 | Patient evaluation: Breathing phase 2 | Vital sign monitoring, checking chest expansion |
| 10 | c10 | Noticing bradycardia status | Heart rate <60 |
| 11 | c11 | Chest compression intervention | Site, depth, frequency of chest compression |
| 12 | c12 | Bagging during CPR | Mask sealing, C-E technique, frequency, proper chest movement, CPR ratio |
| 13 | c13 | Patient evaluation: Circulation phase | Checking chest movement or respiratory pattern, vital sign monitoring, assessing skin color, auscultation after CPR |
| Overall 1 | cm1 | Team role | Clear role assignment during the emergency situation |
| Overall 2 | cm2 | Communication | Open and closed-loop communication |
| Overall 3 | cm3 | Professional attitude | Professional attitude |
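The item codes in the table group the 13 sequential items into the airway (a), breathing (b), and circulation (c) phases of the scenario algorithm, plus three overall team items (cm). As a minimal illustrative sketch only (the grouping comes from the table; the data structure and function names are our own), this mapping can be expressed as:

```python
# Item codes (a1-a4, b5-b9, c10-c13, cm1-cm3) are taken from the table
# above; the phase names and this lookup structure are illustrative.
ALGORITHM_ITEMS = {
    "airway": ["a1", "a2", "a3", "a4"],
    "breathing": ["b5", "b6", "b7", "b8", "b9"],
    "circulation": ["c10", "c11", "c12", "c13"],
    "overall": ["cm1", "cm2", "cm3"],
}

def phase_of(item_code: str) -> str:
    """Return the algorithm phase that a given item code belongs to."""
    for phase, items in ALGORITHM_ITEMS.items():
        if item_code in items:
            return phase
    raise KeyError(f"unknown item code: {item_code}")
```

This makes the structure explicit: 13 phase-specific items evaluated in sequence, and 3 overall items rated across the whole scenario.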

Factor Analysis

| Item Number | Item | Factor 1 | Factor 2 | Factor 3 |
| --- | --- | --- | --- | --- |
| b6 | Tactile stimulation | .799 | −.101 | .030 |
| b5 | Noticing apnea | .787 | .023 | −.010 |
| b7 | Patient evaluation: Breathing phase 1 | .765 | .308 | .182 |
| b9 | Patient evaluation: Breathing phase 2 | .551 | .292 | .418 |
| a4 | Patient evaluation: Airway phase | .548 | .212 | .076 |
| c13 | Patient evaluation: Circulation phase | .519 | .528 | .067 |
| cm2 | Communication | .150 | .863 | .010 |
| cm3 | Professional attitude | .239 | .796 | .198 |
| cm1 | Team role | −.089 | .695 | .356 |
| b8 | Bagging | .085 | .129 | .868 |
| c12 | Bagging with compression | .095 | .149 | .800 |
| Eigenvalues after rotation | | 2.820 | 2.414 | 1.778 |
| % explained variance | | 25.634 | 21.950 | 16.166 |
| Cumulative % | | 25.634 | 47.583 | 63.749 |
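The variance percentages in the table follow directly from the rotated eigenvalues: each factor's explained variance is its eigenvalue divided by the number of items entered into the analysis (11 items appear in the table). A quick check, assuming the rounded eigenvalues as reported:

```python
# Rotated eigenvalues from the factor analysis table; 11 items were
# entered. Percent variance per factor = 100 * eigenvalue / n_items.
eigenvalues = [2.820, 2.414, 1.778]
n_items = 11

percent = [100 * ev / n_items for ev in eigenvalues]
cumulative = [sum(percent[: i + 1]) for i in range(len(percent))]
# Results agree with the reported 25.634, 21.950, 16.166 (cumulative
# 63.749) to within rounding of the published eigenvalues.
```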

Model Fit Statistics for Confirmatory Factor Analysis

| Model | χ² | df | p Value | RMSEA | SRMR | CFI | AIC | BIC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Three-factor model | 190.729 | 62 | <.001 | 0.120 | 0.077 | 0.847 | 3363.454 | 3494.910 |
| Four-factor model | 174.705 | 59 | <.001 | 0.108 | 0.072 | 0.832 | 3353.430 | 3494.275 |
| One-factor model | 322.371 | 65 | <.001 | 0.153 | 0.100 | 0.627 | 3489.096 | 3611.162 |

Final Assessment Instrument

| Number | Indicator | Corresponding Points | Accomplished (2) | Missing Some (1) | Beginning (0) |
| --- | --- | --- | --- | --- | --- |
| 1 | Respiratory rate; respiratory pattern; skin color; watching the patient and the patient's monitor | 1. Checking patient condition | All | Missing some | No |
| 2 | Airway patency checking (vomiting, checking and rechecking position, respiratory rate, skin color) | 2. Patient evaluation: Airway phase | All | 1<E<3 | No |
| 3 | On time | 3. Interpreting | Yes | | No |
| | Skill (pressure, insertion depth, time) | 3. Suctioning | All | Missing some | No |
| 4 | Inspecting chest movement and skin color; monitoring vital signs; verbalizing apnea status | 4. Noticing apnea status | All | Missing some | No |
| 5 | On time | 5. Interpreting | Yes | | No |
| | Skill (accurate site, proper intensity) | 5. Tactile stimulation | All | Missing one | No |
| 6 | Inspecting chest movement, respiratory pattern, color, monitoring vital signs, auscultation (optional) | 6. Patient evaluation: Breathing phase 1 | All | Missing some | No |
| 7 | On time | 7. Interpreting | Yes | | No |
| | Skill (mask sealing [mouth and nose sealing, C-E technique], frequency, and chest rise) | 7. Bagging | All | Missing some | No |
| 8 | Watching patient monitor (HR, SpO2); inspecting chest (expansion) | 8. Patient evaluation: Breathing phase 2 | All | Missing one | No |
| 9 | On time | 9. Interpreting | Yes | | No |
| | Skill (mask sealing [mouth and nose sealing, C-E technique], frequency, chest rise, CPR ratio, compression site and depth) | 9. Bagging and compression | All | Missing some | No |
| 10 | Apprehension of patient status: vital sign monitoring; assessment after CPR | 10. Patient evaluation: Circulation phase | All | Missing some | No |
| 11 | Overall | Team role | Excellent | Fair | Poor |
| 12 | Overall | Communication | Active discussion | Informative talks | No eye contact |
| 13 | Overall | Professional attitude | Excellent | Hesitant | Disregard |
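Because each of the 13 items is rated on the same 0-2 scale (beginning, missing some, accomplished), a student's overall performance reduces to a simple sum with a maximum of 26. The sketch below is illustrative only: the rating labels come from the instrument, but the function and the simplification of treating each numbered item as a single rating (items 3, 5, 7, and 9 also carry an on-time sub-criterion) are our own.

```python
# Illustrative scoring for the 13-item instrument: each item is rated
# 0 (beginning), 1 (missing some), or 2 (accomplished), so the total
# score ranges from 0 to 26.
RATING = {"accomplished": 2, "missing_some": 1, "beginning": 0}

def total_score(ratings: list[str]) -> int:
    """Sum the per-item scores for one student's 13 item ratings."""
    if len(ratings) != 13:
        raise ValueError("the instrument has exactly 13 items")
    return sum(RATING[r] for r in ratings)
```

For example, a student rated "accomplished" on every item would score the maximum of 26, while a student rated "missing some" throughout would score 13.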
Authors

Dr. Lee is Assistant Professor, Chodang University, Department of Nursing, Muan County; Dr. Shin is Professor, and Dr. Rim is Research Fellow, Kyung Hee University, College of Nursing; and Dr. Shim is Assistant Professor, Sangmyung University, Department of Nursing, Seoul, Korea.

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea, funded by the Ministry of Science, ICT, and Future Planning (KRF #2016R1A2B4010413). The authors thank Jon S. Mann, Instructor of the UIC Academic Center for Excellence, for editorial assistance.

The authors have disclosed no potential conflicts of interest, financial or otherwise.

Address correspondence to Hyunsook Shin, PhD, RN, CPNP-PC, Professor, 26 Kyungheedae-ro, Dongdaemun-gu, Seoul, Republic of Korea 02447; email: hsshin@khu.ac.kr.

Received: April 01, 2020
Accepted: July 15, 2020

10.3928/01484834-20201020-04
