Journal of Nursing Education

Major Article 

Using Debriefing for Meaningful Learning to Foster Development of Clinical Reasoning in Simulation

Kristina Thomas Dreifuerst, PhD, RN, ACNS-BC, CNE

Abstract

Debriefing is critical to learning from simulation experiences, yet the literature reports little research describing best practices within nursing. Debriefing for Meaningful Learning (DML) is a systematic process for debriefing in which teachers and students explicate different aspects of reflection and generate new meanings from simulation experiences. The purpose of this exploratory, quasi-experimental, pretest–posttest study was to test the relationship of DML on the development of clinical reasoning skills in prelicensure nursing students when compared with customary debriefing strategies and on students’ perception of quality of the debriefing experience. Analysis of data demonstrated a greater change in clinical reasoning skills and identification of higher-quality debriefing and a positive correlation between clinical reasoning and perception of quality. Findings demonstrate that DML is an effective debriefing method. It contributes to the body of knowledge supporting the use of debriefing in simulation learning and supports the development of best teaching practices.


Dr. Dreifuerst is Assistant Professor, Indiana University, School of Nursing, Indianapolis, Indiana.

This research was supported by the Sigma Theta Tau International Joan K. Stout Research Grant, an Indiana University Research Incentive Fund Fellowship Block Grant, and the International Nursing Association for Clinical Simulation Learning Debra Spunt Mini Grant. Research questions and table data are directly derived from Dr. Dreifuerst’s doctoral dissertation. The author thanks Dr. Pamela Ironside for her assistance in preparing this manuscript, Dr. Pamela Jeffries and Dr. Daniel Pesut for overseeing the research, and Dr. Omar Espinoza for statistical analysis.

The author has disclosed no potential conflicts of interest, financial or otherwise.

Address correspondence to Kristina Thomas Dreifuerst, PhD, RN, ACNS-BC, CNE, Assistant Professor, Indiana University, School of Nursing, 1111 Middle Drive, NU W435, Indianapolis, IN 46202-5107; e-mail: kdreifue@iupui.edu.

Received: September 09, 2011
Accepted: February 22, 2012
Posted Online: April 09, 2012

Learning to think like a nurse is a critical component of prelicensure education (Tanner, 2006). Clinical reasoning skills represent a significant component of this thinking. As a result, educators actively seek teaching and learning strategies to engage students in meaningful learning, which goes beyond rote repetition and memorization, to promote conceptual understanding that supports development of clinical reasoning and guides the provision of patient care. Debriefing for Meaningful Learning (DML) is a method that uses guided reflection to foster greater development of clinical reasoning and thinking like a nurse in students (Dreifuerst, 2009; Tanner, 2006). This particular method of reflection supports students’ ability to translate their thinking, in the context of clinical experience, into actionable knowledge and decision making, thereby enhancing learning and fostering new reasoning and understanding that can be used in subsequent clinical encounters (Dreifuerst, 2009).

The importance of debriefing in all types of simulation has been well documented (Cantrell, 2008; Fanning & Gaba, 2007), yet few empirical studies demonstrate the effects of using particular debriefing strategies. Debriefing for Meaningful Learning is one method that can be used to teach clinical reasoning during debriefing in all types of simulation, including high-fidelity simulation (HFS). Debriefing for Meaningful Learning focuses on developing the clinical reasoning skills necessary for practice in today’s complex health care environment using reflection-in-action, reflection-on-action, and reflection-beyond-action (Dreifuerst, 2010). The purpose of this study was to describe and test the relationship between DML and the development of clinical reasoning skills in prelicensure nursing students when compared with customary debriefing strategies used with HFS experiences.

Literature Review

Debriefing

Debriefing is the time that follows the simulated clinical experience, when the student and faculty revisit the encounter reflectively and learn from the events that occurred (Arafeh, Hansen, & Nichols, 2010; Decker, 2007; Dreifuerst, 2009; Fanning & Gaba, 2007). Simulated clinical experiences can range from low fidelity, such as case studies, through medium fidelity with task trainers, to high fidelity involving the use of computer-enhanced manikins that realistically represent the physiologic response of a patient in credible clinical environments, as well as virtual simulations using gaming or Internet-based platforms (Jeffries, 2007). Although the format and process of debriefing can vary considerably, it is common for teachers and students to review what went right, what went wrong, and what should be done differently during the next simulation experience (Dreifuerst, 2009).

Debriefing is a constructivist teaching strategy and is an essential component of all levels of simulation that solidifies the learning (Cantrell, 2008). Debriefing provides an opportunity for students to actively and reflectively build on prior learning and test assumptions about nursing care and patient responses with other participants in the experience (Dreifuerst, 2009). Typically, students and faculty debrief simulation experiences using a model first developed by the military for pilots (Dismukes, Gaba, & Howard, 2006; Fanning & Gaba, 2007) and widely disseminated in the multisite simulation study by Jeffries (2007). This traditional model of simulation debriefing often includes a discussion between the teacher–coach–debriefing facilitator and the student(s) based on the intended learning objectives. In this typical approach, there is a focus on critique of performance prompted by asking the participants to describe what was done correctly, what was not done correctly, and what they would do differently the next time (Decker, 2007; Flannagan, 2008). By answering these questions, students use the inductive and deductive thinking skills that are foundational to critical thinking and, occasionally, some analysis and basic reflection. In this manner, debriefing becomes formative feedback to students based on evaluation of their performance. It is this feedback that is intended to change behavior or clinical practice (Decker, 2007; Rudolph, Simon, Raemer, & Eppich, 2008).

Rudolph et al. (2008) also promoted the advocacy–inquiry method of nonjudgmental debriefing. This is different from the typical approach that focuses on the critique of performance (Rudolph, Simon, Rivard, Dufresne, & Raemer, 2007). Instead, nonjudgmental debriefing uses the advocacy–inquiry technique, where the debriefer begins by stating an observation–hypothesis–assumption and then asks the student or trainee to validate or explain it. This technique, developed for use with postgraduate medical school trainees, uses the inquiry to test the debriefing educator’s assumptions about what occurred in the simulation. This form of debriefing prompts students to articulate their frames, or mental representations, and make sense of their assumptions and understandings (Rudolph et al., 2007).

In their literature review on debriefing in nursing education, Neill and Wotton (2011) identified nine published articles on debriefing in simulation related to nursing education. Their synthesis revealed six themes in this scarce literature: the debriefing structure, the faculty demeanor, the perception of the safety of the environment, the style and use of probes or cues by the debriefer, the best time to debrief, and the time to allocate for debriefing (p. e3). They concluded that debriefing practices vary and there is little consensus in the literature. Questions remained, particularly about what to debrief, when to debrief, how to debrief, and who should do the debriefing to foster meaningful learning (Dreifuerst, 2009).

Increasing support has been seen for the relationship between debriefing and learning in simulation pedagogy. Many reports have gone so far as to indicate that debriefing is essential for learning with simulated clinical experiences (Decker, 2007; Dreifuerst, 2009; Fanning & Gaba, 2007; Shinnick, Woo, Horwich, & Steadman, 2011). However, measurement of the effect of this learning remains challenging. Skill acquisition and demonstration, knowledge gain, contextual patient care, advances in critical thinking, and development of clinical reasoning and judgment are variables that have been measured to discern the effect of simulation and debriefing (Dreifuerst, 2010; Kuiper, Heinrich, Matthias, Graham, & Bell-Kotwall, 2008; Lasater, 2007; Shinnick et al., 2011). Moreover, although debriefing is considered the cornerstone of simulation learning, techniques vary greatly (Dreifuerst, 2009; Neill & Wotton, 2011; Rudolph et al., 2008; Shinnick et al., 2011), and there is little documentation of the best debriefing strategies for the discipline of nursing (Draper, 2009; Dreifuerst, 2009). With inconsistent variables and instruments, it is not surprising that the results are not easily generalizable, yet each study showed that debriefing was important for learning.

Debriefing for Meaningful Learning

Debriefing for Meaningful Learning is a specific and consistent method of debriefing. It begins with a systematic process to release emotions from the simulation experience and moves into a critical analysis of the events. The DML process concludes with exercises to explicate different aspects of reflection and generate new meanings for the learners (Dreifuerst, 2010; Schön, 1983). To teach conceptually, educators use the DML method to take students beyond critical thinking toward the higher thinking skills of clinical reasoning through analysis and evaluation, coupled with the entwined experience of reflection and anticipation. In this way, each experience informs the next clinical situation students encounter (Dreifuerst, 2010; Facione & Facione, 2006; Jasper, 2003; Lasater, 2007; Tanner, 2006).

The DML method of debriefing uses a consistent process involving six components: (a) engage (the participants), (b) explore (options reflecting-in-action), (c) explain (decisions, actions, and alternatives using deduction, induction, and analysis), (d) elaborate (thinking-like-a-nurse and expanding analysis and inferential thinking), (e) evaluate (the experience reflecting-on-action), and (f) extend (inferential and analytic thinking, reflecting-beyond-action). By using the DML method, debriefing can become the foundation and catalyst for meaningful learning and actionable knowledge demonstrated through the development of clinical reasoning. The purpose of this research was to identify and measure the effect of this debriefing method on students’ clinical reasoning skill development and their perception of the quality of the debriefing experience.

Design

This exploratory, quasi-experimental, pretest–posttest study investigated the relationship between the use of the DML method during HFS debriefing and the development of clinical reasoning skills in prelicensure nursing students and between the use of DML and students’ perception of the quality of debriefing. The study addressed three questions:

  • Compared with usual and customary debriefing methods, does the use of the DML debriefing strategy positively affect the development of clinical reasoning skills in undergraduate nursing students?
  • Do nursing students perceive a difference in the quality of debriefing when the DML strategy is used, compared with usual and customary debriefing methods?
  • Is there a correlation between the quality of debriefing as evaluated by nursing students and a change in clinical reasoning skills?

These research questions were tested with a single intervention variable—the DML method—and three instruments: the Health Sciences Reasoning Test© (HSRT) (Facione & Facione, 2006), the Debriefing Assessment for Simulation in Healthcare©–Student Version (DASH–SV) (Simon, Rudolph, & Raemer, 2009), and the Debriefing for Meaningful Learning Supplemental Questions (DMLSQ), in conjunction with HFS. Statistical tests included the Mann-Whitney-Wilcoxon and the analysis of covariance for research question one, the Mann-Whitney-Wilcoxon W and Kruskal-Wallis for research question two, and simple linear regression for research question three.
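
As an illustration of how this analysis plan might be scripted, the sketch below maps each research question to its test using Python (pandas, SciPy, and statsmodels). This is not the author's code: the synthetic data and the column names (group, hsrt_pre, hsrt_post, dash_total) are assumptions introduced for the example.

```python
# Hypothetical sketch of the analysis plan; synthetic data stand in for the
# study variables, and all column names are assumptions.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 240
df = pd.DataFrame({
    "group": np.repeat(["DML", "control"], n // 2),
    "hsrt_pre": rng.normal(24, 5.5, n),       # pretest clinical reasoning score
    "dash_total": rng.normal(5, 0.8, n),      # perceived debriefing quality
})
df["hsrt_post"] = df["hsrt_pre"] + rng.normal(0.5, 2, n)
df["hsrt_change"] = df["hsrt_post"] - df["hsrt_pre"]

dml = df[df["group"] == "DML"]
ctl = df[df["group"] == "control"]

# RQ1: Mann-Whitney-Wilcoxon on pre-to-post change, confirmed by ANCOVA
u, p_u = stats.mannwhitneyu(dml["hsrt_change"], ctl["hsrt_change"])
ancova = smf.ols("hsrt_post ~ hsrt_pre + C(group)", data=df).fit()

# RQ2: nonparametric comparisons of perceived debriefing quality (DASH-SV)
w, p_w = stats.mannwhitneyu(dml["dash_total"], ctl["dash_total"])
h, p_kw = stats.kruskal(dml["dash_total"], ctl["dash_total"])

# RQ3: simple linear regression of perceived quality on change in reasoning
rq3 = smf.ols("dash_total ~ hsrt_change", data=df).fit()
print(p_u, p_w, p_kw, rq3.rsquared)
```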

Sample

Two hundred thirty-eight nursing students, enrolled in the seventh semester of an eight-semester, traditional baccalaureate degree nursing program at a midwestern U.S. university school of nursing, constituted the sample for this study. These students were enrolled in an existing pair of clinical and theory courses covering complex adult health issues in the acute care setting, and HFS was an existing component of the clinical course. Students were informed about the research study and gave consent using a process approved by the institutional review board. Each student was assigned a participant number to protect their identity. Clinical groups were randomly assigned to the experimental or control groups.

A priori, the desired sample size was determined according to Lipsey (1990, p. 94) and was confirmed using G*Power analysis (Faul, Erdfelder, Buchner, & Lang, 2009). Because little prior data on this concept have been reported, the recommendations of Lipsey (1990) were followed: the alpha or significance level was set at p = 0.05, and the beta or type 2 error was set at 0.20, for a power of 80%. Based on this, 74 total participants were estimated to be necessary, with 37 in each group, for a medium effect size of 0.50 and 80% power (Table 1). This was considered adequate for the exploratory nature of this study. Recruitment in the first semester fell short of the desired numbers. To attain a sufficient pool of participants, students were recruited a second time in the subsequent semester with the intent of using only the second set of data. However, it was determined post hoc, using the one-sample Kolmogorov-Smirnov test, that neither the second set of data alone nor the combined results from the first and second sets were normally distributed for any of the instruments used in the study. Based on these findings, the investigator decided to enroll participants a third time in the next consecutive semester, when more than 100 students were anticipated in the same pair of courses. This decision to recruit more participants was based on the assumption that as sample size increases, the sampling distribution becomes more normally shaped and sampling error is reduced (Lipsey, 1990, p. 31). The intention was to recruit a third time and then statistically determine whether the third sample alone, with a larger number of participants, would demonstrate normality.
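
The sketch below illustrates, under assumptions, how an a priori power estimate and the one-sample Kolmogorov-Smirnov normality check could be run in Python. The study's figure of 37 per group came from Lipsey's (1990) tables and G*Power; the sample size returned by the generic two-sample calculation here depends on the test family and tails chosen, and the scores are simulated stand-ins rather than study data.

```python
# Illustrative only: power estimate for a medium effect and a one-sample
# Kolmogorov-Smirnov test against a normal distribution; not the study's scripts.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Required n per group for d = 0.5, alpha = .05, power = .80 (two-sample t test);
# the exact value differs across test families, tails, and reference tables.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)

# One-sample KS test using the sample's own mean and SD (hypothetical scores)
scores = np.random.default_rng(0).normal(24, 5.5, 120)
ks_stat, ks_p = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
print(round(n_per_group), ks_stat, ks_p)  # a small ks_p would flag non-normality
```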


Table 1: Demographics of Study Participants (N = 240)

In the third semester of participant recruitment, 131 students enrolled in the courses were recruited to participate in the study. One hundred twenty-three students consented; however, once again, using the one-sample Kolmogorov-Smirnov test, the third data set alone did not demonstrate normality. Statistically, there was consistency in all three sample sets. All students from the three different samples were taking the same courses, at the same school, with the same instructors, syllabus, and course content, and used the same HFS simulation experiences. Time, in relation to different semesters, was the only identifiable difference. Based on this, the investigator decided to determine whether the three samples could be combined into a single, larger sample of 240 participants to address the normality issue.

Statistical analyses to determine the homogeneity of all three sets of data, using the Welch and Brown-Forsythe robust tests of equality of means, were significant and supported combining them into one set of data. Post hoc analysis of desired sample size and effect, based on the actual combined sample size of 240 participants, showed that when the alpha or significance level was changed to p < 0.10, with a medium effect size of 0.50, the beta or type 2 error became 0.01, with an increase in power to 99%. These analyses confirmed the appropriateness of the decision to use the combined data as a single set. During the study, two participants in the third sample set were lost to follow-up, as they completed only half of the posttest measures before withdrawing from the study. The final participant sample of 238 students was representative of the demographics of the undergraduate population attending this midwestern university baccalaureate program in nursing (Table 2).
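
As an illustration of one of the robust tests of equality of means named above, the following sketch implements Welch's one-way ANOVA from its textbook formula and applies it to simulated semester samples. The group sizes and score distributions are assumptions, not the study data, so the p value shown says nothing about the actual samples.

```python
# From-scratch Welch's one-way ANOVA (robust test of equality of means)
# applied to three hypothetical semester samples.
import numpy as np
from scipy import stats

def welch_anova(*groups):
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                               # precision weights
    grand = np.sum(w * m) / np.sum(w)       # weighted grand mean
    a = np.sum(w * (m - grand) ** 2) / (k - 1)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + 2 * (k - 2) * lam / (k ** 2 - 1)
    f = a / b
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * lam)
    return f, stats.f.sf(f, df1, df2)       # F statistic and p value

rng = np.random.default_rng(1)
# Assumed split of the 240 participants across the three recruitment semesters
sem1, sem2, sem3 = (rng.normal(24, 5.5, size) for size in (60, 59, 121))
print(welch_anova(sem1, sem2, sem3))  # tests whether the semester means differ
```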


Table 2: Age Distribution of Study Participants (N = 240)

Method

Three weeks prior to the HFS experience, participants were instructed to complete the 33-item HSRT (Facione & Facione, 2006) and six demographic questions online as a pretest. The 4-hour–long simulations involved adult health situations using high-fidelity patient simulators in a high-fidelity simulated clinical environment. The scenarios represented clinical situations based on didactic content previously covered in the theory course. Each simulation experience was divided into two parts, with 30 minutes allotted for the simulation and 30 minutes allotted for debriefing, per the existing structure of this aspect of the course.

After students arrived in groups for HFS, they were randomly assigned roles to play in each scenario. For example, one student played the role of the primary nurse, another student had the role of a secondary nurse (one who was delegated to), a third student was assigned as a family member with a scripted role, and two students were assigned to be recorders. Any remaining students in the group participated as observers or other health care professionals. Following the simulation experience, the students and the faculty went to a conference room to debrief. The experimental group was debriefed using the DML method by the researcher who created it. Study participants in the control group received usual and customary debriefing by their clinical instructors using the debriefing strategy based on the work by Jeffries (2007).

Immediately following the debriefing, all participants (control and experimental groups) were asked to complete two instruments: the DASH-SV (Simon et al., 2009) and the DMLSQ, which was designed specifically for this study. Three weeks after the simulation experience, the students took a second version of the HSRT online as a posttest. The 3-week interval was chosen arbitrarily to fit into the students’ schedule. In addition, at that time, participants were given a second opportunity to comment on the DMLSQ items.

Instruments

Health Sciences Reasoning Test

This study used a single intervention variable, the DML method, and three instruments: the HSRT (Facione & Facione, 2006), the DASH–SV (Simon et al., 2009), and the DMLSQ. Research questions one and three were tested using the HSRT data. The HSRT is a copyrighted tool that measures clinical reasoning and clinical decision making in a health–clinical context using a multiple choice test format. It identifies clinically reasoned responses to clinical situations in five areas that contribute to higher thinking: analysis, evaluation, inference, and inductive and deductive reasoning (Facione & Facione, 2006). However, it is not specific to the domain of nursing.

Internal consistency reliability of the overall HSRT tool, using the Kuder-Richardson-20 calculation for dichotomous multidimensional scales, is estimated at 0.81 (N = 444), indicating a high level of reliability. Facione and Facione (2006) reported that the instrument has high levels of internal consistency for three subscales: inductive reasoning (0.76), deductive reasoning (0.71), and evaluation (0.77). Analysis (0.54) and inference (0.52) had lower scores, indicating less internal consistency (Facione & Facione, 2006). Further, test–retest analysis of the HSRT, using intraclass correlation, demonstrated substantial agreement (0.61 to 0.80) in all subscales and for the overall instrument (0.79), supporting strong reliability (Facione & Facione, 2006). The developers also established content and construct validity of test items through correlation to the Delphi Report (APA Committee on Pre-College Philosophy, 1990). Criterion validity has not been published for this instrument (Facione & Facione, 2006).
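
For reference, the KR-20 coefficient reported for the HSRT can be computed as shown below. This is a generic sketch on simulated dichotomous item responses, not a reanalysis of the instrument developers' data; the respondent count, item difficulties, and abilities are all invented.

```python
# Kuder-Richardson Formula 20 (KR-20) for a respondents-by-items 0/1 matrix.
import numpy as np

def kr20(items: np.ndarray) -> float:
    k = items.shape[1]
    p = items.mean(axis=0)                       # proportion correct per item
    q = 1 - p
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Hypothetical responses: 100 test takers, 33 items, generated from a simple
# latent-ability model so the coefficient is meaningfully positive.
rng = np.random.default_rng(2)
ability = rng.normal(0, 1, (100, 1))
difficulty = rng.normal(0, 1, (1, 33))
prob_correct = 1 / (1 + np.exp(-(ability - difficulty)))
responses = (rng.random((100, 33)) < prob_correct).astype(int)
print(round(kr20(responses), 2))
```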

Debriefing Assessment for Simulation in Healthcare–Student Version

The second instrument used in this study to address research questions two and three was the DASH–SV, an instrument that uses a behaviorally anchored rating scale to identify the extent to which students perceived that the debriefer demonstrated the six elements of effective debriefing following simulation experiences (Simon et al., 2009). This instrument is a variation of the DASH, a tool designed to be used by peer faculty to evaluate the quality of debriefing (Simon et al., 2009). Simon et al. (2009) established criterion and content validity for the DASH, but reliability data have not been published. The DASH–SV uses the same six criterion and effectiveness scales as the DASH but reports data from the student perspective. This is important when new interventions are introduced into the teaching–learning environment, as it is not unusual for there to be a discrepancy between the faculty’s and students’ evaluation of a teaching strategy. The DASH–SV instrument was used by the study participants to assess the ability of the debriefer to achieve the following elements (Simon et al., 2009, p. 3):

  • Establishes an engaging learning environment.
  • Maintains an engaging learning environment.
  • Structures debriefing in an organized way.
  • Provokes engaging discussions.
  • Identifies and explores performance gaps.
  • Helps simulation participants achieve or sustain good practice.

For content and construct validity, items for the DASH–SV have been reviewed by the developers of the DASH (D. Raemer, personal communication, June 12, 2009). Initial reliability of the DASH-SV was established during this study, and Cronbach’s alpha coefficient was determined to be 0.82 (N = 6, M = 29.537, variance = 24.259, SD = 4.925), indicating very good reliability.
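
A minimal sketch of the Cronbach's alpha calculation is shown below, using a simulated respondents-by-items matrix in place of the actual DASH-SV ratings; the dimensions (238 respondents, six elements) follow the study, but the values and the resulting coefficient are illustrative only.

```python
# Cronbach's alpha for a matrix with respondents in rows and the six DASH-SV
# elements in columns; the ratings here are simulated placeholders.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = ratings.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
# Shared per-respondent effect plus item noise, so the items are correlated
simulated = rng.normal(5, 1, size=(238, 6)) + rng.normal(0, 0.8, size=(238, 1))
print(round(cronbach_alpha(simulated), 2))
```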

Debriefing for Meaningful Learning Supplemental Questions

The third set of data came from items identified as the supplemental questions, or DMLSQ, which explored the participant’s perceptions of the use of DML. Because this was a new debriefing method and different from others that students had experienced, it was important to obtain user feedback and to compare it with responses from participants who received usual and customary debriefing. New teaching strategies and pedagogies typically are evaluated by faculty and students to determine the influence on learning and how successfully the objectives and outcomes of the design were met (Nilson, 2003).

Four DMLSQ questions were asked specifically regarding elements of the DML: (a) the usefulness of the student worksheet that is part of the DML method, (b) the participants’ perception of their ability to know what to do when they encounter a patient with the same or similar clinical condition as demonstrated in the simulation, (c) their perception of the amount of time allotted for debriefing, and (d) their awareness of reflective thinking being evident during the simulation and debriefing experience.

Results

The first research question was tested using data from the HSRT. Two hundred forty nursing students took the pretest and 238 completed the posttest. The pretest data for the total sample (N = 240, M = 23.9, SD = 5.6) provides the baseline for all participants and comprises both the experimental group (n = 122, M = 23.3, SD = 5.7) and the control group (n = 118, M = 24.4, SD = 5.4). The posttest data for the total sample (N = 238, M = 24.1, SD = 5.3) represents the scores after the simulation and debriefing experience for all participants and comprises both the experimental group (n = 122, M = 24.3, SD = 5.3) and the control group (n = 116, M = 23.9, SD = 5.3).

The change in mean scores from pretest to posttest was analyzed using the relative difference between mean scores and the Mann-Whitney-Wilcoxon test (U = 3973.5, W = 10759.5, Z = −6.059, p < 0.001), which was significant. The results from this test were confirmed with an analysis of covariance that controlled for performance on the pretest to demonstrate differences on the posttest between groups. In this model, the pretest scores were the covariate, the debriefing method was the independent variable, and the posttest mean scores were the dependent variable. The null hypothesis, which stated that (when pretest scores were accounted for) there would be no difference in posttest scores for the experimental and control groups, was rejected.

An analysis of covariance showed that the between-participants’ test effect of DML on the total HSRT score was significant, F(1, 237) = 28.55, p ⩽ 0.05, and the covariate was significantly related to the debriefing method, F(1, 237) = 623.91, p ⩽ 0.05, with a large effect size of 0.84. This means that given a pretest score, a student debriefed using the DML method will have a better overall score on the posttest of clinical reasoning than a student debriefed with usual and customary debriefing strategies. Scatter plots and regression lines confirmed this relationship.
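
The ANCOVA described above could be specified as in the sketch below. The data are synthetic placeholders built only to make the example runnable, so the F values quoted in the text come from the study data, not from this illustration.

```python
# Illustrative ANCOVA: posttest HSRT score regressed on the pretest covariate
# and the debriefing group, with Type II F tests for each term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
pre = rng.normal(24, 5.5, 238)                        # simulated pretest scores
group = np.repeat(["DML", "control"], 119)
post = pre + np.where(group == "DML", 1.0, 0.2) + rng.normal(0, 2, 238)
df = pd.DataFrame({"pre": pre, "post": post, "group": group})

model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(anova_lm(model, typ=2))   # F tests for the pretest covariate and for group
```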

The second research question was addressed using the scores from the DASH–SV and the DMLSQ instruments. The Mann-Whitney-Wilcoxon W and Kruskal-Wallis statistical tests were used to evaluate these data. Two hundred thirty-eight study participants completed the DASH–SV. The data for the total sample (N = 238, M = 4.92, SD = 0.82) represent rating scores for the debriefing experience by all participants and comprise both the experimental group (n = 120, M = 5.58, SD = 2.90) and the control group (n = 118, M = 4.23, SD = 0.46).

Statistical analysis for this research question used nonparametric tests because this data set did not demonstrate normality. A Mann-Whitney-Wilcoxon test was used to evaluate the null hypothesis that there would be no difference in mean scores on the DASH–SV when comparing the experimental group, who received the DML method, and the control group, who received the usual and customary debriefing. The Z values for each of the mean scores from the six elements measured by the DASH–SV and the four questions from the DMLSQ were significant at p < 0.05. The mean aggregate DASH–SV scores were also significant (Z = −11.99, p ⩽ 0.001). This demonstrated that the experimental and control groups perceived a difference in the quality of debriefing when the DML method was used, compared with usual and customary debriefing. Thus, the null hypothesis was also rejected. It is of interest that the DML method received statistically higher scores on all of the DASH–SV elements except element one, which could not be well differentiated because prebriefing was done with both groups together by the clinical faculty rather than by the debriefer.

Finally, the third research question examined the relationships among the data from the HSRT, the DASH–SV, and the DMLSQ. The purpose of this step in the analysis was to test the null hypothesis that changes in students’ perception of the quality of debriefing were not associated with changes in their clinical reasoning skills. Based on the results from the first question, the change in total HSRT mean scores was used as the predictor variable for this third question, and each of the elements from the DASH–SV, including the aggregate score, and each DMLSQ score were outcome variables in a simple linear regression analysis evaluating the relationships. To analyze the data for this question, 11 simple regression models were developed, one for each item on the DMLSQ and each element on the DASH–SV. The results, summarized in Table 3, demonstrate that all were statistically significant except the DMLSQ item called Worksheet and DASH–SV element one. The regression lines supported these interpretations and indicated that an increase in student perception of the quality of debriefing can be explained by a greater positive change in HSRT scores from pretest to posttest. This question addressed the relationship between perceptions of teaching and the associated learning. The null hypothesis was rejected when the data demonstrated that greater changes in reasoning skills were associated with higher perceptions of the quality of debriefing.
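
The 11 simple regression models can be expressed compactly as a loop, as in the hedged sketch below; the outcome column names are invented placeholders for the six DASH-SV elements, the aggregate score, and the four DMLSQ items, and the data are simulated rather than taken from the study.

```python
# Sketch of the 11 simple linear regressions: change in HSRT score predicting
# each perceived-quality outcome. All names and values are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 238
df = pd.DataFrame({"hsrt_change": rng.normal(0.5, 2, n)})
outcomes = ([f"dash_element_{i}" for i in range(1, 7)] + ["dash_total"]
            + ["dmlsq_worksheet", "dmlsq_knowledge", "dmlsq_time", "dmlsq_reflection"])
for name in outcomes:  # simulate weakly related outcome ratings
    df[name] = 5 + 0.1 * df["hsrt_change"] + rng.normal(0, 1, n)

for outcome in outcomes:  # one simple regression model per outcome variable
    fit = smf.ols(f"{outcome} ~ hsrt_change", data=df).fit()
    print(outcome, round(fit.rsquared, 3), round(fit.pvalues["hsrt_change"], 3))
```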


Table 3: Correlation Between the Quality of Debriefing and a Change in Clinical Reasoning Scores

Discussion

The goal of the nursing curriculum and its learning activities is not only to impart knowledge, skills, and attitudes but also to integrate these contextually in the clinical setting so students can use them in the provision of patient care. The increasing complexity of patient care requires that nurses develop skills in clinical reasoning. Debriefing clinical experiences is an opportunity for teaching and learning that cultivates the thinking necessary for clinical reasoning. The goal of this study was to test a reflective debriefing method for use with prelicensure students that would foster meaningful learning, represented by a change in clinical reasoning and by a high perceived quality of debriefing.

The findings from the first question revealed a significant difference in the change in scores from pretest to posttest between groups, demonstrating that use of the DML method affects development of clinical reasoning. Debriefing for Meaningful Learning did not teach students the specific content that the HSRT tested or how to take the test but rather how to think and reason within the context of the clinical environment. This was an important component of the study because teaching–learning strategies need to be evaluated for their effectiveness.

Most importantly, there was a statistically significant change in scores demonstrated by the experimental group, despite the fact that the study design involved only one episode of intervention. The literature on learning offers two possible explanations. The DML method may have been either so innovative that it stimulated learning and adoption or so credible that it affirmed how students were already reasoning and supported their confidence in how they reason through clinical situations (Beers & Bowden, 2007). The education literature reports positive effects from the use of constructivism, a model of repetition and of continually building new content on foundational structures (Golding, 2011; Yeun Lie Lim, 2011). For these reasons, the sustainability of the learning from DML cannot be determined or surmised from these results; those questions were beyond the scope of this study but should be considered for future research using DML.

Another possible reason for the difference in scores between the experimental and control groups could be the confounding variable of the debriefer, which in this case was the researcher. Although the control groups used the clinical instructors as debriefers, the researcher was the debriefer for the students in the experimental groups. Paget (2001) noted,

The important role played by the facilitator of reflective practice cannot be underestimated…many factors contribute to the influence of the debriefer including: personality, style, knowledge, familiarity, perception of effectiveness, and development of relationship with participants.

Repeated studies with other debriefers are warranted to follow up on these results.

The second research question also demonstrated important findings. The data not only provided support for the effectiveness of DML but also demonstrated that students perceived consistently high-quality debriefing when it was used. This is important because it is common for students to evaluate teaching and for teachers to judge the effectiveness of teaching and learning from the students’ perspective. A positive learning environment is a goal of educators because it fosters desired outcomes and models respect for teachers and learners (Mayer, 2002). Further, good debriefing enhances learning from simulation because it also demonstrates respect for teaching and learning and fosters the desired outcomes from the simulation experience (Rudolph et al., 2007). Evidence-based faculty resources for debriefing are scarce and in demand as simulation use expands in schools of nursing. Educators are particularly interested in tools and strategies that have positive student response and desired learning outcomes. The analysis of the data from question two reveals that when DML is used, students perceive it to be a positive learning experience.

Positive results for the third research question were also a significant finding. Consistency between students’ perception of a positive learning environment and demonstration of positive learning is the essence of teaching and embodies the significant learning experience described by Fink (2003). This is also a goal of simulation learning (Forneris & Peden-McAlpine, 2006). Success can be attained when students perceive a quality teaching–learning experience and, in fact, demonstrate the intended learning outcomes. With increasing simulation use and its associated costs, successful outcomes are highly desired. Debriefing for Meaningful Learning not only provides a consistent structure for the debriefing process for prelicensure nursing students; the results of this study also demonstrate that its use can be associated with desired student outcomes and a quality teaching–learning experience.

Limitations

Several limitations were identified within this study. First, it was challenging to find a quantitative, objective instrument that measured clinical reasoning in nursing students. The HSRT, although intended for assessment of health care professionals, is not specific to the discipline of nursing. As a result, the items in the instrument may not adequately measure the clinical reasoning used by nurses in clinically contextual, problem-based, experiential situations that call for thinking like a nurse. Clearly, as clinical reasoning continues to be a concept of interest, there will be an increasing need for development of an instrument that is specific to the domain of nursing.

Another limitation in this study was selection bias. Students could not be individually randomized to the control or experimental groups because simulation experiences at the university where this study was conducted are scheduled by clinical group cohorts. Differences in thinking, learning, and decision making were likely present among students assigned to each group and could not be accounted for. As a result, the process and outcomes of this study may not generalize to other schools of nursing or to other nursing students, particularly those without experience with HFS. Students volunteered to participate in this research, and those who declined to participate may have differed from those who accepted. Clearly there is a need to repeat this study using a multisite, repeated measures design to address these limitations.

Implications

As nursing education continues to experience calls for reform, three areas are particularly relevant: (a) a renewed focus on the importance of developing foundational clinical reasoning skills in students that will transfer into practice, (b) the expanded use of different pedagogies, which incorporate advancing technology that continually changes the education landscape, and (c) faculty resources to integrate both of these into a dynamic and relevant curriculum.

The DML method addresses each of these areas. The literature supports problem-based, experiential, and reflective learning strategies to foster clinical reasoning skills in students (Beers, 2006; Day & Williams, 2002; Tiwari, Lai, So, & Yuen, 2006; Yaun, Kunaviktikul, Klunklin, & Williams, 2008; Yeun Lie Lim, 2011). Simulation learning, when comprehensively crafted, can address each of these needs. By shifting away from simulations that are focused primarily on contextual task training, skill development, and performance evaluation and instead adopting consistent debriefing methods that teach students how to think like a nurse, the learning changes. Debriefing for Meaningful Learning provides an opportunity for faculty to actively teach reasoning skills with the same vigor, and backed by the same evidence, as patient care skills. In this manner, debriefing as a component of simulation is reconceptualized and reconnected to the learning process.

The process of testing a new method or teaching innovation can be challenging. Several lessons were learned that are important for future work in this area. Matching instruments to innovations is challenging not only because of the need to ensure that the concept of interest is being measured adequately but also because it is important, when using multiple tools and making statistical comparisons, that the data make sense based on what occurred. Large samples can improve the normality of the sampling distribution but can be challenging to obtain when an intervention draws on a single class in a single school, simply because of enrollment. Establishing homogeneity of variance in a sample that included students from different semesters and class sections was an essential step in this study so that a larger sample could be gathered. This step added unanticipated time to the study but positively affected the statistical analyses and outcomes.

Conclusion

Debriefing for Meaningful Learning was developed to address the need to actively teach students clinical reasoning skills that would transfer into practice. Initial results indicated this was successful. Incorporating a consistent method of faculty-facilitated prompting, guiding, and reflecting into each debriefing session promotes meaningful learning and the development of clinical reasoning in nursing students (Draper, 2009; Dreifuerst, 2010; Pless & Clayton, 1993). Although further research is warranted, the results of this study are promising and advocate further use of the DML method for debriefing not only HFS but other clinical experiences, with continued evaluation of the method to enhance its usefulness across settings in nursing education where simulation learning is utilized.

References

  • APA Committee on Pre-College Philosophy. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction. “The Delphi Report.” New York, NY: American Philosophical Association.
  • Arafeh, J.M., Hansen, S.S. & Nichols, A. (2010). Debriefing in simulated-based learning: Facilitating a reflective discussion. Journal of Perinatal & Neonatal Nursing, 24, 302–309.
  • Beers, G.W. (2006). The effect of teaching method on objective test scores: Problem-based learning versus lecture. Journal of Nursing Education, 44, 305–309.
  • Beers, G.W. & Bowden, S. (2007). The effect of teaching method on long-term knowledge retention. Journal of Nursing Education, 44, 511–514.
  • Cantrell, M.A. (2008). The importance of debriefing in clinical simulations. Clinical Simulation in Nursing, 4(2), e19–e23. doi:10.1016/j.ecns.2008.06.006 [CrossRef]
  • Day, R.A. & Williams, B. (2002). Development of critical thinking through problem-based learning: A pilot study. Journal of Excellence in College Teaching, 11, 203–226.
  • Decker, S. (2007). Integrating guided reflection into simulated learning experiences. In Jeffries, P. (Ed.), Simulation in nursing (pp. 21–33). New York, NY: National League for Nursing.
  • Dismukes, R.K., Gaba, D.M. & Howard, S.K. (2006). So many roads: Facilitated debriefing in healthcare. Simulation in Healthcare: Journal of the Society for Medical Simulation, 1, 23–25.
  • Draper, S.W. (2009). What are learners actually regulating when given feedback? British Journal of Educational Technology, 40, 306–315. doi:10.1111/j.1467-8535.2008.00930.x [CrossRef]
  • Dreifuerst, K.T. (2009). The essentials of debriefing in simulation learning: A concept analysis. Nursing Education Perspectives, 30, 109–114.
  • Dreifuerst, K.T. (2010). Debriefing for meaningful learning: Fostering development of clinical reasoning through simulation (Doctoral dissertation). Retrieved from Indiana University ScholarWorks repository: http://hdl.handle.net/1805/2459
  • Facione, N.C. & Facione, P.A. (2006). The health sciences reasoning test. Millbrae, CA: California Academic Press.
  • Fanning, R.M. & Gaba, D.M. (2007). The role of debriefing in simulation-based learning. Simulation in Healthcare: Journal of the Society for Medical Simulation, 2, 115–125. doi:10.1097/SIH.0b013e3180315539 [CrossRef]
  • Faul, F., Erdfelder, E., Buchner, A. & Lang, A.G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149–1160. doi:10.3758/BRM.41.4.1149 [CrossRef]
  • Fink, L.D. (2003). Creating significant learning experiences. San Francisco, CA: Jossey-Bass.
  • Flannagan, B. (2008). Debriefing: Theory and techniques. In Riley, R.H. (Ed.), Manual of simulation in healthcare (pp. 155–170). New York, NY: Oxford University Press.
  • Forneris, S.G. & Peden-McAlpine, C.J. (2006). Contextual learning: A reflective learning intervention for nursing education. International Journal of Nursing Education Scholarship, 3(1), Article 17. doi:10.2202/1548-923X.1254 [CrossRef]
  • Golding, C. (2011). The many faces of constructivist discussion. Educational Philosophy and Theory, 43, 467–483. doi:10.1111/j.1469-5812.2008.00481.x [CrossRef]
  • Jasper, M. (2003). Beginning reflective practice: Foundations in nursing and health care. Cheltenham, UK: Nelson Thornes.
  • Jeffries, P.R. (2007). Simulation in nursing education: From conceptualization to evaluation. New York, NY: National League for Nursing.
  • Kuiper, R.A., Heinrich, C., Matthias, A., Graham, M.J. & Bell-Kotwall, L. (2008). Debriefing with the OPT Model of Clinical Reasoning during high fidelity patient simulation. International Journal of Nursing Education Scholarship, 5, 1–13. doi:10.2202/1548-923X.1466 [CrossRef]
  • Lasater, K. (2007). High-fidelity simulation and the development of clinical judgment: Students’ experiences. Journal of Nursing Education, 46, 269–276.
  • Lipsey, M.W. (1990). Design sensitivity: Statistical power for experimental research. Newbury Park, CA: Sage.
  • Mayer, R.E. (2002). Rote versus meaningful learning. Theory Into Practice, 41, 226–231. doi:10.1207/s15430421tip4104_4 [CrossRef]
  • Neill, M.A. & Wotton, K. (2011). High-fidelity simulation debriefing in nursing education: A literature review. Clinical Simulation in Nursing, 7, e161–e168. doi:10.1016/j.ecns.2011.02.001 [CrossRef]
  • Nilson, L.B. (2003). Teaching at its best. San Francisco, CA: Jossey-Bass.
  • Paget, T. (2001). Reflective practice and clinical outcomes: Practitioners’ views on how reflective practice has influenced their clinical practice. Journal of Clinical Nursing, 10, 204–214. doi:10.1046/j.1365-2702.2001.00482.x [CrossRef]
  • Pless, B.S. & Clayton, G.M. (1993). Clarifying the concept of critical thinking in nursing. Journal of Nursing Education, 32, 425–428.
  • Rudolph, J.W., Simon, R., Raemer, D.B. & Eppich, W.J. (2008). Debriefing as formative assessment: Closing performance gaps in medical education. Academic Emergency Medicine, 15, 1010–1016. doi:10.1111/j.1553-2712.2008.00248.x [CrossRef]
  • Rudolph, J.W., Simon, R., Rivard, P., Dufresne, R.L. & Raemer, D.B. (2007). Debriefing with good judgment: Combining rigorous feedback with genuine inquiry. Anesthesiology Clinics, 25, 361–376. doi:10.1016/j.anclin.2007.03.007 [CrossRef]
  • Schön, D.A. (1983). The reflective practitioner: How professionals think in action. New York, NY: Basic Books.
  • Shinnick, M.A., Woo, M., Horwich, T.B. & Steadman, R. (2011). Debriefing: The most important component in simulation? Clinical Simulation in Nursing, 7, e105–e111. doi:10.1016/j.ecns.2010.11.005 [CrossRef]
  • Simon, R., Rudolph, J.W. & Raemer, D.B. (2009). Debriefing Assessment for Simulation in Healthcare©–Student Version. Cambridge, MA: Center for Medical Simulation.
  • Tanner, C.A. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. Journal of Nursing Education, 45, 204–211.
  • Tiwari, A., Lai, P., So, M. & Yuen, K. (2006). A comparison of the effects of problem-based learning and lecturing on the development of students’ critical thinking. Medical Education, 40, 547–554. doi:10.1111/j.1365-2929.2006.02481.x [CrossRef]
  • Yaun, H., Kunaviktikul, W., Klunklin, A. & Williams, B. (2008). Improvement of nursing students’ critical thinking skills through problem-based learning in the People’s Republic of China: A quasi-experimental study. Nursing and Health Sciences, 10, 70–76. doi:10.1111/j.1442-2018.2007.00373.x [CrossRef]
  • Yeun Lie Lim, L.A. (2011). A comparison of students’ reflective thinking across different years in a problem-based learning environment. Instructional Science: An International Journal of the Learning Sciences, 39, 171–188. doi:10.1007/s11251-009-9123-8 [CrossRef]

Table 1: Demographics of Study Participants (N = 240)

Measure                   DML         Control
Age (y)
  Mean                    25.1        26
  Median                  23          23
  Mode                    22          23
  Standard deviation      6.22        6.71
  Range                   20 to 49    20 to 47
  Minimum                 18          18
  Maximum                 50          47
  N valid                 122         118
Race
  Not provided            5           12
  White, Caucasian        100         83
  African American        7           9
  Hispanic                7           3
  Asian                   2           7
  Other                   1           4
Gender
  Female                  113         104
  Male                    9           14

Table 2: Age Distribution of Study Participants (N = 240)

Age (y)      DML (%)    Control (%)
⩽20          14         12
21 to 30     70         70
31 to 40     12         11
41 to 50     4          7

Table 3: Correlation Between the Quality of Debriefing and a Change in Clinical Reasoning Scores

Dependent Variable            R² (Effect Size)    Regression p Value
DMLSQ Worksheet               0.006               0.439
DMLSQ Knowledge               0.026               0.017
DMLSQ Time                    0.016               0.048
DMLSQ Reflection              0.031               0.010
DASH–SV Element 1             0.003               0.409
DASH–SV Element 2             0.043               0.002
DASH–SV Element 3             0.049               0.001
DASH–SV Element 4             0.081               0.000
DASH–SV Element 5             0.017               0.059
DASH–SV Element 6             0.039               0.003
DASH–SV Total (Aggregate)     0.063               0.000

10.3928/01484834-20120409-02
