Treatment fidelity refers to the strategies used to monitor and enhance the accuracy and consistency of an intervention to ensure it is implemented as intended and delivered in a comparable manner to all study participants over time (Hildebrand et al., 2012; Santacroce, Maccarelli, & Grey, 2004; Tucker & Blythe, 2008; Vidovich, Lautenschlager, Flicker, Clare, & Almeida, 2013). Implementation components that may affect treatment fidelity include the study design, interventionist training, treatment delivery, receipt of treatment, and uptake by participants (Bellg et al., 2004; Eisenstein, Lobach, Montgomery, Kawamoto, & Anstrom, 2007). Although many treatment fidelity strategies are familiar (e.g., manuals, videotape monitoring of adherence, evaluation of participant enactment), these approaches may not capture the cognitive processes used when multicomponent interventions are tailored based on assessment data (Breitenstein et al., 2010). Such interventions vary considerably in format and intended outcome and are variously labeled person-centered interventions, tailored interventions, algorithms, protocols, evidence-based decision support, and clinical decision support (Bakken et al., 2008; Ersek, Turner, Cain, & Kemp, 2008; Kim et al., 2007; Lauver et al., 2002; Shapiro, 2006). In the current study, the term multicomponent interventions refers to all interventions that include systematic action steps and require context-driven critical thinking from the user.
Branching simulations are case scenarios that require the user to generate a series of step-by-step decisions and actions. These simulations allow users to examine the consequences of their choices and may be particularly useful for multicomponent interventions that require clinical decision making represented in terms of (a) a flow of action steps and (b) a “route” of critical thinking through a sequence that is designed to achieve a desired result (Bateman, Allen, Kidd, Parsons, & Davies, 2012). The purpose of the current article is to describe the use of branching simulations as an approach to assessing fidelity and providing immediate feedback to interventionists regarding their skill at implementing a specific multicomponent intervention. Pertinent explanations from information-processing, decision-making, and learning theory are described. An exemplar from the authors’ treatment fidelity work is presented and includes a description of the number and types of deficits in critical thinking identified through branching simulations.
Treatment Fidelity and Multicomponent Interventions
Treatment fidelity includes adherence and competence (Tucker & Blythe, 2008). Adherence refers to the degree to which interventionists use procedures as explicated by the intervention, whereas competence refers to the level of skill with which interventionists deliver the intervention. Both aim to increase confidence that changes in outcomes are due to delivery of the active ingredients of the intervention under investigation. Treatment fidelity is used to minimize unsystematic and unexpected intervention variability, draw accurate conclusions about intervention efficacy, and ensure that treatments can be replicated in future studies (Lenker et al., 2010; Robb, Burns, Docherty, & Haase, 2011; Tucker & Blythe, 2008).
Treatment fidelity monitoring can be time consuming and expensive, particularly when the intervention has multiple components (Santacroce et al., 2004). When interventionists are required to use assessment data to tailor the intervention, both adherence to multiple steps and competence in tailoring the intervention are required. Opportunities for breaks in fidelity are present for each step or component and may involve errors in critical thinking or taking action. Consequently, a need exists to match the critical thinking and action components of the intervention to the treatment fidelity plan and use theory to inform the selection of the elements that are central to the hypothesized outcomes (Lenker et al., 2010; Tucker & Blythe, 2008).
Fidelity testing should be planned with a keen awareness of the critical thinking required and the theory that explains these processes. Information-processing theory explains how people process information made available from the environment. Information processing is influenced by affect and includes attention, perception, and short-term and long-term memory (Forgas & George, 2001). Although intuitive and heuristic processes are often used to make quick clinical judgments, the tendency for people to base decisions on information that is easily accessed can lead to erroneous conclusions (McGinn et al., 2000).
Analytic decision-making theory explains the processes used to more thoroughly analyze a situation and decide on a course of action. Analytical processes aim to deepen both the information used to make decisions and the step-by-step, conscious, and logically defensible process by which those decisions are made (Lauri & Salanterä, 2002). The typical characteristics of clinical analysis include slow and focused information processing, pattern recognition, cue interpretation, hypothesis generation, hypothesis evaluation, and the use of logical and task-specific activities, such as those involved in assessment, treatment, and acting to mobilize necessary resources (Arslanian-Engoren, 2009; Simpson, Kovach, & Stetzer, 2012). The process requires making accommodations based on patient and situational variants.
The frequent and timely feedback provided using branching simulations is consistent with learning theory (Goodman, Brady, Duffy, Scott, & Pollard, 2008; Rodriguez, Loman, & Horner, 2009; Scheeler, Congdon, & Stansbery, 2010; Scheeler & Lee, 2002). All of the major theories of learning (i.e., behaviorism, cognitivism, sociocultural learning theory, metacognitivism, social constructivism) explain that effective feedback is task or goal directed, specific, and neutral. Behaviorism and cognitivism emphasize that feedback should elaborate on errors that have been made, and social constructivism suggests that justification is needed to explain why an error is deemed incorrect (Thurlings, Vermeulen, Bastiaens, & Stijnen, 2013).
The use of branching simulations offers an opportunity to identify errors in fidelity for correction prior to human subject testing. The “correct” branches align with the critical thinking and action steps the researcher specifies for the intervention. The rationale for correct and incorrect decisions can help the user improve fidelity. Interventionists can make important decisions in a simulation and explore the consequences of those actions, while still being redirected to the requisite steps of the intervention. Assessing interventionists’ ability to implement a multicomponent protocol prior to entering participants into a clinical trial can save time and money, as well as prevent threats to validity caused by nonadherence to the intervention.
Branching simulations may have short, moderate, or long branches (Lebowitz & Klug, 2012). Short branches (Figure 1) give the user immediate feedback on the effect of his or her decision on clinical care or outcomes. If the path chosen is incorrect, the user is quickly directed to the correct decision and the rationale for that decision, and then reconnected to the main branch of critical thinking and action steps. Because short branches reconnect to the main branch quickly, they are the easiest to align with the sequence of steps in the intervention. Moderate branches eventually rejoin the main branch, but they may offer multiple scenarios as the practitioner follows a series of critical thinking and action steps. Long branches are commonly used in simulations designed for entertainment, such as storytelling and computer gaming simulations. Long branches may be less useful for treatment fidelity purposes, as they break away from the primary sequence of the intervention entirely, forming new main branches with their own sets of short and moderate branches (Lebowitz & Klug, 2012). The number and type of branches used to measure fidelity depend heavily on the complexity of the intervention, the type of case, and the key factors that differentiate the branches (e.g., the correct critical thinking or action step). Branches that are repetitive, not clearly coordinated with the sequence of steps in the intervention, or overly complex may confuse users.
Examples of short branches in branching simulation.
Note. prn = as needed; STI = serial trial intervention; CNA = certified nursing assistant.
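The short-branch structure described above can be sketched as a small data structure. The following is a minimal illustration, assuming a text-based simulation; the class names, fields, and sample choices are hypothetical and not taken from the authors' materials.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Choice:
    text: str        # the decision or action presented to the user
    correct: bool    # does this choice follow the intervention protocol?
    rationale: str   # immediate feedback explaining why it is (in)correct

@dataclass
class Node:
    prompt: str                          # the clinical situation at this step
    choices: List[Choice]
    next_node: Optional["Node"] = None   # the main branch continues here

def take_step(node: Node, picked: int) -> Tuple[bool, str, Optional[Node]]:
    """Score one decision. A short branch gives feedback on an incorrect
    choice and then reconnects the user to the main branch, so the next
    node is the same regardless of which choice was picked."""
    choice = node.choices[picked]
    return choice.correct, choice.rationale, node.next_node
```

Because every choice leads back to `next_node`, an incorrect path is only one step long, mirroring the quick reconnection to the main branch that makes short branches easiest to align with the intervention's sequence.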
The authors of the current study used branching simulations as one component of their treatment fidelity plan for testing a 5-step serial trial intervention (STI) to assess and treat patients with advanced dementia who are no longer able to consistently verbally communicate symptoms, preferences, and unmet needs. The STI is briefly outlined in Figure 2. A detailed description of the intervention and the authors’ randomized controlled studies is provided in other publications (Kovach et al., 2006, 2012).
Serial trial intervention.
The authors’ research team used clinical expertise and a review of theory and literature to conclude that two thinking processes (i.e., identifying a change in condition and cue interpretation) and two action steps (i.e., assessment and taking action to consult, treat, or mobilize necessary resources) were critical for assessing fidelity to the steps of the STI. Identifying a change in condition involves recognizing subtleties in physical, functional, and behavioral patterns and drawing on formal knowledge and past experiences (Arslanian-Engoren, 2009; Kovach, Logan, Joosse, & Noonan, 2012). Assessment involves formal actions taken to gather more information about a situation, cue, or person. Cue interpretation involves analysis of information collected during assessment. In addition to assessment, actions taken may include mobilizing resources, consulting with people or information sources, and administering a treatment.
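One way to make these four critical components auditable is to tag each simulation decision with the component it exercises and then tally errors by type. The sketch below is illustrative only; the enum values restate the components named above, and the data shape of a decision log is an assumption.

```python
from collections import Counter
from enum import Enum

class Component(Enum):
    # Two thinking processes
    CHANGE_IN_CONDITION = "identifying a change in condition"
    CUE_INTERPRETATION = "cue interpretation"
    # Two action steps
    ASSESSMENT = "assessment"
    TAKING_ACTION = "taking action"

def tally_errors(decisions):
    """decisions: (Component, was_correct) pairs from one nurse's simulations.
    Returns a count of errors per critical component, so retraining can be
    targeted to the specific thinking process or action step that was missed."""
    return Counter(comp for comp, correct in decisions if not correct)
```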
The two thinking processes and two action steps that are critical components of the STI were incorporated into short branches of the cases. The use of short branches facilitated the provision of immediate feedback to interventionists and quick instruction regarding needed corrections to critical thinking and action steps. The cases include a woman with paresthesias emerging as a side effect of her antidepressant medication, a woman with exit-seeking behavior explained by her attempts to get away from her uncontrolled pain, a man with decreased appetite and agitation caused by postherpetic neuralgia, and a woman whose withdrawn behavior was precipitated by the absence of her daughter and pastor from their usual weekly visits. Five experts from research and clinical practice reviewed each case. The recommendations from experts for modification of the cases were incorporated into the final versions used. A short example of a branching simulation that involves using Step 1 of the STI is available in the online version of the current article (Appendix A).
Sixty-seven nurses (30 RNs and 37 licensed practical nurses [LPNs]) from 32 nursing homes acted as interventionists. To serve as an interventionist, nurses had to have at least 6 months of experience caring for individuals with dementia, work the day shift for 32 hours or more per week, and provide written consent. Nurses received their regular workday pay for training and simulation testing. Nurses acted as interventionists for approximately 4 months at each site. The authors reimbursed each facility for the nurses’ time spent on the study. The homes, in turn, provided $50 to the interventionist for each completed participant.
The treatment fidelity procedures included the use of manuals, training sessions, digitally produced vignettes, twice weekly fidelity checks, and branching simulations. Each nurse was given a training manual and spent 7 hours being trained to use the STI. No interventionists missed training sessions. During this training day, nurses also viewed four vignettes that the authors digitally produced using actors and the university film department. The vignettes were used to train nurses to (a) correctly use the steps of the STI and (b) consistently record assessments and treatments on the author-provided data collection forms.
At the completion of the training, nurses completed four branching simulations to test the accuracy of applying the steps of the STI. Fifteen nurses (22%) scored below the a priori 80% criterion and required full retraining and retesting. Thirty (45%) needed partial retraining in specific areas. Errors were most frequent in assessment (n = 26; 39%), followed by taking action (n = 21; 31%), identification of a change in condition (n = 13; 19%), and cue interpretation (n = 13; 19%). No statistically significant differences in errors occurred between RNs and LPNs, or between nurses employed at proprietary and not-for-profit nursing homes.
Nurses whose accuracy on the branching simulations was below 80% received full retraining on another day. Nurses who met the 80% criterion but scored less than 100% received partial retraining based on the areas missed. Following retraining, the branching simulations were administered again, and all nurses scored 100% on the second testing.
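The retraining rule described above reduces to a simple threshold check. In this sketch the 80% and 100% cut points come from the study, but the function name and return labels are hypothetical, and a score of exactly 80% is assumed to meet the criterion.

```python
def retraining_needed(score_pct: float) -> str:
    """Map a nurse's simulation accuracy to the retraining decision:
    below the a priori 80% criterion -> full retraining and retesting;
    meeting the criterion but below 100% -> partial retraining in areas missed;
    100% -> no further retraining."""
    if score_pct < 80:
        return "full"
    if score_pct < 100:
        return "partial"
    return "none"
```

Re-administering the simulations after retraining, as the authors did, simply repeats this check until every interventionist reaches the "none" branch.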
The current article describes a new strategy for monitoring treatment fidelity in multicomponent interventions. The development of the branching simulations was informed by both analytic decision-making and learning theories. In the authors’ research, branching simulations successfully identified interventionists who needed more training to properly use the STI. In addition, data collected on the accuracy of completing the simulations captured specific information regarding the types of errors being made. The finding that more errors were made in assessment than other areas is consistent with evidence that health parameters of nursing home residents are often poorly assessed (Barry et al., 2002; Kovach, Logan, Simpson, & Reynolds, 2010; Kovach et al., 2012; Tolson et al., 2011). Although total adherence and competence were not achieved after the initial training, the authors retrained interventionists to 100% fidelity to the required critical thinking and action steps in the simulations. Evidence from the authors’ research suggests that using branching simulations and providing immediate feedback to interventionists regarding errors can decrease potential threats to validity caused by inadequate fidelity to the intervention.
Difficulties exist when incorporating branching simulations into a treatment fidelity plan. The time and potential costs involved in developing the simulations and branches can be formidable. Development of simulations can be a long process that requires thoughtful attention to the processes and steps of the intervention, as well as the theory that underlies the intervention. Web-based simulations that use hyperlinks may need to seamlessly interface with multiple operating systems. Decisions need to be made about the choice of cases used in the simulations. For example, should the cases be common occurrences, idiosyncratic to specific populations, or involve the most serious or risky outcomes? What degree and quality of evidence must be available for a case to be included?
Implications for Experimental Research
The drawbacks of investing time and effort into developing branching simulations must be considered relative to the distinct benefits. Researchers can determine the path that interventionists follow through the decision-making process, and immediate feedback can be provided after each decision or action taken. This granular assessment of errors facilitates targeted retraining of interventionists, as well as the understanding of how to improve initial training. The ability to consistently identify where errors are made before the actual start of the intervention can increase the quality of research results, save time and money, and decrease Type II errors. Branching simulations can be used to improve efficacy studies, translational work, and comparative effectiveness studies. In addition, branching simulations, once developed, are easily sustainable and can be disseminated for inservice educational use when efficacious interventions are translated to practice.
Future work should examine the validity of branching simulations to accurately capture the ability to implement the intervention, independent of other factors such as user fatigue, the delivery format, or chance. Traditional fidelity testing should be compared to traditional fidelity testing plus branching simulations to determine the effectiveness of the approach for error identification, adherence, and competence. In the current study, the 100% fidelity at retraining could have been partially due to interventionists learning the simulations. A second, and possibly a third, set of different simulations for retesting could improve the validity of repeated testing.
Clinical interventions that include decision support components have been identified as a viable solution to address differences in the quality and cost of health care (Bryan & Boren, 2008). However, these multicomponent interventions also carry greater potential for wide variation in implementation. Conducting high-quality research to test the efficacy and effectiveness of multicomponent interventions requires careful attention to the requisite user critical thinking and action steps. Branching simulations can serve as a credible measure of interventionists’ competency to make decisions and take actions that adhere to the components of a multicomponent intervention.
- Arslanian-Engoren, C. (2009). Explicating nurses’ cardiac triage decisions. Journal of Cardiovascular Nursing, 24, 50–57. doi:10.1097/01.JCN.0000317474.50424.4f [CrossRef]
- Bakken, S., Currie, L.M., Lee, N.-J., Roberts, W.D., Collins, S.A. & Cimino, J.J. (2008). Integrating evidence into clinical information systems for nursing decision support. International Journal of Medical Informatics, 77, 413–420. doi:10.1016/j.ijmedinf.2007.08.006 [CrossRef]
- Barry, C.R., Brown, K., Esker, D., Denning, M.D., Kruse, R.L. & Binder, E.F. (2002). Nursing assessment of ill nursing home residents. Journal of Gerontological Nursing, 28(5), 4–7. doi:10.3928/0098-9134-20020501-04 [CrossRef]
- Bateman, J., Allen, M.E., Kidd, J., Parsons, N. & Davies, D. (2012). Virtual patients design and its effect on clinical reasoning and student experience: A protocol for a randomised factorial multicentre study. BMC Medical Education, 12. doi:10.1186/1472-6920-12-62 [CrossRef]
- Bellg, A.J., Borrelli, B., Resnick, B., Hecht, J., Minicucci, D.S., Ory, M. & the Treatment Fidelity Workgroup of the NIH Behavior Change Consortium. (2004). Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology, 23, 443–451. doi:10.1037/0278-6133.23.5.443 [CrossRef]
- Breitenstein, S.M., Gross, D., Garvey, C.A., Hill, C., Fogg, L. & Resnick, B. (2010). Implementation fidelity in community-based interventions. Research in Nursing & Health, 33, 164–173. doi:10.1002/nur.20373 [CrossRef]
- Bryan, C. & Boren, S.A. (2008). The use and effectiveness of electronic clinical decision support tools in the ambulatory/primary care setting: A systematic review of the literature. Informatics in Primary Care, 16, 79–91.
- Eisenstein, E.L., Lobach, D.F., Montgomery, P., Kawamoto, K. & Anstrom, K.J. (2007). Evaluating implementation fidelity in health information technology interventions. AMIA Annual Symposium Proceedings, 2007, 211–215.
- Ersek, M., Turner, J.A., Cain, K.C. & Kemp, C.A. (2008). Results of a randomized controlled trial to examine the efficacy of a chronic pain self-management group for older adults. Pain, 138, 29–40. doi:10.1016/j.pain.2007.11.003 [CrossRef]
- Forgas, J.P. & George, J.M. (2001). Affective influences on judgments and behavior in organizations: An information processing perspective. Organizational Behavior and Human Decision Processes, 86, 3–34. doi:10.1006/obhd.2001.2971 [CrossRef]
- Goodman, J.I., Brady, M.P., Duffy, M.L., Scott, J. & Pollard, N.E. (2008). The effects of “bug-in-ear” supervision on special education teachers’ delivery of learn units. Focus on Autism and Other Developmental Disabilities, 23, 207–216. doi:10.1177/1088357608324713 [CrossRef]
- Hildebrand, M.W., Host, H.H., Binder, E.F., Carpenter, B., Freedland, K.E., Morrow-Howell, N. & Lenze, E.J. (2012). Measuring treatment fidelity in a rehabilitation intervention study. American Journal of Physical Medicine & Rehabilitation, 91, 715–724. doi:10.1097/PHM.0b013e31824ad462 [CrossRef]
- Kim, T.Y., Lang, N.M., Berg, K., Weaver, C., Murphy, J. & Ela, S. (2007). Clinician adoption patterns and patient outcome results in use of evidence-based nursing plans of care. AMIA Annual Symposium Proceedings, 2007, 423–427.
- Kovach, C.R., Logan, B.R., Joosse, L.L. & Noonan, P.E. (2012). Failure to identify behavioral symptoms of people with dementia and the need for follow-up physical assessment. Research in Gerontological Nursing, 5, 89–93. doi:10.3928/19404921-20110503-01 [CrossRef]
- Kovach, C.R., Logan, B.R., Noonan, P.E., Schlidt, A.M., Smerz, J., Simpson, M. & Wells, T. (2006). Effects of the Serial Trial Intervention on discomfort and behavior of nursing home residents with dementia. American Journal of Alzheimer’s Disease & Other Dementias, 21, 147–155. doi:10.1177/1533317506288949 [CrossRef]
- Kovach, C.R., Logan, B.R., Simpson, M.R. & Reynolds, S. (2010). Factors associated with time to identify physical problems of nursing home residents with dementia. American Journal of Alzheimer’s Disease & Other Dementias, 25, 317–323. doi:10.1177/1533317510363471 [CrossRef]
- Kovach, C.R., Simpson, M.R., Joosse, L., Logan, B.R., Noonan, P.E., Reynolds, S.A. & Raff, H. (2012). Comparison of the effectiveness of two protocols for treating nursing home residents with advanced dementia. Research in Gerontological Nursing, 5, 251–263. doi:10.3928/19404921-20120906-01 [CrossRef]
- Lauri, S. & Salanterä, S. (2002). Developing an instrument to measure and describe clinical decision making in different nursing fields. Journal of Professional Nursing, 18, 93–100. doi:10.1053/jpnu.2002.32344 [CrossRef]
- Lauver, D.R., Ward, S.E., Heidrich, S.M., Keller, M.L., Bowers, B.J., Brennan, P.F. & Wells, T.J. (2002). Patient-centered interventions. Research in Nursing & Health, 25, 246–255. doi:10.1002/nur.10044 [CrossRef]
- Lebowitz, J. & Klug, C. (2012). Branching path stories. In Interactive storytelling for video games: Proven writing techniques for role playing games, online games, first person shooters, and more (pp. 179–204). New York, NY: Taylor & Francis.
- Lenker, J.A., Fuhrer, M.J., Jutai, J.W., Demers, L., Scherer, M.J. & De-Ruyter, F. (2010). Treatment theory, intervention specification, and treatment fidelity in assistive technology outcomes research. Assistive Technology, 22, 129–138. doi:10.1080/10400430903519910 [CrossRef]
- McGinn, T.G., Guyatt, G.H., Wyer, P.C., Naylor, C.D., Stiell, I.G. & Richardson, W.S. (2000). Users’ guides to the medical literature: XXII: How to use articles about clinical decision rules. Journal of the American Medical Association, 284, 79–84. doi:10.1001/jama.284.1.79 [CrossRef]
- Robb, S.L., Burns, D.S., Docherty, S.L. & Haase, J.E. (2011). Ensuring treatment fidelity in a multi-site behavioral intervention study: Implementing NIH behavior change consortium recommendations in the SMART trial. Psycho-oncology, 20, 1193–1201. doi:10.1002/pon.1845 [CrossRef]
- Rodriguez, B.J., Loman, S.L. & Horner, R.H. (2009). A preliminary analysis of the effects of coaching feedback on teacher implementation fidelity of First Step to Success. Behavior Analysis in Practice, 2(2), 11–21.
- Santacroce, S.J., Maccarelli, L.M. & Grey, M. (2004). Intervention fidelity. Nursing Research, 53, 63–66. doi:10.1097/00006199-200401000-00010 [CrossRef]
- Scheeler, M.C., Congdon, M. & Stansbery, S. (2010). Providing immediate feedback to co-teachers through bug-in-ear technology: An effective method of peer coaching in inclusion classrooms. Teacher Education and Special Education, 33, 83–96. doi:10.1177/0888406409357013 [CrossRef]
- Scheeler, M.C. & Lee, D.L. (2002). Using technology to deliver immediate corrective feedback to preservice teachers. Journal of Behavioral Education, 11, 231–241. doi:10.1023/A:1021158805714 [CrossRef]
- Shapiro, S.E. (2006). Guidelines for developing and testing clinical decision rules. Western Journal of Nursing Research, 28, 244–253. doi:10.1177/0193945905283722 [CrossRef]
- Simpson, M.R., Kovach, C.R. & Stetzer, F. (2012). Predictors of nonpharmacological and pharmacological treatments stopped and started among nursing home residents with dementia. Research in Gerontological Nursing, 5, 130–137. doi:10.3928/19404921-20110831-01 [CrossRef]
- Thurlings, M., Vermeulen, M., Bastiaens, T. & Stijnen, S. (2013). Understanding feedback: A learning theory perspective. Educational Research Review, 9, 1–15. doi:10.1016/j.edurev.2012.11.004 [CrossRef]
- Tolson, D., Rolland, Y., Andrieu, S., Aquino, J.P., Beard, J., Benetos, A. & the International Association of Gerontology and Geriatrics/World Health Organization/Société Française de Gérontologie et de Gériatrie Task Force. (2011). International Association of Gerontology and Geriatrics: A global agenda for clinical research and quality of care in nursing homes. Journal of the American Medical Directors Association, 12, 184–189. doi:10.1016/j.jamda.2010.12.013 [CrossRef]
- Tucker, A.R. & Blythe, B. (2008). Attention to treatment fidelity in social work outcomes: A review of the literature from the 1990s. Social Work Research, 32, 185–190. doi:10.1093/swr/32.3.185 [CrossRef]
- Vidovich, M.R., Lautenschlager, N.T., Flicker, L., Clare, L. & Almeida, O.P. (2013). Treatment fidelity and acceptability of a cognition-focused intervention for older adults with mild cognitive impairment (MCI). International Psychogeriatrics, 25, 815–823. doi:10.1017/S1041610212002402 [CrossRef]