There has been considerable discussion in the professional literature about questionable research practices that raise doubt about the credibility of research findings (Shrout & Rodgers, 2018) and that limit reproducibility of research findings (Shepherd, Peratikos, Rebeiro, Duda, & McGowan, 2017). This discussion has led to what scientists term a replication crisis (Goodman, Fanelli, & Ioannidis, 2016). Although investigators in various disciplines have provided suggestions to address this crisis (Alvarez, Key, & Núñez, 2018; Goodman et al., 2016; Shrout & Rodgers, 2018), similar discussions or reports of replication within the nursing education literature are limited, despite a call for replication studies (Morin, 2016). Consequently, the focus of this article is on replication and reproducibility. The topic is important because the hallmark of good science is being able to replicate or reproduce findings (Morin, 2016). Replication serves to provide “stability in our knowledge of nature” (Schmidt, 2009, p. 92).
Differentiating Between Reproducibility and Replicability
Although both terms are important in moving forward the science of nursing education, Shepherd et al. (2017) differentiated between the two: “Research is reproducible [author emphasis] if, given a study's original data sets and analysis code, an independent scientist can obtain quantitative results identical to those obtained by the original researchers” (p. 387). However, replication occurs when “independent scientists can obtain similar results using the same methods with different data sets” (Shepherd et al., 2017, p. 387). The difference between these two terms rests with the type of data being used to replicate the study.
Conducting replication studies is a way by which study validity is assessed, confidence in study findings is increased (Benson & Borrego, 2015), and generalizability is enhanced (Shepherd et al., 2017); all are benefits of replication. Moreover, the issue of generalizability is important because disciplinary knowledge can be advanced only when empirical generalizations are present (Hubbard, 2016). Benefits resulting from reproducibility differ from those of replication (Shepherd et al., 2017). These investigators offered that “Reproducible research bolsters confidence in the analytic findings, ensures that results are correct, engenders trust in the honesty and competency of researchers, and serves as a protection against fraud” (p. 387). Thus, benefits of both contribute to disciplinary knowledge.
Types of Replication
What constitutes a replication study is debatable (Fabrigar & Wegener, 2016; Schmidt, 2009). To help scientists consider the issue of replication, Schmidt (2009) reviewed the literature and offered the following categorization: direct replication and conceptual replication. Direct replication, sometimes called exact replication (Cai et al., 2018), is defined as “repetition of an experimental procedure” (Schmidt, 2009, p. 91), whereas conceptual replication is defined as “repetition of a test of a hypothesis or a result of earlier research with different methods” (Schmidt, 2009, p. 91). Conceptual replication provides more information about “the conditions under which the results occur” (Cai et al., 2018, p. 4). Definitional confusion arises when the two definitions are mixed (Schmidt, 2009). Schmidt (2009, p. 93) suggested that confusion can be lessened by appreciating the five functions served by replication:
- To control for sampling error (chance results).
- To control for artifacts (lack of internal validity).
- To control for fraud.
- To generalize results to a larger or to a different population.
- To verify the underlying hypothesis of the earlier experiment.
The first four functions reflect direct replication, whereas the fifth function is consistent with conceptual replication. Schmidt (2009) argued that conceptual replication can contribute to understanding whereas facts are the product of direct replication. Conceptual replication “not only confirms facts but also assists in developing models and theories of the world” (Schmidt, 2009, p. 95).
This distinction is helpful when considering replication. An investigator contemplating a replication study should first consider why they wish to replicate (Schmidt, 2009). Anderson and Maxwell (2016) offered a set of nuanced goals, recommended how to analyze the data, and suggested criteria for success to assist investigators considering replication. These goals help to address some of the ongoing issues with replication.
What Can You As an Investigator Do to Foster Replication and Reproducibility of Research in Nursing Education?
Goodman et al. (2016) suggested thinking about three types of reproducibility: methods, results, and inferential. You can address methods reproducibility by “providing enough information about study procedures and data so the procedures could, in theory or in actuality, be exactly repeated” (Goodman et al., 2016, p. 2). Achieving methods reproducibility in research in nursing education may not be completely possible because an investigator cannot reproduce the exact context and environment completely (Brandt et al., 2014). Nonetheless, this constraint does not prevent an investigator from being as explicit as possible when describing how a study was conducted. Providing sufficient information about how variables were measured, data were processed, and findings were analyzed is critical and improves transparency in reporting practices (Knudson, 2017). Although including comprehensive information may not be possible when submitting for publication, being willing to share this information when requested by another investigator reflects a willingness to foster replication and reproducibility.
Results reproducibility is addressed by using a similar design, although there continues to be debate about this approach. Duncan, Engel, Claessens, and Dowsett (2014) suggested three robustness-checking strategies to include when reporting results of an individual study. An investigator can use several statistical strategies, termed multiple estimation techniques, to determine the effect of an intervention rather than relying on one approach. Another strategy is the use of “multiple data sets” (Duncan et al., 2014, p. 3). Finally, when possible, performing subgroup analysis is another strategy to demonstrate investigator efforts to address results reproducibility.
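The multiple-estimation strategy above can be illustrated with a minimal sketch. The data below are simulated (a hypothetical intervention with a true effect of 0.5 standard deviations); the point is only that estimating the same effect several ways, and finding agreement, is one informal robustness check:

```python
import numpy as np
from scipy import stats

# Simulated scores for a hypothetical intervention study (illustrative only).
rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=200)
treated = rng.normal(loc=0.5, scale=1.0, size=200)

# Multiple estimation techniques applied to the same effect.
estimates = {
    "mean difference": treated.mean() - control.mean(),
    "median difference": np.median(treated) - np.median(control),
    "20% trimmed-mean difference": (stats.trim_mean(treated, 0.2)
                                    - stats.trim_mean(control, 0.2)),
}
for name, est in estimates.items():
    print(f"{name}: {est:.3f}")
```

When the estimators broadly agree, confidence in the reported effect grows; large disagreement among them would prompt a closer look at the data before publication.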
Inferential reproducibility is “the drawing of qualitatively similar conclusions from either an independent replication of a study or a reanalysis of the original study” (Goodman et al., 2016, p. 4). Using a Bayesian perspective, one that recognizes “the probability that a claim is true after an experiment is a function of the strength of the new experimental evidence combined with how likely it was to be true before the experiment” (Goodman et al., 2016, p. 4), can help address the limitations of a frequentist approach. A frequentist approach to statistics “does not allow the assigning of a probability of truth to a hypothesis or claim” (Goodman et al., 2016, p. 4). When considered through a Bayesian perspective, reproducibility should provide sufficient evidence to either support or not support the original study outcomes (Note: The reader may wish to review an earlier article by Spurlock [2017a] on statistical significance).
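The Bayesian updating that Goodman et al. (2016) describe can be made concrete with a short sketch. The prior probability and Bayes factor below are illustrative numbers, not values from any study:

```python
def posterior_probability(prior, bayes_factor):
    """Probability a claim is true after an experiment (Bayes' rule on odds).

    prior: probability the claim was true before the experiment
    bayes_factor: how much more likely the observed data are under the
        claim than under its negation (strength of the new evidence)
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = bayes_factor * prior_odds
    return posterior_odds / (1 + posterior_odds)

# A skeptical prior (20%) combined with moderately strong evidence
# (Bayes factor of 10) yields a posterior of 2.5/3.5, about 0.71.
print(posterior_probability(0.20, 10))
```

The same evidence moves a skeptical prior less far than a sympathetic one, which is exactly the point of the quoted passage: what a replication "proves" depends on both the new data and how plausible the claim was beforehand.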
One of the significant issues associated with replication is that the original study may have been underpowered (Anderson & Maxwell, 2017). When investigators attempt replication using the same sample size as the original study, the underpowering continues. Thus, it is critical that investigators (a) recognize the issue of underpowering and (b) take measures to ensure that they have sufficient power in their replication study (Note: The reader may wish to review an earlier article by Spurlock [2017b] on effect sizes).
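A quick calculation shows why reusing the original sample size perpetuates underpowering. The sketch below uses the standard normal-approximation formula for a two-group comparison of means; the effect size and the original study's sample size are hypothetical:

```python
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison of
    means (normal approximation; effect_size is Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = norm.ppf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Suppose a (hypothetical) original study reported d = 0.5 with only
# 30 participants per group. For 80% power at alpha = .05, a replication
# would need about 63 per group -- roughly double the original.
print(n_per_group(0.5))
```

Because published effect sizes from underpowered studies are often inflated, a conservative replication plan would also recompute the required n with a smaller assumed effect than the original report.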
Replication requires careful consideration and attention to details. Brandt et al. (2014) and McIntosh et al. (2017) are two resources you may wish to consult when contemplating replication. The former provides a recipe for exact or close replication, whereas the latter provides a framework for reproducibility.
Having confidence in what we do as educators is enhanced when there is empirical evidence to support our actions. Replication studies are one way to contribute to our knowledge base and to increase our confidence. As nurse scientists commit to advancing the science of nursing education, we must embrace the challenges of replication. Information in this article is one step to help scientists accept the challenge.
Please send feedback, comments, and suggestions for future Methodology Corner topics to Darrell Spurlock, Jr., PhD, RN, NEA-BC, ANEF, at
- Alvarez, R.M., Key, E.M. & Núñez, L. (2018). Research replication: Practical considerations. PS: Political Science & Politics, 51, 422–426. doi:10.1017/S1049096517002566 [CrossRef]
- Anderson, S.F. & Maxwell, S.E. (2016). There's more than one way to conduct a replication study: Beyond statistical significance. Psychological Methods, 21(1), 1–12. doi:10.1037/met0000051 [CrossRef]
- Anderson, S.F. & Maxwell, S.E. (2017). Addressing the “replication crisis”: Using original studies to design replication studies with appropriate statistical power. Multivariate Behavioral Research, 52, 305–324. doi:10.1080/00273171.2017.1289361 [CrossRef]
- Benson, L. & Borrego, M. (2015). The role of replication in engineering education research. Journal of Engineering Education, 104, 388–393. doi:10.1002/jee.20082 [CrossRef]
- Brandt, M.J., IJzerman, H., Dijksterhuis, A., Farach, F.J., Geller, J., Giner-Sorolla, R. & van't Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224. doi:10.1016/j.jesp.2013.10.005 [CrossRef]
- Cai, J., Morris, A., Hohensee, C., Hwang, S., Robison, V. & Hiebert, J. (2018). The role of replication studies in educational research. Journal for Research in Mathematics Education, 49(1), 2–8. doi:10.5951/jresmatheduc.49.1.0002 [CrossRef]
- Duncan, G.J., Engel, M., Claessens, A. & Dowsett, C.J. (2014). Replication and robustness in developmental research. Developmental Psychology. Advance online publication. doi:10.1037/a0037996 [CrossRef]
- Fabrigar, L.R. & Wegener, D.T. (2016). Conceptualizing and evaluating the replication of research results. Journal of Experimental Social Psychology, 66, 68–80. doi:10.1016/j.jesp.2015.07.009 [CrossRef]
- Goodman, S.N., Fanelli, D. & Ioannidis, J.P.A. (2016). What does research reproducibility mean? Science Translational Medicine, 8(341), 1–6. doi:10.1126/scitranslmed.aaf5027 [CrossRef]
- Hubbard, R. (2016). Corrupt research: The case for reconceptualizing empirical management and social science. Thousand Oaks, CA: Sage.
- Knudson, D. (2017). Confidence crisis of results in biomechanics research. Sports Biomechanics, 16, 425–433. doi:10.1080/14763141.2016.1246603 [CrossRef]
- McIntosh, L.D., Juehne, A., Vitale, C.R.H., Liu, X., Alcoser, R., Lukas, J.C. & Evanoff, B. (2017). Repeat: A framework to assess empirical reproducibility in biomedical research. BMC Medical Research Methodology, 17, 143. doi:10.1186/s12874-017-0377-6 [CrossRef]
- Morin, K.H. (2016). Replication: Needed now more than ever. Journal of Nursing Education, 55, 423–424. doi:10.3928/01484834-20160715-01 [CrossRef]
- Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13, 90–100. doi:10.1037/a0015108 [CrossRef]
- Shepherd, B.E., Peratikos, M.B., Rebeiro, P.F., Duda, S.N. & McGowan, C.C. (2017). A pragmatic approach for reproducible research with sensitive data. American Journal of Epidemiology, 186, 387–392. doi:10.1093/aje/kwx066 [CrossRef]
- Shrout, P.E. & Rodgers, J.L. (2018). Psychology, science and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 69, 487–510. doi:10.1146/annurev-psych-122216-011845 [CrossRef]
- Spurlock, D. (2017a). Beyond p < .05: Toward a Nightingalean perspective on statistical significance for nursing education researchers. Journal of Nursing Education, 56, 453–455. doi:10.3928/01484834-20170712-02 [CrossRef]
- Spurlock, D. (2017b). The purpose and power of reporting effect sizes in nursing education research. Journal of Nursing Education, 56, 645–647. doi:10.3928/01484834-20171020-02 [CrossRef]