Journal of Nursing Education

Methodology Corner 

When “Little Is Known” Is Not Entirely True

Darrell Spurlock Jr., PhD, RN, NEA-BC, ANEF


Research in nursing education often involves addressing problems and topics of interest to scholars from numerous fields outside of nursing—and to nurse educators from around the globe. Although it is sometimes true that little is known about a given topic of interest to nursing education researchers, often more is known about the topic if researchers consider evidence with extradisciplinary and international origins. In this Methodology Corner article, a framework for evaluating the applicability and transferability of study findings is presented alongside examples of how the framework can be operationalized to expand the evidence base from which nursing education researchers can draw when designing studies of their own. [J Nurs Educ. 2019;58(5):257–259.]



A common refrain in the introductory section of nursing education research proposals—and, eventually, the resulting research reports—is that little is known about a particular topic of interest. The presence of a knowledge gap serves as a core argument to justify the need for a study (Polit & Beck, 2016), and in many cases, knowledge gaps do exist and a study would help fill a void in nurse educators' understanding of important phenomena. However, knowledge gaps can easily be overestimated when nursing education researchers leave important, often high-quality evidence "on the table" because the evidence was generated in disciplines other than nursing or in countries other than the researchers' own. Evidence from outside of nursing can easily be missed when researchers restrict the scope of their literature searches to databases covering only nursing or the health sciences. International literature may be viewed as too difficult to evaluate given the multitude of contextual factors that create real, but not insurmountable, challenges to determining its relevance to a given problem. Although the disciplinary and geographic origins of research evidence do affect how and to what extent the evidence is relevant to a specific research problem or proposed study, methods and tools exist to help researchers make informed decisions about the relevance of these sources of evidence to their own research problems and questions.

The consequences of concluding that little is known on a research topic based, at least in part, on failing to consider extradisciplinary evidence or evidence generated in international settings are numerous. Based on a conclusion that little is known about a given topic, researchers might decide to undertake foundational qualitative research necessary to provide conceptual clarity or to generate new theoretical perspectives on the topic. They might then carry out the time-intensive work of developing new scales or instruments to measure relevant phenomena of interest. This would be followed by preliminary and small-scale studies that, although they can serve as a basis for larger, more complex studies in the future, take years to move from conception to dissemination. This work, often done rigorously and with the best of intentions, might serve only to delay the generation of evidence capable of immediately informing nursing education policy and practice. Holt (2003) provided several examples of how researchers in psychology sometimes rediscover phenomena or methods that had been described in the literature decades earlier but appear to have been collectively forgotten by the discipline and its current generation of researchers. The rediscovered phenomena are given new life and new names but may lack the benefits of having been built on existing knowledge and, although perhaps not intentionally, fail to acknowledge the contributions of earlier researchers.

When nursing education researchers proactively seek out and evaluate extradisciplinary or international evidence of potential relevance to their studies, redundant and unnecessary work can be avoided and the time to development of actionable evidence for nurse educators can be shortened by years—or even decades. In this Methodology Corner article, readers are provided with an overview of key methods and strategies to aid in evaluating the applicability and transferability of evidence not normally considered in nursing education research.

Applicability and Transferability

Although readers may be most familiar with the concepts of generalizability and external validity of study findings, these broad concepts lack a standardized method of evaluation and, as a result, are subject to debate and disagreement. Two related but more operational concepts identified and advanced by Wang, Moss, and Hiller (2006) are applicability and transferability. Although derived from a public health context and using intervention research as an example, the terms and the questions they generate have direct relevance across fields by helping researchers answer this question: What are the key contextual factors that promote or degrade the relevance of an existing study and its findings to a problem or topic of current interest? Wang et al. (2006) described applicability in process-focused terms, centering on evaluative questions about feasibility, acceptability, availability of resources, and the presence of barriers that might impede implementation in a new setting. Transferability is outcomes focused, with questions about the comparability of study populations, systems of delivering services or education, and the nature of the underlying problem at its center. Although transferability is most closely associated with intervention research, applicability has clear pertinence both to intervention research and to the nonexperimental, descriptive designs that dominate studies in nursing education.

In more recent work that integrates learning from nearly two decades of effort in evaluating studies for inclusion in works of synthesis such as systematic reviews and meta-analyses, Munthe-Kaas, Nøkleby, and Nguyen (2019) evaluated 31 checklists designed to aid researchers and policy makers in appraising evidence of diverse origins and methods in support of evidence-based decision making. Munthe-Kaas et al. organized the core areas of importance to evaluating the applicability and transferability of findings from one context to another into the following topical areas: population, intervention (and comparison intervention), implementation and environmental contexts, outcomes, and characteristics of the researcher. Table 1 provides a nonexhaustive list of specific questions researchers might ask when evaluating the extent to which studies with extradisciplinary or international origins are relevant and informative to nearly any phase of their own research planning activities. The questions elicit reasons why a study might not be judged relevant to a proposed study; in this way, existing research is treated as relevant unless judged otherwise.


Table 1: Applicability and Transferability Questions With Associated Examples

To briefly highlight an area where extradisciplinary and international research evidence could meaningfully inform research in nursing education, Schneider and Preckel (2017) synthesized the results from 38 meta-analyses of factors associated with student achievement in higher education. The review was based on 3,330 standardized effect size estimates gathered from an estimated 1,920,239 student research subjects over the preceding four decades. Ordered by the magnitude of their effect sizes, five of the top 10 variables were student-focused variables, several of which are commonly examined in studies of nursing student success, including admission test scores, high school grade point average, and motivation levels. The other half of the top 10 variables are instructor focused or pedagogically focused and have received little or perhaps no attention in the nursing education literature. These factors include the use of peer assessment, which had the largest effect size (Cohen's d = 1.91), followed by teacher preparedness, the clarity and understandability of instructor explanations, and the ability of instructors to stimulate interest and intellectual curiosity in the subject matter. By reaching beyond the confines of the existing nursing education literature, nursing education researchers can include in future studies variables that have received inadequate attention in past investigations. It is not unreasonable to consider that models proposed over the years to address perplexing challenges such as predicting nursing student success have yielded unsatisfactory solutions because they have addressed only some—and often not the most important—factors associated with student success.
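For readers interpreting the magnitudes cited above, Cohen's d is the standard definition of a standardized mean difference, expressing the gap between two group means in pooled standard deviation units:

```latex
d = \frac{\bar{X}_1 - \bar{X}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

By Cohen's conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), the d = 1.91 reported for peer assessment corresponds to a difference of nearly two pooled standard deviations between groups, an unusually large effect by the standards of educational research.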


The principles presented here for evaluating the relevance of extradisciplinary or international research evidence to nursing education contexts (primarily within the United States) in no way represent a complete list of the questions researchers need to ask. The value added by using such a framework rests in how broad concepts such as generalizability and external validity can be succinctly distilled into a fairly discrete set of questions useful when making judgments about the relevance of diverse forms of evidence to studies in nursing education. By considering more of the evidence available outside our primary field of study and from educational settings beyond the usual borders, researchers may find that, in fact, more is known about a variety of topics than was previously thought—and by making use of this existing knowledge, the pace of knowledge development within our field can be accelerated.

Please send feedback, comments, and suggestions for future Methodology Corner topics to Darrell Spurlock, Jr., PhD, RN, NEA-BC, ANEF, at


References

  • Holt, R.R. (2003). Some history of a methodological rediscovery. American Psychologist, 58, 406–407. doi:10.1037/0003-066X.58.5.406
  • Munthe-Kaas, H., Nøkleby, H., & Nguyen, L. (2019). Systematic mapping of checklists for assessing transferability. Systematic Reviews, 8(22), 1–16. doi:10.1186/s13643-018-0893-4
  • Polit, D.F., & Beck, C.T. (2016). Nursing research: Generating and assessing evidence for nursing practice (10th ed.). Philadelphia, PA: Lippincott Williams & Wilkins.
  • Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143, 565–600. doi:10.1037/bul0000098
  • Wang, S., Moss, J.R., & Hiller, J.E. (2006). Applicability and transferability of interventions in evidence-based public health. Health Promotion International, 21, 76–83. doi:10.1093/heapro/dai025

Applicability and Transferability Questions With Associated Examples

Key Applicability and Transferability Questions | Example Scenarios

Population

Do the problems or issues investigated in the origin study exist in similar ways in the proposed population to be studied?

Is the origin study population different enough on key demographic or other characteristics in ways that are likely to substantively influence study findings?

Is there evidence of low acceptability by study participants, participant preferences, or response rate/attrition problems that impact study findings and should therefore inform future studies?

The content of an intervention aimed at improving the writing skills of college students is likely to be language-specific and influenced by the precollege educational preparation of students, which varies considerably across the globe. Findings from a study on predictors of academic success among a large sample of college and university students with a wide range of majors from across the United States are likely highly relevant to a similar study of nursing students. High survey response rates in countries where students have little exposure to requests for research participation may not translate to settings where the survey request burden is high.

Intervention (if applicable)

Is the intervention described with enough clarity, in terms of its theoretical basis, content, and method of delivery, so that it could be replicated?

If tailoring the intervention is necessary, is it clear how much tailoring can be done that leaves intact the underlying causal mechanisms by which the intervention is thought to work?

To what extent is it possible to maintain intervention fidelity (i.e., consistency in intervention dose, frequency, timing, duration) when moving from the origin study to a new study context?

An innovative teaching method tested in a true experimental design will produce very different results from studies in which those same teaching methods were integrated into existing courses and then evaluated retrospectively using student course evaluations. A learning strategy found to be highly effective in improving the research appraisal skills of doctoral psychology students may not function similarly among doctoral nursing students if the learning strategy presumes a deep discipline-specific knowledge of psychological research.

Implementation and environmental contexts

Are there structural differences in important systems, organizations, or in the legal or regulatory context that inhibit authentic implementation in the proposed study context?

Are there substantial barriers, both human and financial, that would fundamentally limit intervention or study procedure implementation in new contexts?

Are there cultural differences that substantially affect intervention implementation or evaluation?

Are there substantial differences in the comparison/control conditions or interventions (if applicable) in the origin study when compared to the proposed study context?

A 100% participation rate among student research subjects from countries where research involvement can be mandated is not possible in settings where ethical restrictions preclude mandatory research participation. Results from studies of faculty–student dyads may vary considerably across cultures depending on, among other factors, the extent to which such relationships are hierarchical in nature. The most important predictors of research productivity identified in a study of nursing faculty from research-intensive universities are unlikely to be the same for nursing faculty working in teaching-focused colleges and universities.

Outcomes

Are origin study outcomes defined in ways sufficiently applicable to the proposed study context?

Were the outcome measures used in the origin study supported by adequate reliability and validity evidence, and do equivalent measures exist for use in the proposed study context?

Do the intervention value propositions posited in the origin study apply in the proposed study context? If not, can similar value propositions be identified?

If surrogate or proxy measures were used in place of primary outcome measures, does adequate evidence exist to support use of those or similar measures in the proposed study context?

A large study identifying the most effective licensure exam preparation methods among social work students may be informative for a similar study among nursing students given that high-stakes licensure exams are required for entry into practice in both disciplines. Studies exploring the effectiveness of methods for clinical skill evaluation among medical students could be highly informative for similar studies among nursing students. Studies from a wide range of fields report weak, practically null relationships between subjects' self-reports of their knowledge or ability and scores from direct observation of their skills by trained faculty evaluators. Thus, the evidence argues against substituting self-reports for more direct methods, such as observed performance.

Researcher characteristics

Are there differences between the characteristics of the origin study researcher(s) and those of the researchers in the proposed study that are likely to influence the conduct of the study and possibly study outcomes?

Did the origin study use methods that would be impossible to replicate, or that for legal or ethical reasons could not be implemented, in the proposed study context?

A study of the teaching effectiveness of college-level humanities faculty who participated in a 1-year teaching fellowship program may hold relevance for nursing faculty if replication of the fellowship program is possible. A study employing deception of student participants (with later debriefing) may be more acceptable in some disciplines than in others, and to some ethical review boards more than others.

Dr. Spurlock is Professor, School of Nursing, and Director, Leadership Center for Nursing Education Research, Widener University, Chester, Pennsylvania.

Address correspondence to Darrell Spurlock, Jr., PhD, RN, NEA-BC, ANEF, Professor, School of Nursing, Widener University, One University Place, Chester, PA 19013; e-mail:

