Journal of Nursing Education

Methodology Corner 

Reliability

John M. Taylor, PhD

Abstract

A cursory look at the measurement practices in the Journal of Nursing Education revealed several deficits that our community is encouraged to address. In 2020, a little less than half of our quantitative studies did not provide reliability estimates from their own data, opting instead to provide estimates reported in the literature or none at all. Of the studies that did supply estimates using their data, only Cronbach's alpha was reported. Unfortunately, limitations with Cronbach's alpha and inconsistent reporting practices are likely undermining the science of nursing education. Researchers are encouraged to estimate reliability using their own data with increasingly valid techniques. [J Nurs Educ. 2021;60(2):65–66.]

In a previous Methodology Corner column, Dr. Spurlock highlighted the indispensable role of measurement in the science of nursing education (Spurlock, 2017). Without validated tools, measurement scores may still reflect the phenomena a tool was intended to measure but may do so with intolerable levels of error, or they may not reflect the phenomena at all. Both problems can confound the goals and practices of nursing education (Spurlock, 2017). By focusing our attention on measurement, Dr. Spurlock hoped to encourage nurse education researchers to place more emphasis on sound measurement practices that help us build a “robust science of nursing education” (Spurlock, 2017, p. 259). This installment of the Methodology Corner echoes Dr. Spurlock's efforts by focusing our attention on the reliability, or consistency, of measurement scores in nursing education research.

Many measurement practices in nursing education research should be addressed, and perhaps this column will address them in time. But the impetus for our current focus on reliability stems from a cursory look at the measurement practices in the Journal of Nursing Education (JNE) during 2020. From issues 1 through 12, approximately 17% (n = 3) of the quantitative works (n = 18) provided readers with evidence of the reliability of their scores using only the extant literature, whereas 28% (n = 5) provided no reliability information at all. A little more than half (n = 10) provided reliability estimates using the data upon which their conclusions were based. Only Cronbach's alpha was used in studies that provided estimates based on the data at hand, and estimates were larger than .70 in nearly all cases. These measurement trends appear to be similar to those in other bodies of literature. For example, Barry et al. (2014) found that reliability was largely estimated using Cronbach's alpha in health education and behavior journals and was estimated using the researchers' own data in a little less than half of the studies.

Although the range of reliability estimates suggests our field enjoyed reasonably sound measurements in 2020, the use of Cronbach's alpha is somewhat limiting, and the soundness of the conclusions drawn in nearly half of the quantitative works is unclear because reliability estimates based on the data at hand were missing. Given these deficits, it might be beneficial to reiterate a few measurement practices that should be more consistently applied by our researchers:

  • When planning a study, researchers should allow the history of reliability on scores from a tool (e.g., indices of reliability used, sample characteristics the reliability estimates were based upon, size of the reliability estimates) to influence their decision to adopt a measure or not (American Educational Research Association et al., 2014). If reliability is overlooked, researchers may inadvertently select tools that do not reliably measure the variables in their studies.
  • After the data have been collected, researchers should report reliability estimates using the same data on which their inferences are based (American Psychological Association, 2020). Reliability is a property of the scores collected using a measurement tool, rather than a property of the tool itself, and reliability can vary substantively between samples (American Educational Research Association et al., 2014). Hence, researchers need to disclose to readers the scientific merits of the measurement processes undertaken in their studies, including the evidence for the reliability of their measurement scores (American Psychological Association, 2020).
  • Researchers need to look beyond Cronbach's alpha to an increasingly valid set of reliability techniques. Cronbach's alpha is unable to address all the reliability needs of nursing education. For example, at times, test-retest reliability might better fit the needs of a longitudinal study than Cronbach's alpha (American Psychological Association, 2020). Moreover, Cronbach's alpha is notably limited. For example, Cronbach's alpha likely underestimates the reliability of the scores from a measure because it rests on assumptions that are unlikely to hold in applied circumstances (e.g., tau equivalence; McNeish, 2018). In addition, other forms of internal consistency, such as coefficient omega, have been found to perform better than Cronbach's alpha (McNeish, 2018). If researchers continue to rely solely on Cronbach's alpha, the growth of nursing education research is likely to be stymied.
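The point about tau equivalence in the last bullet can be illustrated with a brief simulation. The sketch below (a hypothetical example, not drawn from any study cited here) generates item scores from a one-factor model in which the items load unequally on the factor, a common violation of tau equivalence. It then computes Cronbach's alpha from the simulated covariances and compares it with coefficient omega derived from the known loadings; under these assumed conditions, alpha falls below the score reliability that omega captures.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical congeneric measure: four items with unequal factor
# loadings, which violates the tau-equivalence assumption of alpha.
loadings = np.array([0.9, 0.7, 0.5, 0.4])
error_sd = np.sqrt(1 - loadings**2)  # unit-variance items

# Simulate item responses for a large sample.
n = 20_000
factor = rng.standard_normal(n)
items = factor[:, None] * loadings + rng.standard_normal((n, 4)) * error_sd

def cronbach_alpha(data):
    """Cronbach's alpha computed from the sample item covariance matrix."""
    k = data.shape[1]
    cov = np.cov(data, rowvar=False)
    return (k / (k - 1)) * (1 - cov.trace() / cov.sum())

# Coefficient omega from the (here, known) loadings and error variances:
# squared sum of loadings over total composite variance.
omega = loadings.sum()**2 / (loadings.sum()**2 + (error_sd**2).sum())

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.3f}, omega = {omega:.3f}")
```

In practice the loadings are unknown and omega is estimated from a fitted factor model, but the pattern is the same: when loadings are unequal, alpha understates the reliability of the composite scores, which is one reason McNeish (2018) recommends omega.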

Conclusion

I thank Dr. Spurlock for his dedication to the nursing profession and his leadership contributions to JNE. He initiated the Methodology Corner column as a means of advocating for the nursing profession by encouraging our community to adopt increasingly valid research practices. For the foreseeable future, the Methodology Corner will continue to promote methods and practices that help us build an increasingly robust nursing education science. Indeed, as a result of a cursory look at the measurement practices in JNE during 2020, it seemed appropriate for this installment to encourage our community to pursue a more consistent use of valid approaches to reliability. Specifically, researchers are reminded that they should evaluate the history of reliability of the tools they plan on using prior to collecting their data, provide reliability estimates using their own data, and do so using a set of increasingly valid techniques that better speak to the scientific merits of the measurements derived in their studies.

References

  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. American Educational Research Association.
  • American Psychological Association. (2020). Publication manual of the American Psychological Association (7th ed.).
  • Barry, A. E., Chaney, B., Piazza-Gardner, A. K., & Chavarria, E. A. (2014). Validity and reliability reporting practices in the field of health education and behavior: A review of seven journals. Health Education & Behavior, 41(1), 12–18. https://doi.org/10.1177/1090198113483139
  • McNeish, D. (2018). Thanks coefficient alpha, we'll take it from here. Psychological Methods, 23(3), 412–433.
  • Spurlock, D. R. (2017). Measurement matters: Improving measurement practices in nursing education research. Journal of Nursing Education, 56(5), 257–259.
Authors

Dr. Taylor is Associate Professor, School of Nursing, Saint Louis University, St. Louis, Missouri.

The author has disclosed no potential conflicts of interest, financial or otherwise.

Address correspondence to John M. Taylor, PhD, Associate Professor, School of Nursing, Saint Louis University, 3525 Caroline Street, St. Louis, MO 63104; email: john.@slu.edu.

10.3928/01484834-20210120-02
