Evidence-based practice (EBP) is critically important and effective for improving patient care quality and outcomes and reducing undesirable variability in health care (Hull, Thut, Cheng, Kaufhold, & Brown, 2016; Kram, DiBartolo, Hinderer, & Jones, 2015), yet it is not the standard of care (Melnyk, Gallagher-Ford, Long, & Fineout-Overholt, 2014). With reimbursement often linked to outcomes, and a growing body of research supporting the use of evidence to inform care decisions, it is important for nurses to possess the knowledge needed to enact EBP.
Considerable research has helped define the knowledge and competencies needed to deliver or direct EBP (Melnyk, Fineout-Overholt, Gallagher-Ford, & Kaplan, 2012; Melnyk et al., 2014) at both patient and facility levels (American Association of Colleges of Nursing [AACN], 2008; Cronenwett et al., 2007; Quality and Safety Education for Nurses [QSEN], 2007). The American Nurses Credentialing Center (ANCC, 2013) Magnet® model is highly focused on the collaborative, professional process of evaluating and using evidence to inform changes in practice and the empirical outcomes that result. The ANCC Magnet model (2013) recognizes the importance of data and benchmarking to inform decisions at the patient and systems levels. Yet, even in ANCC Magnet–designated facilities where nurses are more likely to be educated about EBP and supported in its use, little is known about what RNs, who are responsible for enacting EBP and supporting new nurses to practice EBP, know about providing and facilitating evidence-based care.
Education is often reported by nurses to be a facilitator of EBP (Connor, Dwyer, & Oliveira, 2016; Melnyk et al., 2012). The influence of education on EBP is unclear, as EBP knowledge has not been objectively evaluated across educational levels. Similarly, the impact of specific educational strategies to facilitate EBP knowledge has not been well studied, perhaps due in part to a lack of available measures. Nursing has long relied on subjective measures to study EBP, including self-assessments of attitudes, beliefs, facilitators/barriers, knowledge, and implementation (Melnyk, Fineout-Overholt, & Mays, 2008; Nagy, Lumby, McKinley, & Macfarlane, 2001; Upton & Upton, 2006). Although subjective measures can be effectively used to study a variety of constructs, they should not be used as surrogate measures of more directly measurable constructs, such as knowledge. Evidence from outside the discipline consistently shows that self-reports of knowledge, skills, and ability correspond poorly to objective measures of the same construct (Davis et al., 2006; Zell & Krizan, 2014). These findings strongly suggest that researchers should limit the role of self-assessment when objective measures are available. Research with a well-developed, objective EBP knowledge measure is needed to rigorously evaluate what nurses know about EBP and the influence of educational programs, teaching strategies, and the organizational structures to facilitate and sustain EBP knowledge postlicensure. The purposes of this study were to (a) describe the EBP knowledge of RNs; (b) describe correlations among educational, institutional, practice, and social factors and EBP knowledge; (c) assess the correlation between objective and subjective measures of their EBP knowledge; and (d) gather additional validity evidence for the Evidence-Based Practice Knowledge Assessment in Nursing (EKAN).
The study used a multisite, cross-sectional, descriptive, correlational design. Human subjects approvals were obtained and documented prior to study implementation. All data were collected during a 12-week period in mid-2015 from RNs working at one of two participating Magnet-designated, acute care hospitals within a single hospital network in a midwestern U.S. state. Staff RNs and nurse leaders who provide or direct care in all departments, including specialty areas, were invited to participate. A total of 163 participants were recruited. A minimum desired sample size of N = 100 was guided by the requirements for the single-parameter Rasch model to operate stably, with adequate item-quality thresholds, in evaluating the psychometric performance of the EKAN.
Study staff recruited eligible participants via flyers, e-mail, and word of mouth. Interested participants reported to a data collection session in a computer laboratory or private workspace. Participation implied consent because no personally identifying information was collected. Study staff members proctored all data collection sessions to ensure participants had no access to personal belongings or reference materials (e.g., Web sites, books, cell phones). Most participants completed all study-related forms within 20 minutes. When possible, data were collected using standardized procedures in a computer laboratory with access to electronic versions of the instruments in Qualtrics®, a secure online data-capture platform. When computer access was not possible, data were collected using paper versions of the instruments.
Participants completed three instruments in the following order: demographic questionnaire, Evidence-Based Practice Questionnaire (EBPQ; Upton & Upton, 2006), and EKAN (Spurlock & Wonder, 2015). A 17-item demographic questionnaire was used to examine educational, institutional, practice, and social characteristics. The EBPQ (Upton & Upton, 2006) is a 24-item self-report measure comprising three subscales: EBP knowledge/skills, attitudes, and practice/use. For the 14-item knowledge/skills subscale, participants are asked to self-assess their knowledge/skills using a 7-point scale, ranging from 1 = poor to 7 = best. The 4-item attitudes subscale presents bipolar attitude statements connected via a 7-point scale. The 6-item practice/use subscale asks participants about the frequency of performing selected EBP skills during the past year on a 7-point scale, ranging from 1 = never to 7 = frequently. The EBPQ was selected for its broad acceptance and use in more than 40 countries and 12 languages (EBPQ, 2016; Upton & Upton, 2006; Upton, Upton, & Scurlock-Evans, 2014). In the current study, the EBPQ yielded Cronbach's alpha internal consistency reliability coefficients of .93 for the entire questionnaire, .92 for the knowledge/skills subscale, .63 for the attitudes subscale, and .90 for the practice/use subscale. These results are consistent with the internal consistency reliability estimates reported elsewhere (Upton & Upton, 2006; Upton et al., 2014).
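For readers less familiar with internal consistency estimation, Cronbach's alpha is a simple function of the item variances and the variance of respondents' total scores. The following minimal Python sketch, using small hypothetical item data (not study data), illustrates the computation:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    `items` is a list of k lists, each holding one item's scores
    across the same set of respondents.
    """
    k = len(items)
    # Sum of the individual item variances
    item_var_sum = sum(pvariance(col) for col in items)
    # Variance of each respondent's total score across all items
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 3-item scale, 4 respondents
items = [[3, 4, 5, 6], [2, 4, 5, 5], [3, 5, 4, 6]]
print(round(cronbach_alpha(items), 2))  # → 0.93
```

Alpha approaches 1.0 as items covary strongly relative to their individual variances; values near .90, such as those observed for the full EBPQ here, indicate high internal consistency.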
The EKAN (Spurlock & Wonder, 2015) is a 20-item multiple-choice test designed to measure nurses' EBP knowledge. Items on the EKAN were designed to measure knowledge from domains specified by the AACN (2008) and QSEN (2007) frameworks. Questions on the EKAN measure nurses' knowledge related to research appraisal, quality improvement strategies, and the EBP process, among other topics (see Spurlock & Wonder, 2015, for more details on the development and testing of the EKAN). Developed as an objective knowledge measure using a single-parameter Rasch model, the EKAN demonstrated strong item performance (item reliability of .98), acceptable person reliability (.66), and strong evidence of trait unidimensionality supported by infit and outfit statistics centering on 1.0 in the initial validation study (Spurlock & Wonder, 2015). The EKAN is scored using a simple scoring method (correct or incorrect), with a total possible score of 20 points. In the current study, EKAN mean item difficulty was 0.0005 (range = −4.01 to 2.49), where higher values indicate increased difficulty. Weighted mean square infit was 1.00 (range = 0.84 to 1.18), standardized weighted mean square infit was −0.018 (range = −2.36 to 3.09), unweighted mean square outfit was 0.99 (range = 0.63 to 1.36), and standardized unweighted mean square outfit was −0.04 (range = −2.26 to 3.02). The item separation index was robust at 6.27, but the person separation index was 1.15, indicating some restriction in trait range, due mainly to a small proportion of high-scoring examinees. EKAN item reliability was .98 and person reliability was .57, which are acceptable values for short scales with fewer than three proposed ability strata (Meyer, 2014).
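Under the single-parameter Rasch model used to develop and evaluate the EKAN, the probability of a correct response depends only on the difference between a person's ability and an item's difficulty, both expressed on the same logit scale. A minimal sketch of that response function (illustrative only, not the jMetrik implementation):

```python
from math import exp

def rasch_p(theta, b):
    """Rasch (1PL) probability that a person of ability `theta`
    (in logits) answers an item of difficulty `b` correctly."""
    return 1.0 / (1.0 + exp(-(theta - b)))

# When ability exactly matches item difficulty, P(correct) = .50
print(rasch_p(0.0, 0.0))  # → 0.5
```

On this scale, the item difficulties reported above (−4.01 to 2.49 logits) span items that nearly all examinees answered correctly through items that most examinees missed.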
Data were analyzed using SPSS® version 23 software. Descriptive statistics summarized sample characteristics and scores from the EBPQ and EKAN. Differences between EBPQ and EKAN scores, based on demographic factors, were assessed using one-way ANOVA. Single-parameter Rasch analysis of EKAN items and the overall scale was conducted using jMetrik™ (Meyer, 2014). Pearson's r correlation coefficients and hierarchical multiple regression examined the relationships between continuously measured variables, including objective (EKAN) and subjective (EBPQ) measures of EBP knowledge.
Educational, Institutional, Practice, and Social Characteristics of the Sample
Responses were received from 163 participants, although not all participants provided complete responses on all of the instruments. The majority of participants were Caucasian (95.1%, n = 155) women (92%, n = 150), with a mean age of 40.9 years (range = 23 to 66 years) and 14.6 years of experience (SD = 10.8; range = 1 to 43 years). Almost all participants (99.4%) reported English as their primary language. The majority of participants (57.7%) reported their highest earned degree as a baccalaureate, followed by a master's degree (22.1%). Because one of the study aims was to examine the psychometric properties of the EKAN, missing data imputation techniques were not used. Instead, an available-case approach was used in calculating descriptive statistics for demographics, scale scores, internal consistency reliability estimates, and in the correlation and regression analysis. A total of 151 participants provided complete data on the demographic questionnaire, EBPQ, and EKAN.
EBP Knowledge/Skills, Attitudes, and Practice/Use
Overall, participants rated themselves positively on EBP knowledge/skills, attitudes, and practice/use. On the EBPQ's 7-point scales, attitudes scores were highest at a mean of 5.51 (SD = 0.98), followed by knowledge/skills scores at a mean of 4.68 (SD = 0.81) and practice/use scores at a mean of 4.48 (SD = 1.37). Examination of differences in EBPQ scores based on participants' level of education using ANOVA revealed no statistically significant differences. Sum scores on the EKAN ranged from 5 to 18 (of 20 possible points), and the mean score was 10.58 (SD = 2.87). A statistically significant difference in mean EKAN sum scores was noted between participants with lower versus higher levels of nursing education (9.0 versus 12.7; F3,150 = 11.84, p < .001). Mean EKAN sum scores were 9.0 (SD = 2.1) for participants reporting a diploma or associate's degree in nursing as their highest degree, 10.3 (SD = 2.8) for those with a baccalaureate degree in nursing, and 12.7 (SD = 2.5) for those with a master's degree in nursing. A Bonferroni post hoc analysis identified a statistically significant difference (p < .001) in EKAN sum scores between participants with master's degrees and those with all other degree types.
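The one-way ANOVA reported above partitions variability in EKAN scores into between-group (education level) and within-group components, and the F statistic is the ratio of the two resulting mean squares. A minimal pure-Python illustration with toy data (not study data):

```python
from statistics import mean

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of score groups."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    k, n = len(groups), len(all_scores)
    # Between-group sum of squares: group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: scores around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Three hypothetical education-level groups
print(round(one_way_anova_F([[1, 2, 3], [2, 3, 4], [5, 6, 7]]), 2))  # → 13.0
```

A Bonferroni post hoc procedure then tests each pair of group means while dividing the significance threshold by the number of comparisons, which is how the master's-degree group was isolated as the source of the overall effect.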
Correlation and Regression Analyses
Pearson's r correlation coefficients were calculated for continuously measured variables. Results are presented in the Table. A key finding was that although positive, statistically significant correlations were found between scores on each of the EBPQ subscales (r = .350 to .595, p < .01), correlations between each of the EBPQ subscales and the EKAN sum score were small, ranging from r = .017 to .123, and none reached statistical significance. In addition, responses to the belief item “I am sure I can deliver evidence-based care” were positively and statistically significantly correlated with scores on each subscale of the EBPQ, but not with objectively measured EBP knowledge scores from the EKAN.
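The Pearson coefficients in the Table are standardized covariances, ranging from −1 to 1. A minimal pure-Python sketch of the computation (illustrative only):

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfect positive linear relationship
print(round(pearson_r([1, 2, 3], [2, 4, 6]), 2))  # → 1.0
```

By this yardstick, the EBPQ subscale intercorrelations (.350 to .595) are moderate to large, whereas the EBPQ–EKAN correlations (.017 to .123) are trivially small.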
To further examine the extent to which educational and self-reported EBP knowledge/skills, attitudes, and practice/use variables are associated with objectively measured EBP knowledge, predictor variables were entered into a two-step hierarchical multiple regression analysis. Educational level was entered in the first block because it was the only variable producing a statistically significant correlation with EKAN scores. In the second block, scores from each of the EBPQ subscales (knowledge/skills, attitudes, and practice/use) and the belief item (“I am sure I can deliver evidence-based care”) were entered. Regression analysis revealed that educational level was a statistically significant predictor of EKAN scores (F1,149 = 30.43, p < .001, R2 = .170). When EBPQ subscale scores and the belief item were added in step 2, the incremental improvement in R2 was nonsignificant (R2 change = .024, p = .377).
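In a hierarchical regression, the significance of the second block is evaluated with an F test on the change in R². A minimal sketch of that test, using values derived from those reported above (the full-model R² of .194 is implied by .170 plus the .024 increment; the resulting F is illustrative, as the study reports only p = .377 for the increment):

```python
def r2_change_F(r2_reduced, r2_full, n, k_reduced, k_full):
    """F statistic for the R-squared increment when predictors are
    added in the second step of a hierarchical regression."""
    num = (r2_full - r2_reduced) / (k_full - k_reduced)
    den = (1 - r2_full) / (n - k_full - 1)
    return num / den

# Step 1: education (1 predictor); step 2 adds 4 EBPQ predictors
print(round(r2_change_F(0.170, 0.170 + 0.024, 151, 1, 5), 2))  # → 1.08
```

On (4, 145) degrees of freedom, an F near 1.08 is nonsignificant, consistent with the reported p = .377.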
Nurses' scores from the self-report EBPQ knowledge/skills and practices/use subscales were similar to findings from other studies of nurses in the United States and Australia (Duff, Butler, Davies, Williams, & Carlile, 2014; White-Williams et al., 2013). Although self-reports have been widely used in evaluating the influence of EBP education (Duff et al., 2014; Upton et al., 2014; White-Williams et al., 2013), little is known about the relationship among nurses' educational level and EBP knowledge, the overall accuracy of self-assessments of knowledge and skills, or how self-assessments are influenced by education.
Studies of EBP in nursing commonly evaluate correlations between level of education and self-rated knowledge/skills, attitudes, and practice/use with the EBPQ (Upton et al., 2014; White-Williams et al., 2013). In the current study, educational level was not statistically significantly associated with EBPQ subscale scores, a finding also noted by Allen, Lubejko, Thompson, and Turner (2015). However, small, positive correlations were found between years of RN experience and EBPQ practice/use subscale scores, a finding that lacks consistency in the literature (Upton et al., 2014). In a recent integrative review, Saunders and Vehviläinen-Julkunen (2016) found widespread confusion among nurses about the meaning and interpretations of EBP concepts, components, and processes. This may contribute to nurses' perceptions of providing EBP, when, in reality, care is still more closely aligned with traditional practices and other care models (Saunders & Vehviläinen-Julkunen, 2016).
The mean EKAN sum scores were consistent with a previous study of baccalaureate nursing students (Spurlock & Wonder, 2015), and the observed range was expected given the broad range of educational levels represented in the current sample. Higher levels of education were associated with higher scores on the EKAN but not on the EBPQ. Additional research is needed to identify the underlying mechanism for this effect. Although prime consideration should be given to the impact that formal academic education programs, pedagogical strategies, and continuing education have on building EBP knowledge, other factors such as professional role expectations, practical experience with EBP implementation, organizational culture and support of EBP, and workload must also be investigated.
Findings from this study demonstrate that although measures of self-assessed competence or ability often correlate with each other, they may not correlate with objectively measured competence or ability. The lack of congruence between self-assessed and objectively measured knowledge and ability is consistent with evidence from outside the discipline, where the inaccuracy of self-rated ability is well documented, especially for high-complexity tasks such as EBP (Blanch-Hartigan, 2011; Davis et al., 2006; Lai & Teng, 2011; Zell & Krizan, 2014). Findings in medicine consistently show low, nonsignificant correlations between self-ratings and objective performance in areas such as appraisal ability (Lai & Teng, 2011) and knowledge (Blanch-Hartigan, 2011). These findings are consistent with those of Zell and Krizan (2014) in their large metasynthesis of 22 meta-analyses comparing objective measures with self-reported assessments in the fields of psychology, medicine, education, and sports science, which showed an overall correlation of r = .29 (SD = .11). Together, these findings lend support to the need for objective measurement of constructs such as knowledge.
In a systematic review of instruments to measure EBP knowledge, skills, and attitudes (Leung, Trevena, & Waters, 2014), the EBPQ was found to be the only instrument with sufficient validity evidence for use in practice; however, the researchers outlined the need to develop an objective instrument to measure knowledge and skills. The results of the current study should not be seen as a criticism of the EBPQ but instead as a demonstration of key limitations of using self-assessments for knowledge, competence, or ability. Because self-assessed knowledge and ability often correspond poorly to their objectively measured counterparts, self-reports should not be treated as surrogate indicators with equivalent validity. Recognizing the link between EBP and quality patient care (Hull et al., 2016; Kram et al., 2015), it is essential that nursing leaders and educators not rely on self-assessment when making decisions about professional development programming or curricula aimed at enhancing knowledge and ability.
Participants in the current study comprised a homogeneous sample recruited from two Magnet-designated, acute care hospital sites in a single hospital network. Although subgroup sizes for each educational level were sufficient for statistical testing, inferences should be made with caution given the unequal subgroup sizes. Finally, because the EKAN does not measure skill, it is important that level of EBP knowledge not be equated with the ability to skillfully implement evidence-based care.
Implications for Future Research
Knowledge assessment is the first step in enabling leaders and educators to design and test the effectiveness of specific educational interventions, as well as organizational structures (e.g., shared governance, transformational leadership, support for advanced degrees) and resources (e.g., mentoring, programming, access to professional literature), to reinforce strengths and address weaknesses in nurses' EBP knowledge. Freed from the inherent limitations of measuring knowledge using only self-report measures, an objective measure of EBP knowledge supported by strong validity and reliability evidence enables practice leaders and educators to accurately assess what nurses know about EBP. When objective measures are used to evaluate the effectiveness of educational programs, educators can be more confident they are measuring the central construct of interest—knowledge—and not constructs such as confidence or attitudes, which are important motivational factors, but may only be weakly related to knowledge. Further opportunities exist to study the durability of EBP knowledge (across educational levels, roles, settings, and years of experience) to determine the frequency and type of education needed to sustain EBP knowledge in practice over the course of nurses' careers.
Given the poor correspondence between self-reported and objectively measured EBP knowledge/skill, attitudes, and practice/use, the current study highlights the importance of instrument selection to accurately assess the EBP knowledge of RNs. Through ongoing evaluation with a valid, reliable, objective instrument, clinical leaders, educators, and researchers can more effectively develop and test interventions and structures to facilitate and sustain nurses' EBP knowledge. This approach, consistent with the ANCC (2013) Magnet model, will enable the use of evidence to achieve excellence in nursing practice.
- Allen, N., Lubejko, B.G., Thompson, J. & Turner, B.S. (2015). Evaluation of a web course to increase evidence-based practice knowledge among nurses. Clinical Journal of Oncology Nursing, 19, 623–627. doi:10.1188/15.CJON.623-627 [CrossRef]
- American Association of Colleges of Nursing. (2008). The essentials of baccalaureate education for professional nursing practice. Retrieved from http://www.aacn.nche.edu/education-resources/baccessentials08.pdf
- American Nurses Credentialing Center. (2013). 2014 Magnet application manual. Silver Spring, MD: Author.
- Blanch-Hartigan, D. (2011). Medical students' self-assessment of performance: Results from three meta-analyses. Patient Education and Counseling, 84, 3–9. doi:10.1016/j.pec.2010.06.037 [CrossRef]
- Connor, L., Dwyer, P. & Oliveira, J. (2016). Nurses' use of evidence-based practice in clinical practice after attending a formal evidence-based practice course: A quality improvement evaluation. Journal for Nurses in Professional Development, 32, E1–E7. doi:10.1097/NND.0000000000000229 [CrossRef]
- Cronenwett, L., Sherwood, G., Barnsteiner, J., Disch, J., Johnson, J., Mitchell, P. & Warren, J. (2007). Quality and safety education for nurses. Nursing Outlook, 55, 122–131. doi:10.1016/j.outlook.2007.02.006 [CrossRef]
- Davis, D.A., Mazmanian, P.E., Fordis, M., Van Harrison, R., Thorpe, K.E. & Perrier, L. (2006). Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA, 296, 1094–1102. doi:10.1001/jama.296.9.1094 [CrossRef]
- Duff, J., Butler, M., Davies, M., Williams, R. & Carlile, J. (2014). Perioperative nurses' knowledge, practice, attitude, and perceived barriers to evidence use: A multisite, cross-sectional survey. ACORN: Journal of Perioperative Nursing in Australia, 27(4), 28–35.
- Hull, B.L., Thut, C., Cheng, S.J., Kaufhold, D.M. & Brown, S.R. (2016). Changing the culture of a large multihospital acute care therapy system to value-added through best practice guidelines: A quality improvement project. Journal of Acute Care Physical Therapy, 7, 47–54. doi:10.1097/JAT.0000000000000025 [CrossRef]
- Kram, S.L., DiBartolo, M.C., Hinderer, K. & Jones, R.A. (2015). Implementation of the ABCDE Bundle to improve patient outcomes in the intensive care unit in a rural community hospital. Dimensions of Critical Care Nursing, 34, 250–258. doi:10.1097/DCC.0000000000000129 [CrossRef]
- Lai, N.M. & Teng, C.L. (2011). Self-perceived competence correlates poorly with objectively measured competence in evidence based medicine among medical students. BMC Medical Education, 11(25), 1–8. doi:10.1186/1472-6920-11-25 [CrossRef]
- Leung, K., Trevena, L. & Waters, D. (2014). Systematic review of instruments for measuring nurses' knowledge, skills, and attitudes for evidence-based practice. Journal of Advanced Nursing, 70, 2181–2195. doi:10.1111/jan.12454 [CrossRef]
- Melnyk, B.M., Fineout-Overholt, E., Gallagher-Ford, L. & Kaplan, L. (2012). The state of evidence-based practice in US nurses: Critical implications for nurse leaders and educators. Journal of Nursing Administration, 42, 410–417. doi:10.1097/NNA.0b013e3182664e0a [CrossRef]
- Melnyk, B.M., Fineout-Overholt, E. & Mays, M.Z. (2008). The Evidence-Based Practice Beliefs and Implementation scales: Psychometric properties of two new instruments. Worldviews on Evidence-Based Nursing, 5, 208–216. doi:10.1111/j.1741-6787.2008.00126.x [CrossRef]
- Melnyk, B.M., Gallagher-Ford, L., Long, L.E. & Fineout-Overholt, E. (2014). The establishment of evidence-based practice competencies for practicing registered nurses and advanced practice nurses in real-world clinical settings: Proficiencies to improve healthcare quality, reliability, patient outcomes, and costs. Worldviews on Evidence-Based Nursing, 11, 5–15. doi:10.1111/wvn.12021 [CrossRef]
- Meyer, J.P. (2014). Applied measurement with jMetrik. New York, NY: Routledge.
- Nagy, S., Lumby, J., McKinley, S. & Macfarlane, C. (2001). Nurses' beliefs about the conditions that hinder or support evidence-based nursing. International Journal of Nursing Practice, 7, 314–321. doi:10.1046/j.1440-172X.2001.00284.x [CrossRef]
- Quality and Safety Education for Nurses. (2007). QSEN competencies. Retrieved from http://qsen.org/competencies/pre-licensure-ksas/
- Saunders, H. & Vehviläinen-Julkunen, K. (2016). The state of readiness for evidence-based practice among nurses: An integrative review. International Journal of Nursing Studies, 56, 128–140. doi:10.1016/j.ijnurstu.2015.10.018 [CrossRef]
- Spurlock, D. & Wonder, A.H. (2015). Validity and reliability evidence for a new measure: The Evidence-Based Practice Knowledge Assessment in Nursing. Journal of Nursing Education, 54, 605–613. doi:10.3928/01484834-20151016-01 [CrossRef]
- Upton, D. & Upton, P. (2006). Development of an evidence-based practice questionnaire for nurses. Journal of Advanced Nursing, 53, 454–458. doi:10.1111/j.1365-2648.2006.03739.x [CrossRef]
- Upton, D., Upton, P. & Scurlock-Evans, L. (2014). The reach, transferability, and impact of the evidence-based practice questionnaire: A methodological and narrative literature review. Worldviews on Evidence-Based Nursing, 11, 46–54. doi:10.1111/wvn.12019 [CrossRef]
- White-Williams, C., Patrician, P., Fazeli, P., Degges, M.A., Graham, S., Andison, M. & McCaleb, K.A. (2013). Use, knowledge, and attitudes toward evidence-based practice among nursing staff. The Journal of Continuing Education in Nursing, 44, 246–254. doi:10.3928/00220124-20130402-38 [CrossRef]
- Zell, E. & Krizan, Z. (2014). Do people have insight into their abilities? A metasynthesis. Perspectives on Psychological Science, 9, 111–125. doi:10.1177/1745691613518075 [CrossRef]
Table. Means, Standard Deviations, and Correlations Among Variables (N = 151)

| Variable | M (SD) | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| 1. EBPQ practice/use | 4.48 (1.37) | – | | | | | | |
| 2. EBPQ attitudes | 5.51 (0.98) | .350** | – | | | | | |
| 3. EBPQ knowledge/skills | 4.68 (0.81) | .595** | .398** | – | | | | |
| 4. EKAN sum score | 10.58 (2.87) | .017 | .123 | .122 | – | | | |
| 5. Age | 40.88 (11.37) | .202* | .083 | .027 | −.030 | – | | |
| 6. Years of RN experience (⩾20 hours/week) | 14.55 (10.82) | .168* | .101 | .017 | .002 | .870** | – | |
| 7. "I am sure I can deliver evidence-based care." | 4.05 (0.67) | .294** | .228** | .413** | −.066 | .094 | .056 | – |

Note. * p < .05; ** p < .01.