The Journal of Continuing Education in Nursing

Original Article 

Improving Critical Thinking and Clinical Reasoning With a Continuing Education Course

Dina Monteiro Cruz, BSN, MSN, PhD; Cibele Mattos Pimenta, BSN, MSN, PhD; Margaret Lunney, RN, PhD

Abstract

Background:

Continuing education courses related to critical thinking and clinical reasoning are needed to improve the accuracy of diagnosis.

Method:

This study evaluated a 4-day, 16-hour continuing education course conducted in Brazil. Thirty-nine nurses completed a pretest and a posttest consisting of two written case studies designed to measure the accuracy of nurses’ diagnoses.

Results:

There were significant differences in accuracy from pretest to posttest for case 1 (p = .008), case 2 (p = .042), and overall (p = .001).

Conclusion:

Continuing education courses should be implemented to improve the accuracy of nurses’ diagnoses.

Dr. Cruz is Professor, Vice-Director, and Dr. Pimenta is Professor, Graduate Programs Coordinator, School of Nursing, University of São Paulo, Brazil. Dr. Lunney is Doctoral Faculty, Professor, and Graduate Programs Coordinator, College of Staten Island, Staten Island, New York.

The authors thank Dalete Delalibera Faria Motta and Geana Paula Kurita for their support with data collection.

This study was partially funded by a grant from the Fundação de Amparo à Pesquisa do Estado de São Paulo, which made the Visiting Professor Program possible at the School of Nursing, University of São Paulo.

Address correspondence to Dina M. Cruz, PhD, Universidade de São Paulo, School of Nursing, Av Dr Eneas de Carvalho Aguiar, 419, São Paulo, Brazil.

Critical thinking and clinical reasoning have become important nursing topics due to the complexity of clinical decisions in nursing and the risks of lower quality care when there are errors in decision making (Bucknall, 2003; Dowding & Thompson, 2004a, 2004b; Ebright, Patterson, Chalko, & Render, 2003; Ebright, Urden, Patterson, & Chalko, 2004; Hedberg & Larsson, 2004; Hendry & Walker, 2004; Higuchi & Donald, 2002; Huffman, Donoghue, & Duffield, 2004; Potter et al., 2005). Nurses’ diagnoses of clinical data to guide the selection of nursing interventions affect the quality of nursing care. A nursing diagnosis is an interpretation of human responses to health problems or life processes (North American Nursing Diagnosis Association International [NANDA-I], 2007). If nurses name their interpretations of patient data, the quality of nursing care will improve because such interpretations are the basis for selecting nursing interventions (Lunney, 2006). In a systematic review of 20 research studies conducted from 1966 to 2000 (Lunney, 2001), it was established that nurses’ interpretations of the same data vary widely, making accuracy a concern in this era of evidence-based nursing (Cruz, Pimenta, & Lunney, 2006; Levin, Lunney, & Krainovich-Miller, 2005).

In Brazil, there has been concern about the accuracy of nurses’ diagnoses since the 1980s, when diagnostic concepts from NANDA-I began to be used in nursing education, health care services, and research. Due to concern over accuracy, the University of São Paulo in São Paulo, Brazil, approved the implementation of a continuing education course on clinical reasoning conducted by a visiting professor from the United States. In conjunction with this course, an evaluation was conducted to determine the accuracy of participants’ diagnoses of human responses evident in written case studies specifically designed to measure accuracy. This article describes the findings of that evaluation.

Accuracy of Nurses’ Diagnoses

The accuracy of a nursing diagnosis is determined by a rater’s judgment about the degree to which a diagnostic statement matches the cues or data of a patient situation or simulation (Lunney, 2001). Accuracy is a concern even when nurses do not use nursing diagnoses or state their interpretations of patient data; in these situations, however, accuracy cannot be examined or questioned. Accuracy needs to be the foundation for selecting nursing interventions regardless of whether nursing diagnoses are used (Lunney, 2006). However, there is little evidence that nurses attend to the accuracy of their diagnoses in clinical practice. Reasons for a lack of attention to diagnostic accuracy include inadequate knowledge about the complexity of interpreting human responses, the existence of other priorities in health care settings (Lunney, 2001), and inadequate knowledge and use of standardized nursing languages.

Nursing diagnosis theorists have identified three main categories that affect accuracy: the nature of the diagnostic task, situational factors, and the diagnostician (Carnevali & Thomas, 1993; Gordon, 1994). This study addresses the intellectual aspects of the diagnostician. Three types of intellectual factors were shown to influence accuracy: level of education, use of teaching aids, and cognitive abilities and strategies.

Previous studies showed that, in general, higher levels of education are related to significantly higher levels of accuracy (Lunney, 2001). However, the most significant differences in accuracy occur in relation to education specifically focused on nursing diagnosis and the diagnostic process (Lunney, 2001). Cognitive abilities are also related to the accuracy of nurses’ diagnoses, and such abilities differ widely among nurses. For example, in a study using three basic tests of divergent productive thinking, Lunney (1992b) found wide differences in nurses’ abilities regarding fluency, flexibility, and elaboration. In general, nurses’ scores on these three types of thinking were lower than those of other groups (e.g., college students). The significance of these abilities was demonstrated by the findings of that study: the accuracy of diagnosing three written case studies was positively related to fluency (p < .002) and flexibility (p < .03) for case 2 and positively related to elaboration for cases 2 (p = .03) and 3 (p = .03). The number of questions nurses ask patients to obtain data and the number of possible hypotheses generated to explain patient data may be related to these basic thinking abilities rather than to education and experience in nursing. The existence of wide variations in the cognitive abilities of nurses is consistent with the finding from educational psychology that adults with similar educational backgrounds vary widely in cognitive abilities (Sternberg, 1997). However, cognitive abilities can be developed through instruction, practice, and effort (Sternberg, 1997).

Studies are needed to demonstrate the importance of continuing education coursework related to critical thinking and clinical reasoning. Every study of accuracy since 1966 showed that the accuracy of nurses’ diagnoses varies widely (Lunney, 2001) due to the complexity of diagnosing human responses, environmental factors that interfere with nurses’ thinking (Ebright et al., 2003, 2004), and the lack of attention to nurses’ accuracy (Lunney, 2001, 2006).

Conceptual Framework and Hypothesis

The critical thinking concepts identified by Scheffer and Rubenfeld (2000) in a Delphi study of nurse experts in critical thinking were used as the foundation of this course on critical thinking and clinical reasoning. The course focused on helping nurses develop the 7 cognitive skills and 10 habits of mind that were considered important for nursing practice by nurse experts (Scheffer & Rubenfeld, 2000). Cognitive skills include critical thinking abilities such as seeking information, discriminating, and analyzing. Habits of mind refer to the affective aspects of critical thinking, or traits such as perseverance, flexibility, contextual perspective, and confidence developed over time (Scheffer & Rubenfeld, 2000). Accurate interpretation of human responses is influenced by both cognitive skills and habits of mind. This continuing education program provided content and processes to help nurses think about how they use these critical thinking concepts when they interpret human responses. It was hypothesized that diagnostic accuracy would be significantly higher after the course.

Method

This evaluation was conducted at the Nursing School of the University of São Paulo in March 2003. The study was approved by the Ethics in Research Committee of the school. The course, guided by 5 objectives and consisting of 16 hours of content, was conducted in English with consecutive translation to Portuguese. Course objectives were to explain why nurses should improve critical thinking; explain critical thinking; describe research findings related to critical thinking and clinical reasoning; apply critical thinking concepts with written case studies; and educate nurses regarding critical thinking and clinical reasoning. The course content focused on why using critical thinking is important for clinical reasoning (1 hour); the who, what, where, when, why, and how of critical thinking (7 hours); research findings related to the accuracy of nurses’ data interpretations (3 hours); case study analyses using critical thinking concepts (3 hours); and guidelines for self-development and teaching (2 hours).

Sample

Course participants were recruited through advertisements sent to hospital nursing departments and postgraduate nursing programs in São Paulo. The course was intended for nurses at the baccalaureate level or higher, including nurses at the master’s and doctoral levels who, after taking the course, could teach the content to other nurses. Sixty nurses enrolled in the program. Forty-six nurses were invited to participate in the study; 7 of the 46 did not attend the entire 16-hour program, so their pretest data were not used. Data from the 39 participants who attended the 16-hour program and completed both the pretest and the posttest were used for this study. All participants provided written informed consent.

Instruments

Two case studies related to medical-surgical nursing were used for the pretest and the posttest, each with an associated scoring manual. Only these two case studies were used because their validity and reliability had been established for the measurement of accuracy. They had been validated by four nationally recognized experts in nursing diagnosis and medical-surgical nursing (Lunney, 1992b). The case studies were translated to Portuguese by the investigators and reviewed by a professional Brazilian translator. Four Brazilian adult health nursing faculty members judged the case studies as describing situations usually encountered in medical-surgical units in Brazil. The only change was using Brazilian names for the patients.

Each case study consisted of 14 sentences on one page. The first part of the diagnosis was stated, and the contributing factor was left blank. In the directions, participants were asked to identify the contributing factor that was best supported by data in the case study. The cases were designed to include at least four highly relevant cues for the highest accuracy diagnosis; moderately relevant cues consistent with the highest accuracy diagnosis; a few cues for competing diagnoses; and two disconfirming cues for diagnoses of low accuracy. In a previous study, the interrater reliability of two raters’ scores of subjects’ answers to these cases was 0.96 (Lunney, 1992b). The high interrater reliability was achieved through the use of detailed scoring manuals, with scores for each case validated by experts, based on the Lunney scoring method (Lunney, 2001).

The Lunney scoring method provides a 7-point scale, ranging from +5 (highest accuracy) to −1 (lowest accuracy), to rank the match between presenting cues and diagnostic statements (Table 1). This method has been used to measure accuracy for case studies and actual clinical cases in the United States and other countries (Lunney, 1992b; Lunney, Karlik, Kiss, & Murphy, 1997; Spies, Myers, & Pinnell, 1994).
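
For illustration only, the following Python sketch encodes the −1 to +5 range of the scale and computes a Pearson correlation between two hypothetical raters’ scores. The rater data are invented, and the article does not state which statistic produced the 0.96 interrater reliability figure, so the use of a correlation here is an assumption for demonstration, not the original study’s procedure.

from scipy.stats import pearsonr

# Valid values on the Lunney accuracy scale (Table 1): -1 (lowest) to +5 (highest).
ACCURACY_SCALE = {-1, 0, 1, 2, 3, 4, 5}

def validate_scores(scores):
    """Raise an error if any score falls outside the published scale."""
    out_of_range = [s for s in scores if s not in ACCURACY_SCALE]
    if out_of_range:
        raise ValueError(f"Scores outside the -1 to +5 scale: {out_of_range}")
    return scores

# Hypothetical scores assigned by two raters to the same ten written responses
# (invented data, not from the study).
rater_a = validate_scores([5, 4, 3, 3, 5, 2, 4, 1, 3, 5])
rater_b = validate_scores([5, 4, 3, 2, 5, 2, 4, 1, 3, 4])

# One common way to summarize agreement between two raters' ordinal scores.
r, _ = pearsonr(rater_a, rater_b)
print(f"Illustrative interrater correlation: r = {r:.2f}")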

Table 1: Scale for Degrees of Accuracy

Each participant completed the two case studies in a 30-minute period before the program. Administration of the case studies was followed by 16 hours of teaching, over a 4-day period, on content unrelated to the case studies, which reduced the possibility of participant interactions about the cases. The same two case studies were administered 4 days later, at the end of the program.

A demographics form with items regarding experience with nursing diagnosis and confidence in accuracy of interpreting human responses was completed at the pretest. In this context, confidence in accuracy is the assurance of one’s abilities to accurately diagnose clinical cases.

Data Analysis

Descriptive statistics were used to describe the study sample and the accuracy scores of data interpretations. The Wilcoxon matched-pairs signed-rank test, appropriate for ordinal data, was used to determine differences between the pretest and the posttest accuracy scores. An alpha value of 0.05 was used for all statistical tests.
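
A minimal Python sketch of this analysis is shown below, using scipy.stats.wilcoxon on invented paired scores; the array values are assumptions for illustration and are not the study data, which are summarized in Tables 2 and 3.

import numpy as np
from scipy.stats import wilcoxon

ALPHA = 0.05

# Invented pretest/posttest accuracy scores (Lunney scale, -1 to +5); each
# position is the same nurse measured before and after the course.
pretest = np.array([3, 2, 5, 1, 3, 4, 0, 3, 2, 5, 3, 4])
posttest = np.array([4, 3, 5, 3, 3, 5, 2, 3, 3, 5, 4, 4])

# Wilcoxon matched-pairs signed-rank test for paired ordinal data.
stat, p = wilcoxon(pretest, posttest)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
print(f"Significant at alpha = {ALPHA}: {p < ALPHA}")

# Descriptive summary comparable to Table 3.
print("Pretest median:", np.median(pretest), "| Posttest median:", np.median(posttest))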

Results

Twenty-three (59%) of the participants had master’s- or doctoral-level education. The course attracted many participants (n = 22; 56%) who were leaders in nursing management, education, or research. Nurses in this study had experience with nursing diagnoses during their baccalaureate education (n = 13; 33.3%) or continuing education programs (n = 19; 48.7%). Many participants had additional experiences with nursing diagnoses related to research (35.9%), teaching (33.3%), or clinical practice (28.2%). On the demographics form, the participants were asked to rate their confidence in their ability to achieve accurate nursing diagnoses. A majority (n = 28; 71.8%) stated that their confidence was moderate, high, or very high, defined as 55% to 100% confident.

The hypothesis that a 4-day course consisting of 16 hours of content with discussion of critical thinking and clinical reasoning would positively influence participants’ diagnostic accuracy scores on written case studies was supported (Tables 2 and 3). Results of the Wilcoxon test demonstrated significant differences between pretest and posttest accuracy scores for case studies 1 (z = −2.63, p = .008) and 2 (z = −2.04, p = .042) and overall (z = −3.34, p = .001) (Table 3). On case study 1, 8 (20.5%) of the participants’ posttest scores were at least 1 point higher than their pretest scores; none scored lower, and 31 (79.5%) had no change. On case study 2, 14 (35.9%) of the participants scored higher on the posttest; 6 (15.4%) participants scored lower, and 19 (48.7%) had no change (Table 2). The accuracy scores of 20.5% of the participants improved on case study 1, and 38.5% improved on case study 2. High percentages of nurses (case study 1 = 64.1%; case study 2 = 71.8%) did not achieve the highest accuracy score on posttest.
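
The higher, lower, and no-change tallies reported above can be derived from paired scores as in the short Python sketch below; the paired scores shown are invented for illustration and are not the study’s data.

def change_summary(pretest, posttest):
    """Count participants whose posttest score rose, fell, or stayed the same."""
    n = len(pretest)
    higher = sum(post > pre for pre, post in zip(pretest, posttest))
    lower = sum(post < pre for pre, post in zip(pretest, posttest))
    unchanged = n - higher - lower
    return {
        "higher": (higher, round(100 * higher / n, 1)),
        "lower": (lower, round(100 * lower / n, 1)),
        "no change": (unchanged, round(100 * unchanged / n, 1)),
    }

# Invented paired scores (Lunney scale) for eight participants on one case study.
pretest = [3, 2, 5, 1, 3, 4, 0, 3]
posttest = [4, 3, 5, 3, 3, 5, 2, 3]
print(change_summary(pretest, posttest))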

Table 2: Nurses’ Diagnostic Accuracy Scores on the Two Case Studies (n = 39)

Table 3: Results of the Wilcoxon Test for Differences Between the Pretest and the Posttest Accuracy Scores

A majority of the participants’ scores were high on the pretest, with scores indicating high (5), good (4), or adequate (3) accuracy for 25 participants (64.1%) on case study 1 and 32 participants (82%) on case study 2 (Table 2). On the posttest, these percentages were higher: 29 participants (74.4%) on case study 1 and 36 participants (92.3%) on case study 2 (Table 2).

The median scores for case study 2 were higher than those for case study 1 (Table 2). The Wilcoxon test comparing scores for case studies 1 and 2 showed no statistically significant difference on the pretest (z = −1.92, p = .055) but a statistically significant difference on the posttest (z = −2.22, p = .027).

Discussion

This continuing education course for nurses from many local agencies attracted a high percentage of nurse leaders with master’s and doctoral degrees, many of whom had high confidence in their abilities to accurately diagnose clinical cases. In previous studies, nursing diagnosis education and experience were shown to be positively related to accuracy (Lunney, 2001).

Accuracy studies have been conducted with students or with practicing nurses with baccalaureate degrees and 5 years of experience (e.g., Lunney, 1992b; Spies et al., 1994). Although there was no comparison group, the participants in this study represented a group of nurses with high interest in nursing diagnoses. Similar to previous studies using the Lunney scoring method, the scores of this sample for each case study varied across six or seven levels of the accuracy scale, supporting the conceptualization of accuracy as a continuous variable and the use of the Lunney scoring method for measuring accuracy.

Support for the hypothesis that participation in a 4-day, 16-hour continuing education course on critical thinking and clinical reasoning would improve the accuracy of participants’ diagnoses of these two case studies demonstrates the value of such courses. The finding that the accuracy scores of 20.5% of the participants improved on case study 1 and 38.5% improved on case study 2 is satisfactory, considering that the focus of the course was critical thinking and clinical reasoning for practice situations and the specific content of these cases was not discussed in the course. It is possible that participants remembered the cases from the pretest; however, because there were no discussions of these cases or anything similar to these cases throughout the course, it is not likely that memory of the cases generated higher scores on the posttest.

A positive aspect of the findings was that, on the posttest, four fewer participants scored below 3 (Table 2). Scores below 3 indicated that the participants did not integrate the highly relevant cues to identify the important focus of the case studies. One explanation for this is lack of knowledge about the inherent concepts represented in the case studies. In Lunney’s (1992b) study, using the same two cases, 113 nurses were tested. However, the knowledge scores of 27 nurses regarding concepts in the case studies were too low for them to be included in the study. Knowledge is a necessary but insufficient condition to achieve accuracy of nurses’ diagnoses (Gordon, 1994).

Possible reasons why the accuracy scores of some participants did not improve are that they may have been overconfident or had not yet integrated the course content. Overconfidence was identified by Thompson (2003) as a heuristic, or pattern of thinking developed with experience, that can lead to errors in thinking. Also, in studies regarding distinguishing acute confusion from dementia, preset approaches to nursing care (McCarthy, 2003a, 2003b) were shown to interfere with the accuracy of clinical judgments. Heuristics such as overconfidence and hindsight are mental shortcuts that occur with experience. Thompson provided strategies to combat the negative effects of these heuristics.

Course content was focused on the need to adopt critical thinking processes in clinical practice; however, these recommendations may not have been adopted for the purpose of diagnosing these cases. More time may be needed for measurable changes in critical thinking and accuracy to occur; for example, the habits of mind take time to develop.

With case study simulations specifically designed to measure accuracy, the educational goal was that all nurses would master the content and achieve scores of +5 because the case study data clearly supported the highest accuracy diagnoses and not competing diagnoses. Despite the improved accuracy demonstrated in this study, a high percentage of nurses (case study 1 = 64.1%; case study 2 = 71.8%) did not achieve this goal. One explanation is that the course may not have been specific enough regarding how to apply the critical thinking knowledge, as shown by Rubenfeld and Scheffer (2006). Also, perhaps the guidelines for self-development and teaching, the last content presented, should be presented prior to case study analyses so that application of the guidelines can be reinforced.

Two additional explanations for less than 100% mastery are cognitive errors such as prematurely deciding on a diagnosis without integrating the relevant patient data (Carnevali & Thomas, 1993; Gordon, 1994) or biases based on preset approaches to care (Ebright et al., 2003, 2004; McCarthy, 2003a, 2003b; Thompson, 2003). Premature decision making is a diagnostic error made in medicine and nursing (Carnevali & Thomas; Gordon). This type of error should be discussed in courses on critical thinking and demonstrated in practice cases.

Another possible explanation for less than 100% mastery is that nurses may not be as motivated to achieve accuracy with written cases as they would be with real patients. Caring about the welfare of patients and the need for accurate diagnoses to guide clinical interventions may be a significant factor in the achievement of accuracy.

Conclusions and Implications

With the high complexity of clinical situations in nursing, and the implications of low accuracy for the quality of nursing care, continuing education courses are needed to assist nurses in attaining the associated knowledge and learning how to apply this knowledge to clinical cases. Lunney (2006) described a broad range of content areas that are needed to teach use of standardized nursing languages, including nursing diagnoses. The content areas that work best to achieve the goal of high accuracy have yet to be established through research.

Courses on critical thinking and clinical reasoning should include specific strategies for application of knowledge and opportunities to use cognitive strategies with clinical simulations. Specific strategies for application of critical thinking include helping participants to identify many possible diagnoses from case study data, not just the easiest diagnosis; recognize differences between high-relevance cues and low-relevance cues; and analyze the relative strengths of cues to diagnoses (Carlson-Catalano, 2001). For nurses with limited knowledge and experience regarding nursing diagnoses, additional content related to the diagnostic concepts and their meanings can be integrated with critical thinking and clinical reasoning.

The use of case simulations, written or computer-based, is a good way to test the effects of continuing education coursework on the accuracy of nurses’ diagnoses because all participants have the same available data (Lunney, 1992a). Simulations need to be carefully developed to achieve specific goals. Content validity should be estimated with content experts and reliability estimated with a pilot group that is similar to the study population (Waltz, Strickland, & Lenz, 2005). Further studies are needed to establish the type of continuing education coursework that best achieves the goal of helping nurses at all levels of expertise be more accurate. Also, a wide variety of valid and reliable case study simulations need to be developed as measures of accuracy.

References

  • Bucknall, T. 2003. The clinical landscape of critical care: Nurses’ decision-making. Journal of Advanced Nursing, 43, 310–319. doi:10.1046/j.1365-2648.2003.02714.x [CrossRef]
  • Carlson-Catalano, J. 2001. Teaching method for diagnostic skill development. In M. Lunney (Ed.), Critical thinking and nursing diagnosis: Case studies and analyses. (pp. 44–65). Philadelphia: North American Nursing Diagnosis Association International.
  • Carnevali, DL & Thomas, MD. 1993. Diagnostic reasoning and treatment decision making in nursing. Philadelphia: J. B. Lippincott.
  • Cruz, DALM, Pimenta, CAM & Lunney, M. 2006. Teaching how to make accurate nurses’ diagnoses using an EBP model. In R. F. Levin & H. R. Feldman (Eds.), Teaching and learning evidenced-based practice in nursing: A guide for educators. (pp. 229–246). New York: Springer.
  • Dowding, D & Thompson, C. 2004a. Using decision trees to aid decision making in nursing. Nursing Times, 100(21), 36–39.
  • Dowding, D & Thompson, C. 2004b. Using judgment to improve accuracy in decision-making. Nursing Times, 100(22), 42–44.
  • Ebright, P, Patterson, E, Chalko, B & Render, M. 2003. Understanding the complexity of registered nurse work in acute care settings. Journal of Nursing Administration, 33, 630–638. doi:10.1097/00005110-200312000-00004 [CrossRef]
  • Ebright, PR, Urden, L, Patterson, E & Chalko, B. 2004. Themes surrounding novice nurse near-miss and adverse event situations. Journal of Nursing Administration, 34(11), 531–538. doi:10.1097/00005110-200411000-00010 [CrossRef]
  • Gordon, M. 1994. Nursing diagnosis: Process and application. St. Louis, MO: Mosby.
  • Hedberg, B & Larsson, US. 2004. Environmental elements affecting the decision-making process in nursing practice. Journal of Clinical Nursing, 13, 316–324. doi:10.1046/j.1365-2702.2003.00879.x [CrossRef]
  • Hendry, C & Walker, A. 2004. Priority setting in clinical nursing practice: Literature review. Journal of Advanced Nursing, 47(4), 427–436. doi:10.1111/j.1365-2648.2004.03120.x [CrossRef]
  • Higuchi, KAS & Donald, JG. 2002. Thinking processes used by nurses in clinical decision making. Journal of Nursing Education, 41(4), 145–153.
  • Huffman, K, Donoghue, J & Duffield, C. 2004. Decision-making in clinical nursing: Investigating contributing factors. Journal of Advanced Nursing, 45(1), 53–62. doi:10.1046/j.1365-2648.2003.02860.x [CrossRef]
  • Levin, R, Lunney, M & Krainovich-Miller, B. 2005. Improving diagnostic accuracy using an evidenced-based nursing model. International Journal of Nursing Terminologies and Classifications, 15(4), 114–122. doi:10.1111/j.1744-618X.2004.tb00008.x [CrossRef]
  • Lunney, M. 1992a. Development of written case studies as simulations of diagnosis in nursing. Nursing Diagnosis, 3(1), 23–29.
  • Lunney, M. 1992b. Divergent productive thinking factors and accuracy of nursing diagnoses. Research in Nursing & Health, 15(4), 303–311. doi:10.1002/nur.4770150409 [CrossRef]
  • Lunney, M. 2001. Critical thinking and nursing diagnosis: Case studies and analyses. Philadelphia: North American Nursing Diagnosis Association International.
  • Lunney, M. 2006. Helping nurses use NANDA, NOC, and NIC: Novice to expert. Nurse Educator, 31(1), 40–46. doi:10.1097/00006223-200601000-00011 [CrossRef]
  • Lunney, M, Karlik, B, Kiss, M & Murphy, P. 1997. Accuracy of nurses’ diagnoses of psychosocial responses. Nursing Diagnosis, 8(4), 157–166.
  • McCarthy, MC. 2003a. Detecting acute confusion in older adults: Comparing clinical reasoning of nurses working in acute, long term and community health care environments. Research in Nursing & Health, 26, 203–212. doi:10.1002/nur.10081 [CrossRef]
  • McCarthy, MC. 2003b. Situated clinical reasoning: Distinguishing acute confusion from dementia in hospitalized older adults. Research in Nursing & Health, 26, 90–101. doi:10.1002/nur.10079 [CrossRef]
  • North American Nursing Diagnosis Association International. 2007. Nursing diagnosis: Definitions and classification, 2007–2008. Philadelphia: Author.
  • Potter, P, Wolf, L, Boxerman, S, Grayson, D, Sledge, J, Dunagan, C, et al. 2005. Understanding the cognitive work of nursing in the acute care environment. Journal of Nursing Administration, 35, 327–335. doi:10.1097/00005110-200507000-00004 [CrossRef]
  • Rubenfeld, MG & Scheffer, BK. 2006. Critical thinking TACTICS for nurses. Boston: Jones & Bartlett.
  • Scheffer, BK & Rubenfeld, MG. 2000. A consensus statement on critical thinking. Journal of Nursing Education, 39, 352–359.
  • Spies, MA, Myers, JL & Pinnell, N. 1994. Measurement of diagnostic ability of nurses using the Lunney scoring method for rating accuracy of nursing diagnosis. In R. M. Carroll-Johnson & M. Paquette (Eds.), Classification of nursing diagnoses: Proceedings of the tenth conference. (pp. 352–353). Philadelphia: J. B. Lippincott.
  • Sternberg, RJ. 1997. Successful intelligence: How practical and creative intelligence determine success in life. New York: Plume Books.
  • Thompson, C. 2003. Clinical experience as evidence in evidence-based practice. Journal of Advanced Nursing, 43, 230–237. doi:10.1046/j.1365-2648.2003.02705.x [CrossRef]
  • Waltz, CF, Strickland, OL & Lenz, ER. 2005. Measurement in nursing and health research (3rd ed.). New York: Springer.

Table 1: Scale for Degrees of Accuracy

Score^a | Criteria
+5 | Diagnosis is consistent with all of the cues, supported by highly relevant cues, and precise.
+4 | Diagnosis is consistent with most or all of the cues and supported by relevant cues, but fails to reflect one or a few highly relevant cues.
+3 | Diagnosis is consistent with many of the cues, but fails to reflect the specificity of available cues.
+2 | Diagnosis is indicated by some of the cues, but there are insufficient cues relevant to the diagnosis or the diagnosis is a lower priority than other diagnoses.
+1 | Diagnosis is suggested by only one or a few cues.
0 | Diagnosis is not indicated by any of the cues. No diagnosis is stated when there are sufficient cues to state a diagnosis. The diagnosis cannot be rated.
−1 | Diagnosis is indicated by more than one cue, but should be rejected based on the presence of at least two disconfirming cues.

Table 2: Nurses’ Diagnostic Accuracy Scores on the Two Case Studies (n = 39)

Score^a | Case Study 1 Pretest: n (%), cum. % | Case Study 1 Posttest: n (%), cum. % | Case Study 2 Pretest: n (%), cum. % | Case Study 2 Posttest: n (%), cum. %
+5 | 9 (23.0), 23.1 | 14 (35.9), 35.9 | 5 (12.8), 12.8 | 11 (28.2), 28.2
+4 | 1 (2.6), 25.7 | - (-), 35.9 | 21 (53.9), 66.6 | 18 (46.2), 74.6
+3 | 15 (38.5), 64.2 | 15 (38.5), 74.4 | 6 (15.4), 82.0 | 7 (17.9), 92.5
+2 | 6 (15.4), 79.6 | 6 (15.4), 89.8 | 3 (7.7), 89.7 | 3 (7.7), 100
+1 | 6 (15.4), 94.0 | 2 (5.0), 94.9 | 2 (5.1), 94.8 | -
0 | 1 (2.6), 97.6 | 1 (2.6), 97.5 | 2 (5.1), 100 | -
−1 | 1 (2.6), 100 | 1 (2.6), 100 | - | -

Table 3: Results of the Wilcoxon Test for Differences Between the Pretest and the Posttest Accuracy Scores

 | Pretest Range | Pretest Median | Posttest Range | Posttest Median | p
Case study 1 | −1 to +5 | 3.0 | −1 to +5 | 3.0 | .008
Case study 2 | 0 to +5 | 4.0 | +2 to +5 | 4.0 | .042
Average^a | +1 to +5 | 3.0 | +1.5 to +5 | 4.0 | .001

Continuing Education

Cruz, DM, Pimenta, CM & Lunney, M. (2009). Improving Critical Thinking and Clinical Reasoning With a Continuing Education Course. The Journal of Continuing Education in Nursing, 40(3), 121–127.

  1. The accuracy of nurses’ diagnoses varies widely due to many factors.

  2. Educators can provide coursework, especially courses on critical thinking in the context of clinical reasoning, to improve accuracy.

  3. A continuing education course on critical thinking can improve cognitive abilities and thereby positively affect the accuracy of diagnoses.

doi:10.3928/00220124-20090301-05
