Journal of Nursing Education


Use of Iliad to Improve Diagnostic Performance of Nurse Practitioner Students

Linda L Lange, RN, EdD; Sandra W Haak, RN, PhD; Michael J Lincoln, MD; Cheryl Bagley Thompson, RN, PhD; Charles W Turner, PhD; Charlene Weir, RN, PhD; Victoria Foerster, MD; David Nilasena, MD; Roger Reeves, RN, MS

ABSTRACT

Nurse practitioners (NPs) have dual goals as primary care providers, combining the traditional goals of nursing with extended goals as diagnosticians. Diagnostic reasoning, therefore, is a critical component of NP education. Iliad, a computerized diagnostic reasoning expert system, has been used effectively to teach diagnostic skills to medical students. A pilot study was undertaken to determine the effects of Iliad training on NP students' diagnostic skill performance and to identify technical and instructional issues of implementation. The study found that the use of Iliad improved NP students' diagnostic reasoning, and that the training effects were modified by prior nursing experience. Successful use of Iliad required planning, faculty commitment, and technical support.

INTRODUCTION

Nurse practitioners (NPs) have multiple roles in the provision of primary care services. They retain their unique nursing roles, and they also must develop enhanced medical diagnostic skills (Fowkes & Hunn, 1973; Shuler & Davis, 1993). Previous nursing experience provides role preparation for the nursing component of NP practice, but may not prepare students to function as diagnosticians. Consequently, developing skill in diagnostic reasoning is an important goal of graduate education for NPs.

Many NP programs rely on an apprenticeship model to teach diagnostic reasoning skills. Teaching strategies include clinical experiences with patients, case presentations and discussions, and expert feedback on diagnostic performance. These methods may have significant limitations. Depth and breadth of case exposure are critical to building domain-specific knowledge (Elstein, Shulman, & Sprafka, 1990), but customary teaching methods limit the number, type, and generalizability of case exposure. In addition, these methods may not allow for immediate, specific feedback from expert clinicians. Most important, it is not clear that they directly address the predominant challenge of NP education: enabling students to expand the traditional nursing goal of assessing and treating human response to illness, to include the additional goal of reaching a disease-focused diagnostic decision. This transition presents unique learning issues not faced by medical students (Shuler & Davis, 1993).

In this article, we describe a trial in which a computerized expert system called "Iliad" (Warner et al., 1988) was used to teach diagnostic reasoning to NP students. Expert systems have not previously been used as a teaching method in NP programs, although they have been used and validated in medical schools. The theoretical and practical justifications for such a teaching strategy are discussed, and the results of a pilot implementation study are presented. Emphasis is placed on: a) evaluating the separate contributions of nursing experience and Iliad training to several measures of diagnostic performance, b) exploring the implications of similarities and differences between nurse and physician decision-making, and c) providing a test of the practicality of using Iliad with NP students.

The Nurse Practitioner as Diagnostician

As health care continues to change in the United States, patients are more likely to enter the health care system through primary care providers. NPs are well-accepted as providers to rural and under-served populations, and increasingly are enlisted as mainstream primary care providers (Fenton & Brykczynski, 1993). Primary health care NPs are responsible for diagnosis, management, and appropriate referral of "all health problems encountered by the client" (ANA, 1985, p. 6). NPs are prepared to identify variations from normal and to diagnose common diseases, as well as to diagnose human responses to actual and potential health problems (ANA, 1987).

NPs have dual goals in providing primary care, in that they must diagnose and treat the disease itself, as well as diagnose and treat the patient's responses to illness (Shuler & Davis, 1993). In a study of the clinical judgment of NPs, Brykczynski (1989) described the domain, Management of Patient Health/Illness Status in Ambulatory Care Settings. Discrete areas of practice within the domain were: assessing, monitoring, coordinating, and managing the health status of patients over time; detecting acute and chronic diseases while attending to the experience of illness; and selecting appropriate diagnostic and therapeutic interventions with attention to safety, cost, invasiveness, simplicity, acceptability, and efficacy. NPs have "expanded the boundary of nursing practice, first through extension of traditional medical services and then through definition of these and other services as nursing care" (ANA, 1985, p. 5).

Although the importance of diagnostic reasoning skill in NP practice has been clearly articulated, the published research about teaching diagnostic reasoning to nurses provides little guidance to educators. In a review of research regarding teaching clinical judgment in nursing, Tanner (1987) described only two studies that focused on methods to improve diagnostic accuracy; neither study employed NPs or NP students as subjects. Both studies focused on teaching information processing strategies, rather than on improving domain-specific knowledge. No generalizable improvement in clinical judgment was found to result from the teaching interventions that were studied. Tanner reasoned that the teaching interventions were usually too brief to have an influence on clinical judgment, and that the studies were limited by the lack of a valid measure of clinical judgment performance.

Diagnostic Reasoning

Diagnostic reasoning is an integral component of the clinical expertise of NPs, just as it is for physicians. Research in diagnostic reasoning and problem-solving suggests two opposing theories of clinical expertise. The first proposes that experts perform better than novices because they have learned to adopt better problem-solving strategies. The alternative theory suggests that experts perform better mainly because they have more domain-specific knowledge. The current evidence in the literature supports the latter position, that clinical expertise is largely a function of domain-specific knowledge. For example, several researchers have found that the diagnostic strategies employed by experts and novices are strikingly similar (Goran, Williamson, & Gonnella, 1973; Newble, Hoare, & Baxter, 1982; Norman, Tugwell, Feightner, Muzzin, & Jacoby, 1985; Williamson, 1965; Wingard & Williamson, 1973). In addition, Elstein and colleagues (1990) have repeatedly demonstrated that the major factor correlated with clinical problem-solving competence is greater knowledge in the domain, rather than better problem-solving strategies. Norman and colleagues (Norman, Neufeld, Walsh, Woodward, & McConvey, 1985; Norman, Tugwell, Feightner, Muzzin, & Jacoby, 1985) have shown that problem-solving expertise generalizes somewhat within a domain of medical knowledge (e.g., rheumatology), but does not necessarily generalize to other domains (e.g., pulmonary or cardiology). Norman's work suggests that experts do not possess any innate or learned advantage in problem-solving techniques. Rather, experts solve problems better because they know more in their domains than novices do (Norman, Tugwell, Feightner, Muzzin, & Jacoby, 1985).

Studies about clinical decision-making among nurses (Benner, 1984; Tanner, Padrick, Westfall, & Putzier, 1987) and NPs support the domain-specific theory of expertise. For example, White, Nativio, Robert, and Engberg (1992) used a computerized interactive video program to present a simulated patient with Trichomonas vaginalis vaginitis to three groups of NPs: obstetric-gynecologic (OB/GYN) NPs, experienced Family NPs (FNPs), and inexperienced FNPs. The OB/GYN NPs, who had more expertise in the problem domain, were more likely to develop appropriate diagnostic hypotheses, while experienced FNPs and inexperienced FNPs collected data that were not hypothesis-driven. OB/GYN NPs asked fewer questions than did the FNPs, indicating that efficiency in clustering information was related to domain expertise. The study supported the importance of domain knowledge in diagnostic reasoning.

Diagnostic Errors

Evidence that NPs may have information processing problems that lead to diagnostic errors was reported by Rosenthal and colleagues (1992), who studied NPs' judgments of Chlamydia infection in 492 patients receiving primary gynecologic care. They found that NPs made diagnostic errors stemming from inconsistent use of clinical cues and collection of cues that were unrelated to the diagnostic problem.

Diagnostic errors may occur among both experts and novices as a result of inadequate information processing. In Kassirer and Kopelman's important work (1989), diagnostic errors were classified into four major types: a) faulty hypothesis triggering, b) faulty context formulation, c) faulty information gathering and processing, and d) faulty verification of diagnoses. Faulty hypothesis triggering can occur when the clinician either fails to consider appropriate initial hypotheses or fails to revise hypotheses to reflect new information. Faulty context formulation can occur when a clinician has competing goals for a patient encounter. For instance, a clinician could overlook a patient's health promotion needs when the patient presents with an acute and difficult-to-diagnose problem. Faulty gathering and processing of information can occur when clinicians either fail to order appropriate tests or misinterpret the predictive value of findings or test results. Finally, faulty verification may occur when clinicians fail to collect enough evidence to confirm adequately a diagnosis or to rule out competing diagnoses.

Diagnostic errors also can result because the information processing capacities of both experts and novices are limited. Humans have significant limitations in their ability to hold multiple concepts in active awareness. Even in simple medical cases, the range of facts to be considered is quite large compared to the number of items (typically five to seven) humans can simultaneously think about (Miller, 1956). To overcome inherent cognitive limitations, expert clinicians adopt a strategy of organizing large numbers of facts into smaller numbers of clusters, and they use automatic, well-learned information management processes (Bower & Gluck, 1984). In diagnostic reasoning, clusters may take the form of causal models based on pathophysiologic relationships. Although such simplifying heuristics can make diagnostic decision-making more manageable, they also may lead to diagnostic errors (Kahneman & Tversky, 1974).

Solving diagnostic problems correctly requires that the goals of the patient-caregiver interaction are congruent and appropriately formulated (Kassirer & Kopelman, 1989; Mayer, 1989). A goal consists of cognitive expectancies that organize the relationship between perception of the environment and development of the intention to act (Bandura, 1986). In any one situation or environment, many courses of action are possible (Gibson, 1979; Trope, 1986). Goals are known to affect all aspects of information processing, including: a) perception and attention, b) encoding of information (Cohen & Ebbesen, 1979), c) storage and retrieval, d) judgment and higher order integration (Lichenstein & Srull, 1985), e) response selection, such as strategies and plans (Elliott & Dweck, 1988), and f) affective reactions (Srull & Wyer, 1986). For NPs, if the dual goals of diagnosing and treating the disease itself, and diagnosing and treating the patient's responses to health problems are incongruent, the result may be flawed information processing. For example, if the goal of a patient interaction is formulated within a traditional nursing perspective, knowledge structures consistent with that goal are activated. The resulting perception, attention, and plans may prevent successfully reaching a medical diagnosis.

In summary, the current system of educating NPs may be limited in providing the extensive domain experience required for effective diagnostic performance in primary care. Teaching methods may not provide the timely and specific feedback required to gain expertise, and they may not focus clearly on the diagnostic goal of NP-patient encounters. Iliad has been shown to be a training tool that can increase domain-specific knowledge and provide practice in diagnostic reasoning.

The Iliad Diagnostic Reasoning Expert System

Iliad is an expert system designed to provide both expert diagnostic consultations and patient simulations (Turner et al., 1990). The system is composed of an inference engine (a collection of rules and procedures for making decisions), and a knowledge base (a collection of medical facts and relationships) (Lincoln et al., 1991). Iliad uses Bayesian (probabilistic) and Boolean (deterministic) knowledge frames to describe diseases encountered in internal medicine. A knowledge frame is a hierarchically organized group of facts, with associated probabilities or logic, that can be combined to yield a statement of belief about a given condition or disease. Knowledge frames organize findings into clusters, useful chunks that permit the use of sensitivities, specificities, and rules to describe the relationship of a disease to its manifestations, and they provide a basis for explaining Iliad's conclusions (Guo, Lincoln, Haug, Turner, & Warner, 1993). Iliad's knowledge base includes over 6,300 disease manifestations and their relationships to over 1,350 diseases and intermediate diagnoses from internal medicine. Development of the knowledge base is ongoing.
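
To make the frame concept concrete, the following sketch shows, in Python, how a Bayesian cluster of findings might combine sensitivities and specificities into a posterior probability of a disease, updating the probability one finding at a time. The frame structure, finding names, and probability values here are illustrative assumptions for exposition only; they are not drawn from Iliad's knowledge base or code.

```python
# Illustrative sketch of a Bayesian knowledge frame (not Iliad's actual code).
# Each finding carries a sensitivity P(present | disease) and a specificity
# P(absent | no disease); observed findings update a prior via Bayes' rule.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    sensitivity: float   # P(finding present | disease)
    specificity: float   # P(finding absent | no disease)

def update_posterior(prior: float, finding: Finding, present: bool) -> float:
    """Return P(disease | evidence so far), assuming conditional independence."""
    if present:
        p_given_d = finding.sensitivity
        p_given_not_d = 1.0 - finding.specificity
    else:
        p_given_d = 1.0 - finding.sensitivity
        p_given_not_d = finding.specificity
    numerator = p_given_d * prior
    return numerator / (numerator + p_given_not_d * (1.0 - prior))

# A toy frame for one disease: a prior plus a cluster of related findings.
# All names and numbers are hypothetical.
pneumothorax_frame = {
    "prior": 0.01,
    "findings": [
        Finding("sudden pleuritic chest pain", sensitivity=0.90, specificity=0.70),
        Finding("absent breath sounds on affected side", sensitivity=0.80, specificity=0.95),
        Finding("pneumothorax on chest x-ray", sensitivity=0.98, specificity=0.99),
    ],
}

posterior = pneumothorax_frame["prior"]
for f in pneumothorax_frame["findings"]:
    new_posterior = update_posterior(posterior, f, present=True)
    # The change in posterior probability per finding is the kind of feedback
    # Iliad's teaching tools display during a training work-up.
    print(f"{f.name}: {posterior:.3f} -> {new_posterior:.3f}")
    posterior = new_posterior
```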

The goal of Iliad training is to improve students' knowledge of the relationship of specific findings to a particular disease, that is, to improve domain knowledge. An Iliad case consists of history, physical examination, and diagnostic test findings in relation to a specific diagnosis; the case data may be drawn from patient charts or expert experience. A typical disease in Iliad contains more than 50 signs and symptoms. The frame-based representation in Iliad organizes related findings (signs, symptoms, and procedures) into pathophysiological clusters, or chunks (e.g., low cardiac output). The chunks are sometimes organized into larger chunks that represent broader pathophysiological concepts (e.g., left-sided heart failure). This organizational structure is designed to simplify the memorization processes involved in decision-making.

Iliad can be used in either consultation or simulation mode. In simulation mode, which was the mode used in the present trial, cases can be presented to students for training or testing purposes. For training purposes, cases can be worked-up using teaching tools that, for example, show the sensitivity and specificity of a finding in relation to a disease, as well as the change in posterior probability attributable to each finding. For testing purposes, cases are worked-up in a similar way, but access to the teaching tools is denied. Test cases are disguised by changing the patient's presenting complaint or age, while leaving the final diagnosis and findings unchanged.

The Student Interface Case File (SICF) Manager automates the creation of unique sequences of cases and allows faculty to specify the date and time when each case becomes accessible to the student. The SICF Manager allows students to be assigned to individualized training and test regimens in an experimental design. For example, one student might be assigned a pneumothorax training case, another a gastritis training case, and both might be given a gastritis test case. To assist in the analysis of cases, Iliad provides a program called CaseStats which can analyze individual or grouped results of cases completed by students. CaseStats can create tables from the results, which are amenable to analysis by standard statistical packages such as SPSS or SAS.
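
As a rough illustration of the kind of flat results table that can be handed to a statistical package, the sketch below writes hypothetical per-case results to a CSV file that SPSS, SAS, or R could read. The column names and values are invented for this example and are not the actual CaseStats output format.

```python
# Hypothetical sketch of collecting per-student case results into a flat table
# for a statistics package; the real CaseStats format is not reproduced here.
import csv

results = [
    {"student": "S01", "case": "acute gastritis", "trained": 1,
     "diagnostic_error": 0, "posterior": 0.96, "cost": 112.50, "avg_finding": 0.71},
    {"student": "S02", "case": "pneumonia", "trained": 0,
     "diagnostic_error": 1, "posterior": 0.41, "cost": 230.00, "avg_finding": 0.48},
]

with open("case_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(results[0].keys()))
    writer.writeheader()
    writer.writerows(results)   # importable by SPSS, SAS, or R for analysis
```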

METHODS

The goals of the pilot study were: a) to explore the effects of Iliad training on diagnostic reasoning of NP students and b) to demonstrate a method by which one might teach and evaluate diagnostic reasoning in the NP student population. Although Iliad has been used to teach and evaluate diagnostic reasoning among medical students, the NP student population differs from medical students in ways that could have impacts on Iliad's effectiveness. NP students have likely had prior case experiences with the nursing goal of diagnosing and treating human responses to health problems, rather than the medical goal of diagnosing disease. Thus, the authors expected that the effects of Iliad training and prior case experience would be associated with the NP students' diagnostic performance on Iliad test cases.

Specifically, prior case experience was expected to be associated with better selection of findings, since nursing experience would have resulted in extensive exposure to gathering needed information relative to the diagnosis. In contrast, Iliad training was expected to improve diagnostic performance (accuracy and verification). Although nurses may have had extensive prior case experience with a particular diagnosis, such experiences typically would not have had the goal of making medical diagnoses. Nurses also may not receive expert feedback about their diagnostic reasoning. Without feedback, nurses may not learn which specific set of findings is needed to confirm or disconfirm a diagnosis. Thus, the prior case experience may be reflected in some elements of diagnostic reasoning (e.g., identifying and collecting relevant information) but not in other essential elements (e.g., adequate verification of diagnostic hypotheses).

In our environment, NP students and faculty have not had experience with computerized tools for teaching diagnostic reasoning. Implementing Iliad as a training experience with NP students involved consideration of multiple issues, including availability and security of the computers, technical support, students' computer literacy, and integration of Iliad activities into the curriculum. The pilot study provided information about these issues.

Experimental Design

Students were randomly assigned, within a 2 × 2 (Training Domain × Test Domain) design, to be trained on either Chest Pain or Abdominal Pain diagnoses. Each student completed four training cases in the assigned problem domain and four test cases, two in the assigned problem domain and two in the other problem domain. The final diagnoses for the eight cases were chest wall pain, spontaneous pneumothorax, pneumonia, esophageal spasm, acute gastritis, Crohn's disease, lactase deficiency, and viral gastroenteritis. The design allowed evaluation of the effects of Iliad training on performance, that is, whether students performed better on a particular diagnosis (e.g., a pulmonary embolus patient) after completing an Iliad training case on that diagnosis than did students who had not completed a training case on that diagnosis.

Subjects

The Institutional Review Board approved the study prior to initiation of data collection. The subjects were nine NP students enrolled in courses during the spring quarter, 1994. Students volunteered for participation in the study and gave written informed consent after receiving an explanation of study procedures.

Procedures

Students received a 2-hour orientation to Iliad, during which a faculty member demonstrated each step in the work-up of two sample cases, one in consultation mode and one in simulation training mode. Students followed along, working through both cases with the instructor, and then working through another simulation training case on their own. A research assistant (RA) (a nursing informatics graduate student) was hired to be present in the Iliad lab 6-8 hours per week to provide technical and academic support. Students were encouraged to schedule their work with Iliad cases when the RA was present.

All dependent variables were collected by Iliad into the SICF and then scored by the CaseStats program as students completed the simulation test cases. Students completed a General Experience Inventory (Haak, 1993) at the beginning of the study, and a Case-Specific Experience Inventory (CSEI) (Haak, 1993) following each case.

Independent Variables

Training Domain (Chest Pain or Abdominal Pain) was a randomly assigned independent variable that determined the medical domain of the training cases selected for each group. NP faculty members identified, from existing Iliad simulation cases, eight cases that represented diagnostic problems relevant to NP course objectives.

Test Case Experience (Trained Case or Untrained Case) refers to whether or not students had previously been randomly assigned to receive simulated case training on the same diagnosis as the test case. A student experienced a Trained Case test when encountering a case randomly selected from among her/his previous training cases. A student experienced an Untrained Case test when encountering a case randomly selected from among the training cases he or she had not personally seen (cases seen by other students). The sequence of training and test cases was presented in different, counterbalanced random orders (i.e., a Latin square design) (Keppel, 1991), allowing assessment of each desired training comparison without confounding, as sketched below. For each student, half of the test cases were Trained Cases and half were Untrained Cases.
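
The sketch below illustrates, under stated assumptions, how training and test cases might be rotated into counterbalanced orders across students. It is not the SICF Manager's actual assignment algorithm, and the selection of test cases is deliberately simplified; the case names are taken from the study's eight diagnoses.

```python
# Illustrative counterbalancing sketch (a rotation-based Latin square);
# not the SICF Manager's actual assignment algorithm.

chest_pain = ["chest wall pain", "spontaneous pneumothorax", "pneumonia", "esophageal spasm"]
abdominal_pain = ["acute gastritis", "Crohn's disease", "lactase deficiency", "viral gastroenteritis"]

def latin_square(items):
    """Each row is a rotation, so every case appears once in every serial position."""
    n = len(items)
    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]

def assign(student_index, training_domain):
    train = chest_pain if training_domain == "chest pain" else abdominal_pain
    other = abdominal_pain if training_domain == "chest pain" else chest_pain
    order = latin_square(train)[student_index % len(train)]
    training_cases = order                     # four training cases in rotated order
    test_cases = order[:2] + other[:2]         # two Trained-Case and two Untrained-Case tests (simplified)
    return training_cases, test_cases

training, tests = assign(student_index=0, training_domain="chest pain")
print("Training cases:", training)
print("Test cases    :", tests)
```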

Case Specific Experience. Immediately after completing each case, students completed a Case Specific Experience Inventory (CSEI) (Haak, 1993), which asked for self-report of the scope and practice setting of past experience with actual patients having the same diagnosis as the Iliad case. Scope of experience was defined as total number of patients, recency of experience, and current frequency of experience with similar patients. Haak (1993) reported a Cronbach's alpha=.95 in a study of 60 nurses, nursing students, and lay persons who used a computer-assisted interactive video patient simulation designed to assess clinical nursing judgment performance. The CSEI independent variable served as a covariate to explain and statistically control within-condition variability in performance.

Dependent Variables

Diagnostic Errors measures the accuracy of a student's final diagnosis. Iliad assigns a score of 1.0 for an incorrect diagnosis and 0.0 for a correct diagnosis. Based on predictive validity established in previous studies (Lincoln et al., 1991), diagnostic errors were expected to be higher in the Iliad-Untrained condition, that is, on test cases that were not from the assigned problem domain.

Posterior Probability assesses the student's ability to gauge whether sufficient evidence has been collected to support the diagnosis before proceeding to treatment, thus detecting the error of premature closure (Kassirer & Kopelman, 1989). Posterior probability is calculated as Iliad's final posterior probability for the correct diagnosis. (Students were told during orientation that posterior probability should approach .95 when the hypothesis is adequately confirmed.) Scores on posterior probability were close to 1.0 when the student had gathered sufficient evidence to support the diagnosis. Scores fell toward 0.0 when the student collected insufficient evidence and closed the case too early.

The Cost dependent variable assesses the student's ability to reach a sufficient diagnostic threshold for a minimum expenditure of resources. The value of cost is calculated as the actual charge (based on University of Utah Hospital billing data) for tests and procedures accumulated in the work-up. Previous studies with Iliad (Lincoln et al., 1991) indicated that some students achieve a low work-up cost by prematurely closing the case. These students fail to obtain appropriate evidence to confirm the diagnosis. Thus, a low cost with a low posterior probability is not an indication of good performance. A high score on the cost variable usually reflects poor performance. Students with high cost scores are not able to be cost effective in their work-ups. High scores may occur in under-confident students who inappropriately continue to order unnecessary tests.

Average Findings Score assesses the student's performance in terms of selecting appropriate and cost-effective history, physical exam, or laboratory findings. For each finding elicited during the case work-up, Iliad calculates a finding score as the ratio of the score Iliad assigns to the student's finding to the score Iliad assigns to the best finding for that step in the work-up. Scores are generated from the information content provided by the finding for the diseases the student is pursuing, and from the finding's cost, using a utility model described elsewhere (Cundick et al., 1989; Guo, Lincoln, Haug, Turner, & Warner, 1993). The average findings score is created by averaging across all of the student's queries; the score can range from 0.0 to 1.0, with higher scores indicating better performance.
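
A minimal sketch of the calculation described above, assuming the per-step scores are available as simple numbers; the work-up values shown are hypothetical.

```python
# Sketch of the average findings score: at each step, the student's elicited
# finding is scored relative to the best available finding, and the per-step
# ratios are averaged to give a value between 0.0 and 1.0.

def average_findings_score(steps):
    """steps: list of (student_finding_score, best_finding_score) pairs."""
    ratios = [student / best for student, best in steps if best > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

# Hypothetical three-query work-up compared against the best alternative at each step.
workup = [(4.2, 6.0), (5.5, 5.5), (1.0, 4.0)]
print(round(average_findings_score(workup), 2))   # 0.65
```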

RESULTS

The pilot study provided data about the effects of Iliad training on NP students' diagnostic skills, as well as practical information about how Iliad might be used to teach and evaluate diagnostic reasoning among NP students. The results are based on a relatively small sample; thus, the analyses have relatively low statistical power for demonstrating statistically significant differences. Some of the relationships described are probably due to random effects, and some of the findings may not be replicated in future studies with larger sample sizes. Despite these limitations, the analyses are presented to demonstrate a method of evaluating the effects of computerized training programs.

The analyses describe the effects of both prior case-specific experience and Iliad training on student performance. To evaluate the effects of prior case experience on performance, students' case experiences were assessed. Similarly, to evaluate the effects of Iliad training on performance, students' performances in relation to Iliad training with the specific diagnoses were analyzed. The performance measures reflect typical cognitive errors made in diagnostic work-ups: faulty hypothesis triggering (diagnostic errors), inadequate information gathering (average findings score, cost), and inappropriate verification of hypotheses (posterior probability).

Figure 1. Levels of prior experience with diagnosis by experimental conditions.

Case-specific experience. The first analysis addressed students' prior case-specific experience with patients manifesting either chest pain or abdominal pain. The CSEI measured the frequency and recency of the student's experience with the specific diagnosis in the test case that the student had just completed. A 2 × 2 × 2 (Training Condition × Test Domain × Replication) mixed factorial analysis of variance was performed on the CSEI variable, with replication treated as a repeated-measures factor. The test domain main effect was statistically significant [F(1,13) = 5.76, p < .05], indicating that, overall, students reported more experience in chest pain than in abdominal pain. The other effects were not statistically significant.

As Figure 1 indicates, students in both experimental conditions reported significantly more experience with the chest pain cases (M = 11.67) than the abdominal pain cases (M = 6.76). Based upon the results, students were expected to perform better on the chest pain cases than the abdominal pain cases, provided that their prior domain-specific experience enhanced their diagnostic skills in the chest pain domain.
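
For readers who wish to set up a comparable analysis with current tools, the hedged sketch below fits a linear mixed-effects model with a random intercept for each student using statsmodels. This is an approximation of the 2 × 2 × 2 mixed factorial ANOVA used in the study (with replication as the repeated-measures factor), not a reproduction of the original procedure, and the data it generates are placeholders.

```python
# Hedged sketch: a mixed-effects alternative to the paper's 2x2x2 mixed
# factorial ANOVA, with a per-student random intercept. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Illustrative long-format data: one row per completed test case.
rows = []
for s in range(9):                                   # nine students, as in the pilot
    training = "chest" if s % 2 == 0 else "abdominal"
    for domain in ("chest", "abdominal"):
        for rep in (1, 2):                           # two test cases (replications) per domain
            rows.append({
                "student": f"S{s:02d}",
                "training_domain": training,
                "test_domain": domain,
                "replication": rep,
                "score": rng.normal(loc=0.6, scale=0.1),   # placeholder outcome measure
            })
df = pd.DataFrame(rows)

model = smf.mixedlm(
    "score ~ training_domain * test_domain * C(replication)",  # fixed effects
    data=df,
    groups=df["student"],                                       # random intercept per student
)
print(model.fit().summary())
```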

Average finding score. The average finding score reflects efficiency in information gathering and processing. Students with high findings scores are more likely to elicit findings that are high in information value and low in cost as compared to alternative findings that they might have pursued. A 2 × 2 × 2 (training condition × test domain × replication) mixed factorial analysis of variance was performed on the average finding score as the dependent variable. The results indicated that students performed significantly better on the chest pain cases (M = 70.79) than the abdominal pain cases [F(1,13) = 5.72, p < .05]. The results of the training condition × test domain interaction are presented in Figure 2. To evaluate the relationship between prior case experience and information processing efficiency, a Pearson correlation was computed between the CSEI variable and the average findings score. (The sample size prohibited a covariate analysis.) The correlation was statistically significant across the entire set of test cases [r(34) = .40, p < .05]. The correlation was marginally significant for the abdominal pain cases [r = .41, p < .10], but not for the chest pain cases [r = .12, p > .10]. The results for the average findings score are consistent with the results obtained from the CSEI. As a group, students had more experience in the chest pain area, and they were able to select better findings in the chest pain than the abdominal pain cases. Within the abdominal pain domain, students with more case-specific experience were able to select more cost-effective findings.

Figure 2. Effects of experimental conditions on average findings scores.

Diagnostic errors. The next dependent variable assessed the accuracy of the final diagnosis. A 2 × 2 × 2 (training condition × test domain × replication) mixed factorial analysis of variance on diagnostic errors was performed. None of the main effects or interactions was statistically significant. As Figure 3 indicates, the trained students (individuals doing test cases in the same domain in which they did training cases) missed only about 6% of cases, while the untrained students (individuals doing test cases in a different domain than their training cases) missed about 23% of cases. However, the difference was not statistically significant. All students had significantly more experience in chest pain cases than abdominal pain cases; however, diagnostic errors did not differ significantly between the two domains. The average error score for chest pain cases was 16.25%, while for abdominal pain cases it was 12.5%.

To evaluate further the relationship between prior case experience and diagnostic accuracy, a biserial correlation between the CSEI and the diagnostic errors dependent variable was computed. The results indicated that prior case experience was marginally significant for the chest pain cases [r = .40, p < .10], but not for the abdominal pain cases [r = -.15, p > .10].

Figure 3. Effects of experimental conditions on percent of diagnostic errors.

Posterior probability. The posterior probability dependent variable assesses the degree to which the student has elicited sufficient findings to confirm the diagnosis before beginning treatment. The measure reflects the presence of verification errors, such as premature closure, which occurs when an individual is over-confident about the diagnosis based upon a too-limited set of findings. We performed a 2 × 2 × 2 (training condition × test domain × replication) mixed factorial analysis of variance on the posterior probability dependent variable. None of the main effects or interactions approached significance. As Figure 4 indicates, the students trained and tested on abdominal pain cases had the highest posterior probability (M = 0.90). Thus, the results are not consistent with what was expected based on the mean differences in the prior case experience measure. Students did not perform significantly better on the chest pain (M = 0.74) than the abdominal pain cases (M = 0.80). Furthermore, individual differences in prior case experience were not significantly correlated with posterior probability in either the abdominal pain or the chest pain domain. Thus, prior case experience did not appear to predict which students would be able to avoid verification errors in their case work-ups.

Cost. The cost measure reflects the total cost for tests and procedures ordered by the student. A 2 × 2 × 2 (training condition × test domain × replication) mixed factorial analysis of variance was performed with the cost score as the dependent variable. The results indicated that the test domain main effect [F(1,13) = 13.84, p < .05] and the training condition × test domain interaction (p < .05) were statistically significant. As Figure 5 indicates, the students trained and tested on abdominal pain cases spent substantially more money on their work-ups (M = $230) than students in the other three conditions, where cost averages ranged from $75 to $132.

Figure 4. Effects of experimental conditions on the posterior probability of correct diagnosis.

DISCUSSION

The Effects of Iliad Training

The results suggest that prior nursing case experience has some predictive value in identifying variations in student performance on diagnostic errors (hypothesis generation) and average findings score (information gathering and interpretation), but not on posterior probability (verification). The conclusion is congruent with the overall notion that prior experience influences performance. In prior experiences as staff nurses, NP students were accustomed to collecting information for both medical and nursing actions and to generating potential medical hypotheses. However, as staff nurses they were never responsible for verifying a medical diagnosis, which requires integration of knowledge about posterior probability with information gathering and interpretation. This skill is what must be learned in the NP program.

In this sample, Iliad training yielded greater improvement in the abdominal pain area than the chest pain area (although the differences were not statistically significant). The chest pain and abdominal pain domains have multiple differences other than simply being different knowledge areas. Many of the students had worked in acute care hospital environments where they were likely to be exposed to patients with the complaint of chest pain. Individuals who worked in intensive care were likely to have felt some responsibility for determining the origin of chest pain, or at least acquiring the clinical information needed to make a diagnostic decision. However, fewer of the NP students had worked in ambulatory care, where the chief complaint of non-acute abdominal pain is more common and also requires a more complex diagnostic work-up than in acute care settings. Perhaps prior experience enables students to recognize which findings are appropriate but not to determine the diagnostic implication of the findings. Perhaps NP students require some threshold level of experience (as in the chest pain area) before the prior experience can improve their diagnostic accuracy.

The findings are derived from a small sample of NP students who completed a limited number of training and test cases. However, to the extent that the findings are replicable, they provide information about teaching diagnostic reasoning in NP programs. Prior nursing experience seems to provide skill in information-gathering and hypothesis generation, but is not as clearly supportive of diagnostic verification skills. Skills that arise from prior experience in acute care settings may not generalize to less familiar, less acute patient situations. Thus, knowledge of students' prior experience can be useful in individualizing exposure to different work-ups, in which different aspects of diagnostic reasoning can be emphasized.

The higher diagnostic accuracy of the abdominal pain trained and tested group was probably due to the fact that these students used expensive laboratory tests to make their diagnoses. As the average findings scores indicate, students did not make cost-effective decisions, since they could have selected less expensive history and physical exam findings to support their diagnoses.

Individual differences in case experience were not significantly correlated with cost in either the abdominal pain or the chest pain domain. Again, the prior case experience variable did not predict which individuals would be able to generate low cost work-ups.

Demonstrating a Method of Teaching and Evaluating Diagnostic Reasoning

The pilot study provided information about using computerized learning tools with NP students. Numerous details, ranging from technical to curricular, had to be considered and managed.

Technical issues. A key element in the successful use of a computerized teaching tool is computer availability. Factors to consider when using Iliad are the number of machines available in the laboratory or other site and the times when students are free to use the machines. If all students work on Iliad at the same time, as was the case in our trial with NP students, the number of machines needed is equal to the number of students. Because a single department is unlikely to have a large number of computers available to students, use of a computer center or library laboratory often is necessary. In addition, such a laboratory may be required for an introductory orientation session, even if the students later distribute their time on the computers. For this project, a library computer laboratory was utilized. The lab contained 15 Macintosh IIsi computers, connected by an Ethernet network to a central server and to the campus network.

Security of the computer program and student data is a significant issue. The library's computer laboratory, like most public-access sites, had procedures in place to maintain security of software on the computers. The library used a "shareware" utility program called Rev-R-Dist to remove unauthorized programs and files from the hard drives of the computers on a daily basis. Because Iliad creates new files (e.g., the CaseStats files from each student's work), it was necessary to protect the Iliad directory from Rev-R-Dist's actions by setting certain flags in the Rev-R-Dist program. Knowing about the existence and operation of such programs is important when using Iliad in a public computer laboratory site.

Figure 5. Cost of diagnostic work-up by experimental conditions.

Protecting software from human intervention also is necessary. Students may deliberately or inadvertently delete or interfere with their own or other students' files. To prevent this type of file corruption, students logged onto Iliad using their Social Security numbers. Once they had logged onto the system, Iliad allowed them to access only their own SICF Manager case files. To ensure that students did not delete files in the Iliad directory, all of the Iliad files except for an alias were made invisible on the desktop, and the alias was used to start the Iliad program.

Curriculum issues. Other factors to consider when implementing Iliad are students' general computer literacy, knowledge of probabilistic reasoning, and needs for continuing support. Students needed an orientation not only to Iliad, but also to Macintosh computers and to probability-based diagnostic reasoning. Although most students had used computers before, few were Macintosh literate; they needed training in how to open, save, and close files; to use pull-down menus; and to use a mouse as the primary data input device.

A research assistant was present to assist students in the Iliad lab. However, even with only 9 students participating in the project, the hours scheduled by the RA often were not sufficient to meet students' specific scheduling needs. Providing support when needed was a problem throughout the project, in part because the project was too brief to allow students time to develop expertise in using Iliad. In future projects, more supervised laboratory hours and individual reinforcement will be provided.

Decisions concerning how Iliad will fit into the existing curriculum also must be addressed. Questions must be answered at the outset, such as: Will the use of Iliad simulations be an optional or a required learning activity? Will assignments include group discussion of Iliad cases? Which cases, and in what sequence, will fit with particular courses? It is the authors' experience that "optional" learning activities often are not completed by students. In the present trial, while most of the 35 eligible students expressed interest in the project, only 9 attended orientation and completed all assigned cases. Based on this experience, the authors believe that if Iliad is to have its desired effect, the program must be used as a required learning activity for NP students.

Successful implementation of Iliad also requires that it be well integrated with other learning activities. One successful reinforcement technique employed at several universities is to hold an Iliad Clinico-Pathologic Conference (CPC). The Iliad CPC is based on a patient case, which all students work-up using Iliad's consultation mode (the skills learned in consultation are generalizable to simulations). After the students have independently worked-up the case and answered several questions about it, they meet in a group session with a faculty member who uses Iliad to discuss and solve the case. Students have responded enthusiastically to this method of instruction, and have become much more efficient in their use of Iliad's learning tools.

In summary, the authors have presented evidence to suggest that use of Iliad as a teaching tool will improve NP students' diagnostic reasoning, and that prior experience interacts with the training effects. Prior nursing experience may provide skill in information-gathering and hypothesis-generation, but not in diagnostic verification skills. In this sample, training with Iliad yielded greater improvement in the domain in which students had less prior experience, although the difference was not statistically significant. Our pilot implementation study showed that successful use of Iliad required planning and support. Faculty must be sufficiently committed to the value of Iliad case experience to make it a required course activity. Iliad should be integrated with other course activities, and students will profit from structured group discussion following their individual work with Iliad.

REFERENCES

  • American Nurses Association. (1985). The scope of practice of the primary health care nurse practitioner. Kansas City, MO: Author.
  • American Nurses Association. (1987). Standards of practice for the primary health care nurse practitioner. Kansas City, MO: Author.
  • Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice-Hall.
  • Benner, P. (1984). From novice to expert. Menlo Park, CA: Addison-Wesley.
  • Bower, G.H., & Gluck, M.A. (1984). Evaluating an adaptive network model of human learning. Journal of Memory and Language, 27, 166-195.
  • Brykczynski, K.A. (1989). An interpretive study describing the clinical judgment of nurse practitioners. Scholarly Inquiry for Nursing Practice, 3, 75-104.
  • Cohen, E.E., & Ebbesen, E.B. (1979). Observational goals and schema activation: A theoretical framework for behavior prediction. Journal of Experimental Social Psychology, 15, 305-329.
  • Cundick, R.M., Turner, C.W., Lincoln, M.J., Buchanan, J.P., Anderson, C., Warner, H., Jr., & Bouhaddou, O. (1989). Iliad as a patient case simulator to teach medical problem solving. In L.C. Kingsland (Ed.), Proceedings of the 13th Annual Symposium on Computer Applications in Medical Care (pp. 902-906). Washington, DC: IEEE Computer Society Press.
  • Elliott, E.S., & Dweck, C.S. (1988). Goals: An approach to motivation and achievement. Journal of Personality and Social Psychology, 54, 5-12.
  • Elstein, A., Shulman, L., & Sprafka, S. (1990). Medical problem solving: A ten-year retrospective. Evaluation and the Health Professions, 13, 5-36.
  • Fenton, M.V., & Brykczynski, K.A. (1993). Qualitative distinctions and similarities in the practice of clinical nurse specialists and nurse practitioners. Journal of Professional Nursing, 9, 313-326.
  • Fowkes, W.C., Jr., & Hunn, V.K. (1973). Clinical assessment for the nurse practitioner. St. Louis, MO: C.V. Mosby.
  • Gibson, J.J. (1979). The ecological approach to visual perception. Boston, MA: Houghton Mifflin.
  • Goran, M.J., Williamson, J.W., & Gonnella, J.S. (1973). The validity of patient management problems. Journal of Medical Education, 48, 171.
  • Guo, D., Lincoln, M.J., Haug, P.J., Turner, C.W., & Warner, H.R. (1993). Comparison of different information content models by using two strategies: Development of the best information algorithm for Iliad. In C. Safran (Ed.), Proceedings of the 17th Symposium on Computer Applications in Medical Care (pp. 465-469). New York, NY: McGraw-Hill.
  • Haak, S.W. (1993). Development of a computer tool including interactive video simulation for eliciting and describing clinical nursing judgment performance. Unpublished doctoral dissertation, University of Utah, Salt Lake City, Utah.
  • Kahneman, D., & Tversky, A. (1974). Judgment under uncertainty. Science, 185, 1124-1131.
  • Kassirer, J.P., & Kopelman, R.I. (1989). Cognitive errors in diagnosis: Instantiation, classification, and consequences. American Journal of Medicine, 86, 433-441.
  • Keppel, G. (1991). Design and analysis: A researcher's handbook (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.
  • Lichenstein, M., & Srull, T.K. (1985). Conceptual and methodological issues in examining the relationship between consumer memory and judgment. In L.F. Alwitt & A.A. Mitchell (Eds.), Psychological processes and advertising: Theory, research, and application (pp. 113-128). Hillsdale, NJ: Erlbaum.
  • Lincoln, M.J., Turner, C.W., Haug, P., Warner, H.R., Williamson, J.W., Bouhaddou, O., Jessen, S., Sorenson, D., Cundick, R., & Grant, M. (1991). Iliad training enhances medical students' diagnostic skills. Journal of Medical Systems, 15, 93-110.
  • Mayer, R.E. (1989). Human nonadversary problem solving. In K.J. Gilhooly (Ed.), Human and machine problem solving (pp. 39-56). New York, NY: Plenum Publishing.
  • Miller, G.A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
  • Newble, D.I., Hoare, J., & Baxter, A. (1982). Patient management problems: Issues of validity. Journal of Medical Education, 16, 137-142.
  • Norman, G.R., Neufeld, V.R., Walsh, A., Woodward, C.A., & McConvey, G.A. (1985). Measuring physicians' performances by using simulated patients. Journal of Medical Education, 60, 925-934.
  • Norman, G.R., Tugwell, P., Feightner, J.W., Muzzin, L.J., & Jacoby, L.L. (1985). Knowledge and clinical problem-solving. Journal of Medical Education, 19, 344-356.
  • Rosenthal, G.E., Mettler, G., Pare, S., Riegger, M., Ward, M., & Landerfeld, C.S. (1992). Diagnostic judgments of nurse practitioners providing primary gynecologic care: A quantitative analysis. Journal of General Internal Medicine, 7, 304-311.
  • Shuler, P.A., & Davis, J.E. (1993). The Shuler nurse practitioner practice model: A theoretical framework for nurse practitioner clinicians, educators, and researchers. Journal of the American Academy of Nurse Practitioners, 5, 11-18.
  • Srull, T.K., & Wyer, R.S. (1986). The role of chronic and temporary goals in social information processing. In R.M. Sorrentino & E.T. Higgins (Eds.), Handbook of motivation and cognition (pp. 543-549). New York, NY: Guilford Press.
  • Tanner, C.A. (1987). Teaching clinical judgment. In J.J. Fitzpatrick & R.L. Taunton (Eds.), Annual review of nursing research (pp. 153-173). New York, NY: Macmillan.
  • Tanner, C., Padrick, K., Westfall, U., & Putzier, D. (1987). Diagnostic strategies of nurses and nursing students. Nursing Research, 36, 358-363.
  • Trope, Y. (1986). Identification and inferential processes in dispositional attribution. Psychological Review, 92, 239-257.
  • Turner, C.W., Williamson, J., Lincoln, M.J., Haug, P., Buchanan, J., Anderson, C., Grant, M., Cundick, R., & Warner, H.R. (1990). The effects of Iliad on medical student problem solving. In R.A. Miller (Ed.), Proceedings of the 14th Symposium on Computer Applications in Medical Care (pp. 478-481). New York, NY: McGraw-Hill.
  • Warner, H.R., Haug, P.S., Bouhaddou, O., Lincoln, M., Warner, H., Jr., Sorensen, D., Williamson, J.W., & Fam, C. (1988). Iliad as an expert consultant to teach differential diagnosis. Proceedings of the Twelfth Annual Symposium on Computer Applications in Medical Care, 371-376.
  • White, J.E., Nativio, D.G., Robert, S.N., & Engberg, S.J. (1992). Content and process in clinical decision-making by nurse practitioners. IMAGE: Journal of Nursing Scholarship, 24, 153-158.
  • Williamson, J.W. (1965). Assessing clinical judgment. Journal of Medical Education, 40, 180-187.
  • Wingard, J.R., & Williamson, J.W. (1973). Grades as predictors of physicians' career performance: An evaluative literature review. Journal of Medical Education, 48, 311-322.

10.3928/0148-4834-19970101-09
