Primary care providers overestimate likelihood of disease before, after certain tests
Providers in the primary care setting often overestimate the probability of a diagnosis before and after testing, suggesting that many are “unaccustomed to using probability in diagnosis and clinical practice,” researchers said.
“This research grew out of seeing a clear disconnect between how we taught testing to medical students and how we practice clinically,” Daniel J. Morgan, MD, MS, professor of epidemiology and public health at the University of Maryland School of Medicine, told Healio Primary Care. “We teach testing as math equations and 2x2 tables, but that doesn’t translate to patient care.”
Morgan and colleagues surveyed 553 practitioners — including resident physicians (n = 290), attending physicians (n = 202) and nurse practitioners (n = 61) — and asked them to estimate the likelihood of disease in four clinical scenarios common in primary care: pneumonia, cardiac ischemia, breast cancer and urinary tract infection (UTI).
“Each scenario was created for a general situation but included essential details to calculate true risk for patients (eg, age and absence of any risk factors for breast cancer in mammogram screening questions),” the researchers wrote. “The primary outcome of testing questions was to accurately identify the probability that a patient had disease after positive or negative results.”
The answers in the survey were compared against evidence-based estimates of disease from an expert panel, according to the researchers.
They found that the practitioners overestimated the probability of disease before testing in all four scenarios.
Based on the survey, the probability of pneumonia following positive radiology results was 95% (evidence range = 46%-65%; P < .001); the probability of breast cancer after positive mammography was 50% (evidence range = 3%-9%; P < .001); the probability of cardiac ischemia following positive stress test results was 70% (evidence range = 2%-11%; P < .001); and the probability of UTI after positive urine culture was 80% (evidence range = 0%-8.3%; P < .001).
Following negative test results, the probability of disease dropped to 50% for pneumonia (evidence range = 10%-19%; P < .001); 5% for breast cancer (evidence range = < 0.05%; P < .001); 5% for cardiac ischemia (evidence range = 0.43%-2.5%; P < .001); and 5% for UTI (evidence range = 0%-0.11%; P < .001) — but it was still significantly higher than the expert panel’s estimates.
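The gap between the survey answers and the evidence ranges reflects Bayes’ theorem: when the pretest probability of disease is low, even a reasonably accurate positive test leaves the posttest probability low. A minimal sketch of the calculation, using illustrative values for screening mammography (pretest probability, sensitivity and specificity here are common textbook assumptions, not figures from the study):

```python
def posttest_probability(pretest: float, sensitivity: float,
                         specificity: float, positive: bool) -> float:
    """Posttest probability of disease given a test result, via Bayes' theorem."""
    if positive:
        true_pos = pretest * sensitivity          # P(disease and test+)
        false_pos = (1 - pretest) * (1 - specificity)  # P(no disease and test+)
        return true_pos / (true_pos + false_pos)
    false_neg = pretest * (1 - sensitivity)       # P(disease and test-)
    true_neg = (1 - pretest) * specificity        # P(no disease and test-)
    return false_neg / (false_neg + true_neg)

# Illustrative assumptions for a low-risk woman at screening mammography:
# pretest probability ~1%, sensitivity ~87%, specificity ~89%.
p_pos = posttest_probability(0.01, 0.87, 0.89, positive=True)
print(f"Probability of breast cancer after a positive mammogram: {p_pos:.1%}")
```

Under these assumptions the result is roughly 7%, which falls inside the 3% to 9% evidence range cited above and far below the 50% estimated by survey respondents.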
Morgan said in the interview that a nationwide study would produce results “fairly close” to the ones they reported.
“We are describing a problem in plain sight that has mostly been ignored but is critical to daily patient decisions. This could explain much of the overtreatment and overuse of medicine,” he said.
To address the issue, Morgan said, clinicians must acknowledge that they are likely overestimating disease probabilities and think about the “true value” of tests.
“This process can be difficult without obvious references to use as guidance,” he said. “We are developing a site to make this easier, Testingwisely.com.”
In an invited commentary, Arjun K. Manrai, PhD, an assistant professor in the Computational Health Informatics Program at Harvard Medical School, wrote that the study by Morgan and colleagues points “to new targets for medical education and research avenues for how probabilistic information might be better integrated into care.”