In the Journals

Artificial intelligence models may predict pulmonary embolism risk

New data published in JAMA Network Open suggest that a machine learning algorithm may be able to accurately predict a patient’s risk for pulmonary embolism and may help improve use of CT imaging for PE.

The proposed workflow for the machine learning model — Pulmonary Embolism Result Forecast Model (PERFORM) — transformed raw electronic medical record data arranged as a timeline into feature vectors and developed a decision analytical model targeted toward adult patients referred for CT imaging for PE, according to the researchers.
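The study does not detail its featurization step here, but a minimal sketch of one common approach to this kind of transformation — counting a patient's timestamped EMR events (e.g., diagnosis or lab codes) within a lookback window ending at the imaging referral, over a fixed vocabulary — might look like the following. All names (`featurize`, the event codes, the 30-day window) are illustrative assumptions, not the published PERFORM pipeline.

```python
from collections import Counter
from datetime import datetime, timedelta

def featurize(events, reference_time, vocab, window_days=30):
    """Turn timestamped EMR events into a fixed-length count vector.

    events: list of (timestamp, code) pairs for one patient
    reference_time: time of the CT imaging referral
    vocab: ordered list of codes defining the vector's dimensions
    Illustrative only; the actual PERFORM featurization is more elaborate.
    """
    cutoff = reference_time - timedelta(days=window_days)
    counts = Counter(code for ts, code in events
                     if cutoff <= ts <= reference_time)
    return [counts.get(code, 0) for code in vocab]

# Hypothetical patient: two recent events, one outside the window
events = [(datetime(2020, 1, 1), "d_dimer_elevated"),
          (datetime(2020, 1, 10), "tachycardia"),
          (datetime(2019, 1, 1), "remote_history")]
vector = featurize(events, datetime(2020, 1, 15),
                   ["d_dimer_elevated", "tachycardia", "remote_history"])
# The remote event falls outside the 30-day window, so vector == [1, 1, 0]
```

A fixed-length vector like this is what lets models such as ElasticNet consume irregular timeline data.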

For the study, the researchers trained and validated the PERFORM model using 3,397 annotated CT imaging exams for PE from 3,214 patients (mean age, 60.53 years; 53% women) seen at Stanford University hospitals and clinics. They then externally validated the model on 240 patients (mean age, 70.2 years; 55% women) seen at Duke University Medical Center.

The researchers also compared traditional machine learning models, including the ElasticNet model, and a new deep learning model (PE neural model) with three clinical scoring systems — Wells, Pulmonary Embolism Rule-out Criteria (PERC) and revised Geneva (rGeneva) — on holdout outpatient data sets from Stanford (100 patients; mean age, 57.74 years; 67% women) and Duke (101 patients; mean age, 73.06 years; 58.4% women). Prediction accuracy was assessed using the area under the receiver operating characteristic curve (AUROC).
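For readers unfamiliar with the metric, AUROC is the probability that a model scores a randomly chosen positive case (PE present) higher than a randomly chosen negative case; 0.5 is chance, 1.0 is perfect discrimination. A small self-contained sketch of its pairwise (Mann-Whitney) form — with made-up labels and scores, not study data — is:

```python
def auroc(labels, scores):
    """AUROC as the fraction of (positive, negative) pairs in which
    the positive case receives the higher score; ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: every PE-positive case outscores every negative one
print(auroc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0
```

In practice, library implementations such as scikit-learn's `roc_auc_score` compute the same quantity more efficiently.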

Results showed high accuracy for both the PE neural model (AUROC = 0.85) and the ElasticNet model (AUROC = 0.93), with the ElasticNet model performing better on the internal data set (P = .01). Both models declined in performance on the external Duke data set but performed equally well there (PE neural, 0.72 vs. ElasticNet, 0.7; P = .17), suggesting that they still generalize to the Duke population, according to the researchers.

With an AUROC of 0.81, the PE neural model performed better than all other models and criteria on the Stanford and Duke holdout outpatient data, even when compared with ElasticNet on the Duke data (AUROC = 0.74; P = .01).

Overall, the best-performing model had an AUROC of 0.9 (95% CI, 0.87-0.91) for predicting a positive CT study for PE on intra-institutional holdout data and an AUROC of 0.71 (95% CI, 0.69-0.72) on the external Duke data set. For this model, superior AUROC performance (0.81; 95% CI, 0.77-0.87) and cross-institutional generalization of the model (0.81; 95% CI, 0.73-0.82) were observed on holdout outpatient populations from both the intra-institutional and extra-institutional data.

“The neural network model PERFORM possibly can consider multitudes of patient-specific risk factors and dependencies in retrospective structured EMR data to arrive at an imaging-specific PE likelihood recommendation and may accurately be generalized to new population distributions,” the researchers wrote. “The findings of this study suggest that this model may be used as an automated [clinical decision support tool] to improve use of PE-CT imaging in referred patients.” – by Melissa Foster

Disclosures: Banerjee reports no relevant financial disclosures. Please see the study for all other authors’ relevant financial disclosures.
