June 20, 2019
2 min read

Computer-aided design in melanoma would enhance care, not replace dermatologists

Computer-aided systems detected and diagnosed melanoma with sensitivity similar to that of dermatologists; however, the real-world applicability of the technology remains unknown, according to meta-analysis results published in JAMA Dermatology.

“There is a fear that less-skilled physicians or even nonmedical personnel will use systems to deliver a service that should be restricted to dermatologists,” Vincent Dick, CandMed, member of the Vienna Dermatologic Imaging Research (ViDIR) group in the department of dermatology at Medical University of Vienna, and colleagues wrote.

Researchers searched online databases for eligible studies using the key words “melanoma,” “diagnosis,” “detection,” “computer-aided” or “artificial intelligence.” Studies were included if they investigated the accuracy of computer-aided diagnosis (CAD) systems for cutaneous melanoma in a screening setting, or in one that could translate to such a setting.

Of 1,694 potentially eligible articles, 132 were included in the qualitative analysis and 70 in the quantitative meta-analysis.

Computer vision was used in 58 articles, deep learning in 55 and hardware-based methods in 19, according to the researchers.

Fifty studies included only melanocytic lesions, whereas 67 studies also included nonmelanocytic lesions.

The median thickness of invasive melanomas ranged from 0.2 mm to 1.5 mm, the researchers reported.

In the quantitative analysis, the summary estimate for melanoma sensitivity of CAD systems was 0.74 (95% CI, 0.66-0.80) and the specificity was 0.84 (95% CI, 0.79-0.88).
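
To make these figures concrete, the following is a minimal sketch, not taken from the study, of how sensitivity and specificity are computed from a diagnostic 2 x 2 table; the lesion counts below are invented purely for illustration and happen to reproduce the pooled point estimates.

```python
# Illustrative only: the counts are hypothetical and do not come from the meta-analysis.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of melanomas the system correctly flags: TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    """Proportion of benign lesions the system correctly clears: TN / (TN + FP)."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical test set: 100 melanomas and 400 benign lesions.
tp, fn = 74, 26    # melanomas detected vs. missed
tn, fp = 336, 64   # benign lesions cleared vs. falsely flagged

print(f"Sensitivity: {sensitivity(tp, fn):.2f}")  # 0.74
print(f"Specificity: {specificity(tn, fp):.2f}")  # 0.84
```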

Studies that used proprietary test sets showed significantly higher sensitivity than those that used publicly available test sets (0.87; 95% CI, 0.82-0.91 vs. 0.57; 95% CI, 0.44-0.68), the researchers wrote. However, specificity was lower in proprietary test sets than in publicly available test sets (0.72; 95% CI, 0.63-0.79 vs. 0.91; 95% CI, 0.88-0.94).

Systems using deep learning achieved a sensitivity of 0.44 (95% CI, 0.30-0.59) and a specificity of 0.92 (95% CI, 0.89-0.95). Computer vision systems achieved a sensitivity of 0.85 (95% CI, 0.80-0.88) and a specificity of 0.77 (95% CI, 0.69-0.84).

Specificity was significantly higher with deep learning than with computer vision or hardware-based methods, the researchers found.

Existing devices and applications are not widely used despite multiple studies showing expert-level accuracy of CAD systems for melanoma, the researchers wrote.

“A potential reason for this mismatch may be that the results of the studies conducted in this field cannot be transferred directly to clinical practice,” Dick and colleagues wrote.

Additionally, many systems were not tested in the general population or as screening tools, because most clinical studies were conducted in specialized referral centers with high melanoma prevalence.

Clinical studies of automated analysis of dermoscopic images are missing from the literature, they added.

The researchers wrote that a successful CAD system would likely enhance and support dermatologists rather than replace them. – by Abigail Sutton

Disclosures: Dick reports no relevant financial disclosures. Please see the study for all other authors’ relevant financial disclosures.