Disclosures: Du-Harpur reports she is a clinical advisor to Skin Analytics. Please see the study for all other authors’ relevant financial disclosures.
November 09, 2020

Researchers find flaws in convolutional neural networks for melanoma diagnosis


Vulnerabilities in convolutional neural networks used to assess melanoma lesions can lead to misclassification and misdiagnosis, according to a letter to the editor in the Journal of Investigative Dermatology.

“Convolutional neural networks (CNNs) are a class of deep-learning systems that are highly effective for classifying and analyzing image data,” Xinyi Du-Harpur, MA, MBBChir, MRCP, of the Centre for Stem Cells and Regenerative Medicine at King’s College London, the Francis Crick Institute and St John’s Institute of Dermatology, Guy’s Hospital, London, and colleagues wrote. “For skin cancer diagnosis, it has been claimed that CNNs can perform at a level of accuracy approaching that of a dermatologist.”

The researchers acknowledged the necessity of applying novel diagnostic approaches in the clinical setting but noted that it is “imperative” to understand “potential failure modes for CNN classifiers.”

Implementing a CNN classifier to distinguish melanoma from benign melanocytic naevi revealed some of the approach’s vulnerabilities. To confirm these were not specific to one model, the group retrained the classifier on different data and observed the same vulnerabilities.

In short, the researchers found that CNN architectures may be less effective than the human eye at generalizing from training images to novel data. They reported that “CNNs can be misled into incorrect classifications by artificially perturbing natural-world images.”

In the letter, the researchers cited the example of a network misclassifying a panda as a gibbon, a phenomenon known as an “adversarial attack.” They then described two novel types of adversarial attack: alterations in color balance and alterations in the rotation of the input image, both of which can result in misclassification.
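Perturbations of this kind are simple to produce programmatically. The sketch below is illustrative only (it is not the authors’ code): it applies a color-balance shift and a rotation to a toy RGB image represented as nested lists. Transformations like these leave a lesion recognizable to a human observer but can push an input outside the distribution a CNN was trained on.

```python
# Illustrative sketch (not from the letter): two simple input
# perturbations of the kind the researchers describe. A real
# adversarial challenge would apply these to dermoscopic photos
# and check whether the CNN's predicted class flips even though
# a clinician's judgement would not.

def shift_color_balance(image, factors):
    """Scale each RGB channel by a factor, clipping to [0, 255].

    `image` is a nested list: rows x columns x 3 channels.
    `factors` is a (red, green, blue) tuple of multipliers.
    """
    return [
        [
            [min(255, int(px[c] * factors[c])) for c in range(3)]
            for px in row
        ]
        for row in image
    ]


def rotate_90(image):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*image[::-1])]


# Toy 2x2 RGB "image" standing in for a dermoscopic photo.
img = [
    [[100, 150, 200], [10, 20, 30]],
    [[0, 0, 0], [255, 255, 255]],
]

warm = shift_color_balance(img, (1.2, 1.0, 0.8))  # warmer color cast
rotated = rotate_90(img)                          # same lesion, new pose
```

This also suggests why, as the study notes, images from different devices (with different built-in color processing) can change a CNN’s accuracy: each camera effectively applies its own color-balance perturbation.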

Findings from the study showed that when CNNs were tested on imaging data that were not represented in training sets, classification accuracy suffered. Other findings showed that accuracy of skin cancer diagnosis using CNNs varied depending on whether the images were taken using an iPhone, a Samsung device or a digital camera. These differences may be explained, in part, by variations in color balance, according to the researchers.

Clinicians are becoming increasingly aware that CNN architectures are limited. It is for this reason that the researchers stressed the necessity of improving CNN robustness to variations in color balance and in the translation or rotation of images used in skin cancer diagnosis. They added that it will be important to explore further strategies to mitigate these types of adversarial attacks.

One strategy to mitigate this effect is to retrain models with generated adversarial images. “Finally, it is essential to verify that other applications of artificial intelligence in medical imaging are robust to similar perturbations both through adversarial challenges as described above and standardization of image acquisition in the clinical setting,” the researchers wrote.
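The retraining strategy can be sketched as a data-augmentation step. The code below is a hedged illustration rather than the authors’ method: each labelled image in a training set is duplicated with randomly chosen perturbations applied (for example, color shifts or rotations), so the classifier sees these variations during training while the diagnostic label stays the same.

```python
# Illustrative sketch (assumed, not the authors' implementation):
# augment a labelled training set with perturbed copies of each image.
import random


def augment_with_perturbations(dataset, perturbations, copies_per_image=2):
    """Return the dataset plus perturbed copies of each labelled image.

    `dataset` is a list of (image, label) pairs. `perturbations` is a
    list of functions mapping an image to a perturbed image (e.g. a
    color-balance shift or a rotation). Labels are kept unchanged:
    a perturbed lesion is still the same lesion.
    """
    augmented = list(dataset)
    for image, label in dataset:
        for _ in range(copies_per_image):
            perturb = random.choice(perturbations)
            augmented.append((perturb(image), label))
    return augmented


# Toy usage with integers standing in for images and a single
# hypothetical perturbation function.
toy_data = [(1, "benign"), (2, "melanoma")]
toy_perturbations = [lambda x: x + 10]
bigger_set = augment_with_perturbations(toy_data, toy_perturbations,
                                        copies_per_image=1)
```

In practice the perturbation functions would be image transforms, and the augmented set would feed back into CNN training so the model learns to classify the perturbed variants correctly.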