Deep learning useful in diabetic retinopathy screening

Deep learning opens a world of opportunities “in terms of making completely new scientific discoveries” because “you don’t have to have the rules in mind before training the model,” Lily Peng, MD, PhD, said during the keynote presentation at the Focus on Eye Health National Summit in Washington.

The technology fits well in medicine and health care, where the growth of electronic records and picture archiving and communication systems has produced an enormous amount of data to sift through, Peng said.

She works at a Google research branch, applying deep learning to medical data.

Deep learning is a type of machine learning that has proven remarkably effective in the past few years, but it is not a new science, she explained. “It’s based on artificial neural networks that have been around since the 1960s.”

“Neural nets” are a collection of simple, trainable units that are organized in layers, she said.

“These units are mathematical equations that take in numbers, make a computation and output numbers that are taken in by another layer, a set of equations that are strung together. It’s not magic, it’s just a lot of math,” she said.
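Peng’s description maps almost directly onto code. Below is a minimal NumPy sketch of the idea, with illustrative layer sizes; the ReLU nonlinearity stands in for the “computation” each unit makes, and none of this reflects Google’s actual models.

```python
import numpy as np

def layer(x, weights, bias):
    """One layer of simple, trainable units: a weighted sum of the
    inputs passed through a nonlinearity (ReLU). Just equations."""
    return np.maximum(0.0, x @ weights + bias)

rng = np.random.default_rng(0)

# A toy three-layer net: numbers in, numbers out, layer by layer.
x = rng.normal(size=4)                                  # input numbers
h1 = layer(x,  rng.normal(size=(4, 8)), np.zeros(8))    # first layer
h2 = layer(h1, rng.normal(size=(8, 8)), np.zeros(8))    # second layer
out = h2 @ rng.normal(size=(8, 1))                      # output number
print(out)
```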

Deep learning is also helpful where expertise is limited, for example, in areas with too few health care providers, Peng continued.

Her group is partnering with eye doctors in India to screen for diabetic retinopathy. India has a shortage of 127,000 eye doctors, and about half of all patients are found to have preventable vision loss, Peng said.

“As this is preventable, this is an unacceptable state for many health care providers in India,” she added.

The project hired 54 U.S. ophthalmologists to grade 135,000 fundus images acquired in India; they rendered about 80,000 diagnoses, she said. Because of physician variability in grading, some images needed to be graded three or four times to reach a proper majority decision.
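To illustrate what such majority adjudication could look like, here is a small Python sketch that takes the grades assigned to one image and returns the strict-majority grade, flagging images that need another read. The tie-handling policy is an assumption for illustration, not the study’s actual protocol.

```python
from collections import Counter

def majority_grade(grades):
    """Return the strict-majority 5-point grade for one image,
    or None if no grade wins a majority (image needs another read)."""
    grade, count = Counter(grades).most_common(1)[0]
    return grade if count > len(grades) / 2 else None

print(majority_grade([2, 2, 3]))  # -> 2: two of three graders agree
print(majority_grade([1, 2, 3]))  # -> None: no majority, grade again
```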

They fed the images and data into a popular neural network architecture called Inception, which Peng said Google Photos uses within its image search functionality.

“It worked out pretty well,” she said. “We asked the neural net to give us one of the 5-point grades for the images, to tell us the field of view and whether it was an image of a left or right eye.”
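As a rough sketch of how such a multi-output setup could be wired together, the Keras code below attaches three prediction heads to the publicly available InceptionV3 architecture. The head sizes for field of view and laterality are illustrative assumptions; this is not the published model.

```python
import tensorflow as tf

# Shared Inception-v3 feature extractor (ImageNet weights as a starting point).
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
features = tf.keras.layers.GlobalAveragePooling2D()(base.output)

# One head per question the net is asked.
grade = tf.keras.layers.Dense(5, activation="softmax", name="dr_grade")(features)
field = tf.keras.layers.Dense(3, activation="softmax", name="field_of_view")(features)  # 3 fields assumed
side = tf.keras.layers.Dense(1, activation="sigmoid", name="left_or_right")(features)

model = tf.keras.Model(base.input, [grade, field, side])
model.compile(
    optimizer="adam",
    loss={"dr_grade": "sparse_categorical_crossentropy",
          "field_of_view": "sparse_categorical_crossentropy",
          "left_or_right": "binary_crossentropy"})
```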

They published results from their data in JAMA (Gulshan, et al.) in 2016 and Ophthalmology (Krause, et al.) in 2018.

“We trained the neural net on a majority decision [based on the ophthalmologists’ findings], and the net was on par with the majority decision of the ophthalmologists,” Peng said.

The group also uses “show me where” functionality to learn where the neural net is looking within an image to make its decision.

“Show me where” is like a heat map highlighting the areas of the picture that are relevant to the neural net and to the question being asked, she explained.

To illustrate this, Peng showed photos of two dogs, an Afghan hound and a Pomeranian. Using show me where, the neural net highlights the characteristics that are unique to the different pets, such as the Pomeranian’s short snout and the Afghan hound’s long hair and narrow face.
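One common way to build such a heat map is plain gradient saliency: take the gradient of a class score with respect to the input pixels and display its magnitude. The TensorFlow sketch below shows that technique as a stand-in; it is not necessarily the attribution method Peng’s group used.

```python
import tensorflow as tf

def saliency_map(model, image, class_index):
    """Heat map of which pixels most influence one class score.
    Assumes `model` is a single-output image classifier."""
    image = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)
        score = model(image)[0, class_index]
    grads = tape.gradient(score, image)              # d(score) / d(pixels)
    heat = tf.reduce_max(tf.abs(grads), axis=-1)[0]  # collapse color channels
    return heat / (tf.reduce_max(heat) + 1e-8)       # normalize to [0, 1]
```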

They applied this technology to fundus photography and found that the neural nets focused on small microaneurysms and subtle hemorrhages to arrive at a conclusion, Peng said.

It was also adept at ignoring artifacts on the image that could interfere with the diagnosis, such as a corneal reflection or a dust spot.

She said the process has been a good start.

“There’s still a lot to do on the path of adoption, and we really need to get the details right,” she said.

The group continues to work with providers in India, EyePACS, the Ministry of Health in Thailand and regulatory agencies to demonstrate that the technology is safe and effective, she said.

“If the doctors can’t use it, if it’s too slow and doesn’t incorporate well with existing screening systems, then it won’t be used, and that means it’s useless,” Peng added.

Her team is also working with hardware makers such as Nikon, Optos and Verily Life Sciences to increase ease of use.

“Taking a picture shouldn’t be a barrier to screening,” she said.

Many of the artificial intelligence tools are open source and available for use; Peng reported an increase in research in this space.

“We think it’s really important because what is really going to change and move the needle is if many people are working on the problems and we are building this together,” she said. – by Abigail Sutton

References:

Peng L. Deep learning for retinal imaging. Presented at: Focus on Eye Health National Summit: Research to Impact. Washington; July 18, 2018. Accessed via live webcast.

Gulshan V, et al. JAMA. 2016;doi:10.1001/jama.2016.17216.

Krause J, et al. Ophthalmology. 2018;doi:10.1016/j.ophtha.2018.01.034.

Disclosure: Peng is employed by Google.
