Pandemic spurs paradigm shift in artificial intelligence
The pandemic has accelerated digitalization in all fields, including health care.
Data, artificial intelligence, digital health systems and connectivity have been aiding the fight against COVID-19 in multiple ways, uncovering new possibilities and showing a clear road map of how AI can be integrated into the health care ecosystem to enhance safety, efficiency and effectiveness, and ultimately improve quality of patient care.
“This pandemic has put health care under stress but has also facilitated the analysis of where we are and what we are doing. It has been a powerful and beautiful wake-up call to see that the management not just of the disease, but of the patient, can be improved,” Ursula Schmidt-Erfurth, MD, PhD, professor and chair of the department of ophthalmology at University Eye Hospital, Vienna, said.
Digital methods enable new ways for patients to receive care and will retain their validity beyond the COVID-19 emergency.
“Significant efforts in the AI and big data space are already underway, and the pandemic has made us aware that a rapid acceleration in the pace of adoption of AI is mandatory,” she said.
“The immediate use and successful application of AI to tackle a major, global public health challenge in 2020 will likely increase the public and governmental acceptance of such technologies for other areas of health care, including chronic disease, in the future,” Daniel Shu Wei Ting, MD, PhD, assistant professor and head of AI and digital innovation of ophthalmology at Singapore National Eye Center, said. He is also one of the executive committee members of the American Academy of Ophthalmology AI task force and the STARD-AI task force.
A crisis can provide an opportunity, and this great crisis of 2020 provides a great opportunity for digital technology.
Optimize screening, monitoring, treatment
Digital methods of data analysis allow for remote screening, diagnosis and monitoring of patients, a great asset in the course of a pandemic, but also an opportunity under normal circumstances. They also ease referral processes from primary to tertiary care through the sharing and exchange of images.
“There are multiple opportunities with multiple advantages for the patients and the entire health care system. Starting from the first step of screening, we need efficient methods to identify early disease. This is particularly true in retina, where early detection and early treatment are key for good vision outcomes,” Schmidt-Erfurth said.
An estimated 200 million people worldwide are affected by early age-related macular degeneration, 300 million have diabetes, and 75% of those will develop diabetic eye disease. Screening is an enormous task that can only be met by systems of automated image analysis.
Another goal is to provide real-world treatment outcomes comparable to those of clinical trials.
“There is currently a gap, mostly due to undertreatment, and this means that we need to optimize monitoring frequency and precision, measuring the therapeutic response in terms of fluid resolution and fluid recurrence with objective, accurate and standardized methods,” Schmidt-Erfurth said.
Biomarkers of AMD progression
Schmidt-Erfurth and a team at the Medical University of Vienna pioneered AI in ophthalmology by developing AI algorithms as early as 2013.
“We established a huge AI laboratory for image analysis. State funding allowed us to set up an interdisciplinary team of international computer science and retinal imaging experts who developed more than 20 validated deep learning algorithms for the identification and quantification of disease biomarkers,” Schmidt-Erfurth said.
To predict disease progression and monitor the effects of pharmacologic intervention, an algorithm was designed for fully automated detection and quantification of intraretinal and subretinal fluid.
“The inability to reliably identify, localize and quantify fluid on OCT results in variability in injection rates, often leading to undertreatment. The introduction of AI-based algorithms may allow retina specialists everywhere in the world to detect, localize and quantify fluid in a fast, reliable and automated manner, leading to better outcomes and health care savings,” she said.
Both supervised and unsupervised learning are used in the search of biomarkers. In supervised learning, the intelligent system is instructed to search for biomarkers that are already known, such as fluid, atrophy or drusen. In unsupervised learning, the machine screens large data sets and recognizes patterns of micro-changes that are not visible by observation and were never identified before.
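The contrast between the two approaches can be sketched in a few lines of Python. Everything below is synthetic and purely illustrative: the feature vectors, labels and off-the-shelf models stand in for the far larger OCT pipelines described here and are not the group's actual methods.

```python
# Illustrative sketch: supervised vs. unsupervised learning on synthetic
# "retinal feature" vectors (hypothetical measurements, not real OCT data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two synthetic groups of per-scan feature vectors.
healthy = rng.normal(loc=0.0, scale=0.5, size=(100, 4))
diseased = rng.normal(loc=2.0, scale=0.5, size=(100, 4))
X = np.vstack([healthy, diseased])

# Supervised: labels for a known biomarker (disease present or not) guide training.
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)
supervised_acc = clf.score(X, y)

# Unsupervised: no labels; the model groups scans by patterns it finds itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Cluster IDs are arbitrary, so check agreement under both labelings.
agreement = max((clusters == y).mean(), (clusters != y).mean())

print(f"supervised accuracy: {supervised_acc:.2f}")
print(f"cluster/label agreement: {agreement:.2f}")
```

The supervised model can only find what it was told to look for; the clustering step, like the unsupervised screening described above, groups scans without being told what the groups mean.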
“This will allow us to eliminate previous bias in biomarker search, broaden the spectrum of relevant biomarkers and identify features that might shed new light on the pathogenesis of retinal diseases. It will also help identify new therapeutic targets, which will orient our research and development of new therapies,” Schmidt-Erfurth said.
Detection of DR
By 2040, approximately 600 million people will have diabetes. Screening for diabetic retinopathy, a leading cause of visual loss, is a widely recommended strategy to prevent diabetes-related visual impairment. Early detection of DR also prompts early education and systemic intervention to optimize glycemic and other vascular risk factor control before further complications develop. Many DR screening services worldwide, however, are constantly challenged by manpower and financial constraints. Using deep learning and cross-sectional training and testing data sets collected worldwide, researchers at the National University of Singapore developed a deep learning system, called SELENA, for the detection of diabetic retinopathy, glaucoma suspect and AMD.
“Deep learning has sparked the medical imaging field since 2016. It is an extremely powerful machine learning technique that has overcome many technical unmet needs in image recognition, speech recognition and natural language processing. Based on a data set of nearly 500,000 retinal images, SELENA has excellent diagnostic performance in detecting DR, with an area under the curve of 0.93, 91% sensitivity and 90% specificity. This is a multicenter AI collaborative research effort with close to 30 co-investigators worldwide. Second, it is capable of detecting the prevalence of any DR, referable DR and [vision-threatening] DR and the DR-associated vascular risk factors in a much shorter grading time of the retinal images, 2 months vs. 2 years,” Ting said.
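The three reported metrics summarize different aspects of a screening test, and it is easy to see how each is computed. The sketch below uses simulated scores and labels, not SELENA's data; the numbers it produces are illustrative only.

```python
# How sensitivity, specificity and AUC are computed, on simulated scores
# (hypothetical screening output, not SELENA's data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = np.array([0] * 500 + [1] * 500)        # 0 = no referable DR, 1 = referable DR
scores = np.concatenate([rng.beta(2, 5, 500),   # healthy eyes: mostly low scores
                         rng.beta(5, 2, 500)])  # diseased eyes: mostly high scores

threshold = 0.5
y_pred = scores >= threshold
tp = np.sum((y_pred == 1) & (y_true == 1))
fn = np.sum((y_pred == 0) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))

sensitivity = tp / (tp + fn)         # fraction of diseased eyes flagged
specificity = tn / (tn + fp)         # fraction of healthy eyes cleared
auc = roc_auc_score(y_true, scores)  # threshold-independent summary
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```

Sensitivity and specificity depend on the chosen threshold, while the AUC summarizes performance across all thresholds, which is why papers typically report all three.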
The generalizability of SELENA was demonstrated in multiple countries, including Singapore, Australia, the United States, Mexico, China and Hong Kong, as well as in a low- to middle-income African population in Zambia. It has now been approved by the Singapore Health Sciences Authority and has received a European CE mark as a fundus-based retinal screening device for DR, glaucoma suspect and AMD. The technical integration of SELENA is now complete, and the system has been tested clinically for operational flow, with real-world deployment estimated for 2021. It is also listed as part of Singapore’s national AI strategy.
“In Singapore, we have been integrating the AI system into the Singapore Integrated Diabetic Retinopathy Programme since 2018. In fact, in a paper published in Lancet Digital Health, we also showed that the combination of human intelligence and AI yielded the best outcome from the health economic standpoint. We are expecting to see the patients’ outcomes in the next 3 to 5 years,” Ting said.
“Apart from fundus-based screening technologies, the team is actively researching various other clinical diseases (eg, myopia, systemic vascular diseases), imaging modalities (eg, OCT, genomics) and novel technical methods (eg, generative adversarial networks and explainable AI) to increase the diversity of the training and testing data sets and the explainability of the AI algorithms,” he said.
ROP diagnosis and severity
Automated image analysis and deep learning systems have the potential to overcome the multiple challenges of screening for retinopathy of prematurity, leading to improved and better targeted care, according to J. Peter Campbell, MD, MPH, assistant professor at Oregon Health & Science University (OHSU).
“There are a lot of babies who need to be screened, and ROP screening is inefficient in that sense because maybe 80% to 90% of the babies you screen do not need any sort of intervention. It is a stressful exam, usually performed in the neonatal ICU. Babies respond with slow heart rate and slow breathing and need careful monitoring. It is done by indirect ophthalmoscopy, the same way it was 60 years ago, and evaluation is subjective: Clinicians looking at the same baby, or picture of a baby, often don’t agree on what they are seeing,” he said.
As a result, infants with the same level of disease might be treated differently by different clinicians, based on subjective perceptions of disease severity. This runs the risk of overtreating infants who do not need treatment and of undertreating infants who do, treating them too late or not at all, leading to further complications, including blindness.
Plus disease is the most important clinical feature determining the need for treatment for ROP, but subjective biases also affect its diagnosis and measurement. A collaborative team from OHSU, Harvard, Northeastern University and the University of Illinois at Chicago developed a deep learning system that classifies plus disease into the three categories defined by the International Classification of Retinopathy of Prematurity: no plus, pre-plus and plus.
“A deep convolutional neural network (CNN) was trained using a data set of 5,511 retinal images. Each image had previously been assigned a reference standard diagnosis combining the image-based diagnoses of three independent expert graders and the clinical diagnosis of a specialist. The system was able to classify unseen data as plus, pre-plus or no plus as accurately as, or more consistently than, international ROP experts,” Campbell said.
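As a rough illustration of the kind of model described, here is a minimal three-class CNN in PyTorch. The architecture (`TinyPlusNet`) is hypothetical and far smaller than any published plus disease classifier; it only shows the shape of the problem, namely mapping a retinal image to logits over the three categories.

```python
# Minimal 3-class CNN sketch (hypothetical architecture, for illustration only;
# the published system's architecture and training details differ).
import torch
import torch.nn as nn

class TinyPlusNet(nn.Module):
    def __init__(self, n_classes: int = 3):  # no plus, pre-plus, plus
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # global average pooling
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)      # (batch, 16)
        return self.classifier(h)            # raw logits, one per class

model = TinyPlusNet()
batch = torch.randn(4, 3, 64, 64)            # 4 fake RGB fundus images
logits = model(batch)
print(logits.shape)                          # torch.Size([4, 3])
```

In practice such a network would be trained against the reference standard labels the article describes, with the highest logit giving the predicted category.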
AI was also used to develop an ROP severity score, running from 1 to 9. This tool enables quantitative disease monitoring and risk prediction, can help in the assessment of treatment response and post-treatment recurrences, and can also be used to collect and compare epidemiologic data.
Predicting glaucoma progression early
Artificial intelligence applied to detection of apoptosing retinal cells (DARC), an imaging method that tracks retinal neuron apoptosis in vivo, has shown the ability to predict glaucoma progression 18 months in advance.
“We have now an AI-aided biomarker for predicting glaucoma progression, with potentially wide clinical application and research application in the testing of new drugs,” M. Francesca Cordeiro, MD, PhD, chair and professor of ophthalmology at Imperial College London, said.
Retinal ganglion cell apoptosis is one of the earliest hallmarks of glaucoma, and DARC “opens a window” into the degenerative processes triggered by the disease at a cellular level.
“By confocal scanning laser ophthalmoscopy, after injection of fluorescently labeled annexin V, we are able to observe individual nerve cells dying in the living eye at the early stages of glaucoma, many years before any visual field changes occur,” Cordeiro said.
A drawback of DARC is that trained observers are needed to detect and manually count the individual apoptosing retinal cells, which appear as annexin-positive hyperfluorescent spots in the retina. To enable faster, objective measurement of DARC, a CNN-aided algorithm was trained and validated using candidate DARC spots identified by at least two of five trained observers. When applied to a cohort of glaucoma patients, it accurately measured signs of cell damage 18 months before changes were detectable on OCT.
“We were also able to establish a precise threshold value because every single patient who had a DARC count above 30 went on to progress 18 months later,” Cordeiro said.
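The quoted cutoff amounts to a simple decision rule, which can be written out directly. In the sketch below the threshold of 30 comes from the quote; the patient IDs and counts are hypothetical.

```python
# The quoted decision rule as code: a DARC count above 30 flagged every patient
# who went on to progress 18 months later. Patient data here is hypothetical.
DARC_THRESHOLD = 30

def predicts_progression(darc_count: int) -> bool:
    """Flag eyes whose DARC spot count exceeds the reported cutoff."""
    return darc_count > DARC_THRESHOLD

# Hypothetical cohort: (patient ID, DARC count)
cohort = [("P01", 12), ("P02", 34), ("P03", 29), ("P04", 45)]
flagged = [pid for pid, count in cohort if predicts_progression(count)]
print(flagged)  # ['P02', 'P04']
```

A single-threshold biomarker like this is attractive precisely because it is so simple to apply and audit in a trial setting.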
Such a powerful biomarker could speed up clinical trials of neuroprotective drugs, a promising frontier that has never been properly explored because the disease’s slow progression requires many years of follow-up to show changes.
“Something we have been lacking in neuroprotection is good measures of how quickly people respond to successful treatment. Now we can shorten study times and set up smaller concept trials to then go on to larger ones when we have proved the efficacy,” she said.
Detecting systemic disease from the retina
AI-empowered DARC is now being tested as a method to rapidly detect cell damage caused by other neurodegenerative conditions, including AMD, multiple sclerosis, Parkinson’s disease and dementia.
“As an extension of the brain, the retina provides a platform from which to study diseases of the nervous system. In many neurodegenerative conditions, early diagnosis is often challenging due to the lack of tests with high sensitivity and specificity. Retinal biomarkers in vivo are an additional diagnostic tool which may avoid the use of brain scans and other invasive tests,” Cordeiro said.
“In a recent New England Journal of Medicine paper, we showed that a deep learning algorithm is effective in detecting papilledema that could be due to space-occupying lesions in the brain. In another paper, in Lancet Digital Health, we demonstrated the possibility of using deep learning to screen for referable chronic kidney disease. These are some of the noninvasive AI-based alternatives that could be considered in resource-constrained settings, especially in low- to middle-income countries,” Ting said.
Retinal analysis in the future will play an important role in other medical fields, such as internal medicine, endocrinology and neurology, according to Schmidt-Erfurth.
“Even in a simple color photograph of the retina, algorithms can identify age and hypertension and can measure the blood glucose level in a noninvasive, inexpensive way. There is a completely new horizon opening here. Automated algorithms can be used as a triage and screening tool by general practitioners and by non-medical professionals such as opticians and optometrists. They can help identify disease onset much earlier and organize specialist referral in a reliable, efficient and timely manner,” she said.
Transitioning from studies to clinical practice
The next challenge with AI in medicine is to translate the results of studies into practice.
“We are continuing to validate the technology while we seek regulatory approval and a pathway to clinical implementation. The FDA has assigned breakthrough status, which shortens the process, but there is still some way to go. In the U.S., AI devices are treated as software medical devices, and you have to define the indications for use, the intended population, the precise camera and mechanism with which it will be used, who will be doing the interpretation and demonstrate that it works,” Campbell said.
“It is early days for AI. What we are doing is still mostly at an academic level. In our university, we have started to use the fluid quantification tool in clinical studies to evaluate how anti-VEGF therapy can be optimized in terms of visual outcome, economic burden for the system and treatment burden for the patients,” Schmidt-Erfurth said.
Implementing AI-based solutions in clinical settings is challenging and requires a concerted effort from all stakeholders, including regulators, insurers, hospital managers, IT teams, physicians and patients, according to Ting.
“It also requires a realistic business model that needs to consider reimbursement, efficiency and the ability to improve clinical performance over time,” he said.
The challenges of data sharing, ownership
In order to build a robust deep learning system, two main components are needed: the “dictionary” (the data sets) and the “brain” (the CNN). Sharing large numbers of images and data from different centers is an obvious way to increase the amount of input data for network training.
“A simple analogy will be the more you read, the cleverer you get. The one caveat is that you need to read the right books. Thus, the ground truth and data sets will need to be robust and well phenotyped for different diseases. The performance of the network will depend on the number of images, the quality of the images and how representative the data are for the entire spectrum of the disease,” Ting said.
Data sharing also faces obstacles related to the regulations and privacy rules of individual countries.
“While regulations aim to ensure patients’ privacy, they sometimes create barriers to effective research initiatives and patient care. AI research groups worldwide should continue to collaborate to overcome these barriers, aiming to harness the power of big data and deep learning to advance the discovery of scientific knowledge,” he said.
Data ownership is another critical issue.
“In the information age, data is the new oil, but the question is: Who owns the data? We see a lot of abuse coming from large IT initiatives where companies buy data without patient consent. Doctors have always been responsible for protecting patients’ records, and it is now the medical community that should take over control and establish the rules and regulations on how the patients’ personal data should be handled in medical AI,” Schmidt-Erfurth said.
Not a replacement, but a support
“Innovation always comes with disruption of the established, conventional settings. We have seen this multiple times in ophthalmology because we are very technology-dependent,” Schmidt-Erfurth said.
OCT, when it was introduced in the early 1990s, encountered a lot of resistance because ophthalmologists were skeptical about an imaging device that did not require their direct observation of the patient’s eye. Nevertheless, OCT has taken over the diagnostic field.
“We are taking the next logical step, which is to exploit the extensive imaging data set that OCT provides to train intelligent systems to detect pathological patterns and measure disease activity and therapeutic response more precisely than any person ever could do. This is the second experience in which doctors may feel they are losing control. It requires trust and a new mindset,” she said.
“We need to present new algorithms with a plausibility check that doctors can use as a decision support and not as a replacement for their own expert decision,” Schmidt-Erfurth said.
Ting said that the capabilities of deep learning, however, should not be construed as competence.
“What networks can provide is excellent performance in a well-defined task. Networks are able to classify DR and detect risk factors for AMD, but they are not a substitute for a retina specialist,” he said.
To improve clinical acceptance of deep learning systems, it is important to unravel the “black box” nature of deep learning.
“Deep learning has generated a lot of hype in the technical and medical world over the past 5 years. While it is heartening to see many robust AI algorithms in the medical field, it is more important to understand the limitations and intended use environment well to ensure successful clinical translation of the AI algorithms from the bench to bedside,” Ting said.
- Bellemo V, et al. Curr Diab Rep. 2019;doi:10.1007/s11892-019-1189-3.
- Bellemo V, et al. Lancet Digital Health. 2019;doi:10.1016/S2589-7500(19)30004-4.
- Bolón-Canedo V, et al. Comput Methods Programs Biomed. 2015;doi:10.1016/j.cmpb.2015.06.004.
- Brown JM, et al. JAMA Ophthalmol. 2018;doi:10.1001/jamaophthalmol.2018.1934.
- Campbell JP, et al. JAMA Ophthalmol. 2016;doi:10.1001/jamaophthalmol.2016.0611.
- Cheung CY, et al. Asia Pac J Ophthalmol (Phila). 2019;doi:10.22608/APO.201976.
- Milea D, et al. N Engl J Med. 2020;doi:10.1056/NEJMoa1917130.
- Normando EM, et al. Expert Rev Mol Diagn. 2020;doi:10.1080/14737159.2020.1758067.
- Sabanayagam C, et al. Lancet Digital Health. 2020;doi:10.1016/S2589-7500(20)30063-7.
- Schlegl T, et al. Ophthalmology. 2018;doi:10.1016/j.ophtha.2017.10.031.
- Schmidt-Erfurth U, et al. Ophthalmology. 2020;doi:10.1016/j.ophtha.2020.03.010.
- Schmidt-Erfurth U, et al. Prog Retin Eye Res. 2018;doi:10.1016/j.preteyeres.2018.07.004.
- Scruggs BA, et al. Transl Vis Sci Technol. 2020;doi:10.1167/tvst.9.2.5.
- Tian P, et al. Conf Proc IEEE Eng Med Biol Soc. 2016;doi:10.1109/EMBC.2016.7590948.
- Ting DSW, et al. Br J Ophthalmol. 2019;doi:10.1136/bjophthalmol-2018-313173.
- Ting DSW, et al. JAMA. 2017;doi:10.1001/jama.2017.18152.
- Ting DSW, et al. NPJ Digit Med. 2019;doi:10.1038/s41746-019-0097-x.
- Ting DSW, et al. Prog Retin Eye Res. 2019;doi:10.1016/j.preteyeres.2019.04.003.
- Ting DSW, et al. Lancet Digital Health. 2020;doi:10.1016/S2589-7500(19)30217-1.
- Xie Y, et al. Lancet Digital Health. 2020;doi:10.1016/S2589-7500(20)30060-1.
- Yap TE, et al. Cells. 2018;doi:10.3390/cells7060060.
- Yap TE, et al. Ther Adv Chronic Dis. 2019;doi:10.1177/2040622319882205.
- For more information:
- J. Peter Campbell, MD, MPH, can be reached at Casey Eye Institute, 515 SW Campus Drive, Portland, OR 97239; email: firstname.lastname@example.org.
- M. Francesca Cordeiro, MD, PhD, can be reached at UCL Institute of Ophthalmology, Bath Street, London EC1V 9EL, UK; email: email@example.com.
- Ursula Schmidt-Erfurth, MD, PhD, can be reached at Medical University Vienna, Department of Ophthalmology, Waehringer Guertel 18-20, A-1090 Vienna, Austria; email: firstname.lastname@example.org.
- Daniel Shu Wei Ting, MD, PhD, can be reached at Duke-NUS School of Medicine, 8 College Road, Singapore 169857; email: email@example.com.