March 01, 2019
5 min read

Artificial Intelligence: Changing the Face of Colorectal Screening


With a simple gaming computer and video hook-up, experimental versions of artificial intelligence have become a part of our regular colonoscopy experience at the University of California, Irvine, as we work to optimize performance and interface.

Today, most of our diagnostic colonoscopy rooms are running AI every day, surveying users for interface enhancements, and validating AI output against what is reported by the physician.

William E. Karnes

It is our hope that AI will alleviate the documentation and reporting burden felt by endoscopists, raise our overall adenoma detection rate (ADR) and lessen costs for pathology. While we have focused our initial efforts on colonoscopy, expansion of AI to other areas of endoscopy is happening quickly.

Ultimately, we hope to show that AI can improve the quality of our services, reduce costs, increase revenue and improve our quality of life as physicians.

Polyp Identification

It’s well known that ADR is inversely related to the risk for interval cancers after colonoscopy, and colonoscopy is the single best test we have for preventing colon cancer. If we can all raise our ADRs to the true prevalence of adenomas, believed to be about 50%, then the risk for colon cancer in people who receive appropriately timed surveillance colonoscopies could be reduced by 90% or more.

Thus far, in our experimental work, we have been able to show that overlay of our polyp detection algorithm during video review by expert colonoscopists further improves the detection of polyps by 20%. In the end, we hope use of this technology in practice will improve ADR toward true adenoma prevalence and therefore, prevent a lot of colon cancers.

Optical Pathology

Our optical pathology algorithm has shown about 94% accuracy overall using standard colonoscopes and optics in our center. We have demonstrated that light source does not significantly impact the outcome, making this AI tool potentially useful no matter your scope of choice.

Looking specifically at the ‘leave-alone’ concept, in which you choose to leave a small polyp of 5 mm or less in the rectum or rectosigmoid, our experimental optical pathology AI achieves a 97% negative predictive value for adenomas. This is well above the 90% threshold required to say, ‘You’re not an adenoma. I’ll just leave you alone.’

Elsewhere in the colon, for diminutive polyps, an accurate optical AI pathology system could support ‘resect-and-discard.’ If the surveillance interval you’d recommend based on AI optical pathology is more than 90% concordant with the interval based on the pathologist’s report, diminutive polyps could be removed and discarded without histology. Doing so could potentially save a billion dollars a year in pathology costs.
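The concordance criterion can be sketched in a few lines. This is a hypothetical illustration, not the study methodology: `interval_concordance` and the sample intervals are invented names and data showing how agreement between AI-based and pathologist-based surveillance recommendations might be tallied against the 90% bar.

```python
# Hypothetical sketch of the >90% concordance check behind 'resect-and-discard':
# compare the surveillance interval implied by AI optical pathology with the
# interval implied by the pathology report, case by case.

def interval_concordance(ai_intervals, path_intervals):
    """Both arguments are lists of recommended surveillance intervals in years."""
    assert len(ai_intervals) == len(path_intervals)
    matches = sum(a == p for a, p in zip(ai_intervals, path_intervals))
    return matches / len(ai_intervals)

# Invented example: 9 of 10 recommendations agree, meeting the 90% threshold.
ai_rec   = [10, 5, 10, 3, 10, 10, 5, 10, 10, 10]
path_rec = [10, 5, 10, 5, 10, 10, 5, 10, 10, 10]
rate = interval_concordance(ai_rec, path_rec)
print(rate, rate >= 0.90)  # 0.9 True
```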


Our AI pathology algorithm performs at high standards on images, but is unproven during live colonoscopy. We are looking forward to the multicenter clinical studies to show that AI-assisted real-time optical pathology meets the criteria for ‘leave-alone’ and ‘resect-and-discard.’

Documentation, Reporting

Endoscopists have increasing documentation obligations, including reporting withdrawal times, prep quality, cecal intubation rate, ADR, etc. These are all quality measures reportable to CMS and will determine our reimbursement.

ADR is particularly burdensome. Currently, we wait for the pathology report, enter the reports in an Excel spreadsheet or some other database to calculate our ADR and then later report that to CMS, typically through a paid registry such as GIQuIC.

What if AI could not only help us find polyps but also tell us when a polyp is an adenoma? This could provide the potential for automated ADR.
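The calculation AI would automate is straightforward: ADR is the fraction of screening colonoscopies in which at least one adenoma is found. The sketch below is a hypothetical illustration with invented data, assuming per-procedure polyp labels confirmed as adenoma or not.

```python
# Minimal sketch of the ADR calculation that AI-confirmed optical pathology
# could automate. Each procedure is represented by the list of polyp labels
# found during that screening colonoscopy.

def adenoma_detection_rate(procedures):
    """Fraction of screening colonoscopies with at least one adenoma."""
    if not procedures:
        return 0.0
    with_adenoma = sum(1 for polyps in procedures if "adenoma" in polyps)
    return with_adenoma / len(procedures)

# Invented example: 3 of 4 screening colonoscopies had at least one adenoma.
cases = [["adenoma"], [], ["hyperplastic", "adenoma"], ["adenoma", "adenoma"]]
print(adenoma_detection_rate(cases))  # 0.75
```

With adenoma calls made in real time, this number could be recorded at the point of care instead of reconstructed later from pathology reports and spreadsheets.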

We are developing linkages between our AI output and report-writing software. Our objective is that AI will automatically write a colonoscopy report with auto-labeled images and automatically record quality measures. Additionally, integration with GIQuIC is a given because our output is already aligned with their format. We anticipate that AI will improve the quality of our documentation and reduce the burden of reporting to CMS.

We hope to make AI for colonoscopy documentation available very soon. As a documentation assistant, it does not affect clinical decision-making and poses no risk to the patient, because it does not relieve us of our responsibility to ensure the report’s accuracy.

Figure 1. These images depict the use of AI in the detection, assessment and removal of polyps.
Source: William E. Karnes, MD

Beyond Traditional Colonoscopy

AI is not just for colonoscopy. We are also exploring AI for upper endoscopy and capsule endoscopy.

For example, AI may help endoscopists better evaluate Barrett’s esophagus. Few experts have the eye to look at Barrett’s and know where to target their biopsies to find dysplasia or early cancer. Instead, we resort to multiple random biopsies and/or brushing in the hope that areas of dysplasia or cancer are not missed.

We are developing an algorithm that shows great promise at finding dysplasia within Barrett’s. If this proves true, we anticipate a future of visual notification indicating where to target biopsies. Hopefully, it will reduce dysplasia miss-rates and minimize biopsies.

Another pain point where AI could potentially shine is capsule endoscopy, which involves review of up to 14 hours of video. Though it is possible to review these videos in an hour or two, it is still not something most gastroenterologists enjoy, and lesions can be missed. There are data to suggest capsule reviewers miss 20% of potentially significant findings when reading these long studies.


Imagine being able to identify potentially significant findings in a matter of minutes with AI.

We’ve developed an algorithm to detect abnormalities on capsule endoscopy. The prototype is highly accurate at separating abnormal from normal video frames. Running capsule videos through the AI reduces the total number of frames to be reviewed by up to 90%, with a promising false negative rate: so far it has missed no significant findings, though it has occasionally missed individual frames of a finding.
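The triage idea can be sketched simply: a frame-level classifier scores each frame, and only frames above a threshold are surfaced for human review. This is a hypothetical illustration with invented scores, not the prototype’s actual pipeline or threshold.

```python
# Illustrative sketch of capsule-frame triage: keep only the frames the
# classifier flags as possibly abnormal, so the reviewer sees a small
# fraction of the original video.

def triage(frame_scores, threshold=0.5):
    """frame_scores: per-frame abnormality probabilities. Returns flagged indices."""
    return [i for i, score in enumerate(frame_scores) if score >= threshold]

# Invented scores for 10 frames; two exceed the threshold.
scores = [0.02, 0.01, 0.97, 0.03, 0.88, 0.04, 0.02, 0.05, 0.01, 0.03]
flagged = triage(scores)
reduction = 1 - len(flagged) / len(scores)
print(flagged, f"{reduction:.0%} fewer frames to review")  # [2, 4] 80% fewer frames to review
```

The trade-off lives in the threshold: lowering it reduces the chance of missing a finding at the cost of more frames to review.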

We are working on getting the AI capsule reading service proven and to market as soon as possible.

Changing Practice Soon

We have been working on multiple algorithms – times of insertion and withdrawal, cecum identification, Boston bowel prep score, polyp detection, polyp size, tools used, Mayo Endoscopic Score and optical pathology. We have been able to run all of these algorithms simultaneously in real time during high-definition colonoscopy at 60 frames per second with no perceptible lag or dropped frames. The output is simply overlaid on what we are watching while performing colonoscopy.
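The constraint behind “no perceptible lag” is a per-frame time budget: at 60 frames per second, every overlaid algorithm combined must finish within 1000/60 ≈ 16.7 ms. The sketch below is a back-of-the-envelope check with invented latency numbers; the model names and timings are hypothetical, not measurements from our system.

```python
# Back-of-the-envelope real-time budget check: at 60 fps, all overlaid
# algorithms together must complete within one frame interval.

FPS = 60
budget_ms = 1000 / FPS  # ~16.7 ms per frame

# Hypothetical per-algorithm inference latencies in milliseconds.
latencies_ms = {
    "polyp_detection": 6.0,
    "optical_pathology": 4.0,
    "bowel_prep_score": 1.5,
    "cecum_identification": 1.0,
    "tool_detection": 2.0,
}
total = sum(latencies_ms.values())
print(f"{total:.1f} ms used of a {budget_ms:.1f} ms budget")  # 14.5 ms used of a 16.7 ms budget
```

Exceeding the budget forces either dropped frames or a visible lag between the endoscope image and the overlay, which is why per-model latency matters as much as accuracy here.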

We’ve been able to validate each of these algorithms on images and, in many cases, on videos, and we’ve presented the accuracy of each of these algorithms by themselves. We have an NIH-funded grant to optimize the interface as we continuously improve AI performance. The technology is updated every 10 days. Once fully optimized, we will freeze the version and start multicenter clinical trials this year.

We cannot yet claim that our polyp detection AI improves ADR, or that our AI-assisted optical pathology supports “leave alone” or “resect and discard.” These claims will require FDA clearance after appropriate validation.

Disclosure: Karnes is the cofounder and Chief Medical Officer of Docbot, Inc.