ECE PhD Proposal Review: Muhamed Yildiz

October 12, 2020 @ 1:00 pm - 2:00 pm

PhD Proposal Review: Interpretable Machine Learning for Retinopathy of Prematurity

Muhamed Yildiz

Location: Zoom Link

Abstract: Retinopathy of Prematurity (ROP), a leading cause of childhood blindness, is diagnosed by clinical ophthalmoscopic examination or by reading retinal images. Plus disease, defined as abnormal tortuosity and dilation of the posterior retinal blood vessels, is the most important feature for determining treatment-requiring ROP. State-of-the-art ROP detection systems employ convolutional neural networks (CNNs) and achieve up to 0.947 and 0.982 area under the ROC curve (AUC) in discriminating plus and pre-plus levels of ROP. However, due to their black-box nature, clinicians are reluctant to trust the diagnostic predictions of CNNs.
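The AUC values quoted throughout can be computed directly from classifier scores. As a minimal illustration (with synthetic scores, not the authors' data), the Mann-Whitney rank formulation gives the same area as the ROC curve:

```python
# Illustrative sketch: AUC as the probability that a randomly chosen
# positive example outscores a randomly chosen negative one.
# The scores and labels below are synthetic, for demonstration only.
import numpy as np

def auc(scores, labels):
    """Mann-Whitney AUC: fraction of positive/negative pairs ranked correctly."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(auc(scores, labels))  # 8 of 9 pairs ranked correctly -> 0.888...
```

An AUC of 1.0 means every diseased image is scored above every healthy one; 0.5 is chance level.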

First, we aim to create an interpretable, feature extraction-based pipeline, named I-ROP ASSIST, that achieves CNN-like performance when diagnosing plus disease from retinal images. Our method segments the retinal vessels and detects their centerlines. It then extracts ROP-relevant features, including tortuosity and dilation measures, and classifies them via logistic regression, support vector machines, and neural networks to assign a severity score to the input. For predicting plus and pre-plus levels of ROP on a dataset of 5512 posterior retinal images, we achieve 0.88 and 0.94 AUC, respectively. Our system, combining automatic retinal vessel segmentation, tracing, feature extraction, and classification, diagnoses plus disease in ROP with CNN-like performance.
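The feature-then-classify idea can be sketched roughly as follows. The tortuosity index here (arc length over chord length of a centerline) and the tiny logistic-regression fit are illustrative stand-ins for the actual I-ROP ASSIST features and classifiers, and the centerlines are synthetic:

```python
# Hypothetical sketch (not the authors' code): one tortuosity feature from a
# vessel centerline, fed to a minimal logistic-regression classifier.
import numpy as np

def tortuosity_index(points):
    """Arc length divided by chord length of a 2-D centerline (>= 1.0)."""
    points = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord

# Synthetic centerlines: a straight vessel vs. a wavy (tortuous) one.
t = np.linspace(0.0, 1.0, 200)
straight = np.stack([t, np.zeros_like(t)], axis=1)
wavy = np.stack([t, 0.05 * np.sin(12 * np.pi * t)], axis=1)

x = np.array([tortuosity_index(straight), tortuosity_index(wavy)])
y = np.array([0.0, 1.0])  # 0 = normal vessel, 1 = tortuous (plus-like)

# Minimal logistic regression trained by gradient descent on the feature.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted severity scores
    grad = p - y
    w -= 0.5 * np.mean(grad * x)
    b -= 0.5 * np.mean(grad)

print(tortuosity_index(straight))  # ~1.0 for a straight segment
print(tortuosity_index(wavy) > tortuosity_index(straight))  # True
```

Because each feature has a clinical meaning (tortuosity, dilation), a clinician can inspect why the classifier assigned a given severity score, unlike with a raw CNN.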

Furthermore, we aim to address the interpretability problem of CNN-based ROP detection systems. Incorporating visual attention into CNNs enhances interpretability by highlighting the image regions the network uses for prediction. Generic visual attention methods, however, do not leverage structural domain information such as the tortuosity and dilation of the retinal blood vessels used in ROP diagnosis. We propose Structural Visual Guidance Attention Networks (SVGA-Net), a method that leverages structural domain information to guide visual attention in CNNs. SVGA-Net achieves 0.979 and 0.987 AUC in predicting plus and pre-plus levels of ROP. Moreover, SVGA-Net consistently yields higher AUC than visual attention CNNs without guidance, baseline CNNs, and CNNs with structured masks.
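One simple way such structural guidance could be realized (an illustrative assumption, not the published SVGA-Net formulation) is to bias the attention logits toward a vessel segmentation mask and penalize attention mass that falls off the vessels:

```python
# Hypothetical sketch of structurally guided attention: attention mass
# outside a vessel mask is treated as a guidance loss, so attention is
# pushed toward the vessels that define plus disease. All names, shapes,
# and the loss form are illustrative assumptions.
import numpy as np

def softmax2d(logits):
    """Normalize a 2-D logit map into an attention map summing to 1."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def guidance_loss(attention, vessel_mask):
    """Fraction of attention mass falling outside the vessel mask."""
    return float(np.sum(attention * (1.0 - vessel_mask)))

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 8))            # stand-in attention logits
vessel_mask = np.zeros((8, 8))
vessel_mask[:, 3:5] = 1.0                   # fake vertical vessel band

attn = softmax2d(logits)                    # unguided attention
guided = softmax2d(logits + 4.0 * vessel_mask)  # logits biased toward vessels

print(guidance_loss(attn, vessel_mask))     # some mass leaks off the vessels
print(guidance_loss(guided, vessel_mask) < guidance_loss(attn, vessel_mask))  # True
```

In training, a term like this guidance loss would be added to the classification loss, so the network both predicts ROP level and keeps its attention on clinically meaningful structures.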