$800K NSF Grant to Help Diagnose ROP
ECE Assistant Professor Stratis Ioannidis, Associate Professor Deniz Erdogmus, and Professor Jennifer Dy were awarded an $800K NSF grant to create an "Assistive Integrative Support Tool for Retinopathy of Prematurity".
In collaboration with researchers at the MGH/HST Athinoula A. Martinos Center and the Oregon Health & Science University, the team will use machine learning techniques to build an assistive tool that detects retinopathy of prematurity in infants using multimodal clinical data. NSF has provided approximately $1.9M to the consortium in support of the project.
Retinopathy of prematurity (ROP) is a leading cause of childhood vision loss worldwide, and the social burdens of infancy-acquired blindness are enormous. Early diagnosis is critically important for successful treatment and can prevent most cases of blindness. However, lack of access to expert medical diagnosis and care, especially in rural areas, remains a growing healthcare challenge. In addition, clinical expertise in ROP is scarce, and medical professionals are struggling to meet the increasing need for ROP care. Point-of-care technologies for diagnosis and intervention are rapidly expanding, and the ability to assess ROP severity from any location with an internet connection and a camera, even without immediate ophthalmologic consultation available, could significantly improve delivery of ROP care by identifying the infants in most urgent need of referral and treatment. This would dramatically reduce the incidence of blindness without a proportionate increase in the need for human resources, which take many years to develop.
The joint NU, MGH and OHSU team will develop a prototype assistive integrative support tool for ROP, comprising image analysis, information fusion of clinical, imaging, and diagnostic data, and generative probabilistic and regression models with associated computationally efficient machine learning algorithms. The outcomes of the project include disease severity metrics and diagnostic estimates obtained through clinical evidence classifiers trained jointly over expert-generated labels, including both discrete diagnostic labels and comparison outcomes of relative severity between pairs of images.
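To give a rough sense of what "training jointly over discrete labels and pairwise severity comparisons" can look like, the sketch below fits a single linear severity score to synthetic data by combining a logistic-regression loss on per-image labels with a Bradley-Terry-style loss on pairwise comparisons. This is a minimal illustration under assumed modeling choices (linear scores, logistic link, synthetic features), not the project's actual method; all variable names and thresholds here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "image" is a feature vector x, and a shared
# linear severity score s(x) = w @ x drives both prediction heads.
d, n = 5, 200
w_true = rng.normal(size=d)        # unknown ground-truth weights
X = rng.normal(size=(n, d))        # synthetic image features
s_true = X @ w_true                # latent severity of each image


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# Discrete diagnostic labels: 1 if latent severity exceeds a threshold.
y = (s_true > 0).astype(float)

# Pairwise comparison outcomes: for random pairs (i, j), label 1 if
# image i is judged more severe than image j (Bradley-Terry model:
# P(i beats j) = sigmoid(s_i - s_j)).
pairs = rng.integers(0, n, size=(300, 2))
c = (s_true[pairs[:, 0]] > s_true[pairs[:, 1]]).astype(float)

# Joint training: gradient descent on the sum of the two logistic losses.
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    s = X @ w
    # Gradient of the per-image label loss.
    g_lab = X.T @ (sigmoid(s) - y) / n
    # Gradient of the pairwise comparison loss.
    diff = s[pairs[:, 0]] - s[pairs[:, 1]]
    g_cmp = (X[pairs[:, 0]] - X[pairs[:, 1]]).T @ (sigmoid(diff) - c) / len(c)
    w -= lr * (g_lab + g_cmp)

# Evaluate how well the learned score reproduces both kinds of labels.
s_hat = X @ w
label_acc = float(((s_hat > 0) == (y > 0.5)).mean())
pair_acc = float(((s_hat[pairs[:, 0]] > s_hat[pairs[:, 1]]) == (c > 0.5)).mean())
```

Because both heads share the same score, the comparison outcomes act as extra supervision for the severity model even on images whose discrete label alone would be uninformative.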