SUNet: A Lesion Regularized Model for Simultaneous Diabetic Retinopathy and Diabetic Macular Edema Grading

Diabetic retinopathy (DR), a leading ocular disease, is frequently complicated by diabetic macular edema (DME). However, most existing works address only DR grading and ignore DME diagnosis, whereas clinicians assess both conditions simultaneously. In this paper, motivated by the advantages of multi-task learning for image classification and to mimic the way clinicians visually inspect patients, we propose a feature Separation and Union Network (SUNet) for simultaneous DR and DME grading. Further, to improve the interpretability of the disease grading, a lesion regularizer is imposed on our network. Specifically, given an image, SUNet first extracts a common feature shared by DR grading, DME grading, and lesion detection. A feature blending block is then introduced that alternates between feature separation and feature union for task-specific feature extraction, where feature separation learns task-specific features for lesion detection, DR grading, and DME grading, and feature union aggregates the features corresponding to these tasks. In this way, we can filter out irrelevant features and leverage features from different but related tasks to improve the performance of each task. The task-specific features of the same task at different feature separation steps are then concatenated for the prediction of each task. Extensive experiments on the challenging IDRiD dataset demonstrate that SUNet significantly outperforms existing methods for both DR and DME grading.
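
The abstract describes the feature blending block only at a high level. Below is a minimal PyTorch-style sketch of how alternating feature separation and union could be organized for the three tasks (DR grading, DME grading, lesion detection). The module names, branch layers, and the fusion operation (concatenation followed by a 1x1 convolution) are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class SeparationUnionBlock(nn.Module):
    """One round of feature separation followed by feature union.

    NOTE: an illustrative sketch; the branch layers and fusion operator
    are assumptions, not the paper's exact design.
    """

    def __init__(self, channels: int, num_tasks: int = 3):
        super().__init__()
        # Feature separation: one branch per task (DR, DME, lesion detection).
        self.separate = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for _ in range(num_tasks)
        )
        # Feature union: aggregate all task branches back into one shared map.
        self.unite = nn.Sequential(
            nn.Conv2d(channels * num_tasks, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, shared):
        # Separation: task-specific features from the shared representation.
        task_feats = [branch(shared) for branch in self.separate]
        # Union: fuse task-specific features so related tasks inform each other.
        fused = self.unite(torch.cat(task_feats, dim=1))
        return task_feats, fused


class FeatureBlending(nn.Module):
    """Stacks separation/union rounds and collects per-task features."""

    def __init__(self, channels: int, num_blocks: int = 2, num_tasks: int = 3):
        super().__init__()
        self.num_tasks = num_tasks
        self.blocks = nn.ModuleList(
            SeparationUnionBlock(channels, num_tasks) for _ in range(num_blocks)
        )

    def forward(self, shared):
        # per_task[t] collects task t's features from every separation step;
        # they are concatenated at the end for that task's prediction head.
        per_task = [[] for _ in range(self.num_tasks)]
        for block in self.blocks:
            task_feats, shared = block(shared)
            for t, feat in enumerate(task_feats):
                per_task[t].append(feat)
        return [torch.cat(feats, dim=1) for feats in per_task]
```

In such a setup, a backbone feature map would be fed into `FeatureBlending`, each concatenated per-task output would go to its own grading or detection head, and a lesion-detection loss on the corresponding head would serve as the lesion regularizer described in the abstract.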
