How to Extract More Information with Less Burden: Fundus Image Classification and Retinal Disease Localization with Ophthalmologist Intervention

Image classification with deep convolutional neural networks (DCNNs) achieves performance competitive with other state-of-the-art methods. A DCNN's attention can be visualized as a heatmap to improve its explainability. We generate the initial heatmaps with gradient-based class activation mapping (Grad-CAM). When a Grad-CAM heatmap reveals the lesion regions well, we apply attention mining to it; when it does not, we instead apply a dissimilarity loss. In this study, we asked ophthalmologists to select 30% of the heatmaps. Furthermore, we design a knowledge preservation (KP) loss that minimizes the discrepancy between heatmaps generated by the updated network and the selected heatmaps. Experiments show that our method improves accuracy from 90.1% to 96.2%, and that the attention regions move closer to the ground-truth lesion regions.
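The pipeline above rests on two computable pieces: a Grad-CAM heatmap (a ReLU-rectified, gradient-weighted sum of a convolutional layer's feature maps) and a KP loss that penalizes drift from the ophthalmologist-selected heatmaps. The abstract does not give the exact loss form, so the sketch below is a minimal NumPy illustration, assuming the standard Grad-CAM formulation and a simple mean-squared-error form for the KP loss; the function names and the MSE choice are assumptions, not the paper's implementation.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Standard Grad-CAM heatmap.

    feature_maps: (K, H, W) activations A^k of a conv layer
    gradients:    (K, H, W) gradients of the class score w.r.t. A^k
    """
    # Channel weights alpha_k: global-average-pool the gradients
    alphas = gradients.mean(axis=(1, 2))                            # (K,)
    # ReLU of the alpha-weighted sum over channels
    cam = np.maximum((alphas[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize to [0, 1] for visualization (skip if the map is all zeros)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

def kp_loss(current_heatmap, selected_heatmap):
    """Assumed MSE form of the knowledge preservation loss: penalizes
    discrepancy between the updated network's heatmap and the frozen,
    expert-selected heatmap."""
    return float(np.mean((current_heatmap - selected_heatmap) ** 2))
```

For example, a heatmap compared against itself incurs zero KP loss, while any deviation from the selected heatmap is penalized quadratically, which is what keeps the updated network's attention anchored to the expert-approved regions.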
