Dense Correlation Network for Automated Multi-Label Ocular Disease Detection with Paired Color Fundus Photographs

In ophthalmology, color fundus photography is an economical and effective tool for early-stage ocular disease screening. Since the left and right eyes are highly correlated, we utilize paired color fundus photographs (CFPs) for automated multi-label ocular disease detection. We propose a Dense Correlation Network (DCNet) to exploit the dense spatial correlations between paired CFPs. Specifically, DCNet is composed of a backbone Convolutional Neural Network (CNN), a Spatial Correlation Module (SCM), and a classifier. The SCM captures the dense correlations between the features extracted from the paired CFPs in a pixel-wise manner and fuses the relevant feature representations. Experiments on a public dataset show that the proposed DCNet achieves better performance than the respective baselines regardless of the backbone CNN architecture.
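
To make the described pipeline concrete, below is a minimal PyTorch sketch of how a DCNet-style model could be wired together: a shared backbone CNN extracts features from each eye's CFP, a spatial correlation module computes dense pixel-wise correlations between the two feature maps and fuses them, and a classifier produces multi-label logits. The module names, the attention-style fusion, the ResNet-18 backbone, and the 8-label output head are all illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a DCNet-style pipeline (assumptions: shared ResNet-18
# backbone, softmax-attention fusion of the correlation map, 8 labels).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class SpatialCorrelationModule(nn.Module):
    """Dense pixel-wise correlation between paired feature maps (hypothetical)."""

    def forward(self, feat_l, feat_r):
        b, c, h, w = feat_l.shape
        fl = feat_l.flatten(2)                                # (B, C, H*W)
        fr = feat_r.flatten(2)                                # (B, C, H*W)
        # Correlate every left-eye location with every right-eye location.
        corr = torch.bmm(fl.transpose(1, 2), fr) / c ** 0.5   # (B, HW_l, HW_r)
        # Use the correlation map as soft attention weights to aggregate
        # right-eye features into left-eye locations, and vice versa.
        r_to_l = torch.bmm(fr, F.softmax(corr, dim=2).transpose(1, 2))
        l_to_r = torch.bmm(fl, F.softmax(corr, dim=1))
        # Fuse by concatenating original and cross-eye features.
        fused_l = torch.cat([fl, r_to_l], dim=1).view(b, 2 * c, h, w)
        fused_r = torch.cat([fr, l_to_r], dim=1).view(b, 2 * c, h, w)
        return fused_l, fused_r


class DCNet(nn.Module):
    def __init__(self, num_labels=8):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep everything up to (not including) the pooling/FC head.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.scm = SpatialCorrelationModule()
        # Fused maps have 2*512 channels each; both eyes are pooled and joined.
        self.classifier = nn.Linear(2 * 512 * 2, num_labels)

    def forward(self, cfp_left, cfp_right):
        feat_l = self.backbone(cfp_left)    # shared weights for both eyes
        feat_r = self.backbone(cfp_right)
        fused_l, fused_r = self.scm(feat_l, feat_r)
        pooled = torch.cat([
            F.adaptive_avg_pool2d(fused_l, 1).flatten(1),
            F.adaptive_avg_pool2d(fused_r, 1).flatten(1),
        ], dim=1)
        return self.classifier(pooled)      # multi-label logits


# Usage: paired left/right fundus photographs; sigmoid (not softmax) because
# several ocular diseases can be present at once.
model = DCNet(num_labels=8)
left = torch.randn(2, 3, 224, 224)
right = torch.randn(2, 3, 224, 224)
probs = torch.sigmoid(model(left, right))   # (2, 8) per-disease probabilities
```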