Classification of Ocular Diseases Employing Attention-Based Unilateral and Bilateral Feature Weighting and Fusion

Early diagnosis of ocular diseases is key to preventing severe vision damage and other health complications. Color fundus photography is a commonly used screening tool. However, because early-stage ocular diseases present only subtle symptoms, accurate diagnosis from fundus photographs is difficult. To this end, we propose an attention-based unilateral and bilateral feature weighting and fusion network (AUBNet) to automatically classify patients into the corresponding disease categories. Specifically, AUBNet is composed of a feature extraction module (FEM), a feature fusion module (FFM), and a classification module (CFM). The FEM extracts a feature vector from each of a patient's bilateral fundus photographs independently. The FFM then performs two levels of feature weighting and fusion to build the feature representation of the bilateral eyes. Finally, the CFM conducts multi-label classification. Our model achieves competitive results on a real-life large-scale dataset.
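To make the FEM → FFM → CFM pipeline concrete, the following is a minimal PyTorch sketch, not the authors' implementation. The ResNet-50 backbone, the sigmoid-gated form of the attention, the feature dimension, and the number of disease classes are all illustrative assumptions; the abstract specifies only the three-module structure and the two levels of weighting and fusion.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class AUBNet(nn.Module):
    """Illustrative sketch of AUBNet's FEM -> FFM -> CFM structure.

    Assumptions (not stated in the abstract): ResNet-50 backbone,
    sigmoid gating as the attention, feat_dim=2048, num_classes=8.
    """
    def __init__(self, num_classes=8, feat_dim=2048):
        super().__init__()
        # FEM: one shared CNN backbone applied to each eye independently
        backbone = models.resnet50(weights=None)
        self.fem = nn.Sequential(*list(backbone.children())[:-1])  # drop FC head

        # FFM level 1 (unilateral): attention weights for each eye's features
        self.uni_attn = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        # FFM level 2 (bilateral): attention over the concatenated representation
        self.bi_attn = nn.Sequential(nn.Linear(2 * feat_dim, 2 * feat_dim), nn.Sigmoid())

        # CFM: multi-label classifier, one logit per disease category
        self.cfm = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, left_img, right_img):
        # FEM extracts one feature vector per fundus photograph
        f_l = self.fem(left_img).flatten(1)   # (B, feat_dim)
        f_r = self.fem(right_img).flatten(1)  # (B, feat_dim)

        # FFM level 1: unilateral feature weighting
        f_l = f_l * self.uni_attn(f_l)
        f_r = f_r * self.uni_attn(f_r)

        # FFM level 2: bilateral fusion followed by weighting
        fused = torch.cat([f_l, f_r], dim=1)  # (B, 2 * feat_dim)
        fused = fused * self.bi_attn(fused)

        # CFM: per-class logits for multi-label classification
        return self.cfm(fused)
```

For multi-label training, the logits would typically be passed through a sigmoid and optimized with binary cross-entropy, e.g. `nn.BCEWithLogitsLoss()(AUBNet()(left_batch, right_batch), targets)`.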