Supervised Augmentation: Leverage Strong Annotation for Limited Data

A previously underexplored way to address the data scarcity challenge in medical image classification is to leverage strong annotation when available data is limited but annotation resources are plentiful. Strong annotation at a finer level, such as regions of interest, carries more information than simple image-level labels and should therefore improve classifier performance. In this work, we exploited strong annotation by developing a new data augmentation method that improves over common augmentations (random crop and cutout) by significantly enriching augmentation variety and ensuring label validity under the guidance of strong annotation. Experiments on a real-world application, classifying gastroscopic images, demonstrated that our method outperformed state-of-the-art methods by a large margin across all data-scarcity settings. Additionally, our method integrates flexibly with other CNN improvement techniques and handles data with mixed annotation.
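To make the idea concrete, here is a minimal sketch of how region-of-interest (ROI) annotation could guide crop and cutout augmentation so that the image-level label stays valid: crops are sampled only where they still contain the lesion ROI, and cutout holes are sampled only where they do not erase it. This is an illustrative interpretation under assumed function names (`roi_guided_crop`, `roi_safe_cutout`) and box conventions, not the authors' published implementation.

```python
import numpy as np

def roi_guided_crop(image, roi, crop_size, rng=None):
    """Randomly crop `image` so the crop window fully contains the ROI box.

    image: HxW(xC) array; roi: (y0, x0, y1, x1) lesion bounding box
    (illustrative convention); crop_size: (crop_h, crop_w).
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    ch, cw = crop_size
    y0, x0, y1, x1 = roi
    # Top-left corner range that keeps the whole ROI inside the window.
    top_lo, top_hi = max(0, y1 - ch), min(y0, h - ch)
    left_lo, left_hi = max(0, x1 - cw), min(x0, w - cw)
    if top_lo > top_hi or left_lo > left_hi:
        raise ValueError("crop_size too small to contain the ROI")
    top = rng.integers(top_lo, top_hi + 1)
    left = rng.integers(left_lo, left_hi + 1)
    return image[top:top + ch, left:left + cw]

def roi_safe_cutout(image, roi, hole_size, rng=None, max_tries=50):
    """Zero out a random square hole that does not overlap the ROI box."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    y0, x0, y1, x1 = roi
    for _ in range(max_tries):  # rejection sampling: redraw until hole misses ROI
        ty = int(rng.integers(0, h - hole_size + 1))
        tx = int(rng.integers(0, w - hole_size + 1))
        if (ty + hole_size <= y0 or ty >= y1 or
                tx + hole_size <= x0 or tx >= x1):
            out = image.copy()
            out[ty:ty + hole_size, tx:tx + hole_size] = 0
            return out
    return image.copy()  # no valid hole found; return image unchanged
```

Because every sampled crop still shows the lesion and every cutout hole avoids it, each augmented image genuinely retains the evidence for its class label, which is what plain random crop and cutout cannot guarantee.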