Self-Supervision vs. Transfer Learning: Robust Biomedical Image Analysis against Adversarial Attacks

Deep neural networks are increasingly used for disease diagnosis and lesion localization in biomedical images. However, deep neural networks not only require large sets of expensive ground truth (image labels or pixel annotations) for training, but are also susceptible to adversarial attacks. Transfer learning alleviates the former problem to some extent by pre-training the lower layers of a neural network on a large labeled dataset from a different domain (e.g., ImageNet). The final few layers are then trained on the target domain (e.g., chest X-rays), while the pre-trained layers are only fine-tuned or even kept frozen. An alternative to transfer learning is self-supervised learning, in which a supervised task is created by transforming unlabeled images from the target domain itself, and the lower layers are pre-trained to invert the transformation in some sense. In this work, we show that self-supervised learning combined with adversarial training offers advantages over both transfer learning and vanilla self-supervised learning. In particular, adversarial training leads both to a reduction in the amount of supervised data required for comparable accuracy and to a natural robustness against adversarial attacks. We support our claims with experiments on two modalities and tasks -- classification of chest X-rays and segmentation of MRI images -- and two types of adversarial attacks -- PGD and FGSM.
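To illustrate the kind of attack evaluated here, the following is a minimal sketch of FGSM (the Fast Gradient Sign Method), which perturbs an input by epsilon in the direction of the sign of the loss gradient. For self-containment it uses a toy logistic-regression classifier with an analytic gradient rather than the deep networks of the paper; the model, data, and epsilon value are illustrative assumptions, not the authors' setup. PGD is simply the iterative, projected variant of this one-step attack.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    # Binary cross-entropy loss of a logistic classifier on input x.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_attack(x, y, w, b, epsilon):
    """One-step FGSM: x_adv = x + epsilon * sign(dL/dx).

    For logistic regression with cross-entropy loss, the gradient of
    the loss w.r.t. the input is (p - y) * w, so no autodiff is needed.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Toy example: a random "flattened image" and random weights (assumed data).
rng = np.random.default_rng(0)
x = rng.normal(size=16)
w = rng.normal(size=16)
b, y = 0.0, 1.0

x_adv = fgsm_attack(x, y, w, b, epsilon=0.1)
# The perturbation is crafted to increase the loss, so the model's
# confidence in the true label drops on x_adv.
```

For this linear model the loss increase is guaranteed, since the attack moves the logit directly against the true label; for deep networks the same first-order step is only approximately loss-maximizing, which is why multi-step PGD is the stronger benchmark attack.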