Robust Detection of Adversarial Attacks on Medical Images

Although deep learning systems trained on medical images have shown state-of-the-art performance in many clinical prediction tasks, recent studies demonstrate that these systems can be fooled by carefully crafted adversarial images. This has raised concerns about the practical deployment of deep learning-based medical image classification systems. To tackle this problem, we propose an unsupervised learning approach to detect adversarial attacks on medical images. Our approach is capable of detecting a wide range of adversarial attacks without knowledge of the attackers and without sacrificing classification performance. More importantly, our approach can be easily embedded into any deep learning-based medical imaging system as a module to improve the system's robustness. Experiments on a public chest X-ray dataset demonstrate the strong performance of our approach in defending against adversarial attacks under both white-box and black-box settings.
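The abstract does not describe the detector's internals, so the sketch below is only one plausible way such an unsupervised, plug-in detection module could work, not the authors' method: an autoencoder trained on clean images flags inputs whose reconstruction error is anomalously high. The architecture, threshold value, and all class and variable names are illustrative assumptions.

```python
# Illustrative sketch only: an autoencoder-based anomaly detector used as a
# plug-in module in front of a chest X-ray classifier. The paper's actual
# detector is not specified in the abstract; everything here is assumed.
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder trained only on clean images."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 224 -> 112
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 112 -> 56
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 56 -> 112
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 112 -> 224
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class AdversarialDetector:
    """Flags inputs whose reconstruction error exceeds a clean-data threshold."""

    def __init__(self, autoencoder, threshold):
        self.autoencoder = autoencoder.eval()
        self.threshold = threshold  # e.g. a high percentile of errors on clean validation images

    @torch.no_grad()
    def score(self, x):
        # Per-image mean squared reconstruction error.
        recon = self.autoencoder(x)
        return ((x - recon) ** 2).flatten(1).mean(dim=1)

    def is_adversarial(self, x):
        return self.score(x) > self.threshold


if __name__ == "__main__":
    ae = ConvAutoencoder()                               # assume trained on clean X-rays
    detector = AdversarialDetector(ae, threshold=0.05)   # threshold value is illustrative
    batch = torch.rand(4, 1, 224, 224)                   # stand-in for a chest X-ray batch
    print(detector.is_adversarial(batch))                # boolean tensor, one flag per image
```

Because the detector is trained only on clean images, it needs no knowledge of specific attack algorithms and leaves the downstream classifier untouched, which matches the plug-in, attack-agnostic behavior the abstract claims for the proposed approach.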