Localization of Critical Findings in Chest X-Ray without Local Annotations Using Multi-Instance Learning

The automatic detection of critical findings in chest X-rays (CXR), such as pneumothorax, is important for assisting radiologists in their clinical workflow, for tasks like triaging time-sensitive cases and screening for incidental findings. While deep learning (DL) models have become a promising predictive technology with near-human accuracy, they commonly suffer from a lack of explainability, which is an important consideration for clinical deployment of DL models in the highly regulated healthcare industry. For example, localizing critical findings in an image is useful for explaining the predictions of DL classification algorithms. While there is a host of joint classification and localization methods for computer vision, state-of-the-art DL models require locally annotated training data in the form of pixel-level labels or bounding-box coordinates. In the medical domain, this demands expensive manual annotation by medical experts for each critical finding, a requirement that becomes a major barrier to training models that can rapidly scale to new findings. In this work, we address these shortcomings with an interpretable DL algorithm based on multi-instance learning that jointly classifies and localizes critical findings in CXR without the need for local annotations. We show competitive classification results on three different critical findings (pneumothorax, pneumonia, and pulmonary edema) from three different CXR datasets.
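To make the multi-instance learning (MIL) idea concrete, below is a minimal, hypothetical PyTorch sketch of weakly supervised joint classification and localization. It is not the authors' implementation: the backbone, layer sizes, and the log-sum-exp pooling sharpness `r` are all illustrative assumptions. The key point it demonstrates is that each image is treated as a "bag" whose instances are spatial cells of a CNN feature map; training uses only image-level labels, and the per-instance scores double as a localization heatmap.

```python
# Illustrative MIL sketch (assumed, not the paper's code): each CXR is a
# "bag"; spatial cells of the CNN feature map are the "instances".
import torch
import torch.nn as nn

class MILChestXRay(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional backbone (hypothetical); any CNN that
        # yields a spatial feature map would serve the same role.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution scores each spatial instance for the finding.
        self.instance_score = nn.Conv2d(128, 1, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)             # (B, 128, H', W')
        logits = self.instance_score(feats)  # (B, 1, H', W') instance logits
        flat = logits.view(logits.size(0), -1)
        # Smooth-max MIL pooling via log-sum-exp: the bag is positive if
        # any instance is positive; r (assumed value) controls sharpness.
        r = 5.0
        bag_logit = torch.logsumexp(r * flat, dim=1) / r
        # Per-instance probabilities serve as a localization heatmap,
        # even though no pixel labels or boxes were used in training.
        heatmap = torch.sigmoid(logits)
        return bag_logit, heatmap

# Training step uses only image-level labels (no local annotations):
model = MILChestXRay()
x = torch.randn(2, 1, 256, 256)   # dummy CXR batch
y = torch.tensor([1.0, 0.0])      # image-level finding labels
bag_logit, heatmap = model(x)
loss = nn.functional.binary_cross_entropy_with_logits(bag_logit, y)
loss.backward()
```

The smooth-max pooling is one common MIL choice; attention-based pooling is another, and the sketch above is agnostic to which the paper actually adopts.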
