Automatic Bounding Box Annotation of Chest X-Ray Data for Localization of Abnormalities

Due to the increasing availability of public chest x-ray datasets over the last few years, automatic detection of findings and their locations in chest x-ray studies has become an important research area for AI applications in healthcare. Whereas image-level labeling suffices for finding classification tasks, detection of finding locations requires additional annotation in the form of bounding boxes. However, the process of marking findings in chest x-ray studies is both time-consuming and costly, as it must be performed by radiologists. To overcome this problem, weakly supervised approaches have been employed to derive finding locations as a byproduct of the classification task, but these approaches have not shown much promise so far. With this in mind, in this paper we propose an automatic approach for labeling chest x-ray images for findings and their locations by leveraging radiology reports. Our labeling approach is anatomically standardized to the upper, middle, and lower lung zones of the left and right lungs, and is composed of two stages. In the first stage, we use a lung segmentation UNet model and an atlas of normal patients to mark the six lung zones on the image with standardized bounding boxes. In the second stage, the associated radiology report is used to label each lung zone as positive or negative for a finding, resulting in a set of six labeled bounding boxes per image. Using this approach we were able to automatically annotate over 13,000 images in a matter of hours, and we used this dataset to train an opacity detection model with RetinaNet, obtaining results on a par with the state of the art.
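
The two-stage pipeline lends itself to a compact sketch. The Python below is a minimal illustration under stated assumptions, not the authors' implementation: lung_zone_boxes approximates stage one by splitting a binary lung mask (as a UNet-style segmenter would produce) into six standardized zone boxes, and label_zones approximates stage two with naive keyword matching against the report text. The function names, the image-midline left/right split, and the keyword rule are all assumptions for illustration; the paper's pipeline uses the segmentation together with a normal-patient atlas and report analysis.

```python
# Minimal sketch of the two-stage labeling pipeline (illustrative only).
# Assumptions: the lung mask comes from a UNet-style segmenter, and the image
# is in standard radiographic orientation (patient's right lung appears on
# the image left). None of the names below are from the paper.

import numpy as np

def lung_zone_boxes(lung_mask: np.ndarray) -> dict:
    """Stage 1: split a binary lung mask into six standardized zone boxes
    (upper/middle/lower thirds of each lung's bounding box)."""
    h, w = lung_mask.shape
    boxes = {}
    # Approximate the per-lung split with the image midline; the paper
    # instead uses the segmentation and a normal-patient atlas.
    for side, cols in (("right", slice(0, w // 2)), ("left", slice(w // 2, w))):
        ys, xs = np.nonzero(lung_mask[:, cols])
        if ys.size == 0:
            continue
        x0, x1 = xs.min() + cols.start, xs.max() + cols.start
        y0, y1 = ys.min(), ys.max()
        third = (y1 - y0) // 3
        for i, zone in enumerate(("upper", "middle", "lower")):
            y_top = y0 + i * third
            y_bot = y1 if i == 2 else y0 + (i + 1) * third
            boxes[f"{side}_{zone}"] = (int(x0), int(y_top), int(x1), int(y_bot))
    return boxes

def label_zones(report_text: str, boxes: dict) -> list:
    """Stage 2: mark a zone positive when the report mentions an opacity
    together with that zone's laterality and level (toy keyword rule)."""
    text = report_text.lower()
    labeled = []
    for zone, box in boxes.items():
        side, level = zone.split("_")
        positive = "opacity" in text and side in text and level in text
        labeled.append({"zone": zone, "box": box, "label": int(positive)})
    return labeled
```

The labeled zone boxes produced this way can then be fed to any standard object detector; the paper trains a RetinaNet model on them for opacity detection.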