A Multi-Modality Fusion Network Based on Attention Mechanism for Brain Tumor Segmentation

Brain tumor segmentation in magnetic resonance imaging (MRI) is necessary for diagnosis, monitoring, and treatment, yet manual segmentation is time-consuming, labor-intensive, and subjective. In addition, a single modality cannot provide enough information for accurate segmentation. In this paper, we propose a multi-modality fusion network based on an attention mechanism for brain tumor segmentation. Our network includes four channel-independent encoding paths that independently extract features from the four modalities, a feature fusion block that fuses these features, and a decoding path that produces the final tumor segmentation. The channel-independent encoding paths capture modality-specific features. However, not all features extracted by the encoders are useful for segmentation, so we use an attention mechanism to guide the fusion block. In this way, the modality-specific features are separately recalibrated along the channel and spatial paths, which suppresses less informative features and emphasizes useful ones. The resulting shared latent feature representation is then projected by the decoder to the brain tumor segmentation. Experimental results on the BraTS 2017 dataset demonstrate the effectiveness of the proposed method.
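The fusion strategy described above can be sketched in code. The following is a minimal, hypothetical PyTorch illustration (not the authors' implementation): four modality-specific encoder outputs are concatenated, recalibrated by a squeeze-and-excitation-style channel gate and a per-pixel spatial gate, and projected to a shared latent representation. All layer sizes and names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Illustrative attention-guided fusion block (hypothetical sketch):
    recalibrates concatenated modality features along the channel axis
    and the spatial axis, then projects to a shared latent feature."""

    def __init__(self, channels_per_modality=16, num_modalities=4):
        super().__init__()
        c = channels_per_modality * num_modalities
        # Channel path: global average pooling -> bottleneck -> sigmoid gates
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // 4, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c // 4, c, kernel_size=1), nn.Sigmoid(),
        )
        # Spatial path: 1x1 convolution producing a per-pixel gate
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(c, 1, kernel_size=1), nn.Sigmoid(),
        )
        # Projection to the shared latent representation for the decoder
        self.project = nn.Conv2d(c, channels_per_modality, kernel_size=1)

    def forward(self, modality_features):
        # modality_features: list of four (B, C, H, W) encoder outputs,
        # one per MRI modality (e.g. T1, T1ce, T2, FLAIR)
        x = torch.cat(modality_features, dim=1)
        x = x * self.channel_gate(x)   # suppress uninformative channels
        x = x * self.spatial_gate(x)   # emphasize informative locations
        return self.project(x)

# Usage with dummy encoder outputs for the four modalities
feats = [torch.randn(2, 16, 32, 32) for _ in range(4)]
fused = AttentionFusion()(feats)
print(fused.shape)  # torch.Size([2, 16, 32, 32])
```

A real implementation would typically operate on 3D volumes (`Conv3d`) and insert this block between the encoders and the decoder at each resolution level; the 2D version above is kept deliberately small.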