Transforming Intensity Distribution of Brain Lesions Via Conditional GANs for Segmentation
Brain lesion segmentation is crucial for diagnosis, surgical planning, and analysis. Because the pixel values of brain lesions in magnetic resonance (MR) scans are spread over a wide intensity range, there is considerable overlap between the class-conditional densities of the lesion classes, and accurate automatic brain lesion segmentation therefore remains a challenging task. We present a novel architecture based on conditional generative adversarial networks (cGANs) to improve lesion contrast for segmentation. To this end, we propose a novel generator that adaptively calibrates the input pixel values, together with a Markovian discriminator that estimates the distribution of tumors. We further propose the Enhancement and Segmentation GAN (Enh-Seg-GAN), which incorporates the classification loss into the adversarial loss during training to predict the central label of each sliding input patch. In particular, the generated synthetic MR images substitute for the real ones, maximizing lesion contrast while suppressing the background. The potential of the proposed frameworks is confirmed by quantitative evaluation against state-of-the-art methods on the BraTS'13 dataset.
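The sketch below is a minimal, hypothetical illustration (not the authors' released code) of the two ideas named in the abstract: a Markovian, PatchGAN-style discriminator that scores local patches of the enhanced MR image, and a generator objective that adds a classification loss for the central label of the input patch to the adversarial loss. Layer sizes, class count, loss weighting, and the data handling are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class MarkovianDiscriminator(nn.Module):
    """PatchGAN-style discriminator: outputs a grid of real/fake logits,
    so each logit depends only on a local receptive field (patch)."""
    def __init__(self, in_channels=1, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, x):
        return self.net(x)

class EnhSegGenerator(nn.Module):
    """Toy generator: maps an MR patch to a contrast-enhanced patch and also
    predicts the class of the patch's central pixel (assumed design)."""
    def __init__(self, in_channels=1, num_classes=5, base=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.enhance = nn.Conv2d(base, in_channels, 3, padding=1)   # synthetic MR patch
        self.classify = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(base, num_classes))  # central-pixel label
    def forward(self, x):
        h = self.features(x)
        return torch.tanh(self.enhance(h)), self.classify(h)

# Combined generator objective: adversarial term (fool the patch discriminator)
# plus a classification term; the equal weighting is an assumption.
adv_loss = nn.BCEWithLogitsLoss()
cls_loss = nn.CrossEntropyLoss()

G, D = EnhSegGenerator(), MarkovianDiscriminator()
patch = torch.randn(8, 1, 64, 64)        # batch of MR patches (dummy data)
labels = torch.randint(0, 5, (8,))       # central-pixel labels (dummy data)

enhanced, logits = G(patch)
d_out = D(enhanced)
g_objective = adv_loss(d_out, torch.ones_like(d_out)) + cls_loss(logits, labels)
g_objective.backward()
```

In this reading, enhancement and segmentation share one generator, so the adversarial signal that pushes synthetic patches toward high lesion contrast also shapes the features used for the central-pixel classification; how the two losses are actually balanced in Enh-Seg-GAN is not specified in the abstract.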