RADIOGAN: Deep Convolutional Conditional Generative Adversarial Network To Generate PET Images

Generative neural networks are a promising tool for addressing data scarcity, especially in medical imaging, where datasets are often small. RADIOGAN is a new deep conditional architecture based on generative adversarial networks (GANs), trained on FDG PET images to synthesize PET exams containing physiological and/or pathological fixations. We show that walking in latent space can be used as a tool to evaluate the quality of the generated images.

A multicenter database of 1606 patients was used (422 head and neck cancer, 189 lung cancer, 97 esophageal cancer, 225 lymphoma, and 675 without pathological fixations, considered normal PET). PET images were spatially normalized to an isotropic 2 mm resolution, clipped to the [0, 30] SUV range, and then rescaled to [0, 1] for use by RADIOGAN.

The RADIOGAN architecture was built to combine the strengths of DCGAN (Deep Convolutional GAN) and CGAN (Conditional GAN) into a new DCCGAN (Deep Convolutional Conditional Generative Adversarial Network) architecture. In addition to the image, it uses the image's class (pathology) as conditioning information. The generator produces a new PET image from a random vector Z; by specifying an image class C, it is conditioned to generate an image belonging to that class (normal patient, esophageal cancer, lung cancer, etc.). The model is trained for 300 iterations with the Adam optimizer and a learning rate of 0.0002. RADIOGAN was trained to generate realistic images and to exclude non-realistic ones (half a patient, a patient with three legs or a head in place of the stomach, etc.). The generator takes as input a random vector and the desired class. Walking in latent space yields a sequence of realistic patient images that slowly transition from one class to another (for example, from a normal patient to lung cancer).
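The two-step intensity normalization described above (clip to [0, 30] SUV, then rescale to [0, 1]) can be sketched as follows. This is an illustrative NumPy version, not the authors' code; the function name is hypothetical.

```python
import numpy as np

def preprocess_pet(volume_suv, suv_max=30.0):
    """Clip a PET volume to the [0, suv_max] SUV range,
    then rescale linearly to [0, 1]."""
    clipped = np.clip(volume_suv, 0.0, suv_max)
    return clipped / suv_max

# Toy 1-D example: any voxel above 30 SUV saturates at 1.0.
vol = np.array([0.0, 5.0, 30.0, 45.0])
print(preprocess_pet(vol))
```

Clipping before rescaling keeps rare very hot voxels from compressing the dynamic range of the rest of the image.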
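The abstract states that the generator receives a random vector Z and a class C. One common CGAN-style way to implement this is to concatenate a one-hot class label with the latent vector before feeding the generator; the sketch below assumes this scheme and a latent dimension of 100, neither of which is specified in the abstract.

```python
import numpy as np

NUM_CLASSES = 5   # head & neck, lung, esophageal, lymphoma, normal
LATENT_DIM = 100  # assumed; the abstract does not give the dimension

def generator_input(z, class_idx, num_classes=NUM_CLASSES):
    """Concatenate latent vector z with a one-hot class label —
    one common conditioning scheme for CGAN-style generators."""
    c = np.zeros(num_classes)
    c[class_idx] = 1.0
    return np.concatenate([z, c])

# Request an image conditioned on class 1 (e.g. lung cancer).
x = generator_input(np.random.randn(LATENT_DIM), class_idx=1)
print(x.shape)  # (105,)
```

The generator network then maps this conditioned vector to an image; the same class label is typically also shown to the discriminator during training.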
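The latent-space walk used above as a quality check can be sketched as a linear interpolation between two latent vectors; each interpolated point, paired with a class label, would be decoded by the generator into one frame of the walk. The helper below is illustrative, not the authors' implementation.

```python
import numpy as np

def latent_walk(z_start, z_end, n_steps=8):
    """Linearly interpolate between two latent vectors,
    returning an (n_steps, latent_dim) array of points on the path."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return np.stack([(1 - a) * z_start + a * z_end for a in alphas])

# Two random latent codes; decoding each row with G(z, c) would give
# a sequence of images morphing from one sample to the other.
z0, z1 = np.random.randn(100), np.random.randn(100)
path = latent_walk(z0, z1)
print(path.shape)  # (8, 100)
```

If every point along the path decodes to a plausible PET image, the latent space is smooth, which is the property the abstract exploits as an evaluation tool.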
RADIOGAN was compared with two state-of-the-art architectures, DCGAN and CGAN, and outperformed both by synthesizing high-quality, realistic images. RADIOGAN showed very promising results and can be extended to CT images.