Ising-GAN: Annotated Data Augmentation with a Spatially Constrained Generative Adversarial Network


Data augmentation is a popular technique in which new dataset samples are synthesized artificially in order to aid the training of learning-based algorithms and to avoid overfitting. Methods based on Generative Adversarial Networks (GANs) have recently rekindled research interest in new data augmentation techniques. In this paper we propose a new GAN-based model for data augmentation that incorporates a Markov Random Field-based spatial constraint encouraging the synthesis of spatially smooth outputs. Oriented towards medical imaging datasets for which localization/segmentation annotations are available, our model can also produce artificial annotations simultaneously. We evaluate performance numerically by measuring the accuracy of a U-Net trained to detect cells in microscopy images using the augmented dataset. Both the numerical trials and the qualitative results validate the usefulness of our model.
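The abstract refers to a Markov Random Field (Ising-model) spatial constraint that encourages spatially smooth synthetic outputs. As a rough, hypothetical illustration of that general idea (not the authors' implementation), the sketch below shows one way a pairwise Ising/MRF-style smoothness penalty on a generated annotation mask could be added to a generator loss in PyTorch; the function name, the `weight` parameter, and the usage line are assumptions made for the example.

```python
import torch

def ising_smoothness_penalty(mask: torch.Tensor, weight: float = 1.0) -> torch.Tensor:
    """Ising/MRF-style pairwise smoothness prior on a generated annotation mask.

    Penalizes disagreement between 4-connected neighbouring pixels, which
    encourages spatially smooth (blob-like) synthetic annotations.

    mask: tensor of shape (B, 1, H, W) with values in [0, 1].
    """
    # Horizontal and vertical neighbour differences (pairwise potentials).
    dh = (mask[:, :, :, 1:] - mask[:, :, :, :-1]).abs().mean()
    dv = (mask[:, :, 1:, :] - mask[:, :, :-1, :]).abs().mean()
    return weight * (dh + dv)

# Hypothetical use inside a generator training step:
#   g_loss = adversarial_loss + ising_smoothness_penalty(fake_mask, weight=0.1)
```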