3D Conditional Adversarial Learning for Synthesizing Microscopic Neuron Image Using Skeleton-To-Neuron Translation

The automatic reconstruction of single neuron cells from microscopic images is essential to enabling large-scale data-driven investigations in neuron morphology research. However, the performance of single neuron reconstruction algorithms is constrained by both the quantity and the quality of annotated 3D microscopic images, since annotating single neuron models is highly labour-intensive. We propose a framework for synthesizing microscopy-realistic 3D neuron images from simulated single neuron skeletons using conditional Generative Adversarial Networks (cGAN). We build the generator network with multi-resolution sub-modules to improve output fidelity. We evaluate our framework on the Janelia-Fly dataset from the BigNeuron project. Through both qualitative and quantitative analysis, we show that the proposed framework outperforms other state-of-the-art methods in the quality of the synthetic neuron images. We also show that combining real neuron images with the synthetic images generated by our framework improves the performance of neuron segmentation.
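As a rough illustration of the skeleton-to-neuron translation idea, the sketch below outlines a 3D cGAN with a coarse-to-fine generator (a stand-in for the multi-resolution sub-modules) and a patch-based discriminator in PyTorch. The module names, channel counts, and two-scale design are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch (assumed architecture, not the authors' implementation) of a
# 3D conditional GAN that maps a binary skeleton volume to a microscopy-like
# neuron volume.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class MultiResGenerator(nn.Module):
    """Coarse-to-fine generator: a low-resolution sub-module captures global
    structure, a full-resolution sub-module refines local texture."""

    def __init__(self, base_ch=16):
        super().__init__()
        # Coarse branch operates on a 2x-downsampled skeleton volume.
        self.coarse = nn.Sequential(conv_block(1, base_ch), conv_block(base_ch, base_ch))
        # Fine branch fuses upsampled coarse features with the full-resolution skeleton.
        self.fine = nn.Sequential(
            conv_block(1 + base_ch, base_ch),
            nn.Conv3d(base_ch, 1, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, skeleton):
        low = F.interpolate(skeleton, scale_factor=0.5, mode="trilinear", align_corners=False)
        feat = self.coarse(low)
        feat = F.interpolate(feat, size=skeleton.shape[2:], mode="trilinear", align_corners=False)
        return self.fine(torch.cat([skeleton, feat], dim=1))


class PatchDiscriminator(nn.Module):
    """3D PatchGAN-style discriminator: scores overlapping patches of the
    (skeleton, image) pair, so gradients focus on local realism."""

    def __init__(self, base_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, base_ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base_ch, base_ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base_ch * 2, 1, kernel_size=3, padding=1),
        )

    def forward(self, skeleton, image):
        return self.net(torch.cat([skeleton, image], dim=1))


if __name__ == "__main__":
    G, D = MultiResGenerator(), PatchDiscriminator()
    skel = torch.rand(1, 1, 32, 64, 64)   # toy skeleton volume (batch, channel, D, H, W)
    fake = G(skel)                        # synthesized neuron volume, same spatial size
    score = D(skel, fake)                 # patch-wise real/fake logits
    print(fake.shape, score.shape)
```

In a full training loop, the discriminator would be trained on (skeleton, real image) versus (skeleton, synthesized image) pairs, with the generator trained against the adversarial loss, typically combined with a voxel-wise reconstruction term.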