Non-Rigid 2D-3D Registration Using Convolutional Autoencoders

In this paper, we propose a novel neural network-based framework for the non-rigid 2D-3D registration of lateral cephalograms and volumetric cone-beam CT (CBCT) images. The task is formulated as an embedding problem, where we utilize a statistical volumetric representation and embed the X-ray image into a code vector representing the non-rigid volumetric deformations. In particular, we build a deep ResNet-based encoder to infer the code vector from the input X-ray image. We design a decoder that generates digitally reconstructed radiographs (DRRs) from the non-rigidly deformed volumetric image determined by the code vector. The parameters of the encoder are optimized by minimizing the difference between synthetic DRRs and input X-ray images in an unsupervised way. In the absence of geometric constraints from multi-view X-ray images, we exploit structural constraints from a multi-scale feature pyramid in the similarity analysis. The training process is unsupervised and does not require paired 2D X-ray images and 3D CBCT images. The system constructs a volumetric image from a single X-ray image and realizes 2D-3D registration between lateral cephalograms and CBCT images.
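To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the overall idea, not the authors' implementation: a ResNet-based encoder maps a grayscale lateral cephalogram to a code vector, the code weights deformation modes of an assumed statistical CBCT atlas, the deformed volume is projected to a DRR (here with a simplified parallel projection standing in for full DRR ray integration), and the encoder is trained without paired labels by a multi-scale image loss (a plain average-pooling pyramid standing in for the paper's feature pyramid). All dimensions, names, and the placeholder statistical model are assumptions for illustration.

```python
# Hypothetical sketch (not the authors' code) of encoder -> deformation code ->
# deformed volume -> DRR -> unsupervised multi-scale similarity loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

CODE_DIM = 32          # assumed size of the deformation code vector
VOL = (64, 64, 64)     # assumed (D, H, W) of the statistical volume
IMG = 128              # assumed X-ray / DRR resolution


class XrayEncoder(nn.Module):
    """ResNet-based encoder: grayscale X-ray -> deformation code."""
    def __init__(self):
        super().__init__()
        self.net = resnet18(num_classes=CODE_DIM)
        # single-channel cephalograms instead of RGB input
        self.net.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)

    def forward(self, x):
        return self.net(x)


class DeformableDRRDecoder(nn.Module):
    """Deforms a statistical volume by the code and projects it to a DRR."""
    def __init__(self, mean_volume, deformation_modes):
        super().__init__()
        # mean_volume: (1, 1, D, H, W); deformation_modes: (CODE_DIM, D, H, W, 3)
        self.register_buffer("mean_volume", mean_volume)
        self.register_buffer("modes", deformation_modes)
        d, h, w = VOL
        zz, yy, xx = torch.meshgrid(
            torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
            torch.linspace(-1, 1, w), indexing="ij")
        # grid_sample expects the last dim ordered (x, y, z)
        self.register_buffer("identity_grid", torch.stack((xx, yy, zz), dim=-1))

    def forward(self, code):
        # displacement field = linear combination of modes weighted by the code
        disp = torch.einsum("bc,cdhwk->bdhwk", code, self.modes)
        grid = self.identity_grid.unsqueeze(0) + disp
        vol = self.mean_volume.expand(code.size(0), -1, -1, -1, -1)
        warped = F.grid_sample(vol, grid, align_corners=True)
        # simplified parallel projection along one axis as a stand-in for the
        # lateral DRR ray integration
        drr = warped.sum(dim=-1)                              # (B, 1, D, H)
        return F.interpolate(drr, size=(IMG, IMG), mode="bilinear",
                             align_corners=False)


def pyramid_loss(drr, xray, levels=4):
    """Multi-scale similarity: L1 over an average-pooling image pyramid."""
    loss = 0.0
    for _ in range(levels):
        loss = loss + F.l1_loss(drr, xray)
        drr, xray = F.avg_pool2d(drr, 2), F.avg_pool2d(xray, 2)
    return loss


if __name__ == "__main__":
    # placeholder atlas and deformation modes; a real system would learn or
    # estimate these from a CBCT population
    encoder = XrayEncoder()
    decoder = DeformableDRRDecoder(torch.rand(1, 1, *VOL),
                                   0.01 * torch.randn(CODE_DIM, *VOL, 3))
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    xray = torch.rand(2, 1, IMG, IMG)         # placeholder training batch
    code = encoder(xray)
    loss = pyramid_loss(decoder(code), xray)  # unsupervised: no paired 3D labels
    loss.backward()
    opt.step()
```

Under these assumptions, only the encoder has trainable parameters; the decoder is a fixed, differentiable renderer, so gradients flow from the image-space loss back to the code vector exactly as the unsupervised training described in the abstract requires.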