Towards Fully Automatic 2D US to 3D CT/MR Registration: A Novel Segmentation-Based Strategy

2D-US to 3D-CT/MR registration is a crucial module during minimally invasive ultrasound-guided liver tumor ablations. Many modern registration methods still require manual or semi-automatic slice pose initialization due to the insufficient robustness of automatic methods. State-of-the-art regression networks do not work well for liver 2D US to 3D CT/MR registration because of the tremendous inter-patient variability of the liver anatomy. To address this unsolved problem, we propose a deep learning network pipeline which, instead of a regression, starts with a classification network to recognize the coarse ultrasound transducer pose, followed by a segmentation network to detect the target plane of the US image in the CT/MR volume. The rigid registration result is derived using plane regression. In contrast to the state-of-the-art regression networks, we do not estimate registration parameters from multi-modal images directly, but rather focus on segmenting the target slice plane in the volume. The experiments reveal that this novel registration strategy can identify the initial slice pose in a 3D volume more reliably than standard regression-based techniques. The proposed method was evaluated with 1035 US images from 52 patients. We achieved angle and distance errors of 12.7±6.2° and 4.9±3.1 mm, clearly outperforming the state-of-the-art regression strategy, which results in a 37.0±15.6° angle error and a 19.0±11.6 mm distance error.
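The abstract does not spell out how the plane-regression step or the error metrics are computed. As a rough, hedged illustration only, the sketch below fits a plane to the voxel coordinates flagged by a segmentation network (via SVD, the direction of least variance giving the plane normal) and then computes an angle error between normals and a point-to-plane distance error. The function names (`fit_plane`, `plane_errors`) and the assumption that the segmentation output is available as an (N, 3) array of voxel coordinates are illustrative choices, not the authors' implementation.

```python
import numpy as np


def fit_plane(mask_voxels):
    """Least-squares plane fit ("plane regression") to segmented voxels.

    mask_voxels: (N, 3) array of voxel coordinates belonging to the
    predicted target slice plane. Returns (centroid, unit normal).
    """
    centroid = mask_voxels.mean(axis=0)
    centered = mask_voxels - centroid
    # The plane normal is the direction of least variance, i.e. the right
    # singular vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)


def plane_errors(centroid_est, normal_est, centroid_gt, normal_gt):
    """Angle error (degrees) between plane normals and distance error
    (same units as the voxel coordinates) between the plane centroids,
    measured along the ground-truth normal."""
    cos_angle = np.clip(abs(np.dot(normal_est, normal_gt)), 0.0, 1.0)
    angle_err = np.degrees(np.arccos(cos_angle))
    dist_err = abs(np.dot(centroid_est - centroid_gt, normal_gt))
    return angle_err, dist_err


if __name__ == "__main__":
    # Toy check: noisy points scattered around the z = 0 plane.
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(0, 100, 500),
                           rng.uniform(0, 100, 500),
                           rng.normal(0, 0.5, 500)])
    c, n = fit_plane(pts)
    print(plane_errors(c, n,
                       np.array([50.0, 50.0, 0.0]),
                       np.array([0.0, 0.0, 1.0])))
```

In this reading, the segmentation network localizes the target slice as a set of voxels, and the plane fit turns that set into the rigid slice pose; the same angle/distance measures would then be what the reported 12.7±6.2° and 4.9±3.1 mm errors refer to, though the paper itself should be consulted for the exact definitions.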