3D Ultrasound Generation from Partial 2D Observations Using Fully Convolutional and Spatial Transformation Networks
External beam radiation therapy (EBRT) is a therapeutic modality often used for the treatment of various types of cancer. EBRT's efficiency depends heavily on accurate tracking of the target to be treated and therefore requires the use of real-time imaging modalities such as ultrasound (US) during treatment. While US is cost-effective and non-ionizing, 2D US is not well suited to tracking targets that displace in 3D, and 3D US is challenging to integrate in real time due to insufficient temporal frequency. In this work, we present a 3D inference model based on fully convolutional networks combined with a spatial transformer network (STN) layer which, given a 2D US image and a baseline 3D US volume as inputs, predicts the deformation of the baseline volume to generate an up-to-date 3D US volume in real time. We train our model on 20 4D liver US sequences taken from the CLUST15 3D tracking challenge, testing the model on image tracking sequences. The proposed model achieves a normalized cross-correlation of 0.56 in an ablation study and a mean landmark location error of 2.92 ± 1.67 mm for target anatomy tracking. These promising results demonstrate the potential of generative STN models for predicting 3D motion fields during EBRT.
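The core operation of an STN layer as described above is differentiable resampling: the network predicts a dense displacement field, and the baseline volume is warped through that field by trilinear interpolation. The sketch below is a minimal NumPy illustration of that warping step only, not the authors' implementation; the function name and array conventions are assumptions for illustration (a trained model would produce `displacement` from the 2D US input, and a framework op such as a grid sampler would make this step differentiable).

```python
import numpy as np

def warp_volume(volume, displacement):
    """Warp a 3D volume with a dense displacement field using trilinear
    interpolation -- the resampling step at the heart of an STN layer.

    volume:       (D, H, W) baseline 3D US volume
    displacement: (3, D, H, W) per-voxel displacement, in voxel units
    """
    D, H, W = volume.shape
    # Identity sampling grid, shifted by the predicted displacement and
    # clamped to stay inside the volume.
    z, y, x = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                          indexing="ij")
    zs = np.clip(z + displacement[0], 0, D - 1)
    ys = np.clip(y + displacement[1], 0, H - 1)
    xs = np.clip(x + displacement[2], 0, W - 1)

    # Trilinear interpolation: blend the 8 voxels surrounding each
    # (possibly fractional) sampling coordinate.
    z0, y0, x0 = (np.floor(zs).astype(int),
                  np.floor(ys).astype(int),
                  np.floor(xs).astype(int))
    z1 = np.minimum(z0 + 1, D - 1)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wz, wy, wx = zs - z0, ys - y0, xs - x0

    out = np.zeros_like(volume, dtype=float)
    for dz, dy, dx in np.ndindex(2, 2, 2):
        zi = z1 if dz else z0
        yi = y1 if dy else y0
        xi = x1 if dx else x0
        w = ((wz if dz else 1 - wz)
             * (wy if dy else 1 - wy)
             * (wx if dx else 1 - wx))
        out += w * volume[zi, yi, xi]
    return out
```

With a zero displacement field the warp is the identity; a uniform unit shift along one axis slides the volume by one voxel, with border voxels clamped. In the paper's setting the displacement field would be regressed by the fully convolutional network rather than supplied by hand.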