Zero-Shot Adaptation to Simulate 3D Ultrasound Volume by Learning a Multilinear Separable 2D Convolutional Neural Network
Ultrasound imaging relies on sensing waves returned after interaction with scattering media present in biological tissues. An acoustic pulse transmitted by a single-element transducer dilates along the direction of propagation and is observed as a 1D point spread function (PSF) in A-mode imaging. In 2D B-mode imaging, a 1D array of transducer elements is used, and dilation of the pulse is also observed along the direction of these elements, manifesting a 2D PSF. In 3D B-mode imaging using a 2D matrix of transducer elements, a 3D PSF is observed. Fast simulation of a 3D B-mode volume with convolutional transformer networks that learn the PSF family would require a training dataset of true 3D volumes, which are not readily available. Here we start, in Stage 0, with a simple physics-based 3D simulator that generates speckles from a tissue echogenicity map. Next, in Stage 1, we learn a multilinear separable 2D convolutional neural network using 1D convolutions to model the PSF family along the direction of ultrasound propagation and orthogonal to it. This network is adversarially trained using a visual Turing test on 2D ultrasound images. Since the PSF is circularly symmetric about an axis parallel to the direction of wave propagation, we simulate the full 3D volume by alternating the direction of the 1D convolution along the two axes that are mutually orthogonal to the direction of wave propagation. We validate performance using a visual Turing test with experts and distribution similarity measures.
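The sketch below illustrates the separable-PSF idea described in the abstract: a 1D kernel models pulse dilation along the propagation axis, and a 1D lateral kernel, reused along both axes orthogonal to propagation (exploiting the PSF's circular symmetry about the beam axis), approximates the 3D PSF as a product of 1D responses applied to an echogenicity-weighted scatterer field, in the spirit of the Stage 0 simulator. This is not the authors' implementation: the fixed analytic kernels stand in for the learned 1D convolutions, and the Gaussian-modulated pulse, Gaussian beam profile, white-noise scatterer model, envelope detection, log compression, and the function names (`simulate_volume`, etc.) are all assumptions made for illustration. The abstract alternates the direction of the 1D convolution between the two orthogonal axes; applying the same lateral kernel along both axes in turn is one simple reading of that construction.

```python
# Minimal sketch of a separable 3D speckle simulator (assumptions noted above).
import numpy as np
from scipy.ndimage import convolve1d
from scipy.signal import hilbert

def gaussian_modulated_pulse(length=33, f0=0.25, sigma=4.0):
    """Assumed axial PSF: Gaussian-windowed cosine (f0 in cycles/sample)."""
    t = np.arange(length) - length // 2
    return np.exp(-0.5 * (t / sigma) ** 2) * np.cos(2 * np.pi * f0 * t)

def gaussian_beam(length=17, sigma=3.0):
    """Assumed lateral/elevational PSF: Gaussian beam profile."""
    t = np.arange(length) - length // 2
    return np.exp(-0.5 * (t / sigma) ** 2)

def simulate_volume(echogenicity, rng=None):
    """echogenicity: 3D map, axis 0 = direction of wave propagation."""
    rng = rng or np.random.default_rng(0)
    # Stage-0-style scatterer field: echogenicity-weighted random scatterers.
    scatterers = echogenicity * rng.standard_normal(echogenicity.shape)

    # Separable 3D PSF via successive 1D convolutions:
    rf = convolve1d(scatterers, gaussian_modulated_pulse(), axis=0)  # axial
    rf = convolve1d(rf, gaussian_beam(), axis=1)                     # lateral
    rf = convolve1d(rf, gaussian_beam(), axis=2)                     # elevational

    # Envelope detection along the propagation axis, then log compression.
    env = np.abs(hilbert(rf, axis=0))
    bmode = 20 * np.log10(env / env.max() + 1e-6)
    return np.clip(bmode, -60, 0)

# Toy homogeneous phantom: 128 samples deep, 96 x 96 lateral/elevational.
volume = simulate_volume(np.ones((128, 96, 96)))
print(volume.shape)  # (128, 96, 96)
```

Replacing the two fixed 1D kernels with learned 1D convolution layers, trained adversarially against real 2D B-mode images, recovers the Stage 1 setup the abstract describes.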