Multi-Contrast MR Reconstruction with Enhanced Denoising Autoencoder Prior Learning

This paper proposes an enhanced denoising autoencoder prior (EDAEP) learning framework for accurate multi-contrast MR image reconstruction. A multi-model structure with various noise levels is designed to capture features of different scales from different contrast images. Furthermore, a weighted aggregation strategy is proposed to balance the contributions of the different model outputs, making the proposed model's performance more robust and stable under noise attacks. The model was trained to handle three different sampling patterns and multiple acceleration factors on two public datasets. Results demonstrate that the proposed method improves the quality of reconstructed images and outperforms previous state-of-the-art approaches. The code is available at https://github.com/yqx7150.
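The sketch below is not the authors' code; it only illustrates, under stated assumptions, how a weighted aggregation of denoising-autoencoder priors trained at different noise levels can be plugged into an iterative reconstruction with a k-space data-consistency step. The denoisers here are placeholders (Gaussian smoothing standing in for trained autoencoders), and the function names, weights, and step sizes are illustrative assumptions, not the EDAEP implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def make_placeholder_dae(sigma):
    """Stand-in for a denoising autoencoder trained at noise level `sigma`."""
    def dae(x):
        return gaussian_filter(x, sigma=sigma)
    return dae

def aggregated_prior_grad(x, daes, weights):
    """Weighted aggregation of DAE residuals used as a prior gradient.

    In DAE-prior methods, (dae(x) - x) approximates the gradient of a smoothed
    image prior; aggregating models trained at several noise levels is meant to
    capture features at different scales, as described in the abstract.
    """
    grads = [w * (dae(x) - x) for dae, w in zip(daes, weights)]
    return sum(grads) / sum(weights)

def reconstruct(kspace_undersampled, mask, daes, weights,
                n_iter=50, step=0.5, lam=0.1):
    """Alternate a k-space data-consistency step with a prior step."""
    x = np.abs(np.fft.ifft2(kspace_undersampled))  # zero-filled initial guess
    for _ in range(n_iter):
        # Data consistency: re-impose the acquired k-space samples.
        k = np.fft.fft2(x)
        k = mask * kspace_undersampled + (1 - mask) * k
        x = np.abs(np.fft.ifft2(k))
        # Prior step: move along the aggregated DAE prior direction.
        x = x + step * lam * aggregated_prior_grad(x, daes, weights)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phantom = np.zeros((128, 128)); phantom[32:96, 32:96] = 1.0
    mask = (rng.random((128, 128)) < 0.3).astype(float)        # random sampling pattern
    kspace = mask * np.fft.fft2(phantom)
    daes = [make_placeholder_dae(s) for s in (1.0, 2.0, 4.0)]  # three noise levels
    recon = reconstruct(kspace, mask, daes, weights=[0.5, 0.3, 0.2])
    print("reconstruction error:", np.linalg.norm(recon - phantom))

In this toy setup the weights simply down-weight the coarser (higher noise level) denoisers; the paper's weighted aggregation strategy balances the model outputs to remain stable under noise, which a fixed weight vector only approximates.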