Soft-Label Guided Semi-Supervised Learning for Bi-Ventricle Segmentation in Cardiac Cine MRI

Deep convolutional neural networks have been applied successfully to medical image segmentation in recent years by exploiting large amounts of training data with gold-standard annotations. In practice, however, good-quality annotations are difficult and expensive to obtain. This work proposes a novel semi-supervised learning framework to improve bi-ventricle segmentation from 2D cine MR images. Our method is efficient and effective: it computes soft labels for the unlabeled data dynamically. Specifically, in every training iteration we obtain soft labels, rather than hard labels, from a teacher model. The uncertainty of the target label for unlabeled data is intrinsically encoded in the soft label, and the soft labels improve toward the ideal targets as training proceeds. A separate loss regularizes the predictions on unlabeled data to match the probability distributions given by the soft labels in each iteration. Experiments show that our method outperforms a state-of-the-art semi-supervised method.
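The abstract describes a teacher-student loop in which the teacher supplies soft (probabilistic) labels for unlabeled slices and a separate loss pulls the student's predictions toward them. The sketch below illustrates that idea in PyTorch under stated assumptions; it is not the authors' implementation. The placeholder network, the dummy data shapes, the KL-divergence consistency loss, the EMA teacher update, and the hyper-parameters are all illustrative choices, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of soft-label guided semi-supervised
# segmentation training. Network, data, losses and hyper-parameters are
# assumed placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 3  # e.g. background, left ventricle, right ventricle (assumed)

class TinySegNet(nn.Module):
    """Placeholder 2D segmentation network standing in for the real backbone."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, num_classes, 1),
        )
    def forward(self, x):
        return self.body(x)  # per-pixel class logits

student = TinySegNet()
teacher = TinySegNet()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher only produces soft labels

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
ema_decay, unsup_weight = 0.99, 1.0  # assumed hyper-parameters

def ema_update(teacher, student, decay):
    """One common way to refresh the teacher; the paper's scheme may differ."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

for step in range(100):
    # Dummy batches standing in for labeled / unlabeled cine MR slices.
    x_lab = torch.randn(2, 1, 64, 64)
    y_lab = torch.randint(0, NUM_CLASSES, (2, 64, 64))
    x_unl = torch.randn(2, 1, 64, 64)

    # Supervised loss on labeled slices.
    sup_loss = F.cross_entropy(student(x_lab), y_lab)

    # Soft labels from the teacher: full probability maps, not argmax masks,
    # so per-pixel uncertainty is retained in the target.
    with torch.no_grad():
        soft_labels = F.softmax(teacher(x_unl), dim=1)

    # Separate loss pushing the student's distribution toward the soft labels
    # (KL divergence here; the exact loss in the paper may differ).
    student_logp = F.log_softmax(student(x_unl), dim=1)
    unsup_loss = F.kl_div(student_logp, soft_labels, reduction="batchmean")

    loss = sup_loss + unsup_weight * unsup_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # The soft labels improve over training as the teacher is refreshed.
    ema_update(teacher, student, ema_decay)
```

In this sketch the unlabeled target is recomputed from the teacher at every iteration, so the soft labels evolve with training rather than being fixed pseudo-labels; that is the behavior the abstract attributes to the proposed framework, while everything else here is an assumed stand-in.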