Learning Optimal Shape Representations for Multi-Modal Image Registration

In this work, we present a new strategy for the multi-modal registration of atypical structures whose boundaries are difficult to define in medical imaging (e.g. lymph nodes). Instead of using a standard Mutual Information (MI) similarity metric alone, we propose to combine MI with Modality Independent Neighbourhood Descriptors (MIND), which help distinguish the organs of interest from their adjacent structures. Our key contribution is to learn the MIND parameters that optimally represent specific registered structures. Because we register atypical organs, neural-network approaches requiring large databases of annotated training data cannot be used. Instead, we strongly constrain our learning problem using the MIND formalism, so that the optimal representation of the images depends on a limited number of parameters. In our results, pure MI-based registration is compared with MI-MIND registration on 3D synthetic images and on CT/MR images, with MI-MIND yielding improved structure overlaps. To our knowledge, this is the first time that MI-MIND has been evaluated and shown to be relevant for multi-modal registration.
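For readers unfamiliar with MIND, the sketch below illustrates the general form of the descriptor: for each voxel, patch-wise distances to neighbouring locations are turned into Gaussian-weighted responses, exp(-D_p / V), where V is a local variance estimate. This is a minimal NumPy illustration of the standard MIND formulation, not the authors' learned-parameter variant; the function name and parameters (radius, patch_size) are illustrative assumptions, and the variance estimate is simplified to the mean patch distance over the search region.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mind_descriptor(image, radius=1, patch_size=3, eps=1e-6):
    """Minimal sketch of Modality Independent Neighbourhood Descriptors (MIND).

    For a 3D image, each voxel x and each offset r in a small search region
    gets a response exp(-D_p(I, x, x+r) / V(I, x)), where D_p is a patch-wise
    squared distance and V is a local variance estimate. Parameter names here
    are illustrative and do not follow the paper's notation.
    """
    offsets = [(dz, dy, dx)
               for dz in range(-radius, radius + 1)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if (dz, dy, dx) != (0, 0, 0)]

    # Patch-wise squared distance to each shifted copy of the image,
    # approximated by box-filtering the voxel-wise squared differences.
    dists = []
    for off in offsets:
        shifted = np.roll(image, shift=off, axis=(0, 1, 2))
        dists.append(uniform_filter((image - shifted) ** 2, size=patch_size))
    dists = np.stack(dists, axis=0)             # shape (n_offsets, Z, Y, X)

    # Simplified local variance estimate: mean patch distance over the
    # search region (the original formulation uses a small neighbourhood).
    variance = dists.mean(axis=0) + eps

    descriptor = np.exp(-dists / variance)      # Gaussian-weighted responses
    descriptor /= descriptor.max(axis=0) + eps  # normalise per voxel
    return descriptor
```

In a registration pipeline of the kind described above, such descriptors would be computed for both images so that a similarity term (here combined with MI) compares modality-independent structure rather than raw intensities.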