Deep Multimodal Brain Network Learning for Joint Analysis of Structural Morphometry and Functional Connectivity


Learning from multimodal brain imaging data has attracted considerable attention in medical image analysis owing to the proliferation of multimodal data collection. It is widely accepted that multimodal data provide complementary information beyond what can be mined from a single modality. However, unifying image-based knowledge across modalities is challenging because of differences in image signals, resolution, data structure, etc. In this study, we design a supervised deep model, named deep multimodal brain network learning (DMBNL), to jointly analyze brain morphometry and functional connectivity on the cortical surface. Two graph-based kernels are proposed: a geometry-aware surface kernel (GSK) for processing cortical surface morphometry and a topology-aware network kernel (TNK) for the brain functional network. The vertex features computed on the cortical surface by the GSK are pooled and fed into the TNK as its initial regional features. Finally, a graph-level feature is computed for each individual and can be applied to classification tasks. We test our model on a large autism imaging dataset, and the experimental results demonstrate the effectiveness of our model.
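The pipeline described above — a surface kernel over cortical vertices, pooling into regions, a network kernel over functional connectivity, and a graph-level readout — can be sketched as a single forward pass. This is a minimal NumPy illustration, not the authors' implementation: the graph-convolution form, the sizes (`V` vertices, `R` regions, `F` features), the parcellation, and all weights are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: V cortical vertices, R brain regions, F feature dims.
V, R, F = 100, 10, 8

# --- GSK, sketched as one graph-convolution step over the cortical surface
# mesh: each vertex aggregates its mesh neighbors' morphometry features.
A_surf = (rng.random((V, V)) < 0.05).astype(float)   # stand-in mesh adjacency
A_surf = np.maximum(A_surf, A_surf.T) + np.eye(V)    # symmetric, with self-loops
D_surf = 1.0 / A_surf.sum(axis=1, keepdims=True)     # row normalization
X_vert = rng.standard_normal((V, F))                 # vertex morphometry features
W_gsk = rng.standard_normal((F, F))                  # placeholder learned weights
H_vert = np.tanh(D_surf * (A_surf @ X_vert) @ W_gsk)

# --- Pool vertex features into initial regional features (mean per region,
# using a deterministic stand-in parcellation of vertices into regions).
region_of = np.arange(V) % R
X_reg = np.stack([H_vert[region_of == r].mean(axis=0) for r in range(R)])

# --- TNK, sketched as a graph convolution over the functional connectivity
# network between regions.
A_func = np.abs(rng.standard_normal((R, R)))         # stand-in connectivity
A_func = (A_func + A_func.T) / 2 + np.eye(R)
D_func = 1.0 / A_func.sum(axis=1, keepdims=True)
W_tnk = rng.standard_normal((F, F))
H_reg = np.tanh(D_func * (A_func @ X_reg) @ W_tnk)

# --- Graph-level readout and a linear classification head per individual.
g = H_reg.mean(axis=0)                               # one vector per subject
w, b = rng.standard_normal(F), 0.0
score = float(g @ w + b)                             # classification logit
print(g.shape)                                       # (8,)
```

In practice each kernel would comprise several trainable layers fit by supervised learning on labels (e.g., autism vs. control), but the data flow — vertices to regions to a graph-level vector — follows the description above.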