Multi-frame CT-video Registration for 3D Airway-Wall Analysis


Bronchoscopy and three-dimensional (3D) computed tomography (CT) are important complementary tools for managing lung diseases. Endobronchial video captured during bronchoscopy gives live views of the airway-tree interior with vivid detail of the airway mucosal surfaces, while the 3D CT images give considerable anatomical detail. Unfortunately, little effort has been made to link these rich data sources. This paper describes a rapid interactive multi-frame method for registering the video frames constituting a complete bronchoscopic video sequence onto their respective locations within the CT-based 3D airway tree. Registration results for both phantom and human cases show our method's efficacy compared to ground-truth data, with a maximum position error of 8.5 mm, a maximum orientation error of 17 degrees, and a minimum trajectory accuracy of 94.1%. We also apply our method to multimodal 3D airway-wall analysis within a comprehensive bronchoscopic video analysis system.
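The abstract reports registration accuracy as position error, orientation error, and trajectory accuracy against ground truth. As a point of reference only (this is not the authors' implementation, and the function names are illustrative), the first two metrics are conventionally computed from estimated and ground-truth camera poses like so:

```python
import numpy as np

def position_error(p_est, p_gt):
    """Euclidean distance (e.g., in mm) between estimated and
    ground-truth camera positions."""
    return float(np.linalg.norm(np.asarray(p_est, dtype=float) -
                                np.asarray(p_gt, dtype=float)))

def orientation_error_deg(R_est, R_gt):
    """Angle in degrees of the relative rotation between the estimated
    and ground-truth camera orientations (3x3 rotation matrices)."""
    R_rel = np.asarray(R_est, dtype=float).T @ np.asarray(R_gt, dtype=float)
    # Rotation angle from the trace of the relative rotation matrix,
    # clipped to guard against floating-point overshoot of [-1, 1].
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```

For example, a position estimate offset by (3, 4, 0) mm yields a 5 mm position error, and an orientation off by a 90-degree rotation about the viewing axis yields a 90-degree orientation error. Taking the maximum of these per-frame errors over a sequence gives summary figures of the kind quoted above.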