A Triple-Stage Self-Guided Network for Kidney Tumor Segmentation

The morphological characteristics of kidney tumors are a crucial factor in radiologists' accurate diagnosis and treatment planning. Unfortunately, quantitative study of the relationship between kidney tumor morphology and clinical outcomes is very difficult because kidney tumors vary dramatically in size, shape, and location. Automatic semantic segmentation of the kidney and tumor is a promising tool for developing advanced surgical planning techniques. In this work, we present a triple-stage self-guided network for the kidney tumor segmentation task. The low-resolution net roughly locates the volume of interest (VOI) in down-sampled CT images, while the full-resolution net and the tumor refine net extract accurate kidney and tumor boundaries within the VOI from full-resolution CT images. We propose dilated convolution blocks (DCBs) to replace the traditional pooling operations in the deeper layers of the U-Net architecture, better retaining detailed semantic information. In addition, a hybrid loss combining Dice and weighted cross entropy guides the model to focus on voxels that lie close to boundaries and are hard to distinguish. We evaluate our method on the KiTS19 (MICCAI 2019 Kidney Tumor Segmentation Challenge) test dataset, achieving average Dice scores of 0.9674 for kidney and 0.8454 for tumor, which ranked 2nd place in the KiTS19 challenge.
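The hybrid loss mentioned in the abstract (soft Dice combined with weighted cross entropy) can be sketched as below. The abstract does not specify the weighting scheme or the mixing coefficient, so the per-voxel `weights` array (e.g. larger weights near boundaries) and the `alpha` coefficient are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def dice_loss(probs, targets, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), computed on probabilities."""
    inter = np.sum(probs * targets)
    return 1.0 - (2.0 * inter + eps) / (np.sum(probs) + np.sum(targets) + eps)

def weighted_cross_entropy(probs, targets, weights, eps=1e-6):
    """Per-voxel binary cross entropy, averaged with per-voxel weights.

    In a boundary-focused scheme, `weights` would be larger for voxels
    close to the kidney/tumor boundary (an assumption for illustration).
    """
    ce = -(targets * np.log(probs + eps)
           + (1.0 - targets) * np.log(1.0 - probs + eps))
    return np.sum(weights * ce) / np.sum(weights)

def hybrid_loss(probs, targets, weights, alpha=0.5):
    """Convex combination of Dice and weighted cross entropy.

    `alpha` is a hypothetical mixing coefficient; the paper's exact
    balance between the two terms is not given in the abstract.
    """
    return (alpha * dice_loss(probs, targets)
            + (1.0 - alpha) * weighted_cross_entropy(probs, targets, weights))
```

A quick sanity check: predictions close to the ground truth should yield a small loss, while inverted predictions should yield a much larger one.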