Dgnet: Diagnosis Generation Network from Medical Image


Histopathological examination of skin lesions is considered the gold standard for the correct diagnosis of skin disease, especially for the many types of skin cancer. Limited by scarce histopathological image sets, inconspicuous patterns among different appearances of histopathological features, and the weak predictive power of existing models, little research has focused on computer-aided diagnosis of skin diseases from histopathological images. Although the rapid development of deep learning has shown remarkable advantages over traditional methods for medical image retrieval and mining, these models still cannot interpret their predictions in visually and semantically meaningful ways. Motivated by this analysis, we put forward an attention-based model that automatically generates diagnostic reports from raw histopathological examination images, while also providing a final diagnostic result and visualized attention that justifies the model's diagnostic process. Our model comprises an image model, a language model, and a separate attention module. The image model extracts multi-scale feature maps. The language model reads and explores the discriminative feature maps extracted by the image model to learn a direct mapping from caption words to image pixels. We propose an improved, trainable attention module that is separated from the language model and exposes the caption data to the language model; in addition, we apply a weak-touched method to connect the attention module and the language model. In our experiments, we train, validate, and test the model on a dataset of 1,200 histopathological images covering 11 different skin diseases. These histopathological images and the related diagnostic reports were collected in collaboration with a number of pathologists over the past ten years. The results show that our approach achieves better data-fitting ability and a faster convergence rate than the soft attention model. Furthermore, the comparison of evaluation scores indicates that our model achieves better language understanding.
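To make the described architecture concrete, the sketch below shows the general pattern the abstract outlines: a CNN image model producing a grid of region features, a separate trainable attention module, and an LSTM language model that consumes attended features while decoding report words. This is not the authors' DgNet implementation; the layer shapes, the additive-attention formulation, and all class names and hyperparameters are assumptions made purely for illustration.

```python
# Minimal PyTorch sketch of an attention-based image-captioning pipeline.
# Assumed (hypothetical) components: ImageEncoder, AdditiveAttention, CaptionDecoder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImageEncoder(nn.Module):
    """Toy CNN image model: turns an image into a grid of region feature vectors."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, images):                      # (B, 3, H, W)
        fmap = self.conv(images)                    # (B, D, H', W')
        return fmap.flatten(2).transpose(1, 2)      # (B, L, D) region features


class AdditiveAttention(nn.Module):
    """Separate, trainable attention module (additive / Bahdanau-style)."""
    def __init__(self, feat_dim, hidden_dim, attn_dim=128):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):               # feats: (B, L, D), hidden: (B, H)
        e = self.score(torch.tanh(self.feat_proj(feats)
                                  + self.hidden_proj(hidden).unsqueeze(1)))   # (B, L, 1)
        alpha = F.softmax(e, dim=1)                 # attention weights over regions
        context = (alpha * feats).sum(dim=1)        # (B, D) attended feature
        return context, alpha.squeeze(-1)


class CaptionDecoder(nn.Module):
    """LSTM language model fed with word embeddings plus attended image features."""
    def __init__(self, vocab_size, feat_dim=256, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attend = AdditiveAttention(feat_dim, hidden_dim)
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, captions):             # captions: (B, T) token ids
        B, T = captions.shape
        h = feats.new_zeros(B, self.lstm.hidden_size)
        c = feats.new_zeros(B, self.lstm.hidden_size)
        logits, alphas = [], []
        for t in range(T):
            context, alpha = self.attend(feats, h)  # attention conditioned on previous state
            x = torch.cat([self.embed(captions[:, t]), context], dim=1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.out(h))
            alphas.append(alpha)                    # keep weights for visualization
        return torch.stack(logits, dim=1), torch.stack(alphas, dim=1)


# Usage sketch: a batch of 2 images with 5-token captions over a 100-word vocabulary.
encoder, decoder = ImageEncoder(), CaptionDecoder(vocab_size=100)
feats = encoder(torch.randn(2, 3, 224, 224))
caps = torch.randint(0, 100, (2, 5))
logits, attn_maps = decoder(feats, caps)
loss = F.cross_entropy(logits.reshape(-1, 100), caps.reshape(-1))
```

In this pattern the attention weights returned at each step can be reshaped back onto the image grid to visualize which regions the model attended to when emitting each report word, which is how caption models of this family typically provide visual justification.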