CALGL Net: Pathological Images Generate Diagnostic Results

Histopathological images can reveal the cause and severity of a disease, so they play an important role in clinical diagnosis. Because there is no clear correspondence between possible lesion areas and histopathological image features, and because detailed annotated histopathological image sets are scarce, there has been little research on computer-aided diagnosis of skin cancer based on histopathological images. Moreover, for medical imaging, results that are interpretable and consistent with medical knowledge are especially important. Based on this analysis, we propose an improved, trainable C-ALGL model (CNN-AttendLSTM-GenerateLSTM Net) that generates both meaningful visual results and reasonable diagnostic descriptions from histopathological images. The model includes an image processing module, a diagnostic text processing module, and a module that generates the visualized image together with the diagnostic text, as shown in Figure 1. We propose an improved language model structure consisting of three parts: an Attention-LSTM, an Attention module, and a Generate-LSTM. By inserting the attention module between the two LSTM layers and changing the LSTM parameter transmission path, the language model relieves a single LSTM layer of the burden of simultaneously learning attention and generating diagnostic text. This module ultimately produces a visualization of the lesion area and the corresponding diagnostic text. Our experiments were trained, validated, and tested on a skin histopathological image dataset collected from a major dermatology hospital in Northeast China. The dataset contains 1.2K histopathological images covering 11 skin diseases, with corresponding disease category labels and diagnostic text, annotated and maintained by a number of pathologists over the past ten years. We selected several classical convolutional neural network frameworks for feature extraction from the pathological images.
Based on the features of pathological images, we chose two popular image captioning methods as baselines against which to compare our method. The results, shown in Table 1, indicate that our method generates higher-quality diagnostic text. Furthermore, C-ALGL Net can be generalized to other types of medical images to obtain meaningful diagnostic results.
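To make the decoder structure concrete, the following is a minimal NumPy sketch of one decoding step of a two-layer LSTM captioner with an attention module sandwiched between the layers, as the abstract describes. All weight shapes, dimensions, and variable names here are illustrative assumptions, not the authors' implementation; the weights are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, V, R = 16, 32, 50, 9  # region-feature dim, hidden dim, vocab size, #regions

def lstm_step(x, h, c, W):
    """One LSTM cell step; W maps the concatenation [x; h] onto the 4 gates."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sig(f) * c + sig(i) * np.tanh(g)
    h_new = sig(o) * np.tanh(c_new)
    return h_new, c_new

# Random stand-ins for CNN region features and learned weights (assumptions).
regions = rng.standard_normal((R, D))                       # CNN feature per region
W_att = rng.standard_normal((4 * H, (D + 2 * H) + H)) * .1  # Attention-LSTM gates
W_gen = rng.standard_normal((4 * H, (D + H) + H)) * .1      # Generate-LSTM gates
W_q   = rng.standard_normal((D, H)) * .1                    # attention query proj.
W_out = rng.standard_normal((V, H)) * .1                    # hidden -> word logits

h_att = c_att = h_gen = c_gen = np.zeros(H)
word_emb = rng.standard_normal(H)                           # previous-word embedding

# 1) Attention-LSTM: global image context + previous word + generator state.
x_att = np.concatenate([regions.mean(axis=0), word_emb, h_gen])
h_att, c_att = lstm_step(x_att, h_att, c_att, W_att)

# 2) Attention module between the two LSTM layers: softmax over region scores.
scores = regions @ (W_q @ h_att)
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                                        # attention weights
v_hat = alpha @ regions                                     # attended lesion feature

# 3) Generate-LSTM: attended feature + attention state -> next-word logits.
x_gen = np.concatenate([v_hat, h_att])
h_gen, c_gen = lstm_step(x_gen, h_gen, c_gen, W_gen)
logits = W_out @ h_gen
next_word = int(np.argmax(logits))
```

In a setup like this, `alpha` is what makes the result visualizable: the per-region attention weights can be upsampled over the histopathological image to highlight the lesion area associated with each generated word.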
