IEEE ISBI 2020 Virtual Conference, April 2020


Showing 351 - 400 of 459
  • IEEE Member: US $11.00
  • Society Member: US $0.00
  • IEEE Student Member: US $11.00
  • Non-IEEE Member: US $15.00
  • Calibrationless Parallel MRI Using Model Based Deep Learning (C-MODL)

    00:14:59
    0 views
    We introduce a fast model-based deep learning approach for calibrationless parallel MRI reconstruction. The proposed scheme is a non-linear generalization of structured low-rank (SLR) methods, which self-learn linear annihilation filters from the same subject; in contrast, the proposed scheme pre-learns the non-linear annihilation relations in the Fourier domain from exemplar data. The pre-learning strategy significantly reduces the computational complexity, making the proposed scheme three orders of magnitude faster than SLR schemes. The proposed framework also allows the use of a complementary spatial-domain prior; the hybrid regularization scheme offers improved performance over the calibrated image-domain MoDL approach. The calibrationless strategy minimizes potential mismatches between the calibration data and the main scan, while eliminating the need for a fully sampled calibration region.
  • A Context Based Deep Learning Approach for Unbalanced Medical Image Segmentation

    00:13:40
    0 views
    Automated medical image segmentation is an important step in many medical procedures. Recently, deep learning networks have been widely used for various medical image segmentation tasks, with U-Net and generative adversarial nets (GANs) among the most commonly used. Foreground-background class imbalance is common in medical images, and U-Net has difficulty handling it because of its cross-entropy (CE) objective function. Similarly, GANs also suffer from class imbalance because the discriminator looks at the entire image to classify it as real or fake. Since the discriminator is essentially a deep learning classifier, it cannot correctly identify minor changes in small structures. To address these issues, we propose a novel context-based CE loss function for U-Net, and a novel architecture, Seg-GLGAN. The context-based CE is a linear combination of the CE computed over the entire image and over its region of interest (ROI). In Seg-GLGAN, we introduce a novel context discriminator to which both the entire image and its ROI are fed as input, thus enforcing local context. We conduct extensive experiments on two challenging unbalanced datasets, PROMISE12 and ACDC, and observe that our methods yield better segmentation metrics than various baseline methods.
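The context-based CE described above can be sketched numerically. The following 1-D illustration is not the authors' implementation: the per-pixel binary CE and the mixing weight `lam` are assumptions, since the abstract only states that the loss is a linear combination of image-level and ROI-level CE.

```python
import math

def cross_entropy(preds, labels):
    """Mean binary cross-entropy over per-pixel probabilities and 0/1 labels."""
    eps = 1e-7
    total = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(preds)

def context_ce(preds, labels, roi_mask, lam=0.5):
    """Context-based CE: a linear combination of CE over the whole image
    and CE restricted to the ROI.  `lam` is a hypothetical mixing weight;
    the abstract does not state how the two terms are weighted."""
    roi_preds = [p for p, m in zip(preds, roi_mask) if m]
    roi_labels = [y for y, m in zip(labels, roi_mask) if m]
    full_term = cross_entropy(preds, labels)
    roi_term = cross_entropy(roi_preds, roi_labels) if roi_preds else 0.0
    return lam * full_term + (1.0 - lam) * roi_term
```

Because the ROI term averages only over foreground-dominated pixels, errors on small structures are not diluted by the (much larger) background, which is the imbalance mechanism the abstract targets.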
  • Soft-Label Guided Semi-Supervised Learning for Bi-Ventricle Segmentation in Cardiac Cine MRI

    00:08:26
    0 views
    Deep convolutional neural networks have been applied successfully to medical image segmentation tasks in recent years by taking advantage of large amounts of training data with gold-standard annotations. However, good-quality annotations are difficult and expensive to obtain in practice. This work proposes a novel semi-supervised learning framework to improve ventricle segmentation from 2D cine MR images. Our method is efficient and effective, computing soft labels dynamically for the unlabeled data. Specifically, we obtain soft labels, rather than hard labels, from a teacher model in every learning iteration. The uncertainty of the target label of unlabeled data is intrinsically encoded in the soft label, which can be improved towards the ideal target during training. We use a separate loss to regularize the unlabeled data to produce probability distributions similar to the soft labels in each iteration. Experiments show that our method outperforms a state-of-the-art semi-supervised method.
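The soft-label regularization term can be sketched as a cross-entropy between the teacher's soft distribution and the student's prediction for one unlabeled pixel. The abstract does not state the exact loss used, so this is a plausible sketch under that assumption, not the authors' code.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_label_loss(student_logits, teacher_probs):
    """Cross-entropy between the teacher's soft-label distribution and the
    student's predicted distribution for one unlabeled pixel.  Minimizing
    this pushes the student's distribution towards the teacher's, while the
    soft targets retain the teacher's uncertainty."""
    eps = 1e-7
    student = softmax(student_logits)
    return -sum(t * math.log(max(s, eps)) for t, s in zip(teacher_probs, student))
```

Unlike hard pseudo-labels, the minimum of this loss is the teacher distribution itself, so an uncertain teacher (e.g. [0.6, 0.4]) does not force the student into an overconfident prediction.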
  • Block Axial Checkerboarding: A Distributed Algorithm for Helical X-Ray CT Reconstruction

    00:14:55
    0 views
    Model-Based Iterative Reconstruction (MBIR) methods for X-ray CT provide improved image quality compared to conventional techniques like filtered backprojection (FBP), but their computational burden is undesirably high. Distributed algorithms have the potential to significantly reduce reconstruction time, but the communication overhead of existing methods has been a considerable bottleneck. This paper proposes a distributed algorithm called Block-Axial Checkerboarding (BAC) that utilizes the special structure found in helical CT geometry to reduce inter-node communication. Preliminary results using a simulated 3D helical CT scan suggest that the proposed algorithm has the potential to reduce reconstruction time in multi-node systems, depending on the balance between compute speed and communication bandwidth.
  • Interacting Convolution with Pyramid Structure Network for Automated Segmentation of Cervical Nuclei in Pap Smear Images

    00:14:39
    0 views
    The Pap smear method, based on the morphological properties of cell nuclei, is used to detect pre-cancerous cells in the uterine cervix. Automated and accurate segmentation of nuclei is essential for detection. In this paper, we propose an Interacting Convolution with Pyramid Structure Network (ICPN), which consists of a sufficient aggregating path that focuses on richer nucleus contexts and a selecting path that enables nucleus localization. The two paths are built on Interacting Convolutional Modules (ICM) and Internal Pyramid Resolution Complementing Modules (IPRCM), respectively. ICM reciprocally aggregates contextual details from two kernel sizes to capture the distinguishing features of nuclei of diverse sizes and shapes. Meanwhile, IPRCM hierarchically complements multiple resolution features to prevent information loss during encoding. The proposed method achieves a Zijdenbos similarity index (ZSI) of 0.972 ± 0.04 on the Herlev dataset, compared to the state-of-the-art approach.
  • Nuclei Segmentation Using Mixed Points and Masks Selected from Uncertainty

    00:14:38
    0 views
    Weakly supervised learning has drawn much attention as a way to mitigate the manual effort of annotating pixel-level labels for segmentation tasks. In nuclei segmentation, point annotation has been used successfully for training. However, points lack shape information, so the segmentation of nuclei with non-uniform color is unsatisfactory. In this paper, we propose a framework for weakly supervised nuclei segmentation using mixed point and mask annotations. To save extra annotation effort, we select typical nuclei for mask annotation from an uncertainty map. Using Bayesian deep learning tools, we first train a model with point annotations to predict the uncertainty. We then use the uncertainty map to automatically select representative hard nuclei for mask annotation. The selected nuclear masks are combined with points to train a better segmentation model. Experimental results on two nuclei segmentation datasets prove the effectiveness of our method. The code is publicly available.
  • Enhanced Image Registration with a Network Paradigm and Incorporation of a Deformation Representation Model

    00:13:18
    0 views
    In conventional registration methods, regularization functionals and balancing hyper-parameters need to be designed and tuned. Even so, heterogeneous tissue properties and balance requirements remain challenging. In this study, we propose a registration network with a novel deformation representation model to achieve spatially variant conditioning on the deformation vector field (DVF). In the form of a convolutional auto-encoder, the proposed representation model is trained with a rich set of DVFs as a feasibility descriptor. The auto-encoding discrepancy is then combined with fidelity in training the overall registration network in an unsupervised learning paradigm. The trained network generates DVF estimates from paired images in a single forward pass. Experiments with synthetic images and 3D cardiac MRIs demonstrate that the method accomplishes registration with physically and physiologically more feasible DVFs, sub-pixel registration errors, and millisecond execution times, and that incorporating the representation model significantly improves registration network performance.
  • Attention-Based CNN for KL Grade Classification: Data from the Osteoarthritis Initiative

    00:13:24
    0 views
    Knee osteoarthritis (OA) is a chronic degenerative disorder of joints and the most common reason for total knee joint replacement. Diagnosis of OA involves subjective judgment on symptoms, medical history, and radiographic readings using the Kellgren-Lawrence grade (KL-grade). Deep learning-based methods such as Convolutional Neural Networks (CNNs) have recently been applied to automatically diagnose radiographic knee OA. In this study, we applied a Residual Neural Network (ResNet) to first detect the knee joint in radiographs, and then combined ResNet with a Convolutional Block Attention Module (CBAM) to predict the KL-grade automatically. The proposed model achieved a multi-class average accuracy of 74.81%, a mean squared error of 0.36, and a quadratic Kappa score of 0.88, a significant improvement over published results. The attention maps were analyzed to provide insight into the decision process of the proposed model.
  • Reduce False-Positive Rate by Active Learning for Automatic Polyp Detection in Colonoscopy Videos

    00:10:19
    0 views
    Automatic polyp detection is reported to have a high false-positive rate (FPR) because of various polyp-like structures and artifacts in the complex colon environment. An efficient polyp computer-aided detection (CADe) system should have high sensitivity and a low FPR (high specificity). Convolutional neural networks have been applied to colonoscopy-based automatic polyp detection and have achieved high polyp detection rates. However, excessive false positives caused by complex colon environments stand in the way of clinical implementation of CADe systems. To reduce the false-positive rate, we propose an automatic polyp detection algorithm that combines the YOLOv3 architecture with active learning. The algorithm was trained on colonoscopy videos/images from 283 subjects. Tested on 100 short and 9 full colonoscopy videos, it showed FPRs of 2.8% and 1.5%, respectively, with sensitivities similar to those of expert endoscopists.
  • Diffeomorphic Smoothing for Retinotopic Mapping

    00:18:39
    0 views
    Retinotopic mapping, the mapping of visual input on the retina to cortical neurons, is an important topic in vision science. Typically, cortical neurons are related to visual input on the retina using functional magnetic resonance imaging (fMRI) of cortical responses to slowly moving visual stimuli on the retina. Although it is well known from neurophysiology studies that retinotopic mapping is locally diffeomorphic (i.e., smooth, differentiable, and invertible) within each local area, the retinotopic maps from fMRI are often not diffeomorphic, especially near the fovea, because of the low signal-to-noise ratio of fMRI. The aim of this study is to develop and solve a mathematical model that produces diffeomorphic retinotopic mapping from fMRI data. Specifically, we adopt a concept from geometry, the Beltrami coefficient, as the tool to define diffeomorphism, and formulate the problem in an optimization framework. We then solve the model with numerical methods. The results obtained from both synthetic and real retinotopy datasets demonstrate that the proposed method is superior to conventional smoothing methods.
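The diffeomorphism criterion the abstract relies on can be stated compactly. This is the standard formulation of the Beltrami coefficient; the notation is assumed here, not taken verbatim from the paper. For a mapping $f$ written in complex coordinates $z = x + iy$:

```latex
\mu_f(z) \;=\; \frac{\partial f / \partial \bar{z}}{\partial f / \partial z},
\qquad
f \ \text{is locally diffeomorphic (orientation-preserving)} \iff |\mu_f(z)| < 1 .
```

The optimization sketched in the abstract can then be read as finding the map closest to the noisy fMRI-derived retinotopic coordinates subject to this constraint on $|\mu_f|$.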
  • Scanner Invariant Multiple Sclerosis Lesion Segmentation from MRI

    00:14:34
    0 views
    This paper presents a simple and effective generalization method for magnetic resonance imaging (MRI) segmentation when data is collected from multiple MRI scanning sites and as a consequence is affected by (site-)domain shifts. We propose to integrate a traditional encoder-decoder network with a regularization network. This added network includes an auxiliary loss term which is responsible for the reduction of the domain shift problem and for the resulting improved generalization. The proposed method was evaluated on multiple sclerosis lesion segmentation from MRI data. We tested the proposed model on an in-house clinical dataset including 117 patients from 56 different scanning sites. In the experiments, our method showed better generalization performance than other baseline networks.
  • Optimizing Particle Detection by Colocalization Analysis in Multi-Channel Fluorescence Microscopy Images

    00:14:22
    0 views
    Automatic detection of virus particles displayed as small spots in fluorescence microscopy images is an important task for elucidating infection processes. Particles are typically labeled with multiple fluorophores to acquire multi-channel images. We propose a new weakly supervised approach for automatic particle detection in the lower-SNR channel of two-channel fluorescence microscopy data. A main advantage is that labeled data is not required: instead, colocalization in different channels is exploited as a surrogate for ground truth using a novel measure. Our approach has been evaluated on synthetic images as well as challenging live-cell microscopy images of human immunodeficiency virus type 1 particles. We found that our approach yields results comparable to a state-of-the-art supervised method, and can cope with defective fluorescent labeling as well as chromatic aberration of the microscope.
  • Medical Data Inquiry Using a Question Answering Model

    00:09:46
    0 views
    Access to hospital data is commonly a difficult, costly and time-consuming process requiring extensive interaction with network administrators. This leads to possible delays in obtaining insights from data, such as diagnosis or other clinical outcomes. Healthcare administrators, medical practitioners, researchers and patients could benefit from a system that extracts relevant information from healthcare data in real time. In this paper, we present a question answering system that allows health professionals to interact with a large-scale database by asking questions in natural language. The system is built upon the BERT and SQLOVA models, which translate a user's request into an SQL query; the query is then passed to the data server to retrieve relevant information. We also propose a deep bilinear similarity model to improve the generated SQL queries by better matching terms in the user's query with the database schema and contents. The system was trained with only 75 real questions and 455 back-translated questions, and was evaluated on 75 additional real questions about a real health information database, achieving a retrieval accuracy of 78%.
  • Unsupervised Cone-Beam Artifact Removal using CycleGAN and Spectral Blending for Adaptive Radiotherapy

    00:13:09
    0 views
    Cone-beam computed tomography (CBCT) used in radiotherapy (RT) has the advantage that it can be acquired daily, but it is difficult to use for purposes other than patient setup because of its poor image quality compared to fan-beam computed tomography (CT). Although several methods have been proposed to improve the quality of CBCT, including deformable image registration, the outcomes have not yet been satisfactory. Recently, deep learning has been shown to produce high-quality results for various image-to-image translation tasks, suggesting it may be an effective tool for converting CBCT into CT. In the field of RT, however, it may not always be possible to obtain paired datasets of exactly matching CBCT and CT images. This study aimed to develop a novel unsupervised deep-learning algorithm that requires only unpaired CBCT and fan-beam CT images to remove the cone-beam artifact and thereby improve the quality of CBCT. Specifically, two cycle-consistency generative adversarial networks (CycleGANs) were trained in the sagittal and coronal directions, and the generated results along those directions were then combined using a spectral blending technique. To evaluate our method, we applied it to the American Association of Physicists in Medicine dataset. The experimental results show that our method outperforms the existing CycleGAN-based method both qualitatively and quantitatively.
  • Macular GCIPL Thickness Map Prediction via Time-Aware Convolutional LSTM

    00:11:12
    0 views
    Macular ganglion cell inner plexiform layer (GCIPL) thickness is an important biomarker for the clinical management of glaucoma. Clinical analysis of GCIPL progression uses averaged thickness only, which easily washes out small changes and reveals no spatial patterns. This is the first work to predict the 2D GCIPL thickness map. We propose a novel Time-aware Convolutional Long Short-Term Memory (TC-LSTM) unit that decomposes memories into short-term and long-term components and exploits time intervals to penalize the short-term memory. The TC-LSTM unit is incorporated into an auto-encoder-decoder so that the end-to-end model can handle the irregular sampling intervals of longitudinal GCIPL thickness map sequences and capture both spatial and temporal correlations. Experiments show the superiority of the proposed model over the traditional method.
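The time-aware penalty on short-term memory can be sketched as below. The decay function is an assumption borrowed from the time-aware LSTM literature; the abstract only states that the time interval penalizes the short-term memory component.

```python
import math

def time_decay(delta_t):
    """Monotone-decreasing discount for the short-term memory.  The exact
    form (1 / log(e + dt)) is an assumption, not taken from the paper."""
    return 1.0 / math.log(math.e + delta_t)

def adjust_cell(cell, short_term, delta_t):
    """Decompose the cell state into long- and short-term components and
    discount only the short-term part by the elapsed interval delta_t,
    leaving the long-term component intact."""
    long_term = [c - s for c, s in zip(cell, short_term)]
    decayed = [s * time_decay(delta_t) for s in short_term]
    return [l + d for l, d in zip(long_term, decayed)]
```

With this scheme, a long gap between two scans down-weights transient recent information while preserving the accumulated trend, which is how such a unit can tolerate irregular sampling intervals.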
  • Zero-Shot Medical Image Artifact Reduction

    00:09:52
    0 views
    Medical images may contain various types of artifacts with different patterns and mixtures, which depend on many factors such as scan setting, machine condition, patients' characteristics, surrounding environment, etc. However, existing deep learning based artifact reduction methods are restricted by their training set with specific predetermined artifact type and pattern. As such, they have limited clinical adoption. In this paper, we introduce a "Zero-Shot" medical image Artifact Reduction (ZSAR) framework, which leverages the power of deep learning but without using general pre-trained networks or any clean image reference. Specifically, we utilize the low internal visual entropy of an image and train a light-weight image-specific artifact reduction network to reduce artifacts in an image at test-time. We use Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) as vehicles to show that ZSAR can reduce artifacts better than the state-of-the-art both qualitatively and quantitatively, while using shorter test time. To the best of our knowledge, this is the first deep learning framework that reduces artifacts in medical images without using a priori training set.
  • Functional Multi-Connectivity: A Novel Approach to Assess Multi-Way Entanglement between Networks and Voxels

    00:13:17
    0 views
    The interactions among brain entities, commonly computed through pair-wise functional connectivity, are assumed to be manifestations of the information processing that drives function. However, this focus on large-scale networks and their pair-wise temporal interactions likely misses important information contained within fMRI data. We propose leveraging multi-connected features at both the voxel and network level to capture "multi-way entanglement" between networks and voxels, providing improved resolution of the interconnected brain functional hierarchy. Entanglement refers to each brain network being heavily enmeshed with the activity of other networks. Under our multi-connectivity assumption, elements of a system simultaneously communicate and interact with each other through multiple pathways. As such, we move beyond typical pair-wise temporal partial or full correlation. We propose a framework to estimate functional multi-connectivity (FMC) by computing the relationship between system-wide connections of intrinsic connectivity networks (ICNs). Results show that FMC obtains information which is different from standard pair-wise analyses.
  • CSAF-CNN: Cross-Layer Spatial Attention Map Fusion Network for Organ-At-Risk Segmentation in Head and Neck CT Images

    00:15:01
    0 views
    Accurate segmentation of organs-at-risk (OARs) in head and neck CT images is critical for planning radiotherapy of nasopharynx cancer. Fully convolutional networks (FCNs) are widely used for segmentation tasks. Recently, concurrent squeeze-and-excitation (scSE) blocks, a kind of attention module, have been shown to perform well in FCNs. However, the attention feature maps generated by scSE blocks are isolated from each other, which does not help the network notice similarities among different feature maps. Consequently, we propose a cross-layer spatial attention map fusion network (CSAF-CNN) that fuses different spatial attention maps to solve this problem. In addition, we introduce a top-k exponential logarithmic Dice loss (TELD-Loss) for OAR segmentation, which effectively alleviates the serious sample imbalance of this task. We evaluate our framework on head & neck CT scans of nasopharynx cancer patients from the StructSeg 2019 challenge, validate the effectiveness of the proposed method through an ablation study, and achieve very competitive results.
  • Recognition of Event-Associated Brain Functional Networks in EEG for Brain Network Based Applications

    00:19:59
    0 views
    Network-perspective studies of the human brain are rapidly increasing due to advances in the field of network neuroscience. In several brain network based applications, recognition of event-associated brain functional networks (BFNs) can be crucial to understanding event processing in the brain and can play a significant role in characterizing and quantifying complex brain networks. This paper presents a framework to identify event-associated BFNs using the phase locking value (PLV) in EEG. Based on the PLV dissimilarities between rest and event tasks, we identify the reactive band and the event-associated most reactive pairs (MRPs). With the MRPs identified, the event-associated BFNs are formed. The proposed method is applied to the `database for emotion analysis using physiological signals (DEAP)' data set to form the BFNs associated with several emotions. With the emotion-associated BFNs as features, multiple-emotion classification accuracies comparable to the state of the art are achieved. Results show that, with the proposed method, event-associated BFNs can be identified and used in brain network based applications.
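The PLV used above is a standard measure of phase synchrony between two signals; given their instantaneous phases (e.g. from a Hilbert transform of band-filtered EEG), it is the magnitude of the mean phase-difference vector:

```python
import cmath

def plv(phases_a, phases_b):
    """Phase locking value between two instantaneous-phase series (radians):
    PLV = |(1/N) * sum_n exp(i * (phi_a[n] - phi_b[n]))|.
    PLV is 1 for a constant phase difference and near 0 for unrelated phases."""
    n = len(phases_a)
    acc = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(acc) / n
```

Electrode pairs whose PLV changes most between rest and event conditions are the "most reactive pairs" the abstract builds its BFNs from.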
  • Region Proposal Network with IoU-Balance Loss and Graph Prior for Landmark Detection in 3D Ultrasound

    00:12:04
    0 views
    3D ultrasound (US) can improve prenatal examinations for fetal growth monitoring, and detecting anatomical landmarks of the fetus in 3D US has many applications. Classical methods directly regress the coordinates or Gaussian heatmaps of landmarks. However, these methods tend to show drawbacks when faced with the large volume and poor image quality of 3D US images. Departing from previous methodology, this work presents the first investigation of exploiting an object detection framework for landmark detection in 3D US. By regressing multiple parameters of the landmark-centered bounding box (B-box) with strict criteria, the object detection framework shows potential to outperform previous landmark detection methods. Specifically, we choose the region proposal network (RPN) with localization and classification branches as our backbone for detection efficiency. Based on the 3D RPN, we adopt an IoU-balance loss to enhance the communication between the two branches and promote landmark localization. Furthermore, we build a distance-based graph prior to regularize the landmark localization and thereby reduce false positives. We validate our method on the challenging task of detecting five fetal facial landmarks. Regarding the landmark localization and classification criteria, our method outperforms the state-of-the-art methods in efficacy and efficiency.
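One common form of IoU-balanced classification loss from the detection literature is sketched below; the abstract only names an "IoU-balance loss", so both the power-law weighting and the exponent `eta` are assumptions, not the authors' exact formulation.

```python
def iou_balanced_weights(cls_losses, ious, eta=1.5):
    """Re-weight per-proposal classification losses by IoU**eta so that
    well-localized proposals dominate the classification signal, coupling
    the localization and classification branches.  `eta` is a hypothetical
    exponent; the total loss magnitude is rescaled to stay unchanged."""
    weights = [iou ** eta for iou in ious]
    weighted = [w * l for w, l in zip(weights, cls_losses)]
    scale = sum(cls_losses) / max(sum(weighted), 1e-12)
    return [scale * x for x in weighted]
```

Under this weighting, a proposal with high box IoU contributes more to the classification gradient than a poorly localized one with the same raw loss, which is the "communication between two branches" effect the abstract describes.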
  • Weakly-supervised Balanced Attention Network for Gastric Pathology Image Localization and Classification

    00:13:23
    0 views
    Classification and localization in gastric cancer pathological images are critical for early diagnosis and therapy of related diseases. Clinically, scanning a pathological image takes a long time due to its high resolution and blurry boundaries, which motivates automatic localization of the cancer region within the pathological image. In this paper, a weakly supervised model is proposed to classify and localize the gastric cancer region in pathological images using only image-level labels. We propose a channel-wise attention (CA) and spatial-wise attention (SA) module to balance the features (feature balanced module, FBM) and incorporate a dropout attention mechanism (dropout attention module, DAM) into our model to enhance feature significance. Based on the classification model, we extract the optimal feature map to generate the localization bounding box with a cross attention module. Experiments on a sufficiently large gastric dataset indicate that our method outperforms other algorithms in classification and localization accuracy, which demonstrates its effectiveness.
  • Dgnet: Diagnosis Generation Network from Medical Image

    00:04:57
    0 views
    Histopathological examination of skin lesions is considered the gold standard for correct diagnosis of skin disease, especially for the manifold types of skin cancers. Limited by scarce histopathological image sets, inconspicuous patterns between different appearances of histopathological features, and the weak predictive power of existing models, little research has focused on computer-aided diagnosis of skin diseases based on histopathological images. Although the rapid development of deep learning has shown remarkable advantages over traditional methods for medical image retrieval and mining, such models remain unable to interpret their predictions in visually and semantically meaningful ways. Motivated by the above analysis, we put forward an attention-based model that automatically generates diagnostic reports from raw histopathological examination images, while providing the final diagnostic result and visualizing attention to justify the model's diagnosis process. Our model comprises an image model, a language model, and a separate attention module. The image model extracts multi-scale feature maps. The language model reads and explores the discriminative feature maps extracted by the image model to learn a direct mapping from caption words to image pixels. We propose an improved, trainable attention module, separated from the language model, expose the caption data to the language model, and apply a weak-touch method in the connection between the attention module and the language model. In our experiments, we train, validate, and test the model on a dataset of 1,200 histopathological images covering 11 different skin diseases. These histopathological images and related diagnostic reports were collected in collaboration with a number of pathologists over the past ten years. As the results show, our approach exhibits better data-fitting ability and a faster convergence rate than the soft attention model. Furthermore, the comparison of evaluation scores indicates that our model achieves better language understanding.
  • Mask uncertainty regularization to improve machine learning-based medical image segmentation

    00:07:00
    0 views
    Segmentation of different structures on CT and MRI scans is still a challenging problem that requires very accurate, confident ground-truth (GT) segmentations and strong automated solutions. This work presents an approach that naturally adjusts the training process by smoothing the borders of the segmentation mask within a band of several pixels. The proposed method can be considered either a regularization or a data pre-processing step that compensates for uncertainty in the GT. It can be used for any organ segmentation problem, both binary and multi-class. As one application, we report Dice, Precision, and Recall scores for numerical experiments on binary segmentation of the spleen in abdominal CT scans. The results demonstrate a stable and continuous improvement over a very strong baseline in the target Dice metric, up to 3.8% for one fold and 1% for the mean-averaged 5-fold ensemble, achieving a Dice of 0.9486 and a Recall of 0.9535.
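The border smoothing can be sketched in 1-D as a local average of the hard labels. This is a sketch under assumptions: the paper operates on 2-D masks, and its exact smoothing kernel and band width are not stated in the abstract.

```python
def smooth_mask_1d(mask, band=1):
    """Soften a hard 0/1 mask by averaging labels within +/- `band` pixels,
    so that only a narrow strip around each 0/1 boundary becomes fractional
    while the interior of each region keeps its hard label."""
    n = len(mask)
    out = []
    for i in range(n):
        lo, hi = max(0, i - band), min(n, i + band + 1)
        window = mask[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

Training against these softened targets tells the loss that pixels right at the GT border are genuinely uncertain, which is the compensation mechanism the abstract describes.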
  • Identifying Hard-Tissue Conditions from Dental Images Using Convolutional Neural Network

    00:06:56
    1 view
    Despite the enormous success of deep learning in various biomedical domains, its applications to dental hard tissue conditions are underexplored, in particular for analyzing photographic dental images. To bridge this gap, we propose a deep convolutional neural network framework to identify dental hard-tissue conditions from photographic tooth images, and show its superior performance over a few popular learning models.
  • Topologic and Geometric Methods in Osteoporotic Imaging

    00:31:05
    0 views
    Osteoporosis is a common age-related disease associated with enhanced fracture risk. Approximately 40% of women and 15% of men suffer one or more osteoporotic fractures in their lifetime, which reduce quality of life and often lead to immobility and mortality. State-of-the-art in vivo volumetric imaging allows segmentation and quantitative characterization of trabecular bone microstructure for accurate assessment of bone strength and fracture risk. This paper reviews topologic and geometric methods characterizing bone microstructure and their applications to osteoporosis.
  • FGB: Feature Guidance Branch for Organ Detection in Medical Images

    00:11:57
    0 views
    In this paper, we propose a novel method that detects and locates different abdominal organs in CT images. We 1) utilize the distributions of organs on CT images as a prior to guide object localization; 2) design an efficient guidance map and propose an interpretable scoring method, the feature guidance branch (FGB), to filter low-level feature maps by scoring them; 3) establish effective relations among feature maps via visualization to enhance interpretability. Evaluated on three public datasets, the proposed method outperforms the baseline model on all tasks by a remarkable margin. Furthermore, we conduct exhaustive visualization experiments to verify the rationality and effectiveness of our proposed model.
  • Diffeomorphic Registration for Retinotopic Mapping Via Quasiconformal Mapping

    00:16:43
    0 views
    The human visual cortex is organized into several functional regions/areas. Identifying these visual areas of the human brain (e.g., V1, V2, V4) is an important topic in neurophysiology and vision science. Retinotopic mapping via functional magnetic resonance imaging (fMRI) provides a non-invasive way of defining the boundaries of the visual areas. It is well known from neurophysiology studies that retinotopic mapping is diffeomorphic within each local area (i.e., locally smooth, differentiable, and invertible). However, due to the low signal-to-noise ratio of fMRI, the retinotopic maps from fMRI are often not diffeomorphic, making it difficult to delineate the boundaries of visual areas. The purpose of this work is to generate diffeomorphic retinotopic maps and improve the accuracy of the retinotopic atlas from fMRI measurements through a specifically designed registration procedure. Although existing cortical surface registration methods are very advanced, most of them have not fully utilized the features of retinotopic mapping. By considering those features, we formulate a mathematical model for registration and solve it with numerical methods. We compared our registration with several popular methods on synthetic data. The results demonstrate that the proposed registration is superior to conventional methods for the registration of retinotopic maps. The application of our method to a real retinotopic mapping dataset also resulted in much smaller registration errors.
  • SUNet: A Lesion Regularized Model for Simultaneous Diabetic Retinopathy and Diabetic Macular Edema Grading

    00:11:30
    0 views
    Diabetic retinopathy (DR), a leading ocular disease, is often accompanied by the complication of diabetic macular edema (DME). However, most existing works aim only at DR grading and ignore DME diagnosis, whereas doctors perform both tasks simultaneously. In this paper, motivated by the advantages of multi-task learning for image classification, and to mimic the behavior of clinicians in the visual inspection of patients, we propose a feature Separation and Union Network (SUNet) for simultaneous DR and DME grading. Further, to improve the interpretability of the disease grading, a lesion regularizer is imposed to regularize our network. Specifically, given an image, our SUNet first extracts a common feature for DR and DME grading and lesion detection. Then a feature blending block is introduced that alternately uses feature separation and feature union for task-specific feature extraction, where feature separation learns task-specific features for lesion detection and DR and DME grading, and feature union aggregates the features corresponding to lesion detection and DR and DME grading. In this way, we can distill out irrelevant features and leverage features of different but related tasks to improve the performance of each given task. The task-specific features of the same task at different feature separation steps are then concatenated for the prediction of each task. Extensive experiments on the very challenging IDRiD dataset demonstrate that our SUNet significantly outperforms existing methods for both DR and DME grading.
  • A Stem-Based Dissection of Inferior Fronto-Occipital Fasciculus with a Deep Learning Model

    00:15:06
    0 views
    The aim of this work is to improve the virtual dissection of the Inferior Fronto-Occipital Fasciculus (IFOF) by combining a recent insight into white matter anatomy from ex-vivo dissection with a data-driven approach based on a deep learning model. Current methods of tract dissection are not robust to false positives and neglect the neuroanatomical waypoints of a given tract, such as the stem. In this work, we design a deep learning model to segment the stem of the IFOF and show how the dissection of the tract can be improved. The proposed method is validated on the Human Connectome Project dataset, where expert neuroanatomists segmented the IFOF on multiple subjects. In addition, we compare the results to the most recent method in the literature for automatic tract dissection.
  • Tooth Segmentation and Labeling from Digital Dental Casts

    00:13:06
    0 views
    This paper presents an approach to automatic and accurate segmentation and identification of individual teeth from digital dental casts via deep graph convolutional neural networks. Instead of performing the teeth-gingiva and inter-tooth segmentation in two separate phases, the proposed method enables the simultaneous segmentation and identification of the gingiva and teeth. We perform vertex-wise feature learning via the feature-steered graph convolutional neural network (FeaStNet) [1], which dynamically updates the mapping between convolutional filters and local patches from digital dental casts. The proposed framework handles the tightly intertwined segmentation and labeling tasks with a novel constraint on crown shape distribution and concave contours to remove ambiguous labeling of neighboring teeth. We further enforce smooth segmentation using the pairwise relationship in local patches to penalize rough and inaccurate region boundaries and to regularize the vertex-wise labeling in the training process. Qualitative and quantitative evaluations on digital dental casts obtained in clinical orthodontics demonstrate that the proposed method achieves efficient and accurate tooth segmentation and improves upon the state-of-the-art.
  • Condensed U-Net (CU-Net): An Improved U-Net Architecture for Cell Segmentation Powered by 4x4 Max-Pooling Layers

    00:14:37
    0 views
    Recently, the U-Net has been the dominant approach in the cell segmentation task in biomedical images due to its success in a wide range of image recognition tasks. However, recent studies did not focus enough on updating the architecture of the U-Net and designing specialized loss functions for bioimage segmentation. We show that the U-Net architecture can achieve more successful results with efficient architectural improvements. We propose a condensed encoder-decoder scheme that employs the 4x4 max-pooling operation and triple convolutional layers. The proposed network architecture is trained using a novel combined loss function specifically designed for bioimage segmentation. On the benchmark datasets from the Cell Tracking Challenge, the experimental results show that the proposed cell segmentation system outperforms the U-Net.
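    The 4x4 max-pooling that condenses the encoder can be sketched in isolation. The `maxpool` helper below is a hypothetical illustration of the operation (one 4x4/stride-4 layer downsamples as much as two stacked 2x2 layers), not the authors' implementation.

```python
import numpy as np

def maxpool(x, k=4):
    """k x k max pooling with stride k on a single-channel 2D map."""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]          # crop to a multiple of k
    return x.reshape(x.shape[0] // k, k, x.shape[1] // k, k).max(axis=(1, 3))

x = np.arange(64, dtype=float).reshape(8, 8)
y = maxpool(x, k=4)                        # 8x8 -> 2x2 in one layer
```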
  • Supervised Augmentation: Leverage Strong Annotation for Limited Data

    00:13:18
    0 views
    A previously less-exploited dimension for approaching the data scarcity challenge in medical imaging classification is to leverage strong annotation when available data is limited but the annotation resource is plentiful. Strong annotation at a finer level, such as a region of interest, carries more information than simple image-level annotation and therefore should theoretically improve the performance of a classifier. In this work, we explored utilizing strong annotation by developing a new data augmentation method, which improves over common data augmentation (random crop and cutout) by significantly enriching augmentation variety and ensuring valid labels, given guidance from the strong annotation. Experiments on a real-world application of classifying gastroscopic images demonstrated that our method outperformed state-of-the-art methods by a large margin at all settings of data scarcity. Additionally, our method is flexible enough to integrate with other CNN improvement techniques and to handle data with mixed annotation.
  • Combining Multimodal Information for Metal Artefact Reduction: An Unsupervised Deep Learning Framework

    00:15:05
    0 views
    Metal artefact reduction (MAR) techniques aim at removing metal-induced noise from clinical images. In Computed Tomography (CT), supervised deep learning approaches have been shown effective but limited in generalisability, as they mostly rely on synthetic data. In Magnetic Resonance Imaging (MRI) instead, no method has yet been introduced to correct the susceptibility artefact, still present even in MAR-specific acquisitions. In this work, we hypothesise that a multimodal approach to MAR would improve both CT and MRI. Given their different artefact appearance, their complementary information can compensate for the corrupted signal in either modality. We thus propose an unsupervised deep learning method for multimodal MAR. We introduce the use of Locally Normalised Cross Correlation as a loss term to encourage the fusion of multimodal information. Experiments show that our approach favours a smoother correction in the CT, while promoting signal recovery in the MRI.
  • Localization of Critical Findings in Chest X-Ray without Local Annotations Using Multi-Instance Learning

    00:14:48
    0 views
    The automatic detection of critical findings in chest X-rays (CXR), such as pneumothorax, is important for assisting radiologists in their clinical workflow, for example by triaging time-sensitive cases and screening for incidental findings. While deep learning (DL) models have become a promising predictive technology with near-human accuracy, they commonly suffer from a lack of explainability, which is an important consideration for clinical deployment of DL models in the highly regulated healthcare industry. For example, localizing critical findings in an image is useful for explaining the predictions of DL classification algorithms. While there is a host of joint classification and localization methods in computer vision, the state-of-the-art DL models require locally annotated training data in the form of pixel-level labels or bounding box coordinates. In the medical domain, this requires an expensive amount of manual annotation by medical experts for each critical finding, a requirement that becomes a major barrier for training models that can rapidly scale to various findings. In this work, we address these shortcomings with an interpretable DL algorithm based on multi-instance learning that jointly classifies and localizes critical findings in CXR without the need for local annotations. We show competitive classification results on three different critical findings (pneumothorax, pneumonia, and pulmonary edema) from three different CXR datasets.
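    The core multi-instance learning idea can be illustrated with max pooling over instance (patch) probabilities, one common MIL aggregation choice. The abstract does not state which pooling operator the authors use, so this is an assumed sketch; the function name and the bag values are illustrative.

```python
import numpy as np

def mil_predict(instance_probs):
    """Aggregate instance (image-patch) probabilities into a bag score.

    Max pooling: the image is positive if its most suspicious patch is.
    The argmax doubles as a coarse localization of the finding, which is
    how MIL yields explanations without local annotations.
    """
    probs = np.asarray(instance_probs)
    idx = int(np.argmax(probs))
    return float(probs[idx]), idx

bag = [0.05, 0.10, 0.92, 0.30]     # hypothetical per-patch probabilities
score, where = mil_predict(bag)
```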
  • Exploiting Uncertain Deep Networks for Data Cleaning in Digital Pathology

    00:12:47
    0 views
    With the advent of digital pathology, there has been increasing interest in providing pathologists with machine learning tools, often based on deep learning, to obtain faster and more robust image assessment. Nonetheless, the accuracy of these tools relies on the generation of large training sets of pre-labeled images. This is typically a challenging and cumbersome process, requiring extensive pre-processing to remove spurious samples that may lead the training to failure. Unlike their plain counterparts, which tend to provide overconfident decisions and cannot identify samples they have not been specifically trained for, Bayesian Convolutional Neural Networks provide a reliable measure of classification uncertainty. In this study, we exploit this inherent capability to automate the data-cleaning phase of histopathological image assessment. Our experiments on a case study of colorectal cancer image classification demonstrate that our approach can boost the accuracy of downstream classification by at least 15%.
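    The uncertainty measure driving such data cleaning can be sketched as predictive entropy over several stochastic forward passes (a generic Bayesian-CNN recipe, e.g. MC dropout). This is an illustrative sketch, not the paper's exact formulation; the arrays stand in for real network outputs.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Uncertainty from T stochastic forward passes of a Bayesian CNN.

    mc_probs: array of shape (T, n_classes) of softmax outputs per pass.
    Returns the entropy (in nats) of the mean predictive distribution;
    high-entropy samples are candidates for removal during data cleaning.
    """
    p = np.asarray(mc_probs).mean(axis=0)
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

confident = [[0.98, 0.02]] * 10            # passes agree -> low entropy
ambiguous = [[0.9, 0.1], [0.1, 0.9]] * 5   # passes disagree -> high entropy
```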
  • DeepSEED: 3D Squeeze-and-Excitation Encoder-Decoder Convolutional Neural Networks for Pulmonary Nodule Detection

    00:07:10
    0 views
    Pulmonary nodule detection plays an important role in lung cancer screening with low-dose computed tomography (CT) scans. It remains challenging to build nodule detection deep learning models with good generalization performance due to unbalanced positive and negative samples. In order to overcome this problem and further improve state-of-the-art nodule detection methods, we develop a novel deep 3D convolutional neural network with an Encoder-Decoder structure in conjunction with a region proposal network. In particular, we utilize a dynamically scaled cross entropy loss to reduce the false positive rate and combat the sample imbalance problem associated with nodule detection. We adopt the squeeze-and-excitation structure to learn effective image features and utilize inter-dependency information across different feature maps. We have validated our method on publicly available CT scans with manually labelled ground truth from the LIDC/IDRI dataset and its subset LUNA16 with thinner slices. Ablation studies and experimental results demonstrate that our method can outperform state-of-the-art nodule detection methods by a large margin.
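    A dynamically scaled cross entropy of this kind is essentially the focal loss: easy, well-classified samples are down-weighted so that abundant easy negatives do not dominate training. A minimal sketch follows; the `alpha`/`gamma` values are the common defaults from the focal-loss literature, not necessarily those used by the authors.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: cross entropy scaled by (1 - p_t)^gamma.

    p: predicted probabilities of the positive class; y: 0/1 labels.
    Easy samples (p_t near 1) contribute almost nothing, so the loss
    focuses training on hard examples, countering class imbalance.
    """
    p = np.clip(np.asarray(p, dtype=float), 1e-7, 1 - 1e-7)
    y = np.asarray(y, dtype=float)
    p_t = np.where(y == 1, p, 1 - p)            # prob of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```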
  • Leveraging Self-supervised Denoising for Image Segmentation

    00:14:52
    0 views
    Deep learning (DL) has arguably emerged as the method of choice for the detection and segmentation of biological structures in microscopy images. However, DL typically needs copious amounts of annotated training data, which for biomedical problems is usually not available and excessively expensive to generate. Additionally, tasks become harder in the presence of noise, requiring even more high-quality training data. Hence, we propose to use denoising networks to improve the performance of other DL-based image segmentation methods. More specifically, we present ideas on how state-of-the-art self-supervised CARE networks can improve cell/nuclei segmentation in microscopy data. Using two state-of-the-art baseline methods, U-Net and StarDist, we show that our ideas consistently improve the quality of the resulting segmentations, especially when only limited training data for noisy micrographs are available.
  • Characterizing the Propagation Pattern of Neurodegeneration in Alzheimers Disease by Longitudinal Network Analysis

    00:13:14
    0 views
    Converging evidence shows that Alzheimer's disease (AD) is a neurodegenerative disease that represents a disconnection syndrome, whereby a large-scale brain network is progressively disrupted by one or more neuropathological processes. However, the mechanism by which pathological entities spread across a brain network is largely unknown. Since pathological burden may propagate trans-neuronally, we propose to characterize the propagation pattern of neuropathological events spreading across relevant brain networks as regulated by the organization of the network. Specifically, we present a novel mixed-effect model to quantify the relationship between longitudinal network alterations and neuropathological events observed at specific brain regions, whereby the topological distance to hub nodes, high-risk AD genetics, and environmental factors (such as education) are considered as predictor variables. In line with many cross-sectional studies, we find that AD-related neuropathology preferentially affects hub nodes. Furthermore, our statistical model provides strong evidence that abnormal neuropathological burden diffuses from hub nodes to non-hub nodes in a prion-like manner, whereby the propagation pattern follows the intrinsic organization of the large-scale brain network.
  • Robust Automatic Multiple Landmark Detection

    00:13:48
    0 views
    Reinforcement learning (RL) has proven to be a powerful tool for automatic single landmark detection in 3D medical images. In this work, we extend RL-based single landmark detection to detect multiple landmarks simultaneously in the presence of missing data, in the form of defaced 3D head MR images. Our proposed technique is both time-efficient and robust to missing data. We demonstrate that adding auxiliary landmarks can improve the accuracy and robustness of estimating primary target landmark locations. The multi-agent deep Q-network (DQN) approach described here detects landmarks within 2mm, even in the presence of missing data.
  • Multi-Scale Unrolled Deep Learning Framework for Accelerated Magnetic Resonance Imaging

    00:10:23
    0 views
    Accelerating data acquisition in magnetic resonance imaging (MRI) has been of perennial interest due to its prohibitively slow data acquisition process. Recent trends in accelerating MRI employ data-centric deep learning frameworks because of their fast inference time and "one-parameter-fits-all" principle, unlike traditional model-based acceleration techniques. Unrolled deep learning frameworks that combine deep priors with model knowledge are robust compared to naive deep learning-based frameworks. In this paper, we propose a novel multi-scale unrolled deep learning framework that learns deep image priors through a multi-scale CNN and is combined with an unrolled framework to enforce data consistency and model knowledge. Essentially, this framework combines the best of both learning paradigms: model-based and data-centric. The proposed method is verified with several experiments on numerous datasets.
  • CEUS-Net: Lesion Segmentation in Dynamic Contrast-Enhanced Ultrasound with Feature-Reweighted Attention Mechanism

    00:11:51
    0 views
    Contrast-enhanced ultrasound (CEUS) has become a popular clinical imaging technique for the dynamic visualization of tumor microvasculature. Due to the heterogeneous intratumor vessel distribution and ambiguous lesion boundaries, automatic tumor segmentation in CEUS sequences is challenging. To overcome these difficulties, we propose CEUS-Net, a novel U-Net infused with our designed feature-reweighted dense blocks. Specifically, CEUS-Net incorporates dynamic channel-wise feature re-weighting into the dense block to adapt the importance of learned lesion-relevant features. Besides, in order to efficiently utilize the dynamic characteristics of the CEUS modality, our model attempts to learn spatial-temporal features encoded in diverse enhancement patterns using a multichannel convolutional module. CEUS-Net has been tested on tumor segmentation tasks in CEUS images of breast and thyroid lesions, achieving Dice indices of 0.84 and 0.78 for breast and thyroid segmentation, respectively.
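    Dynamic channel-wise feature re-weighting follows the squeeze-and-excitation pattern: global average pooling squeezes each channel to a scalar, and a small gating network maps these to per-channel weights. The sketch below is a generic illustration, not the paper's exact block; `w1` and `w2` are hypothetical learned gate weights.

```python
import numpy as np

def channel_reweight(feats, w1, w2):
    """Squeeze-and-excitation style channel re-weighting.

    feats: (C, H, W) feature maps. The squeeze is global average pooling;
    a two-layer gate (ReLU bottleneck, sigmoid output) produces weights
    in (0, 1) that rescale each channel.
    """
    squeeze = feats.mean(axis=(1, 2))             # (C,) channel summaries
    hidden = np.maximum(w1 @ squeeze, 0.0)        # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gates, (C,)
    return feats * gates[:, None, None]

feats = np.ones((4, 3, 3))                        # toy (C, H, W) maps
rng = np.random.default_rng(0)
w1 = rng.standard_normal((2, 4))                  # hypothetical weights
w2 = rng.standard_normal((4, 2))
out = channel_reweight(feats, w1, w2)
```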
  • Informative Retrieval Framework for Histopathology Whole Slides Images Based on Deep Hashing Network

    00:11:00
    0 views
    Histopathology image retrieval is an emerging application for computer-aided cancer diagnosis. However, current retrieval methods, especially those based on deep hashing, pay little attention to the characteristics of histopathology whole slide images (WSIs): the retrieved results are occasionally dominated by similar images from a few WSIs, so the retrieval database is not sufficiently utilized. To solve these issues, we propose an informative retrieval framework based on a deep hashing network. Specifically, a novel loss function for the hashing network and a retrieval strategy are designed, which yield more informative retrieval results without reducing retrieval precision. The proposed method was verified on the ACDC-LungHP dataset and compared with the state-of-the-art method. The experimental results demonstrate the effectiveness of our method for retrieval from a large-scale database of histopathology whole slide images.
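    At query time, deep-hashing systems typically rank the database by Hamming distance between binary codes; the informative-retrieval strategy in the paper then diversifies which WSIs those top hits come from. The following is a generic sketch of the ranking step only, with illustrative codes.

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query hash code.

    query_code: (b,) binary code; db_codes: (N, b) binary codes.
    Returns the ranked indices and their distances, nearest first.
    """
    q = np.asarray(query_code)
    db = np.asarray(db_codes)
    dists = (db != q).sum(axis=1)                 # Hamming distances
    order = np.argsort(dists, kind="stable")
    return order, dists[order]

query = [1, 0, 1, 0]
db = [[1, 0, 1, 0],    # exact match, distance 0
      [0, 1, 0, 1],    # complement, distance 4
      [1, 0, 1, 1]]    # one bit off, distance 1
order, dists = hamming_rank(query, db)
```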
  • Improving Lung Nodule Detection with Learnable Non-Maximum Suppression

    00:14:34
    0 views
    Current lung nodule detection methods generate several candidate regions per nodule, so a Non-Maximum Suppression (NMS) algorithm is required to select a single region per nodule while eliminating the redundant ones. GossipNet is a 1D neural network (NN) for NMS that can learn the NMS parameters rather than relying on handcrafted ones. However, GossipNet does not take advantage of image features to learn NMS. We use Faster R-CNN with ResNet18 for candidate region detection and present FeatureNMS --- a neural network that provides additional image features to the input of GossipNet, obtained from a transformation over the voxel intensities of each candidate region in the CT image. Experiments indicate that FeatureNMS can improve nodule detection by 2.33% and 0.91%, on average, when compared to traditional NMS and the original GossipNet, respectively.
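    For reference, the traditional handcrafted NMS baseline that GossipNet and FeatureNMS learn to replace is a greedy IoU filter, sketched here in 2D for `[x1, y1, x2, y2]` boxes (the paper's detections are 3D, but the logic is the same).

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression.

    Repeatedly keep the highest-scoring box and discard all remaining
    boxes whose IoU with it exceeds iou_thr. Returns kept indices.
    """
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]          # best score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # intersection of box i with every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thr]     # keep only non-overlapping
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores, iou_thr=0.5)
```

The second box overlaps the first heavily (IoU 0.81) and is suppressed, while the distant third box survives.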
  • Diagnosing Colorectal Polyps in the Wild with Capsule Networks

    00:12:07
    0 views
    Colorectal cancer, largely arising from precursor lesions called polyps, remains one of the leading causes of cancer-related death worldwide. Current clinical standards require the resection and histopathological analysis of polyps due to test accuracy and sensitivity of optical biopsy methods falling substantially below recommended levels. In this study, we design a novel capsule network architecture (D-Caps) to improve the viability of optical biopsy of colorectal polyps. Our proposed method introduces several technical novelties including a novel capsule architecture with a capsule-average pooling (CAP) method to improve efficiency in large-scale image classification. We demonstrate improved results over the previous state-of-the-art convolutional neural network (CNN) approach by as much as 43%. This work provides an important benchmark on the new Mayo Polyp dataset, a significantly more challenging and larger dataset than previous polyp studies, with results stratified across all available categories, imaging devices and modalities, and focus modes to promote future direction into AI-driven colorectal cancer screening systems.
  • Automated Quantitative Analysis of Microglia in Bright-Field Images of Zebrafish

    00:14:05
    1 view
    Microglia are known to play important roles in brain development and homeostasis, yet their molecular regulation is still poorly understood. Identification of microglia regulators is facilitated by genetic screening and studying the phenotypic effects in animal models. Zebrafish are ideal for this, as their external development and transparency allow in vivo imaging by bright-field microscopy in the larval stage. However, manual analysis of the images is very labor intensive. Here we present a computational method to automate the analysis. It merges the optical sections into an all-in-focus image to simplify the subsequent steps of segmenting the brain region and detecting the contained microglia for quantification and downstream statistical testing. Evaluation on a fully annotated data set of 50 zebrafish larvae shows that the method performs close to the human expert.
  • Weakly Supervised Lesion Co-Segmentation on CT Scans

    00:14:17
    0 views
    Lesion segmentation in medical imaging serves as an effective tool for assessing tumor sizes and monitoring changes in growth. However, manual lesion segmentation is not only time-consuming but also expensive, and it requires expert radiologist knowledge. Therefore, many hospitals rely on a loose substitute, the Response Evaluation Criteria In Solid Tumors (RECIST). Although these annotations are far from precise, they are widely used throughout hospitals and are found in their picture archiving and communication systems (PACS). Therefore, these annotations have the potential to serve as a robust yet challenging means of weak supervision for training full lesion segmentation models. In this work, we propose a weakly supervised co-segmentation model that first generates pseudo-masks from the RECIST slices and then uses these as training labels for an attention-based convolutional neural network capable of segmenting common lesions from a pair of CT scans. To validate and test the model, we utilize the DeepLesion dataset, an extensive CT-scan lesion dataset that contains 32,735 PACS-bookmarked images. Extensive experimental results demonstrate the efficacy of our co-segmentation approach for lesion segmentation, with a mean Dice coefficient of 90.3%.
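    The reported evaluation metric, the Dice coefficient, has the standard definition 2|A∩B| / (|A|+|B|); a minimal implementation (the small epsilon guarding empty masks is an assumption, a common convention rather than anything specified by the paper):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice coefficient between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

full = np.ones((4, 4))
top = np.zeros((4, 4)); top[:2] = 1     # disjoint halves
bottom = np.zeros((4, 4)); bottom[2:] = 1
```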
  • Brain Lesion Detection Using a Robust Variational Autoencoder and Transfer Learning

    00:12:14
    0 views
    Automated brain lesion detection from multi-spectral MR images can assist clinicians by improving sensitivity as well as specificity in lesion studies. Supervised machine learning methods have been successful in lesion detection. However, these methods usually rely on a large number of manually delineated imaging data for specific imaging protocols and parameters, and often do not generalize well to other imaging parameters and demographics. Most recently, unsupervised models such as autoencoders have become attractive for lesion detection since they do not need access to manually delineated lesions. Despite the success of unsupervised models, using pre-trained models on an unseen dataset is still a challenge, because the new dataset may use different imaging parameters, demographics, and different pre-processing techniques. Additionally, using a clinical dataset that has anomalies and outliers can make unsupervised learning challenging, since the outliers can unduly affect the performance of the learned models. These two difficulties make unsupervised lesion detection a particularly challenging task. The method proposed in this work addresses these issues with a two-prong strategy: (1) we use a robust variational autoencoder model that is based on robust statistics, specifically the β-divergence, and can learn from data with outliers; (2) we use a transfer-learning method for learning models across datasets with different characteristics. Our results on MRI datasets demonstrate that we can improve the accuracy of lesion detection by adopting robust statistical models and transfer learning for a variational autoencoder model.
  • Separation of Metabolite and Macromolecule Signals for 1H-MRSI Using Learned Nonlinear Models

    00:14:11
    0 views
    This paper presents a novel method to reconstruct and separate metabolite and macromolecule (MM) signals in 1H magnetic resonance spectroscopic imaging (MRSI) data using learned nonlinear models. Specifically, deep autoencoder (DAE) networks were constructed and trained to learn the nonlinear low-dimensional manifolds on which the metabolite and MM signals individually reside. A regularized reconstruction formulation is proposed to integrate the learned models with the signal encoding model to reconstruct and separate the metabolite and MM components, and an efficient algorithm was developed to solve the associated optimization problem. The performance of the proposed method has been evaluated using simulated and experimental 1H-MRSI data. Efficient low-dimensional signal representation by the learned models and improved metabolite/MM separation over the standard parametric-fitting-based approach have been demonstrated.
  • Longitudinal Analysis of Mild Cognitive Impairment Via Sparse Smooth Network and Attention-Based Stacked Bi-Directional Long Short Term Memory

    00:08:52
    Alzheimer's disease (AD) is a common irreversible neurodegenerative disease among the elderly. To identify the early stage of AD (i.e., mild cognitive impairment, MCI), many recent studies in the literature use only a single time point and ignore the complementary information available across multiple time points. We therefore propose a novel method that combines a multi-time-point sparse smooth network with a long short-term memory (LSTM) network to identify early and late MCI from multiple time points of resting-state functional magnetic resonance imaging (rs-fMRI). Specifically, we first construct sparse smooth brain networks from rs-fMRI data at the different time points; an attention-based stacked bidirectional LSTM then extracts features and analyzes them longitudinally. Finally, we classify the subjects with a softmax classifier. The proposed method is evaluated on the public Alzheimer's Disease Neuroimaging Initiative Phase II (ADNI-2) database and demonstrates impressive performance compared with state-of-the-art methods.
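The attention pooling over the stacked BiLSTM's per-time-point hidden states can be sketched as follows. The dot-product scoring against a learned vector is a common choice for this kind of temporal attention and is an assumption here, not a detail taken from the paper.

```python
import math

def attention_pool(h, w):
    """Soft attention over per-time-point hidden states h (a list of T
    equal-length feature vectors): score each state against a learned
    vector w, softmax the scores into weights alpha, and return the
    alpha-weighted sum together with the weights themselves."""
    scores = [sum(wj * hj for wj, hj in zip(w, ht)) for ht in h]
    m = max(scores)                       # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alpha = [e / z for e in exps]
    pooled = [sum(a * ht[j] for a, ht in zip(alpha, h))
              for j in range(len(h[0]))]
    return pooled, alpha
```

The pooled vector is what a softmax classifier would consume; the attention weights also indicate which time points the model found most informative.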
  • Deep Learning Based MPI System Matrix Recovery to Increase the Spatial Resolution of Reconstructed Images

    00:12:54
    Magnetic particle imaging (MPI) data is commonly reconstructed using a system matrix acquired in a time-consuming calibration measurement. The calibration approach has an important advantage over model-based reconstruction: it takes the complex particle physics as well as system imperfections into account. This benefit comes at the cost that the system matrix needs to be re-calibrated whenever the scan parameters, particle type, or even the particle environment (e.g., viscosity or temperature) changes. One route to reducing the calibration time is to sample the system matrix at only a subset of the spatial positions of the intended field-of-view and to employ system matrix recovery. Recent approaches used compressed sensing (CS) and achieved subsampling factors of up to 28 while still allowing MPI images of sufficient quality to be reconstructed. In this work, we propose a novel framework with a ComplexRGB-Loss and a 3d-System Matrix Recovery Network (3d-SMRnet). We demonstrate that the 3d-SMRnet can recover a 3d system matrix with a subsampling factor of 64 in less than one minute. Furthermore, 3d-SMRnet outperforms CS in terms of system matrix quality, reconstructed image quality, and processing time. The advantage of our method is demonstrated by reconstructing open-access MPI datasets. The model is further shown to be capable of inferring system matrices for different particle types.
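One way a subsampling factor of 64 arises on a 3d calibration grid is by keeping every 4th position along each axis (4³ = 64). The sketch below illustrates that arithmetic only; the paper's actual sampling pattern may differ.

```python
def subsample_indices(shape, stride=4):
    """Regular subsampling of a 3d system-matrix grid: keeping every
    `stride`-th position along each axis reduces the number of
    calibration measurements by stride**3 (4**3 = 64, matching the
    subsampling factor reported for 3d-SMRnet).  Recovering the full
    grid from these samples is the recovery network's task."""
    nx, ny, nz = shape
    return [(x, y, z)
            for x in range(0, nx, stride)
            for y in range(0, ny, stride)
            for z in range(0, nz, stride)]
```

For a 32×32×32 field-of-view this keeps 512 of 32,768 positions, i.e. 64-fold fewer calibration measurements.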
