IEEE ISBI 2020 Virtual Conference, April 2020

  • A Context Based Deep Learning Approach for Unbalanced Medical Image Segmentation

    00:13:40
    0 views
    Automated medical image segmentation is an important step in many medical procedures. Recently, deep learning networks have been widely used for various medical image segmentation tasks, with U-Net and generative adversarial networks (GANs) among the most commonly used. Foreground-background class imbalance is common in medical images, and U-Net has difficulty handling it because of its cross-entropy (CE) objective function. GANs also suffer from class imbalance because the discriminator looks at the entire image to classify it as real or fake; being essentially a deep learning classifier, it cannot correctly identify minor changes in small structures. To address these issues, we propose a novel context-based CE loss function for U-Net and a novel architecture, Seg-GLGAN. The context-based CE is a linear combination of the CE computed over the entire image and the CE computed over its region of interest (ROI). In Seg-GLGAN, we introduce a novel context discriminator to which both the entire image and its ROI are fed as input, thus enforcing local context. We conduct extensive experiments on two challenging unbalanced datasets, PROMISE12 and ACDC, and observe that our methods yield better segmentation metrics than various baseline methods.
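
    The context-based CE described above is a weighted combination of a cross-entropy term over the full image and one restricted to the ROI. A minimal PyTorch sketch, assuming a binary ROI mask and a user-chosen mixing weight lam (the paper's exact weighting and ROI definition may differ):

        import torch
        import torch.nn.functional as F

        def context_ce_loss(logits, target, roi_mask, lam=0.5):
            # logits: (N, C, H, W); target: (N, H, W) integer labels;
            # roi_mask: (N, H, W) binary mask marking the region of interest.
            ce_full = F.cross_entropy(logits, target)                      # CE over the entire image
            per_pixel = F.cross_entropy(logits, target, reduction='none')  # per-pixel CE map
            ce_roi = (per_pixel * roi_mask).sum() / roi_mask.sum().clamp(min=1)  # CE restricted to the ROI
            return (1 - lam) * ce_full + lam * ce_roi                      # linear combination of the two terms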
  • Learning with Less Data Via Weakly Labeled Patch Classification in Digital Pathology

    00:14:00
    0 views
    In Digital Pathology (DP), labeled data is generally very scarce due to the requirement that medical experts provide annotations. We address this issue by learning transferable features from weakly labeled data, which are collected from various parts of the body and are organized by non-medical experts. In this paper, we show that features learned from such weakly labeled datasets are indeed transferable and allow us to achieve highly competitive patch classification results on the colorectal cancer (CRC) dataset and the PatchCamelyon (PCam) dataset by using an order of magnitude less labeled data.
  • Open-Set OCT Image Recognition with Synthetic Learning

    00:11:38
    0 views
    Due to new eye diseases being discovered every year, doctors may encounter rare or unknown diseases. Similarly, in the medical image recognition field, many practical classification tasks may encounter testing samples that belong to rare or unknown classes never observed or included in the training set, which is termed an open-set problem. As rare disease samples are difficult to obtain and include in the training set, it is reasonable to design an algorithm that recognizes both known and unknown diseases. Towards this end, this paper leverages a novel generative adversarial network (GAN) based synthetic learning approach for open-set retinal optical coherence tomography (OCT) image recognition. Specifically, we first train an auto-encoder GAN and a classifier to reconstruct and classify the observed images, respectively. Then a subspace-constrained synthesis loss is introduced to generate images that lie near the boundaries of the subspace of images corresponding to each observed disease while remaining unclassifiable by the pre-trained classifier. In other words, these synthesized images are categorized into an unknown class. In this way, we can generate images belonging to the unknown class and add them to the original dataset to retrain the classifier for unknown disease discovery.
  • An Alternating Projection-Image Domains Algorithm for Spectral CT

    00:14:03
    0 views
    Spectral computerized tomography (Spectral CT) is a medical and biomedical imaging technique which uses the spectral information of the attenuated X-ray beam. Energy-resolved photon-counting detectors are a promising technology for improved spectral CT imaging and allow material-selective images to be obtained. Two different kinds of approaches address the spectral reconstruction problem, which consists of material decomposition and tomographic reconstruction: two-step methods, which are most often projection-based, and one-step methods. While projection-based methods are attractive for their fast computation time, unlike one-step methods they do not easily accommodate spatial priors in the image domain. We present a one-step method combining, in an alternating minimization scheme, a multi-material decomposition in the projection domain and a regularized tomographic reconstruction that introduces spatial priors in the image domain. We present and discuss promising results from experimental data.
  • Unsupervised Domain Adaptation for Cross-Device OCT Lesion Detection Via Learning Adaptive Features

    00:03:57
    0 views
    Optical coherence tomography (OCT) is widely used in computer-aided medical diagnosis of retinal pathologies. Deep convolutional networks have been successfully applied to detect lesions in OCT images. However, different OCT imaging devices inevitably cause a shift in distribution between the training and testing phases, which leads to a severe reduction in model performance. Most existing unsupervised domain adaptation methods focus on lesion segmentation; there are few studies on lesion detection, especially for OCT images. In this paper, we propose a novel unsupervised domain adaptation framework that adaptively learns feature representations to achieve cross-device lesion detection for OCT images. First, we design global and local adversarial discriminators to force the networks to learn device-independent features. Second, we incorporate a non-parametric adaptive feature norm into the global adversarial discriminator to stabilize discrimination in the target domain. Finally, we validate the framework on a lesion detection task across two OCT devices. The results show that the proposed framework achieves promising performance compared with other unsupervised domain adaptation approaches.
  • A Generalizable Framework for Domain-Specific Nonrigid Registration: Application to Cardiac Ultrasound

    00:10:27
    0 views
    Many applications of nonrigid point set registration could benefit from a domain-specific model of allowed deformations. We observe that registration methods using mixture models optimize a differentiable log-likelihood function and are thus amenable to gradient-based optimization. In theory, this allows optimization of any transformations that are expressed as arbitrarily nested differentiable functions. In practice such optimization problems are readily handled with modern machine learning tools. We demonstrate, in experiments on synthetic data generated from a model of the left cardiac ventricle, that complex nested transformations can be robustly optimized using this approach. As a realistic application, we also use the method to propagate the model through an entire cardiac ultrasound sequence. We conclude that this approach, which works with both points and oriented points, provides an easily generalizable framework in which complex, application-specific transformation models may be constructed and optimized.
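
    As a toy illustration of the gradient-based optimization described above, one can express a differentiable transform of a point set and maximize a Gaussian-mixture log-likelihood with automatic differentiation. A minimal PyTorch sketch under simplifying assumptions (an affine transform and an isotropic mixture centered on the transformed source points; the authors' transformation models are application-specific and can be arbitrarily nested):

        import torch

        def gmm_log_likelihood(transformed_src, target, sigma=0.1):
            # Mixture of isotropic Gaussians centered on the transformed source points.
            d2 = torch.cdist(target, transformed_src) ** 2     # (Nt, Ns) squared distances
            return torch.logsumexp(-d2 / (2 * sigma ** 2), dim=1).sum()

        src = torch.randn(200, 3)                              # synthetic source point set
        target = src @ torch.tensor([[0.9, 0.1, 0.0], [-0.1, 0.9, 0.0], [0.0, 0.0, 1.0]]) + 0.05

        A = torch.eye(3, requires_grad=True)                   # affine matrix to optimize
        t = torch.zeros(3, requires_grad=True)                 # translation to optimize
        opt = torch.optim.Adam([A, t], lr=0.01)
        for _ in range(300):
            opt.zero_grad()
            loss = -gmm_log_likelihood(src @ A + t, target)    # negative log-likelihood
            loss.backward()
            opt.step()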
  • Deep Learning and Unsupervised Fuzzy C-Means Based Level-Set Segmentation for Liver Tumor

    00:07:51
    1 view
    In this paper, we propose and validate a novel level-set method integrating an enhanced edge indicator and an automatically derived initial curve for CT-based liver tumor segmentation. First, a 2D U-Net is used to localize the liver, and a 3D fully convolutional network (FCN) is used to refine the liver segmentation as well as to localize the tumor. The refined liver segmentation is used to remove non-liver tissues for subsequent tumor segmentation. Given that the tumor segmentation obtained from the aforementioned 3D FCN is typically imperfect, we adopt a novel level-set method to further improve it. Specifically, the probabilistic distribution of the liver tumor is estimated using fuzzy c-means clustering and then utilized to enhance the object indication function used in the level set. The proposed segmentation pipeline was found to have outstanding performance for both liver and liver tumor segmentation.
  • Deconvolution for Improved Multifractal Characterization of Tissues in Ultrasound Imaging

    00:15:12
    0 views
    Several existing studies have shown the value of estimating the multifractal properties of tissues in ultrasound (US) imaging. However, US images carry information not only about the tissue but also about the US scanner. Deconvolution methods are a common way to restore the tissue reflectivity function, but, to our knowledge, their impact on the estimated fractal or multifractal behavior has not yet been studied. The objective of this paper is to investigate this influence through a dedicated simulation pipeline and an in vivo experiment.
  • Automatic Determination of the Fetal Cardiac Cycle in Ultrasound Using Spatio-Temporal Neural Networks

    00:10:26
    0 views
    The characterization of the fetal cardiac cycle is an important determinant of fetal health and stress. The anomalous appearance of different anatomical structures during different phases of the heart cycle is a key indicator of fetal congenital heart disease. However, locating the fetal heart using ultrasound is challenging, as the heart is small and indistinct. In this paper, we present a viewpoint-agnostic solution that automatically characterizes the cardiac cycle in clinical ultrasound scans of the fetal heart. When estimating the state of the cardiac cycle, our model achieves a mean-squared error of 0.177 between the ground-truth cardiac cycle and our prediction. We also show that our network is able to localize the heart, despite the lack of labels indicating the location of the heart during training.
  • Enriching Statistical Inferences on Brain Connectivity for Alzheimer's Disease Analysis Via Latent Space Graph Embedding

    00:11:26
    0 views
    We develop a graph node embedding deep neural network that leverages a statistical outcome measure and the graph structure given in the data. The objective is to identify regions of interest (ROIs) in the brain that are affected by topological changes of brain connectivity due to specific neurodegenerative diseases by enriching statistical group analysis. We tackle this problem by learning a latent space where statistical inference can be made more effectively. Our experiments on a large-scale Alzheimer's Disease dataset show promising results, identifying ROIs with statistically significant group differences that separate even early and late Mild Cognitive Impairment (MCI) groups, whose effect sizes are very subtle.
  • Progressive Abdominal Segmentation with Adaptively Hard Region Prediction and Feature Enhancement

    00:07:02
    0 views
    Abdominal multi-organ segmentation has attracted much attention in recent medical image analysis. In this paper, we propose a novel progressive framework to improve the segmentation accuracy of abdominal organs with various shapes and small sizes. The framework consists of three parts: 1) a Global Segmentation Module extracting a pixel-wise global feature representation; 2) a Localization Module adaptively discovering the top-n hard local regions, effective in both the training and testing phases; and 3) an Enhancement Module enhancing the features of the hard local regions and aggregating them with the global features to refine the final representation. Specifically, we predefine 512 region proposals on the cross-sectional view of the CT image to generate coordinate pseudo-labels that supervise the Localization Module. In the training phase, we compute the segmentation error of each region proposal and select the eight with the lowest Dice scores as the hard regions. Once these hard regions are determined, their center coordinates are adopted as pseudo-labels to train the Localization Module using a Manhattan distance loss. At inference, the entire model directly performs hard-region localization and feature enhancement to improve pixel-wise accuracy. Without bells and whistles, extensive experimental results demonstrate that the proposed method outperforms its counterparts.
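
    The hard-region selection and the Manhattan distance loss described above might be sketched as follows, assuming per-region Dice scores and proposal centers are already available (the proposal generation and network details are omitted):

        import torch

        def select_hard_regions(dice_per_region, centers, k=8):
            # dice_per_region: (R,) Dice score of each region proposal; centers: (R, 2).
            _, idx = torch.topk(dice_per_region, k, largest=False)  # k lowest-Dice (hardest) regions
            return centers[idx]                                     # pseudo-labels for the Localization Module

        def manhattan_loss(pred_centers, target_centers):
            # L1 (Manhattan) distance between predicted and pseudo-label centers.
            return (pred_centers - target_centers).abs().sum(dim=1).mean()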
  • Lung Nodule Malignancy Classification Based on NLSTx Data

    00:14:48
    0 views
    While several datasets containing CT images of lung nodules exist, they do not contain definitive diagnoses and often rely on radiologists' visual assessment for malignancy rating. This is in spite of the fact that lung cancer is one of the top three most frequently misdiagnosed diseases based on visual assessment. In this paper, we propose a dataset of difficult-to-diagnose lung nodules based on data from the National Lung Screening Trial (NLST), which we refer to as NLSTx. In NLSTx, each malignant nodule has a definitive ground truth label from biopsy. Herein, we also propose a novel deep convolutional neural network (CNN) / recurrent neural network framework that allows for use of pre-trained 2-D convolutional feature extractors, similar to those developed in the ImageNet challenge. Our results show that the proposed framework achieves comparable performance to an equivalent 3-D CNN while requiring half the number of parameters.
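
    The CNN/RNN framework described above, pairing a pre-trained 2D feature extractor with a recurrent aggregator over CT slices, might be sketched as follows (the ResNet-18 backbone, hidden size, and pooling are illustrative assumptions, not the authors' exact configuration):

        import torch.nn as nn
        from torchvision import models

        class SliceRNNClassifier(nn.Module):
            def __init__(self, hidden=256, num_classes=2):
                super().__init__()
                backbone = models.resnet18(pretrained=True)                       # ImageNet-pretrained 2D extractor
                self.features = nn.Sequential(*list(backbone.children())[:-1])    # outputs global-pooled features
                self.rnn = nn.GRU(512, hidden, batch_first=True)                  # aggregates features across slices
                self.head = nn.Linear(hidden, num_classes)

            def forward(self, volume):                                            # volume: (N, S, 3, H, W)
                n, s = volume.shape[:2]
                feats = self.features(volume.flatten(0, 1)).flatten(1)            # (N*S, 512) slice features
                _, h = self.rnn(feats.view(n, s, -1))                             # final hidden state summarizes the stack
                return self.head(h[-1])                                           # malignancy logits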
  • 3D Ultrasound Generation from Partial 2D Observations Using Fully Convolutional and Spatial Transformation Networks

    00:12:16
    0 views
    External beam radiation therapy (EBRT) is a therapeutic modality often used for the treatment of various types of cancer. EBRT's efficiency depends highly on accurate tracking of the target to be treated and therefore requires the use of real-time imaging modalities such as ultrasound (US) during treatment. While US is cost-effective and non-ionizing, 2D US is not well suited to track targets that move in 3D, while 3D US is challenging to integrate in real time due to insufficient temporal frequency. In this work, we present a 3D inference model based on fully convolutional networks combined with a spatial transformation network (STN) layer which, given a 2D US image and a baseline 3D US volume as inputs, predicts the deformation of the baseline volume to generate an up-to-date 3D US volume in real time. We train our model using 20 4D liver US sequences from the CLUST15 3D tracking challenge, testing the model on image tracking sequences. The proposed model achieves a normalized cross-correlation of 0.56 in an ablation study and a mean landmark location error of 2.92 ± 1.67 mm for target anatomy tracking. These promising results demonstrate the potential of generative STN models for predicting 3D motion fields during EBRT.
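
    The normalized cross-correlation reported above can be computed as follows; a minimal NumPy sketch (any masking or windowing used in the paper may differ):

        import numpy as np

        def normalized_cross_correlation(a, b):
            # Zero-mean, unit-variance correlation between two volumes of the same shape.
            a = (a - a.mean()) / (a.std() + 1e-8)
            b = (b - b.mean()) / (b.std() + 1e-8)
            return float((a * b).mean())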
  • A New Spatially Adaptive TV Regularization for Digital Breast Tomosynthesis

    00:14:21
    0 views
    Digital breast tomosynthesis (DBT) images provide volumetric morphological information of the breast, helping physicians detect malignant lesions. In this work, we propose a new spatially adaptive total variation (SATV) regularization function that adequately preserves the shape of small objects such as microcalcifications while ensuring a high-quality restoration of the background tissues. First, an original formulation of the weighted gradient field is introduced that efficiently incorporates prior knowledge on the location of small objects. Then, we derive our SATV regularization and integrate it in a novel 3D reconstruction approach for DBT. Experimental results on both phantom and clinical data show the benefit of our method for recovering DBT volumes containing small lesions.
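
    A generic spatially weighted TV penalty of the kind described above uses a per-voxel weight map w that is small near presumed small objects (preserving their edges) and close to one elsewhere. A NumPy sketch, with the construction of the weight map from the weighted gradient field left as an assumption:

        import numpy as np

        def weighted_tv(volume, weights):
            # Anisotropic, spatially weighted total variation: sum over voxels of w * |finite differences|.
            tv = 0.0
            for axis in range(volume.ndim):
                diff = np.abs(np.diff(volume, axis=axis))                     # finite differences along one axis
                w = np.take(weights, np.arange(diff.shape[axis]), axis=axis)  # crop weights to match
                tv += float((w * diff).sum())
            return tv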
  • Segmentation and Classification of Melanoma and Nevus in Whole Slide Images

    00:16:08
    0 views
    The incidence of skin cancer, and specifically melanoma, has tripled since the 1990s in the Netherlands. Early detection of melanoma can lead to an almost 100% 5-year survival prognosis, which drops drastically when it is detected later. Studies show that pathologists can have a discordance of up to 14.3% when reporting melanoma versus nevi. An automated method could help support pathologists in diagnosing melanoma and prioritize cases based on a risk assessment. Our method used 563 whole slide images (WSIs) to train and test a system comprising two models that segment and classify skin sections as melanoma, nevus, or negative for both. We used 232 slides for training and validation and the remaining 331 for testing. The first model uses a U-Net architecture to perform semantic segmentation, and its output feeds a convolutional neural network that classifies the WSI with a global label. Our method achieved a Dice score of 0.835 ± 0.08 on segmentation of the validation set and a weighted F1-score of 0.954 on the independent test dataset. Out of the 176 melanoma slides, the algorithm classified 173 correctly; out of the 62 nevus slides, it correctly classified 57.
  • Two-Layer Residual Sparsifying Transform Learning for Image Reconstruction

    00:12:50
    0 views
    Signal models based on sparsity, low-rank and other properties have been exploited for image reconstruction from limited and corrupted data in medical imaging and other computational imaging applications. In particular, sparsifying transform models have shown promise in various applications, and offer numerous advantages such as efficiencies in sparse coding and learning. This work investigates pre-learning a two-layer extension of the transform model for image reconstruction, wherein the transform domain or filtering residuals of the image are further sparsified in the second layer. The proposed block coordinate descent optimization algorithms involve highly efficient updates. Preliminary numerical experiments demonstrate the usefulness of a two-layer model over the previous related schemes for CT image reconstruction from low-dose measurements.
  • Prior-aware CNN with Multi-Task Learning for Colon Images Analysis

    00:13:43
    0 views
    Adenocarcinoma is the most common cancer, and its pathological diagnosis is of great significance. Specifically, the degree of gland differentiation is vital for defining the grade of adenocarcinoma. Following this domain knowledge, we encode glandular regions as prior information in a convolutional neural network (CNN), guiding the network's preference for glands during inference. In this work, we propose a prior-aware CNN framework with multi-task learning for pathological colon image analysis, which contains gland segmentation and grading classification branches. The segmentation probability map also acts as spatial attention for grading, emphasizing the glandular tissue and suppressing noise from irrelevant regions. Experiments reveal that the proposed framework achieves an accuracy of 97.04% and an AUC of 0.9971 on grading. Meanwhile, our model can predict gland regions with an mIoU of 0.8134. Importantly, it is based on the clinical-pathological diagnostic criteria of adenocarcinoma, which makes our model more interpretable.
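
    The spatial attention described above, in which the gland probability map from the segmentation branch gates the features used for grading, reduces to an element-wise weighting; a minimal sketch (the actual fusion in the paper may be richer):

        def gland_attention(features, seg_prob):
            # features: (N, C, H, W) grading-branch feature tensor;
            # seg_prob: (N, 1, H, W) gland probability map from the segmentation branch.
            return features * seg_prob    # emphasize glandular regions, suppress irrelevant tissue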
  • A Myocardial T1-Mapping Framework with Recurrent and U-Net Convolutional Neural Networks

    00:15:39
    0 views
    Noise and aliasing artifacts arise in various accelerated cardiac magnetic resonance (CMR) imaging applications. In accelerated myocardial T1-mapping, the traditional three-parameter nonlinear regression may not provide accurate estimates due to sensitivity to noise. A deep neural network-based framework, DeepT1, is proposed to address this issue. DeepT1 consists of recurrent and U-Net convolutional networks that produce a single output map from the noisy and incomplete measurements. The results show that DeepT1 provides noise-robust estimates compared to traditional pixel-wise three-parameter fitting.
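
    For reference, the traditional pixel-wise three-parameter fit that DeepT1 is compared against models the inversion-recovery signal as S(TI) = A - B * exp(-TI / T1*). A SciPy sketch of the per-pixel fit (magnitude-sign restoration and other practical corrections are omitted):

        import numpy as np
        from scipy.optimize import curve_fit

        def ir_model(ti, a, b, t1_star):
            # Three-parameter inversion-recovery model: S(TI) = A - B * exp(-TI / T1*).
            return a - b * np.exp(-ti / t1_star)

        def fit_t1(ti, signal):
            # Nonlinear least-squares fit for one pixel; returns the apparent T1* and the
            # commonly used Look-Locker-corrected estimate T1 = T1* * (B / A - 1).
            (a, b, t1_star), _ = curve_fit(ir_model, ti, signal,
                                           p0=(signal.max(), 2 * signal.max(), 1000.0))
            return t1_star, t1_star * (b / a - 1)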
  • WNet: An End-to-End Atlas-Guided and Boundary-Enhanced Network for Medical Image Segmentation

    00:11:09
    0 views
    Medical image segmentation is one of the most important pre-processing steps in computer-aided diagnosis, but it is a challenging task because of complex backgrounds and fuzzy boundaries. To tackle these issues, we propose a double U-shape-based architecture named WNet, which is capable of capturing exact positions as well as sharpening boundaries. We first build an atlas-guided segmentation network (AGSN) to obtain a position-aware segmentation map by incorporating prior knowledge of human anatomy. We further devise a boundary-enhanced refinement network (BERN) to yield a clear boundary by hybridizing a multi-scale structural similarity (MS-SSIM) loss function and making full use of refinement at training and inference in an end-to-end way. Experimental results show that the proposed WNet accurately captures organs with sharpened details and hence improves performance on two datasets compared to previous state-of-the-art methods. Index terms: probabilistic atlas, atlas-guided segmentation network, boundary-enhanced refinement network.
  • An Evaluation of Regularization Strategies for Subsampled Single-Shell Diffusion MRI

    00:10:34
    0 views
    Conventional single-shell diffusion MRI experiments acquire sampled values of the diffusion signal from the surface of a sphere in q-space. However, to reduce data acquisition time, there has been recent interest in using regularization to enable q-space undersampling. Although different regularization strategies have been proposed for this purpose (i.e., sparsity-promoting regularization of the spherical ridgelet representation and Laplace-Beltrami Tikhonov regularization), there has not been a systematic evaluation of the strengths, weaknesses, and potential synergies of the different regularizers. In this work, we use real diffusion MRI data to systematically evaluate the performance characteristics of these approaches and determine whether one is fundamentally more powerful than the other. Results from retrospective subsampling experiments suggest that both regularization strategies offer largely similar reconstruction performance (though with different levels of computational complexity) with some degree of synergy (albeit relatively minor).
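
    One of the regularizers evaluated above, Laplace-Beltrami Tikhonov regularization of a spherical-harmonic (SH) representation, has the familiar closed-form ridge solution c = (B^T B + lambda * L)^(-1) B^T s, where L penalizes each SH coefficient of order l by l^2 (l + 1)^2. A NumPy sketch, assuming the SH design matrix for the sampled gradient directions is given:

        import numpy as np

        def lb_regularized_sh_fit(B, s, orders, lam=1e-3):
            # B: (M, K) SH design matrix for the M sampled q-space directions;
            # s: (M,) measured diffusion signal; orders: (K,) SH order l of each basis function.
            L = np.diag((orders * (orders + 1)) ** 2).astype(float)   # Laplace-Beltrami penalty
            return np.linalg.solve(B.T @ B + lam * L, B.T @ s)        # regularized least-squares coefficients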
  • Mapping Cerebral Connectivity Changes after Mild Traumatic Brain Injury in Older Adults Using Diffusion Tensor Imaging and Riemannian Matching of Elastic Curves

    00:12:52
    0 views
    Although diffusion tensor imaging (DTI) can identify white matter (WM) changes due to mild traumatic brain injury (mTBI), the task of within-subject longitudinal matching of DTI streamlines remains challenging in this condition. Here we combine (A) automatic, atlas-informed labeling of WM streamline clusters with (B) streamline prototyping and (C) Riemannian matching of elastic curves to quantify within-subject changes in WM structure properties, focusing on the arcuate fasciculus. The approach is demonstrated in a group of geriatric mTBI patients imaged acutely and ~6 months post-injury. Results highlight the utility of differential geometry approaches when quantifying brain connectivity alterations due to mTBI.
  • Transfer-GAN: Multimodal CT Image Super-Resolution Via Transfer Generative Adversarial Networks

    00:14:18
    0 views
    Multimodal CT scans, including non-contrast CT, CT perfusion, and CT angiography, are widely used in acute stroke diagnosis and therapeutic planning. While each imaging modality has its advantages for visualizing brain cross-sectional features, the varying image resolution of different modalities hinders the radiologist's ability to discern consistent but subtle suspicious findings. Moreover, higher image quality requires a higher radiation dose, increasing health risks such as cataract formation and cancer induction. In this work, we propose a deep learning-based method, Transfer-GAN, that utilizes generative adversarial networks and transfer learning to improve multimodal CT image resolution and to lower the necessary radiation exposure. Through extensive experiments, we demonstrate that transfer learning from multimodal CT provides substantial visual and quantitative enhancement compared to training without this prior knowledge.
  • Convolutional Framework for Accelerated Magnetic Resonance Imaging

    00:12:43
    0 views
    Magnetic Resonance Imaging (MRI) is a noninvasive imaging technique that provides exquisite soft-tissue contrast without using ionizing radiation. The clinical application of MRI may be limited by long data acquisition times; therefore, MR image reconstruction from highly undersampled k-space data has been an active area of research. Many works exploit rank deficiency in a Hankel data matrix to recover unobserved k-space samples; the resulting problem is non-convex, so the choice of numerical algorithm can significantly affect performance, computation, and memory. We present a simple, scalable approach called Convolutional Framework (CF). We demonstrate the feasibility and versatility of CF using measured data from 2D, 3D, and dynamic applications.
  • Image Segmentation Using Hybrid Representations

    00:11:42
    0 views
    This work explores a hybrid approach to segmentation as an alternative to a purely data-driven approach. We introduce an end-to-end U-Net based network called DU-Net, which uses additional frequency-preserving features, namely the scattering coefficients (SC), for medical image segmentation. SC are translation invariant and Lipschitz continuous to deformations, which helps DU-Net outperform conventional CNN counterparts on four datasets and two segmentation tasks: optic disc and optic cup in color fundus images, and fetal head in ultrasound images. The proposed method shows remarkable improvement over the basic U-Net, with performance competitive with state-of-the-art methods. The results indicate that it is possible to use a lighter network trained with fewer images (without any augmentation) to attain good segmentation results.
  • Gradient Artifact Correction for Simultaneous EEG-fMRI Using Denoising Autoencoders

    00:14:24
    0 views
    EEG recorded during MRI acquisition suffers from severe artifacts due to the imaging gradients. Here, we explore the possibility of using denoising autoencoders to correct these artifacts. After hyperparameter optimization, the performance of the algorithm was compared against PCA on two different synthesized datasets. The first dataset was created by adding a template artifact to clean EEG data and randomly shifting it in time to simulate aliasing, while the second dataset was formed by filtering out the EEG frequencies and adding a known ground-truth clean EEG signal. The performance of each method was assessed by the RMSE relative to the clean EEG signal. In addition, the correlation coefficient with respect to the artifact signal was used to measure the residual artifact level. On the second synthesized dataset, denoising autoencoders outperformed PCA by 4.3% in terms of RMSE, meaning they were better able to preserve the original signal, while the correlation with the underlying artifact was reduced by 40%. These preliminary results merit further investigation on a larger dataset.
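
    A minimal denoising autoencoder of the kind explored above, trained to map artifact-corrupted EEG segments back to clean EEG, might look like this in PyTorch (the layer sizes are placeholders, not the tuned hyperparameters from the study):

        import torch.nn as nn

        class DenoisingAutoencoder(nn.Module):
            def __init__(self, n_samples=512):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_samples, 128), nn.ReLU(),
                                             nn.Linear(128, 32), nn.ReLU())
                self.decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                                             nn.Linear(128, n_samples))

            def forward(self, corrupted):
                # Input: an EEG segment with gradient artifact; output: estimate of the clean segment.
                return self.decoder(self.encoder(corrupted))

        # Training would minimize nn.MSELoss() between the output and the clean ground-truth segment.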
  • Preliminary Studies on Training and Fine-Tuning Deep Denoiser Neural Networks in Learned D-AMP for Undersampled Real MR Measurements

    00:13:13
    0 views
    We investigated learned denoiser-based approximate message passing (LDAMP) with undersampled real MR measurements. In our preliminary results, LDAMP yielded favorable performance over BM3D-based AMP even though the ground-truth images are noisy and the deep denoisers were trained only for Gaussian noise, not for undersampling artifacts. We further investigated the feasibility of using Stein's unbiased risk estimator (SURE) to fine-tune deep denoisers with the given undersampled MR measurements to reconstruct. Even though a slight performance improvement (0.04 dB) was observed for an example case, no visual improvement was observed.
  • Lightweight U-Net for High-Resolution Breast Imaging

    00:06:52
    0 views
    We study fully convolutional neural networks in the context of malignancy detection for breast cancer screening. We solve a supervised segmentation task, looking for an acceptable compromise between the precision of the network and its computational complexity.
  • Image-based Simulations of Tubular Network Formation

    00:20:58
    0 views
    Image-based simulations play an important role in biomedicine, as real image data are difficult to annotate fully and precisely. The increasing capability of contemporary computers allows reasonably complicated structures, and in recent years also dynamic processes, to be modeled and simulated. In this paper, we introduce a complex 3D model that describes the structure and dynamics of a population of endothelial cells. The model is based on the standard cellular Potts model. It describes the formation process of a complex tubular network of endothelial cells fully in 3D, together with simulation of the cell death process called apoptosis. The generated network imitates the structure and behavior that can be observed in real phase-contrast microscopy. The generated image data may serve as a benchmark dataset for newly designed detection or tracking algorithms.
  • BAENET: A Brain Age Estimation Network with 3D Skipping and Outlier Constraint Loss

    00:09:02
    0 views
    Potential pattern changes in brain micro-structure can be used for brain development assessment in children and adolescents from MRI scans. In this paper, we propose a highly accurate and efficient end-to-end brain age estimation network (BAENET) for T1-weighted MRI images. In the network, 3D skipping and an outlier constraint loss are designed to accommodate a deeper network and increase robustness. In addition, we incorporate neuroimaging domain knowledge into stratified sampling, for better generalization to datasets with different age distributions, and into gender learning, for more gender-specific features during modeling. We verify the effectiveness of the proposed method on the public ABIDE2 and ADHD200 benchmarks, consisting of 382 and 378 normal children scans, respectively. Our BAENET achieves MAEs of 1.11 and 1.16, significantly outperforming the best reported methods by 5.1% and 9.4%.
  • Polyp Detection in Colonoscopy Videos by Bootstrapping Via Temporal Consistency

    00:13:55
    1 view
    Computer-aided polyp detection during colonoscopy is beneficial for reducing the risk of colorectal cancers. Deep learning techniques have made significant progress in natural object detection. However, when applying these fully supervised methods to polyp detection, performance is greatly hampered by the scarcity of labeled data. In this paper, we propose a novel bootstrapping method for polyp detection in colonoscopy videos that augments the training data using temporal consistency. For a detection network trained on a small set of annotated polyp images, we fine-tune it with new samples selected from the test video itself, in order to more effectively represent the polyp morphology of the current video. A strategy for selecting new samples is proposed by considering temporal consistency in the test video. Evaluated on 11954 endoscopic frames of the CVC-ClinicVideoDB dataset, our method yields great improvement in polyp detection for several detection networks and achieves state-of-the-art performance on the benchmark dataset.
  • Mitigating Adversarial Attacks on Medical Image Understanding Systems

    00:14:20
    0 views
    Deep learning systems are now being widely used to analyze lung cancer. However, recent work has shown that a deep learning system can be easily fooled by intentionally adding some noise to the image; this is called an adversarial attack. This paper presents an adversarial attack on malignancy prediction of lung nodules. We found that adversarial attacks can cause significant changes in lung nodule malignancy prediction accuracy. An ensemble-based defense strategy was developed to reduce the effect of an adversarial attack, using a multi-initialization based CNN ensemble. We also explored adding adversarial images to the training set, which reduced the rate of misclassification and made the CNN models more robust to adversarial attacks. A subset of cases from the National Lung Screening Trial (NLST) dataset was used in our study. Initially, 75.1%, 75.5% and 76% classification accuracy were obtained from the three CNNs on original images (without an adversarial attack). Fast Gradient Sign Method (FGSM) and one-pixel attacks were analyzed. After the FGSM attack, 46.4%, 39.24%, and 39.71% accuracy was obtained from the three CNNs, whereas after a one-pixel attack 72.15%, 73%, and 73% classification accuracy was achieved; FGSM caused much more damage to CNN predictions. With a multi-initialization based ensemble and adversarial images included in the training set, 82.27% and 81.43% classification accuracy were attained after FGSM and one-pixel attacks, respectively.
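
    The Fast Gradient Sign Method used above perturbs each input in the direction of the sign of the loss gradient. A standard PyTorch sketch (the model, epsilon, and the [0, 1] intensity range are placeholders):

        import torch
        import torch.nn.functional as F

        def fgsm_attack(model, images, labels, eps=0.01):
            # Perturb images by eps in the direction that increases the classification loss.
            images = images.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(images), labels)
            loss.backward()
            return (images + eps * images.grad.sign()).detach().clamp(0, 1)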
  • Knowledge Transfer between Datasets for Learning-Based Tissue Microstructure Estimation

    00:11:15
    0 views
    Learning-based approaches, especially those based on deep networks, have enabled high-quality estimation of tissue microstructure from low-quality diffusion magnetic resonance imaging (dMRI) scans, which are acquired with a limited number of diffusion gradients and a relatively poor spatial resolution. These learning-based approaches to tissue microstructure estimation require acquisitions of training dMRI scans with high-quality diffusion signals, which are densely sampled in the q-space and have a high spatial resolution. However, the acquisition of training scans may not be available for all datasets. Therefore, we explore knowledge transfer between different dMRI datasets so that learning-based tissue microstructure estimation can be applied for datasets where training scans are not acquired. Specifically, for a target dataset of interest, where only low-quality diffusion signals are acquired without training scans, we exploit the information in a source dMRI dataset acquired with high-quality diffusion signals. We interpolate the diffusion signals in the source dataset in the q-space using a dictionary-based signal representation, so that the interpolated signals match the acquisition scheme of the target dataset. Then, the interpolated signals are used together with the high-quality tissue microstructure computed from the source dataset to train deep networks that perform tissue microstructure estimation for the target dataset. Experiments were performed on brain dMRI scans with low-quality diffusion signals, where the benefit of the proposed strategy is demonstrated.
  • Joint Optimization of Sampling Pattern and Priors in Model-Based Deep Learning

    00:15:08
    0 views
    Deep learning methods are emerging as powerful alternatives to compressed sensing MRI for recovering images from highly undersampled data. Unlike compressed sensing, the image redundancies captured by these models are not well understood. The lack of theoretical understanding also makes it challenging to choose the sampling pattern that would yield the best possible recovery. To overcome these challenges, we propose to jointly optimize the sampling pattern and the parameters of the reconstruction block in a model-based deep learning framework. We show that joint optimization with the model-based strategy results in improved performance over direct-inversion CNN schemes due to better decoupling of the effects of sampling and image properties. The quantitative and qualitative results confirm the benefits of joint optimization by the model-based scheme over the direct inversion strategy.
  • Deep Learning Based Detection of Acute Aortic Syndrome in Contrast CT Images

    00:12:05
    0 views
    Acute aortic syndrome (AAS) is a group of life-threatening conditions of the aorta. We have developed an end-to-end automatic approach to detect AAS in computed tomography (CT) images. Our approach consists of two steps. First, we extract N cross sections along the segmented aorta centerline for each CT scan. These cross sections are stacked together to form a new volume, which is then classified using two different classifiers: a 3D convolutional neural network (3D CNN) and a multiple instance learning (MIL) model. We trained, validated, and compared the two models on 2291 contrast CT volumes. We tested on a held-out cohort of 230 normal and 50 positive CT volumes. Our models detected AAS with an area under the receiver operating characteristic curve (AUC) of 0.965 and 0.985 using the 3D CNN and MIL, respectively.
  • An Improved Deep Learning Approach for Thyroid Nodule Diagnosis

    00:17:09
    0 views
    Although thyroid ultrasonography (US) has been widely applied, it is still difficult to distinguish benign from malignant nodules. Recently, convolutional neural network (CNN) based methods have been proposed and have shown promising performance for benign and malignant nodule classification. US images are usually captured from multiple angles, so the same thyroid can have inconsistent appearance across different US images. However, most existing CNN-based methods extract features using fixed convolution kernels, which can be a significant issue for processing US images. Moreover, fully-connected (FC) layers are usually adopted in CNNs, which can cause the loss of inter-pixel relations. In this paper, we propose a new CNN integrated with a squeeze-and-excitation (SE) module and a maximum retention of inter-pixel relations module (CNN-SE-MPR). It can adaptively select features from different US images and preserve inter-pixel relations. Moreover, we introduce transfer learning to avoid problems such as local optima and data insufficiency. The proposed network is tested on 407 thyroid US images collected from cooperating hospitals. Ablation experiments and comparisons with state-of-the-art methods show that our method improves the accuracy of the diagnosis results.
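
    For reference, a standard squeeze-and-excitation (SE) block of the kind integrated above re-weights feature channels using globally pooled statistics; a minimal PyTorch sketch (the reduction ratio is an assumption):

        import torch.nn as nn

        class SEBlock(nn.Module):
            def __init__(self, channels, reduction=16):
                super().__init__()
                self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                        nn.Linear(channels // reduction, channels), nn.Sigmoid())

            def forward(self, x):                               # x: (N, C, H, W)
                w = self.fc(x.mean(dim=(2, 3)))                 # squeeze: global average pooling per channel
                return x * w.unsqueeze(-1).unsqueeze(-1)        # excite: channel-wise re-weighting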
  • Towards Fully Automatic 2D US to 3D CT/MR Registration: A Novel Segmentation-Based Strategy

    00:13:53
    0 views
    2D-US to 3D-CT/MR registration is a crucial module in minimally invasive ultrasound-guided liver tumor ablations. Many modern registration methods still require manual or semi-automatic slice pose initialization due to insufficient robustness of automatic methods. State-of-the-art regression networks do not work well for liver 2D US to 3D CT/MR registration because of the tremendous inter-patient variability of the liver anatomy. To address this unsolved problem, we propose a deep learning network pipeline which, instead of a regression, starts with a classification network to recognize the coarse ultrasound transducer pose, followed by a segmentation network to detect the target plane of the US image in the CT/MR volume. The rigid registration result is derived using plane regression. In contrast to state-of-the-art regression networks, we do not estimate registration parameters from multi-modal images directly, but rather focus on segmenting the target slice plane in the volume. The experiments reveal that this novel registration strategy can identify the initial slice pose in a 3D volume more reliably than standard regression-based techniques. The proposed method was evaluated with 1035 US images from 52 patients. We achieved angle and distance errors of 12.7 ± 6.2° and 4.9 ± 3.1 mm, clearly outperforming the state-of-the-art regression strategy, which results in an angle error of 37.0 ± 15.6° and a distance error of 19.0 ± 11.6 mm.
  • A 3D CNN with a Learnable Adaptive Shape Prior for Accurate Segmentation of Bladder Wall Using MR Images

    00:15:37
    0 views
    A 3D deep learning-based convolutional neural network (CNN) is developed for accurate segmentation of the pathological bladder (both wall border and pathology) using T2-weighted magnetic resonance imaging (T2W-MRI). Our system starts with a preprocessing step for data normalization to a unique space and extraction of a region of interest (ROI). The main stage utilizes a 3D CNN for pathological bladder segmentation, which contains a network, called CNN1, that aims to segment the bladder wall (BW) with pathology. However, due to the similar visual appearance of BW and pathology, CNN1 cannot separate them. Thus, we developed another network (CNN2) with an additional pathway to extract the BW only. The second pathway in CNN2 is fed with a 3D learnable adaptive shape prior model. To remove noisy and scattered predictions, the networks' soft outputs are refined using a fully connected conditional random field. Our framework achieved accurate segmentation results for the BW and tumor, as documented by the Dice similarity coefficient and Hausdorff distance. Moreover, comparative results against another segmentation approach documented the superiority of our framework in providing accurate results for pathological BW segmentation.
  • Reflection Ultrasound Tomography Using Localized Freehand Scans

    00:11:50
    0 views
    Speed of sound (SOS) is a biomarker that aids clinicians in tracking the onset and progression of diseases such as breast cancer and fatty liver disease. In this paper, we propose a framework to generate accurate 2D SOS maps with a commercial ultrasound probe. We simulate freehand ultrasound probe motion and use a multi-look framework for reflection travel-time tomography. In these simulations, the "measured" travel times are computed using a bent-ray Eikonal solver, and direct inversion for compressional speed of sound is performed. We show that the assumption of straight rays breaks down for large velocity perturbations (greater than 1 percent); the error increases 70-fold for a velocity perturbation increase of 1.5 percent. Moreover, the use of multiple looks greatly aids the inversion process. Simulated RMSE drops by roughly 15 dB when the maximum scanning angle is increased from 0 to 45 degrees.
  • Association between Dynamic Functional Connectivity and Intelligence

    00:14:51
    0 views
    Several studies have explored the relationship between intelligence and neuroimaging features. However, little is known about whether the temporal variations of functional connectivity of brain regions at rest are relevant to differences in intelligence. In this study, we used the fMRI data and intelligence scores of 50 healthy adult subjects from the Human Connectome Project (HCP) database. We investigated the correlation between individual intelligence scores and the total power of the high-frequency components of the fast Fourier transform (FFT) of the dynamic functional connectivity time series of the brain regions. We found temporal variations of specific functional connections to be highly correlated with individual intelligence scores. In other words, the functional connections of individuals with high levels of intelligence have smoother temporal variation, or higher temporal stability, than those of individuals with low intelligence levels.
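
    The dynamic-connectivity feature described above (the total power of the high-frequency FFT components of a sliding-window connectivity time series) might be computed as follows; the window length, stride, and frequency cutoff are illustrative assumptions:

        import numpy as np

        def high_freq_dfc_power(ts_a, ts_b, win=30, step=1, cutoff=10):
            # Sliding-window correlation between two regional time series, followed by the
            # total power of the FFT components above frequency bin `cutoff`.
            dfc = [np.corrcoef(ts_a[i:i + win], ts_b[i:i + win])[0, 1]
                   for i in range(0, len(ts_a) - win, step)]
            spectrum = np.abs(np.fft.rfft(np.asarray(dfc) - np.mean(dfc)))
            return float((spectrum[cutoff:] ** 2).sum())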
  • Robust Detection of Adversarial Attacks on Medical Images

    00:14:30
    0 views
    Although deep learning systems trained on medical images have shown state-of-the-art performance in many clinical prediction tasks, recent studies demonstrate that these systems can be fooled by carefully crafted adversarial images. This has raised concerns about the practical deployment of deep learning based medical image classification systems. To tackle this problem, we propose an unsupervised learning approach to detect adversarial attacks on medical images. Our approach is capable of detecting a wide range of adversarial attacks without knowledge of the attackers and without sacrificing classification performance. More importantly, our approach can be easily embedded into any deep learning-based medical imaging system as a module to improve the system's robustness. Experiments on a public chest X-ray dataset demonstrate the strong performance of our approach in defending against adversarial attacks under both white-box and black-box settings.
  • Deep Variational Autoencoder for Modeling Functional Brain Networks and ADHD Identification

    00:13:16
    0 views
    In the neuroimaging and brain mapping communities, researchers have proposed a variety of computational methods and tools to learn functional brain networks (FBNs). It has been shown that deep learning can be applied to fMRI data with superb representation power over traditional machine learning methods. However, limited by the high dimensionality of fMRI volumes, deep learning suffers from a lack of data and from overfitting. Generative models are known to have an intrinsic ability to model small datasets, and a deep variational autoencoder (DVAE) is proposed in this work to tackle the challenge of insufficient data and incomplete supervision. The FBNs learned from fMRI were found to be interpretable and meaningful, and DVAE showed better performance on neuroimaging datasets than traditional models. In an evaluation on the ADHD200 dataset, DVAE achieved excellent classification accuracy across 4 sites.
  • Segmentation of Bone Vessels in 3D Micro-CT Images Using the Monogenic Signal Phase and Watershed

    00:12:29
    0 views
    We propose an algorithm based on marker-controlled watershed and the monogenic signal phase asymmetry for the segmentation of bone and micro-vessels in mouse bone. The images are acquired using synchrotron radiation micro-computed tomography (SR-µCT). The marker image is generated with hysteresis thresholding and morphological filters. The control surface is generated using the phase asymmetry of the monogenic signal in order to detect edge-like structures only, as well as improving detection in low contrast areas, such as bone-vessel interfaces. The quality of segmentation is evaluated by comparing to manually segmented images using the Dice coefficient. The proposed method shows substantial improvement compared to a previously proposed method based on hysteresis thresholding, as well as compared to watershed using the gradient image as control surface. The algorithm was applied to images of healthy and metastatic bone, permitting quantification of both bone and vessel structures.
  • Substituting Gadolinium in Brain MRI Using DeepContrast

    00:14:45
    0 views
    Cerebral blood volume (CBV) is a hemodynamic correlate of oxygen metabolism and reflects brain activity and function. High-resolution CBV maps can be generated using the steady-state gadolinium-enhanced MRI technique. Such technique requires an intravenous injection of exogenous gadolinium based contrast agent (GBCA) and recent studies suggest that the GBCA can accumulate in the brain after frequent use. We hypothesize that endogenous sources of contrast might exist within the most conventional and commonly acquired structural MRI, potentially obviating the need for exogenous contrast. Here, we test this hypothesis by developing and optimizing a deep learning algorithm, which we call DeepContrast, in mice. We find that DeepContrast performs equally well as exogenous GBCA in mapping CBV of the normal brain tissue and enhancing glioblastoma. Together, these studies validate our hypothesis that a deep learning approach can potentially replace the need for GBCAs in brain MRI.
  • Bone Structures Extraction and Enhancement in Chest Radiographs Via CNN Trained on Synthetic Data

    00:09:39
    0 views
    In this paper, we present a deep learning based image processing technique for extraction of bone structures in chest radiographs using a U-Net FCNN. The U-Net was trained to accomplish the task in a fully supervised setting. To create the training image pairs, we employed simulated X-rays, or digitally reconstructed radiographs (DRRs), derived from 664 CT scans belonging to the LIDC-IDRI dataset. Using HU-based segmentation of bone structures in the CT domain, a synthetic 2D "bone X-ray" DRR is produced and used for training the network. For the reconstruction loss, we utilize two loss functions: L1 loss and perceptual loss. Once the bone structures are extracted, the original image can be enhanced by fusing the original input X-ray and the synthesized "bone X-ray". We show that our enhancement technique is applicable to real X-ray data, and display our results on the NIH Chest X-Ray-14 dataset.
  • Memory-Augmented Anomaly Generative Adversarial Network for Retinal OCT Images Screening

    00:03:54
    0 views
    Optical coherence tomography (OCT) plays an important role in retinal disease screening. Traditional classification-based screening methods require complicated annotation work. Due to the difficulty of collecting abnormal samples, some anomaly detection methods have been applied to screen retinal lesions based only on normal samples. However, most existing anomaly detection methods are time-consuming and easily misjudge abnormal OCT images with subtle lesions such as small drusen. To solve these problems, we propose a memory-augmented anomaly generative adversarial network (MA-GAN) for retinal OCT screening. Within the generator, we establish a memory module to enhance the ability to express the details of typical normal OCT patterns. Meanwhile, the discriminator of MA-GAN is decomposed orthogonally so that it simultaneously has encoding ability. As a result, abnormal images can be screened by the larger difference in the distribution of pixels and features between the original image and its reconstruction. The model, trained with 13000 normal OCT images, reaches 0.875 AUC on a test set of 2000 normal images and 1000 anomalous images, and inference takes only 35 milliseconds per image. Compared to other anomaly detection methods, our MA-GAN has advantages in accuracy and computation time for retinal OCT screening.
  • Assessment of Lung Biomechanics in COPD Using Image Registration

    00:08:26
    0 views
    Lung biomechanical properties can be used to detect disease, assess abnormal lung function, and track disease progression. In this work, we used computed tomography (CT) imaging to measure three biomechanical properties in the lungs of subjects with varying degrees of chronic obstructive pulmonary disease (COPD): the Jacobian determinant (J), a measure of volumetric expansion or contraction; the anisotropic deformation index (ADI), a measure of the magnitude of anisotropic deformation; and the slab-rod index (SRI), a measure of the nature of the anisotropy (i.e., whether the volume is deformed to a rod-like or slab-like shape). We analyzed CT data from 247 subjects collected as part of the Subpopulations and Intermediate Outcome Measures in COPD Study (SPIROMICS). The results show that the mean J and mean ADI decrease as disease severity increases, indicating less volumetric expansion and more isotropic expansion with increased disease. No differences in average SRI were observed across the different levels of disease. The methods and analysis described in this study may provide new insights into our understanding of the biomechanical behavior of the lung and the changes that occur with COPD.
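
    The Jacobian determinant J used above quantifies local volume change of the registration-derived deformation field phi(x) = x + u(x); J > 1 indicates local expansion and J < 1 contraction. A voxel-wise NumPy sketch assuming unit voxel spacing (ADI and SRI, which derive from the singular values of the same Jacobian, are omitted):

        import numpy as np

        def jacobian_determinant(disp):
            # disp: (3, X, Y, Z) displacement field u; returns the per-voxel determinant of d(phi)/dx.
            grads = np.stack([np.stack(np.gradient(disp[c]), axis=0) for c in range(3)], axis=0)
            jac = grads + np.eye(3)[:, :, None, None, None]            # add identity: d(phi)/dx = I + du/dx
            return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))   # (X, Y, Z) map of J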
  • Airway Segmentation in Speech MRI Using the U-Net Architecture

    00:14:41
    1 view
    We develop a fully automated airway segmentation method to segment the vocal tract airway from surrounding soft tissue in speech MRI. We train a U-Net architecture to learn the end-to-end mapping between a mid-sagittal image (at the input) and the manually segmented airway (at the output). We base our training on the open-source University of Southern California (USC) speech morphology MRI database, consisting of speakers producing a variety of sustained vowel and consonant sounds. Once trained, our model performs fast airway segmentation on unseen images, on the order of 210 ms/slice on a modern CPU with 12 cores. Using manual segmentation as a reference, we evaluate the performance of the proposed U-Net airway segmentation against an existing seed-growing segmentation and against manual segmentation from a different user. We demonstrate improved Dice similarity with U-Net compared to seed-growing, and minor differences in Dice similarity between U-Net and the manual segmentation from the second user.
  • Weakly Supervised Prostate TMA Classification Via Graph Convolutional Networks

    00:11:46
    0 views
    Histology-based grade classification is clinically important for many cancer types in stratifying patients into distinct treatment groups. In prostate cancer, the Gleason score is a grading system used to measure the aggressiveness of prostate cancer from the spatial organization of cells and the distribution of glands. However, the subjective interpretation of Gleason score often suffers from large interobserver and intraobserver variability. Previous work in deep learning-based objective Gleason grading requires manual pixel-level annotation. In this work, we propose a weakly-supervised approach for grade classification in tissue micro-arrays (TMA) using graph convolutional networks (GCNs), in which we model the spatial organization of cells as a graph to better capture the proliferation and community structure of tumor cells. We learn the morphometry of each cell using a contrastive predictive coding (CPC)-based self-supervised approach. Using five-fold cross-validation we demonstrate that our method can achieve a 0.9637 ± 0.0131 AUC using only TMA-level labels. Our method also demonstrates a 36.36% improvement in AUC over standard GCNs with texture features and a 15.48% improvement over GCNs with VGG19 features. Our proposed pipeline can be used to objectively stratify low and high-risk cases, reducing inter- and intra-observer variability and pathologist workload.
