IEEE ISBI 2020 Virtual Conference April 2020

Showing 1 - 50 of 459
  • IEEE MemberUS $11.00
  • Society MemberUS $0.00
  • IEEE Student MemberUS $11.00
  • Non-IEEE MemberUS $15.00
Purchase
  • Evaluating Multi-Class Segmentation Errors with Anatomical Priors

    00:13:07
    Acquiring large-scale annotations is challenging in medical image analysis because of the limited number of qualified annotators. It is therefore essential to achieve high performance with a small amount of labeled data, where the key lies in mining the most informative samples to annotate. In this paper, we propose two effective metrics which leverage anatomical priors to evaluate multi-class segmentation methods without ground truth (GT). Together with our smooth margin loss, these metrics help mine the most informative samples for training. In experiments, we first demonstrate that the proposed metrics clearly distinguish samples with different degrees of error in the task of pulmonary lobe segmentation. We then show that our metrics, synergized with the proposed loss function, reach a Pearson correlation coefficient (PCC) of 0.7447 with mean surface distance (MSD) and -0.5976 with the Dice score, which implies the proposed metrics can be used to evaluate segmentation methods. Finally, we use our metrics as sample selection criteria in an active learning setting and show that the model trained with our anatomy-based query achieves performance comparable to models trained with random and uncertainty-based queries that use more annotated training data.
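A hedged sketch of the sample-mining step described above: unlabeled samples are ranked by a ground-truth-free anatomy-based error score and the top-k are queried for annotation (the sample ids and scores below are invented for illustration, not the paper's metrics):

```python
# Sketch of anatomy-based active learning queries (hypothetical scores):
# unlabeled samples are ranked by a GT-free anatomical-error metric and
# the most informative (highest-error) ones are sent for annotation.

def select_queries(scores, k):
    """Return the k sample ids with the highest anatomy-based error score."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

# Example: metric values produced by some GT-free evaluation (made up here).
scores = {"scan_01": 0.12, "scan_02": 0.87, "scan_03": 0.45, "scan_04": 0.91}
queries = select_queries(scores, 2)  # the two samples most in need of annotation
```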
  • Benchmarking Deep Nets MRI Reconstruction Models on the FastMRI Publicly Available Dataset

    00:15:27
    The MRI reconstruction field has long lacked a proper dataset allowing reproducible results on real raw (i.e., complex-valued) data, especially for deep learning methods, which require much more data than classical Compressed Sensing reconstruction. This gap is now filled by the fastMRI dataset, and recent deep learning models need to be evaluated on this benchmark. Moreover, these networks are written in different frameworks and hosted in different repositories (when publicly available), so a common, publicly available tool is needed to enable a reproducible benchmark of the different methods and to ease the building of new models. We provide such a tool for benchmarking different deep learning reconstruction models.
  • Region of Interest Identification for Cervical Cancer Images

    00:14:28
    Every two minutes a woman dies of cervical cancer globally, due to lack of sufficient screening. Given a whole slide image (WSI) obtained by scanning a microscope glass slide for a Liquid Based Cytology (LBC) Pap test, our goal is to assist the pathologist in determining the presence of pre-cancerous or cancerous cervical anomalies. Inter-annotator variation, large image sizes, data imbalance, stain variations, and the lack of good annotation tools make this problem challenging. Existing related work has focused on sub-problems like cell segmentation and cervical cell classification but does not provide a practically feasible holistic solution. We propose a practical system architecture based on displaying regions of interest on WSIs containing potential anomalies for review by pathologists, increasing their productivity. We build multiple deep learning classifiers as part of the proposed architecture. Our experiments with a dataset of ~19000 regions of interest yield an accuracy of ~89% on a balanced dataset in both binary and 6-class classification settings. Our deployed system provides a top-5 accuracy of ~94%.
  • A Spatially Constrained Deep Convolutional Neural Network for Nerve Fiber Segmentation in Corneal Confocal Microscopic Images Using Inaccurate Annotations

    00:12:04
    Semantic image segmentation is one of the most important tasks in medical image analysis. Most state-of-the-art deep learning methods require a large number of accurately annotated examples for model training. However, accurate annotation is difficult to obtain, especially in medical applications. In this paper, we propose a spatially constrained deep convolutional neural network (DCNN) to achieve smooth and robust image segmentation using inaccurately annotated labels for training. In our method, image segmentation is formulated as a graph optimization problem solved by a DCNN model learning process. The cost function to be optimized consists of a unary term computed by cross-entropy measurement and a pairwise term that enforces local label consistency. The proposed method has been evaluated on corneal confocal microscopy (CCM) images for nerve fiber segmentation, where accurate annotations are extremely difficult to obtain. Based on both quantitative results on a synthetic dataset and qualitative assessment of a real dataset, the proposed method achieves superior performance in producing high-quality segmentation results even when trained with inaccurate labels.
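A minimal sketch of the cost function described in this abstract, with a unary cross-entropy term and a pairwise local-consistency term (1-D toy probabilities; the weight `lam` and the neighbourhood structure are illustrative assumptions, not the paper's exact formulation):

```python
import math

def spatially_constrained_loss(probs, labels, lam=0.5):
    """Unary cross-entropy plus a pairwise term penalising disagreement
    between neighbouring pixel predictions (1-D sketch)."""
    # Unary term: average cross-entropy against the (possibly noisy) labels.
    unary = -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(probs)
    # Pairwise term: difference of foreground probability between neighbours.
    pairwise = sum(abs(a[1] - b[1]) for a, b in zip(probs, probs[1:])) / (len(probs) - 1)
    return unary + lam * pairwise

# Two-class toy example: per-pixel [P(background), P(fibre)] and labels.
probs = [[0.9, 0.1], [0.2, 0.8], [0.1, 0.9]]
labels = [0, 1, 1]
loss = spatially_constrained_loss(probs, labels)
```

Spatially smooth predictions incur a smaller pairwise penalty, which is the regularising effect the abstract describes.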
  • AF-SEG: An Annotation-Free Approach for Image Segmentation by Self-Supervision and Generative Adversarial Network

    00:08:35
    Traditional segmentation methods are annotation-free but usually produce unsatisfactory results. The latest leading deep learning methods improve the results but require expensive and time-consuming pixel-level manual annotations. In this work, we propose a novel method based on self-supervision and a generative adversarial network (GAN), which has high performance and requires no manual annotations. First, we apply traditional segmentation methods to obtain a coarse segmentation. Then, we use a GAN to generate a synthetic image whose foreground corresponds pixel-to-pixel to the coarse segmentation. Finally, we train the segmentation model on pairs of synthetic images and coarse segmentations. We evaluate our method on two types of segmentation tasks: red blood cell (RBC) segmentation on microscope images and vessel segmentation on digital subtraction angiographies (DSA). The results show that our annotation-free method provides a considerable improvement over traditional methods and achieves accuracies comparable to fully supervised methods.
  • Learning Probabilistic Fusion of Multilabel Lesion Contours

    00:14:22
    Supervised machine learning algorithms, especially in the medical domain, are affected by considerable ambiguity in expert markings, primarily in proximity to lesion contours. In this study we address the case where the experts' opinion for those ambiguous areas is treated as a distribution over the possible values. We propose a novel method that modifies the experts' distributional opinion in ambiguous areas by fusing their markings based on their sensitivity and specificity. The algorithm can be applied at the end of any label fusion algorithm that can handle soft values. It was applied to obtain consensus from soft Multiple Sclerosis (MS) segmentation masks. Soft MS segmentations are constructed from manual binary delineations by including lesion-surrounding voxels in the segmentation mask with a reduced confidence weight. The method was evaluated on the MICCAI 2016 challenge dataset and outperformed previous methods.
  • OCT Image Quality Evaluation Based on Deep and Shallow Features Fusion Network

    00:09:01
    Optical coherence tomography (OCT) has become an important tool for the diagnosis of retinal diseases, and image quality assessment on OCT images has considerable clinical significance for guaranteeing the accuracy of diagnosis by ophthalmologists. Traditional OCT image quality assessment is usually based on hand-crafted features such as the signal strength index and signal-to-noise ratio. These features reflect only part of image quality and cannot be seen as a full representation of it. In particular, there has been no detailed description of OCT image quality so far. In this paper, we first define OCT image quality in three grades ("Good", "Usable" and "Poor"). Considering the diversity of image quality, we then propose a deep and shallow features fusion network (DSFF-Net) to conduct multi-label classification. The DSFF-Net combines deep and enhanced shallow features of OCT images to predict the image quality grade. Experimental results on a large OCT dataset show that our network obtains state-of-the-art performance, outperforming classical CNN architectures.
  • An Efficient Hybrid Model for Kidney Tumor Segmentation in CT Images

    00:09:16
    Kidney tumor segmentation from CT volumes is essential for lesion diagnosis. Given the excessive GPU memory requirements of 3D medical images, conventional U-Net variant architectures train and infer on slices or patches, which inevitably hampers contextual learning. In this paper, we propose a novel and effective hybrid model for kidney tumor segmentation in CT images with two parts: 1) a Foreground Segmentation Network; 2) a Sparse PointCloud Segmentation Network. Specifically, the Foreground Segmentation Network first segments the foreground, i.e., kidneys with tumors, from the background in the voxel grid using a classical V-Net. Second, we represent the obtained foreground regions as point clouds and feed them into the Sparse PointCloud Segmentation Network for fine-grained segmentation of kidney and tumor. The critical module embedded in the second part is an efficient Submanifold Sparse Convolutional Network (SSCN). By exploiting SSCNs, our model can take the entire foreground as input for better context learning in a memory-efficient manner, while also accounting for the anisotropy of CT images. Experiments show that our model achieves state-of-the-art tumor segmentation while significantly reducing GPU resource demand.
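The voxel-to-point-cloud step described above can be sketched as collecting foreground voxel coordinates scaled by voxel spacing to reflect CT anisotropy (a toy mask; the actual feature attachment and SSCN input format are not shown and are assumptions):

```python
def voxels_to_pointcloud(mask, spacing=(1.0, 1.0, 1.0)):
    """Convert a binary 3-D foreground mask into a list of (x, y, z)
    physical coordinates; spacing models the anisotropy of CT voxels."""
    sx, sy, sz = spacing
    return [(i * sx, j * sy, k * sz)
            for i, plane in enumerate(mask)
            for j, row in enumerate(plane)
            for k, v in enumerate(row) if v]

# Toy 2x2x2 mask with two foreground voxels; slice spacing larger than
# in-plane spacing, as is typical for CT.
mask = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]
points = voxels_to_pointcloud(mask, spacing=(3.0, 1.0, 1.0))
```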
  • Restoration of Marker Occluded Hematoxylin and Eosin Stained Whole Slide Histology Images Using Generative Adversarial Networks

    00:14:27
    It is common for pathologists to annotate specific regions of the tissue, such as tumor, directly on the glass slide with markers. Although this practice was helpful prior to the advent of histology whole slide digitization, it often occludes important details which are increasingly relevant to immuno-oncology due to recent advancements in digital pathology imaging techniques. The current work uses a generative adversarial network with cycle loss to remove these annotations while maintaining the underlying structure of the tissue, by solving an image-to-image translation problem. We train our network on up to 300 whole slide images with marker inks and show that 70% of the corrected image patches are indistinguishable from originally uncontaminated tissue to a human expert. This portion increases to 97% when we replace the human expert with a deep residual network. We demonstrate the fidelity of the method to the original image by calculating the correlation between image gradient magnitudes. We observed a revival of up to 94,000 nuclei per slide in our dataset, the majority of which were located on the tissue border.
  • Recurrent Neural Networks for Compressive Video Reconstruction

    00:12:34
    Single-pixel imaging allows low-cost cameras to be built for imaging modalities where a conventional camera would be either too expensive or too cumbersome. This is very attractive for biomedical imaging applications based on hyperspectral measurements, such as image-guided surgery, which requires the full spectrum of fluorescence. A single-pixel camera essentially measures the inner product of the scene and a set of patterns. An inverse problem has to be solved to recover the original image from the raw measurement. The challenge in single-pixel imaging is to reconstruct the video sequence in real time from under-sampled data. Previous approaches have focused on reconstructing each frame independently, which fails to exploit the natural temporal redundancy in a video sequence. In this study, we propose a fast deep-learning reconstructor that exploits the spatio-temporal features in a video. In particular, we consider convolutional gated recurrent units, which have low memory requirements. Our simulation shows that the proposed recurrent network improves reconstruction quality compared to static approaches that reconstruct the video frames independently.
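The measurement model described above (inner products of the scene with a set of patterns) can be sketched as follows. The 4-pixel scene and orthonormal Hadamard patterns are toy assumptions, and the adjoint-based recovery stands in for the learned reconstructor; it is exact only because the toy patterns form an orthonormal basis:

```python
def measure(scene, patterns):
    """Single-pixel measurements: one inner product per displayed pattern."""
    return [sum(p_i * x_i for p_i, x_i in zip(p, scene)) for p in patterns]

def reconstruct(meas, patterns):
    """For orthonormal patterns the adjoint recovers the scene exactly."""
    n = len(patterns[0])
    return [sum(m * p[i] for m, p in zip(meas, patterns)) for i in range(n)]

# 4-pixel scene and normalised Hadamard patterns (orthonormal rows).
h = 0.5
patterns = [[h, h, h, h], [h, -h, h, -h], [h, h, -h, -h], [h, -h, -h, h]]
scene = [1.0, 2.0, 3.0, 4.0]
meas = measure(scene, patterns)
rec = reconstruct(meas, patterns)
```

With under-sampled (fewer) patterns the adjoint no longer suffices, which is where the recurrent network in the abstract comes in.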
  • Lymphoma Segmentation in PET Images Based on Multi-view and Conv3D Fusion Strategy

    00:10:07
    Due to the poor image information of lymphoma in PET images, segmenting them correctly remains a challenge. In this work, a fusion strategy combining 2D multi-view and 3D networks is proposed to take full advantage of the available information for segmentation. First, we train three 2D network models from three orthogonal views based on 2D ResUnet, and a 3D network model on volumetric data based on 3D ResUnet. The obtained preliminary results (three 2D results and one 3D result) are then fused with the original volumetric data using a Conv3D fusion strategy. Finally, a series of experiments on a lymphoma dataset shows that the proposed multi-view lymphoma co-segmentation scheme is promising, improving comprehensive performance by combining 2D multi-view and 3D networks.
  • Weakly-Supervised Prediction of Cell Migration Modes in Confocal Microscopy Images Using Bayesian Deep Learning

    00:08:56
    Cell migration is pivotal in development, physiology, and disease treatment. A single cell on a 2D surface can utilize continuous or discontinuous migration modes. To understand cell migration, adequate quantification for single-cell-based analysis is crucial. An automated approach could alleviate tedious manual analysis, facilitating large-scale drug screening. Supervised deep learning has shown promising outcomes in computerized microscopy image analysis. However, its application is limited by the scarcity of carefully annotated data and by deterministic outputs that carry no uncertainty. We compare three deep learning models on the problem of learning discriminative morphological representations from weakly annotated data for predicting cell migration modes. We also estimate Bayesian uncertainty to describe the confidence of the probabilistic predictions. Among the three compared models, DenseNet yielded the best results, with a sensitivity of 87.91% at a false negative rate of 1.26%.
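Bayesian predictive uncertainty of the kind described above is commonly summarized as the entropy of the class distribution averaged over stochastic (e.g. MC-dropout) forward passes; a minimal sketch, where the probabilities are made up and the averaging scheme is an assumption rather than the paper's exact estimator:

```python
import math

def predictive_entropy(mc_probs):
    """Entropy of the class distribution averaged over stochastic forward
    passes: low when passes agree, high when they disagree."""
    n = len(mc_probs)
    mean = [sum(run[c] for run in mc_probs) / n for c in range(len(mc_probs[0]))]
    return -sum(p * math.log(p) for p in mean if p > 0)

# Confident prediction: all passes agree -> low entropy.
confident = [[0.95, 0.05], [0.9, 0.1], [0.92, 0.08]]
# Uncertain prediction: passes disagree -> high entropy.
uncertain = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
```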
  • MixModule: Mixed CNN Kernel Module for Medical Image Segmentation

    00:15:06
    Convolutional neural networks (CNNs) have been successfully applied to medical image classification, segmentation, and related tasks. Among the many CNN architectures, U-Net and its improved variants are widely used and have achieved state-of-the-art performance in recent years. These improved architectures focus on structural changes, and the size of the convolution kernel is generally fixed. In this paper, we propose a module that combines the benefits of multiple kernel sizes and apply it to U-Net and its variants. We test our module on three segmentation benchmark datasets, and experimental results show significant improvement.
  • Fast Automatic Parameter Selection for MRI Reconstruction

    00:13:42
    This paper proposes an automatic parameter selection framework for optimizing the performance of parameter-dependent regularized reconstruction algorithms. The proposed approach exploits a convolutional neural network for direct estimation of the regularization parameters from the acquired imaging data. This method can provide very reliable parameter estimates in a computationally efficient way. The effectiveness of the proposed approach is verified on transform-learning-based magnetic resonance image reconstructions of two different publicly available datasets. This experiment qualitatively and quantitatively measures improvement in image reconstruction quality using the proposed parameter selection strategy versus both existing parameter selection solutions and a fully deep-learning reconstruction with limited training data. Based on the experimental results, the proposed method improves average reconstructed image peak signal-to-noise ratio by a dB or more versus all competing methods in both brain and knee datasets, over a range of subsampling factors and input noise levels.
  • Machine Learning and Graph Based Approach to Automatic Right Atrial Segmentation from Magnetic Resonance Imaging

    00:13:50
    Manual delineation of the right atrium throughout the cardiac cycle is tedious and time-consuming, yet promising for early detection of right heart dysfunction. In this study, we developed a fully automated approach to right atrial segmentation in 4-chamber long-axis magnetic resonance image (MRI) cine sequences by applying a U-Net based neural network followed by a contour reconstruction and refinement algorithm. In contrast to U-Net, the proposed approach performs segmentation using open contours. This allows for exclusion of the tricuspid valve region from the atrial segmentation, an essential aspect in the analysis of atrial wall motion. The MR images were retrospectively collected from 242 cine sequences, which were manually segmented by an expert radiologist to produce the ground truth data. The neural network was trained over 600 epochs under six different hyperparameter configurations on 202 randomly selected sequences to recognize a dilated region surrounding the right atrial contour. A graph algorithm is then applied to the binary labels predicted by the trained model to accurately reconstruct the corresponding contours. Finally, the contours are refined by combining a nonrigid registration algorithm, which tracks the deformation of the heart, with Gaussian process regression. Evaluation of the proposed method on the remaining 40 MR image sequences, excluding a single outlier sequence, yielded promising Sørensen–Dice coefficients and Hausdorff distances of 95.2% and 4.64 mm, respectively, before refinement and 94.9% and 4.38 mm afterward.
  • Fusing Metadata and Dermoscopy Images for Skin Disease Diagnosis

    00:05:06
    To date, automatically classifying dermoscopy images remains challenging. Although state-of-the-art convolutional networks have been applied to the classification problem and achieve decent overall prediction results, there is still room for performance improvement, especially for rare disease categories. Considering that human dermatologists often use additional information (e.g., body locations of skin lesions) to help diagnose, we propose using both dermoscopy images and non-image metadata for intelligent diagnosis of skin diseases. Specifically, the metadata is innovatively applied to control the importance of different types of visual information during diagnosis. Comprehensive experiments with various deep learning architectures demonstrate the superior performance of the proposed fusion approach, especially for relatively rare diseases. All our code will be made publicly available.
  • Task fMRI Guided Fiber Clustering Via a Deep Clustering Method

    00:07:48
    Fiber clustering is a prerequisite for tract-based analysis of the human brain and is very important for explaining the relationship between brain structure and function. Over the last decade, what constitutes a reasonable clustering of fibers has remained an open and challenging question. Specifically, the purpose of fiber clustering is to group the whole brain's white matter fibers extracted from tractography into similar and meaningful fiber bundles; how to define the "similar and meaningful" metric thus determines the performance and possible applications of a fiber clustering method. In the past, researchers typically divided fibers into anatomically or structurally similar bundles, but rarely according to functional meaning. In this work, we propose a novel fiber clustering method that adopts both functional and structural information, combining them as the input of a deep convolutional autoencoder with embedded clustering, which can better extract and use the features within the data. Experimental results show that the proposed method clusters the whole brain's fibers into functionally and structurally meaningful bundles.
  • Accelerating the Registration of Image Sequences by Spatio-Temporal Multilevel Strategies

    00:16:33
    Multilevel strategies are an integral part of many image registration algorithms. These strategies are well known for avoiding undesirable local minima, providing an outstanding initial guess, and reducing overall computation time. State-of-the-art multilevel strategies build a hierarchy of discretization in the spatial dimensions. In this paper, we present a spatio-temporal strategy, where we introduce a hierarchical discretization in the temporal dimension at each spatial level. This strategy is suitable for motion estimation problems where the motion is assumed smooth over time. Our strategy exploits the temporal smoothness among image frames by following a predictor-corrector approach: it predicts the motion by a novel interpolation method and later corrects it by registration. The prediction step provides a good initial guess for the correction step, reducing the overall computational time for registration. An average acceleration factor of 2.5 over state-of-the-art multilevel methods is achieved on the three examined optical coherence tomography datasets.
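A hedged sketch of the predictor step: the paper uses a novel interpolation method, but the idea of exploiting temporal smoothness can be illustrated with simple linear extrapolation of displacements (the values below are toy numbers, and linear extrapolation is a stand-in assumption):

```python
def predict_motion(d_prev, d_prev2):
    """Predictor step: linearly extrapolate the next frame's displacement
    field from the two previous ones, assuming smooth motion over time."""
    return [2 * a - b for a, b in zip(d_prev, d_prev2)]

# Displacements at t-2 and t-1 for three control points (one component each).
d_t2 = [0.0, 1.0, 2.0]
d_t1 = [0.5, 1.5, 2.5]
d_pred = predict_motion(d_t1, d_t2)  # initial guess handed to the corrector
```

The corrector (the actual registration) then only needs to refine this guess, which is where the speed-up comes from.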
  • Enumeration of Ampicillin-Resistant E. Coli in Blood Using Droplet Microfluidics and High-Speed Image Processing

    00:15:39
    Bacteria entering the bloodstream cause bloodstream infection (BSI). Without proper treatment, BSI can lead to sepsis, a life-threatening condition. Detecting bacteria in blood at the early stages of BSI can effectively prevent the development of sepsis. Using microfluidic droplets for single-bacterium encapsulation provides single-digit bacterial detection sensitivity. In this study, samples of ampicillin-resistant E. coli in human blood were partitioned into millions of 30 µm diameter microfluidic droplets, followed by 8-hour culturing. Thousands of fluorescent bacteria from a single colony filled the positive droplets after the culturing process. Circle detection software based on the Hough transform was developed to count the number of positive droplets in fluorescence images. Processing one image can take as little as 0.5 ms when the original image is pre-processed and binarized by the developed software.
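As a simplified, hedged stand-in for the Hough-based counter above (which detects circles rather than arbitrary blobs), counting bright regions in a binarized fluorescence image can be sketched with a connected-component pass:

```python
def count_blobs(binary):
    """Count 4-connected bright regions in a binarised fluorescence image
    (a simplified stand-in for the Hough-based positive-droplet counter)."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]  # flood-fill this region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and binary[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs

# Two fluorescent droplets in a tiny thresholded image.
img = [[1, 1, 0, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 1]]
n_positive = count_blobs(img)
```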
  • Virtual Staining for Mitosis Detection in Breast Histopathology

    00:11:34
    We propose a virtual staining methodology based on Generative Adversarial Networks to map histopathology images of breast cancer tissue from H&E stain to PHH3 and vice versa. We use the resulting synthetic images to build Convolutional Neural Networks (CNNs) for automatic detection of mitotic figures, a strong prognostic biomarker used in routine breast cancer diagnosis and grading. We propose several scenarios in which CNNs trained with synthetically generated histopathology images perform on par with, or even better than, the same baseline model trained with real images. We discuss the potential of this application to scale the number of training samples without the need for manual annotations.
  • Age-Conditioned Synthesis of Pediatric Computed Tomography with Auxiliary Classifier Generative Adversarial Networks

    00:13:04
    Deep learning is a popular and powerful tool in computed tomography (CT) image processing, such as organ segmentation, but its requirement for large training datasets remains a challenge. Even though children exhibit large anatomical variability during growth, training datasets of pediatric CT scans are especially hard to obtain due to the risks of radiation to children. In this paper, we propose a method to conditionally synthesize realistic pediatric CT images using a new auxiliary classifier generative adversarial network (ACGAN) architecture that takes age information into account. The proposed network generates age-conditioned high-resolution CT images to enrich pediatric training datasets.
  • 6-Month Infant Brain MRI Segmentation Guided by 24-Month Data Using Cycle-Consistent Adversarial Networks

    00:06:28
    Due to the extremely low intensity contrast between white matter (WM) and gray matter (GM) at around 6 months of age (the isointense phase), manual annotation is difficult, so the number of training labels is highly limited. Consequently, automatically segmenting isointense infant brain MRI remains challenging. Meanwhile, the intensity contrast of images in the early adult phase, such as at 24 months of age, is relatively better, and these images can easily be segmented by well-developed tools, e.g., FreeSurfer. The question, therefore, is how to employ such high-contrast (e.g., 24-month-old) images to guide the segmentation of 6-month-old images. Motivated by this, we propose a method that exploits 24-month-old images for reliable tissue segmentation of 6-month-old images. Specifically, we design a 3D-cycleGAN-Seg architecture to generate synthetic images of the isointense phase by transferring appearances between the two time points. To guarantee tissue segmentation consistency between 6-month-old and 24-month-old images, we employ features from generated segmentations to guide the training of the generator network. To further improve the quality of synthetic images, we propose a feature matching loss that computes the cosine distance between unpaired segmentation features of the real and fake images. The transferred 24-month-old images are then used to jointly train the segmentation model on the 6-month-old images. Experimental results demonstrate the superior performance of the proposed method compared with existing deep learning-based methods.
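The feature matching loss mentioned above reduces, for a pair of feature vectors, to a cosine distance; a minimal sketch on toy vectors (the feature values are made up, and real segmentation features would be high-dimensional tensors):

```python
import math

def feature_matching_loss(f_real, f_fake):
    """Cosine distance between segmentation features of real and fake
    images: 0 for perfectly aligned features, up to 2 for opposed ones."""
    dot = sum(a * b for a, b in zip(f_real, f_fake))
    norm = math.sqrt(sum(a * a for a in f_real)) * math.sqrt(sum(b * b for b in f_fake))
    return 1.0 - dot / norm

# Parallel features -> near-zero loss; orthogonal features -> loss of 1.
aligned = feature_matching_loss([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
orthogonal = feature_matching_loss([1.0, 0.0], [0.0, 1.0])
```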
  • Autoregression and Structured Low-Rank Modeling of Sinograms

    00:14:17
    The Radon transform converts an image into a sinogram, and is often used as a model of data acquisition for many tomographic imaging modalities. Although it is well-known that sinograms possess some redundancy, we observe in this work that they can have substantial additional redundancies that can be learned directly from incomplete data. In particular, we demonstrate that sinograms approximately satisfy multiple data-dependent shift-invariant local autoregression relationships. This autoregressive structure implies that samples from the sinogram can be accurately interpolated as a shift-invariant linear combination of neighboring sinogram samples, and that a Toeplitz or Hankel matrix formed from sinogram data should be approximately low-rank. This multi-fold redundancy can be used to impute missing sinogram values or for noise reduction, as we demonstrate with real X-ray CT data.
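A minimal sketch of the structured low-rank idea described above: samples obeying a shift-invariant autoregression make a Hankel matrix built from them rank-deficient, because every column satisfies the same linear relation. The 2-tap recursion below is an illustrative assumption, not the data-dependent filters learned in the paper:

```python
def hankel(samples, depth):
    """Hankel matrix whose columns are sliding windows of a sinogram row."""
    return [[samples[i + j] for j in range(len(samples) - depth + 1)]
            for i in range(depth)]

# Samples obeying a 2-tap shift-invariant autoregression
# x[n] = 1.5*x[n-1] - 0.5*x[n-2].
x = [1.0, 2.0]
for _ in range(6):
    x.append(1.5 * x[-1] - 0.5 * x[-2])

H = hankel(x, 3)
# Every column satisfies the same relation, so H is rank-deficient:
# row 2 = 1.5*row 1 - 0.5*row 0, which is what makes missing samples
# recoverable by shift-invariant linear interpolation.
```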
  • Joint Model-Based Learning for Optimized Sampling in Parallel MRI

    00:12:41
    Modern MRI schemes rely on parallel imaging hardware to accelerate the acquisition. The reconstruction quality heavily depends on the specific sampling pattern used during acquisition. The main focus of this work is to jointly optimize the sampling pattern and the deep prior in a Model-Based Deep Learning (MoDL)[1] framework with application to parallel MRI. Model-based schemes use the information of the sampling pattern within the reconstruction algorithm, thus decoupling the CNN block from changes in sampling pattern.
  • RADIOGAN: Deep Convolutional Conditional Generative Adversarial Network To Generate PET Images

    00:11:34
    1 view
    Generative neural networks are a very promising tool to address the lack of data, especially in medical imaging, where data are sometimes scarce. RADIOGAN is a new deep conditional architecture based on generative adversarial networks (GANs) and trained on FDG PET images for the synthesis of PET exams including physiological and/or pathological fixations. We show that walking in latent space can be used as a tool to evaluate the quality of the generated images. A multicenter database of 1606 patients was used (422 head and neck cancer, 189 lung cancer, 97 esophageal cancer, 225 lymphomas and 675 without pathological fixations, considered as normal PET). PET images were spatially normalized to an isotropic resolution of 2 mm, normalized to [0, 30] SUV and then to [0, 1] for use by RADIOGAN. The RADIOGAN architecture was built to take advantage of both DCGAN (Deep Convolutional GAN) and CGAN (Conditional GAN) to create a new DCCGAN (Deep Convolutional Conditional GAN) architecture. In addition to the image, it uses the class (pathology) of the image as information. The generator generates a new PET image from a random vector Z; by specifying the image class C, it is conditioned to generate an image belonging to the same class as C (normal patient, esophageal cancer, lung cancer, etc.). The model is trained for 300 iterations with an Adam optimizer and a learning rate of 0.0002. RADIOGAN has been trained to generate realistic images, excluding non-realistic images (half a patient, a patient with 3 legs, or a head instead of the stomach, etc.). The generator takes as input a random vector and the desired class. Walking in latent space results in a sequence of realistic images of patients slowly changing from one class to another (normal patient to lung cancer, for example).
RADIOGAN was compared with two state-of-the-art architectures, DCGAN and CGAN, and outperformed them by synthesizing high-quality realistic images. RADIOGAN showed very promising results and can be extended to CT images.
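The latent-space walk described in the abstract can be sketched without the GAN itself: interpolate between two latent codes, then decode each intermediate point with the trained generator. A minimal sketch, assuming 64-dimensional latent vectors and spherical interpolation (a common choice for GAN latent spaces; the abstract does not specify the interpolation scheme):

```python
import math
import random

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors at parameter t in [0, 1]."""
    dot = sum(a * b for a, b in zip(z0, z1))
    n0 = math.sqrt(sum(a * a for a in z0))
    n1 = math.sqrt(sum(b * b for b in z1))
    omega = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))
    so = math.sin(omega)
    if so < 1e-8:  # nearly parallel vectors: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(z0, z1)]
    return [(math.sin((1 - t) * omega) * a + math.sin(t * omega) * b) / so
            for a, b in zip(z0, z1)]

random.seed(0)
z_a = [random.gauss(0, 1) for _ in range(64)]  # latent code, e.g. a "normal" exam
z_b = [random.gauss(0, 1) for _ in range(64)]  # latent code, e.g. a "lung cancer" exam
path = [slerp(z_a, z_b, k / 9) for k in range(10)]  # 10-step walk through latent space
```

Each vector in `path` would be passed through the trained generator (together with a class label) to obtain the sequence of images slowly changing from one class to another.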
  • 3D Few-view CT Image Reconstruction with Deep Learning

    00:14:20
    1 view
    Few-view CT image reconstruction is an important approach to reduce the ionizing radiation dose associated with X-ray computed tomography (CT). In this paper, we propose a 3D deep-learning-based method for few-view CT image reconstruction from 3D projection data. The proposed method is validated and tested on a publicly available abdominal CT dataset.
  • DC-WCNN: A Deep Cascade of Wavelet Based Convolutional Neural Networks for MR Image Reconstruction

    00:13:16
    0 views
    Several variants of Convolutional Neural Networks (CNNs) have been developed for Magnetic Resonance (MR) image reconstruction. Among them, U-Net has been shown to be the baseline architecture for MR image reconstruction. However, its pooling layers perform sub-sampling, causing information loss that leads to blur and missing fine details in the reconstructed image. We propose a modification to the U-Net architecture to recover fine structures. The proposed network is a wavelet packet transform based encoder-decoder CNN with residual learning, called WCNN. It uses the discrete wavelet transform in place of pooling, the inverse wavelet transform in place of unpooling, and residual connections. We also propose a deep cascaded framework (DC-WCNN) which consists of cascades of WCNN and k-space data fidelity units to achieve high-quality MR reconstruction. Experimental results show that WCNN and DC-WCNN give promising results in terms of evaluation metrics and better recovery of fine details as compared to other methods.
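The key substitution in WCNN (wavelet transform in place of pooling) can be illustrated with a one-level Haar transform: like pooling it halves the resolution, but it also keeps the detail coefficients, so the input is exactly recoverable. A minimal 1D sketch (the network uses 2D wavelet packets; this is an illustration only):

```python
def haar_dwt(x):
    """One level of the Haar DWT: returns (approximation, detail), each half length."""
    s = 2 ** 0.5
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfectly reconstructs the input, unlike pooling."""
    s = 2 ** 0.5
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

x = [4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 6.0, 0.0]
a, d = haar_dwt(x)                      # half-resolution, but no information lost
assert all(abs(u - v) < 1e-12 for u, v in zip(haar_idwt(a, d), x))
```

Max- or average-pooling discards the information held in `d`, which is exactly the loss the WCNN design avoids.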
  • Twin Classification in Resting-State Brain Connectivity

    00:20:50
    0 views
    Twin studies are a major part of human brain research, revealing the importance of environmental and genetic influences on different aspects of brain behavior and disorders. Accurate characterization of identical and fraternal twins allows us to draw inferences about the genetic influence in a population. In this paper, we propose a novel pairwise classification pipeline to identify the zygosity of twin pairs using resting-state functional magnetic resonance images (rs-fMRI). The new feature representation is utilized to efficiently construct a brain network for each subject. Specifically, we project the fMRI signal onto a set of cosine series basis functions and use the projection coefficients as the compact and discriminative feature representation of noisy fMRI. The pairwise relation is encoded by a set of twinwise correlations between functional brain networks across brain regions. We further employ hill-climbing variable selection to identify the most genetically affected brain regions. The proposed framework has been applied to 208 twin pairs in the Human Connectome Project (HCP) and we achieved 92.23 (±4.43)% classification accuracy.
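The cosine-series projection described above can be sketched as follows; the exact basis normalization used by the authors is not specified, so a DCT-II-style basis is assumed here:

```python
import math

def cosine_coeffs(signal, k_max):
    """Project a noisy time series onto the first k_max cosine basis functions;
    the coefficients serve as a compact feature representation."""
    n = len(signal)
    coeffs = []
    for k in range(k_max):
        basis = [math.cos(math.pi * k * (t + 0.5) / n) for t in range(n)]
        norm = sum(b * b for b in basis)
        coeffs.append(sum(s * b for s, b in zip(signal, basis)) / norm)
    return coeffs

# A signal that is exactly the k = 3 basis function yields coefficient 1 at k = 3
# and (by orthogonality) 0 elsewhere.
n = 64
signal = [math.cos(math.pi * 3 * (t + 0.5) / n) for t in range(n)]
coeffs = cosine_coeffs(signal, 6)
```

Subjects would then be compared via twinwise correlations between such coefficient vectors across brain regions.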
  • Deep Network-Based Feature Selection for Imaging Genetics: Application to Identifying Biomarkers for Parkinson's Disease

    00:10:17
    0 views
    Imaging genetics is a methodology for discovering associations between imaging and genetic variables. Many studies adopted sparse models such as sparse canonical correlation analysis (SCCA) for imaging genetics. These methods are limited to modeling the linear imaging genetics relationship and cannot capture the non-linear high-level relationship between the explored variables. Deep learning approaches are underexplored in imaging genetics, compared to their great successes in many other biomedical domains such as image segmentation and disease classification. In this work, we proposed a deep learning model to select genetic features that can explain the imaging features well. Our empirical study on simulated and real datasets demonstrated that our method outperformed the widely used SCCA method and was able to select important genetic features in a robust fashion. These promising results indicate our deep learning model has the potential to reveal new biomarkers to improve mechanistic understanding of the studied brain disorders.
  • Agglomerative Region-Based Analysis

    00:15:15
    0 views
    A fundamental problem in brain imaging is the identification of volumes whose features distinguish two populations. One popular solution, Voxel-Based Analyses (VBA), glues together contiguous voxels with significant intra-voxel population differences. VBA's output regions may not be spatially consistent: each voxel may show a unique population effect. We introduce Agglomerative Region-Based Analysis (ARBA), which mitigates this issue to increase sensitivity. ARBA is an Agglomerative Clustering procedure, like Ward's method, which segments image sets in a common space to greedily maximize a likelihood function. The resulting regions are pared down to a set of disjoint regions that show statistically significant population differences via Permutation Testing. ARBA is shown to increase sensitivity over VBA in a detection task on multivariate Diffusion MRI brain images.
  • Tensor-Based Grading: A Novel Patch-Based Grading Approach for the Analysis of Deformation Fields

    00:14:09
    0 views
    The improvements in magnetic resonance imaging have led to the development of numerous techniques to better detect structural alterations caused by neurodegenerative diseases. Among these, the patch-based grading framework has been proposed to model local patterns of anatomical changes. This approach is attractive because of its low computational cost and its competitive performance. Other studies have proposed to analyze the deformations of brain structures using tensor-based morphometry, which is a highly interpretable approach. In this work, we propose to combine the advantages of these two approaches by extending the patch-based grading framework with a new tensor-based grading method that enables us to model patterns of local deformation using a log-Euclidean metric. We evaluate our new method in a study of the putamen for the classification of patients with pre-manifest Huntington's disease and healthy controls. Our experiments show a substantial increase in classification accuracy (87.5 ± 0.5 vs. 81.3 ± 0.6) compared to the existing patch-based grading methods, and a good complement to putamen volume, which is a primary imaging-based marker for the study of Huntington's disease.
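The log-Euclidean metric mentioned above works by mapping symmetric positive-definite (SPD) deformation tensors into the vector space of symmetric matrices via the matrix logarithm, where Euclidean operations become valid. A closed-form sketch for the 2×2 case (the actual method operates on tensors from 3D deformation fields; the example matrix below is made up for illustration):

```python
import math

def logm_spd2(m):
    """Matrix logarithm of a 2x2 symmetric positive-definite matrix [[a, b], [b, c]]."""
    a, b, c = m[0][0], m[0][1], m[1][1]
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc   # eigenvalues, both > 0 for SPD input
    if abs(b) < 1e-12:                          # already diagonal
        return [[math.log(a), 0.0], [0.0, math.log(c)]]
    u, w = l1 - c, b                            # unnormalized eigenvector for l1
    n = math.hypot(u, w)
    u, w = u / n, w / n
    g1, g2 = math.log(l1), math.log(l2)
    # log(M) = g1 * v1 v1^T + g2 * v2 v2^T with v1 = (u, w), v2 = (-w, u)
    return [[g1 * u * u + g2 * w * w, (g1 - g2) * u * w],
            [(g1 - g2) * u * w, g1 * w * w + g2 * u * u]]

# Toy deformation tensor; its log entries could serve as grading features.
log_t = logm_spd2([[1.2, 0.1], [0.1, 0.9]])
```

Distances between patches are then plain Euclidean distances between the vectorized log-tensors.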
  • Segmentation of Five Components in Four Chamber View of Fetal Echocardiography

    00:10:21
    0 views
    It is clinically significant to segment five components in the four chamber view of fetal echocardiography, namely the four chambers and the descending aorta. This study completes the multi-disease segmentation and multi-class semantic segmentation of the five key components. After comparing the performance of DeeplabV3+ and U-net in the segmentation task, we choose the former as it provides accurate segmentation in the other six disease groups as well as the normal group. With the data proportion balance strategy, the segmentation performance of the Ebstein's anomaly group is improved significantly in spite of its small proportion. We empirically evaluate this strategy in terms of mean IoU (mIoU), cross entropy loss (CE) and Dice score (DS). The proportion of the atrial abnormality and ventricular abnormality in the entire data set is increased, so that the model learns more semantics. We simulate multiple scenes with uncertain attitudes of the fetus, which provides rich multi-scene semantic information and enhances the robustness of the model.
  • Unsupervised Learning for Compressed Sensing MRI Using CycleGAN

    00:13:50
    0 views
    Recently, deep learning based approaches for accelerated MRI have been extensively studied due to their high performance and reduced run time complexity. The existing deep learning methods for accelerated MRI are mostly supervised methods, where matched subsampled k-space data and fully sampled k-space data are necessary. However, it is hard to acquire fully sampled k-space data because of the long scan time of MRI. Therefore, unsupervised methods without matched label data have become a very important research topic. In this paper, we propose an unsupervised method using a novel cycle-consistent generative adversarial network (cycleGAN) with a single deep generator. We show that the proposed cycleGAN architecture can be derived from a dual formulation of optimal transport with the penalized least squares cost. The results of experiments show that our method can remove aliasing patterns in downsampled MR images without the matched reference data.
  • Coronary Wall Segmentation in CCTA Scans via a Hybrid Net with Contours Regularization

    00:16:26
    0 views
    Providing closed and well-connected boundaries of the coronary arteries is essential to assist cardiologists in the diagnosis of coronary artery disease (CAD). Recently, several deep learning-based methods have been proposed for boundary detection and segmentation in medical images. However, when applied to coronary wall detection, they tend to produce disconnected and inaccurate boundaries. In this paper, we propose a novel boundary detection method for coronary arteries that focuses on the continuity and connectivity of the boundaries. In order to model the spatial continuity of consecutive images, our hybrid architecture takes a volume (i.e., a segment of the coronary artery) as input and detects the boundary of the target slice (i.e., the central slice of the segment). Then, to ensure closed boundaries, we propose a contour-constrained weighted Hausdorff distance loss. We evaluate our method on a dataset of coronary CT angiography scans with curved planar reconstruction (CCTA-CPR) of the arteries (i.e., cross-sections) from 34 patients. Experimental results show that our method can produce smooth, closed boundaries, outperforming the state of the art in accuracy.
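The loss above builds on the Hausdorff distance between boundaries. A minimal sketch of the plain symmetric Hausdorff distance between two contours, omitting the weighting and contour constraint that make the paper's version a differentiable loss:

```python
import math

def hausdorff(contour_a, contour_b):
    """Symmetric Hausdorff distance between two contours given as point lists:
    the worst-case distance from any point on one contour to the other."""
    def directed(p, q):
        return max(min(math.dist(a, b) for b in q) for a in p)
    return max(directed(contour_a, contour_b), directed(contour_b, contour_a))

pred = [(0.0, 0.0), (1.0, 0.0)]    # toy predicted boundary points
truth = [(0.0, 0.0), (0.0, 2.0)]   # toy ground-truth boundary points
d = hausdorff(pred, truth)          # worst-case boundary deviation
```

Penalizing this worst-case deviation (rather than an average) is what discourages gaps and stray fragments in the predicted wall.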
  • Patient-Specific Finetuning of Deep Learning Models for Adaptive Radiotherapy in Prostate CT

    00:13:36
    0 views
    Contouring of the target volume and Organs-At-Risk (OARs) is a crucial step in radiotherapy treatment planning. In an adaptive radiotherapy setting, updated contours need to be generated based on daily imaging. In this work, we leverage personalized anatomical knowledge accumulated over the treatment sessions to improve the segmentation accuracy of a pre-trained Convolutional Neural Network (CNN) for a specific patient. We investigate a transfer learning approach, fine-tuning the baseline CNN model to a specific patient based on imaging acquired in earlier treatment fractions. The baseline CNN model is trained on a prostate CT dataset of 379 patients from one hospital. This model is then fine-tuned and tested on an independent dataset of 18 patients from another hospital, each having 7 to 10 daily CT scans. For the prostate, seminal vesicles, bladder and rectum, the model fine-tuned on each specific patient achieved a Mean Surface Distance (MSD) of 1.64 ± 0.43 mm, 2.38 ± 2.76 mm, 2.30 ± 0.96 mm, and 1.24 ± 0.89 mm, respectively, which was significantly better than the baseline model. The proposed personalized model adaptation is therefore very promising for clinical implementation in the context of adaptive radiotherapy of prostate cancer.
  • Single-Molecule Localization Microscopy Reconstruction Using Noise2Noise for Super-Resolution Imaging of Actin Filaments

    00:13:14
    0 views
    Single-molecule localization microscopy (SMLM) is a super-resolution imaging technique developed to image structures smaller than the diffraction limit. This modality results in sparse and non-uniform sets of localized blinks that need to be reconstructed to obtain a super-resolution representation of a tissue. In this paper, we explore the use of the Noise2Noise (N2N) paradigm to reconstruct the SMLM images. Noise2Noise is an image denoising technique where a neural network is trained with only pairs of noisy realizations of the data instead of using pairs of noisy/clean images, as performed with Noise2Clean (N2C). Here we have adapted Noise2Noise to the 2D SMLM reconstruction problem, exploring different pair creation strategies (fixed and dynamic). The approach was applied to synthetic data and to real 2D SMLM data of actin filaments. This revealed that N2N can achieve reconstruction performances close to the Noise2Clean training strategy, without having access to the super-resolution images. This could open the way to further improvement in SMLM acquisition speed and reconstruction performance.
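A pair-creation strategy of the kind described above can be sketched by randomly splitting the set of localized blinks into two half-density realizations of the same structure; re-running the split each epoch would correspond to the "dynamic" strategy and reusing one split to the "fixed" one (the exact splitting procedure is an assumption, not taken from the paper):

```python
import random

def make_n2n_pair(localizations, rng):
    """Split blink localizations into two half-density realizations of the same
    underlying structure, usable as a Noise2Noise (input, target) pair."""
    shuffled = list(localizations)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

rng = random.Random(42)
blinks = [(i * 0.7 % 5.0, i * 1.3 % 5.0) for i in range(100)]  # toy 2D localizations
pair_a, pair_b = make_n2n_pair(blinks, rng)  # re-split each epoch => "dynamic" pairs
```

Both halves are noisy, sparse renderings of the same filaments, which is all the N2N loss requires.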
  • Zero-Shot Adaptation to Simulate 3D Ultrasound Volume by Learning a Multilinear Separable 2D Convolutional Neural Network

    00:11:28
    0 views
    Ultrasound imaging relies on sensing of waves returned after interaction with scattering media present in biological tissues. An acoustic pulse transmitted by a single-element transducer dilates along the direction of propagation, and is observed as a 1D point spread function (PSF) in A-mode imaging. In 2D B-mode imaging, a 1D array of transducer elements is used and dilation of the pulse is also observed along the direction of these elements, manifesting a 2D PSF. In 3D B-mode imaging using a 2D matrix of transducer elements, a 3D PSF is observed. Fast simulation of a 3D B-mode volume by way of convolutional transformer networks to learn the PSF family would require a training dataset of true 3D volumes, which are not readily available. Here we start in Stage 0 with a simple physics-based simulator in 3D to generate speckles from a tissue echogenicity map. Next, in Stage 1, we learn a multilinear separable 2D convolutional neural network using 1D convolutions to model the PSF family along the direction of ultrasound propagation and orthogonal to it. This is adversarially trained using a visual Turing test on 2D ultrasound images. The PSF being circularly symmetric about an axis parallel to the direction of wave propagation, we simulate the full 3D volume by alternating the direction of 1D convolution along the 2 axes that are mutually orthogonal to the direction of wave propagation. We validate performance using a visual Turing test with experts and distribution similarity measures.
  • Tracking of Particles in Fluorescence Microscopy Images Using a Spatial Distance Model for Brownian Motion

    00:14:44
    0 views
    Automatic tracking of particles in fluorescence microscopy images is an important task to quantify the dynamic behavior of subcellular and virus structures. We present a novel iterative approach for tracking multiple particles in microscopy data based on a spatial distance model derived under Brownian motion. Our approach exploits the information that the most likely object position at the next time point is at a certain distance from the current position. Information from all particles in a temporal image sequence is combined and all motion-specific parameters are automatically estimated. Experiments using data of the Particle Tracking Challenge as well as real live cell microscopy data displaying hepatocyte growth factor receptors and virus structures show that our approach outperforms previous methods.
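The spatial distance model can be made concrete: for 2D Brownian motion, the displacement distance follows a Rayleigh distribution, whose mode lies at a nonzero distance from the current position, which is exactly the information the abstract says the method exploits. A minimal sketch, where a greedy highest-likelihood assignment stands in for the paper's full iterative multi-particle scheme:

```python
import math

def displacement_likelihood(r, diff_coef, dt):
    """Likelihood of moving distance r under 2D Brownian motion: a Rayleigh
    density with scale sigma^2 = 2 * D * dt, peaked at a *nonzero* distance."""
    s2 = 2.0 * diff_coef * dt
    return (r / s2) * math.exp(-r * r / (2.0 * s2))

def link(current, candidates, diff_coef, dt):
    """Greedy linking: pick the detection with the highest motion likelihood."""
    return max(candidates,
               key=lambda c: displacement_likelihood(math.dist(current, c), diff_coef, dt))

current = (0.0, 0.0)
candidates = [(0.0, 0.0), (0.7, 0.0), (3.0, 0.0)]
best = link(current, candidates, diff_coef=0.25, dt=1.0)
```

Note that the candidate sitting exactly at the current position gets likelihood 0: the most likely next position lies at distance sigma, not zero, so the mid-distance candidate is chosen.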
  • Improved Simultaneous Multi-Slice Imaging for Perfusion Cardiac MRI Using Outer Volume Suppression and Regularized Reconstruction

    00:11:52
    0 views
    Perfusion cardiac MRI (CMR) is a radiation-free and non-invasive imaging tool which has gained increasing interest for the diagnosis of coronary artery disease. However, resolution and coverage are limited in perfusion CMR due to the necessity of single snap-shot imaging during the first-pass of a contrast agent. Simultaneous multi-slice (SMS) imaging has the potential for high acceleration rates with minimal signal-to-noise ratio (SNR) loss. However, its utility in CMR has been limited to moderate acceleration factors due to residual leakage artifacts from the extra-cardiac tissue such as the chest and the back. Outer volume suppression (OVS) with leakage-blocking reconstruction has been used to enable higher acceleration rates in perfusion CMR, but suffers from higher noise amplification. In this study, we sought to augment OVS-SMS/MB imaging with a regularized leakage-blocking reconstruction algorithm to improve image quality. Results from highly-accelerated perfusion CMR show that the method improves upon SMS-SPIRiT in terms of leakage reduction and split slice (ss)-GRAPPA in terms of noise mitigation.
  • Deep Feature Disentanglement Learning for Bone Suppression in Chest Radiographs

    00:10:16
    0 views
    Suppression of bony structures in chest radiographs is essential for many computer-aided diagnosis tasks. In this paper, we propose a Disentanglement AutoEncoder (DAE) for bone suppression. As the projections of the 3D structures of bones and soft tissues overlap in 2D radiographs, their features are interwoven and need to be disentangled for effective bone suppression. Our DAE progressively separates the features of soft tissues from those of the bony structures during the encoder phase and reconstructs the soft-tissue image based on the disentangled soft-tissue features. Bone segmentation can be performed concurrently using the separated bony features through a separate multi-task branch. By training the model with multi-task supervision, we explicitly encourage the autoencoder to pay more attention to the locations of bones in order to avoid loss of soft-tissue information. The proposed method is shown to be effective in suppressing bone structures from chest radiographs with very few visual artifacts.
  • Learning to Segment Vessels from Poorly Illuminated Fundus Images

    00:14:47
    0 views
    Segmentation of retinal vessels is important for determining various disease conditions, but deep learning approaches have been limited by the unavailability of large, publicly available, annotated datasets. This paper addresses the problem and analyses the performance of the U-Net architecture on the DRIVE and RIM-ONE datasets. A different approach to data augmentation using vignetting masks is presented to create more annotated fundus data. Unlike most prior efforts that attempt transforming poor images to match the images in a training set, our approach takes better quality images (which have good expert labels) and transforms them to resemble poor quality target images. We apply substantial vignetting masks to the DRIVE dataset and then train a U-Net on the resulting lower quality images (using the corresponding expert label data). We quantitatively show that our approach leads to better generalized networks, and we show qualitative performance improvements on RIM-ONE images (which lack expert labels).
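The vignetting-mask augmentation can be sketched as a radial darkening applied to a good-quality image; the quadratic falloff and the `strength` parameter below are illustrative assumptions, not the actual masks used in the paper:

```python
import math

def apply_vignette(image, strength=0.8):
    """Darken pixels radially from the centre to mimic poorly illuminated fundus
    images. `image` is a list of rows of grayscale values in [0, 1]."""
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r_max = math.hypot(cy, cx)
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, v in enumerate(row):
            r = math.hypot(y - cy, x - cx) / r_max   # normalized radius in [0, 1]
            new_row.append(v * (1.0 - strength * r * r))
        out.append(new_row)
    return out

bright = [[1.0] * 5 for _ in range(5)]       # toy uniform image
dark = apply_vignette(bright, strength=0.8)  # centre kept, corners darkened
```

Because the geometry is known and the expert labels are untouched, every vignetted copy is a new, still-annotated training image.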
  • SynergyNet: A Fusion Framework for Multiple Sclerosis Brain MRI Segmentation with Local Refinement

    00:12:06
    0 views
    The high irregularity of multiple sclerosis (MS) lesions in size and number often proves difficult for automated systems on the task of MS lesion segmentation. Current state-of-the-art MS segmentation algorithms employ either only a global perspective or just a patch-based local perspective. Although global image segmentation can obtain good segmentation for medium to large lesions, its performance on smaller lesions lags behind. On the other hand, patch-based local segmentation disregards spatial information of the brain. In this work, we propose SynergyNet, a network segmenting MS lesions by fusing data from both global and local perspectives to improve segmentation across different lesion sizes. We achieve global segmentation by leveraging the U-Net architecture and implement local segmentation by augmenting U-Net with the Mask R-CNN framework. The sharing of lower layers between these two branches benefits end-to-end training and proves advantageous over a simple ensemble of the two frameworks. We evaluated our method on two separate datasets containing 765 and 21 volumes respectively. On the first dataset, our method improves the Dice score by 2.55% and the lesion true positive rate by 5.0% while reducing the false positive rate by over 20%; on the second dataset, it improves the Dice score and lesion true positive rate by 10% and 32% on average. Results suggest that our framework for fusing local and global perspectives is beneficial for segmentation of lesions with heterogeneous sizes.
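The Dice score used to evaluate the fusion can be computed from binary masks represented as sets of voxel indices; the union-based fusion below is a deliberately naive stand-in for SynergyNet's learned fusion, used only to make the metric concrete:

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks given as sets of voxel indices."""
    if not pred and not truth:
        return 1.0
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

global_mask = {(4, 4), (4, 5), (5, 4), (5, 5)}  # e.g. a medium lesion caught globally
local_mask = {(5, 5), (5, 6)}                   # e.g. a small lesion caught locally
fused = global_mask | local_mask                # naive union fusion (illustration only)
truth = {(4, 4), (4, 5), (5, 4), (5, 5), (5, 6)}
score_fused = dice_score(fused, truth)
```

In this toy case the fused mask recovers the small-lesion voxel that the global branch missed, which is the kind of gain the abstract reports.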
  • Extracting Axial Depth and Trajectory Trend Using Astigmatism, Gaussian Fitting, and CNNs for Protein Tracking

    00:10:07
    0 views
    Accurate analysis of vesicle trafficking in live cells is challenging for a number of reasons: varying appearance, complex protein movement patterns, and imaging conditions. To allow fast image acquisition, we study how an astigmatism can be utilized to obtain additional information that could make tracking more robust. We present two approaches for measuring the z position of individual vesicles. Firstly, Gaussian curve fitting with CNN-based denoising is applied to infer the absolute depth around the focal plane of each localized protein. We demonstrate that adding denoising yields more accurate estimation of depth while preserving the overall structure of the localized proteins. Secondly, we investigate whether a custom CNN architecture can predict the axial trajectory trend. We demonstrate that this method performs well on calibration bead data without the need for denoising. By incorporating the obtained depth information into a trajectory analysis, we demonstrate the potential improvement in vesicle tracking.
  • Predicting Longitudinal Cognitive Scores Using Baseline Imaging and Clinical Variables

    00:15:04
    0 views
    Predicting the future course of a disease with limited information is an essential but challenging problem in health care. For older adults, especially the ones suffering from Alzheimer's disease, accurate prediction of their longitudinal trajectories of cognitive decline can facilitate appropriate prognostic clinical action. Increasing evidence has shown that longitudinal brain imaging data can aid in the prediction of cognitive trajectories. However, in many cases, only a single (baseline) measurement from imaging is available for prediction. We propose a novel model for predicting the trajectory of cognition, using only a baseline measurement, by leveraging the temporal dependence in cognition. On both a synthetic dataset and a real-world dataset, we demonstrate that our model is superior to prior approaches in predicting cognition trajectory over the next five years. We show that the model's ability to capture nonlinear interaction between features leads to improved performance. Further, the proposed model achieved significantly improved trajectory prediction in subjects at higher risk of cognitive decline (those with genetic risk and worse clinical profiles at baseline), highlighting its clinical utility.
  • Adversarial-Based Domain Adaptation Networks for Unsupervised Tumour Detection in Histopathology

    00:13:51
    0 views
    Developing effective deep learning models for histopathology applications is challenging, as the performance depends on large amounts of labelled training data, which is often unavailable. In this work, we address this issue by leveraging previously annotated histopathology images from unrelated source domains to build a model for the unlabelled target domain. Specifically, we propose the adversarial-based domain adaptation networks (ABDA-Net) for performing the tumour detection task in histopathology in a purely unsupervised manner. This methodology successfully promoted the alignment of the source and target feature distributions among independent datasets of three tumour types - Breast, Lung and Colon - to achieve an improvement of at least 17.51% in accuracy and 18.22% in area under the curve (AUC) when compared to a classifier trained on the source data only.
  • Learning to Detect Brain Lesions from Noisy Annotations

    00:09:06
    0 views
    Supervised training of deep neural networks in medical imaging applications relies heavily on expert-provided annotations. These annotations, however, are often imperfect, as voxel-by-voxel labeling of structures on 3D images is difficult and laborious. In this paper, we focus on one common type of label imperfection, namely, false negatives. Focusing on brain lesion detection, we propose a method to train a convolutional neural network (CNN) to segment lesions while simultaneously improving the quality of the training labels by identifying false negatives and adding them to the training labels. To identify lesions missed by annotators in the training data, our method makes use of the 1) CNN predictions, 2) prediction uncertainty estimated during training, and 3) prior knowledge about lesion size and features. On a dataset of 165 scans of children with tuberous sclerosis complex from five centers, our method achieved better lesion detection and segmentation accuracy than the baseline CNN trained on the noisy labels, and than several alternative techniques.
  • Multiple Instance Learning Via Deep Hierarchical Exploration for Histology Image Classification

    00:11:54
    0 views
    We present a fast hierarchical method to detect the presence of cancerous tissue in histological images. The image is not examined in detail everywhere but only inside several small regions of interest, called glimpses. The final classification is done by aggregating classification scores from a CNN on leaf glimpses at the highest resolution. Unlike in existing attention-based methods, the glimpses form a tree structure, with low-resolution glimpses determining the locations of several higher-resolution glimpses using weighted sampling and a CNN approximation of the expected scores. We show that it is possible to perform the classification with just a small number of glimpses, leading to an important speedup with only a small performance deterioration. Learning is possible using image labels only, as in the multiple instance learning (MIL) setting.
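The weighted sampling of child glimpses can be sketched as drawing locations with probability given by a softmax over the CNN-approximated expected scores; the softmax choice and sampling with replacement are assumptions for illustration:

```python
import math
import random

def glimpse_probs(scores):
    """Softmax over the CNN-approximated expected scores of candidate child glimpses."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

rng = random.Random(0)
scores = [0.1, 2.5, 0.3, 1.8]          # expected scores of 4 candidate regions
probs = glimpse_probs(scores)
picks = rng.choices(range(len(scores)), weights=probs, k=3)  # weighted glimpse sampling
```

High-scoring regions are examined at higher resolution with high probability, while low-scoring regions are still occasionally explored, which keeps the glimpse count small.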
  • Multi-Branch Deformable Convolutional Neural Network with Label Distribution Learning for Fetal Brain Age Prediction

    00:13:15
    0 views
    MRI-based fetal brain age prediction is crucial for fetal brain development analysis and early diagnosis of congenital anomalies. The location and orientation of the fetal brain are randomly variable and disturbed by adjacent organs, imposing great challenges on fetal brain age prediction. To address this problem, we propose an effective framework based on a deformable convolutional neural network for fetal brain age prediction. Considering the insufficiency of data, we introduce label distribution learning (LDL), which is able to deal with the small-sample problem. We integrate the LDL information into our end-to-end network. Moreover, to fully utilize the complementary multi-view data of fetal brain MRI stacks, a multi-branch CNN is proposed to aggregate multi-view information. We evaluate our method on a fetal brain MRI dataset of 289 subjects and achieve promising age prediction performance.
