IEEE ISBI 2020 Virtual Conference April 2020


Showing 1 - 50 of 459
  • IEEE MemberUS $11.00
  • Society MemberUS $0.00
  • IEEE Student MemberUS $11.00
  • Non-IEEE MemberUS $15.00
Purchase
  • FGB: Feature Guidance Branch for Organ Detection in Medical Images

    00:11:57
    In this paper, we propose a novel method that detects and locates different abdominal organs in CT images. We 1) utilize the distributions of organs in CT images as a prior to guide object localization; 2) design an efficient guidance map and propose an interpretable scoring method, the feature guidance branch (FGB), to filter low-level feature maps by scoring them; 3) establish effective relations among feature maps through visualization to enhance interpretability. Evaluated on three public datasets, the proposed method outperforms the baseline model on all tasks by a remarkable margin. Furthermore, we conduct exhaustive visualization experiments to verify the rationality and effectiveness of our proposed model.
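The abstract describes scoring low-level feature maps against a spatial guidance map in order to filter them. A minimal pure-Python sketch of that idea; the function names and the correlation-style scoring rule are assumptions for illustration, not the authors' FGB:

```python
def score_feature_map(feature_map, guidance_map):
    """Score a feature map by its overlap with a spatial guidance prior.

    Both inputs are 2D lists of floats of the same shape; the score is the
    sum of element-wise products, normalised by the feature map's energy
    (higher = better aligned with the prior)."""
    total, norm = 0.0, 0.0
    for f_row, g_row in zip(feature_map, guidance_map):
        for f, g in zip(f_row, g_row):
            total += f * g
            norm += f * f
    return total / (norm ** 0.5 + 1e-8)

def filter_maps(feature_maps, guidance_map, keep=2):
    """Keep only the `keep` highest-scoring feature maps."""
    scored = sorted(feature_maps,
                    key=lambda m: -score_feature_map(m, guidance_map))
    return scored[:keep]
```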
  • Diffeomorphic Registration for Retinotopic Mapping Via Quasiconformal Mapping

    00:16:43
    The human visual cortex is organized into several functional regions/areas. Identifying these visual areas of the human brain (e.g., V1, V2, V4) is an important topic in neurophysiology and vision science. Retinotopic mapping via functional magnetic resonance imaging (fMRI) provides a non-invasive way of defining the boundaries of the visual areas. It is well known from neurophysiology studies that retinotopic mapping is diffeomorphic within each local area (i.e., locally smooth, differentiable, and invertible). However, due to the low signal-to-noise ratio of fMRI, the retinotopic maps from fMRI are often not diffeomorphic, making it difficult to delineate the boundaries of visual areas. The purpose of this work is to generate diffeomorphic retinotopic maps and improve the accuracy of the retinotopic atlas from fMRI measurements through a specifically designed registration procedure. Although existing cortical surface registration methods are very advanced, most have not fully utilized the features of retinotopic mapping. By considering those features, we formulate a mathematical model for registration and solve it with numerical methods. We compared our registration with several popular methods on synthetic data. The results demonstrate that the proposed registration is superior to conventional methods for the registration of retinotopic maps. Applying our method to a real retinotopic mapping dataset also resulted in much smaller registration errors.
  • SUNet: A Lesion Regularized Model for Simultaneous Diabetic Retinopathy and Diabetic Macular Edema Grading

    00:11:30
    Diabetic retinopathy (DR), a leading ocular disease, is often accompanied by the complication of diabetic macular edema (DME). However, most existing works aim only at DR grading and ignore DME diagnosis, whereas doctors perform both tasks simultaneously. In this paper, motivated by the advantages of multi-task learning for image classification, and to mimic the behavior of clinicians in the visual inspection of patients, we propose a feature Separation and Union Network (SUNet) for simultaneous DR and DME grading. Further, to improve the interpretability of the disease grading, a lesion regularizer is imposed to regularize our network. Specifically, given an image, our SUNet first extracts a common feature for DR and DME grading and lesion detection. Then a feature blending block is introduced that alternates between feature separation and feature union for task-specific feature extraction, where feature separation learns task-specific features for lesion detection and DR and DME grading, and feature union aggregates the features corresponding to lesion detection and DR and DME grading. In this way, we can filter out irrelevant features and leverage features of different but related tasks to improve the performance of each given task. The task-specific features of the same task at different feature separation steps are then concatenated for the prediction of each task. Extensive experiments on the very challenging IDRiD dataset demonstrate that our SUNet significantly outperforms existing methods in both DR and DME grading.
  • A Stem-Based Dissection of Inferior Fronto-Occipital Fasciculus with a Deep Learning Model

    00:15:06
    The aim of this work is to improve the virtual dissection of the inferior fronto-occipital fasciculus (IFOF) by combining a recent insight on white matter anatomy from ex-vivo dissection with a data-driven approach based on a deep learning model. Current methods of tract dissection are not robust with respect to false positives and neglect the neuroanatomical waypoints of a given tract, such as the stem. In this work we design a deep learning model to segment the stem of the IFOF and show how the dissection of the tract can be improved. The proposed method is validated on the Human Connectome Project dataset, in which expert neuroanatomists segmented the IFOF on multiple subjects. In addition, we compare the results to the most recent method in the literature for automatic tract dissection.
  • Tooth Segmentation and Labeling from Digital Dental Casts

    00:13:06
    This paper presents an approach to the automatic and accurate segmentation and identification of individual teeth from digital dental casts via deep graph convolutional neural networks. Instead of performing teeth-gingiva and inter-tooth segmentation in two separate phases, the proposed method enables the simultaneous segmentation and identification of the gingiva and teeth. We perform vertex-wise feature learning via the feature-steered graph convolutional neural network (FeaStNet) [1], which dynamically updates the mapping between convolutional filters and local patches from digital dental casts. The proposed framework handles the tightly intertwined segmentation and labeling tasks with a novel constraint on crown shape distribution and concave contours to remove ambiguous labeling of neighboring teeth. We further enforce smooth segmentation using the pairwise relationship in local patches to penalize rough and inaccurate region boundaries and to regularize the vertex-wise labeling during training. Qualitative and quantitative evaluations on digital dental casts obtained in clinical orthodontics demonstrate that the proposed method achieves efficient and accurate tooth segmentation and improves upon the state of the art.
  • Condensed U-Net (CU-Net): An Improved U-Net Architecture for Cell Segmentation Powered by 4x4 Max-Pooling Layers

    00:14:37
    Recently, the U-Net has been the dominant approach to cell segmentation in biomedical images due to its success in a wide range of image recognition tasks. However, recent studies have not focused enough on updating the architecture of the U-Net or designing specialized loss functions for bioimage segmentation. We show that the U-Net architecture can achieve better results with efficient architectural improvements. We propose a condensed encoder-decoder scheme that employs the 4x4 max-pooling operation and triple convolutional layers. The proposed network architecture is trained using a novel combined loss function specifically designed for bioimage segmentation. On the benchmark datasets from the Cell Tracking Challenge, the experimental results show that the proposed cell segmentation system outperforms the U-Net.
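The condensed scheme's key ingredient is the 4x4 max-pooling layer, which downsamples four-fold in a single stage instead of the usual 2x2. A minimal pure-Python sketch of non-overlapping k x k max-pooling (a hypothetical helper, not the paper's implementation):

```python
def max_pool(image, k=4):
    """Non-overlapping k x k max-pooling over a 2D list of numbers.

    A 4x4 window (k=4) halves the number of pooling stages needed
    compared with the usual 2x2 windows, condensing the encoder."""
    h, w = len(image), len(image[0])
    return [[max(image[r + i][c + j] for i in range(k) for j in range(k))
             for c in range(0, w - k + 1, k)]
            for r in range(0, h - k + 1, k)]
```

For example, a single 4x4 tile collapses to its maximum value, and an 8x8 input collapses to a 2x2 output in one stage.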
  • Supervised Augmentation: Leverage Strong Annotation for Limited Data

    00:13:18
    A previously less exploited way to approach the data scarcity challenge in medical imaging classification is to leverage strong annotation when available data is limited but annotation resources are plentiful. Strong annotation at a finer level, such as the region of interest, carries more information than simple image-level annotation and should therefore improve the performance of a classifier. In this work, we explored utilizing strong annotation by developing a new data augmentation method, which improves over common data augmentation (random crop and cutout) by significantly enriching augmentation variety and ensuring valid labels given guidance from strong annotation. Experiments on a real-world application of classifying gastroscopic images demonstrated that our method outperformed state-of-the-art methods by a large margin at all settings of data scarcity. Additionally, our method is flexible enough to integrate with other CNN improvement techniques and to handle data with mixed annotation.
  • Combining Multimodal Information for Metal Artefact Reduction: An Unsupervised Deep Learning Framework

    00:15:05
    Metal artefact reduction (MAR) techniques aim at removing metal-induced noise from clinical images. In Computed Tomography (CT), supervised deep learning approaches have been shown to be effective but limited in generalisability, as they mostly rely on synthetic data. In Magnetic Resonance Imaging (MRI), by contrast, no method has yet been introduced to correct the susceptibility artefact, which is still present even in MAR-specific acquisitions. In this work, we hypothesise that a multimodal approach to MAR would improve both CT and MRI. Given the different appearance of the artefact in the two modalities, their complementary information can compensate for the corrupted signal in either modality. We thus propose an unsupervised deep learning method for multimodal MAR. We introduce the use of Locally Normalised Cross Correlation as a loss term to encourage the fusion of multimodal information. Experiments show that our approach favours a smoother correction in CT while promoting signal recovery in MRI.
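The loss term mentioned here builds on normalised cross correlation. A minimal pure-Python sketch of the global normalised cross correlation between two flat signals; the paper's loss applies a locally windowed variant of this same quantity, so this is an illustration of the core measure, not the authors' loss:

```python
def ncc(x, y):
    """Normalised cross correlation of two equal-length flat signals.

    Returns a value in [-1, 1]: 1 for perfectly linearly correlated
    signals, -1 for anti-correlated, 0 for uncorrelated or degenerate."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0
```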
  • Localization of Critical Findings in Chest X-Ray without Local Annotations Using Multi-Instance Learning

    00:14:48
    The automatic detection of critical findings in chest X-rays (CXR), such as pneumothorax, is important for assisting radiologists in their clinical workflow, for example in triaging time-sensitive cases and screening for incidental findings. While deep learning (DL) models have become a promising predictive technology with near-human accuracy, they commonly suffer from a lack of explainability, which is an important aspect of the clinical deployment of DL models in the highly regulated healthcare industry. For example, localizing critical findings in an image is useful for explaining the predictions of DL classification algorithms. While there is a host of joint classification and localization methods in computer vision, state-of-the-art DL models require locally annotated training data in the form of pixel-level labels or bounding box coordinates. In the medical domain, this requires an expensive amount of manual annotation by medical experts for each critical finding, which becomes a major barrier to training models that can rapidly scale to various findings. In this work, we address these shortcomings with an interpretable DL algorithm based on multi-instance learning that jointly classifies and localizes critical findings in CXR without the need for local annotations. We show competitive classification results on three different critical findings (pneumothorax, pneumonia, and pulmonary edema) from three different CXR datasets.
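Multi-instance learning classifies an image from patch-level scores and localises the finding via the most responsible patch, which is what removes the need for local annotations. A minimal sketch with max pooling as the MIL pooling operator (one common choice, not necessarily the paper's exact operator):

```python
def mil_predict(patch_scores):
    """Multi-instance prediction for one image.

    The image-level score is the maximum patch (instance) score, and the
    arg-max patch index localises the finding, so only image-level labels
    are needed at training time."""
    best = max(range(len(patch_scores)), key=lambda i: patch_scores[i])
    return patch_scores[best], best
```

For example, patch scores `[0.1, 0.8, 0.3]` yield an image score of 0.8 with the finding localised to patch 1.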
  • Exploiting Uncertain Deep Networks for Data Cleaning in Digital Pathology

    00:12:47
    With the advent of digital pathology, there has been increasing interest in providing pathologists with machine learning tools, often based on deep learning, for faster and more robust image assessment. Nonetheless, the accuracy of these tools relies on the generation of large training sets of pre-labeled images. This is typically a challenging and cumbersome process, requiring extensive pre-processing to remove spurious samples that may lead the training to failure. Unlike their plain counterparts, which tend to provide overconfident decisions and cannot identify samples they have not been specifically trained for, Bayesian Convolutional Neural Networks provide a reliable measure of classification uncertainty. In this study, we exploit this inherent capability to automate the data cleaning phase of histopathological image assessment. Our experiments on a case study of colorectal cancer image classification demonstrate that our approach can boost the accuracy of downstream classification by at least 15%.
  • DeepSEED: 3D Squeeze-and-Excitation Encoder-Decoder Convolutional Neural Networks for Pulmonary Nodule Detection

    00:07:10
    Pulmonary nodule detection plays an important role in lung cancer screening with low-dose computed tomography (CT) scans. It remains challenging to build nodule detection deep learning models with good generalization performance due to unbalanced positive and negative samples. To overcome this problem and further improve state-of-the-art nodule detection methods, we develop a novel deep 3D convolutional neural network with an encoder-decoder structure in conjunction with a region proposal network. In particular, we utilize a dynamically scaled cross entropy loss to reduce the false positive rate and combat the sample imbalance problem associated with nodule detection. We adopt the squeeze-and-excitation structure to learn effective image features and utilize the inter-dependency information of different feature maps. We have validated our method on publicly available CT scans with manually labelled ground truth from the LIDC/IDRI dataset and its subset LUNA16 with thinner slices. Ablation studies and experimental results demonstrate that our method outperforms state-of-the-art nodule detection methods by a large margin.
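A "dynamically scaled cross entropy loss" down-weights easy examples so that the rare positives dominate training. A minimal sketch in the style of the focal loss; the α/γ parameterisation below is an assumption for illustration, not necessarily the authors' exact formulation:

```python
import math

def scaled_ce_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal-style dynamically scaled cross entropy for one prediction.

    p is the predicted probability of the positive class, y is 0 or 1.
    The (1 - pt)**gamma factor shrinks the loss of well-classified
    (easy) examples, countering foreground/background imbalance."""
    pt = p if y == 1 else 1.0 - p
    weight = alpha if y == 1 else 1.0 - alpha
    return -weight * (1.0 - pt) ** gamma * math.log(max(pt, 1e-12))
```

An easy, confidently correct positive (p = 0.9) contributes far less loss than a hard one (p = 0.1), which is the intended rebalancing effect.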
  • Leveraging Self-supervised Denoising for Image Segmentation

    00:14:52
    Deep learning (DL) has arguably emerged as the method of choice for the detection and segmentation of biological structures in microscopy images. However, DL typically needs copious amounts of annotated training data, which for biomedical problems is usually not available and is excessively expensive to generate. Additionally, tasks become harder in the presence of noise, requiring even more high-quality training data. Hence, we propose to use denoising networks to improve the performance of other DL-based image segmentation methods. More specifically, we present ideas on how state-of-the-art self-supervised CARE networks can improve cell/nuclei segmentation in microscopy data. Using two state-of-the-art baseline methods, U-Net and StarDist, we show that our ideas consistently improve the quality of the resulting segmentations, especially when only limited training data for noisy micrographs are available.
  • Characterizing the Propagation Pattern of Neurodegeneration in Alzheimer's Disease by Longitudinal Network Analysis

    00:13:14
    Converging evidence shows that Alzheimer's disease (AD) is a neurodegenerative disease that represents a disconnection syndrome, whereby a large-scale brain network is progressively disrupted by one or more neuropathological processes. However, the mechanism by which pathological entities spread across a brain network is largely unknown. Since pathological burden may propagate trans-neuronally, we propose to characterize the propagation pattern of neuropathological events spreading across relevant brain networks, as regulated by the organization of the network. Specifically, we present a novel mixed-effect model to quantify the relationship between longitudinal network alterations and neuropathological events observed at specific brain regions, in which the topological distance to hub nodes, high-risk AD genetics, and environmental factors (such as education) are considered as predictor variables. Similar to many cross-sectional studies, we find that AD-related neuropathology preferentially affects hub nodes. Furthermore, our statistical model provides strong evidence that abnormal neuropathological burden diffuses from hub nodes to non-hub nodes in a prion-like manner, whereby the propagation pattern follows the intrinsic organization of the large-scale brain network.
  • Robust Automatic Multiple Landmark Detection

    00:13:48
    Reinforcement learning (RL) has proven to be a powerful tool for automatic single landmark detection in 3D medical images. In this work, we extend RL-based single landmark detection to detect multiple landmarks simultaneously in the presence of missing data, in the form of defaced 3D head MR images. Our proposed technique is both time-efficient and robust to missing data. We demonstrate that adding auxiliary landmarks can improve the accuracy and robustness of estimating primary target landmark locations. The multi-agent deep Q-network (DQN) approach described here detects landmarks within 2 mm, even in the presence of missing data.
  • Multi-Scale Unrolled Deep Learning Framework for Accelerated Magnetic Resonance Imaging

    00:10:23
    Accelerating data acquisition in magnetic resonance imaging (MRI) has been of perennial interest due to its prohibitively slow data acquisition process. Recent trends in accelerating MRI employ data-centric deep learning frameworks due to their fast inference time and "one-parameter-fits-all" principle, unlike traditional model-based acceleration techniques. Unrolled deep learning frameworks, which combine deep priors with model knowledge, are robust compared to naive deep learning-based frameworks. In this paper, we propose a novel multi-scale unrolled deep learning framework that learns deep image priors through a multi-scale CNN and is combined with an unrolled framework to enforce data consistency and model knowledge. Essentially, this framework combines the best of both learning paradigms: model-based and data-centric. The proposed method is verified in several experiments on numerous datasets.
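In an unrolled framework, learned priors alternate with model-based data-consistency updates. A minimal pure-Python sketch of one gradient data-consistency step, x ← x − η·Aᵀ(Ax − y), on toy real-valued matrices; in the actual method a learned denoising prior would be applied between such steps, and A would be the MRI encoding operator (names here are illustrative):

```python
def data_consistency_step(x, A, y, step=0.1):
    """One unrolled data-consistency update: x <- x - step * A^T (A x - y).

    A is a matrix (list of rows), x the current image estimate (flat list),
    y the measured data (flat list). This is the model-knowledge half of
    an unrolled iteration; a learned prior would follow it."""
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]      # forward model
    resid = [axi - yi for axi, yi in zip(Ax, y)]                  # data residual
    At_resid = [sum(A[i][j] * resid[i] for i in range(len(A)))    # adjoint
                for j in range(len(x))]
    return [xi - step * r for xi, r in zip(x, At_resid)]
```

With A the identity, one step with step=0.5 simply moves the estimate halfway toward the measurements.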
  • CEUS-Net: Lesion Segmentation in Dynamic Contrast-Enhanced Ultrasound with Feature-Reweighted Attention Mechanism

    00:11:51
    Contrast-enhanced ultrasound (CEUS) has become a popular clinical imaging technique for the dynamic visualization of the tumor microvasculature. Due to the heterogeneous intratumor vessel distribution and ambiguous lesion boundaries, automatic tumor segmentation in CEUS sequences is challenging. To overcome these difficulties, we propose a novel network, CEUS-Net, a U-Net infused with our designed feature-reweighted dense blocks. Specifically, CEUS-Net incorporates dynamic channel-wise feature re-weighting into the dense block to adapt the importance of learned lesion-relevant features. In addition, to efficiently utilize the dynamic characteristics of the CEUS modality, our model attempts to learn spatial-temporal features encoded in diverse enhancement patterns using a multichannel convolutional module. CEUS-Net has been tested on tumor segmentation tasks on CEUS images of breast and thyroid lesions, achieving Dice indices of 0.84 and 0.78 for breast and thyroid segmentation, respectively.
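Channel-wise feature re-weighting squeezes each channel to a summary statistic and rescales the channels by derived weights. A heavily simplified pure-Python sketch of the idea; the learned fully connected gating layers of the real block are replaced here by a softmax over channel means, purely for illustration, and nothing below is the authors' code:

```python
import math

def channel_reweight(feature_maps):
    """Squeeze-and-excitation style channel re-weighting (simplified).

    Each channel (flat list of activations) is squeezed to its global
    average; the averages are turned into per-channel weights with a
    softmax stand-in for the learned gate; channels are then rescaled."""
    means = [sum(ch) / len(ch) for ch in feature_maps]   # squeeze
    exps = [math.exp(m) for m in means]
    total = sum(exps)
    weights = [e / total for e in exps]                  # gate (stand-in)
    return [[w * v for v in ch] for w, ch in zip(weights, feature_maps)]
```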
  • Informative Retrieval Framework for Histopathology Whole Slides Images Based on Deep Hashing Network

    00:11:00
    Histopathology image retrieval is an emerging application in computer-aided cancer diagnosis. However, current retrieval methods, especially those based on deep hashing, pay little attention to the characteristics of histopathology whole slide images (WSIs): the retrieved results are occasionally dominated by similar images from a few WSIs, and the retrieval database cannot be sufficiently utilized. To solve these issues, we propose an informative retrieval framework based on a deep hashing network. Specifically, a novel loss function for the hashing network and a retrieval strategy are designed, which contribute more informative retrieval results without reducing retrieval precision. The proposed method was verified on the ACDC-LungHP dataset and compared with the state-of-the-art method. The experimental results demonstrate the effectiveness of our method for retrieval from a large-scale database of histopathology whole slide images.
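Once a deep hashing network has mapped images to binary codes, retrieval reduces to comparing codes by Hamming distance. A minimal sketch of that retrieval step; the hashing network that produces the codes is omitted, and the names are illustrative:

```python
def hamming(code_a, code_b):
    """Hamming distance between two binary hash codes (lists of 0/1)."""
    return sum(a != b for a, b in zip(code_a, code_b))

def retrieve(query, database, k=3):
    """Indices of the k database codes closest to the query code."""
    order = sorted(range(len(database)),
                   key=lambda i: hamming(query, database[i]))
    return order[:k]
```

For example, with a query code `[0,0,0,0]`, an exact match in the database is ranked first, then codes at Hamming distance 1, and so on.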
  • Improving Lung Nodule Detection with Learnable Non-Maximum Suppression

    00:14:34
    Current lung nodule detection methods generate several candidate regions per nodule, so a Non-Maximum Suppression (NMS) algorithm is required to select a single region per nodule while eliminating the redundant ones. GossipNet is a 1D neural network (NN) for NMS that can learn the NMS parameters rather than relying on handcrafted ones. However, GossipNet does not take advantage of image features to learn NMS. We use Faster R-CNN with ResNet18 for candidate region detection and present FeatureNMS --- a neural network that provides additional image features to the input of GossipNet, which result from a transformation over the voxel intensities of each candidate region in the CT image. Experiments indicate that FeatureNMS improves nodule detection by 2.33% and 0.91%, on average, when compared to traditional NMS and the original GossipNet, respectively.
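For reference, the traditional greedy NMS baseline that GossipNet and FeatureNMS learn to replace can be sketched in a few lines (2D boxes for brevity; nodule detection would use 3D regions, and the helper names are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, then drop any remaining
    box whose IoU with a kept box exceeds `thresh`. Returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

The hard IoU threshold here is exactly the handcrafted parameter that a learned NMS replaces with a data-driven decision.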
  • Diagnosing Colorectal Polyps in the Wild with Capsule Networks

    00:12:07
    Colorectal cancer, largely arising from precursor lesions called polyps, remains one of the leading causes of cancer-related death worldwide. Current clinical standards require the resection and histopathological analysis of polyps, because the accuracy and sensitivity of optical biopsy methods fall substantially below recommended levels. In this study, we design a novel capsule network architecture (D-Caps) to improve the viability of optical biopsy of colorectal polyps. Our proposed method introduces several technical novelties, including a novel capsule architecture with a capsule-average pooling (CAP) method that improves efficiency in large-scale image classification. We demonstrate improved results over the previous state-of-the-art convolutional neural network (CNN) approach by as much as 43%. This work provides an important benchmark on the new Mayo Polyp dataset, a significantly more challenging and larger dataset than in previous polyp studies, with results stratified across all available categories, imaging devices and modalities, and focus modes, to promote future work on AI-driven colorectal cancer screening systems.
  • Automated Quantitative Analysis of Microglia in Bright-Field Images of Zebrafish

    00:14:05
    Microglia are known to play important roles in brain development and homeostasis, yet their molecular regulation is still poorly understood. The identification of microglia regulators is facilitated by genetic screening and by studying the phenotypic effects in animal models. Zebrafish are ideal for this, as their external development and transparency allow in vivo imaging by bright-field microscopy at the larval stage. However, manual analysis of the images is very labor-intensive. Here we present a computational method to automate the analysis. It merges the optical sections into an all-in-focus image to simplify the subsequent steps of segmenting the brain region and detecting the contained microglia for quantification and downstream statistical testing. Evaluation on a fully annotated dataset of 50 zebrafish larvae shows that the method performs close to the human expert.
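Merging optical sections into an all-in-focus image (focus stacking) keeps, at each pixel, the value from the section that is sharpest there. A minimal sketch with the per-pixel sharpness supplied explicitly; a real pipeline would estimate sharpness from local contrast, and these names are illustrative, not the paper's code:

```python
def all_in_focus(stack, sharpness):
    """Merge optical sections into one all-in-focus image.

    `stack` is a list of 2D images (lists of lists) and `sharpness` a
    matching list of per-pixel sharpness maps; at each pixel the value
    from the section with the highest sharpness is kept."""
    h, w = len(stack[0]), len(stack[0][0])
    return [[stack[max(range(len(stack)),
                       key=lambda s: sharpness[s][r][c])][r][c]
             for c in range(w)]
            for r in range(h)]
```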
  • Weakly Supervised Lesion Co-Segmentation on CT Scans

    00:14:17
    Lesion segmentation in medical imaging serves as an effective tool for assessing tumor sizes and monitoring changes in growth. However, manual lesion segmentation is not only time-consuming but also expensive, and it requires expert radiologist knowledge. Therefore, many hospitals rely on a loose substitute, the Response Evaluation Criteria In Solid Tumors (RECIST). Although these annotations are far from precise, they are widely used throughout hospitals and are found in their picture archiving and communication systems (PACS). These annotations therefore have the potential to serve as a robust yet challenging means of weak supervision for training full lesion segmentation models. In this work, we propose a weakly supervised co-segmentation model that first generates pseudo-masks from the RECIST slices and then uses them as training labels for an attention-based convolutional neural network capable of segmenting common lesions from a pair of CT scans. To validate and test the model, we utilize the DeepLesion dataset, an extensive CT-scan lesion dataset that contains 32,735 PACS-bookmarked images. Extensive experimental results demonstrate the efficacy of our co-segmentation approach for lesion segmentation, with a mean Dice coefficient of 90.3%.
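The reported mean Dice coefficient measures the overlap between predicted and reference masks, 2|A∩B| / (|A| + |B|). A minimal sketch of the metric on flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (flat lists of 0/1).

    Returns 2|A∩B| / (|A| + |B|); defined as 1.0 when both masks are
    empty (perfect agreement on the absence of a lesion)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

Identical non-empty masks score 1.0, disjoint masks score 0.0, and partial overlap falls in between.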
  • Brain Lesion Detection Using a Robust Variational Autoencoder and Transfer Learning

    00:12:14
    Automated brain lesion detection from multi-spectral MR images can assist clinicians by improving sensitivity as well as specificity in lesion studies. Supervised machine learning methods have been successful in lesion detection. However, these methods usually rely on a large number of manually delineated images for specific imaging protocols and parameters, and they often do not generalize well to other imaging parameters and demographics. Most recently, unsupervised models such as autoencoders have become attractive for lesion detection, since they do not need access to manually delineated lesions. Despite the success of unsupervised models, using pre-trained models on an unseen dataset is still a challenge, because the new dataset may use different imaging parameters, demographics, and pre-processing techniques. Additionally, using a clinical dataset that has anomalies and outliers can make unsupervised learning challenging, since the outliers can unduly affect the performance of the learned models. These two difficulties make unsupervised lesion detection a particularly challenging task. The method proposed in this work addresses these issues with a two-prong strategy: (1) we use a robust variational autoencoder model that is based on robust statistics, specifically the β-divergence, and can learn from data with outliers; (2) we use a transfer-learning method for learning models across datasets with different characteristics. Our results on MRI datasets demonstrate that we can improve the accuracy of lesion detection by adopting robust statistical models and transfer learning in a variational autoencoder model.
  • Separation of Metabolite and Macromolecule Signals for 1H-MRSI Using Learned Nonlinear Models

    00:14:11
    This paper presents a novel method to reconstruct and separate metabolite and macromolecule (MM) signals in 1H magnetic resonance spectroscopic imaging (MRSI) data using learned nonlinear models. Specifically, deep autoencoder (DAE) networks were constructed and trained to learn the nonlinear low-dimensional manifolds where the metabolite and MM signals individually reside. A regularized reconstruction formulation is proposed to integrate the learned models with the signal encoding model to reconstruct and separate the metabolite and MM components, and an efficient algorithm was developed to solve the associated optimization problem. The performance of the proposed method has been evaluated using simulated and experimental 1H-MRSI data, demonstrating the efficient low-dimensional signal representation of the learned models and improved metabolite/MM separation over the standard parametric-fitting-based approach.
  • Longitudinal Analysis of Mild Cognitive Impairment Via Sparse Smooth Network and Attention-Based Stacked Bi-Directional Long Short Term Memory

    00:08:52
    Alzheimer's disease (AD) is a common irreversible neurodegenerative disease among the elderly. To identify the early stage of AD (i.e., mild cognitive impairment, MCI), many recent studies in the literature use only a single time point and ignore the informative multi-time-point information. We therefore propose a novel method that combines a multi-time sparse smooth network with a long short-term memory (LSTM) network to identify early and late MCI from multiple time points of resting-state functional magnetic resonance imaging (rs-fMRI). Specifically, we first construct the sparse smooth brain network from rs-fMRI data at different time points; then an attention-based stacked bidirectional LSTM is used to extract features and analyze them longitudinally. Finally, we classify them using a Softmax classifier. The proposed method is evaluated on the public Alzheimer's Disease Neuroimaging Initiative Phase II (ADNI-2) database and demonstrates impressive performance compared with state-of-the-art methods.
  • Deep Learning Based MPI System Matrix Recovery to Increase the Spatial Resolution of Reconstructed Images

    00:12:54
    Magnetic particle imaging (MPI) data is commonly reconstructed using a system matrix acquired in a time-consuming calibration measurement. The calibration approach has the important advantage over model-based reconstruction that it takes the complex particle physics as well as system imperfections into account. This benefit comes at the cost that the system matrix needs to be re-calibrated whenever the scan parameters, particle type or even the particle environment (e.g. viscosity or temperature) changes. One route to reducing the calibration time is sampling the system matrix at a subset of the spatial positions of the intended field-of-view and employing system matrix recovery. Recent approaches used compressed sensing (CS) and achieved subsampling factors up to 28 that still allowed reconstructing MPI images of sufficient quality. In this work, we propose a novel framework with a ComplexRGB loss and a 3d System Matrix Recovery Network (3d-SMRnet). We demonstrate that the 3d-SMRnet can recover a 3d system matrix with a subsampling factor of 64 in less than one minute. Furthermore, 3d-SMRnet outperforms CS in terms of system matrix quality, reconstructed image quality, and processing time. The advantage of our method is demonstrated by reconstructing open-access MPI datasets. The model is further shown to be capable of inferring system matrices for different particle types.
  • A Benchmark for Deep Learning Reconstruction Methods for Low-Dose Computed Tomography

    00:06:52
    Over the last few years, deep learning methods have significantly pushed the state-of-the-art results in applications like imaging, speech recognition and time series forecasting. This development is also starting to reach the field of computed tomography (CT). One of the main goals is reducing the potentially harmful radiation dose a patient is exposed to during the scan. Depending on the reduction strategy, such low-dose measurements can be noisier or starkly under-sampled, so achieving high-quality reconstructions with classical methods can be challenging. Recently, a number of deep learning approaches were introduced for this task. Up to now, most of them have only been tested on datasets with a handful of patients and under different setups, which makes them hard to compare. We introduce a comprehensive low-photon-count CT dataset, called LoDoPaB-CT, with over 40000 two-dimensional scan slices from more than 800 patients, and conduct an extensive study based on it. Popular deep learning approaches from various categories, like post-processing, learned iterative schemes and fully learned inversion, are included and compared against classical methods. The study covers the image quality of the reconstructions, but also the influence of the number of training samples. The latter is of interest to biomedical applications in general, since in many of them extensive datasets are currently not available. A novel variation of the Deep Image Prior (DIP) is investigated as well. The standard DIP is an iterative method that does not use any training data, and its reconstruction process can take a long time compared to other methods. We propose a shared network architecture and ways to include training samples to simultaneously increase reconstruction quality and reduce the number of iterations.
Our general results show that deep learning methods combining physical modeling and learning from data are able to significantly outperform classical approaches, even for a small number of training samples. This finding supports the current research on efficiently applying deep learning methods to three- or even four-dimensional CT data, which would allow for a new generation of CT machines. We encourage other researchers from the biomedical imaging community to develop and test their CT reconstruction methods on the LoDoPaB-CT dataset.
  • Characterization of Resting Microvascular Dynamics in Skeletal Muscle Using Synchrosqueezing Transform of BOLD MRI and NIRS

    00:06:46
    Spatial and temporal regulation of microvascular blood flow impacts the delivery of oxygen and nutrients to skeletal muscle. We examine temporal patterns in blood oxygenation level-dependent MRI from the calf muscle at rest in healthy subjects and compare them with simultaneously acquired optical spectroscopy using a custom near-infrared system. Colocalized time series data are characterized using the wavelet synchrosqueezing transform, and preliminary analysis shows inter-subject variability in endothelial function as well as comparable energy distribution between modalities. This approach holds potential for detailed mapping of microvascular impairment.
  • End-To-End Training of Neural Networks with Topological Loss

    00:10:44
    We present a topological loss to train neural networks to segment fine structures with correct topology. The differentiable loss enforces the topology of the segmentation and the ground truth to be similar, based on the theory of persistent homology. The learnt network consistently outperforms other methods in metrics relevant to structural accuracy. We also discuss applications of the method to other learning tasks.
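The persistent-homology loss itself requires a differentiable topology backend, but the core idea — penalizing a segmentation whose topology differs from the ground truth — can be illustrated with a much cruder, non-differentiable sketch that compares Betti-0 (connected-component) counts of binary masks. `betti0` and `topo_mismatch` are hypothetical helpers for illustration, not the paper's loss:

```python
from collections import deque

def betti0(mask):
    """Count connected components (Betti-0) of a binary 2-D mask
    using 4-connectivity flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                q = deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

def topo_mismatch(pred, gt):
    """Crude topological penalty: difference in component counts."""
    return abs(betti0(pred) - betti0(gt))
```

A persistent-homology loss generalizes this: instead of one integer per mask, it compares birth/death diagrams of topological features so that the penalty becomes differentiable in the predicted probabilities.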
  • Evaluating Multi-Class Segmentation Errors with Anatomical Priors

    00:13:07
    Acquiring large-scale annotations is challenging in medical image analysis because of the limited number of qualified annotators. Thus, it is essential to achieve high performance using a small amount of labeled data, where the key lies in mining the most informative samples to annotate. In this paper, we propose two effective metrics which leverage anatomical priors to evaluate multi-class segmentation methods without ground truth (GT). Together with our smooth margin loss, these metrics can help mine the most informative samples for training. In experiments, we first demonstrate that the proposed metrics can clearly distinguish samples with different degrees of error in the task of pulmonary lobe segmentation. We then show that our metrics, synergized with the proposed loss function, reach a Pearson correlation coefficient (PCC) of 0.7447 with mean surface distance (MSD) and -0.5976 with the Dice score, which implies the proposed metrics can be used to evaluate segmentation methods. Finally, we use our metrics as sample selection criteria in an active learning setting, showing that the model trained with our anatomy-based query achieves performance comparable to models trained with random and uncertainty-based queries that use more annotated training data.
  • Benchmarking Deep Nets MRI Reconstruction Models on the FastMRI Publicly Available Dataset

    00:15:27
    The MRI reconstruction field has lacked a proper dataset allowing reproducible results on real raw (i.e. complex-valued) data, especially for deep learning methods, as these methods require much more data than classical compressed sensing reconstruction. This gap is now filled by the fastMRI dataset, and recent deep learning models need to be evaluated on this benchmark. Moreover, since these networks are written in different frameworks and hosted in different repositories (if publicly available at all), a common, publicly available tool is needed to allow a reproducible benchmark of the different methods and to ease building new models. We provide such a tool for benchmarking different deep learning reconstruction models.
  • Region of Interest Identification for Cervical Cancer Images

    00:14:28
    Every two minutes one woman dies of cervical cancer globally, due to lack of sufficient screening. Given a whole slide image (WSI) obtained by scanning a microscope glass slide for a Liquid Based Cytology (LBC) based Pap test, our goal is to assist the pathologist in determining the presence of pre-cancerous or cancerous cervical anomalies. Inter-annotator variation, large image sizes, data imbalance, stain variations, and lack of good annotation tools make this problem challenging. Existing related work has focused on sub-problems like cell segmentation and cervical cell classification but does not provide a practically feasible holistic solution. We propose a practical system architecture based on displaying regions of interest on WSIs containing potential anomalies for review by pathologists, to increase productivity. We build multiple deep learning classifiers as part of the proposed architecture. Our experiments with a dataset of ~19000 regions of interest provide an accuracy of ~89% on a balanced dataset for both binary and 6-class classification settings. Our deployed system provides a top-5 accuracy of ~94%.
  • A Spatially Constrained Deep Convolutional Neural Network for Nerve Fiber Segmentation in Corneal Confocal Microscopic Images Using Inaccurate Annotations

    00:12:04
    Semantic image segmentation is one of the most important tasks in medical image analysis. Most state-of-the-art deep learning methods require a large number of accurately annotated examples for model training. However, accurate annotation is difficult to obtain, especially in medical applications. In this paper, we propose a spatially constrained deep convolutional neural network (DCNN) to achieve smooth and robust image segmentation using inaccurately annotated labels for training. In our proposed method, image segmentation is formulated as a graph optimization problem that is solved by a DCNN model learning process. The cost function to be optimized consists of a unary term computed by cross-entropy measurement and a pairwise term based on enforcing local label consistency. The proposed method has been evaluated on corneal confocal microscopic (CCM) images for nerve fiber segmentation, where accurate annotations are extremely difficult to obtain. Based on both quantitative results on a synthetic dataset and qualitative assessment of a real dataset, the proposed method achieves superior performance in producing high-quality segmentation results even with inaccurate labels for training.
  • AF-SEG: An Annotation-Free Approach for Image Segmentation by Self-Supervision and Generative Adversarial Network

    00:08:35
    Traditional segmentation methods are annotation-free but usually produce unsatisfactory results. The latest leading deep learning methods improve the results but require expensive and time-consuming pixel-level manual annotations. In this work, we propose a novel method based on self-supervision and a generative adversarial network (GAN), which achieves high performance and requires no manual annotations. First, we apply traditional segmentation methods to obtain a coarse segmentation. Then, we use a GAN to generate a synthetic image whose foreground corresponds pixel-to-pixel to the coarse segmentation. Finally, we train the segmentation model with the data pairs of synthetic images and coarse segmentations. We evaluate our method on two types of segmentation tasks: red blood cell (RBC) segmentation on microscope images and vessel segmentation on digital subtraction angiographies (DSA). The results show that our annotation-free method provides a considerable improvement over traditional methods and achieves accuracies comparable to fully supervised methods.
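The "traditional segmentation" first stage could be any classical method; as one plausible example (an assumption, not necessarily the paper's specific choice), Otsu thresholding picks an intensity cutoff by maximizing between-class variance of the histogram:

```python
def otsu_threshold(pixels, levels=256):
    """Classical Otsu thresholding: return the gray level that maximizes
    the between-class variance of the intensity histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0   # running intensity sum of the background class
    w_bg = 0       # running pixel count of the background class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels above the returned threshold form the coarse foreground mask that the GAN stage then refines.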
  • Learning Probabilistic Fusion of Multilabel Lesion Contours

    00:14:22
    Supervised machine learning algorithms, especially in the medical domain, are affected by considerable ambiguity in expert markings, primarily in proximity to lesion contours. In this study we address the case where the experts' opinion for those ambiguous areas is considered as a distribution over the possible values. We propose a novel method that modifies the experts' distributional opinion in ambiguous areas by fusing their markings based on their sensitivity and specificity. The algorithm can be applied at the end of any label fusion algorithm that can handle soft values. The algorithm was applied to obtain consensus from soft Multiple Sclerosis (MS) segmentation masks. Soft MS segmentations are constructed from manual binary delineations by including lesion-surrounding voxels in the segmentation mask with a reduced confidence weight. The method was evaluated on the MICCAI 2016 challenge dataset, and outperformed previous methods.
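The idea of weighting raters by sensitivity and specificity can be sketched as a simplified Bayesian vote fusion under an independence assumption. `fuse_votes` is a hypothetical helper on hard binary votes, not the paper's exact algorithm (which operates on soft values):

```python
def fuse_votes(votes, sens_spec, prior=0.5):
    """Fuse binary expert votes into a lesion probability, weighting
    each expert by (sensitivity, specificity) via Bayes' rule,
    assuming conditionally independent raters."""
    p_lesion = prior
    p_bg = 1.0 - prior
    for v, (sens, spec) in zip(votes, sens_spec):
        # P(vote | lesion) uses sensitivity; P(vote | background) uses specificity.
        p_lesion *= sens if v else (1.0 - sens)
        p_bg *= (1.0 - spec) if v else spec
    return p_lesion / (p_lesion + p_bg)
```

A reliable rater's vote outweighs an unreliable rater's disagreement, which is the behavior the sensitivity/specificity weighting is meant to capture.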
  • OCT Image Quality Evaluation Based on Deep and Shallow Features Fusion Network

    00:09:01
    Optical coherence tomography (OCT) has become an important tool for the diagnosis of retinal diseases, and image quality assessment of OCT images has considerable clinical significance for guaranteeing the accuracy of diagnosis by ophthalmologists. Traditional OCT image quality assessment is usually based on hand-crafted features, including the signal strength index and the signal-to-noise ratio. These features reflect only part of image quality and cannot be seen as a full representation of it; in particular, there has been no detailed description of OCT image quality so far. In this paper, we first define OCT image quality in three grades ("Good", "Usable" and "Poor"). Considering the diversity of image quality, we then propose a deep and shallow features fusion network (DSFF-Net) to conduct multi-label classification. The DSFF-Net combines deep and enhanced shallow features of OCT images to predict the image quality grade. Experimental results on a large OCT dataset show that our network obtains state-of-the-art performance, outperforming other classical CNN architectures.
  • An Efficient Hybrid Model for Kidney Tumor Segmentation in CT Images

    00:09:16
    Kidney tumor segmentation from CT volumes is essential for lesion diagnosis. Given the excessive GPU memory requirements of 3D medical images, slices and patches are exploited for training and inference in conventional U-Net variant architectures, which inevitably hampers contextual learning. In this paper, we propose a novel, effective hybrid model for kidney tumor segmentation in CT images with two parts: 1) a Foreground Segmentation Network; 2) a Sparse PointCloud Segmentation Network. Specifically, the Foreground Segmentation Network first segments the foreground, i.e., kidneys with tumors, from the background in the voxel grid using a classical V-Net. Second, we represent the obtained foreground regions as point clouds and feed them into the Sparse PointCloud Segmentation Network to conduct fine-grained segmentation of kidney and tumor. The critical module embedded in the second part is an efficient Submanifold Sparse Convolutional Network (SSCN). By exploiting SSCNs, our proposed model can take the entire foreground as input for better context learning in a memory-efficient manner, and account for the anisotropy of CT images as well. Experiments show that our model achieves state-of-the-art tumor segmentation while significantly reducing GPU resource demand.
  • Restoration of Marker Occluded Hematoxylin and Eosin Stained Whole Slide Histology Images Using Generative Adversarial Networks

    00:14:27
    It is common for pathologists to annotate specific regions of the tissue, such as tumor, directly on the glass slide with markers. Although this practice was helpful prior to the advent of whole slide digitization in histology, it often occludes important details which are increasingly relevant to immuno-oncology due to recent advancements in digital pathology imaging techniques. The current work uses a generative adversarial network with cycle loss to remove these annotations while maintaining the underlying structure of the tissue, by solving an image-to-image translation problem. We train our network on up to 300 whole slide images with marker ink and show that 70% of the corrected image patches are indistinguishable from originally uncontaminated tissue to a human expert. This portion increases to 97% when we replace the human expert with a deep residual network. We demonstrate the fidelity of the method to the original image by calculating the correlation between image gradient magnitudes. We observed a revival of up to 94,000 nuclei per slide in our dataset, the majority of which were located on tissue borders.
  • Recurrent Neural Networks for Compressive Video Reconstruction

    00:12:34
    Single-pixel imaging allows low-cost cameras to be built for imaging modalities where a conventional camera would be either too expensive or too cumbersome. This is very attractive for biomedical imaging applications based on hyperspectral measurements, such as image-guided surgery, which requires the full spectrum of fluorescence. A single-pixel camera essentially measures the inner product of the scene and a set of patterns; an inverse problem has to be solved to recover the original image from the raw measurement. The challenge in single-pixel imaging is to reconstruct the video sequence in real time from under-sampled data. Previous approaches have focused on reconstructing each frame independently, which fails to exploit the natural temporal redundancy in a video sequence. In this study, we propose a fast deep-learning reconstructor that exploits the spatio-temporal features in a video. In particular, we consider convolutional gated recurrent units that have low memory requirements. Our simulations show that the proposed recurrent network improves reconstruction quality compared to static approaches that reconstruct the video frames independently.
  • Lymphoma Segmentation in PET Images Based on Multi-view and Conv3D Fusion Strategy

    00:10:07
    Due to the poor image information of lymphomas in PET images, segmenting them correctly remains a challenge. In this work, a fusion strategy combining 2D multi-view and 3D networks is proposed to take full advantage of the available information for segmentation. First, we train three 2D network models from three orthogonal views based on 2D ResUnet, and train a 3D network model using volumetric data based on 3D ResUnet. Then the obtained preliminary results (three 2D results and one 3D result) are fused, together with the original volumetric data, via a Conv3D fusion strategy. Finally, a series of experiments is conducted on a lymphoma dataset; the results show that the proposed multi-view lymphoma co-segmentation scheme is promising and can improve overall performance by combining 2D multi-view and 3D networks.
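Before a learned Conv3D fusion, the simplest baseline for merging the four preliminary results is a per-voxel average of predicted probabilities. The sketch below (hypothetical `average_fusion` helper, plain averaging rather than the paper's learned fusion) shows the shape of that step on flattened probability maps:

```python
def average_fusion(pred_maps, threshold=0.5):
    """Fuse several aligned probability maps (e.g. three 2D-view results
    and one 3D result, flattened to equal-length lists) by per-voxel
    averaging, then binarize at `threshold`."""
    n = len(pred_maps)
    fused = [sum(vals) / n for vals in zip(*pred_maps)]
    return [1 if p >= threshold else 0 for p in fused]
```

A learned fusion replaces the fixed average with a small 3D convolutional network that also sees the original volume, so it can weight views per voxel.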
  • Weakly-Supervised Prediction of Cell Migration Modes in Confocal Microscopy Images Using Bayesian Deep Learning

    00:08:56
    Cell migration is pivotal to development, physiology and disease treatment. A single cell on a 2D surface can utilize continuous or discontinuous migration modes. To comprehend cell migration, adequate quantification for single-cell analysis is crucial. An automated approach could alleviate tedious manual analysis, facilitating large-scale drug screening. Supervised deep learning has shown promising outcomes in computerized microscopy image analysis. However, its application is limited by the scarcity of carefully annotated data and by deterministic outputs that lack confidence estimates. We compare three deep learning models to study the problem of learning discriminative morphological representations from weakly annotated data for predicting cell migration modes. We also estimate Bayesian uncertainty to describe the confidence of the probabilistic predictions. Among the three compared models, DenseNet yielded the best results, with a sensitivity of 87.91% at a false negative rate of 1.26%.
  • MixModule: Mixed CNN Kernel Module for Medical Image Segmentation

    00:15:06
    Convolutional neural networks (CNNs) have been successfully applied to medical image classification, segmentation, and related tasks. Among the many CNN architectures, U-Net and its improved variants are widely used and have achieved state-of-the-art performance in recent years. These improved architectures focus on structural changes, while the size of the convolution kernel is generally fixed. In this paper, we propose a module that combines the benefits of multiple kernel sizes and apply it to U-Net and its variants. We test our module on three segmentation benchmark datasets, and experimental results show significant improvement.
  • Fast Automatic Parameter Selection for MRI Reconstruction

    00:13:42
    This paper proposes an automatic parameter selection framework for optimizing the performance of parameter-dependent regularized reconstruction algorithms. The proposed approach exploits a convolutional neural network for direct estimation of the regularization parameters from the acquired imaging data. This method can provide very reliable parameter estimates in a computationally efficient way. The effectiveness of the proposed approach is verified on transform-learning-based magnetic resonance image reconstructions of two different publicly available datasets. This experiment qualitatively and quantitatively measures improvement in image reconstruction quality using the proposed parameter selection strategy versus both existing parameter selection solutions and a fully deep-learning reconstruction with limited training data. Based on the experimental results, the proposed method improves average reconstructed image peak signal-to-noise ratio by a dB or more versus all competing methods in both brain and knee datasets, over a range of subsampling factors and input noise levels.
  • Machine Learning and Graph Based Approach to Automatic Right Atrial Segmentation from Magnetic Resonance Imaging

    00:13:50
    Manual delineation of the right atrium throughout the cardiac cycle is tedious and time-consuming, yet promising for early detection of right heart dysfunction. In this study, we developed a fully automated approach to right atrial segmentation in 4-chamber long-axis magnetic resonance image (MRI) cine sequences by applying a U-Net based neural network followed by a contour reconstruction and refinement algorithm. In contrast to U-Net, the proposed approach performs segmentation using open contours. This allows for exclusion of the tricuspid valve region from the atrial segmentation, an essential aspect in the analysis of atrial wall motion. The MR images were retrospectively acquired from 242 cine sequences which were manually segmented by an expert radiologist to produce the ground truth data. The neural network was trained over 600 epochs under six different hyperparameter configurations on 202 randomly selected sequences to recognize a dilated region surrounding the right atrial contour. A graph algorithm is then applied to the binary labels predicted by the trained model to accurately reconstruct the corresponding contours. Finally, the contours are refined by combining a nonrigid registration algorithm, which tracks the deformation of the heart, with Gaussian process regression. Evaluation of the proposed method on the remaining 40 MR image sequences, excluding a single outlier sequence, yielded promising Sørensen–Dice coefficients and Hausdorff distances of 95.2% and 4.64 mm, respectively, before refinement and 94.9% and 4.38 mm afterward.
  • Fusing Metadata and Dermoscopy Images for Skin Disease Diagnosis

    00:05:06
    It remains difficult and challenging to automatically classify dermoscopy images. Although state-of-the-art convolutional networks have been applied to the classification problem and achieve decent overall prediction results, there is still room for performance improvement, especially for rare disease categories. Considering that human dermatologists often make use of other information (e.g., body locations of skin lesions) to help diagnose, we propose using both dermoscopy images and non-image metadata for intelligent diagnosis of skin diseases. Specifically, the metadata are innovatively applied to control the importance of different types of visual information during diagnosis. Comprehensive experiments with various deep learning model architectures demonstrate the superior performance of the proposed fusion approach, especially for relatively rare diseases. All our code will be made publicly available.
  • Task fMRI Guided Fiber Clustering Via a Deep Clustering Method

    00:07:48
    Fiber clustering is a prerequisite step toward tract-based analysis of the human brain, and it is very important for explaining the brain's structure-function relationship. Over the last decade, it has been an open and challenging question what a reasonable clustering of fibers is. Specifically, the purpose of fiber clustering is to cluster the whole brain's white matter fibers, extracted from tractography, into similar and meaningful fiber bundles; thus how the "similar and meaningful" metric is defined decides the performance and possible applications of a fiber clustering method. In the past, researchers typically divided fibers into anatomically or structurally similar bundles, but rarely divided them according to functional meaning. In this work, we propose a novel fiber clustering method that adopts both functional and structural information and combines them into the input of a deep convolutional autoencoder with embedded clustering, which can better extract and use the features within the data. The experimental results show that the proposed method can cluster the whole brain's fibers into functionally and structurally meaningful bundles.
  • Accelerating the Registration of Image Sequences by Spatio-Temporal Multilevel Strategies

    00:16:33
    Multilevel strategies are an integral part of many image registration algorithms. These strategies are well known for avoiding undesirable local minima, providing an outstanding initial guess, and reducing overall computation time. State-of-the-art multilevel strategies build a hierarchy of discretizations in the spatial dimensions. In this paper, we present a spatio-temporal strategy which introduces a hierarchical discretization in the temporal dimension at each spatial level. This strategy is suitable for motion estimation problems where the motion is assumed smooth over time. Our strategy exploits the temporal smoothness among image frames by following a predictor-corrector approach: it predicts the motion by a novel interpolation method and later corrects it by registration. The prediction step provides a good initial guess for the correction step, hence reducing the overall computational time of registration. An acceleration by a factor of 2.5 on average over state-of-the-art multilevel methods is achieved on three examined optical coherence tomography datasets.
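The predictor step can be pictured as temporal extrapolation of the displacement field. The abstract describes a novel interpolation method; the hypothetical `predict_motion` below is only the simplest instance of the idea, a linear extrapolation under the smooth-motion assumption:

```python
def predict_motion(u_prev, u_curr):
    """Predict the displacement field at the next time point by linear
    extrapolation from the two most recent fields (smooth-motion
    assumption); the registration step then corrects this prediction."""
    return [2.0 * c - p for p, c in zip(u_prev, u_curr)]
```

Because the prediction is already close to the true motion, the corrector registration starts near the solution and converges in fewer iterations, which is where the reported speed-up comes from.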
  • Enumeration of Ampicillin-Resistant E. Coli in Blood Using Droplet Microfluidics and High-Speed Image Processing

    00:15:39
    Bacteria entering the bloodstream cause bloodstream infection (BSI). Without proper treatment, BSI can lead to sepsis, a life-threatening condition. Detection of bacteria in blood at the early stages of BSI can effectively prevent the development of sepsis. Using microfluidic droplets for single-bacterium encapsulation provides single-digit bacterial detection sensitivity. In this study, samples of ampicillin-resistant E. coli in human blood were partitioned into millions of 30 µm diameter microfluidic droplets, followed by 8-hour culturing. Thousands of fluorescent bacteria from a single colony fill up the positive droplets after the culturing process. Circle detection software based on the Hough transform was developed to count the number of positive droplets in fluorescence images. The time to process one image can be as short as 0.5 ms when the original image is pre-processed and binarized by the developed software.
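For droplets of known radius, a circle Hough transform reduces to a voting scheme: each bright edge pixel votes for every center that would place it on a circle of that radius, and the accumulator peak is the detected center. A minimal pure-Python sketch (an illustration of the principle, not the paper's optimized implementation):

```python
import math
from collections import defaultdict

def hough_circle_center(edge_points, radius):
    """Circle Hough transform for a known radius: every edge point
    (x, y) votes for all candidate centers lying at `radius` from it;
    the accumulator cell with the most votes is the detected center."""
    acc = defaultdict(int)
    for x, y in edge_points:
        for deg in range(0, 360, 5):
            t = math.radians(deg)
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            acc[(cx, cy)] += 1
    return max(acc, key=acc.get)
```

Counting positive droplets then amounts to finding all accumulator peaks above a vote threshold in the binarized fluorescence image.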
  • Virtual Staining for Mitosis Detection in Breast Histopathology

    00:11:34
    We propose a virtual staining methodology based on generative adversarial networks to map histopathology images of breast cancer tissue from H&E stain to PHH3 and vice versa. We use the resulting synthetic images to build convolutional neural networks (CNNs) for automatic detection of mitotic figures, a strong prognostic biomarker used in routine breast cancer diagnosis and grading. We describe several scenarios in which CNNs trained with synthetically generated histopathology images perform on par with, or even better than, the same baseline model trained with real images. We discuss the potential of this application to scale the number of training samples without the need for manual annotations.
  • Age-Conditioned Synthesis of Pediatric Computed Tomography with Auxiliary Classifier Generative Adversarial Networks

    00:13:04
    Deep learning is a popular and powerful tool in computed tomography (CT) image processing, such as organ segmentation, but its requirement for large training datasets remains a challenge. Although there is large anatomical variability in children as they grow, training datasets of pediatric CT scans are especially hard to obtain due to the radiation risks to children. In this paper, we propose a method to conditionally synthesize realistic pediatric CT images using a new auxiliary classifier generative adversarial network (ACGAN) architecture that takes age information into account. The proposed network generates age-conditioned high-resolution CT images to enrich pediatric training datasets.
  • 6-Month Infant Brain MRI Segmentation Guided by 24-Month Data Using Cycle-Consistent Adversarial Networks

    00:06:28
    Due to the extremely low intensity contrast between white matter (WM) and gray matter (GM) at around 6 months of age (the isointense phase), manual annotation is difficult, and hence the number of training labels is highly limited. Consequently, it is still challenging to automatically segment isointense infant brain MRI. Meanwhile, the intensity contrast of images at the adult-like phase, such as 24 months of age, is relatively better, and these images can be easily segmented by well-developed tools, e.g., FreeSurfer. The question therefore is how to employ these high-contrast images (such as 24-month-old images) to guide the segmentation of 6-month-old images. Motivated by this, we propose a method that exploits 24-month-old images for reliable tissue segmentation of 6-month-old images. Specifically, we design a 3D-cycleGAN-Seg architecture to generate synthetic images of the isointense phase by transferring appearances between the two time points. To guarantee tissue segmentation consistency between 6-month-old and 24-month-old images, we employ features from generated segmentations to guide the training of the generator network. To further improve the quality of synthetic images, we propose a feature matching loss that computes the cosine distance between unpaired segmentation features of the real and fake images. Then, the transferred 24-month-old images are used to jointly train the segmentation model on the 6-month-old images. Experimental results demonstrate a superior performance of the proposed method compared with existing deep learning-based methods.
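The feature matching loss described above is, at its core, a cosine distance between feature vectors. A minimal sketch (hypothetical helper operating on plain vectors rather than network feature maps):

```python
import math

def cosine_feature_matching_loss(f_real, f_fake):
    """Cosine distance between two feature vectors: 0 when they point
    the same way, 1 when orthogonal, 2 when opposite."""
    dot = sum(a * b for a, b in zip(f_real, f_fake))
    norm_r = math.sqrt(sum(a * a for a in f_real))
    norm_f = math.sqrt(sum(b * b for b in f_fake))
    return 1.0 - dot / (norm_r * norm_f)
```

Minimizing this term pushes the segmentation features of the synthesized image toward those of the real image, regardless of their magnitudes, which is why cosine distance rather than an L2 difference is a natural choice for unpaired features.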
