IEEE ISBI 2020 Virtual Conference April 2020


  • IEEE Member US $11.00
  • Society Member US $0.00
  • IEEE Student Member US $11.00
  • Non-IEEE Member US $15.00
Purchase
  • Regularized Kurtosis Imaging and Its Comparison with Regularized Log Spectral Difference and Regularized Nakagami Imaging for Microwave Hyperthermia Hotspot Monitoring

    00:14:00
    1 view
    Microwave hyperthermia uses microwaves to deliver heat to biological tissues. Real-time temperature monitoring during treatment is important for its efficacy and effectiveness. Non-invasive modalities such as CT, MR and ultrasound (US) have been actively researched for hyperthermia monitoring; US has the inherent advantages of real-time imaging, portability and a non-ionizing nature. It is also known from the literature that the acoustic properties of tissue are sensitive to temperature, and this has been harnessed to track the evolution of hotspots and temperature in the high-temperature zones encountered in ablation treatments. However, such methods appear less explored in the low-temperature zones typical of hyperthermia. This study introduces an improved method of regularized kurtosis imaging (RKI) and compares its performance against regularized log spectral difference (RLSD) and regularized Nakagami imaging (RNI). The methods are evaluated against ground truth estimated from IR thermal images in an experimental study on tissue-mimicking PAG-agar phantoms. The error in the area estimated by RKI was 10.6%, and the errors in the lateral and axial coordinates of the centroid were 5.92% and 0.47%, respectively. The structural similarity index was 0.82 for RKI, compared with 0.76 for RLSD and 0.72 for RNI. The results are promising and offer an alternative way to track the hotspot during microwave hyperthermia treatment.
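The area and centroid errors reported above can be computed from binary hotspot masks; a minimal numpy sketch with synthetic masks (the arrays and sizes are illustrative, not the paper's data):

```python
import numpy as np

# Toy "hotspot" maps: ground truth (e.g. from IR thermography) vs. an
# estimate (e.g. from a parametric ultrasound image). Values are synthetic.
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 25:45] = True          # 20x20 ground-truth hotspot

estimate = np.zeros_like(truth)
estimate[21:40, 24:45] = True       # slightly shifted/stretched estimate

def area_error_pct(est, ref):
    """Relative area error of the estimated hotspot, in percent."""
    return 100.0 * abs(est.sum() - ref.sum()) / ref.sum()

def centroid(mask):
    """(row, col) centroid of a binary mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

err_area = area_error_pct(estimate, truth)
c_est, c_ref = centroid(estimate), centroid(truth)
```

The axial and lateral centroid errors reported in the abstract correspond to the per-axis differences between `c_est` and `c_ref`, normalized appropriately.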
  • Fast High Dynamic Range MRI by Contrast Enhancement Networks

    00:11:01
    0 views
    High dynamic range MRI (HDR-MRI) is a non-linear image fusion technique that enhances image quality by fusing multiple contrast types into a single composite image. It offers improved outcomes in automatic segmentation and potentially in diagnostic power, but the existing technique is slow and requires accurate image co-registration to function reliably. In this work, a lightweight fully convolutional neural network architecture is developed with the goal of approximating HDR-MRI images in real time. The resulting Contrast Enhancement Network (CEN) is capable of performing near-perfect (SSIM = 0.98) 2D approximations of HDR-MRI in 10 ms and full 3D approximations in 1 s, running two orders of magnitude faster than the original implementation. It can also perform the approximation (SSIM = 0.93) with only two of the three contrasts required to generate the original HDR-MRI image, while requiring no image co-registration.
  • Inception Capsule Network for Retinal Blood Vessel Segmentation and Centerline Extraction

    00:09:35
    0 views
    Automatic segmentation and centerline extraction of retinal blood vessels from fundus image data is crucial for early detection of retinal diseases. We have developed a novel deep learning method for segmentation and centerline extraction of retinal blood vessels which is based on the Capsule network in combination with the Inception architecture. Compared to state-of-the-art deep convolutional neural networks, our method has much fewer parameters due to its shallow architecture and generalizes well without using data augmentation. We performed a quantitative evaluation using the DRIVE dataset for both vessel segmentation and centerline extraction. Our method achieved state-of-the-art performance for vessel segmentation and outperformed existing methods for centerline extraction.
  • Improved Functional MRI Activation Mapping in White Matter through Diffusion-Adapted Spatial Filtering

    00:14:36
    0 views
    Brain activation mapping using functional MRI (fMRI) based on blood oxygenation level-dependent (BOLD) contrast has been conventionally focused on probing gray matter, the BOLD contrast in white matter having been generally disregarded. Recent results have provided evidence of the functional significance of the white matter BOLD signal, showing at the same time that its correlation structure is highly anisotropic, and related to the diffusion tensor in shape and orientation. This evidence suggests that conventional isotropic Gaussian filters are inadequate for denoising white matter fMRI data, since they are incapable of adapting to the complex anisotropic domain of white matter axonal connections. In this paper we explore a graph-based description of the white matter developed from diffusion MRI data, which is capable of encoding the anisotropy of the domain. Based on this representation we design localized spatial filters that adapt to white matter structure by leveraging graph signal processing principles. The performance of the proposed filtering technique is evaluated on semi-synthetic data, where it shows potential for greater sensitivity and specificity in white matter activation mapping, compared to isotropic filtering.
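The idea of filtering fMRI data on a diffusion-informed graph can be illustrated with a spectral heat-kernel filter; a toy sketch on a 4-node path graph (the graph, weights and exponential kernel are illustrative assumptions, not the paper's exact filter design):

```python
import numpy as np

# Small toy graph: a 4-node path. In the paper's setting, edge weights
# would encode diffusion anisotropy; here they are uniform.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian

def heat_filter(signal, L, tau=0.5):
    """Low-pass graph filter h(L) = exp(-tau * L), applied spectrally."""
    lam, U = np.linalg.eigh(L)          # Laplacian eigendecomposition
    h = np.exp(-tau * lam)              # heat-kernel spectral response
    return U @ (h * (U.T @ signal))

x = np.array([0.0, 1.0, 0.0, 0.0])      # noisy "activation" on node 1
y = heat_filter(x, L)                   # smoothed along graph edges
```

Because the filter acts along graph edges rather than on a fixed isotropic neighborhood, smoothing follows the encoded structure; this is the sense in which such filters can adapt to anisotropic white matter geometry.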
  • Transformation Elastography: Converting Anisotropy to Isotropy

    00:15:53
    1 view
    Elastography refers to mapping mechanical properties in a material by measuring wave motion in it with noninvasive optical, acoustic or magnetic resonance imaging methods. For example, increased stiffness increases wavelength. Stiffness and viscosity can depend on both location and direction: a material with aligned fibers or layers may have different stiffness and viscosity values along the fibers or layers than across them. Converting wave measurements into a mechanical property map or image is known as reconstruction. To make the reconstruction problem analytically tractable, isotropy and homogeneity are often assumed, and the effects of finite boundaries are ignored. But infinite, isotropic homogeneity rarely holds in cases of interest, where pathological conditions, material faults or hidden anomalies are not uniformly distributed in fibrous or layered structures of finite dimension. Introducing anisotropy, inhomogeneity and finite boundaries complicates the analysis, forcing the abandonment of analytically-driven strategies in favor of numerical approximations that may be computationally expensive and yield less physical insight. A new strategy, Transformation Elastography (TE), is proposed that applies spatial distortion to turn an anisotropic problem into an isotropic one. The fundamental underpinnings of TE have been proven in forward simulation problems. In the present paper, a TE approach to inversion and reconstruction is introduced and validated on numerical finite element simulations.
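The coordinate-stretching idea behind TE can be illustrated with a toy calculation: in a transversely isotropic medium, rescaling the fiber axis by the square root of the modulus ratio equalizes the apparent wavelengths in the two directions (all parameter values below are made up for illustration, not taken from the paper):

```python
import numpy as np

# Illustrative parameters: shear moduli along and across the fiber axis,
# density, and excitation frequency.
mu_par, mu_perp = 9.0e3, 4.0e3      # Pa
rho, freq = 1000.0, 100.0           # kg/m^3, Hz

# Shear wavelengths differ along vs. across fibers in the physical frame.
lam_par = np.sqrt(mu_par / rho) / freq
lam_perp = np.sqrt(mu_perp / rho) / freq

# Stretch the fiber axis by sqrt(mu_perp / mu_par): in the distorted
# frame both directions show the same apparent wavelength, i.e. the
# problem looks isotropic and standard reconstruction applies.
scale = np.sqrt(mu_perp / mu_par)
lam_par_distorted = lam_par * scale
```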
  • Multimodal fusion of imaging and genomics for lung cancer recurrence prediction

    00:14:14
    1 view
    Lung cancer has a high rate of recurrence in early-stage patients. Predicting the post-surgical recurrence in lung cancer patients has traditionally been approached using single modality information of genomics or radiology images. We investigate the potential of multimodal fusion for this task. By combining computed tomography (CT) images and genomics, we demonstrate improved prediction of recurrence using linear Cox proportional hazards models with elastic net regularization. We work on a recent non-small cell lung cancer (NSCLC) radiogenomics dataset of 130 patients and observe an increase in concordance-index values of up to 10%. Employing non-linear methods from the neural network literature, such as multi-layer perceptrons and visual-question answering fusion modules, did not improve performance consistently. This indicates the need for larger multimodal datasets and fusion techniques better adapted to this biological setting.
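The concordance index used to report the gains above can be sketched in a few lines of pure Python (a simplified version that skips tied event times; not the authors' implementation):

```python
import itertools

def concordance_index(times, events, risks):
    """Fraction of comparable patient pairs ordered correctly by risk.

    A pair (i, j) is comparable when the patient with the earlier time
    had an observed event; a higher predicted risk should go with the
    shorter survival time. Ties in risk count as half-concordant.
    """
    num, den = 0.0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue                                  # simplification: skip ties
        first = i if times[i] < times[j] else j       # earlier time
        if not events[first]:
            continue                                  # censored earlier: not comparable
        den += 1
        other = j if first == i else i
        if risks[first] > risks[other]:
            num += 1.0
        elif risks[first] == risks[other]:
            num += 0.5
    return num / den

# Risks perfectly anti-ordered with survival give c-index 1.0.
c = concordance_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1])
```

A 10% increase in this quantity, as reported, means a correspondingly larger fraction of patient pairs ranked correctly by predicted recurrence risk.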
  • Software Tool to Read, Represent, Manipulate, and Apply N-Dimensional Spatial Transforms

    00:09:39
    0 views
    Spatial transforms formalize mappings between coordinates of objects in biomedical images. Transforms typically are the outcome of image registration methodologies, which estimate the alignment between two images. Image registration is a prominent task present in nearly all standard image processing and analysis pipelines. The proliferation of software implementations of image registration methodologies has resulted in a spread of data structures and file formats used to preserve and communicate transforms. This segregation of formats precludes the compatibility between tools and endangers the reproducibility of results. We propose a software tool capable of converting between formats and resampling images to apply transforms generated by the most popular neuroimaging packages and libraries (AFNI, FSL, FreeSurfer, ITK, and SPM). The proposed software is subject to continuous integration tests to check the compatibility with each supported tool after every change to the code base (https://github.com/poldracklab/nitransforms). Compatibility between software tools and imaging formats is a necessary bridge to ensure the reproducibility of results and enable the optimization and evaluation of current image processing and analysis workflows.
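In the linear case, the object that all these formats encode differently is a 4x4 homogeneous affine; a minimal numpy sketch of applying one to physical coordinates (the matrix values are illustrative; nitransforms additionally handles format conversion and image resampling):

```python
import numpy as np

# A hypothetical rigid transform in physical coordinates: a 90-degree
# rotation about z plus a 10 mm translation along x (values illustrative).
affine = np.array([[0., -1., 0., 10.],
                   [1.,  0., 0.,  0.],
                   [0.,  0., 1.,  0.],
                   [0.,  0., 0.,  1.]])

def apply_affine(affine, points):
    """Map an (N, 3) array of physical coordinates through a 4x4 affine."""
    pts = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
    return (affine @ pts.T).T[:, :3]

moved = apply_affine(affine, np.array([[1.0, 2.0, 3.0]]))
```

The interoperability problem the paper addresses arises because each package stores such matrices with different axis conventions and reference frames, so a correct converter must re-express the same mapping rather than copy numbers.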
  • Segmentation and Uncertainty Measures of Cardiac Tissues on Optical Coherence Tomography Via Convolutional Neural Networks

    00:14:04
    0 views
    Segmentation of human cardiac tissue has great potential to provide critical clinical guidance for radiofrequency ablation (RFA). Uncertainty in cardiac tissue segmentation is high because of ambiguous, subtle boundaries and intra-/inter-physician variation. In this paper, we propose a deep learning framework for Optical Coherence Tomography (OCT) cardiac segmentation with uncertainty measurement. Our method employs additional dropout layers to assess the uncertainty of pixel-wise label prediction. In addition, we improve segmentation performance by using focal loss to put more weight on misclassified examples. Experimental results show that our method achieves high accuracy in pixel-wise label prediction. The feasibility of our method for uncertainty measurement is also demonstrated by the excellent correspondence between uncertain regions within OCT images and heterogeneous regions within corresponding histology images.
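The dropout-based uncertainty idea can be sketched as Monte Carlo dropout on a toy two-layer network in numpy: dropout stays active at prediction time, and the spread of repeated stochastic forward passes serves as the uncertainty estimate (the architecture, sizes and weights below are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": one hidden layer with fixed random weights.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 1))

def predict(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept on at inference."""
    h = np.maximum(x @ W1, 0.0)                    # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop           # stochastic dropout mask
    h = h * mask / (1.0 - p_drop)                  # inverted dropout scaling
    return h @ W2

x = rng.normal(size=(1, 8))
samples = np.array([predict(x) for _ in range(200)])
mean, std = samples.mean(), samples.std()          # prediction and uncertainty
```

Per-pixel, the same recipe yields an uncertainty map: high `std` flags regions where the model's label prediction is unstable.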
  • Visualisation of Medical Image Fusion and Translation for Accurate Diagnosis of High Grade Gliomas

    00:15:01
    0 views
    Medical image fusion combines two or more modalities into a single view, while medical image translation synthesizes new images and assists in data augmentation. Together, these methods help in faster diagnosis of high grade malignant gliomas. However, their outputs can be untrustworthy, so neurosurgeons demand a robust visualisation tool to verify the reliability of the fusion and translation results before making pre-operative surgical decisions. In this paper, we propose a novel approach to compute a confidence heat map between a source-target image pair by estimating the information transfer from the source to the target image using the joint probability distribution of the two images. We evaluate several fusion and translation methods using our visualisation procedure and showcase its robustness in enabling neurosurgeons to make finer clinical decisions.
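One way to estimate information transfer from the joint probability distribution of two images is mutual information over their joint histogram; a numpy sketch of a global (not per-pixel) version with synthetic images (bin count and data are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images via their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint probability
    px = pxy.sum(axis=1, keepdims=True)            # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)            # marginal of image b
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
src = rng.random((64, 64))
mi_self = mutual_information(src, src)                    # vs. itself: high
mi_noise = mutual_information(src, rng.random((64, 64)))  # vs. noise: near 0
```

A heat map version would evaluate such a statistic over local windows, so that regions where the target fails to preserve source information receive low confidence.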
  • Multi-Contrast MR Reconstruction with Enhanced Denoising Autoencoder Prior Learning

    00:12:04
    0 views
    This paper proposes an enhanced denoising autoencoder prior (EDAEP) learning framework for accurate multi-contrast MR image reconstruction. A multi-model structure with various noise levels is designed to capture features of different scales from different contrast images. Furthermore, a weighted aggregation strategy is proposed to balance the impact of different model outputs, making the performance of the proposed model more robust and stable while facing noise attacks. The model was trained to handle three different sampling patterns and different acceleration factors on two public datasets. Results demonstrate that our proposed method can improve the quality of reconstructed images and outperform the previous state-of-the-art approaches. The code is available at https://github.com/yqx7150.
  • Systematic Analysis and Automated Search of Hyper-parameters for Cell Classifier Training

    00:11:29
    0 views
    The performance and robustness of neural networks depend on a suitable choice of hyper-parameters, which matters in research as well as for the final deployment of deep learning algorithms. While a manual systematic analysis can be too time-consuming, a fully automatic search depends heavily on the kind of hyper-parameters. For a cell classification network, we assess the individual effects of a large number of hyper-parameters and compare the resulting choice of hyper-parameters with state-of-the-art search techniques. We further propose an approach for automated, successive search space reduction that yields well-performing sets of hyper-parameters in a time-efficient way.
  • Deep Quantized Representation for Enhanced Reconstruction

    00:04:24
    0 views
    In this paper, we propose a data driven Deep Quantized Latent Representation (DQLR) for high-quality data reconstruction in the Shoot Apical Meristem (SAM) of Arabidopsis thaliana. Our proposed framework utilizes multiple consecutive slices to learn a low dimensional latent space, quantize it and perform reconstruction using the quantized representation.
  • Data Preprocessing via Compositions of Multi-Channel MRI Images to Improve Brain Tumor Segmentation

    00:08:40
    0 views
    Magnetic resonance imaging (MRI) is the essential non-invasive diagnostic modality for the brain. It allows building a detailed 3D image of the brain, notably including different types of soft tissue. In this paper, we compare how multi-channel data composition and the segmentation approach influence the model's performance. Our aim is binary segmentation, evaluated with Dice, Recall and Precision metrics. It is common to use 2D slices as input for neural networks. Due to the multi-channel structure of MRI data, there is a set of new ways (compared with RGB images) to combine data as input for machine learning algorithms. We evaluate several possible combinations for multi-channel data.
  • AI-Enabled Systems in Medical Imaging: USFDA Research and Regulatory Pathways

    00:40:24
    0 views
    Kyle J. Myers, Ph.D., received bachelor's degrees in Mathematics and Physics from Occidental College in 1980 and a Ph.D. in Optical Sciences from the University of Arizona in 1985. Since 1987 she has worked for the Center for Devices and Radiological Health of the FDA, where she is the Director of the Division of Imaging, Diagnostics, and Software Reliability in the Center for Devices and Radiological Health's Office of Science and Engineering Laboratories. In this role she leads research programs in medical imaging systems and software tools including 3D breast imaging systems and CT devices, digital pathology systems, medical display devices, computer-aided diagnostics, biomarkers (measures of disease state, risk, prognosis, etc. from images as well as other assays and array technologies), and assessment strategies for imaging and other high-dimensional data sets from medical devices. She is the FDA Principal Investigator for the Computational Modeling and Simulation Project of the Medical Device Innovation Consortium. Along with Harrison H. Barrett, she is the coauthor of Foundations of Image Science, published by John Wiley and Sons in 2004 and winner of the First Biennial J.W. Goodman Book Writing Award from OSA and SPIE. She is an associate editor for the Journal of Medical Imaging as well as Medical Physics. Dr. Myers is a Fellow of AIMBE, OSA, SPIE, and a member of the National Academy of Engineering. She serves on SPIE's Board of Directors (2018-2020).
  • FRR-Net: Fast Recurrent Residual Networks for Real-Time Catheter Segmentation and Tracking in Endovascular Aneurysm Repair

    00:12:12
    0 views
    For endovascular aneurysm repair (EVAR), real-time and accurate segmentation and tracking of interventional instruments can help reduce radiation exposure, contrast agent use and procedure time. Nevertheless, this task comes with the challenges of slender, deformable structures imaged with low contrast in noisy X-ray fluoroscopy. In this paper, a novel efficient network architecture, termed FRR-Net, is proposed for real-time catheter segmentation and tracking. The novelties of FRR-Net lie in the way its recurrent convolutional layers ensure better feature representation, and in its pre-trained lightweight components, which improve processing speed while preserving performance. Quantitative and qualitative evaluation on images from 175 X-ray sequences of 30 patients demonstrates that the proposed approach significantly outperforms simpler baselines as well as the best previously-published result for this task, achieving state-of-the-art performance.
  • Optimize CNN Model for fMRI Signal Classification via AdaNet-Based Neural Architecture Search

    00:14:51
    0 views
    Recent studies showed that convolutional neural network (CNN) models possess a remarkable capability for differentiating and characterizing fMRI signals from cortical gyri and sulci. In addition, visualization and analysis of the filters in the learned CNN models suggest that sulcal fMRI signals are more diverse and of higher frequency than gyral signals. However, it is not clear whether gyral fMRI signals can be further divided into sub-populations, e.g., 3-hinge areas vs. 2-hinge areas. It is also unclear whether CNN models for two-class (gyral vs. sulcal) classification can be further optimized for three-class (3-hinge gyral vs. 2-hinge gyral vs. sulcal) classification. To answer these questions, we employed the AdaNet framework to design a neural architecture search (NAS) system for optimizing CNN models for three-class fMRI signal classification. The core idea is that AdaNet adaptively learns both the optimal structure of the CNN network and its weights, so that the learned CNN model can effectively extract discriminative features that maximize the classification accuracy over the three classes of 3-hinge gyral, 2-hinge gyral and sulcal fMRI signals. We evaluated our framework on the Autism Brain Imaging Data Exchange (ABIDE) dataset, and experiments showed that it obtains significantly better results in terms of both classification accuracy and extracted features.
  • Learning Amyloid Pathology Progression from Longitudinal PiB-PET Images in Preclinical Alzheimer's Disease

    00:13:44
    0 views
    Amyloid accumulation is acknowledged to be a primary pathological event in Alzheimer's disease (AD). The literature suggests that propagation of amyloid occurs along neural pathways as a function of the disease process (prion-like transmission), but the pattern of spread in the preclinical stages of AD is still poorly understood. Previous studies have used diffusion processes to capture amyloid pathology propagation using various strategies and shown how future time points can be predicted at the group level using a population-level structural connectivity template. But connectivity can differ between subjects, and the current literature is unable to provide estimates of individual-level pathology propagation. We use a trainable network diffusion model that infers the propagation dynamics of amyloid pathology, conditioned on an individual-level connectivity network. We analyze longitudinal amyloid pathology burden in 16 gray matter (GM) regions known to be affected by AD, measured using Pittsburgh Compound B (PiB) positron emission tomography at 3 different time points per subject. Experiments show that our model, using individual-level connectivity networks, outperforms inference based on group-level trends for predicting data at future time points. For group-level analysis, we find parameter differences (via permutation testing) between the models for APOE positive and APOE negative subjects.
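A network diffusion model of this kind propagates pathology along connectome edges via du/dt = -beta * L u, with L the graph Laplacian of the connectivity network; a toy numpy sketch with forward-Euler integration (the 4-region network and all parameters are made up, and the paper's 16-region, trainable model is more elaborate):

```python
import numpy as np

# Toy structural connectome over 4 regions (symmetric weights).
W = np.array([[0., 2., 1., 0.],
              [2., 0., 1., 0.],
              [1., 1., 0., 3.],
              [0., 0., 3., 0.]])
L = np.diag(W.sum(axis=1)) - W       # graph Laplacian of the connectome

def diffuse(u0, L, beta=0.1, dt=0.01, steps=100):
    """Euler integration of the network diffusion du/dt = -beta * L u."""
    u = u0.copy()
    for _ in range(steps):
        u = u - dt * beta * (L @ u)
    return u

u0 = np.array([1.0, 0.0, 0.0, 0.0])  # pathology seeded in region 0
u = diffuse(u0, L)                   # burden spreads along edges over time
```

"Trainable" in the abstract means parameters such as `beta` (and the seeding) are fit to each subject's longitudinal PiB measurements rather than fixed.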
  • Reservoir Computing for Jurkat T-Cell Segmentation in High Resolution Live Cell Ca2+ Fluorescence Microscopy

    00:13:04
    0 views
    The reservoir computing (RC) paradigm is exploited to detect Jurkat T cells and antibody-coated beads in fluorescence microscopy data. Recent progress in imaging of subcellular calcium (Ca2+) signaling offers high spatial and temporal resolution to characterize early signaling events in T cells. However, data acquisition with dual-wavelength Ca2+ indicators, photo-bleaching at high acquisition rates, low signal-to-noise ratio, and temporal fluctuations of image intensity entail the incorporation of post-processing techniques into Ca2+ imaging systems. Moreover, although continuous recording enables real-time Ca2+ signal tracking in T cells, reliable automated algorithms must be developed to characterize the cells and to extract the relevant information for further statistical analyses. Here, we present a robust two-channel segmentation algorithm to detect Jurkat T lymphocytes as well as the antibody-coated beads that are used to mimic cell-cell interaction and to activate the T cells in microscopy data. Our algorithm uses the reservoir computing framework to learn and recognize the cells, taking the spatiotemporal correlations between pixels into account. A comparison of segmentation accuracy on testing data between our proposed method and the deep learning U-Net model confirms that the developed model provides an accurate and computationally cheap solution to the cell segmentation problem.
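The reservoir computing paradigm can be sketched as an echo state network: a fixed random recurrent reservoir plus a trained linear readout. Below is a minimal numpy version on a toy one-step-delay task (sizes, spectral radius and the task itself are illustrative assumptions, not the paper's two-channel pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed random reservoir; only the linear readout is trained.
n_in, n_res = 1, 50
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(inputs):
    """Collect reservoir states for a 1-D input sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.array([u]) + W @ x)  # recurrent state update
        states.append(x.copy())
    return np.array(states)

# Train the readout (ridge regression) to output the input delayed by one
# step -- a stand-in for any temporal labeling task.
u_train = rng.uniform(-1, 1, size=500)
S = run_reservoir(u_train)[10:]                    # drop warm-up states
target = u_train[9:-1]
ridge = 1e-6 * np.eye(n_res)
W_out = np.linalg.solve(S.T @ S + ridge, S.T @ target)
pred = S @ W_out
```

The appeal, as in the paper, is that only the readout is trained, which makes the approach far cheaper than end-to-end deep models such as U-Net.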
  • Towards Shape-Based Knee Osteoarthritis Classification Using Graph Convolutional Networks

    00:12:25
    0 views
    We present a transductive learning approach for morphometric osteophyte grading based on geometric deep learning. We formulate the grading task as a semi-supervised node classification problem on a graph embedded in shape space. To account for the high dimensionality and non-Euclidean structure of shape space, we employ a combination of intrinsic dimension reduction and a graph convolutional neural network. We demonstrate the performance of our derived classifier in comparison to an alternative extrinsic approach.
  • A Data-Aware Deep Supervised Method for Retinal Vessel Segmentation

    00:15:58
    0 views
    Accurate vessel segmentation in retinal images is vital for retinopathy diagnosis and analysis. However, the presence of very thin, low-contrast vessels, along with pathological conditions (e.g., capillary dilation or microaneurysms), renders the segmentation task difficult. In this work, we present a novel approach for retinal vessel segmentation focusing on improving thin vessel segmentation. We develop a deep convolutional neural network (CNN) that exploits the specific characteristics of the input retinal data to place deep supervision for improved segmentation accuracy. In particular, we match the average input retinal vessel width with the layer-wise effective receptive fields (LERF) of the CNN to determine the location of the auxiliary supervision. This helps the network pay more attention to thin vessels, which it would otherwise 'ignore' during training. We verify our method on three public retinal vessel segmentation datasets (DRIVE, CHASE_DB1, and STARE), achieving better sensitivity (10.18% average increase) than state-of-the-art methods while maintaining comparable specificity, accuracy, and AUC.
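Matching a vessel width to layer-wise receptive fields can be sketched with the classic theoretical receptive-field recurrence (the paper uses the empirically measured *effective* receptive field, which is smaller than the theoretical one; the network layout below is a hypothetical VGG-like stack, not the authors' architecture):

```python
def receptive_fields(layers):
    """Theoretical receptive-field size after each conv layer.

    `layers` is a list of (kernel_size, stride) pairs, using the standard
    recurrence: rf += (k - 1) * jump, then jump *= stride.
    """
    rf, jump, sizes = 1, 1, []
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
        sizes.append(rf)
    return sizes

# Hypothetical encoder: 3x3 convs with an occasional stride-2 layer.
net = [(3, 1), (3, 1), (3, 2), (3, 1), (3, 1), (3, 2), (3, 1)]
sizes = receptive_fields(net)

def supervision_layer(sizes, vessel_width):
    """Index of the layer whose receptive field best matches the width."""
    return min(range(len(sizes)), key=lambda i: abs(sizes[i] - vessel_width))

layer = supervision_layer(sizes, vessel_width=7)
```

Attaching the auxiliary loss at `layer` ties the extra supervision to the scale at which thin vessels are just resolvable, which is the intuition behind the paper's placement rule.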
  • A Temporal Model for Task-Based Functional MR Images

    00:13:02
    0 views
    To better identify task-activated brain regions in task-based functional magnetic resonance imaging (tb-fMRI), various space-time models have been used to reconstruct image sequences from k-space data. These models decompose an fMRI timecourse into a static background and a dynamic foreground, aiming to separate task-correlated components from non-task signals. This paper proposes a model based on assumptions about the activation waveform shape and the smoothness of the timecourse, and compares it to two contemporary tb-fMRI decomposition models. We experiment in the image domain using a simulated task with a known region of interest, and a real visual task. The proposed model yields fewer false activations in task activation maps.
  • A Novel Approach to Vertebral Compression Fracture Detection Using Imitation Learning and Patch Based Convolutional Neural Network

    00:14:33
    0 views
    Compression fractures in vertebrae often go undetected clinically for various reasons. If left untreated, they can lead to severe secondary fractures due to osteoporosis. We present a novel, fully automated approach to the detection of vertebral compression fractures (VCF). The method involves 3D localisation of the thoracic and lumbar spine regions using deep reinforcement learning and imitation learning. The localised region is then split into 2D sagittal slices around the coronal centre. Each slice is further divided into patches, on which a convolutional neural network (CNN) is trained to detect compression fractures. Experiments for localisation achieved average Jaccard index/Dice coefficient scores of 74/85% for 144 CT chest images and 77/86% for 132 CT abdomen images. VCF detection was performed on another 127 chest images and, after localisation, resulted in an average fivefold cross-validation accuracy of 80%, sensitivity of 79.87% and specificity of 80.73%.
  • A Semi-Supervised Joint Learning Approach to Left Ventricular Segmentation and Motion Tracking in Echocardiography

    00:14:50
    0 views
    Accurate interpretation and analysis of echocardiography is important in assessing cardiovascular health. However, motion tracking often relies on accurate segmentation of the myocardium, which can be difficult to obtain due to inherent ultrasound properties. In order to address this limitation, we propose a semi-supervised joint learning network that exploits overlapping features in motion tracking and segmentation. The network simultaneously trains two branches: one for motion tracking and one for segmentation. Each branch learns to extract features relevant to their respective tasks and shares them with the other. Learned motion estimations propagate a manually segmented mask through time, which is used to guide future segmentation predictions. Physiological constraints are introduced to enforce realistic cardiac behavior. Experimental results on synthetic and in vivo canine 2D+t echocardiographic sequences outperform some competing methods in both tasks.
  • Deep Learning Framework for Epithelium Density Estimation in Prostate Multi-Parametric Magnetic Resonance Imaging

    00:14:37
    0 views
    Multi-parametric magnetic resonance imaging (mpMRI) permits non-invasive visualization and localization of clinically important cancers in the prostate. However, it cannot fully describe the tumor heterogeneity and microstructure that are crucial for cancer management and treatment. Herein, we develop a deep learning framework that predicts epithelium density of the prostate in mpMRI. A deep convolutional neural network is built to estimate epithelium density on a per-voxel basis. Equipped with an advanced design of the neural network and loss function, the proposed method achieved an SSIM of 0.744 and an MAE of 6.448% in cross-validation, and outperformed the competing network. The results are promising as a potential tool to analyze tissue characteristics of the prostate in mpMRI.
  • Prediction of Language Impairments in Children Using Deep Relational Reasoning with DWI Data

    00:15:53
    0 views
    This paper proposes a new deep learning model using relational reasoning with diffusion-weighted imaging (DWI) data. We investigate how effectively and comprehensively DWI tractography-based connectome predicts the impairment of expressive and receptive language ability in individual children with focal epilepsy (FE). The proposed model constitutes a combination of a dilated convolutional neural network (CNN) and a relation network (RN), with the latter being applied to the dependencies of axonal connections across cortical regions in the whole brain. The presented results from 51 FE children demonstrate that the proposed model outperforms other existing state-of-the-art algorithms to predict language abilities without depending on connectome densities, with average improvement of up to 96.2% and 83.8% in expressive and receptive language prediction, respectively.
  • Architectural Hyperparameters, Atlas Granularity and Functional Connectivity with Diagnostic Value in Autism Spectrum Disorder

    00:15:34
    0 views
    Currently, the diagnosis of Autism Spectrum Disorder (ASD) is dependent upon a subjective, time-consuming evaluation of behavioral tests by an expert clinician. Non-invasive functional MRI (fMRI) characterizes brain connectivity and may be used to inform diagnoses and democratize medicine. However, successful construction of predictive models, such as deep learning models, from fMRI requires addressing key choices about the model's architecture, including the number of layers and number of neurons per layer. Meanwhile, deriving functional connectivity (FC) features from fMRI requires choosing an atlas with an appropriate level of granularity. Once an accurate diagnostic model has been built, it is vital to determine which features are predictive of ASD and if similar features are learned across atlas granularity levels. Identifying new important features extends our understanding of the biological underpinnings of ASD, while identifying features that corroborate past findings and extend across atlas levels instills model confidence. To identify aptly suited architectural configurations, probability distributions of the configurations of high versus low performing models are compared. To determine the effect of atlas granularity, connectivity features are derived from atlases with 3 levels of granularity and important features are ranked with permutation feature importance. Results show the highest performing models use between 2 and 4 hidden layers and 16-64 neurons per layer, granularity dependent. Connectivity features identified as important across all 3 atlas granularity levels include FC to the supplementary motor gyrus and language association cortex, regions whose abnormal development is associated with deficits in social and sensory processing common in ASD. Importantly, the cerebellum, often not included in functional analyses, is also identified as a region whose abnormal connectivity is highly predictive of ASD.
Results of this study identify important regions to include in future studies of ASD, help assist in the selection of network architectures, and help identify appropriate levels of granularity to facilitate the development of accurate diagnostic models of ASD.
  • Anatomically Informed Bayesian Spatial Priors for fMRI Analysis

    00:14:24
    0 views
    Existing Bayesian spatial priors for functional magnetic resonance imaging (fMRI) data correspond to stationary isotropic smoothing filters that may oversmooth at anatomical boundaries. We propose two anatomically informed Bayesian spatial models for fMRI data with local smoothing in each voxel based on a tensor field estimated from a T1-weighted anatomical image. We show that our anatomically informed Bayesian spatial models result in posterior probability maps that follow the anatomical structure.
  • Stitching Methodology for Whole Slide Low-Cost Robotic Microscope Based on a Smartphone

    00:12:48
    0 views
    This work is framed within the general objective of helping to reduce the cost of telepathology in developing countries and rural areas with no access to automated whole slide imaging (WSI) scanners. We present an automated software pipeline for mosaicing images acquired with a smartphone attached to a portable, low-cost, robotic microscopic scanner fabricated using 3D printing technology. To achieve this goal, we propose a robust and automatic workflow, which performs all steps necessary to obtain a stitched image covering the area of interest from an initial 2D grid of overlapping images, including vignetting correction, lens distortion correction, registration and blending. Optimized solutions, such as Voronoi cells and Laplacian blending strategies, are adapted to the low-cost optics and scanner device, and correct the imperfections caused by the smartphone camera optics. The presented solution can obtain histopathological virtual slides with diagnostic value using a low-cost portable device.
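The vignetting-correction step of such a pipeline can be approximated by classical flat-field division: a blank reference frame captures the lens/illumination falloff, which is then divided out of each tile. This is a generic sketch of the idea, not the authors' exact implementation.

```python
import numpy as np

def correct_vignetting(image, flat):
    """Flat-field correction: divide out the lens/illumination
    falloff captured in a blank ('flat') reference frame, then
    rescale so overall brightness is preserved. A generic sketch
    of the vignetting-correction step, not the paper's pipeline.
    """
    flat = np.asarray(flat, dtype=float)
    gain = flat.mean() / np.clip(flat, 1e-6, None)
    return np.asarray(image, dtype=float) * gain

# A corner darkened by vignetting is brightened back so the
# whole tile becomes uniform again.
flat = np.array([[0.5, 1.0], [1.0, 1.0]])   # dark top-left corner
img = np.array([[50.0, 100.0], [100.0, 100.0]])
corrected = correct_vignetting(img, flat)
```

After correction, tiles share a consistent intensity profile, which is what makes the subsequent registration and blending steps reliable at the seams.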
  • Improving Diagnosis of Autism Spectrum Disorder and Disentangling its Heterogeneous Functional Connectivity Patterns Using Capsule Networks

    00:09:13
    0 views
    Functional connectivity (FC) analysis is an appealing tool to aid diagnosis and elucidate the neurophysiological underpinnings of autism spectrum disorder (ASD). Many machine learning methods have been developed to distinguish ASD patients from healthy controls based on FC measures and identify abnormal FC patterns of ASD. In particular, several studies have demonstrated that deep learning models can achieve better performance for ASD diagnosis than conventional machine learning methods. Although promising classification performance has been achieved by the existing machine learning methods, they do not explicitly model the heterogeneity of ASD and are therefore incapable of disentangling heterogeneous FC patterns of ASD. To achieve an improved diagnosis and a better understanding of ASD, we adopt capsule networks (CapsNets) to build classifiers for distinguishing ASD patients from healthy controls based on FC measures and stratify ASD patients into groups with distinct FC patterns. Evaluation results based on a large multi-site dataset have demonstrated that our method not only obtained better classification performance than state-of-the-art alternative machine learning methods, but also identified clinically meaningful subgroups of ASD patients based on their vectorized classification outputs of the CapsNets classification model.
  • Jointly Analyzing Alzheimer's Disease Related Structure-Function Using Deep Cross-Model Attention Network

    00:13:06
    0 views
    Reversing the pathology of Alzheimer's disease (AD) has so far not been possible; a more tractable approach may be intervention at an earlier stage, such as mild cognitive impairment (MCI), which is considered the precursor of AD. Recent advances in deep learning have triggered a new era in AD/MCI classification, and a variety of deep models and algorithms have been developed to classify multiple clinical groups (e.g. aged normal control - CN vs. MCI) and AD conversion. Unfortunately, the relationship between altered functional connectivity and the structural connectome at the individual level is still largely unknown. In this work, we introduce a deep cross-model attention network (DCMAT) to jointly model brain structure and function. Specifically, DCMAT is composed of one RNN (Recurrent Neural Network) layer and multiple graph attention (GAT) blocks, which can effectively represent disease-specific functional dynamics on the individual structural network. The designed attention layer (in the GAT block) aims to learn deep relations among different brain regions when differentiating MCI from CN. The proposed DCMAT shows promising classification performance compared to recent studies. More importantly, our results suggest that MCI-related functional interactions might go beyond the directly connected brain regions.
  • Multi Tissue Modelling of Diffusion MRI Signal Reveals Volume Fraction Bias

    00:10:56
    0 views
    This paper highlights a systematic bias in white matter tissue microstructure modelling via diffusion MRI that is due to the common, yet inaccurate, assumption that all brain tissues have a similar T2 response. We show that the concept of ``signal fraction'' is more appropriate to describe what has always been referred to as ``volume fraction''. This dichotomy is described from the theoretical point of view by analysing the mathematical formulation of the diffusion MRI signal. We propose a generalized multi tissue modelling framework that allows computing the actual volume fractions. The Dmipy implementation of this framework is then used to verify the presence of this bias in four classical tissue microstructure models computed on two subjects from the Human Connectome Project database. The proposed paradigm shift exposes the research field of brain tissue microstructure estimation to the necessity of a systematic review of past results that takes into account the difference between the concepts of volume fraction and signal fraction.
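The core of the argument can be illustrated with a minimal sketch: if each compartment's measured signal is attenuated by its own T2 decay, exp(-TE/T2), then the normalized signal fractions differ from the true volume fractions unless the decay is divided out. This is a simplified illustration under a mono-exponential T2 assumption, not the Dmipy implementation.

```python
import numpy as np

def volume_fractions(signal_fractions, t2_values, te):
    """Convert T2-weighted signal fractions to volume fractions.

    Each compartment's signal is attenuated by exp(-TE/T2);
    dividing that weight out and renormalizing recovers the
    underlying volume fractions (a sketch of the multi-tissue
    idea, assuming mono-exponential T2 decay per compartment).
    """
    s = np.asarray(signal_fractions, dtype=float)
    w = np.exp(-te / np.asarray(t2_values, dtype=float))
    v = s / w
    return v / v.sum()

# Two compartments with equal *signal* fractions but different T2
# responses have unequal *volume* fractions (illustrative values).
v = volume_fractions([0.5, 0.5], t2_values=[80.0, 40.0], te=60.0)
```

The compartment with the shorter T2 contributes less signal per unit volume, so its volume fraction is larger than its signal fraction — exactly the bias the paper describes.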
  • Synthesis and Edition of Ultrasound Images Via Sketch Guided Progressive Growing GANs

    00:14:06
    0 views
    Ultrasound (US) is widely accepted in clinic for anatomical structure inspection. However, lacking resources to practice US scanning, novices often struggle to learn the operation skills. Also, in the deep learning era, automated US image analysis is limited by the lack of annotated samples. Efficiently synthesizing realistic, editable and high-resolution US images can address both problems. The task is challenging and previous methods can only partially complete it. In this paper, we devise a new framework for US image synthesis. In particular, we first adopt an Sgan to introduce a background sketch upon the object mask in a conditioned generative adversarial network. With enriched sketch cues, the Sgan can generate realistic US images with editable and fine-grained structure details. Although effective, the Sgan struggles to generate high-resolution US images. To achieve this, we further implant the Sgan into a progressive growing scheme (PGSgan). By smoothly growing both generator and discriminator, PGSgan can gradually synthesize US images from low to high resolution. By synthesizing ovary and follicle US images, our extensive perceptual evaluation, user study and segmentation results prove the promising efficacy and efficiency of the proposed PGSgan.
  • Deep Learning of Cortical Surface Features Using Graph-Convolution Predicts Neonatal Brain Age and Neurodevelopmental Outcome

    00:14:46
    0 views
    We investigated the ability of graph convolutional network (GCN) that takes into account the mesh topology as a sparse graph to predict brain age for preterm neonates using cortical surface morphometrics, i.e. cortical thickness and sulcal depth. Compared to machine learning and deep learning methods that did not use the surface topological information, the GCN better predicted the ages for preterm neonates with none/mild perinatal brain injuries (NMI). We then tested the GCN trained using NMI brains to predict the age of neonates with severe brain injuries (SI). Results also displayed good accuracy (MAE=1.43 weeks), while the analysis of the interaction term (true age × group) showed that the slope of the predicted brain age relative to the true age for the SI group was significantly less steep than the NMI group (p
  • Free-Breathing Cardiovascular MRI Using a Plug-And-Play Method with Learned Denoiser

    00:12:49
    0 views
    Cardiac magnetic resonance imaging (CMR) is a noninvasive imaging modality that provides a comprehensive evaluation of the cardiovascular system. The clinical utility of CMR is hampered by long acquisition times, however. In this work, we propose and validate a plug-and-play (PnP) method for CMR reconstruction from undersampled multi-coil data. To fully exploit the rich image structure inherent in CMR, we pair the PnP framework with a deep learning (DL)-based denoiser that is trained using spatiotemporal patches from high-quality, breath-held cardiac cine images. The resulting "PnP-DL" method iterates over data consistency and denoising subroutines. We compare the reconstruction performance of PnP-DL to that of compressed sensing (CS) using eight breath-held and ten real-time (RT) free-breathing cardiac cine datasets. We find that, for breath-held datasets, PnP-DL offers more than one dB advantage over commonly used CS methods. For RT free-breathing datasets, where ground truth is not available, PnP-DL receives higher scores in qualitative evaluation. The results highlight the potential of PnP-DL to accelerate RT CMR.
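The iteration over data-consistency and denoising subroutines described above can be sketched for a generic linear forward model. The callable denoiser stands in for the learned spatiotemporal denoiser; `A`, `y`, the step size and iteration count are illustrative, not the authors' settings.

```python
import numpy as np

def pnp_reconstruct(A, y, denoise, n_iters=50, step=1.0):
    """Plug-and-play reconstruction sketch: alternate a
    data-consistency gradient step on ||Ax - y||^2 with a
    plug-in denoiser (any callable; PnP-DL uses a deep
    denoiser trained on breath-held cine patches).
    """
    x = A.T @ y                               # crude initialization
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - y)      # data consistency
        x = denoise(x)                        # denoising subroutine
    return x

# Sanity check: with A = I and an identity "denoiser", the
# iteration recovers y exactly.
A = np.eye(4)
y = np.array([1.0, 2.0, 3.0, 4.0])
x_hat = pnp_reconstruct(A, y, denoise=lambda x: x, n_iters=10)
```

The appeal of the PnP framing is precisely this modularity: the forward model (multi-coil undersampled MRI in the paper) and the denoiser can be swapped independently.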
  • Arterial Input Function and Tracer Kinetic Model Driven Network for Rapid Inference of Kinetic Maps in Dynamic Contrast Enhanced MRI (AIF-TK-Net)

    00:11:57
    0 views
    We propose a patient-specific arterial input function (AIF) and tracer kinetic (TK) model-driven network to rapidly estimate the extended Tofts-Kety kinetic model parameters in DCE-MRI. We term our network AIF-TK-net; it maps an input comprising an image patch of the DCE time series and the patient-specific AIF to the output image patch of the TK parameters. We leverage the open-source NEURO-RIDER database of brain tumor DCE-MRI scans to train our network. Once trained, our model rapidly infers the TK maps of unseen DCE-MRI images on the order of 0.34 sec/slice for a 256x256x65 time series on an NVIDIA GeForce GTX 1080 Ti GPU. We show its utility on high time resolution DCE-MRI datasets where significant variability in AIFs across patients exists. We demonstrate that the proposed AIF-TK-net considerably improves the TK parameter estimation accuracy in comparison to a network which does not utilize the patient AIF.
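For reference, the extended Tofts-Kety model the network estimates relates the tissue concentration Ct(t) to the AIF Cp(t) as Ct(t) = vp·Cp(t) + Ktrans·∫₀ᵗ Cp(u)·exp(-kep·(t-u)) du with kep = Ktrans/ve. A numerical sketch via discrete convolution (not the AIF-TK-net itself):

```python
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Extended Tofts-Kety tissue concentration:
    Ct(t) = vp*Cp(t) + Ktrans * int_0^t Cp(u) exp(-kep*(t-u)) du,
    with kep = Ktrans / ve, evaluated by discrete convolution on a
    uniform time grid. A sketch of the kinetic model, not the
    network that estimates its parameters.
    """
    t = np.asarray(t, dtype=float)
    cp = np.asarray(cp, dtype=float)
    dt = t[1] - t[0]
    kep = ktrans / ve
    kernel = np.exp(-kep * t)
    conv = np.convolve(cp, kernel)[: len(t)] * dt
    return vp * cp + ktrans * conv

# With Ktrans = 0 the tissue curve reduces to the pure plasma
# (vascular) term vp * Cp(t).
t = np.linspace(0.0, 1.0, 11)
cp = np.ones_like(t)
ct = extended_tofts(t, cp, ktrans=0.0, ve=0.2, vp=0.05)
```

Fitting (Ktrans, ve, vp) per voxel by nonlinear least squares against such a forward model is the slow step that the network replaces with a single inference pass.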
  • Looking in the Right Place for Anomalies: Explainable AI through Automatic Location Learning

    00:12:53
    0 views
    Deep learning has now become the de facto approach to the recognition of anomalies in medical imaging. Their 'black box' way of classifying medical images into anomaly labels poses problems for their acceptance, particularly with clinicians. Current explainable AI methods offer justifications through visualizations such as heat maps but cannot guarantee that the network is focusing on the relevant image region fully containing the anomaly. In this paper, we develop an approach to explainable AI in which the anomaly is assured to be overlapping the expected location when present. This is made possible by automatically extracting location-specific labels from textual reports and learning the association of expected locations to labels using a hybrid combination of Bi-Directional Long Short-Term Memory Recurrent Neural Networks (Bi-LSTM) and DenseNet-121. Use of this expected location to bias the subsequent attention-guided inference network based on ResNet101 results in the isolation of the anomaly at the expected location when present. The method is evaluated on a large chest X-ray dataset.
  • Transforming Intensity Distribution of Brain Lesions Via Conditional GANs for Segmentation

    00:13:26
    0 views
    Brain lesion segmentation is crucial for diagnosis, surgical planning, and analysis. Owing to the fact that pixel values of brain lesions in magnetic resonance (MR) scans are distributed over the wide intensity range, there is always a considerable overlap between the class-conditional densities of lesions. Hence, an accurate automatic brain lesion segmentation is still a challenging task. We present a novel architecture based on conditional generative adversarial networks (cGANs) to improve the lesion contrast for segmentation. To this end, we propose a novel generator adaptively calibrating the input pixel values, and a Markovian discriminator to estimate the distribution of tumors. We further propose the Enhancement and Segmentation GAN (Enh-Seg-GAN) which effectively incorporates the classifier loss into the adversarial one during training to predict the central labels of the sliding input patches. Particularly, the generated synthetic MR images are a substitute for the real ones to maximize lesion contrast while suppressing the background. The potential of proposed frameworks is confirmed by quantitative evaluation compared to the state-of-the-art methods on BraTS'13 dataset.
  • Deep Learning for Time Averaged Wall Shear Stress Prediction in Left Main Coronary Bifurcations

    00:11:23
    0 views
    Analysing blood flow in coronary arteries has often been suggested as an aid in predicting cardiovascular disease (CVD) risk. Blood flow induced hemodynamic indices can function as predictive measures in this pursuit, and a fast method to calculate them may allow patient-specific treatment considerations for improved clinical outcomes in the future. In vivo measurements of these metrics are not practical, so computational fluid dynamics (CFD) simulations are widely used to investigate blood flow conditions, but they require costly computation time for large-scale studies such as patient-specific considerations in patients screened for CVD. This paper proposes a deep learning approach to estimating the well-established hemodynamic risk indicator time-averaged wall shear stress (TAWSS) based on the vessel geometry. The model predicts TAWSS with good accuracy, achieving cross-validation results of an average mean absolute error of 0.0407 Pa and standard deviation of 0.002 Pa on a 127-patient CT angiography dataset, while being several orders of magnitude faster than computational simulations, using the vessel radii, angles between bifurcation (branching) vessels, curvature and other geometrical features. This bypasses costly computational simulations and allows large-scale population studies as required for meaningful CVD risk prediction.
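The target quantity itself has a simple definition: TAWSS at a wall point is the time average of the wall shear stress magnitude over the cardiac cycle, TAWSS = (1/T)·∫₀ᵀ |WSS(t)| dt. A sketch of how the ground-truth label would be computed from a CFD time series (uniform time steps assumed):

```python
import numpy as np

def tawss(wss_t, dt):
    """Time-averaged wall shear stress at one wall point:
    TAWSS = (1/T) * integral_0^T |WSS(t)| dt, with the WSS vector
    sampled at uniform time steps (trapezoidal rule). A sketch of
    the ground-truth quantity the network learns to predict.
    """
    wss_t = np.asarray(wss_t, dtype=float)   # shape (n_steps, 3)
    mags = np.linalg.norm(wss_t, axis=1)     # |WSS| per time step
    area = 0.5 * dt * (mags[:-1] + mags[1:]).sum()
    return area / ((len(mags) - 1) * dt)

# Steady flow with a constant WSS vector of magnitude 5 Pa gives
# a TAWSS of exactly 5 Pa.
steady = np.tile([3.0, 4.0, 0.0], (5, 1))
value = tawss(steady, dt=0.01)
```

The deep model in the paper regresses this scalar directly from geometric features, skipping the CFD simulation that would otherwise produce `wss_t`.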
  • Pneumothorax Segmentation with Effective Conditioned Post-Processing in Chest X-Ray

    00:11:02
    0 views
    Pneumothorax can be caused by a blunt chest injury or damage from underlying lung disease, or it may occur for no obvious reason at all. It is difficult to detect by eye, yet it can be identified automatically with a very high level of accuracy, simplifying the clinical workflow. In clinical practice pneumothorax is usually diagnosed by a radiologist and can sometimes be difficult to confirm, which is why an accurate AI detection algorithm is needed. Automatic AI-based solutions have become popular for improving detection quality and performance, and deep learning approaches have recently demonstrated their potential and strengths in medical image processing, particularly for findings that are poorly distinguishable visually. The proposed method presents a segmentation pipeline for chest X-ray images with multistep conditioned post-processing, yielding a significant improvement over the baseline by reducing both missed pneumothorax collapse regions and false positive detections. The obtained results demonstrate high accuracy and strong robustness, with very similar performance on a two-stage test dataset drawn from a previously unseen distribution. Final Dice scores are 0.8821 and 0.8614 on the stage-1 and stage-2 test datasets respectively, corresponding to a top 0.01% standing on the private leaderboard of the Kaggle competition platform.
  • Joint Low Dose CT Denoising and Kidney Segmentation

    00:09:53
    0 views
    Low dose computed tomography (LDCT) as an imaging modality has more recently gained greater attention because of the lower dose of radiation it necessitates, alongside its wider use, cost-effectiveness and faster scanning time, making it suitable for screening, diagnosis and follow-up studies. While segmentation is in itself a complex problem in imaging, image enhancement in LDCT is also a contentious issue. However, the convenience of the low dose radiation exposure makes the delineation and assessment of organs such as kidney, ureter, and bladder non-trivial. In this research, both image denoising and kidney segmentation tasks are addressed jointly via one multitask deep convolutional network. This multitasking scheme brings better results for both tasks compared to single-task methods. Also, to the best of our knowledge, this is a first-time attempt at addressing these joint tasks in LDCT. The developed network is a conditional generative adversarial network (C-GAN) and is an extension of the image-to-image translation network. The dataset used for this experiment is from the `Multi-Atlas labeling beyond the cranial vault' challenge containing CT scans and corresponding segmentation labels. The labels of 30 subjects, which are shared publicly, are used in this paper. To simulate LDCT scans from CT scans, a method based on additive Poisson noise on sinograms of the CT scans is used. Because of the limited number of subjects, leave-one-subject-out (LOSO) validation is used for evaluation. The proposed method works on CT slices, and the network is trained using all of the 2D axial slices from the training subjects and tested on all 2D axial slices of the unseen test subjects.
Comparing the results of the original single-task network (image-to-image translation) and the proposed multitask network (an extended version called image-to-images translation) shows that for both tasks, the multitask method outperforms the single-task method while having only half of the network weights. For further investigation, two other conventional single-task networks are also evaluated: the well-known U-net for segmentation and the recently proposed WGAN for LDCT denoising. On average, the proposed method outperforms U-net by an almost 20% better Dice score and the WGAN by an almost 0.15 lower RMSE.
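The Poisson-on-sinogram simulation mentioned above can be sketched as follows: attenuate an incident photon count through the line integrals, draw Poisson-distributed counts, and log-transform back. The incident count `i0` is an assumed illustrative value, not the authors' setting.

```python
import numpy as np

def simulate_low_dose_sinogram(sino, i0=1e4, rng=None):
    """Simulate a low-dose sinogram from a clean one: pass an
    incident photon count I0 through the line integrals
    (Beer-Lambert), draw Poisson counts, and take the negative
    log to return to sinogram space. A sketch of the
    Poisson-noise-on-sinogram simulation described in the paper;
    i0 is an assumed value.
    """
    rng = rng or np.random.default_rng(0)
    counts = rng.poisson(i0 * np.exp(-np.asarray(sino, dtype=float)))
    counts = np.clip(counts, 1, None)     # avoid log(0) in photon-starved bins
    return -np.log(counts / i0)

# Noisy sinogram fluctuates around the clean line-integral values.
clean_sino = np.ones((16, 16))
noisy = simulate_low_dose_sinogram(clean_sino)
```

Reconstructing from such noisy sinograms yields the simulated LDCT slices that the multitask C-GAN is trained to denoise and segment.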
  • Multi-resolution Graph Neural Network for Detecting Variations in Brain Connectivity

    00:06:48
    1 view
    In this work, we propose a novel CNN-based framework with adaptive graph transforms to learn the most disease-relevant connectome feature maps which have the highest discrimination power across diagnostic categories. The backbone of our framework is a multi-resolution representation of the graph matrix which is steered by a set of wavelet-like graph transforms. Our graph learning framework outperforms conventional methods that predict diagnostic labels for graphs.
  • Artificial Intelligence and Computational Pathology: Implications for Precision Medicine

    00:29:54
    1 view
    With the advent of digital pathology, there is an opportunity to develop computerized image analysis methods to not just detect and diagnose disease from histopathology tissue sections, but to also attempt to predict risk of recurrence, predict disease aggressiveness and long term survival. At the Center for Computational Imaging and Personalized Diagnostics, our team has been developing a suite of image processing and computer vision tools, specifically designed to predict disease progression and response to therapy via the extraction and analysis of image-based "histological biomarkers" derived from digitized tissue biopsy specimens. These tools would serve as an attractive alternative to molecular based assays, which attempt to perform the same predictions. The fundamental hypotheses underlying our work are that: 1) the genomic expressions detected by molecular assays manifest as unique phenotypic alterations (i.e. histological biomarkers) visible in the tissue; 2) these histological biomarkers contain information necessary to predict disease progression and response to therapy; and 3) sophisticated computer vision algorithms are integral to the successful identification and extraction of these biomarkers. We have developed and applied these prognostic tools in the context of several different disease domains including ER+ breast cancer, prostate cancer, Her2+ breast cancer, ovarian cancer, and more recently medulloblastomas. For the purposes of this talk I will focus on our work in breast, prostate, rectal, oropharyngeal, and lung cancer.
  • Liver Guided Pancreas Segmentation

    00:10:52
    0 views
    In this paper, we propose and validate a location-prior-guided automatic pancreas segmentation framework based on a 3D convolutional neural network (CNN). To guide pancreas segmentation, the centroid of the pancreas, used to determine its bounding box, is calculated from the location of the liver, which is first segmented by a 2D CNN; a linear relationship between the centroids of the pancreas and the liver is proposed. A 3D CNN whose input is the bounding box of the pancreas then produces the final segmentation. A publicly accessible pancreas dataset of 54 subjects is used to quantify the performance of the proposed framework. Experimental results reveal outstanding performance of the proposed method in terms of both computational efficiency and segmentation accuracy compared to non-location-guided segmentation: the running time is 15 times faster and the segmentation accuracy in terms of Dice is higher by 4.29% (76.42% versus 80.71%).
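The location-prior step can be sketched as a linear map from the liver centroid to a predicted pancreas centroid, followed by a fixed-size crop. The coefficients, intercept, and box size below are hypothetical placeholders for the fitted relationship in the paper.

```python
import numpy as np

def pancreas_bbox_from_liver(liver_centroid, coef, intercept, box_size):
    """Locate the pancreas bounding box from the liver centroid via
    a linear relationship  pancreas_c = coef @ liver_c + intercept,
    then place a fixed-size box around the predicted centroid.
    coef, intercept and box_size are hypothetical stand-ins for the
    fitted values in the paper.
    """
    liver_centroid = np.asarray(liver_centroid, dtype=float)
    center = np.asarray(coef) @ liver_centroid + np.asarray(intercept)
    half = np.asarray(box_size, dtype=float) / 2.0
    lo = np.rint(center - half).astype(int)
    hi = np.rint(center + half).astype(int)
    return lo, hi

# Illustrative: identity coefficients with a fixed offset.
lo, hi = pancreas_bbox_from_liver(
    [100.0, 100.0, 50.0],
    coef=np.eye(3),
    intercept=[0.0, -30.0, 10.0],
    box_size=[64, 64, 32],
)
```

The 3D CNN then only needs to segment within this crop, which is what yields the reported 15x speedup over whole-volume processing.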
  • Circular Anchors for the Detection of Hematopoietic Cells Using RetinaNet

    00:14:29
    0 views
    Analysis of the blood cell distribution in bone marrow is necessary for a detailed diagnosis of many hematopoietic diseases, such as leukemia. While this task is performed manually on microscope images in clinical routine, automating it could improve reliability and objectivity. Cell detection tasks in medical imaging have successfully been solved using deep learning, in particular with RetinaNet, a powerful network architecture that yields good detection results in this scenario. It utilizes axis-parallel, rectangular bounding boxes to describe an object's position and size. However, since cells are mostly circular, this is suboptimal. We replace RetinaNet's anchors with more suitable Circular Anchors, which cover the shape of cells more precisely. We further introduce an extension to the Non-maximum Suppression algorithm that copes with predictions that differ in size. Experiments on hematopoietic cells in bone marrow images show that these methods reduce the number of false positive predictions and increase detection accuracy.
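With circular anchors, overlap for non-maximum suppression is naturally measured as circle-circle IoU rather than box IoU. Below is a sketch of exact circle IoU (lens-area formula) plugged into plain greedy NMS; it does not include the paper's size-aware extension to the suppression rule.

```python
import math

def circle_iou(c1, c2):
    """IoU of two circles (x, y, r) via the exact lens-area formula."""
    x1, y1, r1 = c1
    x2, y2, r2 = c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d >= r1 + r2:                      # disjoint
        inter = 0.0
    elif d <= abs(r1 - r2):               # one circle inside the other
        inter = math.pi * min(r1, r2) ** 2
    else:                                 # partial overlap (lens area)
        a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
        a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
        a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                             * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - a3
    union = math.pi * (r1 * r1 + r2 * r2) - inter
    return inter / union

def circular_nms(detections, iou_thr=0.5):
    """Greedy NMS over (x, y, r, score) circle detections."""
    dets = sorted(detections, key=lambda d: d[3], reverse=True)
    kept = []
    for d in dets:
        if all(circle_iou(d[:3], k[:3]) <= iou_thr for k in kept):
            kept.append(d)
    return kept

# Two nearly coincident cells collapse to one detection; a distant
# cell is kept.
kept = circular_nms([(0.0, 0.0, 1.0, 0.9),
                     (0.1, 0.0, 1.0, 0.8),
                     (10.0, 0.0, 1.0, 0.7)])
```

A circle's overlap drops off isotropically with center distance, which matches roughly round cells better than axis-aligned boxes do.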
  • ErrorNet: Learning Error Representations from Limited Data to Improve Vascular Segmentation

    00:12:34
    0 views
    Deep convolutional neural networks have proved effective in segmenting lesions and anatomies in various medical imaging modalities. However, in the presence of small sample size and domain shift problems, these models often produce masks with non-intuitive segmentation mistakes. In this paper, we propose a segmentation framework called ErrorNet, which learns to correct these segmentation mistakes through the repeated process of injecting systematic segmentation errors to the segmentation result based on a learned shape prior, followed by attempting to predict the injected error. During inference, ErrorNet corrects the segmentation mistakes by adding the predicted error map to the initial segmentation result. ErrorNet has advantages over alternatives based on domain adaptation or CRF-based post processing, because it requires neither domain-specific parameter tuning nor any data from the target domains. We have evaluated ErrorNet using five public datasets for the task of retinal vessel segmentation. The selected datasets differ in size and patient population, allowing us to evaluate the effectiveness of ErrorNet in handling small sample size and domain shift problems. Our experiments demonstrate that ErrorNet outperforms a base segmentation model, a CRF-based post processing scheme, and a domain adaptation method, with a greater performance gain in the presence of the aforementioned dataset limitations.
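The inference-time correction described above is a simple additive step. In this sketch, the callable `predict_error` stands in for the trained error-prediction network; the addition and clipping follow the paper's description.

```python
import numpy as np

def errornet_inference(initial_seg, predict_error):
    """ErrorNet-style inference: correct an initial soft
    segmentation by adding the predicted error map, then clip
    probabilities back to [0, 1]. predict_error is a stand-in for
    the trained error-prediction network.
    """
    initial_seg = np.asarray(initial_seg, dtype=float)
    corrected = initial_seg + predict_error(initial_seg)
    return np.clip(corrected, 0.0, 1.0)

# Illustrative: a constant predicted error of +0.3 raises weak
# vessel probabilities and saturates strong ones at 1.
refined = errornet_inference(np.array([[0.2, 0.9]]),
                             lambda s: np.full_like(s, 0.3))
```

Because only the error-prediction branch encodes the shape prior, no target-domain data or CRF parameter tuning is involved at inference, which is the advantage the paper emphasizes.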
  • Coupling Principled Refinement with Bi-Directional Deep Estimation for Robust Deformable 3D Medical Image Registration

    00:09:59
    0 views
    Deformable 3D medical image registration is challenging due to the complicated transformations between image pairs. Traditional approaches estimate deformation fields by optimizing a task-guided energy embedded with physical priors, achieving high accuracy while suffering from expensive computational loads for the iterative optimization. Recently, deep networks, encoding the information underlying data examples, render fast predictions but are severely dependent on training data and have limited flexibility. In this study, we develop a paradigm integrating the principled prior into a bi-directional deep estimation process. Inheriting the merits of both domain knowledge and deep representation, our approach achieves a more efficient and stable estimation of deformation fields than the state-of-the-art, especially when the testing pairs exhibit great variations from the training data.
  • CTF-Net: Retinal Vessel Segmentation Via Deep Coarse-To-Fine Supervision Network

    00:15:24
    0 views
    Retinal blood vessel structure plays an important role in the early diagnosis of diabetic retinopathy, a leading cause of blindness globally. However, precise segmentation of retinal vessels is often extremely challenging due to the low contrast and noise of the capillaries. In this paper, we propose a novel deep coarse-to-fine supervision network (CTF-Net) to solve this problem. The model consists of two U-shaped architectures (a coarse and a fine segNet): the coarse segNet learns to predict a probability retina map from input patches, while the fine segNet refines the predicted map. To gain more paths that preserve multi-scale, rich deep feature information, we design an end-to-end training network instead of a multi-stage learning framework to segment the retinal vessels from coarse to fine. Furthermore, to improve feature representation and reduce the number of model parameters, we introduce a novel feature augmentation module (FAM-residual block). Experimental results confirm that our method achieves state-of-the-art performance on the popular datasets DRIVE, CHASE_DB1 and STARE.
  • Weakly-Supervised Deep Stain Decomposition for Multiplex IHC Images

    00:10:48
    0 views
    Multiplex immunohistochemistry (mIHC) is an innovative and cost-effective method that simultaneously labels multiple biomarkers in the same tissue section. Current platforms support labeling six or more cell types with different colored stains that can be visualized with brightfield light microscopy. However, analyzing and interpreting multi-colored images composed of thousands of cells is a challenging task for both pathologists and current image analysis methods. We propose a novel deep-learning-based method that predicts the concentration of different stains at every pixel of a whole slide image (WSI). Our method incorporates weak annotations as training data: manually placed dots labelling different cell types based on color. We compare our method with other approaches and observe favorable performance on mIHC images.
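Training from dot annotations means the loss is evaluated only at the annotated pixels rather than over a dense mask. A minimal sketch of such a dot-supervised objective is below; the tensor shape, tuple format, and cross-entropy form are assumptions, not the paper's stated loss:

```python
import numpy as np

def dot_supervised_loss(pred_conc, dots):
    """Cross-entropy evaluated only at weakly annotated dot pixels.
    pred_conc: (H, W, S) per-pixel stain concentrations, assumed to
    sum to 1 over the S stains. dots: iterable of (row, col, stain)
    tuples. A simplified stand-in for the training objective."""
    eps = 1e-8
    return float(np.mean([-np.log(pred_conc[r, c, s] + eps)
                          for r, c, s in dots]))
```

A prediction that concentrates mass on the annotated stain at each dot yields a lower loss than a uniform prediction, which is all the weak supervision needs to steer the per-pixel decomposition.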
  • Automatic Bounding Box Annotation of Chest X-Ray Data for Localization of Abnormalities

    00:20:30
    0 views
    Due to the increasing availability of public chest x-ray datasets over the last few years, automatic detection of findings and their locations in chest x-ray studies has become an important research area for AI applications in healthcare. Whereas image-level labeling suffices for finding classification tasks, additional annotation in the form of bounding boxes is required for detecting finding locations. However, the process of marking findings in chest x-ray studies is both time consuming and costly, as it needs to be performed by radiologists. To overcome this problem, weakly supervised approaches have been employed to depict finding locations as a byproduct of the classification task, but these approaches have not shown much promise so far. With this in mind, in this paper we propose an automatic approach for labeling chest x-ray images with findings and locations by leveraging radiology reports. Our labeling approach is anatomically standardized to the upper, middle, and lower lung zones of the left and right lungs, and is composed of two stages. In the first stage, we use a lung segmentation UNet model and an atlas of normal patients to mark the six lung zones on the image with standardized bounding boxes. In the second stage, the associated radiology report is used to label each lung zone as positive or negative for a finding, resulting in a set of six labeled bounding boxes per image. Using this approach we were able to automatically annotate over 13,000 images in a matter of hours, and we used this dataset to train an opacity detection model based on RetinaNet, obtaining results on a par with the state of the art.
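The second stage described above reduces to attaching a positive or negative label to each of the six standardized zone boxes from the first stage. A minimal sketch follows; the zone names, box format, and function interface are illustrative assumptions, not the paper's actual code:

```python
# Six anatomically standardized lung zones (assumed naming).
ZONES = ["right_upper", "right_middle", "right_lower",
         "left_upper", "left_middle", "left_lower"]

def label_zone_boxes(zone_boxes, positive_zones):
    """Stage-2 sketch: attach a positive/negative finding label to
    each standardized lung-zone bounding box. zone_boxes maps zone
    name -> (x, y, w, h) produced by the segmentation/atlas stage;
    positive_zones is the set of zones the radiology report flags.
    All names and the box format here are illustrative."""
    return [{"zone": z, "box": zone_boxes[z],
             "label": 1 if z in positive_zones else 0}
            for z in ZONES]
```

Because both stages are fully automatic, this per-image labeling runs in constant time per study, which is what makes annotating over 13,000 images in hours feasible.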
