IEEE ISBI 2020 Virtual Conference April 2020


Showing 1 - 50 of 459
  • IEEE MemberUS $11.00
  • Society MemberUS $0.00
  • IEEE Student MemberUS $11.00
  • Non-IEEE MemberUS $15.00
Purchase
  • DC-WCNN: A Deep Cascade of Wavelet Based Convolutional Neural Networks for MR Image Reconstruction

    00:13:16
    Several variants of Convolutional Neural Networks (CNN) have been developed for Magnetic Resonance (MR) image reconstruction. Among them, U-Net has been shown to be the baseline architecture for MR image reconstruction. However, its pooling layers perform sub-sampling, causing information loss that leads to blur and missing fine details in the reconstructed image. We propose a modification of the U-Net architecture to recover fine structures: a wavelet packet transform based encoder-decoder CNN with residual learning, called WCNN. WCNN replaces pooling layers with the discrete wavelet transform, replaces unpooling layers with the inverse wavelet transform, and adds residual connections. We also propose a deep cascaded framework (DC-WCNN) consisting of cascades of WCNN and k-space data fidelity units to achieve high-quality MR reconstruction. Experimental results show that WCNN and DC-WCNN give promising results in terms of evaluation metrics and recover fine details better than other methods.
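    The key idea — replacing pooling with the discrete wavelet transform so that downsampling becomes invertible — can be illustrated with a minimal NumPy sketch (illustrative only, not the authors' implementation; a single Haar level is assumed):

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2D Haar DWT: splits an image into four
    half-resolution subbands (LL, LH, HL, HH) with no information loss."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse Haar DWT: exactly reconstructs the input image, which is
    what lets a WCNN-style decoder undo the encoder's downsampling."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :] = a + d
    x[1::2, :] = a - d
    return x
```

    Max-pooling keeps only LL-like content; the LH/HL/HH subbands retain the fine detail that the inverse transform restores exactly.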
  • Twin Classification in Resting-State Brain Connectivity

    00:20:50
    Twin studies are a major part of human brain research, revealing the importance of environmental and genetic influences on different aspects of brain behavior and disorders. Accurate characterization of identical and fraternal twins allows us to draw inferences about genetic influence in a population. In this paper, we propose a novel pairwise classification pipeline to identify the zygosity of twin pairs using resting-state functional magnetic resonance images (rs-fMRI). A new feature representation is utilized to efficiently construct a brain network for each subject. Specifically, we project the fMRI signal onto a set of cosine series basis functions and use the projection coefficients as a compact and discriminative feature representation of the noisy fMRI. The pairwise relation is encoded by a set of twinwise correlations between functional brain networks across brain regions. We further employ hill-climbing variable selection to identify the most genetically affected brain regions. The proposed framework has been applied to 208 twin pairs in the Human Connectome Project (HCP), achieving 92.23 (±4.43)% classification accuracy.
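    The cosine-series feature construction can be sketched as follows (a hedged illustration: the orthonormal DCT-II basis and the Pearson correlation are our assumptions, not necessarily the paper's exact choices):

```python
import numpy as np

def cosine_features(signal, k):
    """Project a noisy time series onto the first k orthonormal cosine
    basis functions (DCT-II form); the k coefficients serve as a compact
    feature representation of the signal."""
    n = len(signal)
    t = np.arange(n)
    basis = np.cos(np.pi * np.outer(np.arange(k), (2 * t + 1) / (2 * n)))
    basis *= np.sqrt(2.0 / n)
    basis[0] /= np.sqrt(2.0)   # the k=0 row needs the 1/sqrt(n) scale
    return basis @ signal      # shape (k,)

def twinwise_correlation(net_a, net_b):
    """Pearson correlation between two flattened functional networks,
    a simple stand-in for the twinwise relation between subjects."""
    return np.corrcoef(net_a.ravel(), net_b.ravel())[0, 1]
```

    Because the basis is orthonormal, a full-order projection preserves the signal's energy; truncating to small k discards high-frequency noise.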
  • Deep Network-Based Feature Selection for Imaging Genetics: Application to Identifying Biomarkers for Parkinson's Disease

    00:10:17
    Imaging genetics is a methodology for discovering associations between imaging and genetic variables. Many studies adopted sparse models such as sparse canonical correlation analysis (SCCA) for imaging genetics. These methods are limited to modeling the linear imaging genetics relationship and cannot capture the non-linear high-level relationship between the explored variables. Deep learning approaches are underexplored in imaging genetics, compared to their great successes in many other biomedical domains such as image segmentation and disease classification. In this work, we proposed a deep learning model to select genetic features that can explain the imaging features well. Our empirical study on simulated and real datasets demonstrated that our method outperformed the widely used SCCA method and was able to select important genetic features in a robust fashion. These promising results indicate our deep learning model has the potential to reveal new biomarkers to improve mechanistic understanding of the studied brain disorders.
  • Agglomerative Region-Based Analysis

    00:15:15
    A fundamental problem in brain imaging is the identification of volumes whose features distinguish two populations. One popular solution, Voxel-Based Analyses (VBA), glues together contiguous voxels with significant intra-voxel population differences. VBA's output regions may not be spatially consistent: each voxel may show a unique population effect. We introduce Agglomerative Region-Based Analysis (ARBA), which mitigates this issue to increase sensitivity. ARBA is an Agglomerative Clustering procedure, like Ward's method, which segments image sets in a common space to greedily maximize a likelihood function. The resulting regions are pared down to a set of disjoint regions that show statistically significant population differences via Permutation Testing. ARBA is shown to increase sensitivity over VBA in a detection task on multivariate Diffusion MRI brain images.
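    The greedy Ward-style merging that ARBA builds on can be illustrated on a toy 1D signal (our simplification: a squared-error cost in place of the paper's likelihood, and no permutation testing):

```python
import numpy as np

def ward_merge_1d(values, n_regions):
    """Greedy agglomerative merging of adjacent 1D 'voxels': repeatedly
    merge the neighbouring pair whose union increases the within-region
    squared error the least, until n_regions remain."""
    regions = [[v] for v in values]

    def sse(r):
        r = np.asarray(r, dtype=float)
        return float(((r - r.mean()) ** 2).sum())

    while len(regions) > n_regions:
        # cost of merging each adjacent pair (Ward-like increase in SSE)
        costs = [sse(regions[i] + regions[i + 1]) - sse(regions[i]) - sse(regions[i + 1])
                 for i in range(len(regions) - 1)]
        i = int(np.argmin(costs))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions
```

    On a signal with two flat plateaus, the procedure recovers the two plateaus as regions, mirroring how ARBA grows spatially coherent regions before significance testing.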
  • Tensor-Based Grading: A Novel Patch-Based Grading Approach for the Analysis of Deformation Fields

    00:14:09
    The improvements in magnetic resonance imaging have led to the development of numerous techniques to better detect structural alterations caused by neurodegenerative diseases. Among these, the patch-based grading framework has been proposed to model local patterns of anatomical changes. This approach is attractive because of its low computational cost and its competitive performance. Other studies have proposed to analyze the deformations of brain structures using tensor-based morphometry, which is a highly interpretable approach. In this work, we propose to combine the advantages of these two approaches by extending the patch-based grading framework with a new tensor-based grading method that enables us to model patterns of local deformation using a log-Euclidean metric. We evaluate our new method in a study of the putamen for the classification of patients with pre-manifest Huntington's disease and healthy controls. Our experiments show a substantial increase in classification accuracy (87.5 ± 0.5 vs. 81.3 ± 0.6) compared to the existing patch-based grading methods, and a good complement to putamen volume, which is a primary imaging-based marker for the study of Huntington's disease.
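    The log-Euclidean metric mentioned above has a compact closed form for symmetric positive-definite (SPD) tensors; a small NumPy sketch (illustrative, not the authors' pipeline):

```python
import numpy as np

def logm_spd(m):
    """Matrix logarithm of a symmetric positive-definite tensor via
    eigendecomposition (SPD matrices diagonalize orthogonally)."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.log(w)) @ v.T

def log_euclidean_distance(a, b):
    """Log-Euclidean metric between two SPD (e.g. deformation) tensors:
    the Frobenius norm of the difference of their matrix logarithms."""
    return float(np.linalg.norm(logm_spd(a) - logm_spd(b), "fro"))
```

    Working in the log domain turns the curved SPD manifold into a flat vector space, so tensor patches can be compared with ordinary Euclidean operations.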
  • Segmentation of Five Components in Four Chamber View of Fetal Echocardiography

    00:10:21
    It is clinically significant to segment five components in the four-chamber view of fetal echocardiography: the four chambers and the descending aorta. This study completes multi-disease, multi-class semantic segmentation of the five key components. After comparing the performance of DeepLabV3+ and U-Net on the segmentation task, we choose the former, as it provides accurate segmentation in the other six disease groups as well as the normal group. With the data-proportion balance strategy, the segmentation performance of the Ebstein's anomaly group improves significantly despite its small proportion. We empirically evaluate this strategy in terms of mean IoU (mIoU), cross-entropy loss (CE), and Dice score (DS). The proportion of atrial and ventricular abnormalities in the entire dataset is increased so that the model learns more semantics. We simulate multiple scenes with uncertain fetal attitudes, which provides rich multi-scene semantic information and enhances the robustness of the model.
  • Unsupervised Learning for Compressed Sensing MRI Using CycleGAN

    00:13:50
    Recently, deep learning based approaches for accelerated MRI have been extensively studied due to their high performance and reduced run-time complexity. The existing deep learning methods for accelerated MRI are mostly supervised, requiring matched subsampled $k$-space data and fully sampled $k$-space data. However, it is hard to acquire fully sampled $k$-space data because of the long scan time of MRI. Therefore, unsupervised methods that do not require matched label data have become an important research topic. In this paper, we propose an unsupervised method using a novel cycle-consistent generative adversarial network (cycleGAN) with a single deep generator. We show that the proposed cycleGAN architecture can be derived from a dual formulation of optimal transport with a penalized least squares cost. Experimental results show that our method can remove aliasing patterns in downsampled MR images without matched reference data.
  • Coronary Wall Segmentation in CCTA Scans via a Hybrid Net with Contours Regularization

    00:16:26
    Providing closed and well-connected boundaries of the coronary artery is essential to assist cardiologists in the diagnosis of coronary artery disease (CAD). Recently, several deep learning-based methods have been proposed for boundary detection and segmentation in medical images. However, when applied to coronary wall detection, they tend to produce disconnected and inaccurate boundaries. In this paper, we propose a novel boundary detection method for coronary arteries that focuses on the continuity and connectivity of the boundaries. To model the spatial continuity of consecutive images, our hybrid architecture takes a volume (i.e., a segment of the coronary artery) as input and detects the boundary of the target slice (i.e., the central slice of the segment). Then, to ensure closed boundaries, we propose a contour-constrained weighted Hausdorff distance loss. We evaluate our method on coronary CT angiography scans with curved planar reconstruction (CCTA-CPR) of the arteries (i.e., cross-sections) from 34 patients. Experimental results show that our method produces smooth closed boundaries, outperforming the state-of-the-art in accuracy.
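    As a rough illustration of the boundary-distance idea, here is a plain symmetric average Hausdorff distance between point sets; the paper's loss additionally adds contour constraints and weighting, which are omitted in this sketch:

```python
import numpy as np

def average_hausdorff(pred_pts, gt_pts):
    """Symmetric average Hausdorff distance between two 2D point sets:
    for each point, distance to the nearest point of the other set,
    averaged in both directions."""
    pred = np.asarray(pred_pts, dtype=float)
    gt = np.asarray(gt_pts, dtype=float)
    # pairwise distance matrix, shape (len(pred), len(gt))
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    return float((d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0)
```

    A loss of this family penalizes every predicted boundary point that strays from the reference contour, which is what encourages closed, connected boundaries.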
  • Patient-Specific Finetuning of Deep Learning Models for Adaptive Radiotherapy in Prostate CT

    00:13:36
    Contouring of the target volume and Organs-At-Risk (OARs) is a crucial step in radiotherapy treatment planning. In an adaptive radiotherapy setting, updated contours need to be generated based on daily imaging. In this work, we leverage personalized anatomical knowledge accumulated over the treatment sessions to improve the segmentation accuracy of a pre-trained Convolutional Neural Network (CNN) for a specific patient. We investigate a transfer learning approach, fine-tuning the baseline CNN model to a specific patient based on imaging acquired in earlier treatment fractions. The baseline CNN model is trained on a prostate CT dataset of 379 patients from one hospital. This model is then fine-tuned and tested on an independent dataset of 18 patients from another hospital, each having 7 to 10 daily CT scans. For the prostate, seminal vesicles, bladder, and rectum, the model fine-tuned on each specific patient achieved a Mean Surface Distance (MSD) of 1.64 ± 0.43 mm, 2.38 ± 2.76 mm, 2.30 ± 0.96 mm, and 1.24 ± 0.89 mm, respectively, which was significantly better than the baseline model. The proposed personalized model adaptation is therefore very promising for clinical implementation in the context of adaptive radiotherapy of prostate cancer.
  • Single-Molecule Localization Microscopy Reconstruction Using Noise2Noise for Super-Resolution Imaging of Actin Filaments

    00:13:14
    Single-molecule localization microscopy (SMLM) is a super-resolution imaging technique developed to image structures smaller than the diffraction limit. This modality results in sparse and non-uniform sets of localized blinks that need to be reconstructed to obtain a super-resolution representation of a tissue. In this paper, we explore the use of the Noise2Noise (N2N) paradigm to reconstruct the SMLM images. Noise2Noise is an image denoising technique where a neural network is trained with only pairs of noisy realizations of the data instead of using pairs of noisy/clean images, as performed with Noise2Clean (N2C). Here we have adapted Noise2Noise to the 2D SMLM reconstruction problem, exploring different pair creation strategies (fixed and dynamic). The approach was applied to synthetic data and to real 2D SMLM data of actin filaments. This revealed that N2N can achieve reconstruction performances close to the Noise2Clean training strategy, without having access to the super-resolution images. This could open the way to further improvement in SMLM acquisition speed and reconstruction performance.
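    One natural "fixed" pair-creation strategy for SMLM is to split the localization list in two and render each half: the two renderings are independent noisy views of the same structure, which is exactly what Noise2Noise training needs. A toy sketch (our assumption of how pairs might be formed, not the paper's exact protocol):

```python
import numpy as np

def split_localizations(blinks, shape, rng):
    """Create a Noise2Noise training pair by randomly splitting the list
    of localized blinks into two halves and rendering each half as its own
    sparse reconstruction; both images share the underlying structure."""
    blinks = np.asarray(blinks)
    mask = rng.random(len(blinks)) < 0.5   # random 50/50 split

    def render(pts):
        img = np.zeros(shape)
        for y, x in pts:
            img[int(y), int(x)] += 1.0     # accumulate blink counts
        return img

    return render(blinks[mask]), render(blinks[~mask])
```

    A network trained to map one half-rendering to the other learns the clean structure, since the noise in the two halves is independent.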
  • Zero-Shot Adaptation to Simulate 3D Ultrasound Volume by Learning a Multilinear Separable 2D Convolutional Neural Network

    00:11:28
    Ultrasound imaging relies on sensing waves returned after interaction with scattering media present in biological tissues. An acoustic pulse transmitted by a single-element transducer dilates along the direction of propagation and is observed as a 1D point spread function (PSF) in A-mode imaging. In 2D B-mode imaging, a 1D array of transducer elements is used and dilation of the pulse is also observed along the direction of these elements, manifesting a 2D PSF. In 3D B-mode imaging using a 2D matrix of transducer elements, a 3D PSF is observed. Fast simulation of a 3D B-mode volume by way of convolutional transformer networks that learn the PSF family would require a training dataset of true 3D volumes, which are not readily available. Here we start in Stage 0 with a simple physics-based simulator in 3D to generate speckles from a tissue echogenicity map. Next, in Stage 1, we learn a multilinear separable 2D convolutional neural network using 1D convolutions to model the PSF family along the direction of ultrasound propagation and orthogonal to it. This is adversarially trained using a visual Turing test on 2D ultrasound images. Since the PSF is circularly symmetric about an axis parallel to the direction of wave propagation, we simulate the full 3D volume by alternating the direction of 1D convolution along the two axes that are mutually orthogonal to the direction of wave propagation. We validate performance using a visual Turing test with experts and distribution similarity measures.
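    The separable-PSF idea — modeling a 2D blur as two 1D convolutions along orthogonal axes — can be checked in a few lines of NumPy (illustrative; the paper learns the 1D kernels adversarially rather than fixing them):

```python
import numpy as np

def separable_blur(img, k_axial, k_lateral):
    """Apply a separable PSF as two 1D convolutions: one along the wave
    propagation axis (columns) and one along the transducer-array axis
    (rows). Equivalent to a 2D convolution with outer(k_axial, k_lateral)."""
    out = np.apply_along_axis(lambda c: np.convolve(c, k_axial, mode="same"), 0, img)
    out = np.apply_along_axis(lambda r: np.convolve(r, k_lateral, mode="same"), 1, out)
    return out
```

    Blurring a single-pixel impulse reproduces the outer product of the two 1D kernels, confirming that the 2D PSF factorizes; a 3D PSF is obtained by alternating the 1D convolution axis, as the abstract describes.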
  • Tracking of Particles in Fluorescence Microscopy Images Using a Spatial Distance Model for Brownian Motion

    00:14:44
    Automatic tracking of particles in fluorescence microscopy images is an important task for quantifying the dynamic behavior of subcellular and virus structures. We present a novel iterative approach for tracking multiple particles in microscopy data based on a spatial distance model derived under Brownian motion. Our approach exploits the information that the most likely object position at the next time point is at a certain distance from the current position. Information from all particles in a temporal image sequence is combined, and all motion-specific parameters are automatically estimated. Experiments using data from the Particle Tracking Challenge as well as real live-cell microscopy data displaying hepatocyte growth factor receptors and virus structures show that our approach outperforms previous methods.
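    Under 2D Brownian motion the step length is Rayleigh-distributed, which is why the most likely next position lies at a nonzero distance from the current one; a minimal sketch of such a linking likelihood (symbols D and dt are generic, not the paper's notation):

```python
import numpy as np

def rayleigh_link_likelihood(r, diff_coeff, dt):
    """Likelihood of a particle displacing by distance r in time dt under
    2D Brownian motion: the step length follows a Rayleigh distribution
    with per-axis variance sigma^2 = 2 * D * dt, so the mode is at r = sigma,
    not at r = 0."""
    s2 = 2.0 * diff_coeff * dt          # sigma^2 = 2 D dt
    return (r / s2) * np.exp(-r ** 2 / (2.0 * s2))
```

    Scoring candidate detections in the next frame with this likelihood, instead of plain nearest-neighbour distance, is the core of a distance-model-based linker.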
  • Improved Simultaneous Multi-Slice Imaging for Perfusion Cardiac MRI Using Outer Volume Suppression and Regularized Reconstruction

    00:11:52
    Perfusion cardiac MRI (CMR) is a radiation-free and non-invasive imaging tool which has gained increasing interest for the diagnosis of coronary artery disease. However, resolution and coverage are limited in perfusion CMR due to the necessity of single snap-shot imaging during the first-pass of a contrast agent. Simultaneous multi-slice (SMS) imaging has the potential for high acceleration rates with minimal signal-to-noise ratio (SNR) loss. However, its utility in CMR has been limited to moderate acceleration factors due to residual leakage artifacts from the extra-cardiac tissue such as the chest and the back. Outer volume suppression (OVS) with leakage-blocking reconstruction has been used to enable higher acceleration rates in perfusion CMR, but suffers from higher noise amplification. In this study, we sought to augment OVS-SMS/MB imaging with a regularized leakage-blocking reconstruction algorithm to improve image quality. Results from highly-accelerated perfusion CMR show that the method improves upon SMS-SPIRiT in terms of leakage reduction and split slice (ss)-GRAPPA in terms of noise mitigation.
  • Deep Feature Disentanglement Learning for Bone Suppression in Chest Radiographs

    00:10:16
    Suppression of bony structures in chest radiographs is essential for many computer-aided diagnosis tasks. In this paper, we propose a Disentanglement AutoEncoder (DAE) for bone suppression. As the projections of the 3D structures of bones and soft tissues overlap in 2D radiographs, their features are interwoven and need to be disentangled for effective bone suppression. Our DAE progressively separates the features of soft tissues from those of the bony structures during the encoder phase and reconstructs the soft-tissue image based on the disentangled soft-tissue features. Bone segmentation can be performed concurrently using the separated bony features through a separate multi-task branch. By training the model with multi-task supervision, we explicitly encourage the autoencoder to pay more attention to the locations of bones in order to avoid loss of soft-tissue information. The proposed method is shown to be effective in suppressing bone structures in chest radiographs with very few visual artifacts.
  • Learning to Segment Vessels from Poorly Illuminated Fundus Images

    00:14:47
    Segmentation of retinal vessels is important for determining various disease conditions, but deep learning approaches have been limited by the unavailability of large, publicly available, annotated datasets. This paper addresses this problem and analyses the performance of the U-Net architecture on the DRIVE and RIM-ONE datasets. A different approach to data augmentation using vignetting masks is presented to create more annotated fundus data. Unlike most prior efforts that attempt to transform poor images to match the images in a training set, our approach takes better-quality images (which have good expert labels) and transforms them to resemble poor-quality target images. We apply substantial vignetting masks to the DRIVE dataset and then train a U-Net on the resulting lower-quality images (using the corresponding expert label data). We quantitatively show that our approach leads to better-generalized networks, and we show qualitative performance improvements on RIM-ONE images (which lack expert labels).
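    A simple radial vignetting mask of the kind described can be written as follows (the strength parameter and quadratic falloff are illustrative choices, not the paper's exact mask):

```python
import numpy as np

def apply_vignette(img, strength=0.8):
    """Darken a 2D image toward its corners with a radial vignetting mask,
    making a well-lit fundus image resemble a poorly illuminated one while
    its expert vessel labels remain valid."""
    h, w = img.shape[:2]
    y, x = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)   # distance from center
    mask = 1.0 - strength * (r / r.max()) ** 2   # 1 at center, 1-strength at corners
    return img * mask
```

    Training on the degraded images with the original labels is what pushes the network to generalize to genuinely poor-quality targets.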
  • SynergyNet: A Fusion Framework for Multiple Sclerosis Brain MRI Segmentation with Local Refinement

    00:12:06
    The high irregularity of multiple sclerosis (MS) lesions in size and number often proves difficult for automated systems on the task of MS lesion segmentation. Current state-of-the-art MS segmentation algorithms employ either a global perspective or a patch-based local perspective. Although global image segmentation can obtain good results for medium to large lesions, its performance on smaller lesions lags behind. On the other hand, patch-based local segmentation disregards the spatial information of the brain. In this work, we propose SynergyNet, a network that segments MS lesions by fusing data from both global and local perspectives to improve segmentation across different lesion sizes. We achieve global segmentation by leveraging the U-Net architecture and implement local segmentation by augmenting U-Net with the Mask R-CNN framework. Sharing the lower layers between these two branches benefits end-to-end training and proves advantageous over a simple ensemble of the two frameworks. We evaluated our method on two separate datasets containing 765 and 21 volumes, respectively. Our proposed method improves the Dice score and lesion true positive rate by 2.55% and 5.0%, respectively, while reducing the false positive rate by over 20% on the first dataset, and improves the Dice score and lesion true positive rate by 10% and 32% on average on the second dataset. These results suggest that our framework for fusing local and global perspectives is beneficial for the segmentation of lesions with heterogeneous sizes.
  • Extracting Axial Depth and Trajectory Trend Using Astigmatism, Gaussian Fitting, and CNNs for Protein Tracking

    00:10:07
    Accurate analysis of vesicle trafficking in live cells is challenging for a number of reasons: varying appearance, complex protein movement patterns, and imaging conditions. To allow fast image acquisition, we study how an astigmatism can be employed to obtain additional information that could make tracking more robust. We present two approaches for measuring the z position of individual vesicles. Firstly, Gaussian curve fitting with CNN-based denoising is applied to infer the absolute depth around the focal plane of each localized protein. We demonstrate that adding denoising yields more accurate estimation of depth while preserving the overall structure of the localized proteins. Secondly, we investigate whether a custom CNN architecture can predict the axial trajectory trend. We demonstrate that this method performs well on calibration bead data without the need for denoising. By incorporating the obtained depth information into a trajectory analysis, we demonstrate the potential improvement in vesicle tracking.
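    The astigmatism cue works because the spot's x and y widths diverge away from focus; a moment-based width estimate (a simplified stand-in for the paper's Gaussian curve fitting) looks like:

```python
import numpy as np

def spot_widths(img):
    """Moment-based estimate of a spot's x and y widths. With an astigmatic
    lens the ratio sigma_x / sigma_y varies monotonically with axial depth,
    which (after calibration) is the cue exploited for z estimation."""
    img = np.asarray(img, dtype=float)
    img = img / img.sum()                       # normalize to a distribution
    y, x = np.indices(img.shape)
    mx, my = (img * x).sum(), (img * y).sum()   # centroid
    sx = np.sqrt((img * (x - mx) ** 2).sum())   # second moments
    sy = np.sqrt((img * (y - my) ** 2).sum())
    return sx, sy
```

    In practice the width ratio is mapped to z through a calibration curve measured on beads, which is also why calibration bead data appears in the evaluation.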
  • Predicting Longitudinal Cognitive Scores Using Baseline Imaging and Clinical Variables

    00:15:04
    Predicting the future course of a disease with limited information is an essential but challenging problem in health care. For older adults, especially the ones suffering from Alzheimer's disease, accurate prediction of their longitudinal trajectories of cognitive decline can facilitate appropriate prognostic clinical action. Increasing evidence has shown that longitudinal brain imaging data can aid in the prediction of cognitive trajectories. However, in many cases, only a single (baseline) measurement from imaging is available for prediction. We propose a novel model for predicting the trajectory of cognition, using only a baseline measurement, by leveraging the temporal dependence in cognition. On both a synthetic dataset and a real-world dataset, we demonstrate that our model is superior to prior approaches in predicting cognition trajectory over the next five years. We show that the model's ability to capture nonlinear interaction between features leads to improved performance. Further, the proposed model achieved significantly improved trajectory prediction in subjects at higher risk of cognitive decline (those with genetic risk and worse clinical profiles at baseline), highlighting its clinical utility.
  • Adversarial-Based Domain Adaptation Networks for Unsupervised Tumour Detection in Histopathology

    00:13:51
    Developing effective deep learning models for histopathology applications is challenging, as the performance depends on large amounts of labelled training data, which is often unavailable. In this work, we address this issue by leveraging previously annotated histopathology images from unrelated source domains to build a model for the unlabelled target domain. Specifically, we propose the adversarial-based domain adaptation networks (ABDA-Net) for performing the tumour detection task in histopathology in a purely unsupervised manner. This methodology successfully promoted the alignment of the source and target feature distributions among independent datasets of three tumour types - Breast, Lung and Colon - to achieve an improvement of at least 17.51% in accuracy and 18.22% in area under the curve (AUC) when compared to a classifier trained on the source data only.
  • Learning to Detect Brain Lesions from Noisy Annotations

    00:09:06
    Supervised training of deep neural networks in medical imaging applications relies heavily on expert-provided annotations. These annotations, however, are often imperfect, as voxel-by-voxel labeling of structures on 3D images is difficult and laborious. In this paper, we focus on one common type of label imperfection, namely, false negatives. Focusing on brain lesion detection, we propose a method to train a convolutional neural network (CNN) to segment lesions while simultaneously improving the quality of the training labels by identifying false negatives and adding them to the training labels. To identify lesions missed by annotators in the training data, our method makes use of the 1) CNN predictions, 2) prediction uncertainty estimated during training, and 3) prior knowledge about lesion size and features. On a dataset of 165 scans of children with tuberous sclerosis complex from five centers, our method achieved better lesion detection and segmentation accuracy than the baseline CNN trained on the noisy labels, and than several alternative techniques.
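    The label-correction rule — promote voxels where the network is confident and certain but the annotation is negative — can be sketched as follows (thresholds are illustrative; the paper additionally uses lesion size and feature priors):

```python
import numpy as np

def relabel_false_negatives(labels, probs, uncertainty,
                            p_thresh=0.9, u_thresh=0.1):
    """Flag voxels the annotator labelled negative but where the CNN is both
    confident (high predicted probability) and certain (low uncertainty) as
    candidate missed lesions, and add them to the training labels."""
    candidates = (labels == 0) & (probs > p_thresh) & (uncertainty < u_thresh)
    return np.where(candidates, 1, labels)
```

    Requiring low uncertainty as well as high probability is what keeps the loop from amplifying the network's own mistakes.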
  • Multiple Instance Learning Via Deep Hierarchical Exploration for Histology Image Classification

    00:11:54
    We present a fast hierarchical method to detect the presence of cancerous tissue in histological images. The image is not examined in detail everywhere but only inside several small regions of interest, called glimpses. The final classification is done by aggregating classification scores from a CNN on leaf glimpses at the highest resolution. Unlike in existing attention-based methods, the glimpses form a tree structure, with low-resolution glimpses determining the locations of several higher-resolution glimpses using weighted sampling and a CNN approximation of the expected scores. We show that it is possible to perform the classification with just a small number of glimpses, leading to a substantial speedup with only a small performance deterioration. Learning is possible using image labels only, as in the multiple instance learning (MIL) setting.
  • Multi-Branch Deformable Convolutional Neural Network with Label Distribution Learning for Fetal Brain Age Prediction

    00:13:15
    MRI-based fetal brain age prediction is crucial for fetal brain development analysis and early diagnosis of congenital anomalies. The locations and orientations of the fetal brain are highly variable and disturbed by adjacent organs, imposing great challenges on fetal brain age prediction. To address this problem, we propose an effective framework based on a deformable convolutional neural network for fetal brain age prediction. Considering the insufficiency of data, we introduce label distribution learning (LDL), which is able to deal with the small-sample problem, and integrate the LDL information into our end-to-end network. Moreover, to fully utilize the complementary multi-view data of fetal brain MRI stacks, a multi-branch CNN is proposed to aggregate multi-view information. We evaluate our method on a fetal brain MRI dataset of 289 subjects and achieve promising age prediction performance.
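    Label distribution learning replaces the scalar age target with a soft distribution over neighboring ages; a common Gaussian construction is sketched below (illustrative: the discretization and sigma are our assumptions, not stated in the abstract):

```python
import numpy as np

def age_label_distribution(true_age, ages, sigma=1.0):
    """Turn a scalar age label into a discrete Gaussian label distribution
    over candidate ages, so that nearby ages also contribute supervision —
    the usual LDL construction for small-sample regression."""
    d = np.exp(-(ages - true_age) ** 2 / (2.0 * sigma ** 2))
    return d / d.sum()   # normalize to a probability distribution
```

    The network is then trained to match this distribution (e.g. with a KL-divergence loss) instead of regressing the single scalar, which softens the supervision when data are scarce.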
  • A Univariate Persistent Brain Network Feature Based on the Aggregated Cost of Cycles from the Nested Filtration Networks

    00:14:42
    A threshold-free feature in brain network analysis can help circumvent the curse of arbitrary network thresholding for binary network conversion. Here, Persistent Homology inspires a new aggregated cost based on the number of cycles, i.e., tracking the first Betti number in a nested filtration of networks within the graph. Our theoretical analysis shows that the proposed aggregated cost of cycles (ACC) is monotonically increasing, and we thus define a univariate persistent feature based on the shape of the ACC. The proposed statistic has advantages over the First Betti Number Plot (BNP1), which only tracks the total number of cycles at each filtration. We show that our method is sensitive to both the topology of modular networks and the difference in the number of cycles in a network. Our method outperforms its counterparts on a synthetic dataset, while on a real-world one it achieves results comparable with the BNP1. Our proposed framework enriches univariate measures for discovering brain network dissimilarities for better categorization of distinct stages in Alzheimer's Disease (AD).
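    The first Betti number that both ACC and BNP1 track counts independent cycles, B1 = |E| - |V| + #components; a union-find sketch of tracking it over a nested filtration (our illustration, not the authors' code):

```python
def betti1(n_nodes, edges):
    """First Betti number (number of independent cycles) of an undirected
    graph: B1 = |E| - |V| + #connected components, via union-find."""
    parent = list(range(n_nodes))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    comps = n_nodes
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return len(edges) - n_nodes + comps

def bnp1_curve(n_nodes, weighted_edges, thresholds):
    """Track B1 across a nested filtration: at each threshold keep only the
    edges at least that strong (the curve the BNP1 statistic is built from)."""
    return [betti1(n_nodes, [(u, v) for u, v, w in weighted_edges if w >= t])
            for t in thresholds]
```

    Because lowering the threshold only ever adds edges, the filtration is nested and B1 is non-decreasing along it, which is the monotonicity the ACC aggregation relies on.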
  • Weakly-Supervised Brain Tumor Classification with Global Diagnosis Label

    00:12:55
    There is an increasing need for efficient and automatic evaluation of brain tumors on magnetic resonance images (MRI). Most of the previous works focus on segmentation, registration, and growth modeling of the most common primary brain tumor gliomas, or the classification of up to three types of brain tumors. In this work, we extend the study to eight types of brain tumors where only global diagnosis labels are given but not the slice-level labels. We propose a weakly supervised method and demonstrate that inferring disease types at the slice-level would help the global label prediction. We also provide an algorithm for feature extraction via randomly choosing connection paths through class-specific autoencoders with dropout to accommodate the small-dataset problem. Experimental results on both public and proprietary datasets are compared to the baseline methods. The classification with the weakly supervised setting on the proprietary data, consisting of 295 patients with eight different tumor types, shows close results to the upper bound in the supervised learning setting.
  • Automated Instance Segmentation and Keypoint Detection for Spine Imaging Analysis

    00:04:14
    0 views
    1. INTRODUCTION: Individuals diagnosed with degenerative bone diseases such as osteoporosis are more susceptible to vertebral fractures, which comprise almost 50% of all osteoporotic fractures in the United States per year. Thoracolumbar vertebral body fractures can be classified as wedge, biconcave, or crush, depending on the anterior (Ha), middle (Hm), and posterior (Hp) heights of each vertebral body. However, determining these height measurements in the clinical workflow is time-consuming and resource-intensive. Instance segmentation and keypoint detection network designs offer the ability to determine Ha, Hm, and Hp in order to classify deformities according to the semi- or fully quantitative method. We investigated the accuracy of such algorithms for analyzing sagittal spine CT and MR images. 2. MATERIALS AND METHODS: Sagittal spine MRI (998) and CT scans (35) were split into training and testing data. The training set was augmented (±15° rotation, ±30% contrast and brightness, random cropping) to a final size of 5667 vertebrae (1269 CT and 4398 MR). A testing set of 238 MR and 15 CT scans was used to evaluate the neural network. Mask R-CNN (an architecture for basic instance segmentation) was modified to include a 2D U-Net head (for better segmentation) along with a Keypoint R-CNN network to detect 6 relevant vertebral keypoints (network design in Figure 1, with one network per imaging modality). The effectiveness of the neural network was measured using two parameters: Dice score (ranging from 0 to 1, where 1 means the predicted overlay matches the manually segmented ground truth) and keypoint error distance (distance of the predicted keypoint location from the manually labeled reference point). 3. RESULTS: The neural network achieved an overall Dice coefficient of 0.968 and a keypoint error distance of 0.984 millimeters on the testing dataset. Mean percent error in Ha, Hm, and Hp height calculations (based on keypoints) was 0.13%. The neural network processed each scan slice in a mean time of 1.492 seconds. 4. CONCLUSIONS: The neural network was able to determine morphometric measurements for detecting spinal fractures with high accuracy on sagittal MR and CT images. This approach could simplify screening, detection of changes, and surgical planning in patients with vertebral deformities and fractures by reducing the burden on radiologists who currently make these measurements manually.
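The two evaluation parameters defined above can be computed directly; a small sketch, where `mm_per_pixel` is an assumed calibration factor not stated in the abstract:

```python
import numpy as np

def dice(pred_mask, gt_mask):
    """Dice score: 2|A n B| / (|A| + |B|); 1 means the predicted overlay
    matches the manually segmented ground truth exactly."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def keypoint_error(pred_pts, gt_pts, mm_per_pixel=1.0):
    """Mean Euclidean distance (in mm) between predicted keypoints and the
    manually labeled reference points."""
    diffs = np.asarray(pred_pts, float) - np.asarray(gt_pts, float)
    return float(np.linalg.norm(diffs, axis=1).mean() * mm_per_pixel)
```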
  • Generative union of surfaces model: Deep architectures re-explained

    00:06:50
    3 views
    In signal processing, the union of subspaces (UoS) model is widely used to represent a signal. This model provides the foundation for low-rank methods, dictionary learning, and related techniques, which are usually used as priors in compressed sensing. These methods are well understood, with rich theoretical guarantees, but their performance has been challenged by deep architectures. In this work, we develop a generative model called the union of surfaces (UoSs) model, which can enjoy the benefits of both classical methods and deep architectures. We develop this generative model by discussing a) how to learn the generative surfaces from training data, and b) how to learn a function from training data.
  • Localization of the Epileptogenic Zone Using Virtual Resection of Magnetoencephalography (MEG)- Based Brain Networks

    00:06:20
    0 views
    About two-thirds of patients with drug-resistant epilepsy (DRE) achieve seizure freedom after resection of the epileptogenic zone (EZ). Functional connectivity (FC) analysis may be valuable for increasing the success rate. A spectral graph measure based on virtual resection called control centrality (CoC) has been used with ictal electrocorticography to predict postoperative seizure outcome. Our study investigated whether CoC can be used with resting-state magnetoencephalography (rs-MEG) to localize the EZ. The performance of CoC was compared to that of another spectral measure called eigenvector centrality (EVC). EVC had greater sensitivity while CoC had greater specificity in localizing the EZ. This suggests that these measures are complementary and may be valuable for pre-surgical evaluations.
  • Weakly Supervised Vulnerable Plaques Detection by IVOCT Image

    00:08:57
    0 views
    Vulnerable plaque is a major factor leading to the onset of acute coronary syndrome (ACS), and accordingly, the detection of vulnerable plaques (VPs) could guide cardiologists to provide appropriate surgical treatments before the occurrence of an event. In general, hundreds of images are acquired for each patient during surgery, hence a fast and accurate automatic detection algorithm is needed. However, unlike diagnosis, VP detection requires extensive annotation of the lesion's boundary by an expert practitioner. Therefore, in this paper, a multiple instance learning-based method is proposed to locate VPs with image-level labels only. In the proposed method, the clip proposal module, the feature extraction module, and the detection module are integrated to recognize VPs and detect the lesion area. Finally, experiments are performed on the 2017 IVOCT dataset to examine the task of weakly supervised detection of VPs. Although the bounding boxes of VPs are not used, the proposed method yields performance comparable with supervised learning methods.
  • How to Extract More Information with Less Burden: Fundus Image Classification and Retinal Disease Localization with Ophthalmologist Intervention

    00:10:36
    0 views
    Image classification using deep convolutional neural networks (DCNN) achieves competitive performance compared with other state-of-the-art methods. Here, attention can be visualized as a heatmap to improve the explainability of the DCNN. We generate initial heatmaps using gradient-weighted class activation mapping (Grad-CAM). If we assume these Grad-CAM heatmaps reveal the lesion regions well, we apply attention mining to them; if we instead assume they do not reveal the lesion regions well, we apply a dissimilarity loss to them. In this study, we asked ophthalmologists to select 30% of the heatmaps. Furthermore, we design a knowledge preservation (KP) loss to minimize the discrepancy between heatmaps generated from the updated network and the selected heatmaps. Experiments revealed that our method improved accuracy from 90.1% to 96.2%. We also found that the attention regions are closer to the ground-truth lesion regions.
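Grad-CAM heatmaps follow a standard recipe: weight each feature map of the target convolutional layer by its spatially averaged gradient, sum over channels, and rectify. A framework-agnostic numpy sketch, assuming the features and gradients have already been extracted from the network:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM: weight each feature map by its spatially averaged gradient,
    sum over channels, apply ReLU, and normalize to [0, 1].
    feature_maps, gradients: arrays of shape (channels, H, W) taken from
    the target convolutional layer."""
    weights = gradients.mean(axis=(1, 2))              # (C,) channel weights
    cam = np.tensordot(weights, feature_maps, axes=1)  # (H, W) weighted sum
    cam = np.maximum(cam, 0.0)                         # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize for display
    return cam
```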
  • Liver Segmentation in CT with MRI Data: Zero-Shot Domain Adaptation by Contour Extraction and Shape Priors

    00:11:13
    0 views
    In this work we address the problem of domain adaptation for segmentation tasks with deep convolutional neural networks. We focus on managing the domain shift from MRI to CT volumes on the example of 3D liver segmentation. Domain adaptation between modalities is of particular practical importance, as different hospital departments usually tend to use different imaging modalities and protocols in their clinical routine. Thus, training a model with source data from one department may not be sufficient for application in another institution. Most adaptation strategies make use of target domain samples and often additionally incorporate the corresponding ground truths from the target domain during the training process. In contrast to these approaches, we investigate the possibility of training our model solely on source domain datasets, i.e. we apply zero-shot domain adaptation. To compensate for the missing target domain data, we use prior knowledge about both modalities to steer the model towards more general features during the training process. In particular, we make use of fixed Sobel kernels to enhance contour information and apply anatomical priors, learned separately by a convolutional autoencoder. Although we entirely exclude the target domain from the training process, our proposed approach improves a vanilla U-Net implementation drastically and yields promising segmentation results.
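The fixed Sobel kernels used for contour enhancement are the standard 3x3 filters; a minimal numpy sketch of the resulting gradient-magnitude map ('valid' correlation, loop-based for clarity):

```python
import numpy as np

# Fixed (non-learned) Sobel kernels: a modality-independent contour cue.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Per-pixel edge strength of a 2D image via 'valid' correlation with
    the Sobel kernels; output shape is (H-2, W-2)."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * SOBEL_X).sum()  # horizontal gradient
            gy[i, j] = (patch * SOBEL_Y).sum()  # vertical gradient
    return np.hypot(gx, gy)
```

Feeding such contour maps (rather than raw intensities) to the network is what pushes it toward features shared by MRI and CT.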
  • Estimation of Cell Cycle States of Human Melanoma Cells with Quantitative Phase Imaging and Deep Learning

    00:14:06
    0 views
    Visualization and classification of cell cycle stages in live cells requires the introduction of transient or stably expressing fluorescent markers. This is not feasible for all cell types, and can be time consuming to implement. Labelling of living cells also has the potential to perturb normal cellular function. Here we describe a computational strategy to estimate core cell cycle stages without markers by taking advantage of features extracted from information-rich ptychographic time-lapse movies. We show that a deep-learning approach can estimate the cell cycle trajectories of individual human melanoma cells from short 3-frame (~23 minute) snapshots, and can identify cell cycle arrest induced by chemotherapeutic agents targeting melanoma driver mutations.
  • Discovering Salient Anatomical Landmarks by Predicting Human Gaze

    00:14:21
    0 views
    Anatomical landmarks are a crucial prerequisite for many medical imaging tasks. Usually, the set of landmarks for a given task is predefined by experts. The landmark locations for a given image are then annotated manually or via machine learning methods trained on manual annotations. In this paper, in contrast, we present a method to automatically discover and localize anatomical landmarks in medical images. Specifically, we consider landmarks that attract the visual attention of humans, which we term visually salient landmarks. We illustrate the method for fetal neurosonographic images. First, full-length clinical fetal ultrasound scans are recorded with live sonographer gaze-tracking. Next, a convolutional neural network (CNN) is trained to predict the gaze point distribution (saliency map) of the sonographers on scan video frames. The CNN is then used to predict saliency maps of unseen fetal neurosonographic images, and the landmarks are extracted as the local maxima of these saliency maps. Finally, the landmarks are matched across images by clustering the landmark CNN features. We show that the discovered landmarks can be used within affine image registration, with average landmark alignment errors between 4.1% and 10.9% of the fetal head long axis length.
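Extracting landmarks as local maxima of a predicted saliency map can be sketched as follows; the window size and threshold are illustrative choices, not values from the paper:

```python
import numpy as np

def saliency_landmarks(saliency, win=1, threshold=0.5):
    """Extract landmarks as local maxima of a saliency map: pixels above
    `threshold` that are the unique maximum of their (2*win+1)^2 window."""
    h, w = saliency.shape
    peaks = []
    for i in range(h):
        for j in range(w):
            v = saliency[i, j]
            if v < threshold:
                continue
            window = saliency[max(0, i - win):i + win + 1,
                              max(0, j - win):j + win + 1]
            if v >= window.max() and (window == v).sum() == 1:
                peaks.append((i, j))  # strict local maximum
    return peaks
```

The uniqueness check rejects flat plateaus, so each detected landmark is a single well-defined point to feed into the downstream clustering and registration steps.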
  • Combining Shape Priors with Conditional Adversarial Networks for Improved Scapula Segmentation in MR Images

    00:14:41
    0 views
    This paper proposes an automatic method for scapula bone segmentation from Magnetic Resonance (MR) images using deep learning. The purpose of this work is to incorporate anatomical priors into a conditional adversarial framework, given a limited amount of heterogeneous annotated images. Our approach encourages the segmentation model to follow the global anatomical properties of the underlying anatomy through a learnt non-linear shape representation while the adversarial contribution refines the model by promoting realistic delineations. These contributions are evaluated on a dataset of 15 pediatric shoulder examinations, and compared to state-of-the-art architectures including UNet and recent derivatives. The significant improvements achieved bring new perspectives for the pre-operative management of musculo-skeletal diseases.
  • Cone-Angle Artifact Removal Using Differentiated Backprojection Domain Deep Learning

    00:13:51
    0 views
    For circular-trajectory cone-beam CT, the Feldkamp-Davis-Kress (FDK) algorithm is widely used for reconstruction. However, the cone-angle artifacts this algorithm produces are fatal to image quality. Several model-based iterative reconstruction methods exist for cone-angle artifact removal, but they usually require repeated applications of computationally expensive forward and backward projections. In this paper, we propose a novel deep learning approach for cone-angle artifact removal in the differentiated backprojection domain, which performs a data-driven inversion of an ill-posed deconvolution problem related to the Hilbert transform. The reconstruction results along the coronal and sagittal directions are then combined by a spectral blending technique to minimize spectral leakage. Experimental results show that our method provides superior performance to existing methods.
  • DRU-net: An Efficient Deep Convolutional Neural Network for Medical Image Segmentation

    00:11:45
    0 views
    Residual networks (ResNet) and densely connected networks (DenseNet) have significantly improved the training efficiency and performance of deep convolutional neural networks (DCNNs), mainly for object classification tasks. In this paper, we propose an efficient network architecture that combines the advantages of both. The proposed method is integrated into an encoder-decoder DCNN model for medical image segmentation. Our method adds additional skip connections compared to ResNet but uses significantly fewer model parameters than DenseNet. We evaluate the proposed method on a public dataset (ISIC 2018 grand challenge) for skin lesion segmentation and a local brain MRI dataset. In comparison with ResNet-based, DenseNet-based, and attention network (AttnNet) based methods within the same encoder-decoder network structure, our method achieves significantly higher segmentation accuracy with fewer model parameters than DenseNet and AttnNet. The code is available on GitHub: https://github.com/MinaJf/DRU-net.
  • Leveraging Adaptive Color Augmentation in Convolutional Neural Networks for Deep Skin Lesion Segmentation

    00:13:49
    0 views
    Fully automatic detection of skin lesions in dermatoscopic images can facilitate early diagnosis and repression of malignant melanoma and non-melanoma skin cancer. Although convolutional neural networks are a powerful solution, they are limited by the illumination spectrum of annotated dermatoscopic screening images, where color is an important discriminative feature. In this paper, we propose an adaptive color augmentation technique to amplify data expression and model performance, while regulating color difference and saturation to minimize the risks of using synthetic data. Through deep visualization, we qualitatively identify and verify the semantic structural features learned by the network for discriminating skin lesions against normal skin tissue. The overall system achieves a Dice Ratio of 0.891 with 0.943 sensitivity and 0.932 specificity on the ISIC 2018 Testing Set for segmentation.
  • Interpreting Medical Image Classifiers by Optimization Based Counterfactual Impact Analysis

    00:14:22
    0 views
    Clinical applicability of automated decision support systems depends on a robust, well-understood classification interpretation. Artificial neural networks, while achieving class-leading scores, fall short in this regard. Therefore, numerous approaches have been proposed that map a salient region of an image to a diagnostic classification. Utilizing heuristic methodology, like blurring and noise, they tend to produce diffuse, sometimes misleading results, hindering their general adoption. In this work we overcome these issues by presenting a model-agnostic saliency mapping framework tailored to medical imaging. We replace heuristic techniques with a strong neighborhood-conditioned inpainting approach, which avoids anatomically implausible artefacts. We formulate saliency attribution as a map-quality optimization task, enforcing constrained and focused attributions. Experiments on public mammography data show quantitatively and qualitatively more precise localization and clearer results than existing state-of-the-art methods.
  • In Silico Prediction of Cell Traction Forces

    00:12:13
    0 views
    Traction Force Microscopy (TFM) is a technique used to determine the tensions that a biological cell conveys to the underlying surface. Typically, TFM requires culturing cells on gels with fluorescent beads, followed by bead displacement calculations. We present a new method that predicts those forces from a regular fluorescent image of the cell. Using deep learning, we trained a Bayesian neural network adapted for pixel regression of the forces and show that it generalises to different cells of the same strain. The predicted forces are computed along with an approximate uncertainty, which indicates whether the prediction is trustworthy. The proposed method could help estimate forces when bead displacement calculations are non-trivial, and can also free one of the fluorescent channels of the microscope. Code is available at https://github.com/wahlby-lab/InSilicoTFM.
  • Deep Learning Based Segmentation of Body Parts in CT Localizers and Application to Scan Planning

    00:14:46
    0 views
    In this paper, we propose a deep learning approach for the segmentation of body parts in computer tomography (CT) localizer images. Such images pose difficulties in the automatic image analysis on account of variable field-of-view, diverse patient positioning and image acquisition at low dose, but are of importance pertaining to their most prominent applications in scan planning and dose modulation. Following the success of deep learning technology in image segmentation applications, we investigate the use of a fully convolutional neural network architecture to achieve the segmentation of four anatomies: abdomen, chest, pelvis and brain. The method is further extended to generate plan boxes for individual as well as multiple combined anatomies, and compared against the existing techniques. The performance of the method is evaluated on 771 multi-site localizer images.
  • Fully Automatic Computer-aided Mass Detection and Segmentation via Pseudo-color Mammograms and Mask R-CNN

    00:11:43
    0 views
    Mammographic mass detection and segmentation are usually performed as serial and separate tasks, with segmentation in previous studies often performed only on manually confirmed true positive detections. We propose a fully integrated computer-aided detection (CAD) system for simultaneous mammographic mass detection and segmentation without user intervention. The proposed CAD consists only of a pseudo-color image generation stage and a mass detection-segmentation stage based on Mask R-CNN. Grayscale mammograms are transformed into pseudo-color images based on multi-scale morphological sifting, in which mass-like patterns are enhanced to improve the performance of Mask R-CNN. Transfer learning with Mask R-CNN is then adopted to simultaneously detect and segment masses on the pseudo-color images. Evaluated on the public dataset INbreast, the method outperforms the state-of-the-art by achieving an average true positive rate of 0.90 at 0.9 false positives per image and an average Dice similarity index of 0.88 for mass segmentation.
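The paper's multi-scale morphological sifting is more elaborate, but the core idea of turning a grayscale mammogram into a pseudo-color image by stacking bright-blob enhancements at several scales can be sketched with white top-hats (image minus grey opening) mapped to RGB channels; the radii and function names here are illustrative:

```python
import numpy as np

def _grey_opening(img, r):
    """Grey opening: erosion (window min) then dilation (window max) with
    a (2r+1) x (2r+1) square structuring element (loop-based for clarity)."""
    def _filt(src, op):
        out = np.empty_like(src)
        h, w = src.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = op(src[max(0, i - r):i + r + 1,
                                   max(0, j - r):j + r + 1])
        return out
    return _filt(_filt(img, np.min), np.max)

def pseudo_color(gray, radii=(1, 2, 4)):
    """Map white top-hats (image minus grey opening) at three scales to the
    R, G, B channels, enhancing bright mass-like blobs of increasing size."""
    return np.stack([gray - _grey_opening(gray, r) for r in radii], axis=-1)
```

A bright blob smaller than a given radius is removed by the opening at that scale and therefore survives in the corresponding top-hat channel.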
  • A CNN Framework Based on Line Annotations for Detecting Nematodes in Microscopic Images

    00:13:25
    1 view
    Plant parasitic nematodes cause damage to crop plants on a global scale. Robust detection on image data is a prerequisite for monitoring such nematodes, as well as for many biological studies involving the nematode C. elegans, a common model organism. Here, we propose a framework for detecting worm-shaped objects in microscopic images that is based on convolutional neural networks (CNNs). We annotate nematodes with curved lines along the body, which is more suitable for worm-shaped objects than bounding boxes. The trained model predicts worm skeletons and body endpoints. The endpoints serve to untangle the skeletons, from which segmentation masks are reconstructed by estimating the body width at each location along the skeleton. With light-weight backbone networks, we achieve 75.85% precision and 73.02% recall on a potato cyst nematode dataset, and 84.20% precision and 85.63% recall on a public C. elegans dataset.
  • Adaptive Regularization for Three-Dimensional Optical Diffraction Tomography

    00:13:20
    0 views
    Optical diffraction tomography (ODT) allows one to quantitatively measure the distribution of the refractive index of the sample. It relies on the resolution of an inverse scattering problem. Due to the limited range of views as well as optical aberrations and speckle noise, the quality of ODT reconstructions is usually better in lateral planes than in the axial direction. In this work, we propose an adaptive regularization to mitigate this issue. We first learn a dictionary from the lateral planes of an initial reconstruction that is obtained with a total-variation regularization. This dictionary is then used to enhance both the lateral and axial planes within a final reconstruction step. The proposed pipeline is validated on real data using an accurate nonlinear forward model. Comparisons with standard reconstructions are provided to show the benefit of the proposed framework.
  • Simultaneous Classification and Segmentation of Intracranial Hemorrhage Using a Fully Convolutional Neural Network

    00:12:59
    0 views
    Intracranial hemorrhage (ICH) is a critical disease that requires immediate diagnosis and treatment. Accurate detection, subtype classification and volume quantification of ICH are critical aspects in ICH diagnosis. Previous studies have applied deep learning techniques for ICH analysis but usually tackle the aforementioned tasks in a separate manner without taking advantage of information sharing between tasks. In this paper, we propose a multi-task fully convolutional network, ICHNet, for simultaneous detection, classification and segmentation of ICH. The proposed framework utilizes the inter-slice contextual information and has the flexibility in handling various label settings and task combinations. We evaluate the performance of our proposed architecture using a total of 1176 head CT scans and show that it improves the performance of both classification and segmentation tasks compared with single-task and baseline models.
  • 3D Conditional Adversarial Learning for Synthesizing Microscopic Neuron Image Using Skeleton-To-Neuron Translation

    00:14:49
    1 view
    The automatic reconstruction of single neuron cells from microscopic images is essential to enabling large-scale data-driven investigations in neuron morphology research. However, the performance of single neuron reconstruction algorithms is constrained by both the quantity and the quality of the annotated 3D microscopic images, since annotating single neuron models is highly labour-intensive. We propose a framework for synthesizing microscopy-realistic 3D neuron images from simulated single neuron skeletons using conditional Generative Adversarial Networks (cGAN). We build the generator network with multi-resolution sub-modules to improve the output fidelity. We evaluate our framework on the Janelia-Fly dataset from the BigNeuron project. With both qualitative and quantitative analysis, we show that the proposed framework outperforms other state-of-the-art methods regarding the quality of the synthetic neuron images. We also show that combining real neuron images with the synthetic images generated by our framework can improve the performance of neuron segmentation.
  • Learning Optimal Shape Representations for Multi-Modal Image Registration

    00:13:46
    0 views
    In this work, we present a new strategy for the multi-modal registration of atypical structures with boundaries that are difficult to define in medical imaging (e.g. lymph nodes). Instead of using a standard Mutual Information (MI) similarity metric alone, we propose to combine MI with Modality Independent Neighbourhood Descriptors (MIND), which can help enhance the organs of interest relative to their adjacent structures. Our key contribution is then to learn the MIND parameters that optimally represent specific registered structures. As we register atypical organs, neural-network approaches requiring large databases of annotated training data cannot be used. We instead strongly constrain our learning problem using the MIND formalism, so that the optimal representation of images depends on a limited number of parameters. In our results, pure MI-based registration is compared with MI-MIND registration on 3D synthetic images and CT/MR images, with MI-MIND leading to improved structure overlaps. To our knowledge, this is the first time that MI-MIND has been evaluated, and it appears relevant for multi-modal registration.
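The mutual information term at the heart of this similarity metric can be estimated from the joint intensity histogram of two aligned images; a minimal sketch (the bin count is an illustrative choice):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate MI between two aligned images from their joint intensity
    histogram: MI = sum_xy p(x,y) * log( p(x,y) / (p(x) p(y)) )."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)  # marginal of image a
    py = pxy.sum(axis=0)  # marginal of image b
    nz = pxy > 0          # zero-probability cells contribute 0 to the sum
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
```

MI is maximal when one image's intensities predict the other's, which is why it tolerates the different intensity mappings of CT and MR.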
  • Automatic Quantification of Pulmonary Fissure Integrity: A Repeatability Analysis

    00:09:22
    0 views
    The pulmonary fissures divide the lungs into lobes and can vary widely in shape, appearance, and completeness. Fissure completeness, or integrity, has been studied to assess relationships with airway function measurements, chronic obstructive pulmonary disease (COPD) progression, and collateral ventilation between lobes. Fissure integrity measured from computed tomography (CT) images is already used as a non-invasive method to screen emphysema patients for endobronchial valve treatment, as the procedure is not effective when collateral ventilation is present. We describe a method for automatically computing fissure integrity from lung CT images. Our method is tested using 60 subjects from a COPD study. We examine the repeatability of fissure integrity measurements across inspiration and expiration images, assess changes in fissure integrity over time using a longitudinal dataset, and explore fissure integrity's relationship with COPD severity.
  • Stimulus Speech Decoding from Human Cortex with Generative Adversarial Network Transfer Learning

    00:14:55
    0 views
    Decoding auditory stimulus from neural activity can enable neuroprosthetics and direct communication with the brain. Some recent studies have shown successful speech decoding from intracranial recordings using deep learning models. However, scarcity of training data leads to low-quality speech reconstruction, which prevents a complete brain-computer interface (BCI) application. In this work, we propose a transfer learning approach with a pre-trained GAN to disentangle representation and generation layers for decoding. We first pre-train a generator to produce spectrograms from a representation space using a large corpus of natural speech data. With a small amount of paired data containing the stimulus speech and corresponding ECoG signals, we then transfer it to a larger network with an encoder attached in front, which maps the neural signal to the representation space. To further improve the network's generalization ability, we introduce a Gaussian prior distribution regularizer on the latent representation during the transfer phase. With at most 150 training samples for each tested subject, we achieve state-of-the-art decoding performance. By visualizing the attention mask embedded in the encoder, we observe brain dynamics that are consistent with findings from previous studies investigating dynamics in the superior temporal gyrus (STG), pre-central gyrus (motor) and inferior frontal gyrus (IFG). Our findings demonstrate high reconstruction accuracy using deep learning networks together with the potential to elucidate interactions across different brain regions during a cognitive task.
  • A One-Shot Learning Framework for Assessment of Fibrillar Collagen from Second Harmonic Generation Images of an Infarcted Myocardium

    00:11:24
    0 views
    Myocardial infarction (MI) is the medical term for a heart attack. In this study, we combine induction of highly specific second harmonic generation (SHG) signals from non-centrosymmetric macromolecules such as fibrillar collagens together with two-photon excited cellular autofluorescence in infarcted mouse heart to quantitatively probe fibrosis, especially targeted at an early stage after MI. We present robust one-shot machine learning algorithms that enable determination of spatially resolved 2D structural organization of collagen as well as structural morphologies in heart tissues post-MI with spectral specificity and sensitivity. Detection, evaluation, and precise quantification of the extent of fibrosis at an early stage would guide the development of treatment therapies that may prevent further progression and determine heart transplant needs for patient survival.
  • Removing Structured Noise with Self-Supervised Blind-Spot Networks

    00:11:04
    0 views
    Removal of noise from fluorescence microscopy images is an important first step in many biological analysis pipelines. Current state-of-the-art supervised methods employ convolutional neural networks that are trained with clean (ground-truth) images. Recently, it was shown that self-supervised image denoising with blind spot networks achieves excellent performance even when ground-truth images are not available, as is common in fluorescence microscopy. However, these approaches, e.g. Noise2Void (N2V), generally assume pixel-wise independent noise, thus limiting their applicability in situations where spatially correlated (structured) noise is present. To overcome this limitation, we present Structured Noise2Void (StructN2V), a generalization of blind spot networks that enables removal of structured noise without requiring an explicit noise model or ground truth data. Specifically, we propose to use an extended blind mask (rather than a single pixel/blind spot), whose shape is adapted to the structure of the noise. We evaluate our approach on two real datasets and show that StructN2V considerably improves the removal of structured noise compared to existing standard and blind-spot based techniques.
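The extended blind mask idea can be sketched as follows: around each blind spot, additionally mask a stripe of pixels along the direction of the noise correlation, so the network cannot reconstruct the masked pixel from correlated neighbours. Names and defaults below are illustrative, not the paper's API:

```python
import numpy as np

def struct_blind_mask(shape, blind_spots, extent=2, axis=1):
    """StructN2V-style extended blind mask: around each blind spot, also
    mask `extent` pixels on both sides along `axis` (the direction of the
    noise correlation), so the network cannot copy correlated noise from
    neighbours when predicting the central pixel."""
    mask = np.zeros(shape, dtype=bool)
    for i, j in blind_spots:
        if axis == 1:  # noise correlated along rows -> horizontal stripe
            mask[i, max(0, j - extent):j + extent + 1] = True
        else:          # noise correlated along columns -> vertical stripe
            mask[max(0, i - extent):i + extent + 1, j] = True
    return mask
```

During training, masked pixels would be replaced (e.g. with values sampled from their neighbourhood) and the loss evaluated only at the central blind spots, mirroring the single-pixel blind-spot scheme of Noise2Void.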
