IEEE ISBI 2020 Virtual Conference April 2020


Showing 1 - 50 of 459
  • IEEE Member: US $11.00
  • Society Member: US $0.00
  • IEEE Student Member: US $11.00
  • Non-IEEE Member: US $15.00
Purchase
  • Learning a Loss Function for Segmentation: A Feasibility Study

    00:13:17
    When training neural networks for segmentation, the Dice loss is typically used. Alternative loss functions could help the networks achieve results with higher user acceptance and lower correction effort, but they cannot be used directly if they are not differentiable. As a solution, we propose to train a regression network to approximate the loss function and combine it with a U-Net to compute the loss during segmentation training. As an example, we introduce the contour Dice coefficient (CDC) that estimates the fraction of contour length that needs correction. Applied to CT bladder segmentation, we show that a weighted combination of Dice and CDC loss improves segmentations compared to using only Dice loss, with regard to both CDC and other metrics.
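The weighted combination of Dice loss and a learned surrogate loss described above can be sketched as follows; the surrogate here is a plain mean absolute error standing in for the regression network that approximates the CDC (the real method trains a network for that role):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Standard soft Dice loss on probability maps."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def combined_loss(pred, target, learned_loss, w=0.5):
    """Weighted sum of the Dice loss and a learned surrogate loss,
    mirroring the weighted Dice + CDC combination described above."""
    return w * soft_dice_loss(pred, target) + (1.0 - w) * learned_loss(pred, target)

# Stand-in for the regression network that approximates the CDC;
# mean absolute error is used here purely for illustration.
surrogate = lambda p, t: float(np.mean(np.abs(p - t)))

pred = np.ones((8, 8))
target = np.ones((8, 8))
print(combined_loss(pred, target, surrogate))  # near 0 for a perfect prediction
```

In training, the surrogate would be a frozen (or jointly trained) regression network evaluated on the U-Net's output, so gradients flow through a differentiable approximation of the non-differentiable metric.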
  • Automated Left Atrial Segmentation from Magnetic Resonance Image Sequences Using Deep Convolutional Neural Network with Autoencoder

    00:14:22
    This study presents a novel automated algorithm to segment the left atrium (LA) from 2-, 3- and 4-chamber long-axis cardiac cine magnetic resonance image (MRI) sequences using a deep convolutional neural network (CNN). The objective of the segmentation is to delineate the boundary between myocardium and endocardium and exclude the mitral valve, so that the results can be used to generate clinical measurements such as strain and strain rate. As such, the segmentation needs to be performed using open contours, a more challenging problem than the pixel-wise semantic segmentation performed by existing machine learning-based approaches such as U-net. The proposed network is built on the pre-trained Inception V4 CNN architecture and predicts a compressed vector by applying a multi-layer autoencoder, which is then back-projected into the segmentation contour of the LA to perform the delineation using open contours. Quantitative evaluations were performed to compare the performance of the proposed method with the current state-of-the-art U-net method. Both methods were trained using 6195 images acquired from 80 patients and evaluated using 1515 images acquired from 20 patients, where the datasets were obtained retrospectively and ground-truth manual segmentation was provided by an expert radiologist. The proposed method yielded an average Dice score of 93.1% and Hausdorff distance of 4.2 mm, whereas U-net yielded 91.6% and 11.9 mm, respectively. The quantitative evaluations demonstrated that the proposed method performed significantly better than U-net in terms of Hausdorff distance, in addition to providing open contour-based segmentation of the LA.
  • CNN in CT Image Segmentation: Beyond Loss Function for Exploiting Ground Truth Images

    00:11:43
    Exploiting more information from ground-truth (GT) images is a new research direction for further improving CNN performance in CT image segmentation. Previous methods focus on devising the loss function to fulfill this purpose. However, it is difficult to devise a general and optimization-friendly loss function. We present a novel and practical method that exploits GT images beyond the loss function. Our insight is that the feature maps of two CNNs trained respectively on GT and CT images should be similar on some metric space, because both describe the same objects for the same purpose. We therefore exploit GT images by enforcing consistency between the two CNNs' feature maps. We assess the proposed method on two datasets and compare its performance to several competitive methods. Extensive experimental results show that the proposed method is effective, outperforming all compared methods.
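The feature-map consistency idea above can be sketched as a simple auxiliary loss; the squared Euclidean metric is an assumed choice, since the abstract only says "some metric space":

```python
import numpy as np

def feature_consistency_loss(feat_ct, feat_gt):
    """Mean squared distance between the feature maps of the CT-trained
    network and the GT-trained network. Minimizing this term pulls the
    two networks' internal representations together."""
    return float(np.mean((feat_ct - feat_gt) ** 2))

f = np.random.default_rng(0).normal(size=(16, 32, 32))  # (channels, H, W)
print(feature_consistency_loss(f, f))  # identical feature maps give zero loss
```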
  • Topology Highlights Neural Deficits of Post-Stroke Aphasia Patients

    00:12:01
    Statistical inference of topological features decoded by persistent homology, a topological data analysis (TDA) algorithm, has been found to reveal patterns in electroencephalographic (EEG) signals that are not captured by standard temporal and spectral analysis. However, a potential challenge for applying topological inference to large-scale EEG data is the ambiguity of performing statistical inference and the associated computational bottleneck. To address this problem, we advance a unified permutation-based inference framework for testing statistical differences in the topological feature persistence landscape (PL) of multi-trial EEG signals. In this study, we apply the framework to compare the PLs of EEG signals recorded in participants with aphasia versus a matched control group during altered auditory feedback tasks.
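A permutation test of the kind described above can be sketched as follows; here each entry would be a per-trial scalar summary of the persistence landscape (e.g., its L2 norm), and the difference-of-means statistic is an illustrative choice:

```python
import numpy as np

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sample permutation p-value for the absolute difference of means.
    Labels are shuffled between groups; the p-value is the fraction of
    permuted statistics at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += stat >= observed
    return (count + 1) / (n_perm + 1)  # add-one correction avoids p = 0
```

For clearly separated groups the p-value is tiny; for identical groups it approaches 1.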
  • Generating Controllable Ultrasound Images of the Fetal Head

    00:12:27
    Synthesis of anatomically realistic ultrasound images could be potentially valuable in sonographer training and to provide training images for algorithms, but is a challenging technical problem. Generating examples where different image attributes can be controlled may also be useful for tasks such as semi-supervised classification and regression to augment costly human annotation. In this paper, we propose using an information maximizing generative adversarial network with a least-squares loss function to generate new examples of fetal brain ultrasound images from clinically acquired healthy subject twenty-week anatomy scans. The unsupervised network succeeds in disentangling natural clinical variations in anatomical visibility and image acquisition parameters, which allows for user-control in image generation. To evaluate our method, we also introduce an additional synthetic fetal ultrasound specific image quality metric called the Frechet SonoNet Distance (FSD) to quantitatively evaluate synthesis quality. To the best of our knowledge, this is the first work that generates ultrasound images with a generator network trained on clinical acquisitions where governing parameters can be controlled in a visually interpretable manner.
  • Complementary Network with Adaptive Receptive Fields for Melanoma Segmentation

    00:13:26
    Automatic melanoma segmentation in dermoscopic images is essential in computer-aided diagnosis of skin cancer. Existing methods may suffer from hole and shrink problems that limit segmentation performance. To tackle these issues, we propose a novel complementary network with adaptive receptive field learning. Instead of treating the segmentation task independently, we introduce a foreground network to detect melanoma lesions and a background network to mask non-melanoma regions. Moreover, we propose an adaptive atrous convolution (AAC) and a knowledge aggregation module (KAM) to fill holes and alleviate the shrink problem. AAC allows us to explicitly control the receptive field at multiple scales. KAM convolves shallow feature maps by dilated convolutions with adaptive receptive fields, which are adjusted according to deep feature maps. In addition, a novel mutual loss is proposed to exploit the dependency between the foreground and background networks, enabling reciprocal influence between the two networks. Consequently, this mutual training strategy enables semi-supervised learning and improves boundary sensitivity. Trained on the International Skin Imaging Collaboration (ISIC) 2018 skin lesion segmentation dataset, our method achieves a Dice coefficient of 86.4% and shows better performance than state-of-the-art melanoma segmentation methods.
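One plausible form of the mutual loss coupling the two networks is a complementarity penalty: the foreground and background probability maps should sum to one at every pixel. The paper's exact formulation is not given in the abstract, so this is an assumed variant:

```python
import numpy as np

def mutual_loss(fg_prob, bg_prob):
    """Penalize deviations from complementarity between the foreground
    network's and background network's probability maps."""
    return float(np.mean((fg_prob + bg_prob - 1.0) ** 2))

fg = np.full((4, 4), 0.7)
bg = np.full((4, 4), 0.3)
print(mutual_loss(fg, bg))  # ~0 for complementary maps
```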
  • Probabilistic Inference for Camera Calibration in Light Microscopy under Circular Motion

    00:15:32
    Robust and accurate camera calibration is essential for 3D reconstruction in light microscopy under circular motion. Conventional methods require either accurate key point matching or precise segmentation of the axial-view images. Both remain challenging because specimens often exhibit transparency/translucency in a light microscope. To address those issues, we propose a probabilistic inference based method for the camera calibration that does not require sophisticated image pre-processing. Based on 3D projective geometry, our method assigns a probability on each of a range of voxels that cover the whole object. The probability indicates the likelihood of a voxel belonging to the object to be reconstructed. Our method maximizes a joint probability that distinguishes the object from the background. Experimental results show that the proposed method can accurately recover camera configurations in both light microscopy and natural scene imaging. Furthermore, the method can be used to produce high-fidelity 3D reconstructions and accurate 3D measurements.
  • Computer-Aided Diagnosis of Clinically Significant Prostate Cancer in Low-Risk Patients on Multi-Parametric MR Images Using Deep Learning

    00:13:46
    The purpose of this study was to develop a quantitative method for detection and segmentation of clinically significant (ISUP grade ≥ 2) prostate cancer (PCa) in low-risk patients. A consecutive cohort of 356 patients (active surveillance) was selected and divided into two groups: 1) MRI- and targeted-biopsy-positive PCa, 2) MRI- and standard-biopsy-negative PCa. A 3D convolutional neural network was trained in three-fold cross-validation on the MRI- and targeted-biopsy-positive patients' data using two mp-MRI sequences (T2-weighted, DWI-b800) and the ADC map as input. After training, the model was tested on separate positive and negative patients to evaluate its performance. The model achieved an average area under the receiver operating characteristic curve (AUC) of 0.78 (sensitivity = 85%, specificity = 72%). The diagnostic performance of the proposed method in segmenting significant PCa and confirming non-significant PCa in low-risk patients is characterized by a good AUC and negative predictive value.
  • Using Transfer Learning and Class Activation Maps Supporting Detection and Localization of Femoral Fractures on Anteroposterior Radiographs

    00:06:02
    Acute proximal femoral fractures are a growing health concern among the aging population. These fractures are often associated with significant morbidity and mortality, as well as reduced quality of life. Furthermore, with increasing life expectancy owing to advances in healthcare, the number of proximal femoral fractures may increase by a factor of 2 to 3, since the majority of fractures occur in patients over the age of 65. In this paper, we show that by using transfer learning and leveraging pre-trained models, we can achieve very high accuracy in detecting fractures, and that fractures can be localized using class activation maps.
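The class activation map (CAM) technique referenced above has a simple closed form: weight the final convolutional feature maps by the fully connected weights of the target class and sum over channels. A minimal sketch, with assumed array shapes:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM (Zhou et al., 2016). feature_maps: (C, H, W) activations of the
    last conv layer; fc_weights: (num_classes, C) weights of the final
    fully connected layer after global average pooling. Returns an (H, W)
    localization map normalized to [0, 1]."""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

rng = np.random.default_rng(1)
cam = class_activation_map(rng.normal(size=(4, 5, 5)), rng.normal(size=(2, 4)), 0)
print(cam.shape)  # (5, 5)
```

Upsampling this map to the radiograph's resolution gives the fracture localization overlay.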
  • Compressed Sensing for Data Reduction in Synthetic Aperture Ultrasound Imaging: A Feasibility Study

    00:14:13
    Compressed Sensing (CS) has been applied by a few researchers to improve the frame rate of synthetic aperture (SA) ultrasound imaging. However, there appear to be no reports on reducing the number of receive elements by exploiting the CS approach. In our previous work, we proposed a strategic undersampling scheme based on a Gaussian distribution for focused ultrasound imaging. In this work, we propose and evaluate three sampling schemes for SA to acquire RF data from a reduced number of receive elements. The effect of the sampling schemes on CS recovery was studied using simulation and experimental data. Despite using only 50% of the receive elements, the ultrasound images obtained with the Gaussian sampling scheme had resolution and contrast comparable to the reference image obtained using all receive elements. Thus, the findings suggest the possibility of reducing the receive channel count of an SA ultrasound system without practically sacrificing image quality.
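A Gaussian-weighted element selection of the kind described above can be sketched as follows; the array size, kept fraction, and Gaussian width are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gaussian_element_mask(n_elements=128, keep_fraction=0.5, sigma_frac=0.25, seed=0):
    """Select a subset of receive elements without replacement, with
    selection probability weighted by a Gaussian centered on the aperture,
    so central elements are kept more often than edge elements."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n_elements)
    center = (n_elements - 1) / 2
    weights = np.exp(-0.5 * ((idx - center) / (sigma_frac * n_elements)) ** 2)
    weights /= weights.sum()
    keep = rng.choice(idx, size=int(keep_fraction * n_elements),
                      replace=False, p=weights)
    mask = np.zeros(n_elements, dtype=bool)
    mask[keep] = True
    return mask

print(gaussian_element_mask().sum())  # 64 of 128 elements kept
```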
  • Residual Simplified Reference Tissue Model with Covariance Estimation

    00:12:13
    The simplified reference tissue model (SRTM) can robustly estimate binding potential (BP) without a measured arterial blood input function. Although a voxel-wise estimation of BP, the so-called parametric image, is more useful than region-of-interest (ROI) based estimation of BP, it is challenging to compute an accurate parametric image due to the low signal-to-noise ratio (SNR) of dynamic PET images. To achieve reliable parametric imaging, temporal images are commonly smoothed prior to kinetic parameter estimation, which degrades resolution significantly. To address this problem, we propose a residual simplified reference tissue model (ResSRTM) using an approximate covariance matrix to robustly compute a high-resolution parametric image. We define the residual dynamic data as the full data excluding each frame, which has higher SNR and enables accurate estimation of the parametric image. Since dynamic images have correlations across temporal frames, we propose an approximate covariance matrix using neighboring voxels, assuming the noise statistics of neighbors are similar. In phantom simulation and real experiments, we demonstrate that the proposed method outperforms the conventional SRTM method.
  • Spatially Informed CNN for Automated Cone Detection in Adaptive Optics Retinal Images

    00:14:23
    Adaptive optics (AO) scanning laser ophthalmoscopy offers cellular-level in-vivo imaging of the human cone mosaic. Existing analysis of cone photoreceptor density in AO images requires accurate identification of cone cells, which is a time- and labor-intensive task. Recently, several methods have been introduced for automated cone detection in AO retinal images using convolutional neural networks (CNN). However, these approaches have been limited in their ability to correctly identify cones when applied to AO images originating from different locations in the retina, due to changes in the reflectance and arrangement of the cone mosaics with eccentricity. To address these limitations, we present an adapted CNN architecture that incorporates spatial information directly into the network. Our approach, inspired by conditional generative adversarial networks, embeds the retina location from which each AO image was acquired as part of the training. Using manual cone identification as ground truth, our evaluation shows general improvement over existing approaches when detecting cones in the middle and periphery regions of the retina, but decreased performance near the fovea.
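One simple way to condition a CNN on acquisition location is to append a constant channel holding the normalized eccentricity to each input patch. This is a sketch of the general idea; the paper's exact embedding scheme (inspired by conditional GANs) may differ:

```python
import numpy as np

def append_location_channel(patch, eccentricity):
    """Append a constant channel encoding the (normalized) retinal
    eccentricity at which the AO patch was acquired. patch: (H, W, C)."""
    loc = np.full(patch.shape[:2] + (1,), eccentricity, dtype=patch.dtype)
    return np.concatenate([patch, loc], axis=-1)

x = np.zeros((64, 64, 1))
print(append_location_channel(x, 0.5).shape)  # (64, 64, 2)
```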
  • Deep Mouse: An End-To-End Auto-Context Refinement Framework for Brain Ventricle & Body Segmentation in Embryonic Mice Ultrasound Volumes

    00:11:50
    The segmentation of the brain ventricle (BV) and body in embryonic mice high-frequency ultrasound (HFU) volumes can provide useful information for biological researchers. However, manual segmentation of the BV and body requires substantial time and expertise. This work proposes a novel deep learning based end-to-end auto-context refinement framework, consisting of two stages. The first stage produces a low-resolution segmentation of the BV and body simultaneously. The resulting probability map for each object (BV or body) is then used to crop a region of interest (ROI) around the target object in both the original image and the probability map, providing context to the refinement segmentation network. Joint training of the two stages provides significant improvement in Dice Similarity Coefficient (DSC) over using only the first stage (0.818 to 0.906 for the BV, and 0.919 to 0.934 for the body). The proposed method significantly reduces the inference time (102.36 to 0.09 s/volume, ~1000x faster) while slightly improving the segmentation accuracy over previous sliding-window approaches.
  • Model-Based Deep Learning for Reconstruction of Joint K-Q Under-Sampled High Resolution Diffusion MRI

    00:17:18
    We propose a model-based deep learning architecture for the reconstruction of highly accelerated diffusion magnetic resonance imaging (MRI) that enables high-resolution imaging. The proposed reconstruction jointly recovers all the diffusion-weighted images in a single step from a joint k-q under-sampled acquisition in a parallel MRI setting. We propose the novel use of a pre-trained denoiser as a regularizer in a model-based reconstruction for the recovery of highly under-sampled data. Specifically, we designed the denoiser based on a general diffusion MRI tissue microstructure model for multi-compartmental modeling. By using a wide range of biologically plausible parameter values for the multi-compartmental microstructure model, we simulated diffusion signal that spans the entire microstructure parameter space. A neural network was trained in an unsupervised manner using a convolutional autoencoder to learn the diffusion MRI signal subspace. We employed the autoencoder in a model-based reconstruction that unrolls the iterations similar to the recently proposed MoDL framework. Specifically, we show that the autoencoder provides a strong denoising prior to recover the q-space signal. We show reconstruction results on a simulated brain dataset that shows high acceleration capabilities of the proposed method.
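A minimal unrolled reconstruction in the spirit of the MoDL-style scheme described above can be sketched as alternating a gradient step on the data-consistency term with a denoiser acting as the learned prior. Here `denoise` stands in for the trained convolutional autoencoder, and `A`/`At` for the forward operator and its adjoint; the step sizes are illustrative:

```python
import numpy as np

def unrolled_recon(y, A, At, denoise, n_iters=10, step=0.1, lam=0.5):
    """Unrolled iterations: gradient descent on ||Ax - y||^2 interleaved
    with a proximal-style denoiser step weighted by lam."""
    x = At(y)  # adjoint (zero-filled) initialization
    for _ in range(n_iters):
        x = x - step * At(A(x) - y)             # data consistency
        x = (x + lam * denoise(x)) / (1 + lam)  # denoiser / prior step
    return x

# Sanity check with identity operators: the data is its own fixed point.
y = np.ones((8, 8))
ident = lambda v: v
x = unrolled_recon(y, ident, ident, ident)
print(np.allclose(x, y))  # True
```

In the paper's setting, `A` would be the joint k-q undersampled parallel-MRI encoding operator and the denoiser the autoencoder trained on simulated multi-compartment diffusion signals.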
  • Deep Learning for High Speed Optical Coherence Elastography

    00:12:37
    Mechanical properties of tissue provide valuable information for identifying lesions. One approach to obtain quantitative estimates of elastic properties is shear wave elastography with optical coherence elastography (OCE). However, given the shear wave velocity, it is still difficult to estimate elastic properties. Hence, we propose deep learning to directly predict elastic tissue properties from OCE data. We acquire 2D images with a frame rate of 30 kHz and use convolutional neural networks to predict gelatin concentration, which we use as a surrogate for tissue elasticity. We compare our deep learning approach to predictions from conventional regression models using the shear wave velocity as a feature. Mean absolute prediction errors for the conventional approaches range from 1.32±0.98 p.p. to 1.57±1.30 p.p., whereas we report an error of 0.90±0.84 p.p. for the convolutional neural network with 3D spatio-temporal input. Our results indicate that deep learning on spatio-temporal data outperforms elastography based on explicit shear wave velocity estimation.
  • Weakly Supervised Multi-Task Learning for Cell Detection and Segmentation

    00:15:01
    Cell detection and segmentation are fundamental for downstream analysis of digital pathology images. However, obtaining pixel-level ground truth for single-cell segmentation is extremely labor-intensive. To overcome this challenge, we developed an end-to-end deep learning algorithm that performs both single-cell detection and segmentation using only point labels. This is achieved through the combination of different task-oriented point-label encoding methods and a multi-task scheduler for training. We apply and validate our algorithm on PMS2-stained colorectal cancer and tonsil tissue images. Compared to the state of the art, our algorithm shows significant improvement in cell detection and segmentation without increasing the annotation effort.
  • Computer-Aided Diagnosis of Congenital Abnormalities of the Kidney and Urinary Tract in Children Using a Multi-Instance Deep Learning Method Based on Ultrasound Imaging Data

    00:14:13
    Ultrasound images are widely used for diagnosis of congenital abnormalities of the kidney and urinary tract (CAKUT). Since a typical clinical ultrasound image captures 2D information of a specific view plane of the kidney, and images of the same kidney on different planes have varied appearances, it is challenging to develop a computer-aided diagnosis tool robust to ultrasound images in different views. To overcome this problem, we develop a multi-instance deep learning method for distinguishing children with CAKUT from controls based on their clinical ultrasound images in sagittal and transverse views obtained during routine clinical care, aiming to automatically diagnose CAKUT in children from ultrasound imaging data. The classifier was built on imaging features derived using transfer learning from a pre-trained deep learning model, with a mean pooling operator for fusing instance-level classification results. Experimental results demonstrate that the multi-instance deep learning classifier performed better than classifiers built on either individual sagittal slices or individual transverse slices.
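The mean-pooling fusion step described above reduces to averaging instance-level (per-image) probabilities into one patient-level score; the 0.5 operating point below is an assumed threshold for illustration:

```python
import numpy as np

def patient_score(instance_probs):
    """Mean-pooling fusion of per-image CAKUT probabilities into a single
    patient-level score."""
    return float(np.mean(instance_probs))

def classify_patient(instance_probs, threshold=0.5):
    """Flag the patient when the pooled score crosses the threshold."""
    return patient_score(instance_probs) >= threshold

print(classify_patient([0.9, 0.8, 0.2]))  # pooled score ~0.63 -> True
```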
  • Deep Learning Models to Study the Early Stages of Parkinson's Disease

    00:15:23
    Current physio-pathological data suggest that Parkinson's Disease (PD) symptoms are related to important alterations in subcortical brain structures. However, structural changes in these small structures remain difficult to detect for neuro-radiologists, in particular, at the early stages of the disease ('de novo' PD patients). The absence of a reliable ground truth at the voxel level prevents the application of traditional supervised deep learning techniques. In this work, we consider instead an anomaly detection approach and show that auto-encoders (AE) could provide an efficient anomaly scoring to discriminate 'de novo' PD patients using quantitative Magnetic Resonance Imaging (MRI) data.
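Autoencoder-based anomaly scoring of the kind described above typically uses reconstruction error: a model trained only on controls reconstructs normal anatomy well and anomalous structure poorly. A minimal sketch, where `reconstruct` stands in for the trained AE's forward pass:

```python
import numpy as np

def anomaly_score(x, reconstruct):
    """Voxel-wise mean squared reconstruction error; a large score
    suggests structure the control-trained model has not seen."""
    return float(np.mean((x - reconstruct(x)) ** 2))

# Illustrative 'AE' that can only reproduce flat images.
smooth = lambda v: np.full_like(v, v.mean())
normal = np.ones((4, 4))
anomalous = np.eye(4)
print(anomaly_score(normal, smooth) < anomaly_score(anomalous, smooth))  # True
```

In practice the scores would be aggregated over subcortical ROIs and compared between 'de novo' PD patients and controls.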
  • Impact of 1D and 2D Visualisation on EEG-fMRI Neurofeedback Training During a Motor Imagery Task

    00:13:46
    Bi-modal EEG-fMRI neurofeedback (NF) is a new technique of great interest. First, it can improve the quality of NF training by combining different real-time information (haemodynamic and electrophysiological) about the participant's brain activity; second, it has the potential to better elucidate the link and synergy between the two modalities (EEG-fMRI). However, there are different ways to present NF scores to the participant during bi-modal NF sessions. To improve data fusion methodologies, we investigate the impact of a 1D versus 2D representation when visual feedback is given during a motor imagery task. Results show a better synergy between EEG and fMRI when a 2D display is used. Subjects obtain better fMRI scores when 1D is used for bi-modal EEG-fMRI NF sessions; on the other hand, they regulate EEG more specifically when the 2D metaphor is used.
  • Self-Supervised Physics-Based Deep Learning MRI Reconstruction without Fully-Sampled Data

    00:14:26
    Deep learning (DL) has emerged as a tool for improving accelerated MRI reconstruction. A common strategy among DL methods is the physics-based approach, where a regularized iterative algorithm alternating between data consistency and a regularizer is unrolled for a finite number of iterations. This unrolled network is then trained end-to-end in a supervised manner, using fully-sampled data as ground truth for the network output. However, in a number of scenarios, it is difficult to obtain fully-sampled datasets, due to physiological constraints such as organ motion or physical constraints such as signal decay. In this work, we tackle this issue and propose a self-supervised learning strategy that enables physics-based DL reconstruction without fully-sampled data. Our approach is to divide the acquired sub-sampled points for each scan into two sets, one of which is used to enforce data consistency in the unrolled network and the other to define the loss for training. Results show that the proposed self-supervised learning method successfully reconstructs images without fully-sampled data, performing similarly to the supervised approach that is trained with fully-sampled references. This has implications for physics-based inverse problem approaches for other settings, where fully-sampled data is not available or possible to acquire.
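The core of the self-supervised strategy above is splitting the acquired sub-sampled k-space locations into two disjoint sets: one for data consistency in the unrolled network, one for the training loss. A sketch of that split (the 40% loss fraction is an assumed value):

```python
import numpy as np

def split_acquired_mask(mask, loss_fraction=0.4, seed=0):
    """Partition a boolean k-space sampling mask into a data-consistency
    mask and a disjoint loss mask over the acquired locations only."""
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(mask)
    n_loss = int(loss_fraction * acquired.size)
    loss_idx = rng.choice(acquired, size=n_loss, replace=False)
    loss_mask = np.zeros(mask.size, dtype=bool)
    loss_mask[loss_idx] = True
    loss_mask = loss_mask.reshape(mask.shape)
    dc_mask = mask & ~loss_mask
    return dc_mask, loss_mask
```

During training, the network sees only `dc_mask` samples; the loss compares its output against the held-out `loss_mask` samples, so no fully-sampled reference is ever needed.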
  • Automated Meshing of Anatomical Shapes for Deformable Medial Modeling: Application to the Placenta in 3D Ultrasound

    00:14:59
    Deformable medial modeling is an approach to extracting clinically useful features of the morphological skeleton of anatomical structures in medical images. Similar to any deformable modeling technique, it requires a pre-defined model, or synthetic skeleton, of a class of shapes before modeling new instances of that class. The creation of synthetic skeletons often requires manual interaction, and the deformation of the synthetic skeleton to new target geometries is prone to registration errors if not well initialized. This work presents a fully automated method for creating synthetic skeletons (i.e., 3D boundary meshes with medial links) for flat, oblong shapes that are homeomorphic to a sphere. The method rotationally cross-sections the 3D shape, approximates a 2D medial model in each cross-section, and then defines edges between nodes of neighboring slices to create a regularly sampled 3D boundary mesh. In this study, we demonstrate the method on 62 segmentations of placentas in first-trimester 3D ultrasound images and evaluate its compatibility and representational accuracy with an existing deformable modeling method. The method may lead to extraction of new clinically meaningful features of placenta geometry, as well as facilitate other applications of deformable medial modeling in medical image analysis.
  • VoteNet+ : An Improved Deep Learning Label Fusion Method for Multi-Atlas Segmentation

    00:13:35
    In this work, we improve the performance of multi-atlas segmentation (MAS) by integrating the recently proposed VoteNet model with the joint label fusion (JLF) approach. Specifically, we first illustrate that using a deep convolutional neural network to predict atlas probabilities can better distinguish correct atlas labels from incorrect ones than relying on image intensity differences, as is typical in JLF. Motivated by this finding, we propose VoteNet+, an improved deep network that locally predicts the probability of an atlas label differing from the label of the target image. Furthermore, we show that JLF is more suitable than plurality voting as the label fusion method in the VoteNet framework. Lastly, we use Platt scaling to calibrate the probabilities of our new model. Results on LPBA40 3D MR brain images show that our proposed method achieves better performance than VoteNet.
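The Platt scaling step mentioned above maps raw network scores to calibrated probabilities through a fitted sigmoid; in practice the parameters a and b are fit on held-out data, so the defaults below are purely illustrative:

```python
import numpy as np

def platt_scale(scores, a=-1.0, b=0.0):
    """Platt scaling: p = 1 / (1 + exp(a*s + b)). With a < 0, higher raw
    scores map to higher calibrated probabilities."""
    scores = np.asarray(scores, dtype=float)
    return 1.0 / (1.0 + np.exp(a * scores + b))

print(platt_scale([0.0, 2.0]))  # monotonically increasing in the raw score
```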
  • Automatic Extraction and Sign Determination of Respiratory Signal in Real-Time Cardiac Magnetic Resonance Imaging

    00:12:39
    In real-time (RT) cardiac cine imaging, a stack of 2D slices is collected sequentially under free-breathing conditions. A complete heartbeat from each slice is then used for cardiac function quantification. The inter-slice respiratory mismatch can compromise accurate quantification of cardiac function. Methods based on principal components analysis (PCA) have been proposed to extract the respiratory signal from RT cardiac cine, but these methods cannot resolve the inter-slice sign ambiguity of the respiratory signal. In this work, we propose a fully automatic sign correction procedure based on the similarity of neighboring slices and correlation to the center-of-mass curve. The proposed method is evaluated in eleven volunteers, with ten slices per volunteer. The motion in a manually selected region-of-interest (ROI) is used as a reference. The results show that the extracted respiratory signal has a high, positive correlation with the reference in all cases. The qualitative assessment of images also shows that the proposed approach can accurately identify heartbeats, one from each slice, belonging to the same respiratory phase. This approach can improve cardiac function quantification for RT cine without manual intervention.
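The sign-correction idea above can be sketched directly: flip each slice's PCA-derived respiratory signal so it correlates positively with a reference curve (e.g., the center-of-mass curve). This is a simplified sketch of that step only, not the full neighboring-slice procedure:

```python
import numpy as np

def align_signs(signals, reference):
    """Resolve the per-slice sign ambiguity of PCA-extracted respiratory
    signals: negate any signal that correlates negatively with the
    reference curve."""
    aligned = []
    for s in signals:
        r = np.corrcoef(s, reference)[0, 1]
        aligned.append(-s if r < 0 else s)
    return aligned

t = np.linspace(0, 2 * np.pi, 50)
ref = np.sin(t)
out = align_signs([np.sin(t), -np.sin(t)], ref)
print(all(np.corrcoef(s, ref)[0, 1] > 0 for s in out))  # True
```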
  • Automating Vitiligo Skin Lesion Segmentation Using Convolutional Neural Networks

    00:12:09
    The measurement of several skin conditions' progression and severity relies on the accurate segmentation (border detection) of lesioned skin images. One such condition is vitiligo. Existing methods for vitiligo image segmentation require manual intervention, which is time-inefficient, labor-intensive, and irreproducible between physicians. We introduce a convolutional neural network (CNN) that quickly and robustly performs such segmentations without manual intervention. We use the U-Net with a modified contracting path to generate an initial segmentation of the lesion. Then, we run the segmentation through the watershed algorithm using high-confidence pixels as "seeds." We train the network on 247 images with a variety of lesion sizes, complexities, and anatomical sites. Our network noticeably outperforms the state-of-the-art U-Net -- scoring a Jaccard Index (JI) of 73.6% (compared to 36.7%). Segmentation occurs in a few seconds, which is a substantial improvement from the previously proposed semi-autonomous watershed approach (2-29 minutes per image).
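The "high-confidence pixels as seeds" step above can be sketched as deriving watershed markers from the CNN probability map; the 0.9/0.1 confidence thresholds are assumed values for illustration:

```python
import numpy as np

def watershed_seeds(prob_map, fg_thresh=0.9, bg_thresh=0.1):
    """Build marker labels for a watershed pass: high-confidence lesion
    pixels become foreground seeds (2), very low probabilities become
    background seeds (1), and everything else stays unlabeled (0) for the
    watershed to resolve."""
    seeds = np.zeros(prob_map.shape, dtype=np.int32)
    seeds[prob_map >= fg_thresh] = 2  # lesion seeds
    seeds[prob_map <= bg_thresh] = 1  # background seeds
    return seeds

p = np.array([[0.95, 0.5], [0.05, 0.2]])
print(watershed_seeds(p))
```

These markers would then be passed, together with an image gradient, to a standard marker-based watershed implementation.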
  • Current-Based Forward Solver for Electrical Impedance Tomography

    00:06:47
    0 views
    We present a new forward solver for the shunt model of 3D electrical impedance tomography (EIT). The new solver is based on a direct discretization of the conditions on the current density within the EIT region. Given a mesh over the region, the new solver finds first the amount of current flowing through each face of every element in the mesh, then the distribution of current density, and finally the potential distribution. Simulation results show that the new solver gives results similar to those of the traditional finite element method.
  • HEALPix View-order for 3D+time Radial Self-Navigated Motion-Corrected ZTE MRI

    00:06:56
    0 views
    In MRI, no 3D+time radial view-order has met all the desired characteristics for simultaneous dynamic/high-resolution imaging, such as self-navigated motion-corrected high-resolution neuroimaging. In this work, we examine the use of Hierarchical Equal Area iso-Latitude Pixelization (HEALPix) to generate three-dimensional dynamic (3D+time) radial view-orders for MRI, and compare it to a selection of commonly used 3D view-orders. The resulting trajectories were evaluated through simulation of the point spread function and of a slanted-surface object suitable for measuring the modulation transfer function, contrast ratio, and SNR. Results from the HEALPix view-order were compared to Generalized Spiral, Golden Means, and Random view-orders. We report the first use of the HEALPix view-order to acquire in-vivo brain images.
  • Improved resolution in structured illumination microscopy with 3D model-based restoration

    00:13:33
    0 views
    We investigate the performance of our previously developed three-dimensional (3D) model-based (MB) method [1] for 3D structured illumination microscopy (3D-SIM) using experimental 3D-SIM data. In addition, we demonstrate in simulation that we can further improve the performance of our 3D-MB approach by including a positivity constraint through the reconstruction of an auxiliary function as it was previously suggested in speckle SIM [2]. We emphasize that our methods remove out-of-focus light from the entire volume via 3D processing that relies on a 3D forward imaging model, thereby providing more accurate results compared to other approaches that rely on 2D processing of a single plane from a 3D-SIM dataset [3]. Our 3D-MB approach provides improved resolution and optical-sectioning over the standard 3D generalized Wiener filter (3D-GWF) [4] method (the only other method besides ours that performs 3D processing).
  • Detection of Micro-Fractures in Intravascular Optical Coherence Tomography (IVOCT) Images After Treating Coronary Arteries with Shockwave Intravascular Lithotripsy (IVL)

    00:06:54
    10 views
    Intravascular lithotripsy (IVL) is a plaque modification technique that delivers pressure waves to pre-treat heavily calcified vessels and aid successful stent deployment. IVL causes micro-fractures that can develop into macro-fractures, which enable successful vessel expansion. Intravascular optical coherence tomography (IVOCT) has the penetration, resolution, and contrast to characterize coronary calcifications. We detected the presence of micro-fractures by comparing textures before and after IVL treatment (p = 0.0039). In addition, we used a finite element model (FEM) to successfully predict the locations of macro-fractures. Results suggest that our methods can be used to understand and possibly clinically monitor IVL treatment.
  • Learning to Solve Inverse Problems in Imaging

    00:36:26
    0 views
    Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, tomographic reconstruction, MRI reconstruction, inpainting, compressed sensing, and superresolution all lie in this framework. Traditional inverse problem solvers minimize a cost function consisting of a data-fit term, which measures how well an image matches the observations, and a regularizer, which reflects prior knowledge and promotes images with desirable properties like smoothness. Recent advances in machine learning and image processing have illustrated that it is often possible to learn a regularizer from training data that outperforms more traditional regularizers. In this talk, I will describe various classes of approaches to learned regularization, ranging from generative models to unrolled optimization perspectives, and explore their relative merits and sample complexities. We will also explore the difficulty of the underlying optimization task and how learned regularizers relate to oracle estimators.
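The "data-fit plus regularizer" template the talk starts from can be made concrete with a classical instance; here the regularizer is a fixed Tikhonov penalty (a learned regularizer would replace exactly that term), and the solver is plain gradient descent:

```python
import numpy as np

def solve_inverse(A, y, lam=0.1, steps=500, lr=None):
    """Minimize the classic cost ||Ax - y||^2 + lam * ||x||^2 by
    gradient descent. The first term is the data fit, the second a
    (traditional, non-learned) regularizer promoting small solutions.
    """
    if lr is None:
        # step size 1/L, where L = 2 * (sigma_max(A)^2 + lam) is the
        # Lipschitz constant of the gradient
        lr = 1.0 / (2 * (np.linalg.norm(A, 2) ** 2 + lam))
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2 * A.T @ (A @ x - y) + 2 * lam * x
        x -= lr * grad
    return x
```

For this quadratic regularizer the minimizer also has the closed form (AᵀA + λI)⁻¹Aᵀy, which makes the iteration easy to check; learned regularizers generally have no such closed form, which is why unrolled-optimization architectures mirror loops like this one.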
  • Unsupervised Task Design to Meta-Train Medical Image Classifiers

    00:07:16
    0 views
    Meta-training has been empirically demonstrated to be the most effective pre-training method for few-shot learning of medical image classifiers (i.e., classifiers modeled with small training sets). However, the effectiveness of meta-training relies on the availability of a reasonable number of hand-designed classification tasks, which are costly to obtain and consequently rarely available. In this paper, we propose a new method to design, without supervision, a large number of classification tasks to meta-train medical image classifiers. We evaluate our method on a breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data set that has been used to benchmark few-shot training methods for medical image classifiers. Our results show that the proposed unsupervised task design builds a pre-trained model that, after fine-tuning, produces better classification results than other unsupervised and supervised pre-training methods, and competitive results with respect to meta-training that relies on hand-designed classification tasks.
  • Volumetric Landmark Detection with a Multi-Scale Shift Equivariant Neural Network

    00:09:15
    0 views
    Deep neural networks yield promising results in a wide range of computer vision applications, including landmark detection. A major challenge for accurate anatomical landmark detection in volumetric images such as clinical CT scans is that large-scale data often constrain the capacity of the employed neural network architecture due to GPU memory limitations, which in turn can limit the precision of the output. We propose a multi-scale, end-to-end deep learning method that achieves fast and memory-efficient landmark detection in 3D images. Our architecture consists of blocks of shift-equivariant networks, each of which performs landmark detection at a different spatial scale. These blocks are connected from coarse to fine scale with differentiable resampling layers, so that all levels can be trained together. We also present a noise injection strategy that increases the robustness of the model and allows us to quantify uncertainty at test time. We evaluate our method on carotid artery bifurcation detection in 263 CT volumes and achieve better than state-of-the-art accuracy, with a mean Euclidean distance error of 2.81 mm.
  • Robust Algorithm for Denoising of Photon-Limited Dual-Energy Cone Beam CT Projections

    00:14:36
    0 views
    Dual-Energy CT offers significant advantages over traditional CT imaging because it provides energy-based awareness of the image content and facilitates material discrimination in the projection domain. The Dual-Energy CT concept has intrinsic redundancy that can be used to improve image quality by jointly exploiting the high- and low-energy projections. In this paper we focus on noise reduction. This work presents a novel noise-reduction algorithm, Dual Energy Shifted Wavelet Denoising (DESWD), which renders high-quality Dual-Energy CBCT projections out of noisy ones. To do so, we first apply a Generalized Anscombe Transform, enabling us to use denoising methods designed for Gaussian noise statistics. Second, we use a 3D transformation to denoise all the projections at once. Finally, we exploit the inter-channel redundancy of the projections to create sparsity in the signal for better denoising with a channel-decorrelation step. Our simulation experiments show that DESWD performs better than a state-of-the-art denoising method (BM4D) in limited photon-count imaging, while BM4D achieves excellent results in less noisy conditions.
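The first step of the pipeline, the Generalized Anscombe Transform, can be written down directly. The formula below is the standard GAT for Poisson-Gaussian data; the noise-parameter calibration a real pipeline needs is omitted here:

```python
import numpy as np

def gat(x, sigma=0.0, alpha=1.0, mu=0.0):
    """Generalized Anscombe Transform: stabilizes the variance of
    Poisson-Gaussian data to approximately 1, so that denoisers built
    for Gaussian noise statistics can be applied afterwards.

    alpha: detector gain, sigma/mu: std-dev and mean of the additive
    Gaussian component. With the defaults this reduces to the classic
    Anscombe transform 2*sqrt(x + 3/8).
    """
    arg = alpha * x + 0.375 * alpha ** 2 + sigma ** 2 - alpha * mu
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))
```

After denoising in the transformed domain, an (approximately unbiased) inverse transform maps the result back to the photon-count scale.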
  • Multi-modality Generative Adversarial Networks with Tumor Consistency Loss for Brain MR Image Synthesis

    00:12:41
    0 views
    Magnetic Resonance (MR) images of different modalities can provide complementary information for clinical diagnosis, but acquiring all modalities is often costly. Most existing methods only focus on synthesizing missing images between two modalities, which limits their robustness and efficiency when multiple modalities are missing. To address this problem, we propose a multi-modality generative adversarial network (MGAN) to synthesize three high-quality MR modalities (FLAIR, T1 and T1ce) from one MR modality, T2, simultaneously. The experimental results show that the quality of the images synthesized by our proposed method is better than that of those synthesized by the baseline model, pix2pix. Besides, for MR brain image synthesis, it is important to preserve the critical tumor information in the generated modalities, so we further introduce a multi-modality tumor consistency loss to MGAN, called TC-MGAN. We use the modalities synthesized by TC-MGAN to boost tumor segmentation accuracy, and the results demonstrate its effectiveness.
  • Volumetric Registration-Based Cleft Volume Estimation of Alveolar Cleft Grafting Procedures

    00:14:48
    0 views
    This paper presents a method for automatic estimation of the bony alveolar cleft volume of cleft lip and palate (CLP) patients from cone-beam computed tomography (CBCT) images via a fully convolutional neural network. The core of this method is the partial nonrigid registration of the CLP CBCT image, with its incomplete maxilla, to a template with a complete maxilla. We build our model on the 3D U-Net and parameterize the nonlinear mapping from the one-channel intensity CBCT image to six-channel inverse deformation vector fields (DVFs). We enforce the partial maxillary registration using an adaptive irregular mask around the cleft during the registration process. Given the inverse DVFs, the deformed template combined with volumetric Boolean operators is used to compute the cleft volume. To avoid a rough and inaccurate reconstructed cleft surface, we introduce an additional cleft shape constraint to fine-tune the parameters of the registration neural networks. The proposed method is applied to clinically obtained CBCT images of CLP patients. Qualitative and quantitative experiments demonstrate the effectiveness and efficiency of our method in volume completion and bony cleft volume estimation compared with the state-of-the-art.
  • Unsupervised Adversarial Correction of Rigid MR Motion Artifacts

    00:14:48
    0 views
    Motion is one of the main sources for artifacts in magnetic resonance (MR) images. It can have significant consequences on the diagnostic quality of the resultant scans. Previously, supervised adversarial approaches have been suggested for the correction of MR motion artifacts. However, these approaches suffer from the limitation of required paired co-registered datasets for training which are often hard or impossible to acquire. Building upon our previous work, we introduce a new adversarial framework with a new generator architecture and loss function for the unsupervised correction of severe rigid motion artifacts in the brain region. Quantitative and qualitative comparisons with other supervised and unsupervised translation approaches showcase the enhanced performance of the introduced framework.
  • Annotation-Free Gliomas Segmentation Based on a Few Labeled General Brain Tumor Images

    00:09:15
    0 views
    Pixel-level labeling for medical image segmentation is time-consuming and sometimes infeasible. Therefore, using a small amount of labeled data in one domain to help train a reasonable segmentation model for unlabeled data in another domain becomes an important need in medical image segmentation. In this work, we propose a new segmentation framework based on unsupervised domain adaptation and semi-supervised learning, which uses a small amount of labeled general brain tumor images and learns an effective model to segment independent brain glioma images. Our method contains two major parts. First, we use unsupervised domain adaptation to generate synthetic general brain tumor images from the brain glioma images. Then, we apply a semi-supervised learning method to train a segmentation model with a small number of labeled general brain tumor images and the unlabeled synthetic images. The experimental results show that our proposed method can use approximately 10% of the labeled data to achieve accuracy comparable to a model trained with all labeled data.
  • 3D Optical Flow Estimation Combining 3D Census Signature and Total Variation Regularization

    00:13:37
    0 views
    We present a 3D variational optical flow method for fluorescence image sequences which preserves discontinuities in the computed flow field. We propose to minimize an energy function composed of a linearized 3D Census signature-based data term and a total variation (TV) regularizer. To demonstrate the efficiency of our method, we applied it to real sequences depicting collagen networks, where the motion field is expected to be discontinuous. We also compare our results favorably with those of two other motion estimation methods.
  • Low-Dose Cardiac-Gated Spect Via a Spatiotemporal Convolutional Neural Network

    00:01:24
    0 views
    In previous studies, convolutional neural networks (CNNs) have been demonstrated to be effective for suppressing the elevated imaging noise in low-dose single-photon emission computed tomography (SPECT). In this study, we investigate a spatiotemporal CNN model (ST-CNN) to exploit the signal redundancy in both the spatial and temporal domains among the gated frames in a cardiac-gated sequence. In the experiments, we demonstrated the proposed ST-CNN model on a set of 119 clinical acquisitions with imaging dose reduced by four times. The quantitative results show that ST-CNN can lead to further improvement in the reconstructed myocardium in terms of the overall error level and the spatial resolution of the left ventricular (LV) wall. Compared to a spatial-only CNN, ST-CNN decreased the mean-squared error of the reconstructed myocardium by 21.1% and the full-width at half-maximum of the LV wall by 5.3%.
  • Spectral Data Augmentation Techniques to Quantify Lung Pathology from CT-Images

    00:12:20
    0 views
    Data augmentation is of paramount importance in biomedical image processing tasks, which are characterized by inadequate amounts of labelled data, to make the best use of all the data that is present. In-use techniques range from intensity transformations and elastic deformations to linearly combining existing data points to make new ones. In this work, we propose the use of spectral techniques for data augmentation, using the discrete cosine and wavelet transforms. We empirically evaluate our approaches on a CT texture analysis task to detect abnormal lung tissue in patients with cystic fibrosis. Empirical experiments show that the proposed spectral methods perform favourably compared to the existing methods. When used in combination with existing methods, our proposed approach can increase minority-class segmentation performance by a relative 44.1% over a simple replication baseline.
  • Unsupervised Learning of Contextual Information in Multiplex Immunofluorescence Tissue Cytometry

    00:14:40
    0 views
    New machine learning models designed to capture the histopathology of tissues should account not only for the phenotype and morphology of the cells, but also learn complex spatial relationships between them. To achieve this, we represent the tissue as an interconnected graph, where previously segmented cells become nodes of the graph. Then the relationships between cells are learned and embedded into a low-dimensional vector, using a Graph Neural Network. We name this Representation Learning based strategy NARO (NAtural Representation of biological Objects), a fully-unsupervised method that learns how to optimally encode cell phenotypes, morphologies, and cell-to-cell interactions from histological tissues labeled using multiplex immunohistochemistry. To validate NARO, we first use synthetically generated tissues to show that NARO's generated embeddings can be used to cluster cells in meaningful, distinct anatomical regions without prior knowledge of constituent cell types and interactions. Then we test NARO on real multispectral images of human lung adenocarcinoma tissue samples, to show that the generated embeddings can indeed be used to automatically infer regions with different histopathological characteristics.
  • A Completion Network for Reconstruction from Compressed Acquisition

    00:14:50
    0 views
    We consider here the problem of reconstructing an image from a few linear measurements. This problem has many biomedical applications, such as computerized tomography, magnetic resonance imaging and optical microscopy. While this problem has long been addressed by compressed sensing methods, these are now outperformed by deep-learning approaches. However, understanding why a given network architecture works well is still an open question. In this study, we propose to interpret the reconstruction problem as a Bayesian completion problem where the missing measurements are estimated from those acquired. From this point of view, a network emerges that includes a fully connected layer providing the best linear completion scheme. This network has far fewer parameters to learn than direct networks, and it trains more rapidly than image-domain networks that correct pseudo-inverse solutions. Although this study focuses on computational optics, it may provide insight for inverse problems with similar formulations.
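The claim that a fully connected layer can provide the best linear completion can be illustrated in a few lines: for jointly Gaussian measurements, the conditional mean of the missing entries is an affine function of the acquired ones, which ordinary least squares recovers. Function and variable names below are illustrative, not the paper's:

```python
import numpy as np

def fit_completion(train_full, acq_idx):
    """Learn the best linear (affine) completion of missing measurements
    from acquired ones -- the role played by the fully connected layer
    in a completion network.

    train_full: (n_samples, n_meas) complete training measurement vectors.
    acq_idx: indices of the measurements actually acquired at test time.
    Returns (W, b) such that m_missing is approximated by m_acq @ W + b.
    """
    n = train_full.shape[1]
    mis_idx = np.setdiff1d(np.arange(n), acq_idx)
    x = train_full[:, acq_idx]                      # acquired part
    t = train_full[:, mis_idx]                      # part to complete
    x1 = np.hstack([x, np.ones((x.shape[0], 1))])   # affine term
    coef, *_ = np.linalg.lstsq(x1, t, rcond=None)
    return coef[:-1], coef[-1]
```

Once the measurement vector is completed, a fixed linear reconstruction (e.g. the inverse of the full measurement operator) maps it back to the image domain; only the completion weights need to be learned.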
  • Transcriptome-Supervised Classification of Tissue Morphology Using Deep Learning

    00:14:53
    0 views
    Deep learning has proven to successfully learn variations in tissue and cell morphology. Training of such models typically relies on expensive manual annotations. Here we conjecture that spatially resolved gene expression, i.e., the transcriptome, can be used as an alternative to manual annotations. In particular, we trained five convolutional neural networks with patches of different sizes extracted from locations defined by spatially resolved gene expression. The networks are trained to classify tissue morphology related to two different genes, general tissue, as well as background, on an image of fluorescence-stained nuclei in a mouse brain coronal section. Performance is evaluated on an independent tissue section from a different mouse brain, reaching an average Dice score of 0.51. These results may indicate that novel techniques for spatially resolved transcriptomics, together with deep learning, can provide a unique and unbiased way to find genotype-phenotype relationships.
  • Compensatory Brain Connection Discovery in Alzheimer's Disease

    00:11:29
    0 views
    Identification of the specific brain networks that are vulnerable or resilient in neurodegenerative diseases can help to better understand the disease effects and derive new connectomic imaging biomarkers. In this work, we use brain connectivity to find pairs of structural connections that are negatively correlated with each other across Alzheimer's disease (AD) and healthy populations. Such anti-correlated brain connections can be informative for identifying compensatory neuronal pathways and the mechanisms underlying brain networks' resilience to AD. We find significantly anti-correlated connections in a public diffusion-MRI database and then validate the results on other databases.
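The screening for anti-correlated connection pairs can be sketched as follows. This is a toy version: the paper additionally assesses statistical significance, which is omitted here, and the threshold is illustrative:

```python
import numpy as np
from itertools import combinations

def anticorrelated_pairs(conn, thresh=-0.5):
    """Find pairs of structural connections whose strengths are
    negatively correlated across subjects.

    conn: (n_subjects, n_connections) connection-strength matrix.
    Returns a list of (i, j, r) with Pearson correlation r below thresh.
    """
    r = np.corrcoef(conn, rowvar=False)   # connection-by-connection matrix
    out = []
    for i, j in combinations(range(conn.shape[1]), 2):
        if r[i, j] < thresh:
            out.append((i, j, float(r[i, j])))
    return out
```

A pair (i, j) surviving this screen is a candidate compensatory pathway: when connection i weakens across the population, connection j tends to strengthen.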
  • Spectral Graph Transformer Networks for Brain Surface Parcellation

    00:11:36
    0 views
    The analysis of the brain surface modeled as a graph mesh is a challenging task. Conventional deep learning approaches often rely on data lying in the Euclidean space. As an extension to irregular graphs, convolution operations are defined in the Fourier or spectral domain. This spectral domain is obtained by decomposing the graph Laplacian, which captures relevant shape information. However, the spectral decomposition across different brain graphs causes inconsistencies between the eigenvectors of individual spectral domains, which makes the graph learning algorithm fail. Current spectral graph convolution methods handle this variance by separately aligning the eigenvectors to a reference brain in a slow iterative step. This paper presents a novel approach for learning the transformation matrix required for aligning brain meshes using a direct data-driven approach. Our alignment and graph processing method provides a fast analysis of brain surfaces. The novel Spectral Graph Transformer (SGT) network proposed in this paper uses very few randomly sub-sampled nodes in the spectral domain to learn the alignment matrix for multiple brain surfaces. We validate the use of this SGT network along with a graph convolution network to perform cortical parcellation. Our method, evaluated on 101 manually labeled brain surfaces, shows improved parcellation performance over a no-alignment strategy and gains a significant speedup (1400-fold) over traditional iterative alignment approaches.
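The spectral decomposition underlying such methods, and the ambiguity that makes alignment necessary, can be shown with a minimal NumPy sketch of a normalized-Laplacian embedding:

```python
import numpy as np

def spectral_embedding(adj, k=3):
    """Embed a graph's nodes via the first k non-trivial eigenvectors of
    the symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.

    Note the inconsistency the paper targets: each eigenvector is defined
    only up to sign (and up to rotation within repeated eigenvalues), so
    embeddings computed for different brain meshes need alignment before
    a shared spectral convolution can be applied.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    lap = np.eye(adj.shape[0]) - (d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :])
    w, v = np.linalg.eigh(lap)       # eigenvalues in ascending order
    return v[:, 1:k + 1]             # skip the trivial constant mode
```

The embedding coordinates capture the mesh's shape (e.g. the Fiedler vector separates weakly connected regions), which is exactly the information spectral graph convolutions operate on.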
  • Automated Hemorrhage Detection from Coarsely Annotated Fundus Images in Diabetic Retinopathy

    00:12:08
    0 views
    In this paper, we proposed and validated a novel and effective pipeline for automatically detecting hemorrhage from coarsely-annotated fundus images in diabetic retinopathy. The proposed framework consisted of three parts: image preprocessing, training data refining, and object detection using a convolutional neural network with label smoothing. Contrast limited adaptive histogram equalization and adaptive gamma correction with weighting distribution were adopted to improve image quality by enhancing image contrast and correcting image illumination. To refine coarsely-annotated training data, we designed a bounding box refining network (BBR-net) to provide more accurate bounding box annotations. Combined with label smoothing, RetinaNet was implemented to alleviate mislabeling issues and automatically detect hemorrhages. The proposed method was trained and evaluated on a publicly available IDRiD dataset and also one of our private datasets. Experimental results showed that our BBR-net could effectively refine manually-delineated coarse hemorrhage annotations, with the average IoU being 0.8715 when compared with well-annotated bounding boxes. The proposed hemorrhage detection pipeline was compared to several alternatives and superior performance was observed.
  • Fully Unsupervised Probabilistic Noise2Void

    00:15:05
    0 views
    Image denoising is the first step in many biomedical image analysis pipelines, and Deep Learning (DL) based methods are currently the best performing. A new category of DL methods, such as Noise2Void or Noise2Self, can be used fully unsupervised, requiring nothing but the noisy data. However, this comes at the price of reduced reconstruction quality. The recently proposed Probabilistic Noise2Void (PN2V) improves results, but requires an additional noise model for which calibration data needs to be acquired. Here, we present improvements to PN2V that (i) replace histogram-based noise models with parametric noise models, and (ii) show how suitable noise models can be created even in the absence of calibration data. This is a major step, since it renders PN2V fully unsupervised. We demonstrate that all proposed improvements are not only academic but indeed relevant.
  • Temporally Adaptive-Dynamic Sparse Network for Modeling Disease Progression

    00:13:49
    0 views
    Alzheimer's disease (AD) is a neurodegenerative disorder with progressive impairment of memory and cognitive functions. Sparse coding (SC) has been demonstrated to be an efficient and effective method for AD diagnosis and prognosis. However, previous SC methods usually focus on the baseline data while ignoring the consistent longitudinal features, with their strong sparsity pattern, along the disease progression. Additionally, SC methods extract sparse features from image patches separately rather than learning dictionary atoms across the entire subject. To address these two concerns and comprehensively capture temporal, subject-level sparse features for earlier and better discriminability of AD, we propose a novel supervised SC network, termed Temporally Adaptive-Dynamic Sparse Network (TADsNet), to uncover the sequential correlation and native subject-level codes from longitudinal brain images. Our work adaptively updates the sparse codes to impose temporally regularized correlation and dynamically mines the dictionary atoms to make use of entire subject-level features. Experimental results on the ADNI-I cohort validate the superiority of our approach.
  • Interpreting Age Effects of Human Fetal Brain From Spontaneous FMRI Using Deep 3D Convolutional Neural Networks

    00:11:47
    0 views
    Understanding human fetal neurodevelopment is of great clinical importance, as abnormal development is linked to adverse neuropsychiatric outcomes after birth. With the advances in functional Magnetic Resonance Imaging (fMRI), recent studies focus on brain functional connectivity and have provided new insight into the development of the human brain before birth. Deep Convolutional Neural Networks (CNNs) have achieved remarkable success in learning directly from image data, yet they have not been applied to fetal fMRI for understanding fetal neurodevelopment. Here, we bridge this gap with a novel application of 3D CNNs to fetal blood oxygen-level dependent (BOLD) resting-state fMRI data. We build a supervised CNN to isolate variation in fMRI signals that distinguishes younger vs. older fetal age groups. Sensitivity analysis is then performed to identify brain regions in which changes in BOLD signal are strongly associated with fetal brain age. Based on this analysis, we discovered that the regions that most strongly differentiate the groups are largely bilateral, share a similar distribution in older and younger age groups, and are areas of heightened metabolic activity in early human development.
  • Automatic Brain Organ Segmentation with 3d Fully Convolutional Neural Network for Radiation Therapy Treatment Planning

    00:15:52
    0 views
    3D organ contouring is an essential step in radiation therapy treatment planning for organ dose estimation as well as for optimizing plans to reduce organs-at-risk doses. Manual contouring is time-consuming, and its inter-clinician variability adversely affects outcome studies. These organs also vary dramatically in size --- by up to two orders of magnitude in volume. In this paper, we present BrainSegNet, a novel 3D fully convolutional neural network (FCNN) based approach for the automatic segmentation of brain organs. BrainSegNet takes a multiple-resolution-path approach and uses a weighted loss function to solve the major challenge of large variability in organ sizes. We evaluated our approach on a dataset of 46 brain CT image volumes with corresponding expert organ contours as reference. Compared with LiviaNet and V-Net, BrainSegNet has superior performance in segmenting tiny or thin organs, such as the chiasm, optic nerves, and cochlea, and outperforms these methods in segmenting large organs as well. BrainSegNet can reduce the manual contouring time for a volume from an hour to less than two minutes, and holds high potential to improve the efficiency of the radiation therapy workflow.
  • Breast Lesion Segmentation in Ultrasound Images with Limited Annotated Data

    00:14:10
    0 views
    Ultrasound (US) is one of the most commonly used imaging modalities in both diagnosis and surgical interventions due to its low cost, safety, and non-invasive characteristics. US image segmentation remains uniquely challenging because of the presence of speckle noise. As manual segmentation requires considerable effort and time, the development of automatic segmentation algorithms has attracted researchers' attention. Although recent methodologies based on convolutional neural networks have shown promising performance, their success relies on the availability of a large amount of training data, which is prohibitively difficult to obtain for many applications. Therefore, in this study we propose the use of simulated US images and natural images as auxiliary datasets to pre-train our segmentation network, which is then fine-tuned with limited in vivo data. We show that with as few as 19 in vivo images, fine-tuning the pre-trained network improves the Dice score by 21% compared to training from scratch. We also demonstrate that if the same number of natural and simulated US images is available, pre-training on simulated data is preferable.
