IEEE ISBI 2020 Virtual Conference April 2020


  • A Fully 3D Cascaded Framework for Pancreas Segmentation

    00:07:33
    0 views
    Convolutional Neural Networks (CNNs) have achieved remarkable results for many medical image segmentation tasks. However, segmenting small and polymorphous organs (e.g., the pancreas) in 3D CT images remains highly challenging due to the complexity of such organs and the difficulty of learning 3D context information under limited GPU memory. In this paper, we present a fully 3D cascaded framework for pancreas segmentation in 3D CT images. We develop a 3D detection network (PancreasNet) to regress the locations of pancreas regions, and two different scales of a 3D segmentation network (SEVoxNet) to segment the pancreas in a cascaded manner based on the detection results of PancreasNet. Experiments on the public NIH pancreas segmentation dataset show that we achieve a mean DSC of 85.93% and a mean JI of 75.38%, outperforming state-of-the-art results with the fastest inference time reported to date (~200 times faster).
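    The cascade described above is, at a high level, a detect-then-segment pipeline. The sketch below illustrates that flow only; `pancreas_net`, `sevox_coarse`, `sevox_fine`, the box format and the crop margin are hypothetical placeholders, not the authors' implementation.

```python
import torch

def cascaded_pancreas_segmentation(ct_volume, pancreas_net, sevox_coarse, sevox_fine, margin=8):
    """Illustrative cascade: detect a pancreas bounding box, then segment the cropped
    region at two scales. `ct_volume` is a (1, 1, D, H, W) tensor; all networks are
    hypothetical pretrained modules."""
    with torch.no_grad():
        # Stage 1: regress a bounding box (z0, y0, x0, z1, y1, x1) around the pancreas.
        box = pancreas_net(ct_volume).round().long().squeeze()
        z0, y0, x0, z1, y1, x1 = box.tolist()
        # Expand the box by a safety margin and clamp it to the volume extent.
        D, H, W = ct_volume.shape[2:]
        z0, y0, x0 = max(z0 - margin, 0), max(y0 - margin, 0), max(x0 - margin, 0)
        z1, y1, x1 = min(z1 + margin, D), min(y1 + margin, H), min(x1 + margin, W)
        crop = ct_volume[:, :, z0:z1, y0:y1, x0:x1]

        # Stage 2: coarse segmentation of the crop, then a finer pass conditioned on it.
        coarse_prob = torch.sigmoid(sevox_coarse(crop))
        fine_prob = torch.sigmoid(sevox_fine(torch.cat([crop, coarse_prob], dim=1)))

        # Paste the refined probability back into a full-size mask.
        full_prob = torch.zeros_like(ct_volume)
        full_prob[:, :, z0:z1, y0:y1, x0:x1] = fine_prob
    return full_prob > 0.5
```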
  • Hierarchy-Constrained Network for Corneal Tissue Segmentation Based on Anterior Segment OCT Images

    00:03:37
    0 views
    Anterior segment optical coherence tomography (AS-OCT) is widely used to observe corneal tissue structures in clinical ophthalmology. Accurate segmentation of corneal tissue interfaces is essential for corneal disease diagnosis and surgical planning. However, scattered image noise and keratopathy cause the corneal tissue interface fitting results of existing methods to deviate. In this paper, we propose a hierarchy-constrained network, which combines hierarchical features through a progressive feature-extraction module and a boundary constraint to overcome these challenges. In addition, a multi-level prediction fusion module is integrated into the network, enabling the output node to sufficiently absorb features extracted at various levels. Extensive experimental results on two datasets containing cornea images with multiple lesions show that our proposed method distinctly improves the accuracy of corneal tissue interface segmentation and outperforms other existing methods.
  • Deep Random Forests for Small Sample Size Prediction with Medical Imaging Data

    00:10:47
    0 views
    Deep neural networks represent the state of the art for computer-aided medical imaging assessment, e.g. lesion detection, organ segmentation and disease classification. While their superior performance on large datasets is a clear argument, medical imaging data is often small and highly heterogeneous. In combination with the typical parameter count of deep neural networks, this often leads to overfitting and poor generalization performance. We propose a straightforward combination of random forests and deep neural networks for superior performance on small medical imaging datasets, and provide an extensive evaluation of survival prediction for metastatic colorectal cancer patients using computed tomography imaging data, with our proposed method clearly outperforming other approaches.
  • Automated Quantification of Macular Vasculature Changes from OCTA Images of Hematologic Patients

    00:14:52
    0 views
    Abnormal blood compositions can lead to abnormal blood flow which can influence the macular vasculature. Optical coherence tomography angiography (OCTA) makes it possible to study the macular vasculature and potential vascular abnormalities induced by hematological disorders. Here, we investigate vascular changes in control subjects and in hematologic patients before and after treatment. Since these changes are small, they are difficult to notice in the OCTA images. To quantify vascular changes, we propose a method for combined capillary registration, dictionary-based segmentation and local density estimation. Using this method, we investigate three patients and five controls, and our results show that we can detect small changes in the vasculature in patients with large changes in blood composition.
  • Vertebra-Focused Landmark Detection for Scoliosis Assessment

    00:13:39
    0 views
    Adolescent idiopathic scoliosis (AIS) is a lifetime disease that arises in children. Accurate estimation of the Cobb angles of the scoliosis is essential for clinicians to make diagnosis and treatment decisions. The Cobb angles are measured according to the vertebra landmarks. Existing regression-based methods for vertebra landmark detection typically suffer from large dense mapping parameters and inaccurate landmark localization. Segmentation-based methods tend to predict connected or corrupted vertebra masks. In this paper, we propose a novel vertebra-focused landmark detection method. Our model first localizes the vertebra centers, based on which it then traces the four corner landmarks of each vertebra through the learned corner offsets. In this way, our method is able to keep the order of the landmarks. The comparison results demonstrate the merits of our method in both Cobb angle measurement and landmark detection on low-contrast and ambiguous X-ray images. Code is available at: https://github.com/yijingru/Vertebra-Landmark-Detection.
  • Bi-Modal Ultrasound Breast Cancer Diagnosis Via Multi-View Deep Neural Network SVM

    00:11:31
    1 view
    B-mode ultrasound and ultrasound elastography are two routine diagnostic modalities for breast cancer. Unfortunately, few efforts have been devoted to learning from bi-modal ultrasound jointly. By combining multi-view deep mapping-based feature representation with SVM-based classification, we propose a novel integrated deep learning model, the multi-view deep neural network support vector machine (MDNNSVM), for breast cancer diagnosis on bi-modal ultrasound. In particular, multi-view representation learning extracts and fuses the various ultrasound characteristics (including the hardness information of soft tissue) effectively to differentiate benign breast lesions from malignant ones. Further, an SVM-based objective function is used to learn a classifier jointly with the DNN to significantly improve diagnostic accuracy. Experimental results on a real-world breast cancer dataset verify the effectiveness of the MDNNSVM, which achieves the best classification accuracy (86.36%) and AUC (0.9079).
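    As a rough illustration of coupling a feature-learning network with an SVM-style objective, the sketch below trains a two-view network with a squared hinge loss plus a weight penalty; the layer sizes, fusion scheme and labels in {-1, +1} are assumptions, not the published MDNNSVM.

```python
import torch
import torch.nn as nn

class MDNNSVMSketch(nn.Module):
    """Toy two-view network (B-mode and elastography feature vectors) with a linear
    SVM-style head; purely illustrative, layer sizes are placeholders."""
    def __init__(self, in_dim_bmode, in_dim_elasto, hidden=64):
        super().__init__()
        self.view_b = nn.Sequential(nn.Linear(in_dim_bmode, hidden), nn.ReLU())
        self.view_e = nn.Sequential(nn.Linear(in_dim_elasto, hidden), nn.ReLU())
        self.svm_head = nn.Linear(2 * hidden, 1)  # decision function f(x) = w^T x + b

    def forward(self, x_bmode, x_elasto):
        fused = torch.cat([self.view_b(x_bmode), self.view_e(x_elasto)], dim=1)
        return self.svm_head(fused).squeeze(1)

def svm_hinge_objective(scores, labels, model, c=1.0):
    # Squared hinge loss (labels in {-1, +1}) plus an L2 penalty on the SVM weights,
    # i.e. a soft-margin objective trained jointly with the feature network.
    hinge = torch.clamp(1.0 - labels * scores, min=0.0) ** 2
    return c * hinge.mean() + 0.5 * model.svm_head.weight.pow(2).sum()
```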
  • Automatic Segmentation of White Matter Tracts Using Multiple Brain MRI Sequences

    00:13:00
    0 views
    White matter tractography mapping is a must in neuro-surgical planning and navigation to minimize the risk of iatrogenic damage. Clinical tractography pipelines still require time-consuming manual operations and significant neuro-anatomical expertise to accurately seed the tracts and remove tractography outliers. The automatic segmentation of white matter (WM) tracts using deep neural networks has recently been demonstrated. However, most works in this area use a single brain MRI sequence, whereas neuro-radiologists rely on two or more MRI sequences, e.g. T1w and the principal direction of diffusion (PDD), for pre-surgical WM mapping. In this work, we propose a novel neural architecture for the automatic segmentation of white matter tracts by fusing multiple MRI sequences. The proposed method is demonstrated and validated on joint T1w and PDD input sequences. It is shown to compare favorably against state-of-the-art methods (Vnet, TractSeg) on the Human Connectome Project (HCP) brain scans dataset for clinically important WM tracts.
  • A Multi-Modality Fusion Network Based on Attention Mechanism for Brain Tumor Segmentation

    00:13:42
    0 views
    Brain tumor segmentation in magnetic resonance images (MRI) is necessary for diagnosis, monitoring and treatment, while manual segmentation is time-consuming, labor-intensive and subjective. In addition, a single modality cannot provide enough information for accurate segmentation. In this paper, we propose a multi-modality fusion network based on an attention mechanism for brain tumor segmentation. Our network includes four channel-independent encoding paths to independently extract features from the four modalities, a feature fusion block to fuse the four feature sets, and a decoding path to finally segment the tumor. The channel-independent encoding paths can capture modality-specific features. However, not all of the features extracted from the encoders are useful for segmentation. We therefore propose to use an attention mechanism to guide the fusion block. In this way, the modality-specific features can be separately recalibrated along the channel and spatial paths, which suppresses less informative features and emphasizes the useful ones. The obtained shared latent feature representation is finally projected by the decoder to the brain tumor segmentation. Experimental results on the BraTS 2017 dataset demonstrate the effectiveness of our proposed method.
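    A minimal sketch of channel-and-spatial recalibration of fused multi-modal features, in the spirit of the attention-guided fusion block described above; the squeeze-and-excitation-style gates, kernel sizes and reduction ratio are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class AttentionFusionSketch(nn.Module):
    """Illustrative fusion block: recalibrate concatenated multi-modal 3D features
    along the channel axis (squeeze-and-excitation style) and the spatial axes."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),                       # squeeze: one value per channel
            nn.Conv3d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, feats_per_modality):
        # feats_per_modality: list of (N, C_i, D, H, W) encoder outputs, one per MRI modality.
        x = torch.cat(feats_per_modality, dim=1)
        x = x * self.channel_gate(x)   # suppress less informative channels
        x = x * self.spatial_gate(x)   # emphasize informative spatial locations
        return x
```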
  • Low-Shot Learning of Automatic Dental Plaque Segmentation Based on Local-To-Global Feature Fusion

    00:17:03
    0 views
    Early detection of dental plaque could prevent periodontal diseases and dental caries; however, plaque is difficult to recognize without a medical dyeing reagent due to the low contrast between dental plaque and teeth. To combat this problem, this paper introduces a novel low-shot learning method for intelligent dental plaque segmentation directly on oral endoscope images. The key contribution is to conduct low-shot learning at the super-pixel level and to integrate the super-pixels' global and local features towards better segmentation results. Our rationale is that the super-pixel-based CNN feature focuses on the statistical distribution of plaque color, the heat kernel signature (HKS) captures the local-to-global structural relationships in the regions surrounding the plaque area, and the circle-LBP feature depicts the local texture pattern of the plaque area. The experimental results confirm that our method outperforms state-of-the-art methods on small-scale training datasets, and the user study demonstrates that our method is more accurate than conventional manual delineations by experienced dentists.
  • Digital Breast Tomosynthesis Reconstruction with Deep Neural Network for Improved Contrast and In-Depth Resolution

    00:13:28
    0 views
    Digital breast tomosynthesis (DBT) provides a 3D reconstruction which reduces the superposition and overlapping of breast tissues compared to mammography, leading to increased sensitivity and specificity. However, due to the limited angular sampling, DBT images are still accompanied by severe artifacts and limited in-depth resolution. In this paper, we propose a deep learning-based DBT reconstruction method to mitigate the limited-angular-sampling artifacts and improve in-depth resolution. An unroll-type neural network was used with decoupled training for each unroll to reduce training-time computational cost. A novel region-of-interest loss on inserted microcalcifications was further proposed to improve the spatial resolution and contrast of the microcalcifications. The network was trained and tested on 176 realistic breast phantoms, and improved in-plane contrast (3.17 versus 0.43, p
  • H-Scan Format for Classification of Ultrasound Scatterers and Matched Comparison to Histology Measurements

    00:14:41
    0 views
    H-scan imaging is a new ultrasound (US) technique used to visualize the relative size of acoustic scatterers. The purpose of this study was to evaluate the sensitivity of H-scan US imaging to scatterer size and to compare it with histological sections of tumor tissue. Image data were acquired using a programmable US scanner (Vantage 256, Verasonics Inc.) equipped with a 256-element L22-8v capacitive micromachined ultrasonic transducer (CMUT, Kolo Medical). To generate the H-scan US image, three parallel convolution filters were applied to the radiofrequency (RF) data sequences to measure the relative strength of the backscattered US signals. H-scan US imaging was used to image a gelatin-based heterogeneous phantom and breast tumor-bearing mice (N = 4). Excised tumor tissue underwent histologic processing, and the cells were segmented to compute physical size measurements at the cellular level, followed by spatial correlation with H-scan US image features. The in vitro results show an improvement in the contrast-to-noise ratio (CNR) of 44.1% for H-scan compared to B-scan US imaging. Preliminary animal studies revealed a statistically significant relationship between H-scan US and physical size measures at the cell level (R2 > 0.95, p
  • DeepFocus: A Few-Shot Microscope Slide Auto-Focus Using a Sample Invariant CNN-Based Sharpness Function

    00:10:35
    0 views
    Autofocus (AF) methods are extensively used in biomicroscopy, for example to acquire timelapses, where the imaged objects tend to drift out of focus. AF algorithms determine an optimal distance by which to move the sample back into the focal plane. Current hardware-based methods require modifying the microscope, while image-based algorithms either rely on many images to converge to the sharpest position or need training data and models specific to each instrument and imaging configuration. Here we propose DeepFocus, an AF method we implemented as a Micro-Manager plugin, and characterize its Convolutional Neural Network (CNN)-based sharpness function, which we observed to be depth co-variant and sample-invariant. Sample invariance allows our AF algorithm to converge to an optimal axial position within as few as three iterations, using a model trained once for use with a wide range of optical microscopes and a single instrument-dependent calibration stack acquisition of a flat (but arbitrary) textured object. From experiments carried out on both synthetic and experimental data, we observed an average precision, given 3 measured images, of 0.30 ± 0.16 µm with a 10×, NA 0.3 objective. We foresee that this performance and low image number will help limit photodamage during acquisitions with light-sensitive samples.
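    One plausible (hypothetical) way to exploit a depth-covariant, sample-invariant sharpness function for autofocus is sketched below: probe a few axial positions, fit a parabola to the CNN sharpness scores and jump to its vertex. The stage and acquisition callables are placeholders and the loop is not the authors' algorithm.

```python
import numpy as np

def autofocus_sketch(acquire_image, move_stage_to, sharpness_cnn, z0, probe_step=2.0, n_iter=3):
    """Illustrative autofocus loop: probe three axial positions around the current one,
    fit a parabola to the sharpness values, and move toward its vertex.
    `acquire_image`, `move_stage_to` and `sharpness_cnn` are hypothetical callables."""
    z = z0
    for _ in range(n_iter):
        zs = np.array([z - probe_step, z, z + probe_step])
        scores = []
        for zi in zs:
            move_stage_to(zi)
            scores.append(sharpness_cnn(acquire_image()))
        # Quadratic fit s(z) = a z^2 + b z + c; the sharpest plane is at z = -b / (2a).
        a, b, _ = np.polyfit(zs, np.array(scores), 2)
        if a < 0:                       # well-formed peak
            z = float(np.clip(-b / (2 * a), zs[0] - probe_step, zs[-1] + probe_step))
        probe_step /= 2.0               # shrink the search bracket each iteration
    move_stage_to(z)
    return z
```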
  • Adaptive Locally Low Rank and Sparsity Constrained Reconstruction for Accelerated Dynamic MRI

    00:14:33
    0 views
    Globally low rank and sparsity (GLRS) constrained techniques perform well under high acceleration factors, but may blur spatial details. Locally low rank and sparsity (LLRS) constrained techniques preserve the spatial details better, but are sensitive to the size of the local patches. We propose a novel adaptive locally low rank and sparsity (ALLRS) constrained reconstruction for accelerated dynamic MRI that preserves the spatial details in heterogeneous regions, and smooths preferentially in homogeneous regions by adapting the local patch size to the level of spatial details. Results from in vivo dynamic cardiac and liver MRI demonstrate that ALLRS achieves improved sharpness as well as peak signal-to-noise ratio and visual information fidelity index with suppressed under-sampling artifacts for up to 16-fold undersampling.
  • Directional Beam Focusing Based Dual Apodization Approach for Improved Vector Flow Imaging

    00:12:29
    0 views
    The design of apodization and windowing functions forms a predominant part of the beamforming process in ultrasound imaging. However, detailed analysis of apodization in the case of beamforming for flow imaging is very limited in the literature. This paper introduces the concept of a dual apodization technique in vector triangulation for ultrasound-based vector flow imaging. The approach utilizes the idea of multiple apodizations to induce a steering effect on receive along with sidelobe suppression for the delay-compensated radio-frequency (RF) signals. The method is investigated using extensive simulations of transverse flows with different flow profiles at different velocities. The simulation study using a 192-element 3 MHz linear array shows an improvement in resolution and better clutter suppression. The proposed approach is further analyzed using various conventional and data-adaptive apodization techniques for a gradient flow profile. The error variance in the velocity magnitude estimate is as low as 5.0251×10-4 and the mean angle error is +0.7358° using Hanning-Gaussian apodization for a transverse gradient flow with a peak velocity of 0.25 m/s.
  • Relational Learning between Multiple Pulmonary Nodules Via Deep Set Attention Transformers

    00:14:09
    0 views
    Diagnosis and treatment of multiple pulmonary nodules are clinically important but challenging. Prior studies on nodule characterization apply solitary-nodule approaches to patients with multiple nodules, which ignores the relations between nodules. In this study, we propose a multiple instance learning (MIL) approach and empirically demonstrate the benefit of learning the relations between multiple nodules. By treating the multiple nodules from the same patient as a whole, critical relational information between solitary-nodule voxels is extracted. To our knowledge, this is the first study to learn the relations between multiple pulmonary nodules. Inspired by recent advances in the natural language processing (NLP) domain, we introduce a self-attention transformer equipped with a 3D CNN, named NoduleSAT, to replace the typical pooling-based aggregation in multiple instance learning. Extensive experiments on lung nodule false positive reduction on the LUNA16 database and malignancy classification on the LIDC-IDRI database validate the effectiveness of the proposed method.
  • Image-Domain Material Decomposition Using an Iterative Neural Network for Dual-Energy CT

    00:14:51
    0 views
    Image-domain material decomposition is susceptible to noise and artifacts in dual-energy CT (DECT) attenuation images. To obtain high-quality material images from DECT, data-driven methods are attracting widespread attention. Iterative neural network (INN) approaches have achieved high image reconstruction quality and low generalization error in several inverse imaging problems. BCD-Net is an INN whose architecture is constructed by generalizing a block coordinate descent (BCD) algorithm that solves model-based image reconstruction using learned convolutional regularizers. We propose a new INN architecture for DECT material decomposition by replacing the model-based image reconstruction module of BCD-Net with a model-based image decomposition (MBID) module. Experiments with the extended cardiac-torso (XCAT) phantom and patient data show that the proposed method greatly improves image decomposition quality compared to a conventional MBID method using an edge-preserving hyperbola regularizer and a state-of-the-art learned MBID method that uses different pre-learned sparsifying transforms for different materials.
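    The general shape of an unrolled iterative neural network of this kind can be sketched as an alternation between a learned refiner and a model-based update; the `refiners` and `mbid_step` callables below are hypothetical stand-ins, not BCD-Net itself.

```python
import torch

def inn_decomposition_sketch(attn_images, refiners, mbid_step, n_unrolls=4):
    """Illustrative unrolled loop: alternate a learned refining module with a
    model-based image decomposition (MBID) update. `refiners` is a list of per-unroll
    networks; `mbid_step` is a hypothetical callable implementing the data-consistent
    decomposition update given the DECT measurements and an optional prior estimate."""
    # attn_images: (N, 2, H, W) dual-energy attenuation images.
    x = mbid_step(attn_images, prior=None)          # initial material estimate, e.g. (N, M, H, W)
    for k in range(n_unrolls):
        denoised = refiners[k](x)                   # learned "denoised" material images
        x = mbid_step(attn_images, prior=denoised)  # pull the estimate back toward the data
    return x
```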
  • Segmentation-Based Method Combined with Dynamic Programming for Brain Midline Delineation

    00:13:31
    0 views
    Midline-related pathological image features are crucial for evaluating the severity of brain compression caused by stroke or traumatic brain injury (TBI). Automated midline delineation not only improves the assessment and clinical decision making for patients with stroke symptoms or head trauma but also reduces the time to diagnosis. Nevertheless, most previous methods model the midline by localizing anatomical points, which are hard to detect or even missing in severe cases. In this paper, we formulate brain midline delineation as a segmentation task and propose a three-stage framework. The proposed framework first aligns an input CT image into the standard space. Then, the aligned image is processed by a midline detection network (MD-Net) integrated with a CoordConv layer and a Cascade AtrousConv module to obtain the probability map. Finally, we formulate the optimal midline selection as a pathfinding problem to address the discontinuity of the delineated midline. Experimental results show that our proposed framework achieves superior performance on one in-house dataset and one public dataset.
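    The pathfinding step can be illustrated with a simple dynamic program over the midline probability map: choose one column per row, allowing at most a one-column shift between rows, so the delineated midline stays continuous. This is a generic sketch, not the paper's exact formulation.

```python
import numpy as np

def optimal_midline_path(prob_map):
    """Illustrative dynamic-programming path search: pick one column per row so that
    the summed probability is maximal and the path moves at most one column between
    consecutive rows. Returns the column index per row."""
    H, W = prob_map.shape
    score = np.full((H, W), -np.inf)
    parent = np.zeros((H, W), dtype=int)
    score[0] = prob_map[0]
    for r in range(1, H):
        for c in range(W):
            lo, hi = max(c - 1, 0), min(c + 2, W)
            best = lo + int(np.argmax(score[r - 1, lo:hi]))
            score[r, c] = prob_map[r, c] + score[r - 1, best]
            parent[r, c] = best
    # Backtrack from the best final column to recover the full path.
    path = [int(np.argmax(score[-1]))]
    for r in range(H - 1, 0, -1):
        path.append(parent[r, path[-1]])
    return np.array(path[::-1])
```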
  • Cancer Sensitive Cascaded Networks (CSC-Net) for Efficient Histopathology Whole Slide Image Segmentation

    00:14:20
    0 views
    Automatic segmentation of histopathological whole slide images (WSIs) is challenging due to their high resolution and large scale. In this paper, we propose a cascade strategy for fast segmentation of WSIs based on convolutional neural networks. Our segmentation framework consists of two U-Net structures which are trained with samples from different magnifications. Meanwhile, we design a novel cancer sensitive loss (CSL), which is effective in improving the sensitivity of cancer segmentation in the first network and reducing the false positive rate of the second network. We conducted experiments on the ACDC-LungHP dataset and compared our method with two state-of-the-art segmentation methods. The experimental results demonstrate that the proposed method improves segmentation accuracy while reducing the amount of computation. The Dice score coefficient and precision of lung cancer segmentation are 0.694 and 0.947, respectively, which are superior to the compared methods.
  • Spectral Characterization of Functional MRI Data on Voxel-Resolution Cortical Graphs

    00:14:59
    0 views
    The human cortical layer exhibits a convoluted morphology that is unique to each individual. Conventional volumetric fMRI processing schemes overlook the rich information provided by the underlying anatomy. We present a method to study fMRI data on subject-specific cerebral hemisphere cortex (CHC) graphs, which encode the cortical morphology at the resolution of voxels. We study graph spectral energy metrics associated with fMRI data of 100 subjects from the Human Connectome Project database, across seven tasks. Experimental results demonstrate the strength of the CHC graphs' Laplacian eigenvector bases in capturing subtle spatial patterns specific to different functional loads as well as experimental conditions within each task.
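    As a toy illustration of graph spectral analysis of fMRI data (on a small graph; voxel-resolution CHC graphs would require sparse eigensolvers), one can project a signal onto the leading Laplacian eigenvectors and inspect the energy per mode:

```python
import numpy as np

def graph_spectral_energy(adjacency, fmri_signal, n_modes=100):
    """Illustrative graph spectral energy: eigendecompose the graph Laplacian of a
    small cortical graph, project a voxel-wise fMRI signal onto the leading
    eigenvectors, and report the normalized energy captured per mode."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)          # ascending eigenvalues
    basis = eigvecs[:, :n_modes]                          # low-frequency graph modes
    coeffs = basis.T @ fmri_signal                        # graph Fourier coefficients
    energy = coeffs ** 2
    return eigvals[:n_modes], energy / energy.sum()       # spectral energy profile
```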
  • A Physics-Motivated DNN for X-Ray CT Scatter Correction

    00:14:39
    1 view
    The scattering of photons by the imaged object in X-ray computed tomography (CT) produces degradations of the reconstructions in the form of streaks, cupping, shading artifacts and decreased contrast. We describe a new physics-motivated deep-learning-based method to estimate scatter and correct for it in the acquired projection measurements. The method incorporates both an initial reconstruction and the scatter-corrupted measurements using a specific deep neural network architecture and a cost function tailored to the problem. Numerical experiments show significant improvement over a recent projection-based deep neural network method.
  • Metal Artifact Reduction and Intra-Cochlear Anatomy Segmentation in CT Images of the Ear with a Multi-Resolution Multi-Task 3D Network

    00:15:05
    0 views
    Segmenting the intra-cochlear anatomy structures (ICAs) in post-implantation CT (Post-CT) images of cochlear implant (CI) recipients is challenging due to the strong artifacts produced by the metallic CI electrodes. We propose a multi-resolution multi-task deep network which synthesizes an artifact-free image and segments the ICAs in the Post-CT images simultaneously. The output size of the synthesis branch is 1/64 of that of the segmentation branch. This reduces the memory usage for training while generating segmentation labels at high resolution. In this preliminary study, we use the segmentation results of an automatic method as the ground truth to supervise the training of our model, and we achieve a median Dice index of 0.792. Our experiments also confirm the usefulness of multi-task learning.
  • ESCELL: Emergent Symbolic Cellular Language

    00:10:26
    0 views
    We present ESCELL, a method for developing an emergent symbolic language of communication between multiple agents reasoning about cells. We show how agents are able to cooperate and communicate successfully, using symbols similar to human language, to accomplish a task in the form of a referential game (Lewis' signaling game). In one form of the game, a sender and a receiver observe a set of cells from 5 different cell phenotypes. The sender is told one cell is a target and is allowed to send the receiver one symbol from a fixed vocabulary of arbitrary size. The receiver relies on the information in the symbol to identify the target cell. We train the sender and receiver networks to develop an innate emergent language between themselves to accomplish this task. We observe that the networks are able to successfully identify cells from the 5 different phenotypes with an accuracy of 93.2%. We also introduce a new form of the signaling game where the sender is shown one image instead of all the images that the receiver sees. The networks successfully develop an emergent language, reaching an identification accuracy of 77.8%.
  • Bending Loss Regularized Network for Nuclei Segmentation in Histopathology Images

    00:16:40
    0 views
    Separating overlapped nuclei is a major challenge in histopathology image analysis. Recently published approaches have achieved promising overall performance on public datasets; however, their performance in segmenting overlapped nuclei is limited. To address this issue, we propose a bending loss regularized network for nuclei segmentation. The proposed bending loss assigns high penalties to contour points with large curvature and small penalties to contour points with small curvature. Minimizing the bending loss avoids generating contours that encompass multiple nuclei. The proposed approach is validated on the MoNuSeg dataset using five quantitative metrics. It outperforms six state-of-the-art approaches on the following metrics: Aggregated Jaccard Index, Dice, Recognition Quality, and Panoptic Quality.
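    A hedged sketch of a curvature-based penalty in the spirit of the bending loss: discrete turning angles along a closed contour are weighted more heavily when they exceed a threshold. The threshold and weights are illustrative, not the paper's exact definition.

```python
import numpy as np

def bending_penalty(contour, angle_thresh=0.3, high_weight=5.0):
    """Illustrative curvature-based penalty on a closed 2D contour (K x 2 array of
    points): large turning angles (sharp bends, typical of a contour squeezing between
    touching nuclei) are penalized more heavily than gentle ones."""
    prev_pts = np.roll(contour, 1, axis=0)
    next_pts = np.roll(contour, -1, axis=0)
    v1 = contour - prev_pts
    v2 = next_pts - contour
    # Turning angle at each vertex as a discrete curvature proxy.
    cos_angle = (v1 * v2).sum(axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-8)
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    weights = np.where(angle > angle_thresh, high_weight, 1.0)  # heavier penalty on sharp bends
    return float((weights * angle).mean())
```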
  • Synaptic Partner Assignment Using Attentional Voxel Association Networks

    00:13:58
    0 views
    Connectomics aims to recover a complete set of synaptic connections within a dataset imaged by volume electron microscopy. Many systems have been proposed for locating synapses, and recent research has included a way to identify the synaptic partners that communicate at a synaptic cleft. We reframe the problem of identifying synaptic partners as directly generating the mask of the synaptic partners from a given cleft. We train a convolutional network to perform this task. The network takes the local image context and a binary mask representing a single cleft as input. It is trained to produce two binary output masks: one which labels the voxels of the presynaptic partner within the input image, and another similar labeling for the postsynaptic partner. The cleft mask acts as an attentional gating signal for the network. We find that an implementation of this approach performs well on a dataset of mouse somatosensory cortex, and evaluate it as part of a combined system to predict both clefts and connections.
  • Estimating Reproducible Functional Networks Associated with Task Dynamics Using Unsupervised LSTMs

    00:14:21
    0 views
    We propose a method for estimating more reproducible functional networks that are more strongly associated with dynamic task activity by using recurrent neural networks with long short term memory (LSTMs). The LSTM model is trained in an unsupervised manner to learn to generate the functional magnetic resonance imaging (fMRI) time-series data in regions of interest. The learned functional networks can then be used for further analysis, e.g., correlation analysis to determine functional networks that are strongly associated with an fMRI task paradigm. We test our approach and compare to other methods for decomposing functional networks from fMRI activity on 2 related but separate datasets that employ a biological motion perception task. We demonstrate that the functional networks learned by the LSTM model are more strongly associated with the task activity and dynamics compared to other approaches. Furthermore, the patterns of network association are more closely replicated across subjects within the same dataset as well as across datasets. More reproducible functional networks are essential for better characterizing the neural correlates of a target task.
  • Recursive Segmentation Refinement Without Manual Annotations

    00:06:57
    0 views
    Most deep learning segmentation architectures require a large amount of training data, comprising thousands of manually annotated images. Despite being time-consuming to create, manual annotations are more accurate than algorithmic segmentations and therefore result in better training. Here we describe a strategy that utilizes iterative learning and ground truth refinement to improve segmentations without using manual annotations. Alternating Cyclic Immunofluorescent (Cyclic-IF) stains and averaging prediction masks at each iteration are implemented to reduce the propagation of errors through the models. Segmentation performed using this strategy reaches the accuracy of manual segmentation after a few iterations. Using this strategy can reduce the number of manual annotations needed to produce accurate segmentation masks.
  • Deep learning based phase imaging with uncertainty quantification

    00:25:49
    0 views
    Emerging deep learning based computational microscopy techniques promise novel imaging capabilities beyond traditional techniques. In this talk, I will discuss our recent efforts in building such techniques that provide improved scalability and reliability. I will demonstrate a physics guided deep learning imaging approach that enables designing highly efficient multiplexed data acquisition schemes and fully leverages the powerful deep learning-based inverse problem framework (Fig. 1a). We apply this approach to large space-bandwidth product phase microscopy using intensity-only measurements, implemented on a simple LED-array based computational microscopy platform [1] (Fig. 1b). The trained network is shown to be robust to sample variations and various experimental imperfections (Fig. 1c). I will discuss an uncertainty quantification framework to assess the reliability of the deep learning predictions. Quantifying the uncertainty provides per-pixel evaluation of the prediction's confidence level as well as the quality of the model and dataset. This uncertainty learning framework is widely applicable to build reliable deep learning-based biomedical imaging techniques.
  • Geometric Understanding of Convolutional Neural Networks

    00:26:43
    0 views
    Encoder-decoder networks using convolutional neural network (CNN) architectures have been used extensively in the deep learning literature thanks to their excellent performance for various inverse problems. Inspired by recent theoretical understanding of the generalizability, expressivity and optimization landscape of neural networks, as well as the theory of deep convolutional framelets, here we provide a unified theoretical framework that leads to a better understanding of the geometry of encoder-decoder CNNs. Our unified framework shows that the encoder-decoder CNN architecture is closely related to nonlinear frame representation using combinatorial convolution frames, whose expressivity increases exponentially with the depth. We also demonstrate the importance of skipped connections in terms of expressivity and optimization landscape. As an extension of this geometric understanding, we show that a novel attention scheme combined with bootstrapping and subnetwork aggregation improves network expressivity with minimal complexity increase. In particular, the attention module is shown to provide a redundant representation and an increased number of piecewise linear regions that improve the expressivity of the network. Thanks to the increased expressivity, the proposed network modification improves the reconstruction performance. As a proof of concept, we provide several modifications of the popular neural network baseline U-Net that is often used for image reconstruction. Experimental results show that the modified U-Net produces significantly better reconstruction results with negligible complexity increases.
  • Generalization Power of Deep Learning Approaches: Learning from the Output of Classical Image Processing Methods

    00:06:29
    0 views
    Due to the scarcity of large amounts of labeled samples, training a deep learning model is a very challenging task in computational pathology. In this paper, we show a solution for automatically generating labeled samples using classical image processing methods and explore the generalization power of a DL approach for region segmentation. A Recurrent Residual U-Net (R2U-Net) model is trained on the labeled samples generated by employing a Blue Ratio (BR) estimate along with an adaptive thresholding approach and verified by an expert pathologist. Testing of the system is performed on a set of completely new samples collected from a different Whole Slide Image (WSI). The R2U-Net shows significantly better performance than the BR with adaptive thresholding method alone, which demonstrates the generalizability and robustness of DL methods for segmentation tasks in computational pathology.
  • A Context Based Deep Learning Approach for Unbalanced Medical Image Segmentation

    00:13:40
    0 views
    Automated medical image segmentation is an important step in many medical procedures. Recently, deep learning networks have been widely used for various medical image segmentation tasks, with U-Net and generative adversarial nets (GANs) being some of the most commonly used ones. Foreground-background class imbalance is a common occurrence in medical images, and U-Net has difficulty handling class imbalance because of its cross entropy (CE) objective function. Similarly, GANs also suffer from class imbalance because the discriminator looks at the entire image to classify it as real or fake. Since the discriminator is essentially a deep learning classifier, it is incapable of correctly identifying minor changes in small structures. To address these issues, we propose a novel context-based CE loss function for U-Net, and a novel architecture, Seg-GLGAN. The context-based CE is a linear combination of the CE obtained over the entire image and over its region of interest (ROI). In Seg-GLGAN, we introduce a novel context discriminator to which the entire image and its ROI are fed as input, thus enforcing local context. We conduct extensive experiments using two challenging unbalanced datasets: PROMISE12 and ACDC. We observe that our methods yield better segmentation metrics than various baseline methods.
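    The context-based CE idea, a linear combination of cross-entropy over the whole image and over the ROI, can be sketched as follows; the mixing weight `alpha` and the boolean ROI-mask interface are assumptions.

```python
import torch
import torch.nn.functional as F

def context_ce_loss(logits, target, roi_mask, alpha=0.5):
    """Illustrative context-based cross-entropy: combine CE over the whole image with
    CE restricted to a region of interest, which upweights small foreground structures.
    logits: (N, C, H, W), target: (N, H, W) long, roi_mask: (N, H, W) bool."""
    ce_full = F.cross_entropy(logits, target)
    per_pixel = F.cross_entropy(logits, target, reduction="none")   # (N, H, W)
    ce_roi = per_pixel[roi_mask].mean() if roi_mask.any() else per_pixel.mean() * 0
    return alpha * ce_full + (1.0 - alpha) * ce_roi
```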
  • Learning with Less Data Via Weakly Labeled Patch Classification in Digital Pathology

    00:14:00
    0 views
    In Digital Pathology (DP), labeled data is generally very scarce due to the requirement that medical experts provide annotations. We address this issue by learning transferable features from weakly labeled data, which are collected from various parts of the body and are organized by non-medical experts. In this paper, we show that features learned from such weakly labeled datasets are indeed transferable and allow us to achieve highly competitive patch classification results on the colorectal cancer (CRC) dataset and the PatchCamelyon (PCam) dataset by using an order of magnitude less labeled data.
  • Open-Set OCT Image Recognition with Synthetic Learning

    00:11:38
    0 views
    Because new eye diseases are discovered every year, doctors may encounter rare or unknown diseases. Similarly, in the medical image recognition field, many practical medical classification tasks may encounter testing samples that belong to rare or unknown classes never observed or included in the training set, which is termed an open-set problem. As rare disease samples are difficult to obtain and include in the training set, it is reasonable to design an algorithm that recognizes both known and unknown diseases. Towards this end, this paper leverages a novel generative adversarial network (GAN) based synthetic learning approach for open-set retinal optical coherence tomography (OCT) image recognition. Specifically, we first train an auto-encoder GAN and a classifier to reconstruct and classify the observed images, respectively. Then a subspace-constrained synthesis loss is introduced to generate images that lie near the boundaries of the subspace of images corresponding to each observed disease, while these images cannot be classified by the pre-trained classifier. In other words, these synthesized images are categorized into an unknown class. In this way, we can generate images belonging to the unknown class and add them to the original dataset to retrain the classifier for unknown disease discovery.
  • An Alternating Projection-Image Domains Algorithm for Spectral CT

    00:14:03
    0 views
    Spectral computed tomography (spectral CT) is a medical and biomedical imaging technique which uses the spectral information of the attenuated X-ray beam. Energy-resolved photon-counting detectors are a promising technology for improved spectral CT imaging and make it possible to obtain material-selective images. Two different kinds of approaches address the spectral reconstruction problem, which consists of material decomposition and tomographic reconstruction: two-step methods, which are most often projection-based, and one-step methods. While projection-based methods are attractive for their fast computation time, they cannot easily incorporate spatial priors in the image domain, in contrast to one-step methods. We present a one-step method combining, in an alternating minimization scheme, a multi-material decomposition in the projection domain and a regularized tomographic reconstruction introducing the spatial priors in the image domain. We present and discuss promising results from experimental data.
  • Unsupervised Domain Adaptation for Cross-Device OCT Lesion Detection Via Learning Adaptive Features

    00:03:57
    0 views
    Optical coherence tomography (OCT) is widely used in computer-aided medical diagnosis of retinal pathologies. Deep convolutional networks have been successfully applied to detect lesions from OCT images. Different OCT imaging devices inevitably cause a shift in the data distribution between the training phase and the testing phase, which leads to a severe reduction in model performance. Most existing unsupervised domain adaptation methods focus on lesion segmentation; there are few studies on lesion detection tasks, especially for OCT images. In this paper, we propose a novel unsupervised domain adaptation framework that adaptively learns feature representations to achieve cross-device lesion detection for OCT images. Firstly, we design global and local adversarial discriminators to force the networks to learn device-independent features. Secondly, we incorporate a non-parametric adaptive feature norm into the global adversarial discriminator to stabilize the discrimination in the target domain. Finally, we perform validation experiments on a lesion detection task across two OCT devices. The results show that the proposed framework has promising performance compared with other unsupervised domain adaptation approaches.
  • A Generalizable Framework for Domain-Specific Nonrigid Registration: Application to Cardiac Ultrasound

    00:10:27
    0 views
    Many applications of nonrigid point set registration could benefit from a domain-specific model of allowed deformations. We observe that registration methods using mixture models optimize a differentiable log-likelihood function and are thus amenable to gradient-based optimization. In theory, this allows optimization of any transformations that are expressed as arbitrarily nested differentiable functions. In practice such optimization problems are readily handled with modern machine learning tools. We demonstrate, in experiments on synthetic data generated from a model of the left cardiac ventricle, that complex nested transformations can be robustly optimized using this approach. As a realistic application, we also use the method to propagate the model through an entire cardiac ultrasound sequence. We conclude that this approach, which works with both points and oriented points, provides an easily generalizable framework in which complex, application-specific transformation models may be constructed and optimized.
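    A minimal sketch of the general recipe, under the assumption of an affine transform standing in for an arbitrary nested differentiable transformation: maximize a Gaussian-mixture log-likelihood of the target points under the transformed source points with a stock gradient-based optimizer.

```python
import torch

def register_affine_sketch(source, target, sigma=0.1, n_steps=200, lr=0.01):
    """Illustrative gradient-based point-set registration: optimize an affine transform
    by maximizing an isotropic Gaussian-mixture log-likelihood of the target points
    (uniform mixture weights) under the transformed source points.
    source: (N, 3) tensor, target: (M, 3) tensor."""
    A = torch.eye(3, requires_grad=True)       # 3x3 linear part
    t = torch.zeros(3, requires_grad=True)     # translation
    opt = torch.optim.Adam([A, t], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        moved = source @ A.T + t                               # transformed source points
        d2 = torch.cdist(target, moved) ** 2                   # (M, N) squared distances
        # Mixture log-likelihood of each target point (constants dropped).
        log_lik = torch.logsumexp(-d2 / (2 * sigma**2), dim=1) - torch.log(
            torch.tensor(float(source.shape[0])))
        loss = -log_lik.mean()
        loss.backward()
        opt.step()
    return A.detach(), t.detach()
```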
  • Deep Learning and Unsupervised Fuzzy C-Means Based Level-Set Segmentation for Liver Tumor

    00:07:51
    1 view
    In this paper, we propose and validate a novel level-set method integrating an enhanced edge indicator and an automatically derived initial curve for CT-based liver tumor segmentation. First, a 2D U-Net is used to localize the liver, and a 3D fully convolutional network (FCN) is used to refine the liver segmentation as well as to localize the tumor. The refined liver segmentation is used to remove non-liver tissues for subsequent tumor segmentation. Given that the tumor segmentation obtained from the aforementioned 3D FCN is typically imperfect, we adopt a novel level-set method to further improve the tumor segmentation. Specifically, the probabilistic distribution of the liver tumor is estimated using fuzzy c-means clustering and then utilized to enhance the object indication function used in the level set. The proposed segmentation pipeline was found to have outstanding performance in terms of both liver and liver tumor segmentation.
  • Deconvolution for Improved Multifractal Characterization of Tissues in Ultrasound Imaging

    00:15:12
    0 views
    Several existing studies have shown the interest of estimating the multifractal properties of tissues in ultrasound (US) imaging. However, US images carry information not only about the tissues but also about the US scanner. Deconvolution methods are a common way to restore the tissue reflectivity function but, to our knowledge, their impact on the estimated fractal or multifractal behavior has not yet been studied. The objective of this paper is to investigate this influence through a dedicated simulation pipeline and an in vivo experiment.
  • Automatic Determination of the Fetal Cardiac Cycle in Ultrasound Using Spatio-Temporal Neural Networks

    00:10:26
    0 views
    The characterization of the fetal cardiac cycle is an important indicator of fetal health and stress. The anomalous appearance of different anatomical structures during different phases of the heart cycle is a key indicator of fetal congenital heart disease. However, locating the fetal heart in ultrasound is challenging, as the heart is small and indistinct. In this paper, we present a viewpoint-agnostic solution that automatically characterizes the cardiac cycle in clinical ultrasound scans of the fetal heart. When estimating the state of the cardiac cycle, our model achieves a mean squared error of 0.177 between the ground truth cardiac cycle and our prediction. We also show that our network is able to localize the heart, despite the lack of labels indicating the location of the heart during training.
  • Enriching Statistical Inferences on Brain Connectivity for Alzheimer's Disease Analysis Via Latent Space Graph Embedding

    00:11:26
    0 views
    We develop a graph node embedding deep neural network that leverages the statistical outcome measures and graph structure given in the data. The objective is to identify regions of interest (ROIs) in the brain that are affected by topological changes of brain connectivity due to specific neurodegenerative diseases by enriching statistical group analysis. We tackle this problem by learning a latent space where statistical inference can be made more effectively. Our experiments on a large-scale Alzheimer's Disease dataset show promising results, identifying ROIs with statistically significant group differences that separate even the early and late Mild Cognitive Impairment (MCI) groups, whose effect sizes are very subtle.
  • Progressive Abdominal Segmentation with Adaptively Hard Region Prediction and Feature Enhancement

    00:07:02
    0 views
    Abdominal multi-organ segmentation has received much attention in recent medical image analysis. In this paper, we propose a novel progressive framework to improve the segmentation accuracy of abdominal organs with various shapes and small sizes. The entire framework consists of three parts: 1) a Global Segmentation Module extracting the pixel-wise global feature representation; 2) a Localization Module adaptively discovering the top-n hard local regions, effective in both the training and testing phases; and 3) an Enhancement Module enhancing the features of hard local regions and aggregating them with the global features to refine the final representation. Specifically, we predefine 512 region proposals on the cross-sectional view of the CT image to generate coordinate pseudo labels that supervise the Localization Module. In the training phase, we calculate the segmentation error of each region proposal and select the eight with the lowest Dice scores as the hard regions. Once these hard regions are determined, their center coordinates are adopted as pseudo labels to train the Localization Network using a Manhattan distance loss. For inference, the entire model directly performs hard region localization and feature enhancement to improve pixel-wise accuracy. Without bells and whistles, extensive experimental results demonstrate that the proposed method outperforms its counterparts.
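    A hedged sketch of the hard-region mining and Manhattan-distance supervision described above; scoring each proposal by Dice and keeping the eight lowest follows the abstract, while the tensor shapes, proposal format and helper names are assumptions.

```python
import torch
import torch.nn.functional as F

def hard_region_targets(pred_mask, gt_mask, proposals, n_hard=8):
    """Illustrative hard-region mining: score each predefined region proposal by the
    Dice of the current prediction inside it and return the centers of the n_hard
    worst regions as pseudo labels. `proposals` is a (P, 4) tensor of (y0, x0, y1, x1)."""
    dices = []
    for y0, x0, y1, x1 in proposals.tolist():
        p = pred_mask[..., y0:y1, x0:x1].float()
        g = gt_mask[..., y0:y1, x0:x1].float()
        inter = (p * g).sum()
        dices.append((2 * inter + 1e-6) / (p.sum() + g.sum() + 1e-6))
    worst = torch.topk(torch.stack(dices), k=n_hard, largest=False).indices
    boxes = proposals[worst].float()
    centers = torch.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                           (boxes[:, 1] + boxes[:, 3]) / 2], dim=1)
    return centers                                     # (n_hard, 2) pseudo labels

def manhattan_loss(pred_centers, pseudo_centers):
    # L1 (Manhattan) distance between predicted and pseudo-label center coordinates.
    return F.l1_loss(pred_centers, pseudo_centers)
```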
  • Lung Nodule Malignancy Classification Based on NLSTx Data

    00:14:48
    0 views
    While several datasets containing CT images of lung nodules exist, they do not contain definitive diagnoses and often rely on radiologists' visual assessment for malignancy rating. This is in spite of the fact that lung cancer is one of the top three most frequently misdiagnosed diseases based on visual assessment. In this paper, we propose a dataset of difficult-to-diagnose lung nodules based on data from the National Lung Screening Trial (NLST), which we refer to as NLSTx. In NLSTx, each malignant nodule has a definitive ground truth label from biopsy. Herein, we also propose a novel deep convolutional neural network (CNN) / recurrent neural network framework that allows for use of pre-trained 2-D convolutional feature extractors, similar to those developed in the ImageNet challenge. Our results show that the proposed framework achieves comparable performance to an equivalent 3-D CNN while requiring half the number of parameters.
  • 3D Ultrasound Generation from Partial 2D Observations Using Fully Convolutional and Spatial Transformation Networks

    00:12:16
    0 views
    External beam radiation therapy (EBRT) is a therapeutic modality often used for the treatment of various types of cancer. EBRT's efficiency highly depends on accurate tracking of the target to be treated and therefore requires the use of real-time imaging modalities such as ultrasound (US) during treatment. While US is cost-effective and non-ionizing, 2D US is not well suited to track targets that move in 3D, and 3D US is challenging to integrate in real time due to insufficient temporal frequency. In this work, we present a 3D inference model based on fully convolutional networks combined with a spatial transformation network (STN) layer which, given a 2D US image and a baseline 3D US volume as inputs, can predict the deformation of the baseline volume to generate an up-to-date 3D US volume in real time. We train our model using 20 4D liver US sequences taken from the CLUST15 3D tracking challenge, testing the model on image tracking sequences. The proposed model achieves a normalized cross-correlation of 0.56 in an ablation study and a mean landmark location error of 2.92 ± 1.67 mm for target anatomy tracking. These promising results demonstrate the potential of generative STN models for predicting 3D motion fields during EBRT.
  • A New Spatially Adaptive TV Regularization for Digital Breast Tomosynthesis

    00:14:21
    0 views
    Digital breast tomosynthesis images provide volumetric morphological information of the breast, helping physicians detect malignant lesions. In this work, we propose a new spatially adaptive total variation (SATV) regularization function that adequately preserves the shape of small objects such as microcalcifications while ensuring a high-quality restoration of the background tissues. First, an original formulation of the weighted gradient field is introduced, which efficiently incorporates prior knowledge on the location of small objects. Then, we derive our SATV regularization and integrate it in a novel 3D reconstruction approach for DBT. Experimental results on both phantom and clinical data show the great interest of our method for recovering DBT volumes containing small lesions.
  • Segmentation and Classification of Melanoma and Nevus in Whole Slide Images

    00:16:08
    0 views
    The incidence of skin cancer, and specifically melanoma, has tripled since the 1990s in The Netherlands. Early detection of melanoma can lead to an almost 100% 5-year survival prognosis, which drops drastically when it is detected later. Studies show that pathologists can have a discordance of up to 14.3% when reporting melanoma versus nevi. An automated method could help support pathologists in diagnosing melanoma and prioritize cases based on a risk assessment. Our method used 563 whole slide images to train and test a system comprising two models that segment and classify skin sections as melanoma, nevus or negative for both. We used 232 slides for training and validation and the remaining 331 for testing. The first model uses a U-Net architecture to perform semantic segmentation, and its output feeds a convolutional neural network that classifies the WSI with a global label. Our method achieved a Dice score of 0.835 ± 0.08 on the segmentation of the validation set and a weighted F1-score of 0.954 on the independent test dataset. Out of the 176 melanoma slides, the algorithm classified 173 correctly. Out of the 62 nevus slides, the algorithm correctly classified 57.
  • Two-Layer Residual Sparsifying Transform Learning for Image Reconstruction

    00:12:50
    0 views
    Signal models based on sparsity, low-rank and other properties have been exploited for image reconstruction from limited and corrupted data in medical imaging and other computational imaging applications. In particular, sparsifying transform models have shown promise in various applications, and offer numerous advantages such as efficiencies in sparse coding and learning. This work investigates pre-learning a two-layer extension of the transform model for image reconstruction, wherein the transform domain or filtering residuals of the image are further sparsified in the second layer. The proposed block coordinate descent optimization algorithms involve highly efficient updates. Preliminary numerical experiments demonstrate the usefulness of a two-layer model over the previous related schemes for CT image reconstruction from low-dose measurements.
  • Prior-aware CNN with Multi-Task Learning for Colon Images Analysis

    00:13:43
    0 views
    Adenocarcinoma is the most common cancer, and its pathological diagnosis is of great significance. Specifically, the degree of gland differentiation is vital for defining the grade of adenocarcinoma. Following this domain knowledge, we encode glandular regions as prior information in a convolutional neural network (CNN), guiding the network's preference for glands during inference. In this work, we propose a prior-aware CNN framework with multi-task learning for pathological colon image analysis, which contains gland segmentation and grading classification branches simultaneously. The segmentation's probability map also acts as spatial attention for grading, emphasizing the glandular tissue and removing noise from irrelevant parts. Experiments reveal that the proposed framework achieves an accuracy of 97.04% and an AUC of 0.9971 on grading. Meanwhile, our model can predict gland regions with an mIoU of 0.8134. Importantly, it is based on the clinical-pathological diagnostic criteria of adenocarcinoma, which makes our model more interpretable.
  • A Myocardial T1-Mapping Framework with Recurrent and U-Net Convolutional Neural Networks

    00:15:39
    0 views
    Noise and aliasing artifacts arise in various accelerated cardiac magnetic resonance (CMR) imaging applications. In accelerated myocardial T1-mapping, the traditional three-parameter nonlinear regression may not provide accurate estimates due to its sensitivity to noise. A deep neural network-based framework is proposed to address this issue. The DeepT1 framework consists of recurrent and U-Net convolutional networks to produce a single output map from the noisy and incomplete measurements. The results show that DeepT1 provides noise-robust estimates compared to the traditional pixel-wise three-parameter fitting.
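    For reference, the conventional pixel-wise baseline that DeepT1 is compared against is a three-parameter fit; a common form (assumed here, e.g. for MOLLI-style inversion recovery) is S(t) = A - B·exp(-t/T1*) with the apparent-T1 correction T1 = T1*·(B/A - 1):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_t1_three_parameter(inversion_times, signal):
    """Illustrative pixel-wise three-parameter fit of a (polarity-corrected)
    inversion-recovery signal; a stand-in for the conventional baseline, not DeepT1.
    `inversion_times` and `signal` are 1D arrays (times in ms)."""
    def model(t, a, b, t1_star):
        return a - b * np.exp(-t / t1_star)

    p0 = (signal.max(), 2 * signal.max(), 1000.0)          # crude initial guess
    (a, b, t1_star), _ = curve_fit(model, inversion_times, signal, p0=p0, maxfev=5000)
    return t1_star * (b / a - 1.0)                          # apparent-T1 correction
```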
  • Wnet: An End-To-End Atlas-Guided and Boundary-Enhanced Network for Medical Image Segmentation

    00:11:09
    0 views
    Medical image segmentation is one of the most important pre-processing steps in computer-aided diagnosis, but it is a challenging task because of complex backgrounds and fuzzy boundaries. To tackle these issues, we propose a double U-shape-based architecture named WNet, which is capable of capturing exact positions as well as sharpening their boundaries. We first build an atlas-guided segmentation network (AGSN) to obtain a position-aware segmentation map by incorporating prior knowledge of human anatomy. We further devise a boundary-enhanced refinement network (BERN) to yield a clear boundary by hybridizing a Multi-scale Structural Similarity (MS-SSIM) loss function and making full use of refinement at training and inference in an end-to-end way. Experimental results show that the proposed WNet can accurately capture an organ with sharpened details and hence improves performance on two datasets compared to the previous state of the art. Index Terms: Probabilistic atlas, Atlas-guided segmentation network, Boundary-enhanced refinement network.
  • An Evaluation of Regularization Strategies for Subsampled Single-Shell Diffusion MRI

    00:10:34
    0 views
    Conventional single-shell diffusion MRI experiments acquire sampled values of the diffusion signal from the surface of a sphere in q-space. However, to reduce data acquisition time, there has been recent interest in using regularization to enable q-space undersampling. Although different regularization strategies have been proposed for this purpose (i.e., promoting sparsity of the spherical ridgelet representation and Laplace-Beltrami Tikhonov regularization), there has not been a systematic evaluation of the strengths, weaknesses, and potential synergies of the different regularizers. In this work, we use real diffusion MRI data to systematically evaluate the performance characteristics of these approaches and determine whether one is fundamentally more powerful than the other. Results from retrospective subsampling experiments suggest that both regularization strategies offer largely similar reconstruction performance (though with different levels of computational complexity) with some degree of synergy (albeit relatively minor).
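    The Laplace-Beltrami Tikhonov strategy can be illustrated as a regularized least-squares fit of spherical-harmonic coefficients, with each coefficient penalized by the squared Laplace-Beltrami eigenvalue l(l+1); the basis construction is left out and assumed given, and the sketch is generic rather than the paper's implementation.

```python
import numpy as np

def lb_regularized_fit(sh_basis, sh_orders, samples, lam=1e-3):
    """Illustrative Laplace-Beltrami-regularized least-squares fit of spherical-harmonic
    coefficients to single-shell q-space samples. `sh_basis` is (n_samples, n_coeffs),
    `sh_orders` gives the degree l of each basis column, `samples` is (n_samples,)."""
    l = np.asarray(sh_orders, dtype=float)
    lb_penalty = np.diag((l * (l + 1.0)) ** 2)            # squared LB eigenvalues
    normal = sh_basis.T @ sh_basis + lam * lb_penalty     # regularized normal equations
    return np.linalg.solve(normal, sh_basis.T @ samples)
```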
  • Mapping Cerebral Connectivity Changes after Mild Traumatic Brain Injury in Older Adults Using Diffusion Tensor Imaging and Riemannian Matching of Elastic Curves

    00:12:52
    0 views
    Although diffusion tensor imaging (DTI) can identify white matter (WM) changes due to mild traumatic brain injury (mTBI), the task of within-subject longitudinal matching of DTI streamlines remains challenging in this condition. Here we combine (A) automatic, atlas-informed labeling of WM streamline clusters with (B) streamline prototyping and (C) Riemannian matching of elastic curves to quantify within-subject changes in WM structural properties, focusing on the arcuate fasciculus. The approach is demonstrated in a group of geriatric mTBI patients imaged acutely and ~6 months post-injury. Results highlight the utility of differential geometry approaches for quantifying brain connectivity alterations due to mTBI.
