Showing 1 - 50 of 459
- IEEE Member: US $11.00
- Society Member: US $0.00
- IEEE Student Member: US $11.00
- Non-IEEE Member: US $15.00
Automated Quantitative Analysis of Microglia in Bright-Field Images of Zebrafish
Microglia are known to play important roles in brain development and homeostasis, yet their molecular regulation is still poorly understood. Identification of microglia regulators is facilitated by genetic screening and studying the phenotypic effects in animal models. Zebrafish are ideal for this, as their external development and transparency allow in vivo imaging by bright-field microscopy in the larval stage. However, manual analysis of the images is very labor intensive. Here we present a computational method to automate the analysis. It merges the optical sections into an all-in-focus image to simplify the subsequent steps of segmenting the brain region and detecting the contained microglia for quantification and downstream statistical testing. Evaluation on a fully annotated data set of 50 zebrafish larvae shows that the method performs close to the human expert.
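The focus-merging step can be illustrated with a per-pixel sharpness rule. The following is a minimal numpy sketch under assumed conventions (a `(z, h, w)` focal stack and a Laplacian sharpness proxy), not the authors' actual pipeline:

```python
import numpy as np

def all_in_focus(stack):
    """Merge a focal stack of optical sections (z, h, w) into a single
    all-in-focus image by keeping, per pixel, the slice with the highest
    local sharpness (squared response of a discrete Laplacian)."""
    lap = np.zeros_like(stack, dtype=float)
    # discrete 4-neighbour Laplacian via shifted views (interior pixels only)
    lap[:, 1:-1, 1:-1] = (stack[:, :-2, 1:-1] + stack[:, 2:, 1:-1]
                          + stack[:, 1:-1, :-2] + stack[:, 1:-1, 2:]
                          - 4.0 * stack[:, 1:-1, 1:-1])
    best = np.argmax(lap ** 2, axis=0)   # (h, w) index of the sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

The merged image then feeds the brain-region segmentation and microglia detection steps described above.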
Automatic Classification of Artery/Vein from Single Wavelength Fundus Images
Vessels are regions of prominent interest in retinal fundus images. Classification of vessels into arteries and veins can be used to assess the oxygen saturation level, which is one of the indicators for the risk of stroke, condition of diabetic retinopathy, and hypertension. In practice, dual-wavelength images are obtained to emphasize arteries and veins separately. In this paper, we propose an automated technique for the classification of arteries and veins from single-wavelength fundus images using convolutional neural networks employing the ResNet-50 backbone and squeeze-excite blocks. We formulate the artery-vein identification problem as a three-class classification problem where each pixel is labeled as belonging to an artery, vein, or the background. The proposed method is trained on publicly available fundus image datasets, namely RITE, LES-AV, IOSTAR, and cross-validated on the HRF dataset. The standard performance metrics, such as average sensitivity, specificity, accuracy, and area under the curve for the datasets mentioned above, are 92.8%, 93.4%, 93.4%, and 97.5%, respectively, which are superior to the state-of-the-art methods.
Spectral Characterization of Functional MRI Data on Voxel-Resolution Cortical Graphs
The human cortical layer exhibits a convoluted morphology that is unique to each individual. Conventional volumetric fMRI processing schemes overlook the rich information provided by the underlying anatomy. We present a method to study fMRI data on subject-specific cerebral hemisphere cortex (CHC) graphs, which encode the cortical morphology at the resolution of voxels. We study graph spectral energy metrics associated with fMRI data of 100 subjects from the Human Connectome Project database, across seven tasks. Experimental results signify the strength of CHC graphs' Laplacian eigenvector bases in capturing subtle spatial patterns specific to different functional loads as well as experimental conditions within each task.
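The spectral energy metric can be sketched as a graph Fourier transform. Below is a minimal illustration with an assumed toy adjacency matrix; the paper's CHC graphs are voxel-resolution, so the real eigendecomposition is far larger:

```python
import numpy as np

def spectral_energy(adj, signal):
    """Graph Fourier transform of a signal on a graph with adjacency
    matrix `adj`: project onto the eigenvectors of the combinatorial
    Laplacian L = D - A and return (eigenvalues, per-component energy)."""
    lap = np.diag(adj.sum(axis=1)) - adj
    evals, evecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    coeffs = evecs.T @ signal            # graph Fourier coefficients
    return evals, coeffs ** 2
```

A constant signal on a connected graph puts all of its energy in the zero-eigenvalue (DC) component, mirroring how low-frequency eigenvectors capture smooth spatial patterns.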
WNet: An End-to-End Atlas-Guided and Boundary-Enhanced Network for Medical Image Segmentation
Medical image segmentation is one of the most important pre-processing steps in computer-aided diagnosis, but it is a challenging task because of the complex background and fuzzy boundary. To tackle these issues, we propose a double U-shape-based architecture named WNet, which is capable of capturing exact organ positions as well as sharpening their boundaries. We first build an atlas-guided segmentation network (AGSN) to obtain a position-aware segmentation map by incorporating prior knowledge of human anatomy. We further devise a boundary-enhanced refinement network (BERN) to yield a clear boundary by hybridizing a Multi-Scale Structural Similarity (MS-SSIM) loss function and making full use of refinement at training and inference in an end-to-end way. Experimental results show that the proposed WNet can accurately capture an organ with sharpened details and hence improves performance on two datasets compared to previous state-of-the-art methods. Index Terms: probabilistic atlas, atlas-guided segmentation network, boundary-enhanced refinement network.
Assessment of Lung Biomechanics in COPD Using Image Registration
Lung biomechanical properties can be used to detect disease, assess abnormal lung function, and track disease progression. In this work, we used computed tomography (CT) imaging to measure three biomechanical properties in the lungs of subjects with varying degrees of chronic obstructive pulmonary disease (COPD): the Jacobian determinant (J), a measure of volumetric expansion or contraction; the anisotropic deformation index (ADI), a measure of the magnitude of anisotropic deformation; and the slab-rod index (SRI), a measure of the nature of anisotropy (i.e., whether the volume is deformed to a rod-like or slab-like shape). We analyzed CT data from 247 subjects collected as part of the Subpopulations and Intermediate Outcome Measures in COPD Study (SPIROMICS). The results show that the mean J and mean ADI decrease as disease severity increases, indicating less volumetric expansion and more isotropic expansion with increased disease. No differences in mean SRI were observed across the different levels of disease. The methods and analysis described in this study may provide new insights into our understanding of the biomechanical behavior of the lung and the changes that occur with COPD.
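The Jacobian determinant J can be illustrated on a toy 2-D displacement field. This sketch (hypothetical function name, finite-difference gradients) shows only the quantity being measured, not the image-registration pipeline that produces the deformation:

```python
import numpy as np

def jacobian_determinant(disp):
    """Pointwise Jacobian determinant of a 2-D deformation x -> x + u(x).
    disp has shape (h, w, 2) holding (u_x, u_y); J > 1 indicates local
    volumetric expansion, J < 1 contraction."""
    duy, dux = np.gradient(disp[..., 0])   # derivatives of u_x along (y, x)
    dvy, dvx = np.gradient(disp[..., 1])   # derivatives of u_y along (y, x)
    # Jacobian of the mapping is I + grad(u)
    return (1.0 + dux) * (1.0 + dvy) - duy * dvx
```

A uniform 10% dilation, for instance, yields J = 1.1 x 1.1 = 1.21 everywhere.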
A High-Powered Brain Age Prediction Model Based on Convolutional Neural Network
Predicting individual chronological age from neuroimaging data is promising and important for understanding the trajectory of normal brain development. In this work, we propose a new model to predict brain age ranging from 12 to 30 years, based on structural magnetic resonance imaging and a deep learning approach with reduced model complexity and computational cost. We found that this model predicts brain age accurately not only in the training set (N = 1721, mean absolute error of 1.89 in 10-fold cross-validation) but also in an independent validation set (N = 226, mean absolute error of 1.96), substantially outperforming previously published models. Given its considerable accuracy and generalizability, our model is a promising candidate for clinical deployment and for investigating the pathophysiology of neurodevelopmental disorders.
Multi-Echo Recovery with Field Inhomogeneity Compensation Using Structured Low-Rank Matrix Completion
Echo-planar imaging (EPI), which is the main workhorse of functional MRI, suffers from field inhomogeneity-induced geometric distortions. The amount of distortion is proportional to the readout duration, which restricts the maximum achievable spatial resolution. The spatially varying nature of the T2* decay makes it challenging for EPI schemes with a single echo time to obtain good sensitivity to functional activations in different brain regions. Despite the use of parallel MRI and multislice acceleration, the number of different echo times that can be acquired in a reasonable TR is limited. The main focus of this work is to introduce a rosette-based acquisition scheme and a structured low-rank reconstruction algorithm to overcome the above challenges. The proposed scheme exploits the exponential structure of the time series to recover distortion-free images from several echoes simultaneously.
Improved Functional MRI Activation Mapping in White Matter through Diffusion-Adapted Spatial Filtering
Brain activation mapping using functional MRI (fMRI) based on blood oxygenation level-dependent (BOLD) contrast has been conventionally focused on probing gray matter, the BOLD contrast in white matter having been generally disregarded. Recent results have provided evidence of the functional significance of the white matter BOLD signal, showing at the same time that its correlation structure is highly anisotropic, and related to the diffusion tensor in shape and orientation. This evidence suggests that conventional isotropic Gaussian filters are inadequate for denoising white matter fMRI data, since they are incapable of adapting to the complex anisotropic domain of white matter axonal connections. In this paper we explore a graph-based description of the white matter developed from diffusion MRI data, which is capable of encoding the anisotropy of the domain. Based on this representation we design localized spatial filters that adapt to white matter structure by leveraging graph signal processing principles. The performance of the proposed filtering technique is evaluated on semi-synthetic data, where it shows potential for greater sensitivity and specificity in white matter activation mapping, compared to isotropic filtering.
Deep Learning of Cortical Surface Features Using Graph-Convolution Predicts Neonatal Brain Age and Neurodevelopmental Outcome
We investigated the ability of a graph convolutional network (GCN) that takes into account the mesh topology as a sparse graph to predict brain age for preterm neonates using cortical surface morphometrics, i.e., cortical thickness and sulcal depth. Compared to machine learning and deep learning methods that did not use the surface topological information, the GCN better predicted the ages of preterm neonates with no or mild perinatal brain injuries (NMI). We then tested the GCN trained on NMI brains to predict the age of neonates with severe brain injuries (SI). Results also displayed good accuracy (MAE = 1.43 weeks), while the analysis of the interaction term (true age × group) showed that the slope of the predicted brain age relative to the true age was significantly less steep for the SI group than for the NMI group (p < 0.0001), indicating that SI can decelerate early postnatal growth. To understand regional contributions to age prediction, we applied GCNs separately to the vertices within each cortical parcellation. The middle cingulate cortex, known to be one of the thickest cortical regions in the neonatal period, showed the best accuracy in age prediction (MAE = 1.24 weeks). Furthermore, we found that the regional brain ages computed using GCN models in several frontal cortices correlated significantly with cognitive abilities at 3 years of age, and that the predicted brain age in part of the superior temporal cortex, the auditory and language processing locus, was related to language functional scores at 3 years. Our results demonstrate the potential of GCN models for predicting brain age as well as localizing brain regions contributing to the prediction of age and future cognitive outcome.
Linear Mixed Models Minimise False Positive Rate and Enhance Precision of Mass Univariate Vertex-Wise Analyses of Grey-Matter
We evaluated the statistical power, family-wise error rate (FWER), and precision of several competing methods that perform mass-univariate vertex-wise analyses of grey matter (thickness and surface area). In particular, we compared several generalised linear models (GLMs, the current state of the art) to linear mixed models (LMMs), which have proven superior in genomics. We used phenotypes simulated from real vertex-wise data and a large sample size (N = 8,662), which may soon become the norm in neuroimaging. No method ensured a FWER below 5% (at a vertex or cluster level) after applying Bonferroni correction for multiple testing. LMMs should be preferred to GLMs as they minimise the false positive rate and yield smaller clusters of associations. Associations on real phenotypes must be interpreted with caution, and replication may be warranted before concluding that an association exists.
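The FWER evaluation can be sketched by simulating null phenotypes against fixed vertex data and counting how often any vertex survives Bonferroni correction. This toy Monte-Carlo sketch uses assumed sizes and a Fisher-z correlation test rather than the paper's GLM/LMM machinery:

```python
import numpy as np
from statistics import NormalDist

def estimate_fwer(n_sub=200, n_vert=500, n_sim=200, alpha=0.05, seed=0):
    """Monte-Carlo estimate of the family-wise error rate of a
    mass-univariate vertex-wise analysis under the null, with Bonferroni
    correction: a family-wise error occurs whenever any vertex survives
    the corrected threshold for a phenotype that is pure noise."""
    rng = np.random.default_rng(seed)
    data = rng.standard_normal((n_sub, n_vert))
    data = (data - data.mean(0)) / data.std(0)
    # Bonferroni-corrected two-sided critical value
    z_crit = NormalDist().inv_cdf(1.0 - alpha / (2.0 * n_vert))
    errors = 0
    for _ in range(n_sim):
        pheno = rng.standard_normal(n_sub)
        pheno = (pheno - pheno.mean()) / pheno.std()
        r = data.T @ pheno / n_sub                 # vertex-wise correlations
        z = np.sqrt(n_sub - 3) * np.arctanh(r)     # Fisher z statistics
        errors += bool(np.any(np.abs(z) > z_crit))
    return errors / n_sim
```

Under a well-calibrated test the estimate should sit near the nominal alpha; the paper's point is that real grey-matter analyses can miss this target.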
Impact of 1D and 2D Visualisation on EEG-fMRI Neurofeedback Training During a Motor Imagery Task
Bi-modal EEG-fMRI neurofeedback (NF) is a new technique of great interest. First, it can improve the quality of NF training by combining different sources of real-time information (haemodynamic and electrophysiological) from the participant's brain activity; second, it has the potential to improve understanding of the link and synergy between the two modalities (EEG-fMRI). However, there are different ways to display NF scores to the participant during bi-modal NF sessions. To improve data fusion methodologies, we investigate the impact of a 1D or 2D representation when visual feedback is given during a motor imagery task. Results show a better synergy between EEG and fMRI when a 2D display is used. Subjects achieve better fMRI scores when the 1D display is used for bi-modal EEG-fMRI NF sessions; on the other hand, they regulate EEG more specifically when the 2D metaphor is used.
Interpreting Age Effects of Human Fetal Brain From Spontaneous fMRI Using Deep 3D Convolutional Neural Networks
Understanding human fetal neurodevelopment is of great clinical importance, as abnormal development is linked to adverse neuropsychiatric outcomes after birth. With the advances in functional Magnetic Resonance Imaging (fMRI), recent studies focus on brain functional connectivity and have provided new insight into development of the human brain before birth. Deep Convolutional Neural Networks (CNNs) have achieved remarkable success in learning directly from image data, yet have not been applied to fetal fMRI for understanding fetal neurodevelopment. Here, we bridge this gap with a novel application of 3D CNNs to fetal blood oxygen-level dependent (BOLD) resting-state fMRI data. We build a supervised CNN to isolate variation in fMRI signals that distinguishes younger vs. older fetal age groups. Sensitivity analysis is then performed to identify brain regions in which changes in BOLD signal are strongly associated with fetal brain age. Based on the analysis, we discovered that the regions that most strongly differentiate the groups are largely bilateral, share a similar distribution in older and younger age groups, and are areas of heightened metabolic activity in early human development.
Zero-Shot Medical Image Artifact Reduction
Medical images may contain various types of artifacts with different patterns and mixtures, which depend on many factors such as scan settings, machine condition, patient characteristics, and the surrounding environment. However, existing deep learning based artifact reduction methods are restricted to the specific, predetermined artifact types and patterns in their training sets. As such, they have limited clinical adoption. In this paper, we introduce a "Zero-Shot" medical image Artifact Reduction (ZSAR) framework, which leverages the power of deep learning without using general pre-trained networks or any clean image reference. Specifically, we utilize the low internal visual entropy of an image and train a lightweight, image-specific artifact reduction network to reduce artifacts in an image at test time. We use Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) as vehicles to show that ZSAR can reduce artifacts better than the state-of-the-art both qualitatively and quantitatively, while requiring shorter test time. To the best of our knowledge, this is the first deep learning framework that reduces artifacts in medical images without an a priori training set.
Diagnosing Colorectal Polyps in the Wild with Capsule Networks
Colorectal cancer, largely arising from precursor lesions called polyps, remains one of the leading causes of cancer-related death worldwide. Current clinical standards require the resection and histopathological analysis of polyps because the accuracy and sensitivity of optical biopsy methods fall substantially below recommended levels. In this study, we design a novel capsule network architecture (D-Caps) to improve the viability of optical biopsy of colorectal polyps. Our proposed method introduces several technical novelties, including a capsule-average pooling (CAP) method that improves efficiency in large-scale image classification. We demonstrate improved results over the previous state-of-the-art convolutional neural network (CNN) approach by as much as 43%. This work provides an important benchmark on the new Mayo Polyp dataset, a significantly more challenging and larger dataset than previous polyp studies, with results stratified across all available categories, imaging devices and modalities, and focus modes, to promote future work on AI-driven colorectal cancer screening systems.
Enumeration of Ampicillin-Resistant E. coli in Blood Using Droplet Microfluidics and High-Speed Image Processing
Bacteria entering the bloodstream cause bloodstream infection (BSI). Without proper treatment, BSI can lead to sepsis, a life-threatening condition. Detection of bacteria in blood at the early stages of BSI can effectively prevent the development of sepsis. Using microfluidic droplets for single-bacterium encapsulation provides single-digit bacterial detection sensitivity. In this study, samples of ampicillin-resistant E. coli in human blood were partitioned into millions of 30 µm diameter microfluidic droplets and then cultured for 8 hours. Thousands of fluorescent bacteria from a single colony filled the positive droplets after the culturing process. A circle-detection program based on the Hough transform was developed to count the number of positive droplets in fluorescence images. Processing one image can take as little as 0.5 ms when the original image is pre-processed and binarized by the developed software.
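The circle-detection idea can be sketched as a Hough transform with a known radius: each edge pixel votes for candidate centres one radius away. A minimal numpy illustration follows; the angle count and vote threshold are assumptions, not parameters of the authors' software:

```python
import numpy as np

def hough_circles(edges, radius, n_angles=64, vote_frac=0.25):
    """Minimal circular Hough transform for droplets of a known radius.
    Every edge pixel votes for candidate centres one radius away along
    n_angles directions; accumulator cells collecting more than
    vote_frac * n_angles votes are reported as droplet centres."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=int)
    ys, xs = np.nonzero(edges)
    for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        cy = np.round(ys - radius * np.sin(t)).astype(int)
        cx = np.round(xs - radius * np.cos(t)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate votes
    return np.argwhere(acc > vote_frac * n_angles)
```

Counting the returned peaks gives the number of positive droplets; production implementations typically also merge adjacent accumulator peaks into one detection.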
Adversarial-Based Domain Adaptation Networks for Unsupervised Tumour Detection in Histopathology
Developing effective deep learning models for histopathology applications is challenging, as the performance depends on large amounts of labelled training data, which is often unavailable. In this work, we address this issue by leveraging previously annotated histopathology images from unrelated source domains to build a model for the unlabelled target domain. Specifically, we propose the adversarial-based domain adaptation networks (ABDA-Net) for performing the tumour detection task in histopathology in a purely unsupervised manner. This methodology successfully promoted the alignment of the source and target feature distributions among independent datasets of three tumour types - Breast, Lung and Colon - to achieve an improvement of at least 17.51% in accuracy and 18.22% in area under the curve (AUC) when compared to a classifier trained on the source data only.
Automatic Quantification of Pulmonary Fissure Integrity: A Repeatability Analysis
The pulmonary fissures divide the lungs into lobes and can vary widely in shape, appearance, and completeness. Fissure completeness, or integrity, has been studied to assess relationships with airway function measurements, chronic obstructive pulmonary disease (COPD) progression, and collateral ventilation between lobes. Fissure integrity measured from computed tomography (CT) images is already used as a non-invasive method to screen emphysema patients for endobronchial valve treatment, as the procedure is not effective when collateral ventilation is present. We describe a method for automatically computing fissure integrity from lung CT images. Our method is tested using 60 subjects from a COPD study. We examine the repeatability of fissure integrity measurements across inspiration and expiration images, assess changes in fissure integrity over time using a longitudinal dataset, and explore fissure integrity's relationship with COPD severity.
Classification of Lung Nodules in CT Volumes Using the Lung-RADS™ Guidelines with Uncertainty Parameterization
Lung cancer is currently the most lethal cancer in the world. In order to make screening and follow-up more systematic, guidelines have been proposed. This study therefore aimed to create a diagnostic support approach that provides a patient label based on the Lung-RADS™ guidelines. The only input required by the system is the nodule centroid, used to extract the region of interest for the classification system. With this in mind, two deep learning networks were evaluated: a Wide Residual Network and a DenseNet. To take annotation uncertainty into account, we propose sample weights introduced in the loss function, allowing nodules with high agreement in the annotation process to have a greater impact on the training error than those with low agreement. The best result was achieved by the Wide Residual Network with sample weights, achieving a nodule-wise Lung-RADS™ labelling accuracy of 0.735 ± 0.003.
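The agreement-based sample weighting can be illustrated with a weighted cross-entropy. This is a toy sketch with an assumed normalisation, not the exact loss used in the paper:

```python
import numpy as np

def agreement_weighted_loss(probs, labels, agreement):
    """Cross-entropy where each nodule's contribution is scaled by an
    annotator-agreement weight, so high-agreement nodules dominate the
    training error. probs: (n, k) class probabilities; labels: (n,) ints;
    agreement: (n,) non-negative weights (normalised to sum to 1 here)."""
    w = np.asarray(agreement, dtype=float)
    w = w / w.sum()
    # negative log-likelihood of the true class, per sample
    nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return float(np.sum(w * nll))
```

Setting a nodule's weight to zero removes it from the training signal entirely; intermediate weights interpolate between that and full influence.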
A Physics-Motivated DNN for X-Ray CT Scatter Correction
The scattering of photons by the imaged object in X-ray computed tomography (CT) produces degradations of the reconstructions in the form of streaks, cupping, shading artifacts and decreased contrast. We describe a new physics-motivated deep-learning-based method to estimate scatter and correct for it in the acquired projection measurements. The method incorporates both an initial reconstruction and the scatter-corrupted measurements using a specific deep neural network architecture and a cost function tailored to the problem. Numerical experiments show significant improvement over a recent projection-based deep neural network method.
An Evaluation of Regularization Strategies for Subsampled Single-Shell Diffusion MRI
Conventional single-shell diffusion MRI experiments acquire sampled values of the diffusion signal from the surface of a sphere in q-space. However, to reduce data acquisition time, there has been recent interest in using regularization to enable q-space undersampling. Although different regularization strategies have been proposed for this purpose (i.e., sparsity promotion in the spherical ridgelet representation and Laplace-Beltrami Tikhonov regularization), there has not been a systematic evaluation of the strengths, weaknesses, and potential synergies of the different regularizers. In this work, we use real diffusion MRI data to systematically evaluate the performance characteristics of these approaches and determine whether one is fundamentally more powerful than the other. Results from retrospective subsampling experiments suggest that both regularization strategies offer largely similar reconstruction performance (though with different levels of computational complexity) with some degree of synergy (albeit relatively minor).
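Tikhonov regularization of the kind compared here has a generic closed form: minimise ||Ax - b||² + λ||Γx||² by solving (AᵀA + λΓᵀΓ)x = Aᵀb. In the sketch below, `gamma` stands in for the Laplace-Beltrami eigenvalues of a spherical basis (an assumption for illustration):

```python
import numpy as np

def tikhonov_solve(A, b, gamma, lam):
    """Tikhonov-regularised least squares with a diagonal smoothness
    penalty: minimise ||A x - b||^2 + lam * ||diag(gamma) x||^2.
    For a spherical-harmonic representation, gamma would hold the
    Laplace-Beltrami eigenvalues l(l+1) of each basis function."""
    G = np.diag(np.asarray(gamma, dtype=float) ** 2)
    return np.linalg.solve(A.T @ A + lam * G, A.T @ b)
```

With lam = 0 this reduces to ordinary least squares; increasing lam shrinks the high-frequency coefficients that the penalty weights most heavily.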
Airway Segmentation in Speech MRI Using the U-Net Architecture
We develop a fully automated airway segmentation method to segment the vocal tract airway from surrounding soft tissue in speech MRI. We train a U-net architecture to learn the end-to-end mapping between a mid-sagittal image (at the input) and the manually segmented airway (at the output). We base our training on the open-source University of Southern California (USC) speech morphology MRI database, consisting of speakers producing a variety of sustained vowel and consonant sounds. Once trained, our model performs fast airway segmentation on unseen images, on the order of 210 ms/slice on a modern CPU with 12 cores. Using manual segmentation as a reference, we evaluate the performance of the proposed U-net airway segmentation against an existing seed-growing segmentation and against manual segmentation from a different user. We demonstrate improved Dice similarity with the U-net compared to seed-growing, and minor differences in Dice similarity between the U-net and manual segmentation from the second user.
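The Dice similarity used for evaluation is straightforward to compute on binary masks; a minimal sketch:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 for identical masks, 0.0 for disjoint."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    denom = seg.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * (seg & ref).sum() / denom
```

The empty-mask convention (returning 1.0 when both masks are empty) is one common choice; evaluation scripts vary on this point.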
SU-Net and DU-Net Fusion for Tumour Segmentation in Histopathology Images
In this work, a fusion framework is proposed for automatic cancer detection and segmentation in whole-slide histopathology images. The framework includes two types of fusion: multi-scale fusion and sub-dataset fusion. Because histopathological images of a particular cancer type often demonstrate large morphological variance, the performance of an individually trained network is usually limited. We develop a fusion model that integrates two types of U-net structures, a Shallow U-net (SU-net) and a Deep U-net (DU-net), trained with a variety of re-scaled images and different subsets of images, and finally ensembles a unified output. Smoothing and noise elimination are conducted using convolutional Conditional Random Fields (CRFs). The proposed model is validated on the Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology (ACDC@LungHP) challenge at ISBI 2019 and the Digestive-System Pathological Detection and Segmentation Challenge 2019 (DigestPath 2019) at MICCAI 2019. Our method achieves a dice coefficient of 0.7968 on ACDC@LungHP and 0.773 on DigestPath 2019, ranking third on the ACDC@LungHP leaderboard.
Building an Ex Vivo Atlas of the Earliest Brain Regions Affected by Alzheimer's Disease Pathology
The earliest neuropathological changes in Alzheimer's Disease (AD) emerge in the medial temporal lobe (MTL). In order for MRI biomarkers to detect changes linked specifically to AD pathology (as opposed to aging or other pathological factors), macroscopic patterns of structural change in the MTL must be linked to the underlying neuropathology. To provide such a linkage, we are conducting an autopsy imaging study combining ex vivo MRI and serial histopathology. Information from multiple subjects can be studied by creating a "population average" atlas of the MTL. We present a groupwise registration approach for constructing the atlas that successfully captures the complex structure of the MTL and the anatomical variability across subjects. This atlas allows us to generate maps of cortical thickness measurements and identify regions in the MTL where structural changes correlate most strongly with AD progression. We show that, using this atlas, we are able to find a significant correlation between atrophy and AD pathology in the MTL sub-regions associated with the earliest stages of AD pathology as described by Braak and Braak.
Transformation Elastography: Converting Anisotropy to Isotropy
Elastography refers to mapping mechanical properties in a material by measuring wave motion in it using noninvasive optical, acoustic, or magnetic resonance imaging methods. For example, increased stiffness will increase wavelength. Stiffness and viscosity can depend on both location and direction. A material with aligned fibers or layers may have different stiffness and viscosity values along the fibers or layers versus across them. Converting wave measurements into a mechanical property map or image is known as reconstruction. To make the reconstruction problem analytically tractable, isotropy and homogeneity are often assumed, and the effects of finite boundaries are ignored. But infinite isotropic homogeneity does not hold in most cases of interest, where pathological conditions, material faults, or hidden anomalies are non-uniformly distributed in fibrous or layered structures of finite dimension. Introducing anisotropy, inhomogeneity, and finite boundaries complicates the analysis, forcing the abandonment of analytically driven strategies in favor of numerical approximations that may be computationally expensive and yield less physical insight. We propose a new strategy, Transformation Elastography (TE), which applies a spatial distortion to turn an anisotropic problem into an isotropic one. The fundamental underpinnings of TE have been proven in forward simulation problems. In the present paper, a TE approach to inversion and reconstruction is introduced and validated on numerical finite element simulations.
Free-Breathing Cardiovascular MRI Using a Plug-And-Play Method with Learned Denoiser
Cardiac magnetic resonance imaging (CMR) is a noninvasive imaging modality that provides a comprehensive evaluation of the cardiovascular system. The clinical utility of CMR is hampered by long acquisition times, however. In this work, we propose and validate a plug-and-play (PnP) method for CMR reconstruction from undersampled multi-coil data. To fully exploit the rich image structure inherent in CMR, we pair the PnP framework with a deep learning (DL)-based denoiser that is trained using spatiotemporal patches from high-quality, breath-held cardiac cine images. The resulting "PnP-DL" method iterates over data consistency and denoising subroutines. We compare the reconstruction performance of PnP-DL to that of compressed sensing (CS) using eight breath-held and ten real-time (RT) free-breathing cardiac cine datasets. We find that, for breath-held datasets, PnP-DL offers more than one dB advantage over commonly used CS methods. For RT free-breathing datasets, where ground truth is not available, PnP-DL receives higher scores in qualitative evaluation. The results highlight the potential of PnP-DL to accelerate RT CMR.
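The PnP iteration alternates data consistency with a denoiser. Below is a toy 1-D sketch in which a masked sampling operator and a moving-average denoiser stand in for the multi-coil forward model and the learned CNN denoiser (both assumptions for illustration):

```python
import numpy as np

def pnp_reconstruct(y, mask, denoise, n_iter=50, step=1.0):
    """Plug-and-play recovery from undersampled measurements y = mask * x:
    alternate a gradient step on the data-consistency term
    ||mask * x - y||^2 with a plug-in denoiser acting as the prior."""
    x = y.copy()
    for _ in range(n_iter):
        x = x - step * mask * (mask * x - y)   # data-consistency step
        x = denoise(x)                         # denoising (prior) step
    return x
```

In the paper, the denoiser is a network trained on spatiotemporal patches of breath-held cine images; the same iteration structure applies unchanged.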
3D Biological Cell Reconstruction with Multi-View Geometry
3D cell modeling is an important tool for visualizing cellular structures and events, and generating accurate data for further quantitative geometric morphological analyses on cellular structures. Current methods involve highly specialized and expensive setups as well as experts in microscopy and 3D reconstruction to produce time- and work-intensive insight into cellular events. We developed a new system that reconstructs the surface geometry of 3D cellular structures from 2D image sequences in a fast and automatic way. The system rotated cells in a microfluidic device, while their images were captured by a video camera. The multi-view geometry theory was introduced to microscopy imaging to model the imaging system and define the 3D reconstruction as an inverse problem. Finally, we successfully demonstrated the reconstruction of cellular structures in their natural state.
Self-Supervised Physics-Based Deep Learning MRI Reconstruction without Fully-Sampled Data
Deep learning (DL) has emerged as a tool for improving accelerated MRI reconstruction. A common strategy among DL methods is the physics-based approach, where a regularized iterative algorithm alternating between data consistency and a regularizer is unrolled for a finite number of iterations. This unrolled network is then trained end-to-end in a supervised manner, using fully-sampled data as ground truth for the network output. However, in a number of scenarios, it is difficult to obtain fully-sampled datasets, due to physiological constraints such as organ motion or physical constraints such as signal decay. In this work, we tackle this issue and propose a self-supervised learning strategy that enables physics-based DL reconstruction without fully-sampled data. Our approach is to divide the acquired sub-sampled points for each scan into two sets, one of which is used to enforce data consistency in the unrolled network and the other to define the loss for training. Results show that the proposed self-supervised learning method successfully reconstructs images without fully-sampled data, performing similarly to the supervised approach that is trained with fully-sampled references. This has implications for physics-based inverse problem approaches for other settings, where fully-sampled data is not available or possible to acquire.
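The core data-splitting idea above can be sketched directly; the function name, the 40% loss fraction, and the uniformly random partition are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def split_acquired_samples(sampling_mask, loss_fraction=0.4, seed=0):
    """Partition the acquired k-space locations of one scan into two
    disjoint sets: one enforcing data consistency in the unrolled network,
    the other held out to define the training loss."""
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(sampling_mask)
    rng.shuffle(acquired)
    n_loss = int(len(acquired) * loss_fraction)
    loss_idx, dc_idx = acquired[:n_loss], acquired[n_loss:]
    dc_mask = np.zeros_like(sampling_mask)
    loss_mask = np.zeros_like(sampling_mask)
    dc_mask.flat[dc_idx] = 1
    loss_mask.flat[loss_idx] = 1
    return dc_mask, loss_mask

mask = (np.random.default_rng(1).random((16, 16)) < 0.3).astype(int)
dc_mask, loss_mask = split_acquired_samples(mask)
```

Because the loss is computed only on acquired-but-held-out samples, no fully-sampled reference is ever needed.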
Automatic Brain Organ Segmentation with 3D Fully Convolutional Neural Network for Radiation Therapy Treatment Planning
3D organ contouring is an essential step in radiation therapy treatment planning, both for organ dose estimation and for optimizing plans to reduce doses to organs at risk. Manual contouring is time-consuming, and its inter-clinician variability adversely affects outcome studies. These organs also vary dramatically in size --- up to two orders of magnitude difference in volume. In this paper, we present BrainSegNet, a novel 3D fully convolutional neural network (FCNN) based approach for the automatic segmentation of brain organs. BrainSegNet takes a multi-resolution-path approach and uses a weighted loss function to address the major challenge of large variability in organ sizes. We evaluated our approach on a dataset of 46 brain CT image volumes with corresponding expert organ contours as reference. Compared with LiviaNet and V-Net, BrainSegNet has superior performance in segmenting tiny or thin organs, such as the chiasm, optic nerves, and cochlea, and outperforms these methods in segmenting large organs as well. BrainSegNet can reduce the manual contouring time for a volume from an hour to less than two minutes, and holds high potential to improve the efficiency of the radiation therapy workflow.
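One common way to weight a loss against large organ-size variability is inverse-volume class weighting, so tiny structures are not drowned out by large ones. The sketch below illustrates that principle only; BrainSegNet's actual weighting scheme may differ, and all names here are made up:

```python
import numpy as np

def inverse_volume_weights(labels, n_classes):
    """Per-class weights inversely proportional to organ volume."""
    counts = np.array([np.sum(labels == c) for c in range(n_classes)], float)
    w = 1.0 / np.maximum(counts, 1.0)    # rare classes get large weights
    return w / w.sum()                   # normalize weights to sum to 1

def weighted_cross_entropy(probs, labels, weights):
    """probs: (n_voxels, n_classes); labels: (n_voxels,) int class ids."""
    eps = 1e-12
    nll = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(weights[labels] * nll))

# A tiny organ (2 voxels) next to a large one (98 voxels):
labels = np.array([0] * 98 + [1] * 2)
w = inverse_volume_weights(labels, n_classes=2)
probs = np.full((100, 2), 0.1)
probs[np.arange(100), labels] = 0.9
loss = weighted_cross_entropy(probs, labels, w)
```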
Functional Multi-Connectivity: A Novel Approach to Assess Multi-Way Entanglement between Networks and Voxels
The interactions among brain entities, commonly computed through pair-wise functional connectivity, are assumed to be manifestations of information processing which drive function. However, this focus on large-scale networks and their pair-wise temporal interactions is likely missing important information contained within fMRI data. We propose leveraging multi-connected features at both the voxel and network level to capture "multi-way entanglement" between networks and voxels, providing improved resolution of the interconnected brain functional hierarchy. Entanglement here refers to each brain network being heavily enmeshed with the activity of other networks. Under our multi-connectivity assumption, elements of a system simultaneously communicate and interact with each other through multiple pathways. As such, we move beyond the typical pair-wise temporal partial or full correlation. We propose a framework to estimate functional multi-connectivity (FMC) by computing the relationship between system-wide connections of intrinsic connectivity networks (ICNs). Results show that FMC obtains information that differs from that of standard pair-wise analyses.
Breast Lesion Segmentation in Ultrasound Images with Limited Annotated Data
Ultrasound (US) is one of the most commonly used imaging modalities in both diagnosis and surgical interventions due to its low cost, safety, and non-invasive characteristics. US image segmentation remains a unique challenge because of the presence of speckle noise. As manual segmentation requires considerable effort and time, the development of automatic segmentation algorithms has attracted researchers' attention. Although recent methodologies based on convolutional neural networks have shown promising performance, their success relies on the availability of a large amount of training data, which is prohibitively difficult to obtain for many applications. Therefore, in this study we propose the use of simulated US images and natural images as auxiliary datasets to pre-train our segmentation network, which is then fine-tuned with limited in vivo data. We show that with as few as 19 in vivo images, fine-tuning the pre-trained network improves the Dice score by 21% compared to training from scratch. We also demonstrate that if the same number of natural and simulated US images is available, pre-training on simulation data is preferable.
CNN Detection of New and Enlarging Multiple Sclerosis Lesions from Longitudinal MRI Using Subtraction Images
Accurate detection and segmentation of new lesional activity in longitudinal Magnetic Resonance Images (MRIs) of patients with Multiple Sclerosis (MS) is important for monitoring disease activity and for assessing treatment effects. In this work, we present the first deep learning framework to automatically detect and segment new and enlarging (NE) T2w lesions from longitudinal brain MRIs acquired from relapsing-remitting MS (RRMS) patients. The proposed framework is an adapted 3D U-Net [1] whose inputs include the reference multi-modal MRI and T2-weighted lesion maps, as well as an attention mechanism based on the subtraction MRI (between the two timepoints), which helps the network learn to differentiate between real anatomical change and artifactual change while constraining the search space for small lesions. Experiments on a large, proprietary, multi-center, multi-modal clinical trial dataset consisting of 1677 multi-modal scans show that the network achieves high overall detection accuracy (detection AUC = 0.95), outperforming (1) a U-Net without an attention mechanism (detection AUC = 0.93), (2) a framework based on subtracting independent T2-weighted segmentations (detection AUC = 0.57), and (3) DeepMedic (detection AUC = 0.84), particularly for small lesions. In addition, the method was able to accurately classify patients as active/inactive (sensitivity 0.69, specificity 0.97).
Looking in the Right Place for Anomalies: Explainable AI through Automatic Location Learning
Deep learning has now become the de facto approach to the recognition of anomalies in medical imaging. Its 'black box' way of classifying medical images into anomaly labels poses problems for acceptance, particularly by clinicians. Current explainable AI methods offer justifications through visualizations such as heat maps, but cannot guarantee that the network is focusing on the relevant image region that fully contains the anomaly. In this paper, we develop an approach to explainable AI in which the anomaly is assured to overlap the expected location when present. This is made possible by automatically extracting location-specific labels from textual reports and learning the association of expected locations to labels using a hybrid combination of a Bi-Directional Long Short-Term Memory Recurrent Neural Network (Bi-LSTM) and DenseNet-121. Using this expected location to bias the subsequent attention-guided inference network, based on ResNet101, results in the isolation of the anomaly at the expected location when present. The method is evaluated on a large chest X-ray dataset.
Software Tool to Read, Represent, Manipulate, and Apply N-Dimensional Spatial Transforms
Spatial transforms formalize mappings between coordinates of objects in biomedical images. Transforms typically are the outcome of image registration methodologies, which estimate the alignment between two images. Image registration is a prominent task present in nearly all standard image processing and analysis pipelines. The proliferation of software implementations of image registration methodologies has resulted in a spread of data structures and file formats used to preserve and communicate transforms. This segregation of formats precludes the compatibility between tools and endangers the reproducibility of results. We propose a software tool capable of converting between formats and resampling images to apply transforms generated by the most popular neuroimaging packages and libraries (AFNI, FSL, FreeSurfer, ITK, and SPM). The proposed software is subject to continuous integration tests to check the compatibility with each supported tool after every change to the code base (https://github.com/poldracklab/nitransforms). Compatibility between software tools and imaging formats is a necessary bridge to ensure the reproducibility of results and enable the optimization and evaluation of current image processing and analysis workflows.
Learning a Self-Inverse Network for Bidirectional MRI Image Synthesis
A one-to-one mapping is necessary for MRI image synthesis, as MRI images are unique to the patient. State-of-the-art approaches for image synthesis from domain X to domain Y learn a convolutional neural network that meticulously maps between the domains. A separate network is typically implemented to map along the opposite direction, from Y to X. In this paper, we explore the possibility of using only one network for bidirectional image synthesis; in other words, a network that implements a self-inverse function. A self-inverse network offers several distinct advantages: only one network instead of two, better generalization, and a more restricted parameter space. Most importantly, a self-inverse function guarantees a one-to-one mapping, a property that cannot be guaranteed by earlier approaches that are not self-inverse. Experiments on MRI T1 and T2 images show that, compared with baseline approaches that use two separate models for synthesis along the two directions, our self-inverse network achieves better synthesis results in terms of standard metrics. Finally, our sensitivity analysis confirms the feasibility of learning a one-to-one mapping function for MRI image synthesis.
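The self-inverse property means applying the same network twice should return the input: f(f(x)) = x. A toy sketch of the idea, with a simple analytic involution standing in for the CNN (everything here is illustrative; the paper's network and objective are more elaborate):

```python
import numpy as np

def f(x, mean=0.5):
    """A toy involution (reflection about `mean`): f(f(x)) == x exactly.
    In the paper, f would be a single CNN mapping T1->T2 and T2->T1."""
    return 2 * mean - x

def self_inverse_penalty(x):
    """Cycle penalty ||f(f(x)) - x||^2 that training would drive to zero."""
    return float(np.mean((f(f(x)) - x) ** 2))

x = np.random.default_rng(0).random((8, 8))
penalty = self_inverse_penalty(x)
```

For a true involution the penalty is exactly zero; a learned network only approximates this, which is what the self-inverse training objective enforces.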
Experimentally-Generated Ground Truth for Detecting Cell Types in an Image-Based Immunotherapy Screen
Chimeric antigen receptor (CAR) T-cell therapy is an immunotherapy in which T lymphocytes are engineered to selectively attack cancer cells. Image-based screens of CAR-T cells, combining phase contrast and fluorescence microscopy, suffer from the gradual quenching of the fluorescent signal, making reliable monitoring of cell populations across time-lapse imagery difficult. We propose to leverage the available fluorescent markers as an experimentally generated ground truth, without recourse to manual annotation. With some simple image processing, we are able to segment cells and assign cell type classes automatically. This ground truth is sufficient to train a neural object detection system from the phase contrast signal alone, potentially eliminating the need for the cumbersome fluorescent markers. This approach will underpin the development of cheap and robust microscope-based protocols to quantify CAR-T activity against tumor cells in vitro.
MR Imaging and Spectroscopy for Biomarker Characterization in Golden Retriever Muscular Dystrophy Tissue Samples
Custom double-tuned birdcage coils were constructed to enable concurrent evaluation of a number of NMR indices in the golden retriever muscular dystrophy (GRMD) model of Duchenne muscular dystrophy (DMD). Seven rectus femoris muscle samples from dogs with ages ranging from 3 to 30 months were studied. 1H T1-weighted (T1w) and T2-weighted (T2w) images, 23Na images, and 31P spectra were acquired for each sample. The 1H T1w and T2w images showed a decrease in the T2w/T1w signal ratio for the four older (≥12 months) samples when compared to the younger samples. Other NMR indices unexpectedly showed no significant correlation with age. The collection time of samples and varying levels of disease severity may have contributed to these results. Regardless, the custom coils and positioner developed to enable multi-nuclear studies will enable future work to investigate NMR-based biomarkers in the numerous GRMD samples available to our group.
Transfer-GAN: Multimodal CT Image Super-Resolution Via Transfer Generative Adversarial Networks
Multimodal CT scans, including non-contrast CT, CT perfusion, and CT angiography, are widely used in acute stroke diagnosis and therapeutic planning. While each imaging modality has its advantages for visualizing brain cross-sectional features, the varying image resolution of different modalities hinders the radiologist's ability to discern consistent but subtle suspicious findings. Moreover, higher image quality requires a higher radiation dose, leading to increased health risks such as cataract formation and cancer induction. In this work, we propose Transfer-GAN, a deep learning-based method that utilizes generative adversarial networks and transfer learning to improve multimodal CT image resolution and to lower the necessary radiation exposure. Through extensive experiments, we demonstrate that transfer learning from multimodal CT provides substantial visual and quantitative enhancement compared to training without prior knowledge.
ESCELL: Emergent Symbolic Cellular Language
We present ESCELL, a method for developing an emergent symbolic language of communication between multiple agents reasoning about cells. We show how agents are able to cooperate and communicate successfully, in the form of symbols similar to human language, to accomplish a task in the form of a referential game (Lewis's signaling game). In one form of the game, a sender and a receiver observe a set of cells from 5 different cell phenotypes. The sender is told one cell is the target and is allowed to send the receiver one symbol from a fixed-size arbitrary vocabulary. The receiver relies on the information in the symbol to identify the target cell. We train the sender and receiver networks to develop an innate emergent language between themselves to accomplish this task. We observe that the networks are able to successfully identify cells from the 5 phenotypes with an accuracy of 93.2%. We also introduce a new form of the signaling game where the sender is shown one image instead of all the images that the receiver sees. The networks still develop an emergent language and achieve an identification accuracy of 77.8%.
Fully-Automated Semantic Segmentation of Wireless Capsule Endoscopy Abnormalities
Wireless capsule endoscopy (WCE) is a minimally invasive procedure performed with a tiny swallowable optical endoscope that allows exploration of the human digestive tract. The medical device transmits tens of thousands of colour images, which are manually reviewed by a medical expert. This paper highlights the significance of using inputs from multiple colour spaces to train a classical U-Net model for automated semantic segmentation of eight WCE abnormalities. We also present a novel approach of grouping similar abnormalities during the training phase. Experimental results on the KID datasets demonstrate that a U-Net with 4-channel inputs outperforms the single-channel U-Net providing state-of-the-art semantic segmentation of WCE abnormalities.
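The 4-channel input idea amounts to stacking channels drawn from several colour representations of the same frame before feeding the U-Net. The paper does not specify here which four channels are used, so the ones below (red, green, HSV value, a grayscale mean) are purely illustrative:

```python
import numpy as np

def four_channel_input(rgb):
    """Stack channels from several colour representations of one frame."""
    v = rgb.max(axis=-1)                 # HSV value channel (max over RGB)
    gray = rgb.mean(axis=-1)             # simple luminance stand-in
    return np.stack([rgb[..., 0], rgb[..., 1], v, gray], axis=-1)

frame = np.random.default_rng(0).random((32, 32, 3))
x = four_channel_input(frame)            # (H, W, 4) tensor for the U-Net
```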
A One-Shot Learning Framework for Assessment of Fibrillar Collagen from Second Harmonic Generation Images of an Infarcted Myocardium
Myocardial infarction (MI) is the medical term for a heart attack. In this study, we combine the induction of highly specific second harmonic generation (SHG) signals from non-centrosymmetric macromolecules such as fibrillar collagens with two-photon excited cellular autofluorescence in the infarcted mouse heart to quantitatively probe fibrosis, especially at an early stage after MI. We present robust one-shot machine learning algorithms that enable determination of the spatially resolved 2D structural organization of collagen, as well as structural morphologies in heart tissue post-MI, with spectral specificity and sensitivity. Detection, evaluation, and precise quantification of fibrosis extent at an early stage would guide the development of treatment therapies that may prevent further progression and help determine heart transplant needs for patient survival.
Multiple Instance Learning Via Deep Hierarchical Exploration for Histology Image Classification
We present a fast hierarchical method to detect the presence of cancerous tissue in histological images. The image is not examined in detail everywhere, but only inside several small regions of interest, called glimpses. The final classification is done by aggregating classification scores from a CNN applied to leaf glimpses at the highest resolution. Unlike in existing attention-based methods, the glimpses form a tree structure: low-resolution glimpses determine the locations of several higher-resolution glimpses using weighted sampling and a CNN approximation of the expected scores. We show that it is possible to perform the classification with just a small number of glimpses, leading to a substantial speedup with only a small performance deterioration. Learning is possible using image labels only, as in the multiple instance learning (MIL) setting.
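The weighted-sampling step can be sketched as follows: a coarse score map over the low-resolution view induces a probability distribution from which a few child glimpse locations are drawn. The score map, glimpse count, and function name below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sample_glimpses(score_map, n_glimpses, rng):
    """Weighted sampling of child glimpse locations from a coarse score map."""
    p = score_map.ravel() / score_map.sum()      # scores -> probabilities
    idx = rng.choice(p.size, size=n_glimpses, replace=False, p=p)
    return np.unravel_index(idx, score_map.shape)

rng = np.random.default_rng(0)
scores = rng.random((16, 16)) + 1e-3             # stand-in for CNN score map
rows, cols = sample_glimpses(scores, n_glimpses=4, rng=rng)
```

Sampling without replacement keeps the selected glimpses distinct, so the budget of high-resolution evaluations is spent on different regions.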
Age-Conditioned Synthesis of Pediatric Computed Tomography with Auxiliary Classifier Generative Adversarial Networks
Deep learning is a popular and powerful tool in computed tomography (CT) image processing tasks such as organ segmentation, but its requirement for large training datasets remains a challenge. Even though there is large anatomical variability in children during their growth, training datasets of pediatric CT scans are especially hard to obtain due to the risks of radiation exposure to children. In this paper, we propose a method to conditionally synthesize realistic pediatric CT images using a new auxiliary classifier generative adversarial network (ACGAN) architecture that takes age information into account. The proposed network generates age-conditioned high-resolution CT images to enrich pediatric training datasets.
Weakly Supervised Lesion Co-Segmentation on CT Scans
Lesion segmentation in medical imaging serves as an effective tool for assessing tumor sizes and monitoring changes in growth. However, not only is manual lesion segmentation time-consuming, but it is also expensive and requires expert radiologist knowledge. Therefore many hospitals rely on a loose substitute called response evaluation criteria in solid tumors (RECIST). Although these annotations are far from precise, they are widely used throughout hospitals and are found in their picture archiving and communication systems (PACS). Therefore, these annotations have the potential to serve as a robust yet challenging means of weak supervision for training full lesion segmentation models. In this work, we propose a weakly-supervised co-segmentation model that first generates pseudo-masks from the RECIST slices and uses these as training labels for an attention-based convolutional neural network capable of segmenting common lesions from a pair of CT scans. To validate and test the model, we utilize the DeepLesion dataset, an extensive CT-scan lesion dataset that contains 32,735 PACS bookmarked images. Extensive experimental results demonstrate the efficacy of our co-segmentation approach for lesion segmentation with a mean Dice coefficient of 90.3%.
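A RECIST annotation marks a lesion by its long and short diameters on one axial slice, so a crude initial pseudo-mask can be the ellipse those diameters define (real pipelines, including this kind of work, typically refine such a seed, e.g. with region-growing or GrabCut-style methods). The sketch below is illustrative only:

```python
import numpy as np

def recist_ellipse_mask(shape, center, long_d, short_d):
    """Binary ellipse defined by RECIST-style long/short diameters."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    cy, cx = center
    return (((xx - cx) / (long_d / 2)) ** 2 +
            ((yy - cy) / (short_d / 2)) ** 2) <= 1.0

# A 20 px long axis and 10 px short axis centred in a 64x64 slice:
mask = recist_ellipse_mask((64, 64), center=(32, 32), long_d=20, short_d=10)
```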
CSAF-CNN: Cross-Layer Spatial Attention Map Fusion Network for Organ-At-Risk Segmentation in Head and Neck CT Images
Accurate segmentation of organs at risk (OARs) in head and neck CT images is critical for radiotherapy planning in nasopharyngeal cancer. Fully convolutional networks (FCNs) are widely used for segmentation tasks. Recently, concurrent spatial and channel squeeze-and-excitation (scSE) blocks, a kind of attention module, have been shown to perform well in FCNs. However, the attention feature maps generated by scSE blocks are isolated from each other, which does not help the network notice the similarities among different feature maps. We therefore propose the cross-layer spatial attention map fusion network (CSAF-CNN), which fuses different spatial attention maps to solve this problem. In addition, we introduce a top-k exponential logarithmic Dice loss (TELD-Loss) for OAR segmentation, which effectively alleviates the serious sample imbalance problem of this task. We evaluate our framework on head and neck CT scans of nasopharyngeal cancer patients from the StructSeg 2019 challenge, validate the effectiveness of the proposed method through an ablation study, and achieve very competitive results.
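A loss of the kind the name "top-k exponential logarithmic Dice" describes can be sketched by combining an exponential-log transform of per-class Dice with a top-k selection over the hardest classes. This is a guess at the general shape from the name alone; the authors' exact formulation, exponent, and k may differ:

```python
import numpy as np

def dice_per_class(pred, target, n_classes, eps=1e-6):
    """Per-class Dice scores for integer-labeled predictions."""
    scores = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        scores.append((2 * np.sum(p & t) + eps) /
                      (np.sum(p) + np.sum(t) + eps))
    return np.array(scores)

def teld_loss(pred, target, n_classes, k=3, gamma=0.3):
    """Average exponential-log Dice over the k worst-scoring classes."""
    d = dice_per_class(pred, target, n_classes)
    per_class = (-np.log(np.clip(d, 1e-6, 1.0))) ** gamma
    return float(np.mean(np.sort(per_class)[-k:]))   # keep only the worst k

target = np.random.default_rng(0).integers(0, 5, size=1000)
perfect = teld_loss(target, target, n_classes=5)
wrong = teld_loss((target + 1) % 5, target, n_classes=5)
```

Keeping only the worst k classes focuses each update on the organs the network currently segments worst, which is one way to counter class imbalance.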
Stimulus Speech Decoding from Human Cortex with Generative Adversarial Network Transfer Learning
Decoding auditory stimulus from neural activity can enable neuroprosthetics and direct communication with the brain. Some recent studies have shown successful speech decoding from intracranial recording using deep learning models. However, scarcity of training data leads to low quality speech reconstruction which prevents a complete brain-computer-interface (BCI) application. In this work, we propose a transfer learning approach with a pre-trained GAN to disentangle representation and generation layers for decoding. We first pre-train a generator to produce spectrograms from a representation space using a large corpus of natural speech data. With a small amount of paired data containing the stimulus speech and corresponding ECoG signals, we then transfer it to a bigger network with an encoder attached before, which maps the neural signal to the representation space. To further improve the network generalization ability, we introduce a Gaussian prior distribution regularizer on the latent representation during the transfer phase. With at most 150 training samples for each tested subject, we achieve a state-of-the-art decoding performance. By visualizing the attention mask embedded in the encoder, we observe brain dynamics that are consistent with findings from previous studies investigating dynamics in the superior temporal gyrus (STG), pre-central gyrus (motor) and inferior frontal gyrus (IFG). Our findings demonstrate a high reconstruction accuracy using deep learning networks together with the potential to elucidate interactions across different brain regions during a cognitive task.
Mapping Cerebral Connectivity Changes after Mild Traumatic Brain Injury in Older Adults Using Diffusion Tensor Imaging and Riemannian Matching of Elastic Curves
Although diffusion tensor imaging (DTI) can identify white matter (WM) changes due to mild traumatic brain injury (mTBI), the task of within-subject longitudinal matching of DTI streamlines remains challenging in this condition. Here we combine (A) automatic, atlas-informed labeling of WM streamline clusters with (B) streamline prototyping and (C) Riemannian matching of elastic curves to quantify within-subject changes in WM structure properties, focusing on the arcuate fasciculus. The approach is demonstrated in a group of geriatric mTBI patients imaged acutely and ~6 months post-injury. Results highlight the utility of differential geometry approaches when quantifying brain connectivity alterations due to mTBI.
Metal Artifact Reduction and Intra Cochlear Anatomy Segmentation in CT Images of the Ear with a Multi-Resolution Multi-Task 3D Network
Segmenting the intra-cochlear anatomy structures (ICAs) in post-implantation CT (Post-CT) images of cochlear implant (CI) recipients is challenging due to the strong artifacts produced by the metallic CI electrodes. We propose a multi-resolution multi-task deep network that synthesizes an artifact-free image and segments the ICAs in the Post-CT images simultaneously. The output size of the synthesis branch is 1/64 of that of the segmentation branch, which reduces the memory usage for training while still generating segmentation labels at high resolution. In this preliminary study, we use the segmentation results of an automatic method as the ground truth to supervise the training of our model, and we achieve a median Dice index of 0.792. Our experiments also confirm the usefulness of multi-task learning.
Adversarial Normalization for Multi Domain Image Segmentation
Image normalization is a critical step in medical imaging. This step is often done on a per-dataset basis, preventing current segmentation algorithms from fully exploiting jointly normalized information across multiple datasets. To solve this problem, we propose an adversarial normalization approach for image segmentation which learns common normalizing functions across multiple datasets while retaining image realism. The adversarial training provides an optimal normalizer that improves both the segmentation accuracy and the discrimination of unrealistic normalizing functions. Our contribution therefore leverages common imaging information from multiple domains. The optimality of our common normalizer is evaluated by combining brain images from both infants and adults. Results on the challenging iSEG and MRBrainS datasets reveal the potential of our adversarial normalization approach for segmentation, with Dice improvements of up to 59.6% over the baseline.
Weakly Supervised Prostate TMA Classification Via Graph Convolutional Networks
Histology-based grade classification is clinically important for many cancer types in stratifying patients into distinct treatment groups. In prostate cancer, the Gleason score is a grading system used to measure the aggressiveness of prostate cancer from the spatial organization of cells and the distribution of glands. However, the subjective interpretation of the Gleason score often suffers from large interobserver and intraobserver variability. Previous work in deep learning-based objective Gleason grading requires manual pixel-level annotation. In this work, we propose a weakly-supervised approach for grade classification in tissue micro-arrays (TMA) using graph convolutional networks (GCNs), in which we model the spatial organization of cells as a graph to better capture the proliferation and community structure of tumor cells. We learn the morphometry of each cell using a contrastive predictive coding (CPC)-based self-supervised approach. Using five-fold cross-validation, we demonstrate that our method can achieve a 0.9637 ± 0.0131 AUC using only TMA-level labels. Our method also demonstrates a 36.36% improvement in AUC over standard GCNs with texture features and a 15.48% improvement over GCNs with VGG19 features. Our proposed pipeline can be used to objectively stratify low- and high-risk cases, reducing inter- and intra-observer variability and pathologist workload.
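Modeling "the spatial organization of cells as a graph" typically starts by connecting each detected cell centroid to its nearest neighbours. A minimal sketch of that construction (the k value, coordinates, and function name are illustrative; the paper's graph-building details may differ):

```python
import numpy as np

def knn_adjacency(coords, k):
    """Symmetric adjacency connecting each cell to its k nearest cells."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # no self-edges
    nbrs = np.argsort(d, axis=1)[:, :k]      # k nearest neighbours per cell
    n = len(coords)
    adj = np.zeros((n, n), dtype=int)
    adj[np.repeat(np.arange(n), k), nbrs.ravel()] = 1
    return np.maximum(adj, adj.T)            # make edges undirected

centroids = np.random.default_rng(0).random((30, 2))   # detected cell centres
adj = knn_adjacency(centroids, k=4)
```

A GCN would then operate on this adjacency together with per-cell feature vectors (here, the CPC-learned morphometry embeddings).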
Learning to Detect Brain Lesions from Noisy Annotations
Supervised training of deep neural networks in medical imaging applications relies heavily on expert-provided annotations. These annotations, however, are often imperfect, as voxel-by-voxel labeling of structures in 3D images is difficult and laborious. In this paper, we focus on one common type of label imperfection, namely false negatives. Focusing on brain lesion detection, we propose a method to train a convolutional neural network (CNN) to segment lesions while simultaneously improving the quality of the training labels by identifying false negatives and adding them to the training labels. To identify lesions missed by annotators in the training data, our method makes use of 1) the CNN predictions, 2) prediction uncertainty estimated during training, and 3) prior knowledge about lesion size and features. On a dataset of 165 scans of children with tuberous sclerosis complex from five centers, our method achieved better lesion detection and segmentation accuracy than both the baseline CNN trained on the noisy labels and several alternative techniques.
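The three cues for spotting annotator misses can be combined as a simple voxelwise filter: confidently predicted, low-uncertainty lesion voxels that the annotation leaves unlabeled become false-negative candidates. The thresholds and names below are invented for illustration, and the paper's size/feature priors would add a further filtering stage:

```python
import numpy as np

def candidate_false_negatives(prob, uncertainty, labels,
                              p_thresh=0.9, u_thresh=0.1):
    """Voxels confidently predicted as lesion but unlabeled in the annotation."""
    return (prob > p_thresh) & (uncertainty < u_thresh) & (labels == 0)

# Four toy voxels: only the first is a confident, certain, unlabeled hit.
prob = np.array([0.95, 0.95, 0.2, 0.95])
uncertainty = np.array([0.05, 0.5, 0.05, 0.05])
labels = np.array([0, 0, 0, 1])
cand = candidate_false_negatives(prob, uncertainty, labels)
```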