IEEE ISBI 2020 Virtual Conference April 2020


Showing 251 - 300 of 459
  • IEEE Member: US $11.00
  • Society Member: US $0.00
  • IEEE Student Member: US $11.00
  • Non-IEEE Member: US $15.00
Purchase
  • Arterial Input Function and Tracer Kinetic Model Driven Network for Rapid Inference of Kinetic Maps in Dynamic Contrast Enhanced MRI (AIF-TK-Net)

    00:11:57
    0 views
    We propose a patient-specific arterial input function (AIF) and tracer kinetic (TK) model-driven network to rapidly estimate the extended Tofts-Kety kinetic model parameters in DCE-MRI. We term our network AIF-TK-Net; it maps an input comprising an image patch of the DCE time series and the patient-specific AIF to the output patch of TK parameters. We leverage the open-source NEURO-RIDER database of brain tumor DCE-MRI scans to train our network. Once trained, our model rapidly infers the TK maps of unseen DCE-MRI images, taking about 0.34 sec/slice for a 256x256x65 time series on an NVIDIA GeForce GTX 1080 Ti GPU. We show its utility on high-time-resolution DCE-MRI datasets where significant variability in AIFs exists across patients, and demonstrate that the proposed AIF-TK-Net considerably improves TK parameter estimation accuracy compared to a network that does not utilize the patient-specific AIF.
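    As an illustration of the pharmacokinetics the network learns to invert, the extended Tofts-Kety forward model can be sketched in a few lines of NumPy. The gamma-variate AIF and all parameter values below are made-up toy inputs, not values from the paper:

```python
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Forward extended Tofts-Kety model: tissue concentration from an AIF.

    t      : time points (min), uniformly spaced
    cp     : arterial input function C_p(t)
    ktrans : volume transfer constant (1/min)
    ve     : extravascular extracellular volume fraction
    vp     : plasma volume fraction
    """
    dt = t[1] - t[0]
    kep = ktrans / ve
    # Discretise K_trans * integral of C_p(tau) * exp(-kep (t - tau)) dtau
    kernel = np.exp(-kep * (t - t[0]))
    conv = np.convolve(cp, kernel)[: len(t)] * dt
    return vp * cp + ktrans * conv

# toy AIF: gamma-variate bolus over 5 minutes
t = np.linspace(0.0, 5.0, 300)
cp = 5.0 * t * np.exp(-2.0 * t)
ct = extended_tofts(t, cp, ktrans=0.25, ve=0.3, vp=0.05)
```

    The network described above would learn the inverse map, from `(patch, cp)` to `(ktrans, ve, vp)`, rather than evaluating this forward model at inference time.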
  • Looking in the Right Place for Anomalies: Explainable AI through Automatic Location Learning

    00:12:53
    0 views
    Deep learning has now become the de facto approach to the recognition of anomalies in medical imaging. However, the 'black box' way these networks classify medical images into anomaly labels poses problems for their acceptance, particularly with clinicians. Current explainable AI methods offer justifications through visualizations such as heat maps, but cannot guarantee that the network is focusing on the relevant image region that fully contains the anomaly. In this paper, we develop an approach to explainable AI in which the detected anomaly is assured to overlap its expected location when present. This is made possible by automatically extracting location-specific labels from textual reports and learning the association of expected locations to labels using a hybrid combination of a Bi-Directional Long Short-Term Memory recurrent neural network (Bi-LSTM) and DenseNet-121. Using this expected location to bias the subsequent attention-guided inference network, based on ResNet101, results in the isolation of the anomaly at the expected location when present. The method is evaluated on a large chest X-ray dataset.
  • Transforming Intensity Distribution of Brain Lesions Via Conditional GANs for Segmentation

    00:13:26
    0 views
    Brain lesion segmentation is crucial for diagnosis, surgical planning, and analysis. Because the pixel values of brain lesions in magnetic resonance (MR) scans are distributed over a wide intensity range, there is always considerable overlap between the class-conditional densities of lesions, so accurate automatic brain lesion segmentation remains a challenging task. We present a novel architecture based on conditional generative adversarial networks (cGANs) to improve lesion contrast for segmentation. To this end, we propose a novel generator that adaptively calibrates the input pixel values, and a Markovian discriminator to estimate the distribution of tumors. We further propose the Enhancement and Segmentation GAN (Enh-Seg-GAN), which effectively incorporates the classifier loss into the adversarial loss during training to predict the central labels of the sliding input patches. In particular, the generated synthetic MR images substitute for the real ones to maximize lesion contrast while suppressing the background. The potential of the proposed frameworks is confirmed by quantitative evaluation against state-of-the-art methods on the BraTS'13 dataset.
  • Deep Learning for Time Averaged Wall Shear Stress Prediction in Left Main Coronary Bifurcations

    00:11:23
    0 views
    Analysing blood flow in coronary arteries has often been suggested as an aid to predicting cardiovascular disease (CVD) risk. Flow-induced hemodynamic indices can serve as predictive measures in this pursuit, and a fast method to calculate them may enable patient-specific treatment considerations for improved clinical outcomes in the future. In vivo measurement of these metrics is not practical, so computational fluid dynamics (CFD) simulations are widely used to investigate blood flow conditions; however, they require costly computation time for large-scale studies such as patient-specific assessment of patients screened for CVD. This paper proposes a deep learning approach to estimating the well-established hemodynamic risk indicator time-averaged wall shear stress (TAWSS) from vessel geometry, using the vessel radii, angles between bifurcation (branching) vessels, curvature, and other geometric features. The model predicts TAWSS with good accuracy, achieving cross-validation results of an average mean absolute error of 0.0407 Pa with a standard deviation of 0.002 Pa on a 127-patient CT angiography dataset, while being several orders of magnitude faster than computational simulations. This bypasses costly simulation and enables the large-scale population studies required for meaningful CVD risk prediction.
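    For reference, the TAWSS target itself is simply the time average of the wall-shear-stress magnitude over a cardiac cycle. A minimal NumPy sketch, with a made-up pulsatile WSS field rather than CFD output:

```python
import numpy as np

def tawss(wss):
    """Time-averaged wall shear stress per surface node.

    wss : array (T, N, 3) of instantaneous WSS vectors (Pa), sampled
          uniformly over one cardiac cycle at N surface nodes
    """
    mag = np.linalg.norm(wss, axis=-1)   # instantaneous |WSS|, shape (T, N)
    return mag.mean(axis=0)              # time average, shape (N,)

# toy pulsatile field: axial WSS oscillating around 1 Pa on 4 nodes
t = np.linspace(0.0, 1.0, 100)
wss = np.zeros((100, 4, 3))
wss[:, :, 0] = 1.0 + 0.5 * np.sin(2 * np.pi * t)[:, None]
```

    A deep model such as the one above replaces the expensive CFD step that produces `wss`, predicting the time-averaged map directly from geometric features.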
  • Pneumothorax Segmentation with Effective Conditioned Post-Processing in Chest X-Ray

    00:11:02
    0 views
    Pneumothorax can be caused by a blunt chest injury, by damage from underlying lung disease, or it may occur for no obvious reason at all. It is a difficult finding to detect by eye, and solving it with a high level of accuracy would simplify the clinical workflow. In that workflow, pneumothorax is usually diagnosed by a radiologist and can sometimes be difficult to confirm, which is why an accurate AI detection algorithm is needed. Automatic AI-based solutions have become popular for improving detection quality and performance, and deep learning approaches have recently demonstrated their potential and strengths in medical image processing, including problems that are poorly distinguishable visually. The proposed method presents a segmentation pipeline for chest X-ray images with multi-step conditioned post-processing, yielding a significant improvement over the baseline by decreasing both missed pneumothorax collapse regions and false-positive detections. The obtained results demonstrate high accuracy and strong robustness, with very similar performance on the two-stage test dataset with a previously unseen distribution. Final Dice scores are 0.8821 and 0.8614 for the stage-1 and stage-2 test datasets respectively, which placed the method in the top 0.01% of the private leaderboard on the Kaggle competition platform.
  • Joint Low Dose CT Denoising and Kidney Segmentation

    00:09:53
    0 views
    Low-dose computed tomography (LDCT) has recently gained greater attention as an imaging modality because of the lower radiation dose it necessitates, alongside its wider use, cost-effectiveness, and faster scanning time, making it suitable for screening, diagnosis, and follow-up studies. While segmentation is in itself a complex imaging problem, image enhancement in LDCT is also a contentious issue; the noise that accompanies the convenience of low-dose exposure makes the delineation and assessment of organs such as the kidney, ureter, and bladder non-trivial. In this research, image denoising and kidney segmentation are addressed jointly via one multi-task deep convolutional network. This multi-tasking scheme brings better results for both tasks compared to single-task methods, and to the best of our knowledge this is the first attempt at addressing these tasks jointly in LDCT. The developed network is a conditional generative adversarial network (C-GAN) and extends the image-to-image translation network. The dataset used in this experiment is from the 'Multi-Atlas Labeling Beyond the Cranial Vault' challenge, containing CT scans and corresponding segmentation labels; the labels of the 30 publicly shared subjects are used in this paper. To simulate LDCT scans from CT scans, additive Poisson noise is applied to the sinograms of the CT scans. Because of the limited number of subjects, leave-one-subject-out (LOSO) evaluation is used. The proposed method works on CT slices: the network is trained on all 2D axial slices from the training subjects and tested on all 2D axial slices of the unseen test subjects.
    Comparing the results of the original single-task network (image-to-image translation) and the proposed multi-task network (an extended version called image-to-images translation) shows that, for both tasks, the multi-task method outperforms the single-task method while having only half the network weights. For further investigation, two other conventional single-task networks are also evaluated: the well-known U-Net for segmentation and the recently proposed WGAN for LDCT denoising. On average, the proposed method outperforms U-Net by almost 20% in Dice score and the WGAN by almost 0.15 in RMSE.
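    The low-dose simulation step described above follows the standard photon-counting noise model: attenuate an assumed incident photon count along each ray, draw Poisson counts, and log-transform back. A sketch, where `n0` is an illustrative value rather than the paper's setting:

```python
import numpy as np

def simulate_low_dose(sinogram, n0=1e4, rng=None):
    """Simulate a low-dose sinogram via Poisson photon-counting noise.

    sinogram : line integrals (attenuation) from the normal-dose scan
    n0       : incident photon count per detector bin (lower = noisier)
    """
    rng = np.random.default_rng(rng)
    counts = rng.poisson(n0 * np.exp(-sinogram))  # Beer-Lambert + Poisson
    counts = np.maximum(counts, 1)                # guard against log(0)
    return -np.log(counts / n0)                   # back to line integrals
```

    Reconstructing the noisy sinogram then yields the simulated LDCT slice used for training.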
  • Multi-resolution Graph Neural Network for Detecting Variations in Brain Connectivity

    00:06:48
    1 view
    In this work, we propose a novel CNN-based framework with adaptive graph transforms to learn the disease-relevant connectome feature maps that have the highest discrimination power across diagnostic categories. The backbone of our framework is a multi-resolution representation of the graph matrix, steered by a set of wavelet-like graph transforms. Our graph learning framework outperforms conventional methods that predict diagnostic labels for graphs.
  • Artificial Intelligence and Computational Pathology: Implications for Precision Medicine

    00:29:54
    1 view
    With the advent of digital pathology, there is an opportunity to develop computerized image analysis methods to not just detect and diagnose disease from histopathology tissue sections, but to also attempt to predict risk of recurrence, predict disease aggressiveness and long term survival. At the Center for Computational Imaging and Personalized Diagnostics, our team has been developing a suite of image processing and computer vision tools, specifically designed to predict disease progression and response to therapy via the extraction and analysis of image-based 'histological biomarkers' derived from digitized tissue biopsy specimens. These tools would serve as an attractive alternative to molecular based assays, which attempt to perform the same predictions. The fundamental hypotheses underlying our work are that: 1) the genomic expressions detected by molecular assays manifest as unique phenotypic alterations (i.e. histological biomarkers) visible in the tissue; 2) these histological biomarkers contain information necessary to predict disease progression and response to therapy; and 3) sophisticated computer vision algorithms are integral to the successful identification and extraction of these biomarkers. We have developed and applied these prognostic tools in the context of several different disease domains including ER+ breast cancer, prostate cancer, Her2+ breast cancer, ovarian cancer, and more recently medulloblastomas. For the purposes of this talk I will focus on our work in breast, prostate, rectal, oropharyngeal, and lung cancer.
  • Volumetric Registration of Brain Cortical Regions by Automatic Landmark Matching and Large Deformation Diffeomorphisms

    00:15:16
    0 views
    A well-known challenge in fMRI data analysis is the excessive variability in the MR signal and the high level of random and structured noise. A common solution to such high variability/noise is to recruit a large number of subjects to enhance the statistical power to detect a scientific effect. To achieve this, the morphologies of the sample brains must be warped into a standard space. However, human cerebral cortices are highly convoluted, with large inter-subject morphological variability that renders the registration task challenging. Currently, state-of-the-art non-linear registration methods perform poorly on the brain's cortical regions, particularly in aging and clinical populations. To alleviate this issue, we propose a landmark-guided, region-based image registration method. We evaluated our method by warping the brain cortical regions of both young and older participants into the standard space. Compared with the state-of-the-art method, our method significantly (t = 117, p ≈ 0) improved the overlap of the cortical regions (Dice increased by 57%). We conclude that our method can significantly improve the registration accuracy of brain cortical regions.
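    The Dice overlap used in the evaluation above is a standard measure of agreement between two binary regions; as a quick reference:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)
```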
  • An Augmentation-Free Rotation Invariant Classification Scheme on Point-Cloud and Its Application to Neuroimaging

    00:12:27
    0 views
    Recent years have witnessed the emergence and increasing popularity of 3D medical imaging techniques with the development of 3D sensors and technology. However, achieving geometric invariance in the processing of 3D medical images is computationally expensive but nonetheless essential due to the presence of possible errors caused by rigid registration techniques. An alternative way to analyze medical imaging is by understanding the 3D shapes represented as point clouds. Though 3D point-cloud processing is not a "go-to" choice in the medical imaging community, it is a canonical way to preserve rotation invariance. Unfortunately, due to the discrete topology, one cannot use the standard convolution operator on point clouds. To the best of our knowledge, the existing ways to do "convolution" cannot preserve rotation invariance without explicit data augmentation. Therefore, we propose a rotation-invariant convolution operator by inducing topology from the hypersphere. Experimental validation has been performed on the publicly available OASIS dataset in terms of classification accuracy between subjects with and without dementia, demonstrating the usefulness of our proposed method in terms of (a) model complexity, (b) classification accuracy, and, last but most important, (c) invariance to rotations.
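    The premise that pairwise geometry is rotation-invariant, while raw coordinates are not, can be checked numerically. This is a generic illustration of the invariance property, not the authors' hypersphere-induced operator:

```python
import numpy as np

def pairwise_distance_features(points):
    """Sorted pairwise distances of a point cloud: a descriptor unchanged
    by any rotation (and by translation, if the cloud is centred)."""
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(points), k=1)   # each pair counted once
    return np.sort(dists[iu])

def random_rotation(seed=None):
    """Random 3D orthogonal matrix via QR of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    return q * np.sign(np.diag(r))           # fix column signs
```

    Applying any such orthogonal transform to the cloud leaves the descriptor unchanged, which is the property the proposed convolution operator preserves without augmentation.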
  • SSN: A Stair-Shape Network for Real-Time Polyp Segmentation in Colonoscopy Images

    00:09:19
    0 views
    Colorectal cancer is one of the most life-threatening malignancies, commonly arising from intestinal polyps. Clinical colonoscopy is an effective exam for early detection of polyps and is often conducted in a real-time manner, but colonoscopy analysis is time-consuming and suffers from a high miss rate. In this paper, we develop a novel stair-shape network (SSN) for real-time polyp segmentation in colonoscopy images (not merely simple detection). Our new model is much faster than U-Net, yet yields better performance for polyp segmentation. The model first utilizes four blocks to extract spatial features at the encoder stage. Subsequent skip connections with a Dual Attention Module for each block and a final Multi-scale Fusion Module are used to fully fuse features of different scales. Based on abundant data augmentation and strong supervision from auxiliary losses, our model can learn much more information for polyp segmentation. Our method attains high performance on several datasets (CVC-ColonDB, CVC-ClinicDB, and EndoScene), outperforming state-of-the-art methods, and our network can also be applied to other real-time segmentation tasks in clinical practice.
  • Dynamics of Brain Activity Captured by Graph Signal Processing of Neuroimaging Data to Predict Human Behaviour

    00:11:53
    0 views
    Joint structural and functional modelling of the brain based on multimodal imaging increasingly shows potential in elucidating the underpinnings of human cognition. In the graph signal processing (GSP) approach to neuroimaging, brain activity patterns are viewed as graph signals expressed on the structural brain graph built from anatomical connectivity. The energy fraction between functional signals that are in line with structure (termed alignment) and those that are not (liberality) has been linked to behaviour. Here, we examine whether there is also information of interest at the level of temporal fluctuations of alignment and liberality. We consider the prediction of an array of behavioural scores and show that in many cases a dynamic characterisation yields additional significant insight.
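    The alignment/liberality split can be sketched with a graph Fourier transform on a toy graph: decompose the structural Laplacian, project the activity pattern onto its eigenmodes, and split the energy between low-frequency (structure-aligned) and high-frequency (liberal) modes. The cutoff `n_low` here is an illustrative assumption, not the paper's criterion:

```python
import numpy as np

def align_liberal_energy(adj, signal, n_low):
    """Energy fractions of a graph signal in aligned vs liberal eigenmodes.

    adj    : symmetric structural adjacency matrix (N, N)
    signal : activity pattern on the N nodes
    n_low  : number of low-frequency Laplacian eigenmodes deemed 'aligned'
    """
    lap = np.diag(adj.sum(axis=1)) - adj   # combinatorial graph Laplacian
    _, evecs = np.linalg.eigh(lap)         # eigenvectors, ascending frequency
    coeffs = evecs.T @ signal              # graph Fourier transform
    total = np.sum(coeffs ** 2)
    aligned = np.sum(coeffs[:n_low] ** 2) / total
    return aligned, 1.0 - aligned

# toy structural graph: a 4-node path; a constant signal is fully aligned
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
aligned, liberal = align_liberal_energy(adj, np.ones(4), n_low=1)
```

    Tracking these fractions over sliding time windows gives the temporal fluctuations the abstract refers to.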
  • Self-Supervised Representation Learning for Ultrasound Video

    00:12:17
    0 views
    Recent advances in deep learning have achieved promising performance for medical image analysis, while in most cases ground-truth annotations from human experts are necessary to train the deep model. In practice, such annotations are expensive to collect and can be scarce for medical imaging applications. Therefore, there is significant interest in learning representations from unlabelled raw data. In this paper, we propose a self-supervised learning approach to learn meaningful and transferable representations from medical imaging video without any type of human annotation. We assume that in order to learn such a representation, the model should identify anatomical structures from the unlabelled data. Therefore we force the model to address anatomy-aware tasks with free supervision from the data itself. Specifically, the model is designed to correct the order of a reshuffled video clip and at the same time predict the geometric transformation applied to the video clip. Experiments on fetal ultrasound video show that the proposed approach can effectively learn meaningful and strong representations, which transfer well to downstream tasks like standard plane detection and saliency prediction.
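    The two pretext tasks described above (clip reordering and transformation prediction) can be sketched as a sample generator whose labels come for free from the data. The 90-degree rotation is one illustrative choice of geometric transform, not necessarily the paper's:

```python
import numpy as np

def make_pretext_sample(clip, rng=None):
    """Build one self-supervision sample: shuffle the frame order and apply
    a random geometric transform; both labels are free supervision.

    clip : video clip, array (T, H, W) with square frames
    Returns (transformed clip, permutation label, rotation label k).
    """
    rng = np.random.default_rng(rng)
    perm = rng.permutation(clip.shape[0])       # order-prediction target
    k = int(rng.integers(4))                    # rotation by k * 90 degrees
    shuffled = clip[perm]
    transformed = np.rot90(shuffled, k=k, axes=(1, 2))
    return transformed, perm, k

# toy "ultrasound" clip of 6 frames
clip = np.arange(6 * 32 * 32, dtype=float).reshape(6, 32, 32)
x, perm, k = make_pretext_sample(clip, rng=0)
```

    The model is then trained to predict `perm` and `k` from `x`; the learned features transfer to downstream tasks such as standard plane detection.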
  • Non-Rigid 2D-3D Registration Using Convolutional Autoencoders

    00:13:31
    0 views
    In this paper, we propose a novel neural network-based framework for the non-rigid 2D-3D registration of the lateral cephalogram and the volumetric cone-beam CT (CBCT) images. The task is formulated as an embedding problem, where we utilize the statistical volumetric representation and embed the X-ray image to a code vector regarding the non-rigid volumetric deformations. In particular, we build a deep ResNet-based encoder to infer the code vector from the input X-ray image. We design a decoder to generate digitally reconstructed radiographs (DRRs) from the non-rigidly deformed volumetric image determined by the code vector. The parameters of the encoder are optimized by minimizing the difference between synthetic DRRs and input X-ray images in an unsupervised way. Without geometric constraints from multi-view X-ray images, we exploit structural constraints of the multi-scale feature pyramid in similarity analysis. The training process is unsupervised and does not require paired 2D X-ray images and 3D CBCT images. The system allows constructing a volumetric image from a single X-ray image and realizes the 2D-3D registration between the lateral cephalograms and CBCT images.
  • Photoshopping Colonoscopy Video Frames

    00:14:38
    0 views
    The automatic detection of frames containing polyps from a colonoscopy video sequence is an important first step for a fully automated colonoscopy analysis tool. Typically, such a detection system is built using a large annotated dataset of frames with and without polyps, which is expensive to obtain. In this paper, we introduce a new system that detects frames containing polyps as anomalies from a distribution of frames from exams that do not contain any polyps. The system is trained using a one-class training set consisting of colonoscopy frames without polyps; such a training set is considerably less expensive to obtain than the two-class dataset mentioned above. During inference, the system can only reconstruct frames without polyps, so when it tries to reconstruct a frame with a polyp, it automatically removes (i.e., 'photoshops') the polyp from the frame; the difference between the input and reconstructed frames is used to detect frames with polyps. We name our proposed model the anomaly detection generative adversarial network (ADGAN); it comprises a dual GAN with two generators and two discriminators. To test our framework, we use a new colonoscopy dataset with 14,317 images, split into a training set of 13,350 images without polyps and a testing set of 290 abnormal images containing polyps and 677 normal images without polyps. We show that our proposed approach achieves the state-of-the-art result on this dataset compared with recently proposed anomaly detection systems.
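    The detection principle, scoring each frame by its reconstruction error, reduces to a few lines once a reconstruction exists. This sketch stands in for the full dual-GAN and uses synthetic arrays in place of real frames:

```python
import numpy as np

def anomaly_scores(frames, reconstructions):
    """Per-frame anomaly score: mean absolute reconstruction error.

    A model trained only on polyp-free frames reconstructs normal frames
    well, so a large error flags a likely polyp frame.
    """
    err = np.abs(frames.astype(float) - reconstructions.astype(float))
    return err.reshape(err.shape[0], -1).mean(axis=1)

# synthetic demo: reconstruction "fails" on frame 0 only
rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 16, 16))
recon = frames.copy()
recon[0] += 2.0
scores = anomaly_scores(frames, recon)
```

    Thresholding `scores` (e.g. against a percentile of scores on held-out normal frames) yields the final polyp/no-polyp decision.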
  • A Filtered Delay Weight Multiply and Sum (F-DwMAS) Beamforming for Ultrasound Imaging: Preliminary Results

    00:13:13
    0 views
    This paper reports the development of a modified beamforming method named Filtered Delay Weight Multiply and Sum (F-DwMAS). The developed F-DwMAS method was investigated on a minimum-redundancy synthetic aperture technique called the 2-Receive Synthetic Aperture Focusing Technique (2R-SAFT), which uses one element on transmit and two consecutive elements on receive, to achieve high-quality imaging in low-complexity ultrasound systems. Notably, in F-DwMAS, an additional aperture window function is designed and incorporated into the recently introduced F-DMAS method. F-DwMAS, F-DMAS, and Delay and Sum (DAS) were compared in terms of lateral resolution (LR), axial resolution (AR), contrast ratio (CR), and contrast-to-noise ratio (CNR) in a simulation study. Results show that the proposed F-DwMAS improved LR by 22.86% and 25.19%, AR by 5.18% and 11.06%, and CR by 152% and 112.8% compared to F-DMAS and DAS, respectively. The CNR of F-DwMAS was 12.3% lower than that of DAS, but 103.09% higher than that of F-DMAS. Hence, it can be concluded that F-DwMAS improves image quality compared to DAS and F-DMAS.
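    For orientation, the DAS and (unfiltered) DMAS summations that F-DwMAS builds on can be sketched as follows; the band-pass filtering and the new aperture-weighting step of F-DwMAS are omitted:

```python
import numpy as np

def das(channels):
    """Delay-and-sum: sum the (already delay-aligned) channel signals."""
    return channels.sum(axis=0)

def dmas(channels):
    """Delay-multiply-and-sum: for every distinct channel pair, take the
    signed square root of the product, then sum over all pairs (the sign
    is kept so the result retains the signal's original dimensionality)."""
    n = channels.shape[0]
    out = np.zeros(channels.shape[1])
    for i in range(n - 1):
        prod = channels[i] * channels[i + 1:]            # pairs (i, j > i)
        out += (np.sign(prod) * np.sqrt(np.abs(prod))).sum(axis=0)
    return out
```

    In this sketch, the additional window of F-DwMAS could be approximated by weighting channels before the pairwise products, e.g. `dmas(w[:, None] * channels)`; where exactly the window is applied is an assumption here, not the paper's formulation.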
  • A Novel Framework for Grading Autism Severity Using Task-Based fMRI

    00:12:39
    1 view
    Autism is a developmental disorder associated with difficulties in communication and social interaction. Currently, the gold standard in autism diagnosis is the autism diagnostic observation schedule (ADOS) interview, which assigns a score indicating each individual's level of severity. Researchers are now investigating objective technologies to diagnose autism using brain imaging modalities. One such modality is task-based functional MRI, which exhibits alterations in functional activity that are believed to be important in explaining the causative factors of autism. Although autism is defined over a wide spectrum, previous diagnostic approaches only divide subjects into normal or autistic. In this paper, a novel framework for grading the severity level of autistic subjects using task-based fMRI data is presented. A speech experiment is used to obtain local features related to the functional activity of the brain. According to ADOS reports, the adopted dataset of 39 subjects is classified into three groups of 13 subjects each: mild, moderate, and severe. Individual analysis with the general linear model (GLM) is used to extract features for each of the 246 brain areas of the Brainnetome atlas (BNT). Classification with a random forest after recursive feature elimination (RFE) achieves 72% accuracy. Finally, we validate the selected features by applying higher-level group analysis to show how informative they are and to infer the significant statistical differences between groups.
  • AirwayNet-SE: A Simple-Yet-Effective Approach to Improve Airway Segmentation Using Context Scale Fusion

    00:12:00
    0 views
    Accurate segmentation of airways from chest CT scans is crucial for pulmonary disease diagnosis and surgical navigation. However, the intra-class variety of airways and their intrinsic tree-like structure pose challenges to the development of automatic segmentation methods. Previous work that exploits convolutional neural networks (CNNs) does not take context scales into consideration, leading to performance degradation on peripheral bronchioles. We propose the two-step AirwayNet-SE, a simple-yet-effective CNN-based approach to improve airway segmentation. The first step adopts connectivity modeling to transform the binary segmentation task into a 26-connectivity prediction task, facilitating the model's comprehension of airway anatomy. The second step predicts connectivity with a two-stage CNN-based approach: in the first stage, a Deep-yet-Narrow Network (DNN) and a Shallow-yet-Wide Network (SWN) are used to learn features with large-scale and small-scale context knowledge, respectively; these features are fused in the second stage to predict each voxel's probability of being an airway voxel and its connectivity to its neighbors. We trained our model on 50 CT scans from public datasets and tested it on another 20 scans. Compared with state-of-the-art airway segmentation methods, the robustness and superiority of AirwayNet-SE confirm the effectiveness of fusing large-scale and small-scale context. In addition, we release our manual airway annotations of 60 CT scans from public datasets for supervised airway segmentation studies.
  • High-Frequency Quantitative Photoacoustic Imaging and Pixel-Level Tissue Classification

    00:07:57
    0 views
    The recently proposed frequency-domain technique for photoacoustic (PA) image formation helps to differentiate between different-sized structures. Although this technique has provided encouraging preliminary results, it currently lacks a mathematical framework. H-scan ultrasound (US) imaging was introduced for characterizing acoustic scattering behavior at the pixel level; it relies on matching a model of US image formation to the mathematics of a class of Gaussian-weighted Hermite polynomial (GWHP) functions. Herein, we propose extrapolating the H-scan US image processing method to the analysis of PA signals. Radiofrequency (RF) PA data were obtained using a Vevo 3100 with LAZR-X system (Fujifilm VisualSonics). Experiments were performed using tissue-mimicking phantoms embedded with optically absorbing spherical scatterers. Overall, preliminary results demonstrate that H-scan US-based processing of PA signals can help distinguish micrometer-sized objects of varying size.
  • Deep Learning Features for Modeling Perceptual Similarity in Microcalcification Lesion Retrieval

    00:11:06
    0 views
    Retrieving cases with similar image features has been found to be effective for improving the diagnostic accuracy of microcalcification (MC) lesions in mammograms. However, a major challenge in such an image-retrieval approach is determining whether a retrieved lesion image has features diagnostically similar to those of the query case. We investigate the feasibility of modeling perceptually similar MC lesions using deep learning features extracted from two types of deep neural networks: a supervised-learning network developed for the task of MC detection, and a denoising autoencoder network. In the experiments, the deep learning features were compared against perceptual similarity scores collected from a reader study on 1,000 MC lesion image pairs. The results indicate that deep learning features can potentially be more effective than traditional handcrafted texture features for modeling the notion of perceptual similarity of MC lesions.
  • Validating Uncertainty in Medical Image Translation

    00:14:46
    0 views
    Medical images are increasingly used as input to deep neural networks to produce quantitative values that aid researchers and clinicians. However, standard deep neural networks do not provide a reliable measure of uncertainty in those quantitative values. Recent work has shown that using dropout during training and testing can provide estimates of uncertainty. In this work, we investigate using dropout to estimate epistemic and aleatoric uncertainty in a CT-to-MR image translation task. We show that both types of uncertainty are captured, as defined, providing confidence in the output uncertainty estimates.
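    The dropout-based estimate described above follows a standard recipe: keep dropout active at test time and compute sample statistics over repeated stochastic passes. A toy sketch, where the linear "network" is invented purely for illustration:

```python
import numpy as np

def mc_dropout_predict(forward, x, n_samples=50, rng=None):
    """Monte-Carlo dropout: run several stochastic forward passes and
    report the predictive mean plus the sample variance as an
    (epistemic) uncertainty estimate."""
    rng = np.random.default_rng(rng)
    preds = np.stack([forward(x, rng) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)

# toy stochastic "network": a linear map with dropout on its input
w = np.array([1.0, 2.0, 3.0])

def toy_forward(x, rng, p=0.5):
    keep = rng.random(x.shape) >= p          # Bernoulli dropout mask
    return w @ (x * keep) / (1.0 - p)        # inverted-dropout scaling

mean, var = mc_dropout_predict(toy_forward, np.ones(3), n_samples=200, rng=0)
```

    In the image-translation setting, `forward` would be the CT-to-MR network with its dropout layers left on, and the variance map would be reported per voxel; modelling aleatoric uncertainty additionally requires the network to predict an observation-noise term, which this sketch omits.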
  • Automatic Depression Detection Via Facial Expressions Using Multiple Instance Learning

    00:11:06
    0 views
    Depression affects more than 300 million people around the world and is the leading cause of disability in the USA for individuals aged 15 to 44; according to the WHO, its impact is comparable to that of the most common diseases such as cancer, diabetes, or heart disease. However, people with depression symptoms sometimes do not receive proper treatment due to access barriers. In this paper, we propose a method that automatically detects depression using only landmarks of facial expressions, which are easy to collect with less privacy exposure. We handle coarse-grained labels, i.e. one final label for a long video clip, which is the common case in applications, through the integration of feature manipulation and multiple instance learning. The effectiveness of our method is compared to other vision-based methods, and it even outperforms multi-modal methods that use multiple modalities.
  • Fine-Grained Multi-Instance Classification in Microscopy through Deep Attention

    00:10:32
    Fine-grained object recognition and classification in biomedical images poses a number of challenges. Images typically contain multiple instances (e.g. glands), and the recognition of salient structures is confounded by visually complex backgrounds. Due to the cost of data acquisition or the limited availability of specimens, data sets tend to be small. We propose a simple yet effective attention-based deep architecture to address these issues, especially to achieve improved background suppression and recognition of multiple instances per image. Attention maps per instance are learnt in an end-to-end fashion. Microscopic images of fungi (new data) and a publicly available Breast Cancer Histology benchmark data set are used to demonstrate the performance of the proposed approach. Our algorithm comparison suggests that the proposed approach advances the state-of-the-art.
  • Semi-Supervised Multi-Domain Multi-Task Training for Metastatic Colon Lymph Node Diagnosis from Abdominal CT

    00:06:40
    The diagnosis of the presence of metastatic lymph nodes from abdominal computed tomography (CT) scans is an essential task performed by radiologists to guide radiation and chemotherapy treatment. State-of-the-art deep learning classifiers trained for this task usually rely on a training set containing CT volumes and their respective image-level (i.e., global) annotation. However, the lack of annotations for the localisation of the regions of interest (ROIs) containing lymph nodes can limit classification accuracy due to the small size of the relevant ROIs in this problem. The use of lymph node ROIs together with global annotations in a multi-task training process has the potential to improve classification accuracy, but the high cost involved in obtaining the ROI annotation for the same samples that have global annotations is a roadblock for this alternative. We address this limitation by introducing a new training strategy from two data sets: one containing the global annotations, and another (publicly available) containing only the lymph node ROI localisation. We term our new strategy semi-supervised multi-domain multi-task training, where the goal is to improve the diagnosis accuracy on the globally annotated data set by incorporating the ROI annotations from a different domain. Using a private data set containing global annotations and a public data set containing lymph node ROI localisation, we show that our proposed training mechanism improves the area under the ROC curve for the classification task compared to several training method baselines.
  • Towards Uncertainty Quantification for Electrode Bending Prediction in Stereotactic Neurosurgery

    00:15:46
    Accurate implantation of electrodes during stereotactic neurosurgery is necessary to ensure safety and efficacy. However, electrodes deflect from planned trajectories. Although mechanical models and data-driven approaches have been proposed for trajectory prediction, they fail to report the uncertainty of their predictions. We propose to use Monte Carlo (MC) dropout on neural networks to quantify the uncertainty of predicted electrode local displacement. We compute image features of 23 stereoelectroencephalography cases (241 electrodes) and use them as inputs to a neural network to regress electrode local displacement. We use MC dropout with 200 stochastic passes to quantify the uncertainty of predictions. To validate our approach, we define a baseline model without dropout and compare it to a stochastic model using 10-fold cross-validation. Given a starting planned trajectory, we predicted electrode bending using the inferred local displacement at the tip via simulation. We found that MC dropout performed better than a non-stochastic baseline model and provided confidence intervals along the predicted trajectory of electrodes. We believe this approach facilitates better decision making for electrode bending prediction in surgical planning.
  • Remove Appearance Shift for Ultrasound Image Segmentation Via Fast and Universal Style Transfer

    00:10:18
    Deep Neural Networks (DNNs) suffer from performance degradation when image appearance shifts occur, especially in ultrasound (US) image segmentation. In this paper, we propose a novel and intuitive framework to remove the appearance shift, and hence improve the generalization ability of DNNs. Our work has three highlights. First, we follow the spirit of universal style transfer to remove appearance shifts, which had not been explored before for US images. Without sacrificing image structure details, it enables arbitrary style-content transfer. Second, accelerated with an Adaptive Instance Normalization block, our framework achieves the real-time speed required in clinical US scanning. Third, an efficient and effective style-image selection strategy is proposed to ensure that the target-style US image and the testing content US image properly match each other. Experiments on two large US datasets demonstrate that our methods are superior to state-of-the-art methods in making DNNs robust against various appearance shifts.
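The Adaptive Instance Normalization block mentioned above has a standard closed form (Huang and Belongie's AdaIN): re-scale each content channel to the style channel's statistics. A minimal sketch on synthetic feature maps:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: shift/scale each content channel
    to match the corresponding style channel's mean and std."""
    axes = (1, 2)                       # spatial axes of a (C, H, W) map
    c_mu = content.mean(axis=axes, keepdims=True)
    c_sd = content.std(axis=axes, keepdims=True) + eps
    s_mu = style.mean(axis=axes, keepdims=True)
    s_sd = style.std(axis=axes, keepdims=True)
    return s_sd * (content - c_mu) / c_sd + s_mu

rng = np.random.default_rng(1)
content = rng.normal(2.0, 3.0, (4, 16, 16))    # toy "content" feature map
style = rng.normal(-1.0, 0.5, (4, 16, 16))     # toy "style" feature map
out = adain(content, style)
```

After the transfer, each output channel carries the style statistics while preserving the content's spatial structure.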
  • 3D Biological Cell Reconstruction with Multi-View Geometry

    00:13:26
    3D cell modeling is an important tool for visualizing cellular structures and events, and for generating accurate data for further quantitative geometric and morphological analyses of cellular structures. Current methods involve highly specialized and expensive setups, as well as experts in microscopy and 3D reconstruction, to produce time- and work-intensive insight into cellular events. We developed a new system that reconstructs the surface geometry of 3D cellular structures from 2D image sequences in a fast and automatic way. The system rotated cells in a microfluidic device, while their images were captured by a video camera. Multi-view geometry theory was introduced to microscopy imaging to model the imaging system and define the 3D reconstruction as an inverse problem. Finally, we successfully demonstrated the reconstruction of cellular structures in their natural state.
  • Analysis of Consistency in Structural and Functional Connectivity of Human Brain

    00:16:14
    Analysis of structural and functional connectivity of the brain has become a fundamental approach in neuroscientific research. Despite several studies reporting consistent similarities as well as differences between structural and resting-state (rs) functional connectomes, a comparative investigation of connectomic consistency between the two modalities is still lacking. Nonetheless, connectomic analyses comprising both connectivity types necessitate extra attention, as the consistency of connectivity differs across modalities, possibly affecting the interpretation of the results. In this study, we present a comprehensive analysis of consistency in structural and rs-functional connectomes obtained from longitudinal diffusion MRI and rs-fMRI data of a single healthy subject. We contrast the consistency of deterministic and probabilistic tracking with that of full, positive, and negative functional connectivities across various connectome generation schemes, using correlation as a measure of consistency.
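Using correlation as a measure of consistency, as above, can be sketched by correlating the vectorised upper triangles of repeated connectivity matrices; the synthetic "sessions" below are illustrative stand-ins for longitudinal connectomes:

```python
import numpy as np

def consistency(connectomes):
    """Mean pairwise Pearson correlation between the vectorised upper
    triangles of repeated connectivity matrices."""
    n = connectomes[0].shape[0]
    iu = np.triu_indices(n, k=1)                  # off-diagonal edges only
    vecs = np.array([c[iu] for c in connectomes])
    r = np.corrcoef(vecs)                         # session-by-session correlations
    return float(r[np.triu_indices(len(vecs), k=1)].mean())

rng = np.random.default_rng(2)
base = rng.random((10, 10))
base = (base + base.T) / 2.0                      # toy "true" connectome
sessions = [base + rng.normal(0.0, 0.05, base.shape) for _ in range(4)]
score = consistency(sessions)
```

A score near 1 indicates a highly reproducible connectome across sessions.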
  • CNN Detection of New and Enlarging Multiple Sclerosis Lesions from Longitudinal MRI Using Subtraction Images

    00:14:18
    Accurate detection and segmentation of new lesional activity in longitudinal Magnetic Resonance Images (MRIs) of patients with Multiple Sclerosis (MS) is important for monitoring disease activity, as well as for assessing treatment effects. In this work, we present the first deep learning framework to automatically detect and segment new and enlarging (NE) T2w lesions from longitudinal brain MRIs acquired from relapsing-remitting MS (RRMS) patients. The proposed framework is an adapted 3D U-Net [1] which includes as inputs the reference multi-modal MRI and T2-weighted lesion maps, as well as an attention mechanism based on the subtraction MRI (between the two timepoints), which serves to assist the network in learning to differentiate between real anatomical change and artifactual change, while constraining the search space for small lesions. Experiments on a large, proprietary, multi-center, multi-modal, clinical trial dataset consisting of 1677 multi-modal scans illustrate that the network achieves high overall detection accuracy (detection AUC = .95), outperforming (1) a U-Net without an attention mechanism (detection AUC = .93), (2) a framework based on subtracting independent T2-weighted segmentations (detection AUC = .57), and (3) DeepMedic (detection AUC = .84), particularly for small lesions. In addition, the method was able to accurately classify patients as active/inactive (sensitivity of .69 and specificity of .97).
  • Back Shape Measurement and Three-Dimensional Reconstruction of Spinal Shape Using One Kinect Sensor

    00:06:11
    Spinal screening relies mainly on direct clinical diagnosis or X-ray examination (which exposes the human body to harmful radiation). In general, the lack of knowledge in this area prevents parents from discovering adolescents' spinal deformation problems at an early age. Therefore, we propose a low-cost, easy-to-use, radiation-free, and highly accurate method to quickly reconstruct the three-dimensional shape of the spine, which can be used for the evaluation of spinal deformation. Firstly, the depth images collected by a Kinect sensor are transformed into three-dimensional point clouds. Then, the features of anatomic landmark points and the spinous processes (SP) line are classified and extracted. Finally, the correlation model of the SP line and the spine midline is established to reconstruct the spine. The results show that the proposed method can extract anatomic landmark points and evaluate scoliosis accurately (average RMS errors of 5 mm and 3 degrees), which is feasible and promising.
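The first step, turning Kinect depth images into 3D point clouds, follows the standard pinhole back-projection; the camera intrinsics below are made-up values for illustration, not those of a real Kinect:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to 3-D points with the
    pinhole model: X = (u - cx) Z / fx, Y = (v - cy) Z / fy."""
    v, u = np.indices(depth.shape)          # pixel row/column grids
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]               # drop invalid zero-depth pixels

depth = np.full((4, 4), 2.0)                # toy 4x4 depth map, 2 m everywhere
depth[0, 0] = 0.0                           # one invalid pixel
pts = depth_to_pointcloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```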
  • STAN: Small Tumor-Aware Network for Breast Ultrasound Image Segmentation

    00:15:05
    Breast tumor segmentation provides an accurate tumor boundary and serves as a key step toward further cancer quantification. Although deep learning-based approaches have been proposed and achieved promising results, existing approaches have difficulty in detecting small breast tumors. The capacity to detect small tumors is particularly important for finding early-stage cancers using computer-aided diagnosis (CAD) systems. In this paper, we propose a novel deep learning architecture, called Small Tumor-Aware Network (STAN), to improve the performance of segmenting tumors of different sizes. The new architecture integrates both rich context information and high-resolution image features. We validate the proposed approach using seven quantitative metrics on two public breast ultrasound datasets. The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
  • A Clinical Workflow Simulator for Intelligent Chest X-ray Worklist Prioritization

    00:06:23
    Growing radiologic workload and a shortage of medical experts worldwide often lead to delayed or even unreported examinations, which poses a risk to patient safety in case of unrecognized findings in chest radiographs (CXR). The aim of this work was to evaluate whether deep learning algorithms for intelligent worklist prioritization can optimize the radiology workflow by reducing the report turnaround times (RTAT) for critical findings, instead of reporting according to the first-in-first-out principle (FIFO). Furthermore, we investigated the problem of false negative predictions in the context of worklist prioritization. We developed a simulation framework by analyzing the current workflow at a university hospital. The framework can be used to simulate clinical workdays. To assess the potential benefit of an intelligent worklist prioritization, three different workflow simulations were run and RTAT were compared: FIFO (non-prioritized), and Prio1 and Prio2 (prioritized based on urgency, without/with MAXwait). For Prio2, the highest urgency is assigned after a maximum waiting time. Examination triage was performed by "ChestXCheck", a convolutional neural network classifying eight different pathological findings ranked in descending order of urgency: pneumothorax, pleural effusion, infiltrate, congestion, atelectasis, cardiomegaly, mass and foreign object. For statistical analysis of the RTAT changes, we used Welch's t-test. The average RTAT for all critical findings was significantly reduced by both Prio simulations compared to the FIFO simulation (e.g. pneumothorax: 32.1 min vs. 69.7 min).
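The urgency-based prioritization with a MAXwait escalation (the Prio2 rule) can be sketched as a sorted worklist; the urgency table and 60-minute threshold below are illustrative assumptions, not the study's exact parameters:

```python
# Hypothetical urgency ranks (0 = most urgent), mirroring the ranking above.
URGENCY = {"pneumothorax": 0, "pleural_effusion": 1, "infiltrate": 2,
           "congestion": 3, "atelectasis": 4, "cardiomegaly": 5,
           "mass": 6, "foreign_object": 7}
MAXWAIT = 60.0  # minutes before an exam is escalated (the Prio2 rule)

def pop_next(worklist, now):
    """Return the next exam to read: urgency first, FIFO within a class;
    any exam waiting longer than MAXWAIT is escalated to top urgency."""
    def key(exam):
        arrival, finding = exam
        urgency = 0 if now - arrival > MAXWAIT else URGENCY[finding]
        return (urgency, arrival)
    worklist.sort(key=key)
    return worklist.pop(0)

wl = [(0.0, "mass"), (10.0, "pneumothorax"), (-70.0, "cardiomegaly")]
first = pop_next(wl, now=20.0)   # cardiomegaly escalated: waited 90 min
```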
  • Unsupervised Merge-Residual Learning for Time-of-Flight MRI

    00:13:19
    Time-of-flight MR angiography (TOF MRA) is a predominant imaging method for visualizing flow within the vessels, especially for neurovascular imaging. To measure flow precisely, it is hugely beneficial if MR scans can be accelerated, whereas the acquisition of MR scans is inherently very slow. As compensation, a myriad of deep-learning based reconstruction methods to reconstruct MRI from under-sampled k-space have been proposed, most of which aim for supervised learning. In this work, we propose a novel unsupervised method for TOF MRA acceleration for cases where we have limited access to paired training data. By taking advantage of both residual and non-residual connections of neural network architectures, we also propose a merge-residual connection, which is highly efficient for de-aliasing the pattern introduced by aggressive down-sampling.
  • Medical Image Synthesis with Improved Cycle-GAN: CT from CECT

    00:11:55
    Contrast-enhanced CT and unenhanced CT images enable radiologists to remove certain organs such as bone, which is helpful for disease diagnosis. However, these two images obtained at different times are not aligned due to patient movement. To address this issue, we propose a medical image synthesis method that can be applied to obtain unenhanced CT images from contrast-enhanced CT images.
  • Novel Application of Nonlinear Apodization for Medical Imaging

    00:06:57
    Presented here is a nonlinear apodization (NLA) method for processing magnetic resonance (MR) and ultrasound (US) images, which has been adapted from its original use in processing radar imagery. This technique reduces Gibbs artifacts (ringing) while preserving the boundary edges and the mainlobe width of the impulse response. This is done by selecting, pixel-by-pixel, the specific signal-domain windowing function (cosine-on-pedestal) that optimizes each point throughout the image. The windows are chosen from an infinite but bounded set, determined by weighting coefficients for the cosine-on-pedestal equation and the values of the pixels adjacent to the point of interest. By using this method, total sidelobe suppression is achievable without degrading the resolution of the mainlobe. In radar applications, this nonlinear apodization technique has been shown to require fewer operations per pixel than other traditional apodization techniques. The preliminary results from applications to MR and US data are presented here.
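The per-pixel cosine-on-pedestal selection is, in the radar literature, spatially variant apodization (SVA); for a real-valued, Nyquist-sampled signal it reduces to clipping an optimal pedestal weight per sample. A minimal 1-D sketch under that assumption (not the paper's exact 2-D formulation):

```python
import numpy as np

def sva(y):
    """Spatially variant apodization: per-sample cosine-on-pedestal
    weighting, choosing the pedestal that minimises each output sample."""
    out = y.copy()
    for m in range(1, len(y) - 1):
        s = y[m - 1] + y[m + 1]
        # unconstrained minimiser of |y[m] + a*s|, clipped to valid windows
        a = 0.0 if s == 0 else float(np.clip(-y[m] / s, 0.0, 0.5))
        out[m] = y[m] + a * s
    return out

# Nyquist-sampled sinc response: mainlobe plus alternating-sign sidelobes
n = np.arange(-20, 20)
y = np.sinc(n + 0.5)
y_sva = sva(y)
```

On this toy response the sidelobes cancel exactly while the mainlobe samples are left untouched, illustrating sidelobe suppression without mainlobe broadening.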
  • Learning a Loss Function for Segmentation: A Feasibility Study

    00:13:17
    When training neural networks for segmentation, the Dice loss is typically used. Alternative loss functions could help the networks achieve results with higher user acceptance and lower correction effort, but they cannot be used directly if they are not differentiable. As a solution, we propose to train a regression network to approximate the loss function and combine it with a U-Net to compute the loss during segmentation training. As an example, we introduce the contour Dice coefficient (CDC) that estimates the fraction of contour length that needs correction. Applied to CT bladder segmentation, we show that a weighted combination of Dice and CDC loss improves segmentations compared to using only Dice loss, with regard to both CDC and other metrics.
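The idea of mixing Dice loss with a learned surrogate loss can be sketched as below; `fake_cdc` is a hypothetical stand-in for the trained regression network that approximates the contour Dice coefficient, and the weighting is illustrative:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Soft Dice coefficient between a probability map and a binary mask."""
    inter = (pred * gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def combined_loss(pred, gt, learned_loss, alpha=0.5):
    """Weighted sum of Dice loss and a learned surrogate loss term."""
    return alpha * (1.0 - dice(pred, gt)) + (1.0 - alpha) * learned_loss(pred, gt)

# Hypothetical stand-in for the trained regression network (CDC approximator):
fake_cdc = lambda p, g: float(np.abs(p - g).mean())

gt = np.zeros((8, 8))
gt[2:6, 2:6] = 1.0                                # toy bladder mask
perfect = combined_loss(gt.copy(), gt, fake_cdc)  # perfect prediction
worse = combined_loss(np.full((8, 8), 0.5), gt, fake_cdc)
```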
  • Automated Left Atrial Segmentation from Magnetic Resonance Image Sequences Using Deep Convolutional Neural Network with Autoencoder

    00:14:22
    This study presents a novel automated algorithm to segment the left atrium (LA) from 2, 3 and 4-chamber long-axis cardiac cine magnetic resonance image (MRI) sequences using deep convolutional neural network (CNN). The objective of the segmentation process is to delineate the boundary between myocardium and endocardium and exclude the mitral valve so that the results could be used for generating clinical measurements such as strain and strain rate. As such, the segmentation needs to be performed using open contours, a more challenging problem than the pixel-wise semantic segmentation performed by existing machine learning-based approaches such as U-net. The proposed neural net is built based on pre-trained CNN Inception V4 architecture, and it predicts a compressed vector by applying a multi-layer autoencoder, which is then back-projected into the segmentation contour of the LA to perform the delineation using open contours. Quantitative evaluations were performed to compare the performances of the proposed method and the current state-of-the-art U-net method. Both methods were trained using 6195 images acquired from 80 patients and evaluated using 1515 images acquired from 20 patients where the datasets were obtained retrospectively, and ground truth manual segmentation was provided by an expert radiologist. The proposed method yielded an average Dice score of 93.1 % and Hausdorff distance of 4.2 mm, whereas the U-net yielded 91.6 % and 11.9 mm for Dice score and Hausdorff distance metrics, respectively. The quantitative evaluations demonstrated that the proposed method performed significantly better than U-net in terms of Hausdorff distance in addition to providing open contour-based segmentation for the LA.
  • CNN in CT Image Segmentation: Beyond Loss Function for Exploiting Ground Truth Images

    00:11:43
    Exploiting more information from ground truth (GT) images is now a new research direction for further improving CNNs' performance in CT image segmentation. Previous methods focus on devising the loss function for fulfilling such a purpose. However, it is rather difficult to devise a general and optimization-friendly loss function. We here present a novel and practical method that exploits GT images beyond the loss function. Our insight is that the feature maps of two CNNs trained respectively on GT and CT images should be similar on some metric space, because both are used to describe the same objects for the same purpose. We hence exploit GT images by enforcing such two CNNs' feature maps to be consistent. We assess the proposed method on two data sets and compare its performance to several competitive methods. Extensive experimental results show that the proposed method is effective, outperforming all the compared methods.
  • Topology Highlights Neural Deficits of Post-Stroke Aphasia Patients

    00:12:01
    Statistical inference of topological features decoded by persistent homology, a topological data analysis (TDA) algorithm, has been found to reveal patterns in electroencephalographic (EEG) signals that are not captured by standard temporal and spectral analysis. However, a potential challenge for applying topological inference to large-scale EEG data is the ambiguity of performing statistical inference and the computational bottleneck. To address this problem, we advance a unified permutation-based inference framework for testing statistical difference in the topological feature persistence landscape (PL) of multi-trial EEG signals. In this study, we apply the framework to compare the PLs in EEG signals recorded in participants with aphasia vs. a matched control group during altered auditory feedback tasks.
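A permutation-based two-sample test of the kind described can be sketched on scalar per-trial topological summaries; the Gaussian toy data below stand in for integrated persistence-landscape values and are not from the study:

```python
import numpy as np

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test on the absolute mean difference of a
    scalar per-trial topological summary (e.g. integrated landscape)."""
    rng = np.random.default_rng(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # random group relabelling
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)             # permutation p-value

rng = np.random.default_rng(3)
aphasia = rng.normal(1.0, 0.5, 30)    # toy per-trial summaries, group 1
control = rng.normal(0.2, 0.5, 30)    # toy per-trial summaries, group 2
p = permutation_test(aphasia, control)
```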
  • Generating Controllable Ultrasound Images of the Fetal Head

    00:12:27
    Synthesis of anatomically realistic ultrasound images could be potentially valuable in sonographer training and to provide training images for algorithms, but is a challenging technical problem. Generating examples where different image attributes can be controlled may also be useful for tasks such as semi-supervised classification and regression to augment costly human annotation. In this paper, we propose using an information maximizing generative adversarial network with a least-squares loss function to generate new examples of fetal brain ultrasound images from clinically acquired healthy subject twenty-week anatomy scans. The unsupervised network succeeds in disentangling natural clinical variations in anatomical visibility and image acquisition parameters, which allows for user-control in image generation. To evaluate our method, we also introduce an additional synthetic fetal ultrasound specific image quality metric called the Frechet SonoNet Distance (FSD) to quantitatively evaluate synthesis quality. To the best of our knowledge, this is the first work that generates ultrasound images with a generator network trained on clinical acquisitions where governing parameters can be controlled in a visually interpretable manner.
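Distances of the Fréchet/FID family, of which the proposed FSD is an instance (with SonoNet embeddings), compare Gaussians fitted to embedding features. Below is a generic sketch using random features in place of network embeddings:

```python
import numpy as np

def frechet_distance(feat_a, feat_b):
    """Frechet distance between Gaussians fitted to two feature sets
    (the FID form, which the paper's FSD adapts to SonoNet features)."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    # Tr((Sa Sb)^(1/2)) via the (real, non-negative) spectrum of Sa @ Sb
    eig = np.linalg.eigvals(cov_a @ cov_b).real
    tr_covmean = np.sqrt(np.clip(eig, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_covmean)

rng = np.random.default_rng(4)
real = rng.normal(0.0, 1.0, (500, 8))      # toy "real-image" embeddings
same = rng.normal(0.0, 1.0, (500, 8))      # synthetic, same distribution
shifted = rng.normal(2.0, 1.0, (500, 8))   # synthetic, distribution shift
d_same, d_shift = frechet_distance(real, same), frechet_distance(real, shifted)
```

A lower distance indicates generated features closer in distribution to the real ones.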
  • Complementary Network with Adaptive Receptive Fields for Melanoma Segmentation

    00:13:26
    Automatic melanoma segmentation in dermoscopic images is essential in the computer-aided diagnosis of skin cancer. Existing methods may suffer from hole and shrink problems, which limit segmentation performance. To tackle these issues, we propose a novel complementary network with adaptive receptive field learning. Instead of regarding the segmentation task independently, we introduce a foreground network to detect melanoma lesions and a background network to mask non-melanoma regions. Moreover, we propose an adaptive atrous convolution (AAC) and a knowledge aggregation module (KAM) to fill holes and alleviate the shrink problem. AAC allows us to explicitly control the receptive field at multiple scales. KAM convolves shallow feature maps by dilated convolutions with adaptive receptive fields, which are adjusted according to deep feature maps. In addition, a novel mutual loss is proposed to utilize the dependency between the foreground and background networks, thereby enabling reciprocal influence between these two networks. Consequently, this mutual training strategy enables semi-supervised learning and improves boundary sensitivity. Trained on the International Skin Imaging Collaboration (ISIC) 2018 skin lesion segmentation dataset, our method achieves a Dice coefficient of 86.4% and shows better performance compared with state-of-the-art melanoma segmentation methods.
  • Probabilistic Inference for Camera Calibration in Light Microscopy under Circular Motion

    00:15:32
    Robust and accurate camera calibration is essential for 3D reconstruction in light microscopy under circular motion. Conventional methods require either accurate key point matching or precise segmentation of the axial-view images. Both remain challenging because specimens often exhibit transparency/translucency in a light microscope. To address those issues, we propose a probabilistic inference based method for the camera calibration that does not require sophisticated image pre-processing. Based on 3D projective geometry, our method assigns a probability on each of a range of voxels that cover the whole object. The probability indicates the likelihood of a voxel belonging to the object to be reconstructed. Our method maximizes a joint probability that distinguishes the object from the background. Experimental results show that the proposed method can accurately recover camera configurations in both light microscopy and natural scene imaging. Furthermore, the method can be used to produce high-fidelity 3D reconstructions and accurate 3D measurements.
  • Computer Aided Diagnosis of Clinically Significant Prostate Cancer in Low-Risk Patients on Multi-Parametric MR Images Using Deep Learning

    00:13:46
    The purpose of this study was to develop a quantitative method for the detection and segmentation of clinically significant (ISUP grade ≥ 2) prostate cancer (PCa) in low-risk patients. A consecutive cohort of 356 patients (active surveillance) was selected and divided into two groups: 1) MRI and targeted-biopsy positive PCa, 2) MRI and standard-biopsy negative PCa. A 3D convolutional neural network was trained in three-fold cross validation with the MRI and targeted-biopsy positive patients' data, using two mp-MRI sequences (T2-weighted, DWI-b800) and the ADC map as input. After training, the model was tested on separate positive and negative patients to evaluate the performance. The model achieved an average area under the receiver operating characteristic curve (AUC) of 0.78 (sensitivity = 85%, specificity = 72%). The diagnostic performance of the proposed method in segmenting significant PCa and confirming non-significant PCa in low-risk patients is characterized by a good AUC and negative predictive value.
  • Using Transfer Learning and Class Activation Maps Supporting Detection and Localization of Femoral Fractures on Anteroposterior Radiographs

    00:06:02
    Acute Proximal Femoral Fractures are a growing health concern among the aging population. These fractures are often associated with significant morbidity and mortality as well as reduced quality of life. Furthermore, with the increasing life expectancy owing to advances in healthcare, the number of proximal femoral fractures may increase by a factor of 2 to 3, since the majority of fractures occur in patients over the age of 65. In this paper, we show that by using transfer learning and leveraging pre-trained models, we can achieve very high accuracy in detecting fractures and that they can be localized utilizing class activation maps.
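Class activation maps have a simple closed form: weight the final convolutional feature maps by the fully connected weights of the target class. A minimal sketch with random stand-in tensors (not a trained fracture model):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, cls):
    """CAM: weight each final-conv feature map by the FC weight linking
    its pooled activation to the target class, then normalise to [0, 1]."""
    cam = np.tensordot(fc_weights[cls], feature_maps, axes=1)  # -> (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)

rng = np.random.default_rng(5)
feats = rng.random((16, 7, 7))     # (channels, H, W) from the last conv layer
weights = rng.random((2, 16))      # FC weights for 2 classes (fracture / no fracture)
cam = class_activation_map(feats, weights, cls=0)
```

Upsampling `cam` to the radiograph size and thresholding yields the fracture localization overlay.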
  • Compressed Sensing for Data Reduction in Synthetic Aperture Ultrasound Imaging: A Feasibility Study

    00:14:13
    Compressed Sensing (CS) has been applied by a few researchers to improve the frame rate of synthetic aperture (SA) ultrasound imaging. However, there appear to be no reports on reducing the number of receive elements by exploiting the CS approach. In our previous work, we proposed a strategic undersampling scheme based on a Gaussian distribution for focused ultrasound imaging. In this work, we propose and evaluate three sampling schemes for SA to acquire RF data from a reduced number of receive elements. The effect of the sampling schemes on CS recovery was studied using simulation and experimental data. In spite of using only 50% of the receive elements, it was found that the ultrasound images using the Gaussian sampling scheme had comparable resolution and contrast with respect to the reference image obtained using all the receive elements. Thus, the findings suggest a possibility of reducing the receive channel count of an SA ultrasound system without practically sacrificing image quality.
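A Gaussian-weighted choice of receive elements, in the spirit of the sampling scheme described, can be sketched as below; the aperture size, the σ, and the 50% keep fraction are illustrative values, not the paper's configuration:

```python
import numpy as np

def gaussian_receive_mask(n_elements=128, keep_fraction=0.5, seed=0):
    """Pick receive elements with probability weighted by a Gaussian
    centred on the aperture, so the central elements stay densely sampled."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n_elements)
    centre, sigma = (n_elements - 1) / 2.0, n_elements / 6.0
    w = np.exp(-0.5 * ((idx - centre) / sigma) ** 2)
    keep = rng.choice(idx, size=int(n_elements * keep_fraction),
                      replace=False, p=w / w.sum())
    mask = np.zeros(n_elements, dtype=bool)
    mask[keep] = True
    return mask

mask = gaussian_receive_mask()   # True = element is read out
```

RF channels where `mask` is False would simply not be digitised, and the image is then recovered with a CS solver.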
  • Residual Simplified Reference Tissue Model with Covariance Estimation

    00:12:13
    The simplified reference tissue model (SRTM) can robustly estimate binding potential (BP) without a measured arterial blood input function. Although a voxel-wise estimation of BP, the so-called parametric image, is more useful than the region-of-interest (ROI) based estimation of BP, it is challenging to calculate an accurate parametric image due to the lower signal-to-noise ratio (SNR) of dynamic PET images. To achieve reliable parametric imaging, temporal images are commonly smoothed prior to the kinetic parameter estimation, which degrades the resolution significantly. To address this problem, we propose a residual simplified reference tissue model (ResSRTM) using an approximate covariance matrix to robustly compute the parametric image with a high resolution. We define the residual dynamic data as the full data except for each frame, which has a higher SNR and can achieve accurate estimation of the parametric image. Since dynamic images have correlations across temporal frames, we propose an approximate covariance matrix using neighboring voxels, by assuming that the noise statistics of neighbors are similar. In phantom simulation and real experiments, we demonstrate that the proposed method outperforms the conventional SRTM method.
  • Spatially Informed CNN for Automated Cone Detection in Adaptive Optics Retinal Images

    00:14:23
    Adaptive optics (AO) scanning laser ophthalmoscopy offers cellular-level in-vivo imaging of the human cone mosaic. Existing analysis of cone photoreceptor density in AO images requires accurate identification of cone cells, which is a time- and labor-intensive task. Recently, several methods have been introduced for automated cone detection in AO retinal images using convolutional neural networks (CNN). However, these approaches have been limited in their ability to correctly identify cones when applied to AO images originating from different locations in the retina, due to changes in the reflectance and arrangement of the cone mosaics with eccentricity. To address these limitations, we present an adapted CNN architecture that incorporates spatial information directly into the network. Our approach, inspired by conditional generative adversarial networks, embeds the retinal location from which each AO image was acquired as part of the training. Using manual cone identification as ground truth, our evaluation shows general improvement over existing approaches when detecting cones in the middle and periphery regions of the retina, but decreased performance near the fovea.
