IEEE ISBI 2020 Virtual Conference April 2020

Showing 1 - 50 of 459
  • IEEE Member: US $11.00
  • Society Member: US $0.00
  • IEEE Student Member: US $11.00
  • Non-IEEE Member: US $15.00
Purchase
  • Volumetric Registration of Brain Cortical Regions by Automatic Landmark Matching and Large Deformation Diffeomorphisms

    00:15:16
    A well-known challenge in fMRI data analysis is the excessive variability in the MR signal and the high level of random and structured noise. A common solution to deal with such high variability/noise is to recruit a large number of subjects to enhance the statistical power to detect any scientific effect. To achieve this, the morphologies of the sample brains must be warped into a standard space. However, human cerebral cortices are highly convoluted, with large inter-subject morphological variability that renders the registration task challenging. Currently, state-of-the-art non-linear registration methods perform poorly on cortical regions of the brain, particularly in aging and clinical populations. To alleviate this issue, we propose a landmark-guided, region-based image registration method. We evaluated our method by warping the brain cortical regions of both young and older participants into the standard space. Compared with the state-of-the-art method, we showed that our method significantly (t = 117; p = 0) improved the overlap of the cortical regions (Dice increased by 57%). We conclude that our method can significantly improve the registration accuracy of brain cortical regions.
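The Dice overlap used for evaluation above can be sketched in a few lines. This is a generic illustration with toy mask inputs of our own choosing, not the authors' evaluation code:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A∩B| / (|A|+|B|) between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 1D "cortical region" masks: 2 overlapping voxels out of 3 + 3
reference = np.array([0, 1, 1, 1, 0])
warped = np.array([0, 0, 1, 1, 1])
score = dice_coefficient(reference, warped)  # 2*2/(3+3) ≈ 0.667
```

A 57% relative Dice increase, as reported, would mean this score rising by that factor after landmark-guided registration.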
  • An Augmentation-Free Rotation Invariant Classification Scheme on Point-Cloud and Its Application to Neuroimaging

    00:12:27
    Recent years have witnessed the emergence and increasing popularity of 3D medical imaging techniques with the development of 3D sensors and technology. However, achieving geometric invariance in the processing of 3D medical images is computationally expensive but nonetheless essential due to the presence of possible errors caused by rigid registration techniques. An alternative way to analyze medical imaging is by understanding the 3D shapes represented in terms of point-cloud. Though in the medical imaging community 3D point-cloud processing is not a "go-to" choice, it is a canonical way to preserve rotation invariance. Unfortunately, due to the presence of discrete topology, one cannot use the standard convolution operator on point-cloud. To the best of our knowledge, the existing ways to do "convolution" cannot preserve rotation invariance without explicit data augmentation. Therefore, we propose a rotation invariant convolution operator by inducing topology from the hypersphere. Experimental validation has been performed on the publicly available OASIS dataset in terms of classification accuracy between subjects with (without) dementia, demonstrating the usefulness of our proposed method in terms of (a) model complexity, (b) classification accuracy, and, last but most important, (c) invariance to rotations.
  • SSN: A Stair-Shape Network for Real-Time Polyp Segmentation in Colonoscopy Images

    00:09:19
    Colorectal cancer is one of the most life-threatening malignancies, commonly arising from intestinal polyps. Currently, clinical colonoscopy is an effective way for early detection of polyps and is often conducted in a real-time manner. However, colonoscopy analysis is time-consuming and suffers from a high miss rate. In this paper, we develop a novel stair-shape network (SSN) for real-time polyp segmentation in colonoscopy images (not merely simple detection). Our new model is much faster than U-Net, yet yields better performance for polyp segmentation. The model first utilizes four blocks to extract spatial features at the encoder stage. Subsequent skip connections with a Dual Attention Module for each block and a final Multi-scale Fusion Module are used to fully fuse features of different scales. Based on abundant data augmentation and strong supervision from auxiliary losses, our model can learn much more information for polyp segmentation. Our new polyp segmentation method attains high performance on several datasets (CVC-ColonDB, CVC-ClinicDB, and EndoScene), outperforming state-of-the-art methods. Our network can also be applied to other imaging tasks for real-time segmentation in clinical practice.
  • Dynamics of Brain Activity Captured by Graph Signal Processing of Neuroimaging Data to Predict Human Behaviour

    00:11:53
    Joint structural and functional modelling of the brain based on multimodal imaging increasingly shows potential in elucidating the underpinnings of human cognition. In the graph signal processing (GSP) approach to neuroimaging, brain activity patterns are viewed as graph signals expressed on the structural brain graph built from anatomical connectivity. The energy fraction between functional signals that are in line with structure (termed alignment) and those that are not (liberality) has been linked to behaviour. Here, we examine whether there is also information of interest at the level of temporal fluctuations of alignment and liberality. We consider the prediction of an array of behavioural scores and show that, in many cases, a dynamic characterisation yields additional significant insight.
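The alignment/liberality energy split described in this abstract can be sketched with a graph Fourier transform. The Laplacian eigenbasis, the low/high-frequency cut, and the toy path graph below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def alignment_liberality(adjacency, signal, n_low):
    """Split a graph signal's energy into an 'aligned' fraction (projection on
    the n_low lowest-frequency Laplacian eigenvectors) and a 'liberal' fraction
    (the remaining, high-frequency eigenvectors)."""
    adjacency = np.asarray(adjacency, dtype=float)
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    # eigh returns eigenvalues in ascending order = graph frequencies low->high
    _, eigvecs = np.linalg.eigh(laplacian)
    coeffs = eigvecs.T @ signal          # graph Fourier transform of the signal
    energy = coeffs ** 2
    total = energy.sum()
    return energy[:n_low].sum() / total, energy[n_low:].sum() / total

# Toy 4-node path graph with a constant (maximally smooth) signal
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
aligned, liberal = alignment_liberality(A, np.ones(4), n_low=1)
```

A constant signal lies in the Laplacian's nullspace (the lowest graph frequency), so its energy is fully "aligned" and the liberal fraction is zero.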
  • Self-Supervised Representation Learning for Ultrasound Video

    00:12:17
    Recent advances in deep learning have achieved promising performance for medical image analysis, while in most cases ground-truth annotations from human experts are necessary to train the deep model. In practice, such annotations are expensive to collect and can be scarce for medical imaging applications. Therefore, there is significant interest in learning representations from unlabelled raw data. In this paper, we propose a self-supervised learning approach to learn meaningful and transferable representations from medical imaging video without any type of human annotation. We assume that in order to learn such a representation, the model should identify anatomical structures from the unlabelled data. Therefore we force the model to address anatomy-aware tasks with free supervision from the data itself. Specifically, the model is designed to correct the order of a reshuffled video clip and at the same time predict the geometric transformation applied to the video clip. Experiments on fetal ultrasound video show that the proposed approach can effectively learn meaningful and strong representations, which transfer well to downstream tasks like standard plane detection and saliency prediction.
  • Non-Rigid 2D-3D Registration Using Convolutional Autoencoders

    00:13:31
    In this paper, we propose a novel neural network-based framework for the non-rigid 2D-3D registration of the lateral cephalogram and volumetric cone-beam CT (CBCT) images. The task is formulated as an embedding problem, where we utilize a statistical volumetric representation and embed the X-ray image into a code vector describing the non-rigid volumetric deformations. In particular, we build a deep ResNet-based encoder to infer the code vector from the input X-ray image. We design a decoder to generate digitally reconstructed radiographs (DRRs) from the non-rigidly deformed volumetric image determined by the code vector. The parameters of the encoder are optimized by minimizing the difference between synthetic DRRs and input X-ray images in an unsupervised way. Without geometric constraints from multi-view X-ray images, we exploit structural constraints of the multi-scale feature pyramid in similarity analysis. The training process is unsupervised and does not require paired 2D X-ray images and 3D CBCT images. The system allows constructing a volumetric image from a single X-ray image and realizes 2D-3D registration between lateral cephalograms and CBCT images.
  • Photoshopping Colonoscopy Video Frames

    00:14:38
    The automatic detection of frames containing polyps from a colonoscopy video sequence is an important first step for a fully automated colonoscopy analysis tool. Typically, such a detection system is built using a large annotated data set of frames with and without polyps, which is expensive to obtain. In this paper, we introduce a new system that detects frames containing polyps as anomalies from a distribution of frames from exams that do not contain any polyps. The system is trained using a one-class training set consisting of colonoscopy frames without polyps -- such a training set is considerably less expensive to obtain than the two-class data set mentioned above. During inference, the system is only able to reconstruct frames without polyps, so when it tries to reconstruct a frame with a polyp, it automatically removes (i.e., "photoshops") the polyp from the frame -- the difference between the input and reconstructed frames is used to detect frames with polyps. We name our proposed model the anomaly detection generative adversarial network (ADGAN), comprising a dual GAN with two generators and two discriminators. To test our framework, we use a new colonoscopy data set with 14,317 images, split into a training set of 13,350 images without polyps and a testing set of 290 abnormal images containing polyps and 677 normal images without polyps. We show that our proposed approach achieves the state-of-the-art result on this data set compared with recently proposed anomaly detection systems.
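The detection rule this system relies on (score a frame by how badly a normal-only model reconstructs it) can be sketched independently of the GAN itself. The flat "reconstructor" below is a stand-in for the trained generator, and all names are our own:

```python
import numpy as np

def anomaly_score(frame, reconstruct):
    """Score a frame by the mean absolute difference between the input and
    its reconstruction from a model trained only on normal frames."""
    return float(np.mean(np.abs(frame - reconstruct(frame))))

def detect_polyp_frame(frame, reconstruct, threshold):
    """Flag a frame as containing a polyp when its score exceeds threshold."""
    return anomaly_score(frame, reconstruct) > threshold

# Toy stand-in: a "model" that can only output a flat (polyp-free) frame
flat_reconstructor = lambda f: np.zeros_like(f)
normal = np.zeros((4, 4))
abnormal = np.zeros((4, 4))
abnormal[1:3, 1:3] = 1.0  # bright "polyp" patch the model cannot reproduce
assert anomaly_score(normal, flat_reconstructor) < anomaly_score(abnormal, flat_reconstructor)
```

In the actual system the reconstructor is the dual-GAN generator trained on polyp-free frames, so abnormal structure shows up exactly where the input and reconstruction disagree.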
  • A Filtered Delay Weight Multiply and Sum (F-DwMAS) Beamforming for Ultrasound Imaging: Preliminary Results

    00:13:13
    In this paper, the development of a modified beamforming method, named Filtered Delay Weight Multiply and Sum (F-DwMAS) beamforming, is reported. The developed F-DwMAS method was investigated on a minimum-redundancy synthetic aperture technique, called 2-Receive Synthetic Aperture Focussing Technique (2R-SAFT), which uses one element in transmit and two consecutive elements in receive to achieve high-quality imaging in low-complexity ultrasound systems. Notably, in F-DwMAS, an additional aperture window function is designed and incorporated into the recently introduced F-DMAS method. F-DwMAS, F-DMAS and Delay and Sum (DAS) were compared in terms of Lateral Resolution (LR), Axial Resolution (AR), Contrast Ratio (CR) and contrast-to-noise ratio (CNR) in a simulation study. Results show that the proposed F-DwMAS improved LR by 22.86% and 25.19%, AR by 5.18% and 11.06%, and CR by 152% and 112.8% compared to F-DMAS and DAS, respectively. However, the CNR of F-DwMAS was 12.3% lower than that of DAS, but 103.09% higher than that of F-DMAS. Hence, it can be concluded that F-DwMAS improves image quality compared to DAS and F-DMAS.
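The core delay-multiply-and-sum operation that F-DMAS and F-DwMAS build on can be sketched as follows. The optional weighting argument stands in for the added aperture window of F-DwMAS; the per-channel delays and the band-pass filtering (the "F-") are omitted, and all names are illustrative:

```python
import numpy as np

def dwmas(signals, weights=None):
    """Weighted delay-multiply-and-sum over pre-delayed channel signals.
    Each pairwise product is signed-square-rooted (to keep the original
    signal dimensionality), then summed over all channel pairs i < j."""
    signals = np.asarray(signals, dtype=float)
    if weights is not None:
        signals = signals * np.asarray(weights, dtype=float)[:, None]
    n = signals.shape[0]
    out = np.zeros(signals.shape[1])
    for i in range(n):
        for j in range(i + 1, n):
            prod = signals[i] * signals[j]
            out += np.sign(prod) * np.sqrt(np.abs(prod))
    return out

# Two identical unit channels: the single pair contributes 1 at every sample
beamformed = dwmas(np.ones((2, 8)))
```

Coherent (similar) channel signals reinforce each other in the pairwise products, which is what sharpens contrast relative to plain delay-and-sum.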
  • A Novel Framework for Grading Autism Severity Using Task-Based fMRI

    00:12:39
    Autism is a developmental disorder associated with difficulties in communication and social interaction. Currently, the gold standard in autism diagnosis is the autism diagnostic observation schedule (ADOS) interview, which assigns a score indicating the level of severity for each individual. However, researchers are currently investigating objective technologies to diagnose autism employing brain image modalities. One such modality is task-based functional MRI, which exhibits alterations in functional activity that are believed to be important in explaining the causative factors of autism. Although autism is defined over a wide spectrum, previous diagnosis approaches only divide subjects into normal or autistic. In this paper, a novel framework for grading the severity level of autistic subjects using task-based fMRI data is presented. A speech experiment is used to obtain local features related to the functional activity of the brain. According to ADOS reports, the adopted dataset of 39 subjects is classified into three groups (13 subjects per group): mild, moderate and severe. Individual analysis with the general linear model (GLM) is used for feature extraction for each of the 246 brain areas of the Brainnetome atlas (BNT). Our classification results are obtained by a random forest classifier after recursive feature elimination (RFE), with 72% accuracy. Finally, we validate the selected features by applying higher-level group analysis to show how informative they are and to infer the significant statistical differences between groups.
  • AirwayNet-SE: A Simple-Yet-Effective Approach to Improve Airway Segmentation Using Context Scale Fusion

    00:12:00
    Accurate segmentation of airways from chest CT scans is crucial for pulmonary disease diagnosis and surgical navigation. However, the intra-class variety of airways and their intrinsic tree-like structure pose challenges to the development of automatic segmentation methods. Previous work that exploits convolutional neural networks (CNNs) does not take context scales into consideration, leading to performance degradation on peripheral bronchioles. We propose the two-step AirwayNet-SE, a Simple-yet-Effective CNN-based approach to improve airway segmentation. The first step is to adopt connectivity modeling to transform the binary segmentation task into a 26-connectivity prediction task, facilitating the model's comprehension of airway anatomy. The second step is to predict connectivity with a two-stage CNN-based approach. In the first stage, a Deep-yet-Narrow Network (DNN) and a Shallow-yet-Wide Network (SWN) are utilized to learn features with large-scale and small-scale context knowledge, respectively. These features are fused in the second stage to predict each voxel's probability of being airway and its connectivity relationships with its neighbors. We trained our model on 50 CT scans from public datasets and tested it on another 20 scans. Compared with state-of-the-art airway segmentation methods, the robustness and superiority of AirwayNet-SE confirm the effectiveness of large-scale and small-scale context fusion. In addition, we release our manual airway annotations of 60 CT scans from public datasets for supervised airway segmentation studies.
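The first step (recasting binary segmentation as 26-connectivity prediction) amounts to deriving, for each voxel, whether it and each of its 26 neighbours are both airway. A minimal sketch of that label transform, with function and variable names of our own choosing:

```python
import numpy as np

def connectivity_labels(mask):
    """Turn a binary 3D mask into a 26-channel label map: channel k is 1 where
    the voxel and its k-th 26-neighbour are both foreground."""
    mask = np.asarray(mask)
    offsets = [(dz, dy, dx)
               for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dz, dy, dx) != (0, 0, 0)]          # the 26 neighbour shifts
    padded = np.pad(mask, 1)                           # zero border for edges
    out = np.zeros((26,) + mask.shape, dtype=mask.dtype)
    for k, (dz, dy, dx) in enumerate(offsets):
        # shifted[z, y, x] == mask[z+dz, y+dy, x+dx], zero outside the volume
        shifted = padded[1 + dz:1 + dz + mask.shape[0],
                         1 + dy:1 + dy + mask.shape[1],
                         1 + dx:1 + dx + mask.shape[2]]
        out[k] = mask * shifted
    return out

# Two adjacent foreground voxels: one connected pair, seen once from each side
m = np.zeros((3, 3, 3), dtype=np.int64)
m[1, 1, 0] = m[1, 1, 1] = 1
labels = connectivity_labels(m)
```

The network then predicts these 26 channels instead of a single foreground channel, and the binary mask is recoverable because a voxel is airway exactly when any of its connectivity channels fires.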
  • High-Frequency Quantitative Photoacoustic Imaging and Pixel-Level Tissue Classification

    00:07:57
    The recently proposed frequency-domain technique for photoacoustic (PA) image formation helps to differentiate between different-sized structures. Although this technique has provided encouraging preliminary results, it currently lacks a mathematical framework. H-scan ultrasound (US) imaging was introduced for characterizing acoustic scattering behavior at the pixel level. This US imaging technique relies on matching a model that describes US image formation to the mathematics of a class of Gaussian-weighted Hermite polynomial (GWHP) functions. Herein, we propose extrapolating the H-scan US image processing method to the analysis of PA signals. Radiofrequency (RF) PA data were obtained using a Vevo 3100 with LAZR-X system (Fujifilm VisualSonics). Experiments were performed using tissue-mimicking phantoms embedded with optically absorbing spherical scatterers. Overall, preliminary results demonstrate that H-scan US-based processing of PA signals can help distinguish micrometer-sized objects of varying size.
  • Deep Learning Features for Modeling Perceptual Similarity in Microcalcification Lesion Retrieval

    00:11:06
    Retrieving cases with similar image features has been found to be effective for improving the diagnostic accuracy of microcalcification (MC) lesions in mammograms. However, a major challenge in such an image-retrieval approach is how to determine whether a retrieved lesion image has diagnostically similar features to those of a query case. We investigate the feasibility of modeling perceptually similar MC lesions by using deep learning features extracted from two types of deep neural networks, of which one is a supervised-learning network developed for the task of MC detection and the other is a denoising autoencoder network. In the experiments, the deep learning features were compared against perceptual similarity scores collected from a reader study on 1,000 MC lesion image pairs. The results indicate that the deep learning features can potentially be more effective for modeling the notion of perceptual similarity of MC lesions than traditional handcrafted texture features.
  • Validating Uncertainty in Medical Image Translation

    00:14:46
    Medical images are increasingly used as input to deep neural networks to produce quantitative values that aid researchers and clinicians. However, standard deep neural networks do not provide a reliable measure of uncertainty in those quantitative values. Recent work has shown that using dropout during training and testing can provide estimates of uncertainty. In this work, we investigate using dropout to estimate epistemic and aleatoric uncertainty in a CT-to-MR image translation task. We show that both types of uncertainty are captured, as defined, providing confidence in the output uncertainty estimates.
  • Automatic Depression Detection Via Facial Expressions Using Multiple Instance Learning

    00:11:06
    Depression affects more than 300 million people around the world and is the leading cause of disability in the USA for individuals aged 15 to 44. According to the WHO report, its damage is comparable to that of the most common diseases, such as cancer, diabetes, or heart disease. However, people with depression symptoms sometimes do not receive proper treatment due to access barriers. In this paper, we propose a method that automatically detects depression using only landmarks of facial expressions, which are easy to collect with less privacy exposure. We deal with coarse-grained labels, i.e. one final label for a long time-series video clip, which is the common case in applications, through the integration of feature manipulation and multiple instance learning. The effectiveness of our method is compared to other vision-based methods, and our method even outperforms multi-modal methods that use multiple modalities.
  • Fine-Grained Multi-Instance Classification in Microscopy through Deep Attention

    00:10:32
    Fine-grained object recognition and classification in biomedical images poses a number of challenges. Images typically contain multiple instances (e.g. glands), and the recognition of salient structures is confounded by visually complex backgrounds. Due to the cost of data acquisition or the limited availability of specimens, data sets tend to be small. We propose a simple yet effective attention-based deep architecture to address these issues, especially to achieve improved background suppression and recognition of multiple instances per image. Attention maps per instance are learnt in an end-to-end fashion. Microscopic images of fungi (new data) and a publicly available Breast Cancer Histology benchmark data set are used to demonstrate the performance of the proposed approach. Our algorithm comparison suggests that the proposed approach advances the state of the art.
  • Semi-Supervised Multi-Domain Multi-Task Training for Metastatic Colon Lymph Node Diagnosis from Abdominal CT

    00:06:40
    The diagnosis of the presence of metastatic lymph nodes from abdominal computed tomography (CT) scans is an essential task performed by radiologists to guide radiation and chemotherapy treatment. State-of-the-art deep learning classifiers trained for this task usually rely on a training set containing CT volumes and their respective image-level (i.e., global) annotation. However, the lack of annotations for the localisation of the regions of interest (ROIs) containing lymph nodes can limit classification accuracy due to the small size of the relevant ROIs in this problem. The use of lymph node ROIs together with global annotations in a multi-task training process has the potential to improve classification accuracy, but the high cost involved in obtaining the ROI annotation for the same samples that have global annotations is a roadblock for this alternative. We address this limitation by introducing a new training strategy from two data sets: one containing the global annotations, and another (publicly available) containing only the lymph node ROI localisation. We term our new strategy semi-supervised multi-domain multi-task training, where the goal is to improve the diagnosis accuracy on the globally annotated data set by incorporating the ROI annotations from a different domain. Using a private data set containing global annotations and a public data set containing lymph node ROI localisation, we show that our proposed training mechanism improves the area under the ROC curve for the classification task compared to several training method baselines.
  • Towards Uncertainty Quantification for Electrode Bending Prediction in Stereotactic Neurosurgery

    00:15:46
    Implantation accuracy of electrodes during stereotactic neurosurgery is necessary to ensure safety and efficacy. However, electrodes deflect from planned trajectories. Although mechanical models and data-driven approaches have been proposed for trajectory prediction, they fail to report the uncertainty of their predictions. We propose to use Monte Carlo (MC) dropout on neural networks to quantify the uncertainty of predicted electrode local displacement. We compute image features of 23 stereoelectroencephalography cases (241 electrodes) and use them as inputs to a neural network to regress electrode local displacement. We use MC dropout with 200 stochastic passes to quantify the uncertainty of predictions. To validate our approach, we define a baseline model without dropout and compare it to a stochastic model using 10-fold cross-validation. Given a starting planned trajectory, we predicted electrode bending using inferred local displacement at the tip via simulation. We found that MC dropout performed better than the non-stochastic baseline model and provided confidence intervals along the predicted trajectory of electrodes. We believe this approach facilitates better decision making for electrode bending prediction in surgical planning.
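The MC-dropout recipe used here (keep dropout active at test time and aggregate many stochastic passes) can be sketched with a toy two-layer regressor. The layer sizes, weights, and dropout rate below are illustrative assumptions, not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, w2, drop_p, rng):
    """One stochastic pass: dropout stays ON at inference (MC dropout)."""
    h = np.maximum(x @ w1, 0.0)               # ReLU hidden layer
    keep = rng.random(h.shape) >= drop_p      # Bernoulli dropout mask
    h = h * keep / (1.0 - drop_p)             # inverted-dropout scaling
    return h @ w2

def mc_dropout_predict(x, w1, w2, drop_p=0.2, passes=200, rng=rng):
    """Mean prediction and spread (epistemic uncertainty proxy) over
    `passes` stochastic forward passes."""
    preds = np.array([forward(x, w1, w2, drop_p, rng) for _ in range(passes)])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy regression: 3 input features -> 8 hidden units -> 1 displacement output
w1 = rng.normal(size=(3, 8))
w2 = rng.normal(size=(8, 1))
mean, std = mc_dropout_predict(np.ones((1, 3)), w1, w2)
```

The per-output standard deviation is what yields confidence intervals along a predicted trajectory; with dropout disabled (`drop_p=0`) the passes are identical and the spread collapses to zero.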
  • Remove Appearance Shift for Ultrasound Image Segmentation Via Fast and Universal Style Transfer

    00:10:18
    Deep Neural Networks (DNNs) suffer from performance degradation when image appearance shift occurs, especially in ultrasound (US) image segmentation. In this paper, we propose a novel and intuitive framework to remove the appearance shift and hence improve the generalization ability of DNNs. Our work has three highlights. First, we follow the spirit of universal style transfer to remove appearance shifts, which has not been explored before for US images. Without sacrificing image structure details, it enables arbitrary style-content transfer. Second, accelerated with an Adaptive Instance Normalization block, our framework achieves the real-time speed required in clinical US scanning. Third, an efficient and effective style image selection strategy is proposed to ensure that the target-style US image and the test content US image properly match each other. Experiments on two large US datasets demonstrate that our methods are superior to state-of-the-art methods at making DNNs robust against various appearance shifts.
  • Linear Mixed Models Minimise False Positive Rate and Enhance Precision of Mass Univariate Vertex-Wise Analyses of Grey-Matter

    00:12:24
    We evaluated the statistical power, family-wise error rate (FWER) and precision of several competing methods that perform mass-univariate vertex-wise analyses of grey matter (thickness and surface area). In particular, we compared several generalised linear models (GLMs, the current state of the art) to linear mixed models (LMMs), which have proven superior in genomics. We used phenotypes simulated from real vertex-wise data and a large sample size (N=8,662), which may soon become the norm in neuroimaging. No method ensured a FWER
  • 3D Biological Cell Reconstruction with Multi-View Geometry

    00:13:26
    3D cell modeling is an important tool for visualizing cellular structures and events, and generating accurate data for further quantitative geometric morphological analyses on cellular structures. Current methods involve highly specialized and expensive setups as well as experts in microscopy and 3D reconstruction to produce time- and work-intensive insight into cellular events. We developed a new system that reconstructs the surface geometry of 3D cellular structures from 2D image sequences in a fast and automatic way. The system rotated cells in a microfluidic device, while their images were captured by a video camera. The multi-view geometry theory was introduced to microscopy imaging to model the imaging system and define the 3D reconstruction as an inverse problem. Finally, we successfully demonstrated the reconstruction of cellular structures in their natural state.
  • Analysis of Consistency in Structural and Functional Connectivity of Human Brain

    00:16:14
    Analysis of the structural and functional connectivity of the brain has become a fundamental approach in neuroscientific research. Despite several studies reporting consistent similarities as well as differences for structural and resting-state (rs) functional connectomes, a comparative investigation of connectomic consistency between the two modalities is still lacking. Nonetheless, connectomic analysis comprising both connectivity types necessitates extra attention, as the consistency of connectivity differs across modalities, possibly affecting the interpretation of results. In this study, we present a comprehensive analysis of consistency in structural and rs-functional connectomes obtained from longitudinal diffusion MRI and rs-fMRI data of a single healthy subject. We contrast the consistency of deterministic and probabilistic tracking with that of full, positive, and negative functional connectivity across various connectome generation schemes, using correlation as a measure of consistency.
  • CNN Detection of New and Enlarging Multiple Sclerosis Lesions from Longitudinal MRI Using Subtraction Images

    00:14:18
    Accurate detection and segmentation of new lesional activity in longitudinal Magnetic Resonance Images (MRIs) of patients with Multiple Sclerosis (MS) is important for monitoring disease activity, as well as for assessing treatment effects. In this work, we present the first deep learning framework to automatically detect and segment new and enlarging (NE) T2w lesions from longitudinal brain MRIs acquired from relapsing-remitting MS (RRMS) patients. The proposed framework is an adapted 3D U-Net [1] which includes as inputs the reference multi-modal MRI and T2-weighted lesion maps, as well as an attention mechanism based on the subtraction MRI (between the two timepoints), which serves to assist the network in learning to differentiate between real anatomical change and artifactual change, while constraining the search space for small lesions. Experiments on a large, proprietary, multi-center, multi-modal clinical trial dataset consisting of 1677 multi-modal scans illustrate that the network achieves high overall detection accuracy (detection AUC = .95), outperforming (1) a U-Net without an attention mechanism (detection AUC = .93), (2) a framework based on subtracting independent T2-weighted segmentations (detection AUC = .57), and (3) DeepMedic (detection AUC = .84), particularly for small lesions. In addition, the method was able to accurately classify patients as active/inactive (sensitivity of .69 and specificity of .97).
  • Back Shape Measurement and Three-Dimensional Reconstruction of Spinal Shape Using One Kinect Sensor

    00:06:11
    Spinal screening relies mainly on direct clinical diagnosis or X-ray examination (which exposes the human body to harmful radiation). In general, a lack of knowledge in this area can prevent parents from discovering adolescents' spinal deformation problems at an early age. Therefore, we propose a low-cost, easy-to-use, radiation-free and highly accurate method to quickly reconstruct the three-dimensional shape of the spine, which can be used to evaluate spinal deformation. First, the depth images collected by a Kinect sensor are transformed into three-dimensional point clouds. Then, the features of anatomic landmark points and the spinous processes (SP) line are classified and extracted. Finally, a correlation model between the SP line and the spine midline is established to reconstruct the spine. The results show that the proposed method can extract anatomic landmark points and evaluate scoliosis accurately (average RMS errors of 5 mm and 3 degrees), which is feasible and promising.
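The first step (Kinect depth image to 3D point cloud) is a standard pinhole back-projection. The intrinsics `fx`, `fy`, `cx`, `cy` below are hypothetical placeholders for the sensor calibration, not values from the paper:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to an N x 3 point cloud with the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    depth = np.asarray(depth, dtype=float)
    v, u = np.indices(depth.shape)          # pixel row (v) and column (u)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]         # drop invalid zero-depth pixels

# 2x2 toy depth image; one pixel has no reading (depth 0) and is discarded
pts = depth_to_pointcloud(np.array([[1.0, 1.0], [0.0, 2.0]]), fx=1, fy=1, cx=0, cy=0)
```

Landmark extraction and the SP-line/midline correlation model then operate on clouds like `pts`.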
  • STAN: Small Tumor-Aware Network for Breast Ultrasound Image Segmentation

    00:15:05
    Breast tumor segmentation provides accurate tumor boundaries and serves as a key step toward further cancer quantification. Although deep learning-based approaches have been proposed and achieved promising results, existing approaches have difficulty in detecting small breast tumors. The capacity to detect small tumors is particularly important for finding early-stage cancers using computer-aided diagnosis (CAD) systems. In this paper, we propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes. The new architecture integrates both rich context information and high-resolution image features. We validate the proposed approach using seven quantitative metrics on two public breast ultrasound datasets. The proposed approach outperformed state-of-the-art approaches in segmenting small breast tumors.
  • A Clinical Workflow Simulator for Intelligent Chest X-ray Worklist Prioritization

    00:06:23
    Growing radiologic workload and a shortage of medical experts worldwide often lead to delayed or even unreported examinations, which poses a risk for patient safety in case of unrecognized findings in chest radiographs (CXR). The aim of this work was to evaluate whether deep learning algorithms for intelligent worklist prioritization can optimize the radiology workflow by reducing report turnaround times (RTAT) for critical findings, compared with reporting according to the first-in-first-out principle (FIFO). Furthermore, we investigated the problem of false negative predictions in the context of worklist prioritization. We developed a simulation framework by analyzing the current workflow at a university hospital. The framework can be used to simulate clinical workdays. To assess the potential benefit of intelligent worklist prioritization, three different workflow simulations were run and RTAT were compared: FIFO (non-prioritized), and Prio1 and Prio2 (prioritized based on urgency, without/with MAXwait). For Prio2, the highest urgency is assigned after a maximum waiting time. Examination triage was performed by "ChestXCheck", a convolutional neural network classifying eight different pathological findings ranked in descending order of urgency: pneumothorax, pleural effusion, infiltrate, congestion, atelectasis, cardiomegaly, mass and foreign object. For statistical analysis of the RTAT changes, we used Welch's t-test. The average RTAT for all critical findings was significantly reduced by both Prio simulations compared to the FIFO simulation (e.g. pneumothorax: 32.1 min vs. 69.7 min; p
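The prioritization rule (urgency ranking with the Prio2 MAXwait escalation) can be sketched roughly as follows; the data layout and escalation handling are assumptions for illustration, not the authors' implementation:

```python
import heapq

def report_order(exams, max_wait=None):
    """Order exams for reporting: by urgency rank (0 = most urgent),
    FIFO within equal urgency. With max_wait set (the Prio2 'MAXwait'
    rule), an exam that has waited longer than max_wait minutes is
    escalated to the highest urgency.
    exams: list of (exam_id, arrival_min, urgency_rank, waited_min)."""
    heap = []
    for exam_id, arrival, urgency, waited in exams:
        if max_wait is not None and waited > max_wait:
            urgency = 0  # MAXwait escalation to highest urgency
        heapq.heappush(heap, (urgency, arrival, exam_id))
    return [eid for _, _, eid in (heapq.heappop(heap) for _ in range(len(heap)))]
```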
  • Unsupervised Merge-Residual Learning for Time-of-Flight MRI

    00:13:19
    Time-of-flight MR angiography (TOF MRA) is a predominant imaging method for visualizing flow within vessels, especially in neurovascular imaging. To measure flow precisely, it is hugely beneficial if MR scans can be accelerated, but the acquisition of MR scans is inherently slow. To compensate, a myriad of deep-learning-based methods have been proposed to reconstruct MRI from under-sampled k-space, most of which aim at supervised learning. In this work, we propose a novel unsupervised method for TOF MRA acceleration for cases where we have limited access to paired training data. By taking advantage of both residual and non-residual connections in the neural network architecture, we also propose a merge-residual connection which is highly efficient for de-aliasing the pattern introduced by aggressive down-sampling.
  • Medical Image Synthesis with Improved Cycle-GAN: CT from CECT

    00:11:55
    Contrast-enhanced CT and unenhanced CT images enable radiologists to remove certain organs such as bone, which is helpful for disease diagnosis. However, these two images obtained at different times are not aligned due to patient movement. To address this issue, we propose a medical image synthesis method that can be applied to obtain unenhanced CT images from contrast-enhanced CT images.
  • Novel Application of Nonlinear Apodization for Medical Imaging

    00:06:57
    Presented here is a nonlinear apodization (NLA) method for processing magnetic resonance (MR) and ultrasound (US) images, modified from its original use in processing radar imagery. The technique reduces Gibbs artifacts (ringing) while preserving the boundary edges and the mainlobe width of the impulse response. This is done by selecting, pixel by pixel, the specific signal-domain windowing function (cosine-on-pedestal) that optimizes each point throughout the image. The windows are chosen from an infinite but bounded set, determined by the weighting coefficients of the cosine-on-pedestal equation and the values of the pixels adjacent to the point of interest. With this method, total sidelobe suppression is achievable without degrading the resolution of the mainlobe. In radar applications, this nonlinear apodization technique has been shown to require fewer operations per pixel than other traditional apodization techniques. Preliminary results from applications to MR and US data are presented here.
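A one-pixel sketch of how the per-pixel cosine-on-pedestal window selection can work, following the usual spatially-variant-apodization formulation (real-valued samples assumed for simplicity; the paper's exact optimization may differ):

```python
def sva_sample(prev, cur, nxt):
    """One-pixel nonlinear apodization sketch: the apodized output is
    y = cur + a * (prev + nxt), where a parameterizes the
    cosine-on-pedestal window. Per pixel, pick the a in [0, 0.5] that
    drives |y| toward zero (sidelobe suppression); a = 0 leaves the
    sample, and hence the mainlobe, untouched. Complex data would be
    handled per I/Q channel."""
    s = prev + nxt
    if s == 0:
        return cur
    a = -cur / s               # unconstrained minimizer of |cur + a*s|
    a = min(max(a, 0.0), 0.5)  # clamp to the valid window family
    return cur + a * s
```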
  • Learning a Loss Function for Segmentation: A Feasibility Study

    00:13:17
    When training neural networks for segmentation, the Dice loss is typically used. Alternative loss functions could help the networks achieve results with higher user acceptance and lower correction effort, but they cannot be used directly if they are not differentiable. As a solution, we propose to train a regression network to approximate the loss function and combine it with a U-Net to compute the loss during segmentation training. As an example, we introduce the contour Dice coefficient (CDC) that estimates the fraction of contour length that needs correction. Applied to CT bladder segmentation, we show that a weighted combination of Dice and CDC loss improves segmentations compared to using only Dice loss, with regard to both CDC and other metrics.
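The weighted combination of Dice loss and a learned CDC surrogate can be sketched as follows; `learned_cdc` stands in for the regression network that approximates the non-differentiable metric, and all names here are illustrative assumptions:

```python
def soft_dice_loss(pred, target, eps=1e-6):
    """Standard soft Dice loss on flat lists of probabilities/labels."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def combined_loss(pred, target, learned_cdc, weight=0.5):
    """Weighted combination of Dice loss and a learned surrogate of the
    contour Dice coefficient (CDC), as described in the abstract;
    learned_cdc is a callable standing in for the regression network."""
    return (1.0 - weight) * soft_dice_loss(pred, target) + weight * learned_cdc(pred, target)
```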
  • Automated Left Atrial Segmentation from Magnetic Resonance Image Sequences Using Deep Convolutional Neural Network with Autoencoder

    00:14:22
    This study presents a novel automated algorithm to segment the left atrium (LA) from 2, 3 and 4-chamber long-axis cardiac cine magnetic resonance image (MRI) sequences using deep convolutional neural network (CNN). The objective of the segmentation process is to delineate the boundary between myocardium and endocardium and exclude the mitral valve so that the results could be used for generating clinical measurements such as strain and strain rate. As such, the segmentation needs to be performed using open contours, a more challenging problem than the pixel-wise semantic segmentation performed by existing machine learning-based approaches such as U-net. The proposed neural net is built based on pre-trained CNN Inception V4 architecture, and it predicts a compressed vector by applying a multi-layer autoencoder, which is then back-projected into the segmentation contour of the LA to perform the delineation using open contours. Quantitative evaluations were performed to compare the performances of the proposed method and the current state-of-the-art U-net method. Both methods were trained using 6195 images acquired from 80 patients and evaluated using 1515 images acquired from 20 patients where the datasets were obtained retrospectively, and ground truth manual segmentation was provided by an expert radiologist. The proposed method yielded an average Dice score of 93.1 % and Hausdorff distance of 4.2 mm, whereas the U-net yielded 91.6 % and 11.9 mm for Dice score and Hausdorff distance metrics, respectively. The quantitative evaluations demonstrated that the proposed method performed significantly better than U-net in terms of Hausdorff distance in addition to providing open contour-based segmentation for the LA.
  • CNN in CT Image Segmentation: Beyond Loss Function for Exploiting Ground Truth Images

    00:11:43
    Exploiting more information from ground truth (GT) images is a new research direction for further improving CNN performance in CT image segmentation. Previous methods focus on devising loss functions for this purpose. However, it is rather difficult to devise a general and optimization-friendly loss function. We present a novel and practical method that exploits GT images beyond the loss function. Our insight is that the feature maps of two CNNs trained respectively on GT and CT images should be similar on some metric space, because both describe the same objects for the same purpose. We hence exploit GT images by enforcing consistency between the two CNNs' feature maps. We assess the proposed method on two datasets and compare its performance to several competitive methods. Extensive experimental results show that the proposed method is effective, outperforming all compared methods.
  • Topology Highlights Neural Deficits of Post-Stroke Aphasia Patients

    00:12:01
    Statistical inference of topological features decoded by persistent homology, a topological data analysis (TDA) algorithm, has been found to reveal patterns in electroencephalographic (EEG) signals that are not captured by standard temporal and spectral analysis. However, potential challenges for applying topological inference to large-scale EEG data are the ambiguity of performing statistical inference and the computational bottleneck. To address this problem, we advance a unified permutation-based inference framework for testing statistical differences in the topological feature persistence landscape (PL) of multi-trial EEG signals. In this study, we apply the framework to compare the PLs of EEG signals recorded in participants with aphasia vs. a matched control group during altered auditory feedback tasks.
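A generic two-sample permutation test of the kind the framework builds on can be sketched as follows (plain lists and a scalar summary statistic stand in for persistence landscapes; this is not the authors' exact procedure):

```python
import random

def permutation_test(group_a, group_b, stat, n_perm=1000, seed=0):
    """Two-sample permutation test: shuffle group labels, recompute the
    statistic, and count permutations at least as extreme as the
    observed difference. `stat` maps a sample to a scalar (e.g. a
    persistence-landscape summary)."""
    rng = random.Random(seed)
    observed = abs(stat(group_a) - stat(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(stat(pooled[:n_a]) - stat(pooled[n_a:]))
        if diff >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)  # add-one corrected p-value
```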
  • Generating Controllable Ultrasound Images of the Fetal Head

    00:12:27
    Synthesis of anatomically realistic ultrasound images could be potentially valuable in sonographer training and to provide training images for algorithms, but is a challenging technical problem. Generating examples where different image attributes can be controlled may also be useful for tasks such as semi-supervised classification and regression to augment costly human annotation. In this paper, we propose using an information maximizing generative adversarial network with a least-squares loss function to generate new examples of fetal brain ultrasound images from clinically acquired healthy subject twenty-week anatomy scans. The unsupervised network succeeds in disentangling natural clinical variations in anatomical visibility and image acquisition parameters, which allows for user-control in image generation. To evaluate our method, we also introduce an additional synthetic fetal ultrasound specific image quality metric called the Frechet SonoNet Distance (FSD) to quantitatively evaluate synthesis quality. To the best of our knowledge, this is the first work that generates ultrasound images with a generator network trained on clinical acquisitions where governing parameters can be controlled in a visually interpretable manner.
  • Complementary Network with Adaptive Receptive Fields for Melanoma Segmentation

    00:13:26
    Automatic melanoma segmentation in dermoscopic images is essential in computer-aided diagnosis of skin cancer. Existing methods may suffer from hole and shrink problems that limit segmentation performance. To tackle these issues, we propose a novel complementary network with adaptive receptive field learning. Instead of regarding the segmentation task independently, we introduce a foreground network to detect melanoma lesions and a background network to mask non-melanoma regions. Moreover, we propose adaptive atrous convolution (AAC) and a knowledge aggregation module (KAM) to fill holes and alleviate the shrink problems. AAC allows us to explicitly control the receptive field at multiple scales. KAM convolves shallow feature maps with dilated convolutions whose receptive fields are adjusted adaptively according to deep feature maps. In addition, a novel mutual loss is proposed to exploit the dependency between the foreground and background networks, enabling reciprocal influence between the two networks. Consequently, this mutual training strategy enables semi-supervised learning and improves boundary sensitivity. Trained on the International Skin Imaging Collaboration (ISIC) 2018 skin lesion segmentation dataset, our method achieves a Dice coefficient of 86.4% and shows better performance compared with state-of-the-art melanoma segmentation methods.
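One plausible form of a mutual loss coupling the two networks, assuming a pixel-wise complementarity constraint fg + bg = 1 (an illustrative guess, not the paper's exact definition):

```python
def mutual_loss(fg_probs, bg_probs):
    """Hypothetical mutual loss between the foreground and background
    networks: mean squared violation of the complementarity constraint
    fg(x) + bg(x) = 1 at every pixel, so each network's prediction
    constrains the other's."""
    n = len(fg_probs)
    return sum((f + b - 1.0) ** 2 for f, b in zip(fg_probs, bg_probs)) / n
```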
  • Probabilistic Inference for Camera Calibration in Light Microscopy under Circular Motion

    00:15:32
    Robust and accurate camera calibration is essential for 3D reconstruction in light microscopy under circular motion. Conventional methods require either accurate key point matching or precise segmentation of the axial-view images. Both remain challenging because specimens often exhibit transparency/translucency in a light microscope. To address those issues, we propose a probabilistic inference based method for the camera calibration that does not require sophisticated image pre-processing. Based on 3D projective geometry, our method assigns a probability on each of a range of voxels that cover the whole object. The probability indicates the likelihood of a voxel belonging to the object to be reconstructed. Our method maximizes a joint probability that distinguishes the object from the background. Experimental results show that the proposed method can accurately recover camera configurations in both light microscopy and natural scene imaging. Furthermore, the method can be used to produce high-fidelity 3D reconstructions and accurate 3D measurements.
  • Computer Aided Diagnosis of Clinically Significant Prostate Cancer in Low-Risk Patients on Multi-Parametric Mr Images Using Deep Learning

    00:13:46
    The purpose of this study was to develop a quantitative method for detection and segmentation of clinically significant (ISUP grade ≥ 2) prostate cancer (PCa) in low-risk patients. A consecutive cohort of 356 patients (active surveillance) was selected and divided into two groups: 1) MRI- and targeted-biopsy-positive PCa, 2) MRI- and standard-biopsy-negative PCa. A 3D convolutional neural network was trained in three-fold cross-validation on the MRI- and targeted-biopsy-positive patients' data, using two mp-MRI sequences (T2-weighted, DWI-b800) and the ADC map as input. After training, the model was tested on separate positive and negative patients to evaluate its performance. The model achieved an average area under the receiver operating characteristic curve (AUC) of 0.78 (sensitivity = 85%, specificity = 72%). The diagnostic performance of the proposed method in segmenting significant PCa and confirming non-significant PCa in low-risk patients is characterized by a good AUC and negative predictive value.
  • Using Transfer Learning and Class Activation Maps Supporting Detection and Localization of Femoral Fractures on Anteroposterior Radiographs

    00:06:02
    Acute Proximal Femoral Fractures are a growing health concern among the aging population. These fractures are often associated with significant morbidity and mortality as well as reduced quality of life. Furthermore, with the increasing life expectancy owing to advances in healthcare, the number of proximal femoral fractures may increase by a factor of 2 to 3, since the majority of fractures occur in patients over the age of 65. In this paper, we show that by using transfer learning and leveraging pre-trained models, we can achieve very high accuracy in detecting fractures and that they can be localized utilizing class activation maps.
  • Compressed Sensing for Data Reduction in Synthetic Aperture Ultrasound Imaging: A Feasibility Study

    00:14:13
    Compressed Sensing (CS) has been applied by a few researchers to improve the frame rate of synthetic aperture (SA) ultrasound imaging. However, there appear to be no reports on reducing the number of receive elements by exploiting the CS approach. In our previous work, we proposed a strategic undersampling scheme based on a Gaussian distribution for focused ultrasound imaging. In this work, we propose and evaluate three sampling schemes for SA to acquire RF data from a reduced number of receive elements. The effect of the sampling schemes on CS recovery was studied using simulation and experimental data. Despite using only 50% of the receive elements, the ultrasound images obtained with the Gaussian sampling scheme had resolution and contrast comparable to the reference image obtained using all the receive elements. Thus, the findings suggest the possibility of reducing the receive channel count of an SA ultrasound system without practically sacrificing image quality.
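A Gaussian-weighted selection of receive elements can be sketched as follows; the element count, kept fraction, and spread are illustrative parameters, not the paper's settings:

```python
import random

def gaussian_receive_mask(n_elements=128, keep_fraction=0.5, sigma_scale=0.25, seed=0):
    """Pick a subset of receive elements, favoring the aperture center
    via a Gaussian density, in the spirit of the Gaussian undersampling
    scheme described in the abstract. Returns sorted element indices."""
    rng = random.Random(seed)
    center = (n_elements - 1) / 2.0
    sigma = sigma_scale * n_elements
    n_keep = int(n_elements * keep_fraction)
    chosen = set()
    while len(chosen) < n_keep:
        e = int(round(rng.gauss(center, sigma)))
        if 0 <= e < n_elements:  # reject draws outside the aperture
            chosen.add(e)
    return sorted(chosen)
```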
  • Residual Simplified Reference Tissue Model with Covariance Estimation

    00:12:13
    The simplified reference tissue model (SRTM) can robustly estimate binding potential (BP) without a measured arterial blood input function. Although a voxel-wise estimation of BP, a so-called parametric image, is more useful than region of interest (ROI) based estimation of BP, it is challenging to compute an accurate parametric image due to the low signal-to-noise ratio (SNR) of dynamic PET images. To achieve reliable parametric imaging, temporal images are commonly smoothed prior to kinetic parameter estimation, which degrades the resolution significantly. To address this problem, we propose a residual simplified reference tissue model (ResSRTM) using an approximate covariance matrix to robustly compute a high-resolution parametric image. We define the residual dynamic data as the full data excluding each single frame; this has higher SNR and enables accurate estimation of the parametric image. Since dynamic images have correlations across temporal frames, we propose an approximate covariance matrix using neighboring voxels, assuming the noise statistics of neighbors are similar. In phantom simulations and real experiments, we demonstrate that the proposed method outperforms the conventional SRTM method.
  • Spatially Informed CNN for Automated Cone Detection in Adaptive Optics Retinal Images

    00:14:23
    Adaptive optics (AO) scanning laser ophthalmoscopy offers cellular-level in-vivo imaging of the human cone mosaic. Existing analyses of cone photoreceptor density in AO images require accurate identification of cone cells, which is a time- and labor-intensive task. Recently, several methods have been introduced for automated cone detection in AO retinal images using convolutional neural networks (CNN). However, these approaches have been limited in their ability to correctly identify cones when applied to AO images originating from different locations in the retina, due to changes in the reflectance and arrangement of the cone mosaics with eccentricity. To address these limitations, we present an adapted CNN architecture that incorporates spatial information directly into the network. Our approach, inspired by conditional generative adversarial networks, embeds the retina location from which each AO image was acquired as part of the training. Using manual cone identification as ground truth, our evaluation shows general improvement over existing approaches when detecting cones in the middle and periphery regions of the retina, but decreased performance near the fovea.
  • Deep Mouse: An End-To-End Auto-Context Refinement Framework for Brain Ventricle & Body Segmentation in Embryonic Mice Ultrasound Volumes

    00:11:50
    The segmentation of the brain ventricle (BV) and body in embryonic mice high-frequency ultrasound (HFU) volumes can provide useful information for biological researchers. However, manual segmentation of the BV and body requires substantial time and expertise. This work proposes a novel deep learning based end-to-end auto-context refinement framework, consisting of two stages. The first stage produces a low resolution segmentation of the BV and body simultaneously. The resulting probability map for each object (BV or body) is then used to crop a region of interest (ROI) around the target object in both the original image and the probability map to provide context to the refinement segmentation network. Joint training of the two stages provides significant improvement in Dice Similarity Coefficient (DSC) over using only the first stage (0.818 to 0.906 for the BV, and 0.919 to 0.934 for the body). The proposed method significantly reduces the inference time (102.36 to 0.09 s/volume, ~1000x faster) while slightly improving the segmentation accuracy over previous methods using sliding-window approaches.
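The auto-context cropping step can be illustrated in 1-D as follows (threshold and margin values are assumptions for illustration):

```python
def roi_from_probmap(prob, thresh=0.5, margin=2):
    """1-D sketch of the auto-context cropping step: take the bounding
    interval of voxels the first stage marks as likely object
    (probability >= thresh), pad it by a margin, and use it to crop
    both the original image and the probability map for the
    refinement stage. Returns (lo, hi) indices or None if no voxel
    passes the threshold."""
    idx = [i for i, p in enumerate(prob) if p >= thresh]
    if not idx:
        return None
    lo = max(0, min(idx) - margin)
    hi = min(len(prob) - 1, max(idx) + margin)
    return lo, hi
```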
  • Model-Based Deep Learning for Reconstruction of Joint K-Q Under-Sampled High Resolution Diffusion MRI

    00:17:18
    We propose a model-based deep learning architecture for the reconstruction of highly accelerated diffusion magnetic resonance imaging (MRI) that enables high-resolution imaging. The proposed reconstruction jointly recovers all the diffusion-weighted images in a single step from a joint k-q under-sampled acquisition in a parallel MRI setting. We propose the novel use of a pre-trained denoiser as a regularizer in a model-based reconstruction for the recovery of highly under-sampled data. Specifically, we designed the denoiser based on a general diffusion MRI tissue microstructure model for multi-compartmental modeling. By using a wide range of biologically plausible parameter values for the multi-compartmental microstructure model, we simulated diffusion signal that spans the entire microstructure parameter space. A neural network was trained in an unsupervised manner using a convolutional autoencoder to learn the diffusion MRI signal subspace. We employed the autoencoder in a model-based reconstruction that unrolls the iterations similar to the recently proposed MoDL framework. Specifically, we show that the autoencoder provides a strong denoising prior to recover the q-space signal. We show reconstruction results on a simulated brain dataset that shows high acceleration capabilities of the proposed method.
  • Deep Learning for High Speed Optical Coherence Elastography

    00:12:37
    Mechanical properties of tissue provide valuable information for identifying lesions. One approach to obtain quantitative estimates of elastic properties is shear wave elastography with optical coherence elastography (OCE). However, even given the shear wave velocity, it is still difficult to estimate elastic properties. Hence, we propose deep learning to directly predict elastic tissue properties from OCE data. We acquire 2D images at a frame rate of 30 kHz and use convolutional neural networks to predict gelatin concentration, which we use as a surrogate for tissue elasticity. We compare our deep learning approach to predictions from conventional regression models using the shear wave velocity as a feature. Mean absolute prediction errors for the conventional approaches range from 1.32 ± 0.98 p.p. to 1.57 ± 1.30 p.p., whereas we report an error of 0.90 ± 0.84 p.p. for the convolutional neural network with 3D spatio-temporal input. Our results indicate that deep learning on spatio-temporal data outperforms elastography based on explicit shear wave velocity estimation.
  • Weakly Supervised Multi-Task Learning for Cell Detection and Segmentation

    00:15:01
    Cell detection and segmentation is fundamental for all downstream analysis of digital pathology images. However, obtaining the pixel-level ground truth for single cell segmentation is extremely labor intensive. To overcome this challenge, we developed an end-to-end deep learning algorithm to perform both single cell detection and segmentation using only point labels. This is achieved through the combination of different task-oriented point label encoding methods and a multi-task scheduler for training. We apply and validate our algorithm on PMS2-stained colorectal cancer and tonsil tissue images. Compared to the state-of-the-art, our algorithm shows significant improvement in cell detection and segmentation without increasing the annotation effort.
  • Computer-Aided Diagnosis of Congenital Abnormalities of the Kidney and Urinary Tract in Children Using a Multi-Instance Deep Learning Method Based on Ultrasound Imaging Data

    00:14:13
    Ultrasound images are widely used for diagnosis of congenital abnormalities of the kidney and urinary tract (CAKUT). Since a typical clinical ultrasound image captures 2D information of a specific view plane of the kidney, and images of the same kidney on different planes have varied appearances, it is challenging to develop a computer-aided diagnosis tool robust to ultrasound images in different views. To overcome this problem, we develop a multi-instance deep learning method for distinguishing children with CAKUT from controls based on their clinical ultrasound images, aiming to automatically diagnose CAKUT in children from ultrasound imaging data. In particular, a multi-instance deep learning method was developed to build a robust pattern classifier to distinguish children with CAKUT from controls based on their ultrasound images in sagittal and transverse views obtained during routine clinical care. The classifier was built on imaging features derived using transfer learning from a pre-trained deep learning model, with a mean pooling operator for fusing instance-level classification results. Experimental results have demonstrated that the multi-instance deep learning classifier performed better than classifiers built on either individual sagittal slices or individual transverse slices.
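The mean pooling operator used to fuse instance-level results can be sketched as:

```python
def patient_level_score(instance_scores):
    """Mean pooling over instance-level (per-image) classification
    scores, fusing sagittal- and transverse-view predictions into one
    patient-level score as described in the abstract. Scores are plain
    floats for illustration."""
    return sum(instance_scores) / len(instance_scores)
```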
  • Deep Learning Models to Study the Early Stages of Parkinson's Disease

    00:15:23
    Current physio-pathological data suggest that Parkinson's Disease (PD) symptoms are related to important alterations in subcortical brain structures. However, structural changes in these small structures remain difficult to detect for neuro-radiologists, in particular, at the early stages of the disease ('de novo' PD patients). The absence of a reliable ground truth at the voxel level prevents the application of traditional supervised deep learning techniques. In this work, we consider instead an anomaly detection approach and show that auto-encoders (AE) could provide an efficient anomaly scoring to discriminate 'de novo' PD patients using quantitative Magnetic Resonance Imaging (MRI) data.
  • Impact of 1D and 2D Visualisation on EEG-fMRI Neurofeedback Training During a Motor Imagery Task

    00:13:46
    Bi-modal EEG-fMRI neurofeedback (NF) is a new technique of great interest. First, it can improve the quality of NF training by combining different real-time information (haemodynamic and electrophysiological) from the participant's brain activity; second, it has the potential to improve understanding of the link and synergy between the two modalities (EEG-fMRI). However, there are different ways to show participants their NF scores during bi-modal NF sessions. To improve data fusion methodologies, we investigate the impact of a 1D or 2D representation when visual feedback is given during a motor imagery task. Results show a better synergy between EEG and fMRI when a 2D display is used. Subjects have better fMRI scores when 1D is used for bi-modal EEG-fMRI NF sessions; on the other hand, they regulate EEG more specifically when the 2D metaphor is used.
  • Self-Supervised Physics-Based Deep Learning MRI Reconstruction without Fully-Sampled Data

    00:14:26
    Deep learning (DL) has emerged as a tool for improving accelerated MRI reconstruction. A common strategy among DL methods is the physics-based approach, where a regularized iterative algorithm alternating between data consistency and a regularizer is unrolled for a finite number of iterations. This unrolled network is then trained end-to-end in a supervised manner, using fully-sampled data as ground truth for the network output. However, in a number of scenarios, it is difficult to obtain fully-sampled datasets, due to physiological constraints such as organ motion or physical constraints such as signal decay. In this work, we tackle this issue and propose a self-supervised learning strategy that enables physics-based DL reconstruction without fully-sampled data. Our approach is to divide the acquired sub-sampled points for each scan into two sets, one of which is used to enforce data consistency in the unrolled network and the other to define the loss for training. Results show that the proposed self-supervised learning method successfully reconstructs images without fully-sampled data, performing similarly to the supervised approach that is trained with fully-sampled references. This has implications for physics-based inverse problem approaches for other settings, where fully-sampled data is not available or possible to acquire.
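The core data-splitting idea can be sketched as follows; the split fraction and index representation are illustrative assumptions:

```python
import random

def split_kspace_indices(sampled_indices, loss_fraction=0.4, seed=0):
    """Partition the acquired (sub-sampled) k-space sample indices into
    two disjoint sets, as in the self-supervised strategy described in
    the abstract: one set enforces data consistency in the unrolled
    network, the other defines the training loss, so no fully-sampled
    reference is needed. Returns (dc_set, loss_set)."""
    rng = random.Random(seed)
    idx = list(sampled_indices)
    rng.shuffle(idx)
    n_loss = int(len(idx) * loss_fraction)
    return idx[n_loss:], idx[:n_loss]
```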
  • Automated Meshing of Anatomical Shapes for Deformable Medial Modeling: Application to the Placenta in 3d Ultrasound

    00:14:59
    Deformable medial modeling is an approach to extracting clinically useful features of the morphological skeleton of anatomical structures in medical images. Similar to any deformable modeling technique, it requires a pre-defined model, or synthetic skeleton, of a class of shapes before modeling new instances of that class. The creation of synthetic skeletons often requires manual interaction, and the deformation of the synthetic skeleton to new target geometries is prone to registration errors if not well initialized. This work presents a fully automated method for creating synthetic skeletons (i.e., 3D boundary meshes with medial links) for flat, oblong shapes that are homeomorphic to a sphere. The method rotationally cross-sections the 3D shape, approximates a 2D medial model in each cross-section, and then defines edges between nodes of neighboring slices to create a regularly sampled 3D boundary mesh. In this study, we demonstrate the method on 62 segmentations of placentas in first-trimester 3D ultrasound images and evaluate its compatibility and representational accuracy with an existing deformable modeling method. The method may lead to extraction of new clinically meaningful features of placenta geometry, as well as facilitate other applications of deformable medial modeling in medical image analysis.
