Learning to Segment Vessels from Poorly Illuminated Fundus Images

Segmentation of retinal vessels is important for determining various disease conditions, but deep learning approaches have been limited by the unavailability of large, publicly available, annotated datasets. The paper addresses this problem and analyses the performance of the U-Net architecture on the DRIVE and RIM-ONE datasets. A different approach to data augmentation using vignetting masks is presented to create more annotated fundus data. Unlike most prior efforts, which attempt to transform poor images to match the images in a training set, our approach takes better-quality images (which have good expert labels) and transforms them to resemble poor-quality target images. We apply substantial vignetting masks to the DRIVE dataset and then train a U-Net on the resulting lower-quality images (using the corresponding expert label data). We quantitatively show that our approach leads to better-generalized networks, and we show qualitative performance improvements on RIM-ONE images (which lack expert labels).
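The core augmentation idea above — darkening well-lit, expert-labeled images with a vignetting mask so they resemble poorly illuminated targets — can be sketched as a radial intensity falloff applied to the image while the vessel label map is left untouched. This is a minimal illustration, not the paper's exact implementation; the mask shape and the `strength` and `sigma` parameters are assumptions chosen for demonstration.

```python
import numpy as np

def vignetting_mask(h, w, strength=0.8, sigma=0.5):
    """Radial Gaussian falloff mask with values in [1 - strength, 1].

    Coordinates are normalized to [-1, 1] so the mask is centred on
    the image; `sigma` controls how quickly brightness drops toward
    the borders (illustrative parameterization, not from the paper).
    """
    ys = np.linspace(-1.0, 1.0, h)[:, None]
    xs = np.linspace(-1.0, 1.0, w)[None, :]
    r2 = xs ** 2 + ys ** 2
    falloff = np.exp(-r2 / (2.0 * sigma ** 2))  # 1 at centre, ~0 at corners
    return (1.0 - strength) + strength * falloff

def apply_vignette(image, strength=0.8, sigma=0.5):
    """Simulate poor illumination on a fundus image (H x W or H x W x C).

    The expert vessel label for `image` is reused unchanged, which is
    what makes this usable as labeled training data.
    """
    mask = vignetting_mask(image.shape[0], image.shape[1], strength, sigma)
    if image.ndim == 3:
        mask = mask[..., None]  # broadcast over colour channels
    return (image.astype(np.float32) * mask).astype(image.dtype)
```

In a training pipeline of this kind, each (DRIVE image, label) pair would yield additional pairs `(apply_vignette(image), label)` with varied mask parameters, and the U-Net would be trained on the darkened images against the original expert labels.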