Recurrent Neural Networks for Compressive Video Reconstruction

Single-pixel imaging allows low-cost cameras to be built for imaging modalities where a conventional camera would be either too expensive or too cumbersome. This is very attractive for biomedical imaging applications based on hyperspectral measurements, such as image-guided surgery, which requires the full spectrum of fluorescence. A single-pixel camera essentially measures the inner product of the scene and a set of patterns. An inverse problem has to be solved to recover the original image from the raw measurements. The challenge in single-pixel imaging is to reconstruct the video sequence in real time from under-sampled data. Previous approaches have focused on reconstructing each frame independently, which fails to exploit the natural temporal redundancy in a video sequence. In this study, we propose a fast deep-learning reconstructor that exploits the spatio-temporal features of a video. In particular, we consider convolutional gated recurrent units, which have low memory requirements. Our simulations show that the proposed recurrent network improves reconstruction quality compared to static approaches that reconstruct the video frames independently.
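To make the idea concrete, below is a minimal PyTorch sketch of a convolutional gated recurrent unit used to reconstruct a video from single-pixel measurements. It is an illustration only, not the authors' architecture: the class names (ConvGRUCell, SPIVideoNet), the learned linear back-projection, the layer sizes, and the random measurement patterns are all assumptions. It does, however, show the two ingredients described in the abstract: the forward model, in which each raw measurement is the inner product of a frame with a pattern, and a recurrent reconstructor whose hidden state carries spatio-temporal features from one frame to the next.

```python
# Minimal sketch (illustrative, not the authors' exact network):
# a convolutional GRU that refines single-pixel video reconstructions frame by frame.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """GRU cell whose gates are computed with 2-D convolutions (low memory footprint)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        pad = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=pad)  # update & reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=pad)       # candidate state
        self.hid_ch = hid_ch

    def forward(self, x, h=None):
        if h is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

class SPIVideoNet(nn.Module):
    """Toy reconstructor: back-project each raw measurement vector to the image domain,
    propagate spatio-temporal features with a ConvGRU, then read out the frame."""
    def __init__(self, n_meas, img_size=64, hid_ch=32):
        super().__init__()
        self.img_size = img_size
        self.backproj = nn.Linear(n_meas, img_size * img_size)  # learned pseudo-inverse (assumption)
        self.rnn = ConvGRUCell(1, hid_ch)
        self.readout = nn.Conv2d(hid_ch, 1, 1)

    def forward(self, y_seq):                        # y_seq: (batch, time, n_meas)
        h, frames = None, []
        for t in range(y_seq.size(1)):
            x = self.backproj(y_seq[:, t]).view(-1, 1, self.img_size, self.img_size)
            h = self.rnn(x, h)                       # hidden state links consecutive frames
            frames.append(self.readout(h))
        return torch.stack(frames, dim=1)            # (batch, time, 1, H, W)

# Forward model: each measurement is the inner product of a frame with one pattern,
# y_t = P x_t, with P of size n_meas x n_pixels (under-sampled when n_meas < n_pixels).
if __name__ == "__main__":
    B, T, N, n_meas = 2, 5, 64, 512                  # 512 measurements for 64x64 = 4096 pixels
    P = torch.randn(n_meas, N * N)                   # measurement patterns (e.g. Hadamard rows)
    video = torch.rand(B, T, N * N)                  # ground-truth frames, flattened
    y = video @ P.t()                                # simulated raw single-pixel measurements
    recon = SPIVideoNet(n_meas, img_size=N)(y)
    print(recon.shape)                               # torch.Size([2, 5, 1, 64, 64])
```

In this sketch the recurrence is what distinguishes the approach from a static reconstructor: a frame-by-frame network would map each y_t to an image independently, whereas the ConvGRU hidden state lets the reconstruction of frame t reuse features estimated from earlier, equally under-sampled frames.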
