Learning to Solve Inverse Problems in Imaging

Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, tomographic reconstruction, MRI reconstruction, inpainting, compressed sensing, and superresolution all lie in this framework. Traditional inverse problem solvers minimize a cost function consisting of a data-fit term, which measures how well an image matches the observations, and a regularizer, which reflects prior knowledge and promotes images with desirable properties like smoothness. Recent advances in machine learning and image processing have illustrated that it is often possible to learn a regularizer from training data that can outperform more traditional regularizers. In this talk, I will describe various classes of approaches to learned regularization, ranging from generative models to unrolled optimization perspectives, and explore their relative merits and sample complexities. We will also explore the difficulty of the underlying optimization task and how learned regularizers relate to oracle estimators.
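The data-fit-plus-regularizer formulation described above can be sketched in a few lines of NumPy. This is our illustration, not material from the talk: it solves a small underdetermined linear inverse problem y = Ax + noise by gradient descent on a Tikhonov-regularized least-squares objective. In a learned-regularization approach, the hand-crafted penalty (and its gradient) would be replaced by a trained network.

```python
import numpy as np

# Minimize 0.5*||A x - y||^2 + 0.5*lam*||x||^2 by gradient descent.
# A is the forward operator (e.g., a blur or subsampling matrix);
# lam*||x||^2 is a classical smoothness-promoting regularizer.

rng = np.random.default_rng(0)
n, m = 20, 12                       # fewer measurements than unknowns: ill-posed
A = rng.standard_normal((m, n))     # forward operator
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.5
x = np.zeros(n)
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # 1/L for the quadratic objective
for _ in range(5000):
    grad = A.T @ (A @ x - y) + lam * x  # data-fit gradient + regularizer gradient
    x -= step * grad

# Sanity check against the closed-form Tikhonov solution
x_closed = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(np.allclose(x, x_closed, atol=1e-5))
```

Unrolled optimization methods mentioned in the abstract take exactly this iteration and treat a fixed number of its steps as the layers of a network, learning the regularizer gradient (and often the step sizes) from training data.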
  • IEEE Member: US $11.00
  • Society Member: US $0.00
  • IEEE Student Member: US $11.00
  • Non-IEEE Member: US $15.00

Videos in this product

Learning to Solve Inverse Problems in Imaging

00:36:26