Defocus blur often degrades the performance of image understanding tasks such as object recognition and image segmentation. Restoring an all-in-focus image from its defocused version is highly beneficial to visual information processing and many photographic applications, despite being a severely ill-posed problem. We propose a novel convolutional neural network architecture, AIFNet, for removing spatially-varying defocus blur from a single defocused image. To remedy the lack of real defocused image datasets, we leverage light field synthetic aperture and refocusing techniques to generate a large set of realistic defocused and all-in-focus image pairs depicting a variety of natural scenes for network training. AIFNet consists of three modules: defocus map estimation, deblurring, and domain adaptation. The effects and performance of various network components are extensively evaluated. We also compare our method with existing solutions using several publicly available datasets. Quantitative and qualitative evaluations demonstrate that AIFNet achieves state-of-the-art performance.
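To make the three-module layout concrete, the following minimal PyTorch sketch shows one way such a pipeline could be wired together. All class names (DefocusMapEstimator, Deblurrer, DomainDiscriminator), layer configurations, and the residual formulation are illustrative assumptions and do not reproduce the paper's actual AIFNet design.

```python
# Minimal sketch of the three-module layout (defocus map estimation,
# deblurring, domain adaptation). Layer choices are assumptions for
# illustration only, not the paper's architecture.
import torch
import torch.nn as nn


class DefocusMapEstimator(nn.Module):
    """Predicts a per-pixel defocus map from the blurred input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # (B, 1, H, W) defocus map


class Deblurrer(nn.Module):
    """Restores an all-in-focus image guided by the defocus map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x, defocus_map):
        inp = torch.cat([x, defocus_map], dim=1)
        return x + self.net(inp)  # residual prediction of the sharp image


class DomainDiscriminator(nn.Module):
    """Training-time module sketching adversarial domain adaptation between
    the light-field training domain and real defocused photographs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, defocus_map):
        return self.net(defocus_map)  # per-patch domain logits


class AIFNetSketch(nn.Module):
    """Inference-time pipeline: estimate the defocus map, then deblur."""
    def __init__(self):
        super().__init__()
        self.defocus = DefocusMapEstimator()
        self.deblur = Deblurrer()

    def forward(self, blurred):
        dmap = self.defocus(blurred)
        return self.deblur(blurred, dmap), dmap


if __name__ == "__main__":
    model = AIFNetSketch()
    blurred = torch.rand(1, 3, 256, 256)
    sharp, dmap = model(blurred)
    print(sharp.shape, dmap.shape)  # (1, 3, 256, 256) and (1, 1, 256, 256)
```

In this sketch the domain discriminator operates on predicted defocus maps and would only be used during training to align the synthetic (light-field) and real domains; at inference time only the estimation and deblurring modules are run.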
Todo