Matching Low-Quality Photo to DSLR-Quality with Deep Convolutional Networks

Weihao Xia, Chengxi Yang, Yujiu Yang, Wenxiu Sun

Off-the-shelf smartphone cameras typically fail to achieve the quality of Digital Single Lens Reflex (DSLR) cameras due to their physical limitations. In applications such as autonomous driving or surveillance, where primitive cameras are commonly employed, low-quality images pose a serious obstacle to downstream processing. Moreover, most existing photo quality enhancement methods focus on individual attributes such as super-resolution; generic photo quality enhancement has not been addressed in its entirety. In this work, we formulate the problem as image quality matching under an image translation framework and propose an end-to-end learning approach that translates low-quality photos captured by cameras with limited capabilities into DSLR-quality photos. Unlike most other methods, which lack a clear direction of enhancement, our approach matches low-quality photos to their DSLR-quality counterparts. Qualitative and quantitative comparisons show that our method improves on the existing state of the art in terms of structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and visual appearance, with significantly fewer artifacts and content changes. Extensive experiments demonstrate its potential as a preprocessing module that translates image quality to a target domain.
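
The abstract measures enhancement quality by PSNR and SSIM against DSLR-quality references. The following is a minimal sketch, not the authors' code, of such a paired evaluation using scikit-image; the file paths and the notion of a separately produced "enhanced" image are assumptions for illustration.

```python
# Illustrative sketch (assumed, not from the paper): score an enhanced photo
# against its DSLR-quality reference with PSNR and SSIM.
from skimage import io, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(enhanced_path: str, dslr_path: str) -> tuple[float, float]:
    """Return (PSNR, SSIM) between an enhanced image and its DSLR reference."""
    enhanced = img_as_float(io.imread(enhanced_path))
    reference = img_as_float(io.imread(dslr_path))
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced,
                                 channel_axis=-1, data_range=1.0)
    return psnr, ssim


if __name__ == "__main__":
    # Hypothetical paths: output of the translation network and the
    # corresponding DSLR-quality ground truth.
    psnr, ssim = evaluate_pair("enhanced/photo_001.png", "dslr/photo_001.png")
    print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```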