Unsupervised Stereo Matching with Occlusion-Aware Loss
Ningqi Luo, Chengxi Yang, Wenxiu Sun, Binheng Song
Deep learning methods have shown very promising results for regressing dense disparity maps directly from stereo image pairs. However, apart from a few public datasets such as KITTI, the ground-truth disparity needed for supervised training is rarely available. In this paper, we propose an unsupervised stereo matching approach with a novel occlusion-aware reconstruction loss. Together with a smoothness loss and a left-right consistency loss that enforce disparity smoothness and correctness, the deep neural network can be trained well without any ground-truth disparity data. Trained and tested without ground-truth disparity, our method achieves competitive results on public datasets (KITTI Stereo 2012, KITTI Stereo 2015, Cityscapes) and on our self-collected driving dataset, which contains more diverse driving scenarios than the public datasets.
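To make the loss terms concrete, the following is a minimal NumPy sketch of the three components described above: a photometric reconstruction loss masked by an occlusion estimate from a left-right disparity check, an edge-aware smoothness loss, and a left-right consistency loss. All function names, the weighting scheme, and the simple linear-interpolation warp are illustrative assumptions, not the paper's actual implementation (which operates on network outputs inside a deep learning framework).

```python
import numpy as np

def warp_right_to_left(right, disp):
    """Sample the right image at x - disp(x) with linear interpolation,
    producing a reconstruction of the left image (grayscale, H x W)."""
    H, W = right.shape
    xs = np.arange(W)[None, :] - disp            # sampling coordinates in the right image
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
    frac = np.clip(xs - x0, 0.0, 1.0)
    rows = np.arange(H)[:, None]
    return (1 - frac) * right[rows, x0] + frac * right[rows, x0 + 1]

def occlusion_mask(disp_l, disp_r, thresh=1.0):
    """Left-right check: a pixel is considered visible (mask=1) when the
    left disparity agrees with the warped right disparity."""
    warped_dr = warp_right_to_left(disp_r, disp_l)
    return (np.abs(disp_l - warped_dr) < thresh).astype(float)

def total_loss(left, right, disp_l, disp_r, w_smooth=0.1, w_lr=1.0):
    """Occlusion-aware reconstruction + edge-aware smoothness + LR consistency.
    The weights w_smooth and w_lr are arbitrary placeholders."""
    recon = warp_right_to_left(right, disp_l)
    mask = occlusion_mask(disp_l, disp_r)
    # photometric loss only where the left-right check marks the pixel visible
    photo = np.sum(mask * np.abs(left - recon)) / (np.sum(mask) + 1e-8)
    # penalize disparity gradients, attenuated at image edges
    dx_disp = np.abs(np.diff(disp_l, axis=1))
    dx_img = np.abs(np.diff(left, axis=1))
    smooth = np.mean(dx_disp * np.exp(-dx_img))
    # left and right disparity maps should agree where visible
    lr = np.mean(mask * np.abs(disp_l - warp_right_to_left(disp_r, disp_l)))
    return photo + w_smooth * smooth + w_lr * lr
```

On a synthetic pair where the right image is a horizontal ramp and the left image is the same ramp shifted by a constant disparity of 2 pixels, the total loss evaluates to zero at the true disparity, as expected.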