Official implementation of the CVPR 2020 paper "Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields" (paper link).
NYUv2-OC++ dataset (for test use only): download link
- PyTorch >= 0.4
- OpenCV
- CUDA (only tested with CUDA >= 8.0)
- Easydict
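Before running anything, you can sanity-check the Python side of these requirements with a short snippet (a minimal sketch: `torch`, `cv2`, and `easydict` are the import names of the packages listed above; CUDA availability itself would be checked through PyTorch):

```python
import importlib

# Import names corresponding to the listed requirements.
REQUIRED = ["torch", "cv2", "easydict"]

def check_requirements(names):
    """Return a dict mapping each import name to True if it can be imported."""
    status = {}
    for name in names:
        try:
            importlib.import_module(name)
            status[name] = True
        except ImportError:
            status[name] = False
    return status

if __name__ == "__main__":
    for name, ok in check_requirements(REQUIRED).items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```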
sh download.sh
# Use depth only as input
cd model/nyu/df_nyu_depth_only
python train.py -d 0 -f <path-to-list-file> --dataset nyu
# Use RGB image as guidance
cd model/nyu/df_nyu_rgb_guidance
python train.py -d 0 -f <path-to-list-file> --dataset nyu
# Use depth only as input
cd model/nyu/df_nyu_depth_only
python test.py -d 0 --dataset nyu --save_path <path-to-result> -f <path-to-list-file> --load_ckpt <path-to-checkpoint>
# Use RGB image as guidance
cd model/nyu/df_nyu_rgb_guidance
python test.py -d 0 --dataset nyu --save_path <path-to-result> -f <path-to-list-file> --load_ckpt <path-to-checkpoint>
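As a concrete illustration, a depth-only test invocation might be assembled as follows (the paths below are hypothetical placeholders, not files shipped with this repository; substitute your own list file, checkpoint, and output directory, and run the command from `model/nyu/df_nyu_depth_only`):

```shell
# Hypothetical paths -- replace with your own.
LIST_FILE=./data/nyu_test_list.txt   # list file passed via -f
CKPT=./log/snapshot/epoch-last.pth   # checkpoint passed via --load_ckpt
OUT=./results/nyu_depth_only         # directory passed via --save_path

# Assemble and print the invocation.
CMD="python test.py -d 0 --dataset nyu --save_path $OUT -f $LIST_FILE --load_ckpt $CKPT"
echo "$CMD"
```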
@InProceedings{Ramamonjisoa_2020_CVPR,
author = {Ramamonjisoa, Michael and Du, Yuming and Lepetit, Vincent},
title = {Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
The model can be trained with only synthetic data (e.g., SceneNet) and generalizes naturally to real data.
The code is based on TorchSeg.
The NYUv2-OC++ dataset was annotated manually by four PhD students majoring in computer vision. Special thanks to Yang Xiao and Xuchong Qiu for their help in annotating the NYUv2-OC++ dataset.