This repository includes datasets, code and results for our paper:
Weakly Supervised RGB-D Salient Object Detection with Prediction Consistency Training and Active Scribble Boosting, TIP'22
- Linux with Python ≥ 3.6
- CUDA == 9.2
- PyTorch == 1.4 and torchvision that matches the PyTorch installation
- cv2, tqdm, scikit-learn
Note that other PyTorch and CUDA versions may cause performance degradation.
- We manually re-label two widely used, publicly available RGB-D SOD benchmarks (i.e., NJU2K and NLPR) with scribble annotations and use them as the training datasets. Please find the scribble datasets on Google Drive.
- We use seven commonly used RGB-D SOD benchmarks (i.e., DES, LFSD, NJU2K_Test, NLPR_Test, SIP, SSD, STERE) as the testing datasets. Please find the testing datasets on Google Drive.
Download and unzip the training and testing datasets. The dataset directory should have the following structure:
dataset
├── train_data/
│   └── {depth,gray,gt,gt_mask,img,mask}/
└── test_data/
    └── {depth,gt,img}/
        └── {DES,LFSD,NJU2K_Test,NLPR_Test,SIP,SSD,STERE}/
The `gt`, `mask` and `gt_mask` folders in train_data contain foreground scribbles, foreground+background scribbles, and ground-truth masks, respectively. The `gray` folder contains grayscale images converted using this code.
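In case the conversion script is not at hand, here is a minimal sketch of the grayscale conversion with OpenCV. The directory paths follow the layout above; the script itself is an assumed stand-in, not necessarily the code we used:

```python
# Sketch: populate train_data/gray from train_data/img via OpenCV.
# Paths follow the dataset layout above; this is an assumed stand-in
# for the original conversion code.
import os
import cv2

img_dir = "dataset/train_data/img"
gray_dir = "dataset/train_data/gray"
os.makedirs(gray_dir, exist_ok=True)

for name in sorted(os.listdir(img_dir)):
    img = cv2.imread(os.path.join(img_dir, name))
    if img is None:  # skip unreadable/non-image files
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(os.path.join(gray_dir, name), gray)
```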
We also provide the coarse scribbles labeled by annotator2 on Google Drive.
- To train a warm-up stage model, run
python train.py --output_dir /path/to/checkpoint_dir --warmup_stage
- To generate saliency maps using a trained warm-up stage model, run
python test.py --model_path /path/to/checkpoint_file --warmup_stage
- To train a mutual learning stage model, run
python train.py --output_dir /path/to/checkpoint_dir --warmup_model /path/to/checkpoint_file
- To generate saliency maps using a trained mutual learning stage model, run
python test.py --model_path /path/to/checkpoint_file
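For convenience, the two training stages can be chained in one script. Below is a minimal sketch that issues the commands above via subprocess; the checkpoint filename `best.pth` is a placeholder, not the repository's actual naming:

```python
# Sketch: run the warm-up stage, then the mutual learning stage,
# using only the flags documented above. "best.pth" is a placeholder
# for whatever checkpoint file train.py writes to the output dir.
import subprocess

warmup_dir = "checkpoints/warmup"
mutual_dir = "checkpoints/mutual"
warmup_ckpt = f"{warmup_dir}/best.pth"  # placeholder checkpoint name

# Stage 1: warm-up training
subprocess.run(
    ["python", "train.py", "--output_dir", warmup_dir, "--warmup_stage"],
    check=True,
)

# Stage 2: mutual learning, initialized from the warm-up checkpoint
subprocess.run(
    ["python", "train.py", "--output_dir", mutual_dir, "--warmup_model", warmup_ckpt],
    check=True,
)
```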
- Python evaluation for the MAE metric: run `mae_eval.py` (a minimal sketch of the metric follows this list).
- MATLAB evaluation for all metrics: run `./matlab_measure/rgbd_metric.m`.
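For reference, MAE is the mean absolute difference between the normalized saliency prediction and the binary ground-truth mask, averaged over a dataset. A minimal sketch of this computation follows; `mae_eval.py` is the authoritative implementation, and the paths below are placeholders:

```python
# Sketch of the MAE metric: mean |prediction - ground truth| with the
# prediction normalized to [0, 1] and the ground truth binarized.
# mae_eval.py is the authoritative implementation; paths are placeholders.
import os
import cv2
import numpy as np

pred_dir = "results/NJU2K_Test"             # predicted saliency maps
gt_dir = "dataset/test_data/gt/NJU2K_Test"  # ground-truth masks

maes = []
for name in sorted(os.listdir(gt_dir)):
    gt = cv2.imread(os.path.join(gt_dir, name), cv2.IMREAD_GRAYSCALE)
    pred = cv2.imread(os.path.join(pred_dir, name), cv2.IMREAD_GRAYSCALE)
    if gt is None or pred is None:
        continue
    pred = cv2.resize(pred, (gt.shape[1], gt.shape[0]))  # match GT resolution
    gt = (gt > 127).astype(np.float64)                   # binarize ground truth
    pred = pred.astype(np.float64) / 255.0               # normalize to [0, 1]
    maes.append(np.abs(pred - gt).mean())

print(f"MAE: {np.mean(maes):.4f}")
```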
Our pre-trained models and predicted saliency maps are available below. All models were trained on a single NVIDIA V100-32G GPU. Average results over the seven RGB-D SOD benchmarks are reported.
Name | S-measure | F-measure | E-measure | MAE | download
---|---|---|---|---|---
SSAL-D | .8399 | .8243 | .8959 | .0612 | saliency maps
SCWC-D | .8415 | .8280 | .9008 | .0604 | saliency maps
BBSNet-W | .8469 | .8072 | .9030 | .0593 | saliency maps
ours w/o pct | .8529 | .8359 | .9041 | .0566 | model / saliency maps
ours | .8633 | .8398 | .9096 | .0549 | model / saliency maps
ours (anno2) | .8550 | .8245 | .9036 | .0596 | model / saliency maps
ours+ | .8619 | .8401 | .9094 | .0543 | model / saliency maps
ours+10%ABS | .8677 | .8456 | .9115 | .0529 | model / saliency maps
- `SSAL-D`, `SCWC-D` and `BBSNet-W` are our implemented scribble-based RGB-D SOD variants.
- `ours w/o pct` is the warm-up stage model. To obtain saliency maps using the pre-trained model, run
python test.py --model_path /path/to/checkpoint_file --warmup_stage
- `ours` and `ours (anno2)` are the mutual learning stage models trained with scribbles labeled by annotator1 (default) and annotator2, respectively. To obtain saliency maps using the pre-trained models, run
python test.py --model_path /path/to/checkpoint_file
- `ours+` and `ours+10%ABS` are the second-round models, where `ours+` is the self-training model without extra scribbles and `ours+10%ABS` is trained with 10% extra scribbles selected by ABS. To obtain saliency maps using the pre-trained models, run
python test.py --model_path /path/to/checkpoint_file --second_round
If you find this project useful for your research, please use the following BibTeX entry.
@article{xu2022weakly,
title={Weakly Supervised RGB-D Salient Object Detection with Prediction Consistency Training and Active Scribble Boosting},
author={Xu, Yunqiu and Yu, Xin and Zhang, Jing and Zhu, Linchao and Wang, Dadong},
journal={IEEE Transactions on Image Processing},
year={2022},
volume={31},
pages={2148--2161},
doi={10.1109/TIP.2022.3151999}
}
This project is released under the MIT license.
We build this project on top of Scribble_Saliency. Thanks for their contribution.
If you have any questions, please drop me an email: [email protected]