guoxiawang / doobnet

Caffe implementation of DOOBNet https://arxiv.org/abs/1806.03772

License: MIT License

Topics: object-boundary-detection, boundary-detection, edge-detection, occlusion-boundary-detection, occlusion-edge-detection, object-occlusion-boundary-detection, bsds500, piod, caffe

doobnet's Introduction

DOOBNet: Deep Object Occlusion Boundary Detection from an Image (arXiv), accepted by ACCV 2018 (Oral)

Created by Guoxia Wang.

Introduction

Object occlusion boundary detection is a fundamental and crucial research problem in computer vision. It is challenging because of the extreme boundary/non-boundary class imbalance encountered when training an object occlusion boundary detector. In this paper, we address this class imbalance by up-weighting the loss contribution of false negative and false positive examples with our novel Attention Loss function. We also propose a unified, end-to-end, multi-task deep object occlusion boundary detection network (DOOBNet) that shares convolutional features to simultaneously predict object boundaries and occlusion orientations. DOOBNet adopts an encoder-decoder structure with skip connections in order to automatically learn multi-scale and multi-level features. We significantly surpass the state of the art on the PIOD dataset (ODS F-score of .702) and the BSDS ownership dataset (ODS F-score of .555), while improving detection speed to 0.037 s per image on the PIOD dataset.
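To make the weighting concrete, here is a minimal NumPy sketch of this attention-style weighting: false negatives (boundary pixels predicted with low probability) and false positives (non-boundary pixels predicted with high probability) are up-weighted by beta^((1-p)^gamma) and beta^(p^gamma). It omits the class-balancing term used in the released Caffe layer, and the beta/gamma values shown are placeholders rather than the paper's defaults.

import numpy as np

def attention_weighted_bce(logits, targets, beta=4.0, gamma=0.5):
    """Sketch of an attention-weighted sigmoid cross-entropy.

    targets is a {0, 1} boundary map; beta and gamma are placeholder
    hyperparameters, and the class-balancing factor of the released
    Caffe layer is omitted for brevity.
    """
    p = 1.0 / (1.0 + np.exp(-logits))           # sigmoid probabilities
    p = np.clip(p, 1e-7, 1.0 - 1e-7)            # avoid log(0)
    pos = targets * beta ** ((1.0 - p) ** gamma) * -np.log(p)        # boundary pixels
    neg = (1.0 - targets) * beta ** (p ** gamma) * -np.log(1.0 - p)  # non-boundary pixels
    return np.mean(pos + neg)                   # normalization choice is also a placeholder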

Citation

If you find DOOBNet useful in your research, please consider citing:

@article{wang2018doobnet,
  Title = {DOOBNet: Deep Object Occlusion Boundary Detection from an Image},
  Author = {Guoxia Wang and XiaoChuan Wang and Frederick W. B. Li and Xiaohui Liang},
  Journal = {arXiv preprint arXiv:1806.03772},
  Year = {2018}
}

Demo

Here, we assume you are in the DOOBNet root directory $DOOBNET_ROOT.

If you want to run DOOBNet quickly, download our trained model from DOOBNet PIOD and save doobnet_piod.caffemodel to $DOOBNET_ROOT/examples/doobnet/Models/. Then change to that folder and run the Python demo script.

cd $DOOBNET_ROOT/examples/doobnet
python doobnet_demo.py

Data Preparation

PASCAL Instance Occlusion Dataset (PIOD)

You may download the original images from PASCAL VOC 2010 and the annotations from here. Then copy or move the JPEGImages folder from PASCAL VOC 2010, together with the Data folder and val_doc_2010.txt from PIOD, to data/PIOD/. You will have the following directory structure:

PIOD
|_ Data
|  |_ <id-1>.mat
|  |_ ...
|  |_ <id-n>.mat
|_ JPEGImages 
|  |_ <id-1>.jpg
|  |_ ...
|  |_ <id-n>.jpg
|_ val_doc_2010.txt

Now you can use the data conversion tool to augment the data and generate it in HDF5 format for DOOBNet.

mkdir data/PIOD/Augmentation

python doobscripts/doobnet_mat2hdf5_edge_ori.py \
--dataset PIOD \
--label-dir data/PIOD/Data \
--img-dir data/PIOD/JPEGImages \
--piod-val-list-file data/PIOD/val_doc_2010.txt \
--output-dir data/PIOD/Augmentation
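
Optionally, you can sanity-check the generated files. The snippet below is a minimal sketch that simply lists whatever datasets the converter wrote; the .h5 extension and output path are assumptions, so adjust them if your files differ.

import glob
import h5py

# List every HDF5 file in the augmentation folder and print its datasets
# and shapes, without assuming any particular key names.
for path in sorted(glob.glob('data/PIOD/Augmentation/*.h5')):
    with h5py.File(path, 'r') as f:
        print(path, {name: f[name].shape for name in f.keys()})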

BSDS ownership

For the BSDS ownership dataset, you may download the original images from BSDS300 and the annotations from here. Then copy or move the BSDS300 folder from BSDS300-images, together with the trainfg and testfg folders from BSDS_theta, to data/BSDSownership/. You will have the following directory structure:

BSDSownership
|_ trainfg
|  |_ <id-1>.mat
|  |_ ...
|  |_ <id-n>.mat
|_ testfg
|  |_ <id-1>.mat
|  |_ ...
|  |_ <id-n>.mat
|_ BSDS300
|  |_ images
|     |_ train
|        |_ <id-1>.jpg
|        |_ ...
|        |_ <id-n>.jpg
|     |_ ...
|  |_ ...

Note that the BSDS ownership training and test sets are both split from the 200 BSDS300 training images (100 for training, 100 for testing). For more information, compare the ids in the trainfg and testfg folders with the ids in the BSDS300/images/train folder, or refer to here. A quick way to check the split is sketched below.
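
This small check (a sketch, not part of the release) verifies that the two splits are disjoint and that both come from BSDS300/images/train, assuming the directory layout shown above.

import os

def ids(folder, ext):
    # Collect file ids (names without extension) from a folder.
    return {os.path.splitext(name)[0] for name in os.listdir(folder) if name.endswith(ext)}

train = ids('data/BSDSownership/trainfg', '.mat')
test = ids('data/BSDSownership/testfg', '.mat')
imgs = ids('data/BSDSownership/BSDS300/images/train', '.jpg')

print(len(train), len(test), len(train & test))  # expect 100, 100, 0
print((train | test) <= imgs)                    # expect True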

Run the following command for the BSDS ownership dataset.

mkdir data/BSDSownership/Augmentation

python doobscripts/doobnet_mat2hdf5_edge_ori.py \
--dataset BSDSownership \
--label-dir data/BSDSownership/trainfg \
--img-dir data/BSDSownership/BSDS300/images/train \
--bsdsownership-testfg data/BSDSownership/testfg \
--output-dir data/BSDSownership/Augmentation 

Training

First, download the ResNet-50 weight file from Res50 and save resnet50.caffemodel to the folder $DOOBNET_ROOT/models/resnet/.

PASCAL Instance Occlusion Dataset (PIOD)

To train DOOBNet on the PIOD training set, run:

cd $DOOBNET_ROOT/examples/doobnet/PIOD

./train.sh

When training has completed, set model = '../Models/doobnet_piod.caffemodel' in deploy_doobnet_piod.py and then run python deploy_doobnet_piod.py to get the results on the PIOD testing set (a rough sketch of this deployment step is shown below). For comparison, you can also download our trained model from DOOBNet PIOD.
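
For orientation, here is a minimal pycaffe inference sketch. It is not the released deploy_doobnet_piod.py; the prototxt name, blob names, preprocessing mean, and output interpretation are assumptions, so check them against that script.

import caffe
import cv2
import numpy as np

caffe.set_mode_gpu()
# File names below are assumptions; use the deploy prototxt shipped with the repo.
net = caffe.Net('deploy_doobnet_piod.prototxt',
                '../Models/doobnet_piod.caffemodel',
                caffe.TEST)

img = cv2.imread('example.jpg').astype(np.float32)   # BGR image, as Caffe expects
img -= np.array([104.0, 117.0, 123.0])               # assumed per-channel mean subtraction
blob = img.transpose(2, 0, 1)[None, ...]             # HWC -> NCHW

net.blobs['data'].reshape(*blob.shape)
net.blobs['data'].data[...] = blob
outputs = net.forward()                              # boundary and orientation maps
print({name: out.shape for name, out in outputs.items()})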

BSDS ownership

To train DOOBNet on the BSDS ownership dataset, follow the same procedure as for the PIOD dataset above. You can download our trained model from DOOBNet BSDSownership.

Evaluation

We provide the evaluation and visualization code for the PIOD and BSDS ownership datasets in the doobscripts folder.

Note that you need to configure the necessary paths and variables. For more information, please refer to doobscripts/README.md.

To run the evaluation:

run doobscripts/evaluation/EvaluateOcc.m

Optional

For visualization, run the script:

run doobscripts/visulation/PlotAll.m

Third-party re-implementations

  1. TensorFlow, Attention Loss: code. Thanks to Guo Rui for the contribution!

doobnet's People

Contributors

guoxiawang


doobnet's Issues

Offer edge_ori maps of previous methods for evaluation

Hi guoxia,

Thanks a lot for your evaluation code, it's very useful! By the way, could you kindly offer a link to the edge_ori predictions of previous methods? When I try to run OccCompCurvesPlot.m, I can't find the predictions of previous methods. I think providing this resource would be a good contribution to the community.

Thanks in advance!

Back-propping via the Attention Loss weighting

I noticed that in src/caffe/layers/class_balanced_sigmoid_cross_entropy_attention_loss_layer.cu the following code:
bottom_diff[i] = scale[i] * (target_value == 1 ? (1 - sigmoid_data[ i ]) : sigmoid_data[ i ]) * tmp;
suggests that scale[i] is treated as a constant. It therefore appears that only the log(p) and log(1-p) terms carry a gradient, and not the beta^((1-p)^gamma) or beta^(p^gamma) factors.

Is this because it otherwise leads to numerical instability?

Some small problems

In the Python file doobscripts/doobnet_mat2hdf5_edge_ori.py, there are two problems:

  1. lines 343 and 346 should be train_pair_320x320.lst rather than train_pari_320x320.lst
  2. line 353 uses Python 2 print syntax; it should be:
print('Down!')
# print 'Down!'

About DOOBNet's evaluation code

Hi, Guoxia Wang. I tried to use your DOOBNet/doobscripts/evaluation/EvaluateOcc.m to evaluate occlusion edge results, but I don't understand the content of edge_maps in EvaluateBoundary.m:

resfile = fullfile(resPath, [imglist{ires}, '.mat']);
edge_maps = load(resfile);
edge_maps = edge_maps.edge_ori;
res_img = zeros([size(edge_maps.edge), 2], 'single');
res_img(:,:,1) = edge_maps.edge;
res_img(:,:,2) = edge_maps.ori;

Can you please explain what is included in edge_maps, and also whether edge_maps.edge and edge_maps.ori are the raw test outputs, without any post-processing (such as NMS)?
Thank you very much!

About the evaluation method for the occlusion orientation map

Hi Guoxia Wang,
I have read the evaluation MATLAB code, which includes the evaluation of both edges and occlusion orientation, and I found the occlusion orientation part a little confusing. Here is a piece of code from doobscripts/evaluation/collect_eval_bdry_occ.m:

[screenshot of the code omitted]

From the code above, AA_edge is passed to collect_eval_bdry_v2 for edge evaluation, and AA_ori for occlusion orientation evaluation. But as noted in my annotation above, AA_ori consists of [thresh cntR sumR cntP_occ sumP], so why are the edge results cntR and sumR used here (i.e. edge results used in the orientation evaluation)? This may not be proper: if I evaluate the results this way, the orientation result is influenced by the edge result (when the edge result improves, the orientation result improves too).

Besides, in the orientation prediction task there are no positives and negatives (unlike the edge task), so it is hard to compute "recall" and "precision" the way the edge task does. Why not use only precision to evaluate the orientation map?

Many thanks for your answer!

About DOC's .h5 data

Hi Guoxia Wang! Recently I tried to train the orientation net of DOC, but it needs .h5 format, so I used your doobnet_mat2hdf5_edge_ori.py to transform the .mat files to .h5 and obtained an .h5 training dataset that includes the edge map and orientation map channels. But when I train the orientation net, I encounter INF and NaN errors in the loss. I wonder whether the .h5 dataset can be applied directly to DOC, or whether I should modify it. Thanks for your time!

About the normalization of loss

Hi!
When I read the code of class_balanced_sigmoid_cross_entropy_attention_loss_layer.cu to figure out the implementation details of this loss, I found that you use "FULL" normalization (sum the loss and then divide by N*H*W). But in your paper you use "BATCH_SIZE" normalization (sum the loss and then divide by the batch size). Could you please tell me which method is correct?
Thanks!

How to generate the orientation label?

I'm interested in your work and want to use this method on my own dataset, but I have no idea how to generate the orientation label from the edge label. Can you provide some details about it? Thank you, bro!
