
guoxiawang / doobnet


Caffe implementation of DOOBNet https://arxiv.org/abs/1806.03772

License: MIT License

Languages: MATLAB 7.60%, Mathematica 0.03%, Java 0.03%, M 0.14%, Python 8.65%, CMake 2.58%, Makefile 0.61%, Dockerfile 0.07%, C++ 74.19%, Cuda 5.71%, Shell 0.39%

Topics: boundary-detection, bsds500, caffe, edge-detection, object-boundary-detection, object-occlusion-boundary-detection, occlusion-boundary-detection, occlusion-edge-detection, piod

doobnet's Issues

Back-propping via the Attention Loss weighting

I noticed that in src/caffe/layers/class_balanced_sigmoid_cross_entropy_attention_loss_layer.cu the following code:
bottom_diff[i] = scale[i] * (target_value == 1 ? (1 - sigmoid_data[i]) : sigmoid_data[i]) * tmp;
suggests that scale[i] is treated as a constant. It therefore appears that only the log(p) and log(1-p) terms carry a gradient, and not the beta^{(1-p)^gamma} and beta^{p^gamma} factors.

Is this because it otherwise leads to numerical instability?
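For concreteness, here is a small numpy sketch (my own, not the repository's CUDA code) of the two gradients for a single positive pixel, where L(p) = -beta^((1-p)^gamma) * log(p) and p = sigmoid(x). The beta and gamma values below are placeholders, and the alpha balancing factor is omitted:

import numpy as np

beta, gamma = 4.0, 0.5  # placeholder hyper-parameters, not necessarily the paper's

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grads(x):
    p = sigmoid(x)
    scale = beta ** ((1.0 - p) ** gamma)  # the modulating factor
    partial = -scale * (1.0 - p)          # scale treated as a constant (the reading of the .cu line above)
    # extra term from differentiating the modulating factor itself
    dscale_dp = -scale * np.log(beta) * gamma * (1.0 - p) ** (gamma - 1.0)
    full = partial + (-np.log(p)) * dscale_dp * p * (1.0 - p)
    return partial, full

for x in [-2.0, 0.0, 2.0]:
    print(x, grads(x))

Note that (1 - p)^(gamma - 1) blows up as p approaches 1 when gamma < 1, which would be one obvious source of numerical trouble if the factor were back-propagated through.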

About DOOBNet's evaluation code

Hi, Guoxia Wang. I tried to use your DOOBNet/doobscripts/evaluation/EvaluateOcc.m to evaluate occlusion edge results, but I don't understand the content of edge_maps in EvaluateBoundary.m:

resfile = fullfile(resPath, [imglist{ires}, '.mat']);   % per-image result .mat file
edge_maps = load(resfile);
edge_maps = edge_maps.edge_ori;                          % the file's edge_ori struct
res_img = zeros([size(edge_maps.edge), 2], 'single');
res_img(:,:,1) = edge_maps.edge;                         % channel 1: edge_ori.edge
res_img(:,:,2) = edge_maps.ori;                          % channel 2: edge_ori.ori

Can you please explain what edge_maps contains, and also whether edge_maps.edge and edge_maps.ori are the raw outputs of the test, without any post-processing (such as NMS)?
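For reference, here is a minimal Python sketch (an assumption on my side, not taken from the repo) of writing a per-image prediction .mat file in the layout the snippet above expects, i.e. a struct named edge_ori with fields edge and ori:

import numpy as np
import scipy.io as sio

h, w = 320, 320                                         # example size only
edge = np.random.rand(h, w).astype(np.float32)          # stand-in for the sigmoid edge map
ori = (np.random.rand(h, w).astype(np.float32) * 2.0 - 1.0) * np.pi  # stand-in orientation (radians)

# scipy writes the nested dict as a MATLAB struct, so MATLAB sees edge_ori.edge / edge_ori.ori
sio.savemat('example_image.mat', {'edge_ori': {'edge': edge, 'ori': ori}})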
Thank you very much!

Some small problems

In the Python file doobscript/doobnet_mat2hdf5_edge_ori.py there are two problems:

  1. Lines 343 and 346 should read train_pair_320x320.lst rather than train_pari_320x320.lst.
  2. Line 353 uses Python 2 print syntax:
print('Down!')    # Python 3 form
# print 'Down!'   # the original Python 2 statement

Offer edge_ori maps of previous methods for evaluation

Hi Guoxia,

Thanks a lot for your evaluation code, it's very useful! By the way, could you kindly offer a link to the edge_ori predictions of previous methods? When I try to run OccCompCurvesPlot.m, I can't find them. I think providing them would be a good contribution to the community.

Thank you in advance!

About the evaluation method for the occlusion orientation map

Hi Guoxia Wang,
I have read the evaluation MATLAB code, which includes the evaluation of both the edge and the occlusion orientation, and I found the occlusion orientation part a little confusing. Here is a piece of code from doobscript/evaluation/collect_eval_bdry_occ.m:

[screenshot of code from collect_eval_bdry_occ.m, not reproduced here]

From the code above, AA_edge is passed to collect_eval_bdry_v2 for edge evaluation and AA_ori for occlusion orientation evaluation. But, as noted in the annotation above, AA_ori consists of [thresh cntR sumR cntP_occ sumP], so why are the edge counts cntR and sumR used here (i.e. edge results inside the orientation evaluation)? This may not be proper: if I evaluate the results this way, the orientation score is influenced by the edge score (when the edge result improves, the orientation result improves too).

Besides, in the orientation prediction task there are no positives and negatives (unlike the edge task), so it is hard to compute "recall" and "precision" the way the edge task does. Why not use precision alone to evaluate the orientation map?
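To make the concern concrete, here is a small Python sketch of my reading of that scheme (not the repository's MATLAB code), using the AA_ori column layout quoted above:

def occ_pr(cntR, sumR, cntP_occ, sumP):
    # recall reuses the edge matching counts, so it is exactly the edge recall
    recall = cntR / max(sumR, 1e-12)
    # precision only credits matched prediction pixels whose orientation is also correct
    precision = cntP_occ / max(sumP, 1e-12)
    f = 2.0 * precision * recall / max(precision + recall, 1e-12)
    return recall, precision, f

# toy counts for a single threshold
print(occ_pr(cntR=800, sumR=1000, cntP_occ=700, sumP=900))  # roughly (0.80, 0.78, 0.79)

Since cntP_occ can never exceed the edge cntP, the orientation score is bounded by the edge score under this scheme, which is exactly the dependence described above.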

Really, thanks for your answer!

About the normalization of loss

Hi!
While reading class_balanced_sigmoid_cross_entropy_attention_loss_layer.cu to figure out the implementation details of this loss, I found that you use "FULL" normalization (sum the loss and divide by N*H*W). But in your paper you use "BATCH_SIZE" normalization (sum the loss and divide by the batch size). Could you please tell me which method is correct?
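For illustration, a tiny numpy sketch (an assumption, not the Caffe layer itself) of the difference between the two normalizations:

import numpy as np

per_pixel_loss = np.random.rand(4, 1, 320, 320)   # toy (N, C, H, W) per-pixel loss map
n, _, h, w = per_pixel_loss.shape

loss_full = per_pixel_loss.sum() / (n * h * w)    # "FULL": divide by N*H*W
loss_batch = per_pixel_loss.sum() / n             # "BATCH_SIZE": divide by N only

print(loss_full, loss_batch, loss_batch / loss_full)   # the ratio is exactly H*W

The two only differ by the constant factor H*W, which effectively rescales the gradient (and thus the learning rate) for this loss.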
Thanks!

About DOC's .h5 data

Hi Guoxia Wang! Recently I tried to train the orientation net of DOC, but it needs the .h5 format, so I used your "doobnet_mat2hdf5_edge_ori.py" to transform the .mat files to .h5 and obtained an .h5 training dataset that includes the edge map and orientation map channels. But when I train the orientation net I encounter INF and NAN errors in the loss. I wonder whether this .h5 dataset can be applied directly to DOC, or whether I should modify it. Thanks for your time!
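As a first sanity check, this is the kind of quick inspection I would run on the generated .h5 file before digging into the loss layer (a sketch; the file name is hypothetical and the dataset keys may differ from what doobnet_mat2hdf5_edge_ori.py actually writes):

import h5py
import numpy as np

with h5py.File('train_320x320.h5', 'r') as f:   # hypothetical file name
    for key in f.keys():
        arr = np.asarray(f[key])
        print(key, arr.shape, arr.dtype,
              'min=%.4f' % float(arr.min()), 'max=%.4f' % float(arr.max()),
              'nan=%d' % int(np.isnan(arr).sum()),
              'inf=%d' % int(np.isinf(arr).sum()))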

How to generate the orientation label?

I'm interested in your work and want to use this method on my customized dataset, but I have no idea how to generate the orientation label from the edge label. Can you provide some details about it? Thank you, bro!
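In case it helps others reading this, here is one possible way (my own assumption, not the authors' pipeline) to derive a per-pixel orientation label when figure-ground instance masks are available, using the "foreground on the left of the tangent" convention described in the DOC/DOOBNet papers. Note that the edge label alone is not enough, because it does not say which side is the occluding foreground, and the sign convention below may need flipping to match a given dataset such as PIOD:

import numpy as np
from scipy import ndimage

def orientation_label(fg_mask):
    """fg_mask: HxW bool array, True on the occluding (foreground) object."""
    m = ndimage.gaussian_filter(fg_mask.astype(np.float32), sigma=1.0)
    gy, gx = np.gradient(m)                       # gradient points toward the foreground
    # boundary pixels: foreground pixels that touch the background
    boundary = fg_mask & ~ndimage.binary_erosion(fg_mask)
    # tangent t = (-gy, gx) keeps the foreground on the left when walking along t
    # (image coordinates, y pointing down); theta = atan2(t_y, t_x)
    theta = np.arctan2(gx, -gy)
    edge = boundary.astype(np.float32)
    ori = np.where(boundary, theta, 0.0).astype(np.float32)
    return edge, ori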
