
hed's Introduction

Holistically-Nested Edge Detection

Created by Saining Xie at UC San Diego

Introduction:

We develop a new edge detection algorithm, holistically-nested edge detection (HED), which performs image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets. HED automatically learns rich hierarchical representations (guided by deep supervision on side responses) that are important for resolving the challenging ambiguity in edge and object boundary detection. We significantly advance the state of the art on the BSDS500 dataset (ODS F-score of .790) and the NYU Depth dataset (ODS F-score of .746), and do so with improved speed (0.4s per image). A detailed description of the system can be found in our paper.

Citations

If you are using the code/model/data provided here in a publication, please cite our paper:

@InProceedings{xie15hed,
  author    = {Xie, Saining and Tu, Zhuowen},
  title     = {Holistically-Nested Edge Detection},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision},
  year      = {2015},
}

Changelog

If you downloaded the previous version (testing code only) of HED, please note that we have updated the code base to the new version of Caffe. We have also uploaded a new pretrained model with better performance, and we now use the Python interface written for the FCN paper instead of our own implementation for training and testing. The evaluation protocol does not change.

Pretrained model

We provide the pretrained model and training/testing code for the edge detection framework Holistically-Nested Edge Detection (HED). Please see the arXiv or ICCV paper for technical details. The pretrained model (fusion-output) gives ODS=.790 and OIS=.808 on the BSDS benchmark dataset.

  1. Download the pretrained model (56MB) from https://vcl.ucsd.edu/hed/hed_pretrained_bsds.caffemodel and place it in the examples/hed/ folder.

Installing

  1. Install the prerequisites for Caffe (http://caffe.berkeleyvision.org/installation.html#prerequisites).
  2. Build the modified Caffe for HED: https://github.com/s9xie/hed.git

Training HED

To reproduce our results on the BSDS500 dataset:

  1. Data: download the augmented BSDS data (1.2GB) from https://vcl.ucsd.edu/hed/HED-BSDS.tar and extract it in the data/ folder.
  2. Initial model: download the fully convolutional VGG model (248MB) from https://vcl.ucsd.edu/hed/5stage-vgg.caffemodel and put it in the examples/hed folder.
  3. Run the Python script python solve.py in examples/hed (a minimal sketch of what this does follows).
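For orientation, here is a minimal sketch of the training driver. It assumes the pycaffe interface and the file names above; the actual solve.py may do more (e.g. snapshotting or layer surgery on the upsampling weights), so treat this as illustrative only.

import caffe

# Use GPU 0; switch to caffe.set_mode_cpu() for a CPU-only build.
caffe.set_device(0)
caffe.set_mode_gpu()

# Load the solver configuration shipped in examples/hed/.
solver = caffe.SGDSolver('solver.prototxt')

# Initialize the weights from the fully convolutional VGG model.
solver.net.copy_from('5stage-vgg.caffemodel')

# Run SGD; the iteration count here is a placeholder.
solver.step(10000)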

Testing HED

Please refer to the IPython notebook in examples/hed/ to test a trained model. The fusion output and the individual side outputs from five scales are produced after one forward pass.
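As a rough sketch of what the notebook does (assuming the pycaffe interface; the deploy file name, the mean values, and the blob names such as 'sigmoid-fuse' and 'sigmoid-dsn1' should be verified against the notebook itself):

import numpy as np
import caffe

caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt', 'hed_pretrained_bsds.caffemodel', caffe.TEST)

# Load an image, convert to BGR channel-first, subtract the mean
# (the values commonly used with the VGG/FCN models).
im = caffe.io.load_image('test.jpg')              # HxWx3, RGB, float in [0, 1]
im = im[:, :, ::-1] * 255.0                       # to BGR in [0, 255]
im -= np.array((104.00698793, 116.66876762, 122.67891434))
data = im.transpose((2, 0, 1))                    # to 3xHxW

# Reshape the input blob to this image and run one forward pass.
net.blobs['data'].reshape(1, *data.shape)
net.blobs['data'].data[0] = data
net.forward()

# Fusion output plus the five side outputs, each an HxW edge map.
fuse = net.blobs['sigmoid-fuse'].data[0, 0]
sides = [net.blobs['sigmoid-dsn%d' % k].data[0, 0] for k in range(1, 6)]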

Note that if you want to evaluate the results on the BSDS benchmark dataset, you should apply the standard non-maximum suppression (NMS) and edge thinning. We used Piotr Dollár's Structured Edge Detection MATLAB toolbox, available at https://github.com/pdollar/edges. Some helper functions are also provided in the eval/ folder.
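If you only need a quick visual approximation in Python rather than benchmark numbers, morphological thinning of a binarized edge map is one option. This is a rough sketch using scikit-image and is not a substitute for the toolbox's NMS in the evaluation protocol.

import numpy as np
from skimage.morphology import thin

def thin_edges(edge_map, threshold=0.5):
    # Binarize the soft HED output, then thin to one-pixel-wide edges.
    binary = edge_map >= threshold
    return thin(binary).astype(np.float32)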

Batch Processing

Jun-Yan Zhu from UC Berkeley recently applied HED for their Image-to-Image Translation work. A nice script for batch-processing HED edge detection can be found here. Thanks Jun-Yan!

Precomputed Results

If you want to compare your method with HED and need the precomputed results, you can download them from (https://vcl.ucsd.edu/hed/eval_results.tar).

Acknowledgment:

This code is based on Caffe. Thanks to the contributors of Caffe. Thanks @shelhamer and @longjon for providing fundamental implementations that enable fully convolutional training/testing in Caffe.

@misc{Jia13caffe,
  author       = {Yangqing Jia},
  title        = {{Caffe}: An Open Source Convolutional Architecture for Fast Feature Embedding},
  year         = {2013},
  howpublished = {\url{http://caffe.berkeleyvision.org/}},
}

If you encounter any issue when using our code or model, please let me know.

hed's People

Contributors

s9xie

hed's Issues

Package Version

Would you provide a list of the versions of the related packages, such as CUDA and cuDNN, instead of the general installation instructions from the official website? It would help me a lot. Thank you very much!

caffe crash when I set batch size > 1

When I use the default batch_size = 1, running 'caffe train --solver solver.prototxt' works. But if I set batch_size = 2 in the train_val file, Caffe crashes.
=============== error log ==============
...
layer {
type: "SigmoidCrossEntropyLoss"
bottom: "upscore-fuse"
bottom: "label"
top: "fuse_loss"
loss_weight: 1
}
@ 0x7f5207b74a4a (unknown)
I0509 14:28:30.309254 22135 layer_factory.hpp:76] Creating layer data
I0509 14:28:30.309273 22135 net.cpp:111] Creating Layer data
I0509 14:28:30.309281 22135 net.cpp:434] data -> data
I0509 14:28:30.309293 22135 net.cpp:434] data -> label
I0509 14:28:30.309303 22135 image_labelmap_data_layer.cpp:40] Opening file ../../data/HED-BSDS/train_pair.lst
@ 0x7f5207953182 start_thread
@ 0x7f520807d47d (unknown)
@ (nil) (unknown)
Aborted (core dumped)
============= end of errors =====================

Sometimes I got this error:
============== error log ===============
layer {
type: "SigmoidCrossEntropyLoss"
bottom: "upscore-fuse"
bottom: "label"
top: "fuse_loss"
loss_weight: 1
}
@ 0x7fe89b337598 caffe::ImageLabelmapDataLayer<>::load_batch()
I0509 14:34:30.756006 24849 layer_factory.hpp:76] Creating layer data
I0509 14:34:30.756026 24849 net.cpp:111] Creating Layer data
I0509 14:34:30.756033 24849 net.cpp:434] data -> data
I0509 14:34:30.756044 24849 net.cpp:434] data -> label
I0509 14:34:30.756054 24849 image_labelmap_data_layer.cpp:40] Opening file ../../data/HED-BSDS/train_pair.lst
@ 0x7fe89b2aeda9 caffe::BasePrefetchingLabelmapDataLayer<>::InternalThreadEntry()
@ 0x7fe89b24bb6f caffe::InternalThread::entry()
@ 0x7fe89958fa4a (unknown)
@ 0x7fe89936e182 start_thread
@ 0x7fe899a9847d (unknown)
@ (nil) (unknown)
Aborted (core dumped)
================ end of errors ==============

Why does the batch size have to be one? I think my card has enough memory for more than 2 images as input...

Loss at convergence

Hi,

I tried to build my own FCN, but the loss I get after fine-tuning is still around 1.1k for 512×384 images. Could anyone tell me the loss you got when the CNN converged?

Many thanks.

Compilation error

It was nice to read your innovative paper, and I want to replicate the performance you posted. Unfortunately, some errors occur when I compile your latest project. Would you please fix these bugs?

Validation and test Images

Hi, I'm new to ML.
I tried to train HED with my own dataset of 24 images, which works fine.
When I used the full 960-image dataset, it gives weird-looking result images.

All I have is one training folder (original and ground-truth images) and one test folder with the same structure as the BSDS dataset. In the HED paper it is written that their dataset is composed of "200 training, 100 validation, and 200 testing images". I don't find any validation images in the BSDS dataset folder.

In the train_val.prototxt file's testing phase, "#Just setup the network. No real online testing" is mentioned, and it also points to the same file. Does that mean training doesn't include testing?

Can anyone help me with these:

  1. How to setup validation images
  2. How to setup testing images

Error while closing application using custom layers

Hi,
first of all, congratulations on this awesome project; the results are really impressive. I'm trying to load the trained network to detect lines directly in my C++ application. Everything seems to work fine, but I get an annoying error when the application is closed:
*** Error in `./myApp': corrupted double-linked list: 0x0000000000975960 ***

The error is raised only when the application closes, so I'm not sure how to track it down; I tried destroying the Caffe instances explicitly and got no error.

To reproduce the error you can just use the cpp_classification example included in the "official" version of Caffe and remove the labeling part. If necessary, I can send you the modified file.

Any idea how to fix this error? I tried some networks with the "official" version and did not have this problem.

thanks in advance,
Fran.

Training on NYUD dataset

Hi Saining,

I would like to reproduce the results on the NYUD dataset in your paper. Could you please provide details on how to prepare the dataset in a format acceptable to the HED algorithm? For example, do I need to augment the images in a similar manner as the BSDS dataset? How do I convert a binary boundary mask to the ground-truth format?

Looking forward to your reply and many thanks,
Feng

Compile error, at SigmoidCrossEntropyLoss Layer

Hi, under Ubuntu 14.04 in CPU mode, I tried to run HED but got the following error:

CXX src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp
src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp:118:1: error: no 'void caffe::SigmoidCrossEntropyLossLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<bool>&, const std::vector<caffe::Blob<Dtype>*>&)' member function declared in class 'caffe::SigmoidCrossEntropyLossLayer<Dtype>'
make: *** [.build_release/src/caffe/layers/sigmoid_cross_entropy_loss_layer.o] Error 1

I read "sigmoid_cross_entropy_loss_layer.cpp" and
it seems that "sigmoid_cross_entropy_loss_layer.hpp" file, which declare the class SigmoidCrossEntropyLossLayer::Backward_gpu
is deleted in this hed version, and instead includes "caffe/layer.hpp" file
that the class but not in the name of SigmoidCross...

Has anyone had the same issue and solved it? I look forward to your reply or any suggestions, thanks!

CUDA error during the testing stage

Hi there,

I modified the testing Python code provided in the package. However, when I try to predict images using the pretrained BSDS model, I get the following error:
F0106 13:18:05.372474 6347 syncedmem.cpp:19] Check failed: error == cudaSuccess (29 vs. 0) driver shutting down

I have tested on two machines, one with a GTX Titan X and the other with a GTX 960, and both give the same error. If I supply a list of images, this only happens after the code finishes processing all of them. It appears to be a side effect of a certain version of Caffe. So far I have not found any solution; could you please help me with this?

Thanks in advance,
Feng

About the input labels

Hello, I displayed the input images and labels, and I found that for some images the label (after dividing by 255) is all zeros, even though there are clearly edges in the image. Wouldn't this confuse the network? Can anybody help? Thanks a lot.
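For anyone who wants to screen their data for this, a small diagnostic sketch (assuming grayscale PNG ground-truth maps; the function name and path handling are hypothetical):

import numpy as np
from PIL import Image

def is_empty_label(path):
    # True if the ground-truth map contains no edge pixels at all.
    gt = np.array(Image.open(path).convert('L'), dtype=np.float32) / 255.0
    return gt.max() == 0.0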

make error with openblas

I'm using Ubuntu 14.04 with OpenBLAS; the make all command gives the following errors:

[ 85%] Building CXX object src/caffe/CMakeFiles/caffe.dir/layer.cpp.o
Linking CXX shared library ../../lib/libcaffe.so
[ 86%] Built target caffe
Scanning dependencies of target caffe.bin
[ 86%] Building CXX object tools/CMakeFiles/caffe.bin.dir/caffe.cpp.o
Linking CXX executable caffe
/usr/bin/ld: caffe: hidden symbol `pthread_atfork' in /usr/lib/x86_64-linux-gnu/libpthread_nonshared.a(pthread_atfork.oS) is referenced by DSO
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
make[2]: *** [tools/caffe] Error 1
make[1]: *** [tools/CMakeFiles/caffe.bin.dir/all] Error 2
make: *** [all] Error 2

Any help would be appreciated. Thanks!

make error problem

hi there
I followed the tutorial, using the files provided by HED to modify my Caffe.
I typed make clean and then make all, and some errors arose:
CXX src/caffe/layers/filter_layer.cpp
CXX src/caffe/layers/flatten_layer.cpp
CXX src/caffe/layers/hdf5_data_layer.cpp
CXX src/caffe/layers/hdf5_output_layer.cpp
CXX src/caffe/layers/hinge_loss_layer.cpp
src/caffe/layers/bias_layer.cpp: In member function 'virtual void caffe::BiasLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&)':
src/caffe/layers/bias_layer.cpp:16:11: error: 'BiasParameter' does not name a type
  const BiasParameter& param = this->layer_param_.bias_param();
src/caffe/layers/bias_layer.cpp:17:52: error: 'param' was not declared in this scope
  const int axis = bottom[0]->CanonicalAxisIndex(param.axis());
src/caffe/layers/bias_layer.cpp: In member function 'virtual void caffe::BiasLayer<Dtype>::Reshape(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&)':
src/caffe/layers/bias_layer.cpp:42:9: error: 'BiasParameter' does not name a type
src/caffe/layers/bias_layer.cpp:49:41: error: 'param' was not declared in this scope
make: *** [.build_release/src/caffe/layers/bias_layer.o] Error 1
make: *** Waiting for unfinished jobs....
src/caffe/layers/batch_norm_layer.cpp: In member function 'virtual void caffe::BatchNormLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&)':
src/caffe/layers/batch_norm_layer.cpp:12:3: error: 'BatchNormParameter' was not declared in this scope
src/caffe/layers/batch_norm_layer.cpp:13:30: error: 'param' was not declared in this scope
make: *** [.build_release/src/caffe/layers/batch_norm_layer.o] Error 1
src/caffe/layers/elu_layer.cpp: In instantiation of 'void caffe::ELULayer<Dtype>::Forward_cpu(...) [with Dtype = float]' (and Backward_cpu, and again with Dtype = double):
src/caffe/layers/elu_layer.cpp:14:54: error: 'class caffe::LayerParameter' has no member named 'elu_param'
src/caffe/layers/elu_layer.cpp:31:56: error: 'class caffe::LayerParameter' has no member named 'elu_param'
  Dtype alpha = this->layer_param_.elu_param().alpha();
make: *** [.build_release/src/caffe/layers/elu_layer.o] Error 1

So what happened? How can I solve this problem?

where is class-balanced cross-entropy loss in the code

In your paper there is "we define the following class-balanced cross-entropy loss function used in Equation", but I can't find it in your code. In SigmoidCrossEntropyLoss it is just a common cross-entropy loss, without class balancing.
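For reference, the paper's class-balanced loss multiplies the positive-pixel cross-entropy terms by beta = |Y-|/|Y| and the negative-pixel terms by 1 - beta, computed per image. A minimal numpy illustration of that definition (my own sketch, not the repo's code path, which folds the weighting into the C++ loss layer quoted in another issue):

import numpy as np

def class_balanced_bce(logits, targets, eps=1e-12):
    # targets: binary edge map (1 = edge, 0 = non-edge), same shape as logits.
    p = 1.0 / (1.0 + np.exp(-logits))                  # sigmoid probabilities
    pos = targets == 1
    neg = targets == 0
    beta = neg.sum() / float(pos.sum() + neg.sum())    # |Y-| / |Y|, per image
    loss_pos = -beta * np.log(p[pos] + eps).sum()
    loss_neg = -(1.0 - beta) * np.log(1.0 - p[neg] + eps).sum()
    return loss_pos + loss_neg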

Reimplemented based on new caffe

Hey there, I've reimplemented HED based on a new version of Caffe, because HED's bundled Caffe doesn't support the current cuDNN, and using cuDNN dramatically boosts training speed.

I achieved ODS=0.779 on the BSDS500 dataset, similar to the 0.78 HED reported in its paper.

Welcome to try out the code: https://github.com/zeakey/hed! #20 #40 #41

Make errors

Hi, I just cloned this project, copied it over the root directory of Caffe, and then rebuilt Caffe using make clean && make all. Then I hit the problem below. Have you seen this error before, and how can I solve it? Thank you!

root@hed-dev:/opt/caffe# make clean
root@hed-dev:/opt/caffe# make all

PROTOC src/caffe/proto/caffe.proto
CXX .build_release/src/caffe/proto/caffe.pb.cc
CXX src/caffe/common.cpp
CXX src/caffe/util/insert_splits.cpp
CXX src/caffe/util/im2col.cpp
CXX src/caffe/util/math_functions.cpp
CXX src/caffe/util/db.cpp
CXX src/caffe/util/signal_handler.cpp
CXX src/caffe/util/upgrade_proto.cpp
CXX src/caffe/util/io.cpp
CXX src/caffe/util/hdf5.cpp
CXX src/caffe/util/cudnn.cpp
CXX src/caffe/util/benchmark.cpp
CXX src/caffe/util/db_lmdb.cpp
CXX src/caffe/util/db_leveldb.cpp
CXX src/caffe/util/blocking_queue.cpp
CXX src/caffe/layers/neuron_layer.cpp
CXX src/caffe/layers/recurrent_layer.cpp
src/caffe/layers/recurrent_layer.cpp: In member function 'virtual void caffe::RecurrentLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&)':
src/caffe/layers/recurrent_layer.cpp:56:3: error: 'InputParameter' was not declared in this scope
InputParameter* input_param = input_layer_param->mutable_input_param();
^
src/caffe/layers/recurrent_layer.cpp:56:19: error: 'input_param' was not declared in this scope
InputParameter* input_param = input_layer_param->mutable_input_param();
^
src/caffe/layers/recurrent_layer.cpp:56:52: error: 'class caffe::LayerParameter' has no member named 'mutable_input_param'
InputParameter* input_param = input_layer_param->mutable_input_param();
^
src/caffe/layers/recurrent_layer.cpp: In instantiation of 'void caffe::RecurrentLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]':
src/caffe/layers/recurrent_layer.cpp:293:1: required from here
src/caffe/layers/recurrent_layer.cpp:30:18: error: 'class caffe::LayerParameter' has no member named 'recurrent_param'
expose_hidden_ = this->layer_param_.recurrent_param().expose_hidden();
^
src/caffe/layers/recurrent_layer.cpp:109:3: error: 'class caffe::LayerParameter' has no member named 'recurrent_param'
unrolled_net_->set_debug_info(
^
src/caffe/layers/recurrent_layer.cpp:151:56: error: 'class caffe::Net<Dtype>' has no member named 'param_display_names'
<< unrolled_net_->param_display_names()[i];
^
In file included from src/caffe/layers/recurrent_layer.cpp:8:0:
./include/caffe/layers/recurrent_layer.hpp: In instantiation of 'int caffe::RecurrentLayer<Dtype>::MinBottomBlobs() const [with Dtype = float]':
src/caffe/layers/recurrent_layer.cpp:293:1: required from here
./include/caffe/layers/recurrent_layer.hpp:39:5: error: 'const class caffe::LayerParameter' has no member named 'recurrent_param'
if (this->layer_param_.recurrent_param().expose_hidden()) {
^
./include/caffe/layers/recurrent_layer.hpp: In instantiation of 'int caffe::RecurrentLayer<Dtype>::ExactNumTopBlobs() const [with Dtype = float]':
src/caffe/layers/recurrent_layer.cpp:293:1: required from here
./include/caffe/layers/recurrent_layer.hpp:49:5: error: 'const class caffe::LayerParameter' has no member named 'recurrent_param'
if (this->layer_param_.recurrent_param().expose_hidden()) {
^
src/caffe/layers/recurrent_layer.cpp: In instantiation of 'void caffe::RecurrentLayer<Dtype>::LayerSetUp(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = double]':
src/caffe/layers/recurrent_layer.cpp:293:1: required from here
src/caffe/layers/recurrent_layer.cpp:30:18: error: 'class caffe::LayerParameter' has no member named 'recurrent_param'
expose_hidden_ = this->layer_param_.recurrent_param().expose_hidden();
^
src/caffe/layers/recurrent_layer.cpp:109:3: error: 'class caffe::LayerParameter' has no member named 'recurrent_param'
unrolled_net_->set_debug_info(
^
src/caffe/layers/recurrent_layer.cpp:151:56: error: 'class caffe::Net<Dtype>' has no member named 'param_display_names'
<< unrolled_net_->param_display_names()[i];
^
In file included from src/caffe/layers/recurrent_layer.cpp:8:0:
./include/caffe/layers/recurrent_layer.hpp: In instantiation of 'int caffe::RecurrentLayer<Dtype>::MinBottomBlobs() const [with Dtype = double]':
src/caffe/layers/recurrent_layer.cpp:293:1: required from here
./include/caffe/layers/recurrent_layer.hpp:39:5: error: 'const class caffe::LayerParameter' has no member named 'recurrent_param'
if (this->layer_param_.recurrent_param().expose_hidden()) {
^
./include/caffe/layers/recurrent_layer.hpp: In instantiation of 'int caffe::RecurrentLayer<Dtype>::ExactNumTopBlobs() const [with Dtype = double]':
src/caffe/layers/recurrent_layer.cpp:293:1: required from here
./include/caffe/layers/recurrent_layer.hpp:49:5: error: 'const class caffe::LayerParameter' has no member named 'recurrent_param'
if (this->layer_param_.recurrent_param().expose_hidden()) {
^
Makefile:518: recipe for target '.build_release/src/caffe/layers/recurrent_layer.o' failed
make: *** [.build_release/src/caffe/layers/recurrent_layer.o] Error 1

Black and white reversal

I am trying this model on a similar application.
In my ground-truth maps, black pixels predominate.
Training seems OK, but the test output is completely inverted.
That is, black and white are reversed in the output map.
What's wrong?

FAILED SigmoidCrossEntropyLossLayerTest

I ran HED with Ubuntu + Caffe + CPU and came across the following errors during make runtest. I ignored them and went on, but then hit ImportError: No module named _caffe. What should I do?

[----------] Global test environment tear-down
[==========] 872 tests from 133 test cases ran. (19153 ms total)
[ PASSED ] 868 tests.
[ FAILED ] 4 tests, listed below:
[ FAILED ] SigmoidCrossEntropyLossLayerTest/0.TestGradient, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/0.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/1.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/1.TestGradient, where TypeParam = caffe::CPUDevice

4 FAILED TESTS
Makefile:470: recipe for target 'runtest' failed
make: *** [runtest] Error 1

error in 'test_image_data_layer' when make test

Hi all,

When compiling the Caffe included with this package, make test always results in the following error:

.build_release/src/caffe/test/test_image_data_layer.o:(.data.rel.ro._ZTVN5caffe14ImageDataLayerIdEE[_ZTVN5caffe14ImageDataLayerIdEE]+0xf0): undefined reference to `non-virtual thunk to caffe::ImageDataLayer<double>::~ImageDataLayer()'

Does anyone know how to solve it? Thanks!

ImportError: No module named Image

Hello, when running the HED tutorial, it shows this error:

ImportError: No module named Image

You should modify the code from import Image to from PIL import Image. Please check it.
I use Python 2.7.
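For anyone hitting this, the one-line fix (Pillow, the maintained PIL fork, only exposes the package-qualified import):

# "import Image" only works with the very old standalone PIL layout.
from PIL import Image  # works with Pillow on Python 2.7 and 3.x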

An error occurred using cuda make

When I compile with CUDA 8.0, I get an error:

/hed-master/include/caffe/util/cudnn.hpp(123): error: argument of type "int" is incompatible with parameter of type "cudnnNanPropagation_t"

Is it a version issue? What CUDA version should I use?

There are other errors using CUDA 7.5:

make[2]: *** [src/caffe/CMakeFiles/cuda_compile.dir/util/./cuda_compile_generated_im2col.cu.o] Error 1
make[2]: *** [src/caffe/CMakeFiles/cuda_compile.dir/layers/./cuda_compile_generated_base_data_layer.cu.o] Error 1
make[2]: CMake Error at cuda_compile_generated_cudnn_softmax_layer.cu.o.cmake:208 (message):
Error generating
/opt/yangmiao/hed-master/src/caffe/CMakeFiles/cuda_compile.dir/layers/./cuda_compile_generated_cudnn_softmax_layer.cu.o
CMake Error at cuda_compile_generated_silence_layer.cu.o.cmake:208 (message):
Error generating
/opt/yangmiao/hed-master/src/caffe/CMakeFiles/cuda_compile.dir/layers/./cuda_compile_generated_silence_layer.cu.o

What should I do?

Compilation error

Hey, I am impressed by your results and want to try your edge detection code. I am getting this error while doing make all for HED; there's no error when I install the original Caffe.

In file included from ./include/caffe/common.hpp:19:0,
from ./include/caffe/blob.hpp:8,
from ./include/caffe/layer.hpp:9,
from src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp:5:
./include/caffe/util/device_alternate.hpp:30:39: error: no 'void caffe::SigmoidCrossEntropyLossLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<bool>&, const std::vector<caffe::Blob<Dtype>*>&)' member function declared in class 'caffe::SigmoidCrossEntropyLossLayer<Dtype>'
const vector<Blob<Dtype>*>& bottom) { NO_GPU; }
^
src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp:118:1: note: in expansion of macro ‘STUB_GPU_BACKWARD’
STUB_GPU_BACKWARD(SigmoidCrossEntropyLossLayer, Backward);
^
Makefile:518: recipe for target '.build_release/src/caffe/layers/sigmoid_cross_entropy_loss_layer.o' failed
make: *** [.build_release/src/caffe/layers/sigmoid_cross_entropy_loss_layer.o] Error 1

make runtest reports failure on Ubuntu 16.04 CPU only

This is the error I got testing HED. All other Caffe-based programs are fine.

[----------] Global test environment tear-down
[==========] 872 tests from 133 test cases ran. (23688 ms total)
[ PASSED ] 868 tests.
[ FAILED ] 4 tests, listed below:
[ FAILED ] SigmoidCrossEntropyLossLayerTest/0.TestGradient, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/0.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/1.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/1.TestGradient, where TypeParam = caffe::CPUDevice

4 FAILED TESTS
src/caffe/test/CMakeFiles/runtest.dir/build.make:57: recipe for target 'src/caffe/test/CMakeFiles/runtest' failed
make[3]: *** [src/caffe/test/CMakeFiles/runtest] Error 1
CMakeFiles/Makefile2:328: recipe for target 'src/caffe/test/CMakeFiles/runtest.dir/all' failed
make[2]: *** [src/caffe/test/CMakeFiles/runtest.dir/all] Error 2
CMakeFiles/Makefile2:335: recipe for target 'src/caffe/test/CMakeFiles/runtest.dir/rule' failed
make[1]: *** [src/caffe/test/CMakeFiles/runtest.dir/rule] Error 2
Makefile:240: recipe for target 'runtest' failed
make: *** [runtest] Error 2

Tried all I could, no luck. Any suggestions?

building errors

OS: Ubuntu 17.04

Thanks for any help :)

The CMake configuration worked (after commenting out the add_subdirectory line for examples), but I'm running into errors when building:

[ 14%] Building CXX object src/caffe/CMakeFiles/caffe.dir/layers/concat_layer.cpp.o
[ 16%] Building CXX object src/caffe/CMakeFiles/caffe.dir/layers/contrastive_loss_layer.cpp.o
/home/daniel/Desktop/cats/hed/src/caffe/layers/contrastive_loss_layer.cpp: In instantiation of 'void caffe::ContrastiveLossLayer<Dtype>::Forward_cpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&) [with Dtype = float]':
/home/daniel/Desktop/cats/hed/src/caffe/layers/contrastive_loss_layer.cpp:118:1: required from here
/home/daniel/Desktop/cats/hed/src/caffe/layers/contrastive_loss_layer.cpp:56:30: error: no matching function for call to 'max(float, double)'
  Dtype dist = std::max(margin - sqrt(dist_sq_.cpu_data()[i]), 0.0);
In file included from /usr/include/c++/6/algorithm:61:0,
from /home/daniel/Desktop/cats/hed/src/caffe/layers/contrastive_loss_layer.cpp:1:
/usr/include/c++/6/bits/stl_algobase.h:219:5: note: candidate: template<class _Tp> constexpr const _Tp& std::max(const _Tp&, const _Tp&)
/usr/include/c++/6/bits/stl_algobase.h:219:5: note: template argument deduction/substitution failed:
/home/daniel/Desktop/cats/hed/src/caffe/layers/contrastive_loss_layer.cpp:56:30: note: deduced conflicting types for parameter 'const _Tp' ('float' and 'double')
/usr/include/c++/6/bits/stl_algobase.h:265:5: note: candidate: template<class _Tp, class _Compare> constexpr const _Tp& std::max(const _Tp&, const _Tp&, _Compare)
/usr/include/c++/6/bits/stl_algobase.h:265:5: note: template argument deduction/substitution failed:
/home/daniel/Desktop/cats/hed/src/caffe/layers/contrastive_loss_layer.cpp:56:30: note: deduced conflicting types for parameter 'const _Tp' ('float' and 'double')
In file included from /usr/include/c++/6/algorithm:62:0,
from /home/daniel/Desktop/cats/hed/src/caffe/layers/contrastive_loss_layer.cpp:1:
/usr/include/c++/6/bits/stl_algo.h:3459:5: note: candidate: template<class _Tp> constexpr _Tp std::max(std::initializer_list<_Tp>)
/usr/include/c++/6/bits/stl_algo.h:3459:5: note: template argument deduction/substitution failed:
/home/daniel/Desktop/cats/hed/src/caffe/layers/contrastive_loss_layer.cpp:56:30: note: mismatched types 'std::initializer_list<_Tp>' and 'float'
/usr/include/c++/6/bits/stl_algo.h:3465:5: note: candidate: template<class _Tp, class _Compare> constexpr _Tp std::max(std::initializer_list<_Tp>, _Compare)
/usr/include/c++/6/bits/stl_algo.h:3465:5: note: template argument deduction/substitution failed:
/home/daniel/Desktop/cats/hed/src/caffe/layers/contrastive_loss_layer.cpp:56:30: note: mismatched types 'std::initializer_list<_Tp>' and 'float'
src/caffe/CMakeFiles/caffe.dir/build.make:398: recipe for target 'src/caffe/CMakeFiles/caffe.dir/layers/contrastive_loss_layer.cpp.o' failed
make[2]: *** [src/caffe/CMakeFiles/caffe.dir/layers/contrastive_loss_layer.cpp.o] Error 1
CMakeFiles/Makefile2:272: recipe for target 'src/caffe/CMakeFiles/caffe.dir/all' failed
make[1]: *** [src/caffe/CMakeFiles/caffe.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

Class Counter in Sigmoid Cross Entropy Loss

int dim = bottom[0]->count() / bottom[0]->num();
for (int i = 0; i < num; ++i) {
  temp_loss_pos = 0;
  temp_loss_neg = 0;
  for (int j = 0; j < dim; j++) {
    if (target[i*dim + j] == 1) {
      count_pos++;
      temp_loss_pos -= input_data[i*dim + j] * (target[i*dim + j] - (input_data[i*dim + j] >= 0)) -
          log(1 + exp(input_data[i*dim + j] - 2 * input_data[i*dim + j] * (input_data[i*dim + j] >= 0)));
    } else if (target[i*dim + j] == 0) {
      count_neg++;
      temp_loss_neg -= input_data[i*dim + j] * (target[i*dim + j] - (input_data[i*dim + j] >= 0)) -
          log(1 + exp(input_data[i*dim + j] - 2 * input_data[i*dim + j] * (input_data[i*dim + j] >= 0)));
    }
  }
  loss_pos += temp_loss_pos * count_neg / (count_pos + count_neg);
  loss_neg += temp_loss_neg * count_pos / (count_pos + count_neg);
}

The above code in this file seems confusing, although I guess it won't make a huge difference.

In every image iteration, the counters count_pos and count_neg keep accumulating; they are never reset to zero, yet they are used to weight each iteration's loss. This doesn't look right.

Also, from the paper it looks like the ratio should be a constant calculated across all the data in the training set.

make runtest errors

I want to compile HED on my computer (Ubuntu 16.04 + Caffe + CPU). I am getting the following errors when I try make runtest; could anyone help me, please?

[ RUN ] ImageDataLayerTest/1.TestRead
E0824 07:06:58.180407 3184 io.cpp:77] Could not open or find file examples/images/cat.jpg
F0824 07:06:58.180481 3184 image_data_layer.cpp:65] Check failed: cv_img.data Could not load examples/images/cat.jpg

porting to the newest caffe

HED uses an old Caffe, which is not compatible with the latest cuDNN v5 (2x faster, as reported by NVIDIA).

For testing only, after some investigation, it seems that HED uses no custom layers, so the edge detection runs on the newest Caffe, and it does.

Except that I have found the boundaries are shifted toward the bottom right, as shown below. After I change the pad value of the first layer from 35 to 1, the shift is corrected, but the border of the image is also detected as edges.

I have compared the cpp files with the Caffe repo and found several different parts, e.g. things related to the image label-map data and the data transform. But I don't see any critical modifications to the conv layers. So would you give me some hints on how to solve this problem?

(Images: the input image, the output with pad 35 showing the shift, and the output with pad 1 showing border artifacts.)

All of the losses are zero

When I train HED, all of the losses are zero. Below is my training log:
I1204 13:19:24.325491 2461 solver.cpp:242] Iteration 8, loss = 0
I1204 13:19:24.325518 2461 solver.cpp:258] Train net output #0: dsn1_loss = 0 (* 1 = 0 loss)
I1204 13:19:24.325541 2461 solver.cpp:258] Train net output #1: dsn2_loss = 0 (* 1 = 0 loss)
I1204 13:19:24.325552 2461 solver.cpp:258] Train net output #2: dsn3_loss = 0 (* 1 = 0 loss)
I1204 13:19:24.325563 2461 solver.cpp:258] Train net output #3: dsn4_loss = 0 (* 1 = 0 loss)
I1204 13:19:24.325574 2461 solver.cpp:258] Train net output #4: dsn5_loss = 0 (* 1 = 0 loss)
I1204 13:19:24.325584 2461 solver.cpp:258] Train net output #5: fuse_loss = 0 (* 1 = 0 loss)
I1204 13:19:24.325598 2461 solver.cpp:571] Iteration 8, lr = 1e-14
I1204 13:19:24.401729 2461 net.cpp:754] [Update] Layer Edgeconv1_1, param 0 data: 0.145995; diff: 1.41403e-11
I1204 13:19:24.401800 2461 net.cpp:754] [Update] Layer Edgeconv1_1, param 1 data: 0.50245; diff: 0
I1204 13:19:24.401882 2461 net.cpp:754] [Update] Layer Edgeconv1_2, param 0 data: 0.026326; diff: 2.54981e-12
I1204 13:19:24.401904 2461 net.cpp:754] [Update] Layer Edgeconv1_2, param 1 data: 0.270118; diff: 0
I1204 13:19:24.402009 2461 net.cpp:754] [Update] Layer Edgeconv2_1, param 0 data: 0.0211515; diff: 2.04863e-12
I1204 13:19:24.402036 2461 net.cpp:754] [Update] Layer Edgeconv2_1, param 1 data: 0.133851; diff: 0
I1204 13:19:24.402231 2461 net.cpp:754] [Update] Layer Edgeconv2_2, param 0 data: 0.0164008; diff: 1.5885e-12
I1204 13:19:24.402281 2461 net.cpp:754] [Update] Layer Edgeconv2_2, param 1 data: 0.148366; diff: 0
I1204 13:19:24.402647 2461 net.cpp:754] [Update] Layer Edgeconv3_1, param 0 data: 0.0121919; diff: 1.18084e-12
I1204 13:19:24.402775 2461 net.cpp:754] [Update] Layer Edgeconv3_1, param 1 data: 0.0556366; diff: 0
I1204 13:19:24.403424 2461 net.cpp:754] [Update] Layer Edgeconv3_2, param 0 data: 0.00905248; diff: 8.76779e-13
I1204 13:19:24.403977 2461 net.cpp:754] [Update] Layer Edgeconv3_2, param 1 data: 0.0638749; diff: 0
I1204 13:19:24.404670 2461 net.cpp:754] [Update] Layer Edgeconv3_3, param 0 data: 0.00922374; diff: 8.93368e-13
I1204 13:19:24.405221 2461 net.cpp:754] [Update] Layer Edgeconv3_3, param 1 data: 0.0597807; diff: 0
I1204 13:19:24.406379 2461 net.cpp:754] [Update] Layer Edgeconv4_1, param 0 data: 0.00740561; diff: 7.17272e-13
I1204 13:19:24.407716 2461 net.cpp:754] [Update] Layer Edgeconv4_1, param 1 data: 0.0415128; diff: 0
I1204 13:19:24.409811 2461 net.cpp:754] [Update] Layer Edgeconv4_2, param 0 data: 0.00582409; diff: 5.64094e-13
I1204 13:19:24.412645 2461 net.cpp:754] [Update] Layer Edgeconv4_2, param 1 data: 0.042761; diff: 0
I1204 13:19:24.414752 2461 net.cpp:754] [Update] Layer Edgeconv4_3, param 0 data: 0.00603477; diff: 5.84498e-13
I1204 13:19:24.417565 2461 net.cpp:754] [Update] Layer Edgeconv4_3, param 1 data: 0.0570726; diff: 0
I1204 13:19:24.419639 2461 net.cpp:754] [Update] Layer Edgeconv5_1, param 0 data: 0.0118411; diff: 1.14688e-10
I1204 13:19:24.422469 2461 net.cpp:754] [Update] Layer Edgeconv5_1, param 1 data: 0.113729; diff: 0
I1204 13:19:24.424614 2461 net.cpp:754] [Update] Layer Edgeconv5_2, param 0 data: 0.0091425; diff: 8.85499e-11
I1204 13:19:24.427412 2461 net.cpp:754] [Update] Layer Edgeconv5_2, param 1 data: 0.174423; diff: 0
I1204 13:19:24.429507 2461 net.cpp:754] [Update] Layer Edgeconv5_3, param 0 data: 0.00717594; diff: 6.95028e-11
I1204 13:19:24.432281 2461 net.cpp:754] [Update] Layer Edgeconv5_3, param 1 data: 0.22697; diff: 0
I1204 13:19:24.432334 2461 net.cpp:754] [Update] Layer Edgescore-dsn1, param 0 data: 0.00322575; diff: 3.12431e-15
I1204 13:19:24.432346 2461 net.cpp:754] [Update] Layer Edgescore-dsn1, param 1 data: 0.0334441; diff: 0
I1204 13:19:24.432356 2461 net.cpp:754] [Update] Layer Edgescore-dsn2, param 0 data: 0.00306832; diff: 2.97182e-15
I1204 13:19:24.432366 2461 net.cpp:754] [Update] Layer Edgescore-dsn2, param 1 data: 0.021047; diff: 0
I1204 13:19:24.432376 2461 net.cpp:754] [Update] Layer Edgeupsample_2, param 0 data: 0.25; diff: 0
I1204 13:19:24.432404 2461 net.cpp:754] [Update] Layer Edgeupsample_2, param 1 data: 0; diff: 0
I1204 13:19:24.432417 2461 net.cpp:754] [Update] Layer Edgescore-dsn3, param 0 data: 0.0035274; diff: 3.41647e-15
I1204 13:19:24.432441 2461 net.cpp:754] [Update] Layer Edgescore-dsn3, param 1 data: 0.00768858; diff: 0
I1204 13:19:24.432451 2461 net.cpp:754] [Update] Layer Edgeupsample_4, param 0 data: 0.25; diff: 0
I1204 13:19:24.432461 2461 net.cpp:754] [Update] Layer Edgeupsample_4, param 1 data: 0; diff: 0
I1204 13:19:24.432471 2461 net.cpp:754] [Update] Layer Edgescore-dsn4, param 0 data: 0.00384611; diff: 3.72515e-15
I1204 13:19:24.432482 2461 net.cpp:754] [Update] Layer Edgescore-dsn4, param 1 data: 0.00425628; diff: 0
I1204 13:19:24.432492 2461 net.cpp:754] [Update] Layer Edgeupsample_8, param 0 data: 0.25; diff: 0
I1204 13:19:24.432502 2461 net.cpp:754] [Update] Layer Edgeupsample_8, param 1 data: 0; diff: 0
I1204 13:19:24.432512 2461 net.cpp:754] [Update] Layer Edgescore-dsn5, param 0 data: 0.00102744; diff: 9.9513e-16
I1204 13:19:24.432521 2461 net.cpp:754] [Update] Layer Edgescore-dsn5, param 1 data: 0.000218881; diff: 0
I1204 13:19:24.432533 2461 net.cpp:754] [Update] Layer Edgeupsample_16, param 0 data: 0.25; diff: 0
I1204 13:19:24.432543 2461 net.cpp:754] [Update] Layer Edgeupsample_16, param 1 data: 0; diff: 0
I1204 13:19:24.432552 2461 net.cpp:754] [Update] Layer Edgenew-score-weighting, param 0 data: 0.205404; diff: 1.98945e-14
I1204 13:19:24.432562 2461 net.cpp:754] [Update] Layer Edgenew-score-weighting, param 1 data: 0.00135704; diff: 0
I1204 13:19:24.444571 2461 solver.cpp:346] Iteration 9, Testing net (#0)
I1204 13:19:28.286509 2461 solver.cpp:414] Test net output #0: dsn1_loss = 0 (* 1 = 0 loss)
I1204 13:19:28.286576 2461 solver.cpp:414] Test net output #1: dsn2_loss = 0 (* 1 = 0 loss)
I1204 13:19:28.286936 2461 solver.cpp:414] Test net output #2: dsn3_loss = 0 (* 1 = 0 loss)
I1204 13:19:28.286955 2461 solver.cpp:414] Test net output #3: dsn4_loss = 0 (* 1 = 0 loss)
I1204 13:19:28.286972 2461 solver.cpp:414] Test net output #4: dsn5_loss = 0 (* 1 = 0 loss)
I1204 13:19:28.286984 2461 solver.cpp:414] Test net output #5: fuse_loss = 0 (* 1 = 0 loss)
E1204 13:19:38.964378 2461 net.cpp:828] [Backward] All net params (data, diff): L1 norm = (123540, 0); L2 norm = (51.1185, 0)
E1204 13:19:49.602077 2461 net.cpp:828] [Backward] All net params (data, diff): L1 norm = (123540, 0); L2 norm = (51.1185, 0)
E1204 13:20:00.292870 2461 net.cpp:828] [Backward] All net params (data, diff): L1 norm = (123540, 0); L2 norm = (51.1185, 0)
E1204 13:20:10.969599 2461 net.cpp:828] [Backward] All net params (data, diff): L1 norm = (123540, 0); L2 norm = (51.1185, 0)
E1204 13:20:21.624631 2461 net.cpp:828] [Backward] All net params (data, diff): L1 norm = (123540, 0); L2 norm = (51.1185, 0)
E1204 13:20:32.288601 2461 net.cpp:828] [Backward] All net params (data, diff): L1 norm = (123540, 0); L2 norm = (51.1185, 0)
E1204 13:20:42.971149 2461 net.cpp:828] [Backward] All net params (data, diff): L1 norm = (123540, 0); L2 norm = (51.1185, 0)
E1204 13:20:53.662241 2461 net.cpp:828] [Backward] All net params (data, diff): L1 norm = (123540, 0); L2 norm = (51.1185, 0)
E1204 13:21:04.363137 2461 net.cpp:828] [Backward] All net params (data, diff): L1 norm = (123540, 0); L2 norm = (51.1185, 0)
E1204 13:21:15.084235 2461 net.cpp:828] [Backward] All net params (data, diff): L1 norm = (123540, 0); L2 norm = (51.1185, 0)
I1204 13:21:15.084326 2461 solver.cpp:242] Iteration 9, loss = 0
I1204 13:21:15.084352 2461 solver.cpp:258] Train net output #0: dsn1_loss = 0 (* 1 = 0 loss)
I1204 13:21:15.084369 2461 solver.cpp:258] Train net output #1: dsn2_loss = 0 (* 1 = 0 loss)
I1204 13:21:15.084381 2461 solver.cpp:258] Train net output #2: dsn3_loss = 0 (* 1 = 0 loss)
I1204 13:21:15.084393 2461 solver.cpp:258] Train net output #3: dsn4_loss = 0 (* 1 = 0 loss)
I1204 13:21:15.084403 2461 solver.cpp:258] Train net output #4: dsn5_loss = 0 (* 1 = 0 loss)
I1204 13:21:15.084434 2461 solver.cpp:258] Train net output #5: fuse_loss = 0 (* 1 = 0 loss)
I1204 13:21:15.084448 2461 solver.cpp:571] Iteration 9, lr = 1e-15
I1204 13:21:15.160527 2461 net.cpp:754] [Update] Layer Edgeconv1_1, param 0 data: 0.145995; diff: 1.27263e-11
I1204 13:21:15.160596 2461 net.cpp:754] [Update] Layer Edgeconv1_1, param 1 data: 0.50245; diff: 0
I1204 13:21:15.160677 2461 net.cpp:754] [Update] Layer Edgeconv1_2, param 0 data: 0.026326; diff: 2.29483e-12
I1204 13:21:15.160699 2461 net.cpp:754] [Update] Layer Edgeconv1_2, param 1 data: 0.270118; diff: 0
I1204 13:21:15.160804 2461 net.cpp:754] [Update] Layer Edgeconv2_1, param 0 data: 0.0211515; diff: 1.84377e-12
I1204 13:21:15.160830 2461 net.cpp:754] [Update] Layer Edgeconv2_1, param 1 data: 0.133851; diff: 0
I1204 13:21:15.161026 2461 net.cpp:754] [Update] Layer Edgeconv2_2, param 0 data: 0.0164008; diff: 1.42965e-12
I1204 13:21:15.161074 2461 net.cpp:754] [Update] Layer Edgeconv2_2, param 1 data: 0.148366; diff: 0
I1204 13:21:15.161442 2461 net.cpp:754] [Update] Layer Edgeconv3_1, param 0 data: 0.0121919; diff: 1.06276e-12
I1204 13:21:15.161569 2461 net.cpp:754] [Update] Layer Edgeconv3_1, param 1 data: 0.0556366; diff: 0
I1204 13:21:15.162219 2461 net.cpp:754] [Update] Layer Edgeconv3_2, param 0 data: 0.00905248; diff: 7.89101e-13
I1204 13:21:15.162770 2461 net.cpp:754] [Update] Layer Edgeconv3_2, param 1 data: 0.0638749; diff: 0
I1204 13:21:15.163455 2461 net.cpp:754] [Update] Layer Edgeconv3_3, param 0 data: 0.00922374; diff: 8.04029e-13
I1204 13:21:15.164005 2461 net.cpp:754] [Update] Layer Edgeconv3_3, param 1 data: 0.0597807; diff: 0
I1204 13:21:15.165179 2461 net.cpp:754] [Update] Layer Edgeconv4_1, param 0 data: 0.00740561; diff: 6.45545e-13
I1204 13:21:15.166517 2461 net.cpp:754] [Update] Layer Edgeconv4_1, param 1 data: 0.0415128; diff: 0
I1204 13:21:15.168608 2461 net.cpp:754] [Update] Layer Edgeconv4_2, param 0 data: 0.00582409; diff: 5.07684e-13
I1204 13:21:15.171435 2461 net.cpp:754] [Update] Layer Edgeconv4_2, param 1 data: 0.042761; diff: 0
I1204 13:21:15.173553 2461 net.cpp:754] [Update] Layer Edgeconv4_3, param 0 data: 0.00603477; diff: 5.26049e-13
I1204 13:21:15.176348 2461 net.cpp:754] [Update] Layer Edgeconv4_3, param 1 data: 0.0570726; diff: 0
I1204 13:21:15.178428 2461 net.cpp:754] [Update] Layer Edgeconv5_1, param 0 data: 0.0118411; diff: 1.03219e-10
I1204 13:21:15.181217 2461 net.cpp:754] [Update] Layer Edgeconv5_1, param 1 data: 0.113729; diff: 0
I1204 13:21:15.183348 2461 net.cpp:754] [Update] Layer Edgeconv5_2, param 0 data: 0.0091425; diff: 7.96948e-11
I1204 13:21:15.186158 2461 net.cpp:754] [Update] Layer Edgeconv5_2, param 1 data: 0.174423; diff: 0
I1204 13:21:15.188238 2461 net.cpp:754] [Update] Layer Edgeconv5_3, param 0 data: 0.00717594; diff: 6.25525e-11
I1204 13:21:15.191023 2461 net.cpp:754] [Update] Layer Edgeconv5_3, param 1 data: 0.22697; diff: 0
I1204 13:21:15.191078 2461 net.cpp:754] [Update] Layer Edgescore-dsn1, param 0 data: 0.00322575; diff: 2.81188e-15
I1204 13:21:15.191090 2461 net.cpp:754] [Update] Layer Edgescore-dsn1, param 1 data: 0.0334441; diff: 0
I1204 13:21:15.191100 2461 net.cpp:754] [Update] Layer Edgescore-dsn2, param 0 data: 0.00306832; diff: 2.67464e-15
I1204 13:21:15.191110 2461 net.cpp:754] [Update] Layer Edgescore-dsn2, param 1 data: 0.021047; diff: 0
I1204 13:21:15.191138 2461 net.cpp:754] [Update] Layer Edgeupsample_2, param 0 data: 0.25; diff: 0
I1204 13:21:15.191149 2461 net.cpp:754] [Update] Layer Edgeupsample_2, param 1 data: 0; diff: 0
I1204 13:21:15.191159 2461 net.cpp:754] [Update] Layer Edgescore-dsn3, param 0 data: 0.0035274; diff: 3.07482e-15
I1204 13:21:15.191169 2461 net.cpp:754] [Update] Layer Edgescore-dsn3, param 1 data: 0.00768858; diff: 0
I1204 13:21:15.191179 2461 net.cpp:754] [Update] Layer Edgeupsample_4, param 0 data: 0.25; diff: 0
I1204 13:21:15.191189 2461 net.cpp:754] [Update] Layer Edgeupsample_4, param 1 data: 0; diff: 0
I1204 13:21:15.191202 2461 net.cpp:754] [Update] Layer Edgescore-dsn4, param 0 data: 0.00384611; diff: 3.35263e-15
I1204 13:21:15.191213 2461 net.cpp:754] [Update] Layer Edgescore-dsn4, param 1 data: 0.00425628; diff: 0
I1204 13:21:15.191223 2461 net.cpp:754] [Update] Layer Edgeupsample_8, param 0 data: 0.25; diff: 0
I1204 13:21:15.191233 2461 net.cpp:754] [Update] Layer Edgeupsample_8, param 1 data: 0; diff: 0
I1204 13:21:15.191243 2461 net.cpp:754] [Update] Layer Edgescore-dsn5, param 0 data: 0.00102744; diff: 8.95617e-16
I1204 13:21:15.191253 2461 net.cpp:754] [Update] Layer Edgescore-dsn5, param 1 data: 0.000218881; diff: 0
I1204 13:21:15.191264 2461 net.cpp:754] [Update] Layer Edgeupsample_16, param 0 data: 0.25; diff: 0
I1204 13:21:15.191274 2461 net.cpp:754] [Update] Layer Edgeupsample_16, param 1 data: 0; diff: 0
I1204 13:21:15.191284 2461 net.cpp:754] [Update] Layer Edgenew-score-weighting, param 0 data: 0.205404; diff: 1.7905e-14
I1204 13:21:15.191294 2461 net.cpp:754] [Update] Layer Edgenew-score-weighting, param 1 data: 0.00135704; diff: 0

Compilation Error during runtest

Hello All,

I am trying to compile the version of Caffe included here, and I am getting the following error during runtest.
My traditional version of Caffe and other versions compile without any issues.
It would be great if somebody could shed some light on this.
I am compiling with CUDA and cuDNN, along with WITH_PYTHON_LAYER = 1.

[ FAILED ] 8 tests, listed below:
[ FAILED ] SigmoidCrossEntropyLossLayerTest/0.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/0.TestGradient, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/1.TestGradient, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/1.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/2.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::GPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/2.TestGradient, where TypeParam = caffe::GPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/3.TestGradient, where TypeParam = caffe::GPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/3.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::GPUDevice

8 FAILED TESTS
YOU HAVE 2 DISABLED TESTS

F0218 20:59:02.200268 12246 syncedmem.hpp:30] Check failed: error == cudaSuccess (29 vs. 0) driver shutting down
*** Check failure stack trace: ***
@ 0x2b451c9addaa (unknown)
@ 0x2b451c9adce4 (unknown)
@ 0x2b451c9ad6e6 (unknown)
@ 0x2b451c9b0687 (unknown)
@ 0x2b451d7f32ff caffe::SyncedMemory::~SyncedMemory()
@ 0x464f52 boost::detail::sp_counted_impl_p<>::dispose()
@ 0x45779e boost::detail::sp_counted_base::release()
@ 0x2b451e56c5ea (unknown)
@ 0x2b451d772d73 (unknown)
make: *** [runtest] Aborted (core dumped)

HED on Ubuntu 16.04

Does HED support CUDA 8? This is the only CUDA version compatible with Ubuntu 16.04. What version of cuDNN is supported on this platform?

examples folder has no CMakeLists.txt

When I run 'configure' in ccmake, I get the following error:

CMake Error at CMakeLists.txt:57 (add_subdirectory):
The source directory

 /home/weiliu/projects/hed/examples

does not contain a CMakeLists.txt file.

I commented out the line with the 'examples' folder in the root CMakeLists file, and then it builds with no errors.

How much time does training take?

Hi Saining,

I am wondering what mini-batch size you used in practice for your pretrained model. Your paper says 10, but your code sets it to 1, while both use the same number of iterations; the two settings should give different results.
I ask because your paper mentions that training took just 7 hours. I also use a Tesla K40c, but training takes 1 minute per 20 iterations (with mini-batch size 1). At this speed it needs 4 days to finish 10,000 iterations.
It is still running.
Could you help me figure it out?

Weird boundaries and fixed output size

Hi,
I am running your provided model on arbitrary images. However, I get weird boundaries on top and on the left side. I could obviously just crop them out, but the errors seem to propagate to lower levels of resolution:
(Images: side outputs out1 and out5 showing the boundary artifacts.)
Do you know how to fix this problem? Furthermore, is it possible to change the output size of the network without retraining? I realized that changing the input size of the image in the prototxt file does not change anything.
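On the output-size question: since the network is fully convolutional, with the pycaffe interface you can usually reshape the input blob at run time instead of editing the prototxt. A sketch, assuming a standard deploy net (file names and sizes here are illustrative):

import caffe

net = caffe.Net('deploy.prototxt', 'hed_pretrained_bsds.caffemodel', caffe.TEST)

# Resize the data blob to the new input; all downstream blobs follow.
h, w = 480, 320  # example target size
net.blobs['data'].reshape(1, 3, h, w)
net.reshape()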

error running solve.py:image_labelmap_data_layer.cpp:81] Check failed: (height == gt_height) && (width == gt_width) groundtruth size != image size *** Check failure stack trace: ***

I0228 21:51:04.337133 15059 layer_factory.hpp:76] Creating layer data
I0228 21:51:04.337187 15059 net.cpp:111] Creating Layer data
I0228 21:51:04.337209 15059 net.cpp:434] data -> data
I0228 21:51:04.337239 15059 net.cpp:434] data -> label
I0228 21:51:04.337265 15059 image_labelmap_data_layer.cpp:40] Opening file ../../data/HED-BSDS/train_pair.lst
I0228 21:51:04.363386 15059 image_labelmap_data_layer.cpp:50] Shuffling data
I0228 21:51:04.368824 15059 image_labelmap_data_layer.cpp:55] A total of 28800 images.
libpng warning: Application built with libpng-1.2.54 but running with 1.6.27
E0228 21:51:04.370077 15059 io.cpp:77] Could not open or find file ../../data/HED-BSDS/train/aug_gt_scale_0.5/270.0_1_1/54082.png
F0228 21:51:04.370098 15059 image_labelmap_data_layer.cpp:81] Check failed: (height == gt_height) && (width == gt_width) groundtruth size != image size
*** Check failure stack trace: ***
make, make test, and make runtest all went well.
The error occurs when running solve.py in the examples/hed folder. Any help?
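A small diagnostic sketch for this check failure (assuming train_pair.lst holds 'image_path gt_path' per line, as the data layer's log suggests; the paths are hypothetical): it reports pairs whose files are missing or whose sizes disagree.

import os
from PIL import Image

root = '../../data/HED-BSDS'  # adjust to your layout
with open(os.path.join(root, 'train_pair.lst')) as f:
    for line in f:
        img_rel, gt_rel = line.split()
        img_path = os.path.join(root, img_rel)
        gt_path = os.path.join(root, gt_rel)
        if not (os.path.isfile(img_path) and os.path.isfile(gt_path)):
            print('missing file: ' + line.strip())
        elif Image.open(img_path).size != Image.open(gt_path).size:
            print('size mismatch: ' + line.strip())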

Loss is zero or nan during training on another dataset.

Hello.

While training your network on a different dataset, I get zero loss or NaN after the first couple of iterations.

Do you know why it happens?

I also can't figure out how the ground truth in your dataset is marked: I notice values between 0 and 255.
Does a bigger number mean a stronger edge?
Is it OK to use these labels with sigmoid cross-entropy loss?

How should I label my own dataset?

Thank you.

cudnn error: too few arguments in function call

Hello. I use gcc 4.8.5, CUDA 8.0 and cuDNN 6.0. I get the following errors:

[  2%] Built target proto
[  2%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile.dir/util/cuda_compile_generated_im2col.cu.o
/home/puasonych/hed/include/caffe/util/cudnn.hpp(104): error: too few arguments in function call
/home/puasonych/hed/include/caffe/util/cudnn.hpp(123): error: argument of type "int" is incompatible with parameter of type "cudnnNanPropagation_t"
/home/puasonych/hed/include/caffe/util/cudnn.hpp(123): error: too few arguments in function call

3 errors detected in the compilation of "/tmp/tmpxft_00007ef5_00000000-5_im2col.cpp4.ii".
CMake Error at cuda_compile_generated_im2col.cu.o.cmake:266 (message):
  Error generating file
  /home/puasonych/hed/build/src/caffe/CMakeFiles/cuda_compile.dir/util/./cuda_compile_generated_im2col.cu.o

src/caffe/CMakeFiles/caffe.dir/build.make:533: recipe for target 'src/caffe/CMakeFiles/cuda_compile.dir/util/cuda_compile_generated_im2col.cu.o' failed
make[2]: *** [src/caffe/CMakeFiles/cuda_compile.dir/util/cuda_compile_generated_im2col.cu.o] Error 1
CMakeFiles/Makefile2:272: recipe for target 'src/caffe/CMakeFiles/caffe.dir/all' failed
make[1]: *** [src/caffe/CMakeFiles/caffe.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

At the same time, Caffe 1.0.0 builds without problems.

Runtest

Hi,

Should the runtest complete? I'm hitting some issues, which may not be related to your code, as Ubuntu 15.10 has many issues.

[ FAILED ] 8 tests, listed below:
[ FAILED ] SigmoidCrossEntropyLossLayerTest/0.TestGradient, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/0.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/1.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/1.TestGradient, where TypeParam = caffe::CPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/2.TestGradient, where TypeParam = caffe::GPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/2.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::GPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/3.TestSigmoidCrossEntropyLoss, where TypeParam = caffe::GPUDevice
[ FAILED ] SigmoidCrossEntropyLossLayerTest/3.TestGradient, where TypeParam = caffe::GPUDevice
8 FAILED TESTS
YOU HAVE 2 DISABLED TESTS
E0415 11:37:19.710389 24415 common.cpp:104] Cannot create Cublas handle. Cublas won't be available.
E0415 11:37:19.710497 24415 common.cpp:111] Cannot create Curand generator. Curand won't be available.
*** Error in `.build_release/test/test_all.testbin': corrupted double-linked list: 0x00000000090976f0 ***
*** Aborted at 1460716639 (unix time) try "date -d @1460716639" if you are using GNU date ***

The caffe runtest completes fine on the current release.

Thanks!

Make errors

When I make the project, using the command make all, some errors occur as follows:

src/caffe/layers/cudnn_conv_layer.cu(81): error: argument of type "cudnnAddMode_t" is incompatible with parameter of type "const void *"
          detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Forward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]" 
(157): here


...


src/caffe/layers/cudnn_conv_layer.cu(140): error: too few arguments in function call
          detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=double]" 
(157): here

20 errors detected in the compilation of "/tmp/tmpxft_0000153f_00000000-16_cudnn_conv_layer.compute_50.cpp1.ii".

These problems don't occur in the Caffe project itself. It seems that there are some issues with cuDNN. The versions of the related packages on my computer are:

  • cuda 7.5
  • cudnn v4(4.0.7)

Could you help me find the problem? Thank you very much!

'Backward_gpu' does not match any declaration in 'SigmoidCrossEntropyLossLayer<Dtype>' error during compilation

Hi Saining, I am trying to build the modified Caffe that HED uses on my MacBook Pro. I installed a bunch of packages via brew (protobuf, openblas, boost, opencv and hdf5), and uncommented "CPU_ONLY := 1" in my Makefile.config to build without GPU support. However, when I ran 'make all', I encountered the error "'Backward_gpu' does not match any declaration in 'SigmoidCrossEntropyLossLayer'". My full 'make all' log is below.

Any help is highly appreciated! Thank you!

------ 'make all' log and error message -----
PROTOC src/caffe/proto/caffe.proto
CXX .build_release/src/caffe/proto/caffe.pb.cc
CXX src/caffe/blob.cpp
CXX src/caffe/common.cpp
CXX src/caffe/data_reader.cpp
CXX src/caffe/data_transformer.cpp
CXX src/caffe/internal_thread.cpp
CXX src/caffe/layer.cpp
CXX src/caffe/layer_factory.cpp
CXX src/caffe/layers/absval_layer.cpp
CXX src/caffe/layers/accuracy_layer.cpp
CXX src/caffe/layers/argmax_layer.cpp
CXX src/caffe/layers/base_conv_layer.cpp
CXX src/caffe/layers/base_data_layer.cpp
CXX src/caffe/layers/bnll_layer.cpp
CXX src/caffe/layers/concat_layer.cpp
CXX src/caffe/layers/contrastive_loss_layer.cpp
CXX src/caffe/layers/conv_layer.cpp
CXX src/caffe/layers/crop_layer.cpp
CXX src/caffe/layers/cudnn_conv_layer.cpp
CXX src/caffe/layers/cudnn_pooling_layer.cpp
CXX src/caffe/layers/cudnn_relu_layer.cpp
CXX src/caffe/layers/cudnn_sigmoid_layer.cpp
CXX src/caffe/layers/cudnn_softmax_layer.cpp
CXX src/caffe/layers/cudnn_tanh_layer.cpp
CXX src/caffe/layers/data_layer.cpp
CXX src/caffe/layers/deconv_layer.cpp
CXX src/caffe/layers/dropout_layer.cpp
CXX src/caffe/layers/dummy_data_layer.cpp
CXX src/caffe/layers/eltwise_layer.cpp
CXX src/caffe/layers/embed_layer.cpp
CXX src/caffe/layers/euclidean_loss_layer.cpp
CXX src/caffe/layers/exp_layer.cpp
CXX src/caffe/layers/filter_layer.cpp
CXX src/caffe/layers/flatten_layer.cpp
CXX src/caffe/layers/hdf5_data_layer.cpp
CXX src/caffe/layers/hdf5_output_layer.cpp
CXX src/caffe/layers/hinge_loss_layer.cpp
CXX src/caffe/layers/im2col_layer.cpp
CXX src/caffe/layers/image_data_layer.cpp
CXX src/caffe/layers/image_labelmap_data_layer.cpp
CXX src/caffe/layers/infogain_loss_layer.cpp
CXX src/caffe/layers/inner_product_layer.cpp
CXX src/caffe/layers/log_layer.cpp
CXX src/caffe/layers/loss_layer.cpp
CXX src/caffe/layers/lrn_layer.cpp
CXX src/caffe/layers/memory_data_layer.cpp
CXX src/caffe/layers/multinomial_logistic_loss_layer.cpp
CXX src/caffe/layers/mvn_layer.cpp
CXX src/caffe/layers/neuron_layer.cpp
CXX src/caffe/layers/pooling_layer.cpp
CXX src/caffe/layers/power_layer.cpp
CXX src/caffe/layers/prelu_layer.cpp
CXX src/caffe/layers/reduction_layer.cpp
CXX src/caffe/layers/relu_layer.cpp
CXX src/caffe/layers/reshape_layer.cpp
CXX src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp
src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp:120:1: error: out-of-line definition of 'Backward_gpu' does not match any declaration in 'SigmoidCrossEntropyLossLayer<Dtype>'
STUB_GPU_BACKWARD(SigmoidCrossEntropyLossLayer, Backward);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./include/caffe/util/device_alternate.hpp:28:24: note: expanded from macro 'STUB_GPU_BACKWARD'
void classname<Dtype>::funcname##_##gpu(const vector<Blob<Dtype>*>& top,
<scratch space>:87:1: note: expanded from here
Backward_gpu
1 error generated.
make: *** [.build_release/src/caffe/layers/sigmoid_cross_entropy_loss_layer.o] Error 1
