Comments (21)
@ps48 Just try compiling with CUDA but without cuDNN.
from hed.
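In practice, "without cuDNN" just means leaving the cuDNN switch disabled in Makefile.config before building. A minimal fragment (the surrounding lines vary between Caffe versions, and the CUDA path is the default install location, so adjust to your setup):

```makefile
# Makefile.config (fragment) -- build with CUDA but without cuDNN.
# Leave USE_CUDNN commented out so Caffe falls back to its own CUDA kernels:
# USE_CUDNN := 1

# CUDA itself stays enabled as usual:
CUDA_DIR := /usr/local/cuda
```

Then rebuild from scratch with `make clean && make all`.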
I am using CUDA 8.0 and cuDNN 5 on Ubuntu 16.04.
While making the Caffe files I get errors such as:
error: argument of type "int" is incompatible with parameter of type "cudnnNanPropagation_t"
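For context on this particular error: cuDNN v5 added a `cudnnNanPropagation_t` parameter to `cudnnSetPooling2dDescriptor`, so pre-v5 Caffe forks like HED call it with the old argument list and the compiler complains. A commonly reported workaround is to patch the call in `include/caffe/util/cudnn.hpp` (illustrative fragment; variable names and the exact line differ per fork, and building without cuDNN as suggested above avoids the issue entirely):

```cpp
// Old call found in pre-cuDNN-v5 Caffe forks such as HED:
// CUDNN_CHECK(cudnnSetPooling2dDescriptor(*pool_desc, *mode,
//     h, w, pad_h, pad_w, stride_h, stride_w));

// Adjusted call for cuDNN v5+; CUDNN_PROPAGATE_NAN keeps the old behavior:
CUDNN_CHECK(cudnnSetPooling2dDescriptor(*pool_desc, *mode,
    CUDNN_PROPAGATE_NAN, h, w, pad_h, pad_w, stride_h, stride_w));
```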
HED branched from an old version of Caffe, which makes it hard to keep compatible with newer cuDNN and CUDA versions.
I've reimplemented HED based on the current BVLC/caffe master: https://github.com/zeakey/hed. Hope this helps all of you!
HED is based on an old version of Caffe; I don't think it's compatible with CUDA 8.
I can confirm that this project works under Ubuntu 16.04, with CUDA v8.0 on a GTX 1080. I simply followed the build instructions and everything worked out of the box.
Please note that you still get the error Check failed: error == cudaSuccess (29 vs. 0) driver shutting down *** Check failure stack trace: ***
right after the test application terminates, but that happens at program shutdown, so it won't really interfere with anything. Everything works as expected.
Heck, I was even able to get it to work under Bash for Windows 10!
In case anybody else wants to get it working with Bash for Windows 10: you can install CUDA 7.5 from the deb package on NVIDIA's website. Also, you need to use the TkAgg backend for Python's matplotlib. Install all of Python's packages with apt-get
rather than pip (scipy installed via pip has issues under Bash for Windows 10).
I only did testing and (obviously) no training, because CUDA won't work with Bash for Windows 10. But you can build the code and use the CPU for testing a trained model.
@Maghoumi Hi Maghoumi, I also got the same error Check failed: error == cudaSuccess (29 vs. 0) driver shutting down *** Check failure stack trace: ***
and the program exits with an error, so I can't grab the output. May I know how you get the forward output of this network?
Thank you!
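A rough pycaffe sketch for grabbing the forward output is below. The file names, the mean values, and the `sigmoid-fuse` blob name are assumptions based on the usual HED example layout, so adjust them to your checkout; the post-processing helper is plain NumPy:

```python
import numpy as np

def to_edge_map(blob):
    """Convert a (1, 1, H, W) output blob with values in [0, 1]
    to a uint8 edge map (dark edges on a white background)."""
    edge = np.squeeze(blob)
    return (255 * (1.0 - edge)).astype(np.uint8)

def run_hed(image_path):
    # Assumed paths and blob names -- taken from the usual HED example layout.
    import caffe
    caffe.set_mode_cpu()  # CPU is enough for testing a trained model
    net = caffe.Net('deploy.prototxt', 'hed_pretrained_bsds.caffemodel',
                    caffe.TEST)
    im = caffe.io.load_image(image_path) * 255.0          # HWC, RGB, [0, 255]
    im = im[:, :, ::-1] - np.array([104.00699, 116.66877, 122.67892])
    im = im.transpose((2, 0, 1))                          # -> CHW, BGR, mean-subtracted
    net.blobs['data'].reshape(1, *im.shape)
    net.blobs['data'].data[...] = im
    net.forward()
    # The fused side output; the individual side outputs are also in net.blobs.
    return to_edge_map(net.blobs['sigmoid-fuse'].data)
```

Note that the cudaSuccess (29) message appears at interpreter shutdown, after `net.forward()` has already returned, so you can still save the result before the process exits.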
@liruoteng Just try compiling with CUDA but without cuDNN.
@mnill Hey, firstly, I did not compile it with cuDNN. The Makefile.config file is attached.
Secondly, please watch the offensive language in the messages you sent me.
Makefile.config.txt
@mnill It worked smoothly without cuDNN.
@liruoteng Sorry about that.
Are you on Ubuntu 16.04 with CUDA 8?
@liruoteng Please check your script; this error happens only after all operations have completed successfully.
ubuntu 16.04
cudnn v6
caffe 1.0.0 (had to back off from the latest protoc)
rps@quasar:~/ml/hed/hed-master/examples/hed$ python solve.py
/usr/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0607 12:52:20.116489 10407 solver.cpp:44] Initializing solver from parameters:
test_iter: 0
test_interval: 1000000
base_lr: 1e-06
display: 20
max_iter: 30001
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0002
stepsize: 10000
snapshot: 1000
snapshot_prefix: "hed"
net: "train_val.prototxt"
iter_size: 10
I0607 12:52:20.116616 10407 solver.cpp:87] Creating training net from net file: train_val.prototxt
I0607 12:52:20.117053 10407 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer data
I0607 12:52:20.117338 10407 net.cpp:51] Initializing net from parameters:
name: "HED"
state {
phase: TRAIN
}
layer {
name: "data"
type: "ImageLabelmapData"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
mirror: false
mean_value: 104.00699
mean_value: 116.66877
mean_value: 122.67892
}
image_data_param {
source: "../../data/HED-BSDS/train_pair.lst"
batch_size: 1
shuffle: true
new_height: 0
new_width: 0
root_folder: "../../data/HED-BSDS/"
}
}
layer {
name: "conv1_1"
type: "Convolution"
bottom: "data"
top: "conv1_1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 35
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu1_1"
type: "ReLU"
bottom: "conv1_1"
top: "conv1_1"
}
layer {
name: "conv1_2"
type: "Convolution"
bottom: "conv1_1"
top: "conv1_2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu1_2"
type: "ReLU"
bottom: "conv1_2"
top: "conv1_2"
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1_2"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2_1"
type: "Convolution"
bottom: "pool1"
top: "conv2_1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu2_1"
type: "ReLU"
bottom: "conv2_1"
top: "conv2_1"
}
layer {
name: "conv2_2"
type: "Convolution"
bottom: "conv2_1"
top: "conv2_2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu2_2"
type: "ReLU"
bottom: "conv2_2"
top: "conv2_2"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2_2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv3_1"
type: "Convolution"
bottom: "pool2"
top: "conv3_1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu3_1"
type: "ReLU"
bottom: "conv3_1"
top: "conv3_1"
}
layer {
name: "conv3_2"
type: "Convolution"
bottom: "conv3_1"
top: "conv3_2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu3_2"
type: "ReLU"
bottom: "conv3_2"
top: "conv3_2"
}
layer {
name: "conv3_3"
type: "Convolution"
bottom: "conv3_2"
top: "conv3_3"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu3_3"
type: "ReLU"
bottom: "conv3_3"
top: "conv3_3"
}
layer {
name: "pool3"
type: "Pooling"
bottom: "conv3_3"
top: "pool3"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv4_1"
type: "Convolution"
bottom: "pool3"
top: "conv4_1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu4_1"
type: "ReLU"
bottom: "conv4_1"
top: "conv4_1"
}
layer {
name: "conv4_2"
type: "Convolution"
bottom: "conv4_1"
top: "conv4_2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu4_2"
type: "ReLU"
bottom: "conv4_2"
top: "conv4_2"
}
layer {
name: "conv4_3"
type: "Convolution"
bottom: "conv4_2"
top: "conv4_3"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu4_3"
type: "ReLU"
bottom: "conv4_3"
top: "conv4_3"
}
layer {
name: "pool4"
type: "Pooling"
bottom: "conv4_3"
top: "pool4"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv5_1"
type: "Convolution"
bottom: "pool4"
top: "conv5_1"
param {
lr_mult: 100
decay_mult: 1
}
param {
lr_mult: 200
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu5_1"
type: "ReLU"
bottom: "conv5_1"
top: "conv5_1"
}
layer {
name: "conv5_2"
type: "Convolution"
bottom: "conv5_1"
top: "conv5_2"
param {
lr_mult: 100
decay_mult: 1
}
param {
lr_mult: 200
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu5_2"
type: "ReLU"
bottom: "conv5_2"
top: "conv5_2"
}
layer {
name: "conv5_3"
type: "Convolution"
bottom: "conv5_2"
top: "conv5_3"
param {
lr_mult: 100
decay_mult: 1
}
param {
lr_mult: 200
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
engine: CAFFE
}
}
layer {
name: "relu5_3"
type: "ReLU"
bottom: "conv5_3"
top: "conv5_3"
}
layer {
name: "score-dsn1"
type: "Convolution"
bottom: "conv1_2"
top: "score-dsn1-up"
param {
lr_mult: 0.01
decay_mult: 1
}
param {
lr_mult: 0.02
decay_mult: 0
}
convolution_param {
num_output: 1
kernel_size: 1
engine: CAFFE
}
}
layer {
name: "crop"
type: "Crop"
bottom: "score-dsn1-up"
bottom: "data"
top: "upscore-dsn1"
}
layer {
type: "SigmoidCrossEntropyLoss"
bottom: "upscore-dsn1"
bottom: "label"
top: "dsn1_loss"
loss_weight: 1
}
layer {
name: "score-dsn2"
type: "Convolution"
bottom: "conv2_2"
top: "score-dsn2"
param {
lr_mult: 0.01
decay_mult: 1
}
param {
lr_mult: 0.02
decay_mult: 0
}
convolution_param {
num_output: 1
kernel_size: 1
engine: CAFFE
}
}
layer {
name: "upsample_2"
type: "Deconvolution"
bottom: "score-dsn2"
top: "score-dsn2-up"
param {
lr_mult: 0
decay_mult: 1
}
param {
lr_mult: 0
decay_mult: 0
}
convolution_param {
num_output: 1
kernel_size: 4
stride: 2
}
}
layer {
name: "crop"
type: "Crop"
bottom: "score-dsn2-up"
bottom: "data"
top: "upscore-dsn2"
}
layer {
type: "SigmoidCrossEntropyLoss"
bottom: "upscore-dsn2"
bottom: "label"
top: "dsn2_loss"
loss_weight: 1
}
layer {
name: "score-dsn3"
type: "Convolution"
bottom: "conv3_3"
top: "score-dsn3"
param {
lr_mult: 0.01
decay_mult: 1
}
param {
lr_mult: 0.02
decay_mult: 0
}
convolution_param {
num_output: 1
kernel_size: 1
engine: CAFFE
}
}
layer {
name: "upsample_4"
type: "Deconvolution"
bottom: "score-dsn3"
top: "score-dsn3-up"
param {
lr_mult: 0
decay_mult: 1
}
param {
lr_mult: 0
decay_mult: 0
}
convolution_param {
num_output: 1
kernel_size: 8
stride: 4
}
}
layer {
name: "crop"
type: "Crop"
bottom: "score-dsn3-up"
bottom: "data"
top: "upscore-dsn3"
}
layer {
type: "SigmoidCrossEntropyLoss"
bottom: "upscore-dsn3"
bottom: "label"
top: "dsn3_loss"
loss_weight: 1
}
layer {
name: "score-dsn4"
type: "Convolution"
bottom: "conv4_3"
top: "score-dsn4"
param {
lr_mult: 0.01
decay_mult: 1
}
param {
lr_mult: 0.02
decay_mult: 0
}
convolution_param {
num_output: 1
kernel_size: 1
engine: CAFFE
}
}
layer {
name: "upsample_8"
type: "Deconvolution"
bottom: "score-dsn4"
top: "score-dsn4-up"
param {
lr_mult: 0
decay_mult: 1
}
param {
lr_mult: 0
decay_mult: 0
}
convolution_param {
num_output: 1
kernel_size: 16
stride: 8
}
}
layer {
name: "crop"
type: "Crop"
bottom: "score-dsn4-up"
bottom: "data"
top: "upscore-dsn4"
}
layer {
type: "SigmoidCrossEntropyLoss"
bottom: "upscore-dsn4"
bottom: "label"
top: "dsn4_loss"
loss_weight: 1
}
layer {
name: "score-dsn5"
type: "Convolution"
bottom: "conv5_3"
top: "score-dsn5"
param {
lr_mult: 0.01
decay_mult: 1
}
param {
lr_mult: 0.02
decay_mult: 0
}
convolution_param {
num_output: 1
kernel_size: 1
engine: CAFFE
}
}
layer {
name: "upsample_16"
type: "Deconvolution"
bottom: "score-dsn5"
top: "score-dsn5-up"
param {
lr_mult: 0
decay_mult: 1
}
param {
lr_mult: 0
decay_mult: 0
}
convolution_param {
num_output: 1
kernel_size: 32
stride: 16
}
}
layer {
name: "crop"
type: "Crop"
bottom: "score-dsn5-up"
bottom: "data"
top: "upscore-dsn5"
}
layer {
type: "SigmoidCrossEntropyLoss"
bottom: "upscore-dsn5"
bottom: "label"
top: "dsn5_loss"
loss_weight: 1
}
layer {
name: "concat"
type: "Concat"
bottom: "upscore-dsn1"
bottom: "upscore-dsn2"
bottom: "upscore-dsn3"
bottom: "upscore-dsn4"
bottom: "upscore-dsn5"
top: "concat-upscore"
concat_param {
concat_dim: 1
}
}
layer {
name: "new-score-weighting"
type: "Convolution"
bottom: "concat-upscore"
top: "upscore-fuse"
param {
lr_mult: 0.001
decay_mult: 1
}
param {
lr_mult: 0.002
decay_mult: 0
}
convolution_param {
num_output: 1
kernel_size: 1
weight_filler {
type: "constant"
value: 0.2
}
engine: CAFFE
}
}
layer {
type: "SigmoidCrossEntropyLoss"
bottom: "upscore-fuse"
bottom: "label"
top: "fuse_loss"
loss_weight: 1
}
I0607 12:52:20.117825 10407 layer_factory.hpp:77] Creating layer data
F0607 12:52:20.117851 10407 layer_factory.hpp:81] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: ImageLabelmapData (known types: AbsVal, Accuracy, ArgMax, BNLL, BatchNorm, BatchReindex, Bias, Concat, ContrastiveLoss, Convolution, Crop, Data, Deconvolution, Dropout, DummyData, ELU, Eltwise, Embed, EuclideanLoss, Exp, Filter, Flatten, HDF5Data, HDF5Output, HingeLoss, Im2col, ImageData, InfogainLoss, InnerProduct, Input, LRN, LSTM, LSTMUnit, Log, MVN, MemoryData, MultinomialLogisticLoss, PReLU, Parameter, Pooling, Power, Python, RNN, ReLU, Reduction, Reshape, SPP, Scale, Sigmoid, SigmoidCrossEntropyLoss, Silence, Slice, Softmax, SoftmaxWithLoss, Split, TanH, Threshold, Tile, WindowData)
*** Check failure stack trace: ***
Aborted
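For what it's worth, this "Unknown layer type: ImageLabelmapData" failure usually means the binary was built from upstream BVLC Caffe (such as the caffe 1.0.0 mentioned above): `ImageLabelmapData` is a custom layer that only exists in the Caffe tree shipped inside the HED repository, so upstream builds never register it. A quick sanity check, assuming you run it from the root of the Caffe source you compiled:

```shell
# ImageLabelmapData is registered only in the HED fork's source tree,
# so an upstream BVLC checkout produces no matches here.
grep -rl "ImageLabelmapData" src/caffe include/caffe 2>/dev/null \
  && echo "HED fork: layer present" \
  || echo "layer not found: rebuild using the Caffe tree shipped with HED"
```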
@schaefer0 Have you solved the problem?
@Maghoumi Hi Maghoumi, I have the same environment as you but keep meeting errors when rebuilding. Could you tell me which version of Caffe you used? Thank you very much!
@zeakey Thank you so much! Everything works nicely and smoothly.
Hi DragonZZ, I have been working on a fairly large number of projects over the past years. Which problem are you referring to? bob s. (schaefer0)
Thanks for the reminder. My hardware accelerator was obsolete: I actually had more compute power in my graphics card than in the NVIDIA plug-in I was trying to use. The problem went away with the right hardware.
I also had the same problem:
F1105 20:11:10.021813 26811 layer_factory.hpp:81] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: ImageLabelmapData
What do you mean by "my hardware accelerator was obsolete"?
My machine has a GTX 1080 Ti, CUDA 8.0, and cuDNN v6.0.
Thanks!