deep-supervised-hashing-dsh's People

Contributors

lhmRyan

deep-supervised-hashing-dsh's Issues

Build error at 64%

Good day!

I cannot build the project as is. The following error occurs at 64% (Building CXX object src/caffe/CMakeFiles/caffe.dir/layers/hashing_loss_layer.cpp.o):

[...]/include/caffe/util/device_alternate.hpp:15:36: error: no ‘void caffe::HashingLossLayer<Dtype>::Forward_gpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&)’ member function declared in class ‘caffe::HashingLossLayer<Dtype>’
const vector<Blob<Dtype>*>& top) { NO_GPU; }
^
[...]/src/caffe/layers/hashing_loss_layer.cpp:118:1: note: in expansion of macro ‘STUB_GPU’
STUB_GPU(HashingLossLayer);
^
[...]/include/caffe/util/device_alternate.hpp:19:39: error: no ‘void caffe::HashingLossLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<bool>&, const std::vector<caffe::Blob<Dtype>*>&)’ member function declared in class ‘caffe::HashingLossLayer<Dtype>’
const vector<Blob<Dtype>*>& bottom) { NO_GPU; }
^
[...]/src/caffe/layers/hashing_loss_layer.cpp:118:1: note: in expansion of macro ‘STUB_GPU’
STUB_GPU(HashingLossLayer);
^
src/caffe/CMakeFiles/caffe.dir/build.make:2054: recipe for target 'src/caffe/CMakeFiles/caffe.dir/layers/hashing_loss_layer.cpp.o' failed
make[2]: *** [src/caffe/CMakeFiles/caffe.dir/layers/hashing_loss_layer.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:272: recipe for target 'src/caffe/CMakeFiles/caffe.dir/all' failed
make[1]: *** [src/caffe/CMakeFiles/caffe.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
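
For reference, the message itself points at the cause: STUB_GPU(HashingLossLayer) at line 118 of hashing_loss_layer.cpp expands into CPU-only stubs for Forward_gpu and Backward_gpu, but the class declaration in include/caffe/layers/hashing_loss_layer.hpp apparently never declares those members. A minimal sketch of the declarations the stub expects, mirroring other Caffe loss layers (one possible workaround, not a fix confirmed by the authors):

// Inside the HashingLossLayer<Dtype> class body in hashing_loss_layer.hpp.
// The STUB_GPU definitions in the .cpp will then simply LOG(FATAL) if the
// GPU path is ever invoked in a CPU-only build.
virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top);
virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom);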

At test time, is the input a single image or a pair?

Dear author,
In the paper, the loss is computed from three inputs, as in a Siamese network, and you also explained the benefit of generating image pairs online and that a single network can reach a cost similar to a Siamese network.
May I ask whether, at test time, you feed a single image or two?

Is a GPU required?

I tried to build this project on a VMware Ubuntu machine without a GPU and failed.
The error is as follows:
In file included from ./include/caffe/common.hpp:19:0,
from ./include/caffe/blob.hpp:8,
from ./include/caffe/layers/hashing_loss_layer.hpp:6,
from src/caffe/layers/hashing_loss_layer.cpp:4:
./include/caffe/util/device_alternate.hpp:15:36: error: no ‘void caffe::HashingLossLayer<Dtype>::Forward_gpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<caffe::Blob<Dtype>*>&)’ member function declared in class ‘caffe::HashingLossLayer<Dtype>’
const vector<Blob<Dtype>*>& top) { NO_GPU; }
^
src/caffe/layers/hashing_loss_layer.cpp:118:1: note: in expansion of macro ‘STUB_GPU’
STUB_GPU(HashingLossLayer);
^
./include/caffe/util/device_alternate.hpp:19:39: error: no ‘void caffe::HashingLossLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype>*>&, const std::vector<bool>&, const std::vector<caffe::Blob<Dtype>*>&)’ member function declared in class ‘caffe::HashingLossLayer<Dtype>’
const vector<Blob<Dtype>*>& bottom) { NO_GPU; }
^
src/caffe/layers/hashing_loss_layer.cpp:118:1: note: in expansion of macro ‘STUB_GPU’
STUB_GPU(HashingLossLayer);
^
Makefile:572: recipe for target '.build_release/src/caffe/layers/hashing_loss_layer.o' failed
make: *** [.build_release/src/caffe/layers/hashing_loss_layer.o] Error 1
make: *** Waiting for unfinished jobs....

I think building without a GPU is what causes this problem. Can I build the project without a GPU, or do I need a different machine?
Looking forward to your reply. Thanks!
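
For reference, building without a GPU should be possible in principle: in a CPU_ONLY build the GPU code path is never taken, and Caffe's base Layer already falls back to the CPU implementations. The failure comes from the STUB_GPU(HashingLossLayer) line at the bottom of hashing_loss_layer.cpp, which generates stubs for GPU methods that the header does not declare. A minimal sketch of one CPU-only workaround (an assumption about a workaround, not the authors' intended fix):

// src/caffe/layers/hashing_loss_layer.cpp, near line 118, CPU-only build:
// either declare Forward_gpu/Backward_gpu in the header as in the previous
// issue, or drop the stub entirely so that the base Layer's GPU methods
// fall back to Forward_cpu/Backward_cpu.
#ifdef CPU_ONLY
// STUB_GPU(HashingLossLayer);  // commented out: no GPU declarations exist in the header
#endif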

About NUS-WIDE

Dear authors,
Could you please tell me how to get the NUS-WIDE data set? I mean the version that contains only 21 concepts; I have only found data sets with 81 concepts on the NUS-WIDE website. Thank you!
@lhmRyan
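
For reference, the 21-concept variant used in many hashing papers is usually derived from the official 81-concept release by keeping the 21 most frequent concepts. Below is a minimal sketch of that selection, assuming the per-concept ground-truth files of the official download (one 0/1 line per image in each Labels_*.txt file); whether this reproduces the authors' exact split is not confirmed:

// Count positive images per concept and keep the 21 most frequent concepts.
#include <algorithm>
#include <cstdio>
#include <fstream>
#include <string>
#include <utility>
#include <vector>

int main() {
  // Assumed paths, e.g. "Groundtruth/AllLabels/Labels_airport.txt" (81 files).
  std::vector<std::string> label_files = { /* ... fill in the 81 paths ... */ };
  std::vector<std::pair<int, std::string> > freq;  // (#positive images, file)
  for (const std::string& path : label_files) {
    std::ifstream in(path);
    int v, count = 0;
    while (in >> v) count += v;             // each line is 0 or 1
    freq.push_back(std::make_pair(count, path));
  }
  std::sort(freq.rbegin(), freq.rend());    // most frequent first
  for (size_t i = 0; i < freq.size() && i < 21; ++i)
    std::printf("%s (%d positive images)\n", freq[i].second.c_str(), freq[i].first);
  return 0;
}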

"make test" and got the error "no matching function for call to ReadImageToDatum and ReadFileToDatum"

Dear authors,
I followed the instructions at http://caffe.berkeleyvision.org/installation.html to compile the source code. With "USE_CUDNN := 1" I got the error "too few arguments for cudnn::createPoolingDesc". I compared that file with upstream Caffe, copied Caffe's version, and it then compiled, but other files kept failing in the same way, so I realized these problems are caused by the cuDNN version. After commenting it out ("# USE_CUDNN := 1"), "make pycaffe" and "make all" succeed, but "make test" fails with "no matching function for call to ReadImageToDatum and ReadFileToDatum". I compared that file with Caffe's and they are the same, so I cannot resolve the problem. Can you give me some advice?

HashLoss Layer question

Dear authors,
I am reproducing your DSH program, but I found that the hashing loss layer does not include the back-propagation code. Has it not been published? Would it be possible for you to provide it? Thank you very much!
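
For reference, the pairwise loss in the DSH paper (a contrastive term plus an L1 relaxation penalty) has a simple closed-form subgradient, so a backward pass can be written down directly from the formula. Below is a minimal standalone sketch for a single pair, written with the convention y = 0 for similar and y = 1 for dissimilar pairs, margin m and regularizer weight alpha; the names are illustrative and this is not the repository's actual layer code:

#include <cmath>
#include <vector>

// Subgradient of L = 0.5*(1-y)*||b1-b2||^2 + 0.5*y*max(m - ||b1-b2||^2, 0)
//                    + alpha*(|| |b1|-1 ||_1 + || |b2|-1 ||_1)
// with respect to the two code vectors b1 and b2.
void DshPairGradient(const std::vector<float>& b1, const std::vector<float>& b2,
                     int y, float m, float alpha,
                     std::vector<float>* g1, std::vector<float>* g2) {
  const int k = static_cast<int>(b1.size());
  g1->assign(k, 0.f);
  g2->assign(k, 0.f);
  float dist2 = 0.f;
  for (int i = 0; i < k; ++i) {
    const float d = b1[i] - b2[i];
    dist2 += d * d;
  }
  for (int i = 0; i < k; ++i) {
    const float d = b1[i] - b2[i];
    float g = 0.f;
    if (y == 0) {                // similar pair: pull the codes together
      g = d;
    } else if (dist2 < m) {      // dissimilar pair inside the margin: push apart
      g = -d;
    }
    // Subgradient of the relaxation term: d/db | |b| - 1 | = sign(|b|-1)*sign(b).
    const float r1 = alpha * (std::fabs(b1[i]) >= 1.f ? 1.f : -1.f) * (b1[i] >= 0.f ? 1.f : -1.f);
    const float r2 = alpha * (std::fabs(b2[i]) >= 1.f ? 1.f : -1.f) * (b2[i] >= 0.f ? 1.f : -1.f);
    (*g1)[i] = g + r1;
    (*g2)[i] = -g + r2;
  }
}

During training this per-pair gradient would be accumulated over all pairs formed within a batch and scaled by the loss weight.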

A question about hashing_image_data_layer.cpp

Hello, I have read the code of hashing_image_data_layer.cpp carefully and have a question. For example, if batch_size is set to 200 and cat_per_iters to 10, then 20 samples are drawn from each class; how are the similar/dissimilar pairs constructed afterwards? Since the 20 samples of the same class are stored consecutively in cpu_data in your code, when they are read later, won't the similar pairs clearly outnumber the dissimilar ones? Thanks!
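
As a rough count, if every pair within one batch were enumerated (an assumption about the pairing strategy, not a reading of the actual layer code), dissimilar pairs would in fact outnumber similar ones regardless of how the samples are ordered in cpu_data:

#include <cstdio>

int main() {
  const int batch_size = 200, cat_per_iters = 10;
  const int per_class = batch_size / cat_per_iters;                                   // 20
  const long total_pairs   = (long)batch_size * (batch_size - 1) / 2;                 // 19900
  const long similar_pairs = (long)cat_per_iters * per_class * (per_class - 1) / 2;   // 1900
  std::printf("similar: %ld, dissimilar: %ld\n", similar_pairs, total_pairs - similar_pairs);
  return 0;  // 1900 similar vs. 18000 dissimilar under this assumption
}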

make test fails with an error

Hello, your code fails during make test with the error
make: *** [.build_release/src/caffe/test/test_memory_data_layer.o] Error 1
but when I compile the official Caffe directly, this problem does not occur. Could you explain why?

mAP becomes worse when I change the hash code dimension to 24 or larger

The default hash code dimension is 12. I changed it to 24 directly in train_test.prototxt instead of fine-tuning, with the other hyperparameters unchanged. Running train_full.sh gives a worse mAP of 0.6853, while the mAP at 12 dimensions is 0.6987. I don't know why. When I change it to 36, I get an mAP of 0.5839.

Could you give me some advice on improving the result when the hash dimension is larger? Thanks!
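
One hedged observation rather than a confirmed diagnosis: in the DSH paper the margin of the dissimilar-pair term is reported to be tied to the code length k (m = 2k), so changing the code dimension in train_test.prototxt without rescaling the margin (and possibly the quantization weight) shifts the balance between the loss terms. A tiny sketch of that scaling rule; the corresponding prototxt parameter name is not assumed here:

#include <cstdio>

int main() {
  const int code_lengths[] = {12, 24, 36};
  for (int k : code_lengths) {
    std::printf("k = %2d bits -> margin m = 2k = %d\n", k, 2 * k);  // heuristic from the paper
  }
  return 0;
}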

How to read the file ‘code.dat’?

Hello, dear author. I am working on a project that needs hash codes for fast retrieval, and you have done a great job on deep hashing. I would like to ask: how can I read the hash codes from the file 'code.dat'?
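
The layout of code.dat is not documented here, so the sketch below rests entirely on an assumption: that the file stores the raw float outputs of the hash layer as a flat binary array of n * k values, which are then binarized by sign as in DSH. Adjust k, the element type, and any header handling to match however the file was actually written:

#include <cstdio>
#include <fstream>
#include <vector>

int main() {
  const int k = 12;  // code length used for training (assumed)
  std::ifstream in("code.dat", std::ios::binary | std::ios::ate);
  const std::streamsize bytes = in.tellg();
  in.seekg(0);
  std::vector<float> vals(bytes / sizeof(float));
  in.read(reinterpret_cast<char*>(vals.data()), vals.size() * sizeof(float));
  const size_t n = vals.size() / k;  // number of images (assuming no header)
  for (size_t i = 0; i < n; ++i) {
    for (int j = 0; j < k; ++j)
      std::printf("%d", vals[i * k + j] >= 0.f ? 1 : 0);  // sign -> bit
    std::printf("\n");
  }
  return 0;
}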
