Comments (14)
My personal homepage is still under construction. If you have any questions, feel free to contact me via email.
from deep-supervised-hashing-dsh.
About cuDNN: our code uses cuDNN v4. You can download the cuDNN v4 tar file, untar it, and add its "include" and "lib64" directories to Makefile.config (e.g. INCLUDE_DIRS :=
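For reference, the relevant Makefile.config entries in a standard Caffe setup might look like the following (the /usr/local/cudnn path is a hypothetical example; substitute wherever you untarred cuDNN):

```makefile
# Hypothetical example: cuDNN v4 untarred to /usr/local/cudnn
USE_CUDNN := 1
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/local/cudnn/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/local/cudnn/lib64
```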
About "make test": I modified the code of "io.cpp" to support multi-label LMDB/LevelDB, but I didn't modify the test code, so "make test" won't work here. However, our code works normally as long as you can run "make all" successfully.
Thank you so much!!! I used "make test -i" to ignore the errors (crazy...) and followed your example usage CIFAR-10/train_full.sh to run your experiment. I didn't use cuDNN, and it is running now, but the speed is not fast: about 12 seconds per 100 iterations, with an average loss of about 1.2. Is that normal?
Without cuDNN, 12 seconds per 100 iterations is normal. With our code, the final loss on CIFAR-10 is about 0.6 (12-bit codes), and the corresponding retrieval mAP is about 0.67.
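(For anyone reproducing that number: retrieval mAP over binary codes can be computed along these lines. This is a minimal NumPy sketch with function and variable names of my own choosing, not the evaluation code shipped with this repository.)

```python
import numpy as np

def hamming_map(query_codes, db_codes, query_labels, db_labels):
    """Mean average precision of Hamming-ranked retrieval.

    Codes are +/-1 (or 0/1) binary matrices; labels are integer class ids.
    Illustrative sketch only, not the repository's evaluation code.
    """
    aps = []
    for q, ql in zip(query_codes, query_labels):
        dist = np.count_nonzero(db_codes != q, axis=1)   # Hamming distance to each db code
        order = np.argsort(dist, kind="stable")          # rank database by distance
        relevant = (db_labels[order] == ql).astype(float)
        if relevant.sum() == 0:
            continue
        cum_hits = np.cumsum(relevant)
        ranks = np.nonzero(relevant)[0] + 1              # 1-based ranks of relevant items
        aps.append((cum_hits[relevant > 0] / ranks).mean())
    return float(np.mean(aps))
```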
Thank you for your help!!!
Dear authors,
I have a question: why do you use image pairs as the training input?
We use image pairs and their corresponding similarity labels so that the network can learn to preserve the relationships between images defined by the similarity labels. This scheme is widely adopted in many hashing methods.
Actually, other loss functions for metric learning, the triplet ranking loss for example, could also be adopted, which we've evaluated in our recent experiments. Therefore, using image pairs is just one option; other choices are also available.
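(To make the idea concrete, here is a minimal NumPy sketch of a contrastive-style pairwise loss in this spirit. The function name, margin value, and regularizer weight are illustrative choices of mine, not the exact DSH formulation or the repository's HashLoss layer.)

```python
import numpy as np

def pairwise_hash_loss(b1, b2, similar, margin=2.0, alpha=0.01):
    """Contrastive-style loss on a pair of real-valued code vectors.

    similar=True pulls the codes together; similar=False pushes them at
    least `margin` apart in squared Euclidean distance. The alpha term is
    a quantization regularizer nudging each entry toward +1/-1.
    Simplified illustration, not the exact DSH loss.
    """
    d2 = float(np.sum((b1 - b2) ** 2))       # squared Euclidean distance
    if similar:
        loss = 0.5 * d2
    else:
        loss = 0.5 * max(margin - d2, 0.0)   # hinge on dissimilar pairs
    reg = np.abs(np.abs(b1) - 1).sum() + np.abs(np.abs(b2) - 1).sum()
    return loss + alpha * reg
```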
Thank you so much for your detailed and patient explanation!! If I may ask, do you have a personal homepage? I am very interested in your research and would like to learn from your work.
Dear lhmRyan, I followed your advice to include cuDNN v4 and ran "make all" successfully.
However, my model didn't converge at all; the loss stays around 10.8 on average. By the way, I haven't changed any code except Makefile.config!!!
I have no idea what's wrong. Could you give me some advice?? @lhmRyan
@ahhan02 That's strange. I just cloned this repository to my computer, compiled it, downloaded the CIFAR-10 data, and trained the model. After 100 iterations, the loss drops to about 2.3.
You could try training the model again. If it still doesn't converge, please provide some more information, and maybe we can figure out the problem together.
Aha, I recompiled the project without cuDNN v4, and it works now....
Anyway, I appreciate your help. @lhmRyan
That's strange. I'll have a look at my code.
Hello, authors. I recently read your CVPR 2016 paper again, and I have a question about the loss. In Eqn. (6), is it the case that σ(x) = 1 when x belongs to (-1, 0) ∪ [1, ∞), and -1 otherwise? Eqn. (5) uses this function, but in Eqn. (5), is b_{i,j} a vector, or has b_{i,j} been relaxed to a continuous variable?
@liuyuying0829 Hi. In Eqns. (3)-(5), b_{i,j} has been relaxed to continuous vectors.
As explained below Eqn. (6), that equation is applied to Eqn. (5) element-wise. Namely, Eqn. (6) is applied to each element of b_{i,j} separately.
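(Concretely, "element-wise" just means evaluating the scalar function on each coordinate of the relaxed code vector. A toy NumPy sketch, using the piecewise definition as stated in the question; consult Eqn. (6) in the paper for the exact intervals:)

```python
import numpy as np

def sigma(x):
    """Scalar piecewise function as stated in the question:
    1 when x is in (-1, 0) or [1, inf), else -1.
    Check Eqn. (6) in the paper for the exact intervals."""
    return 1.0 if (-1 < x < 0) or (x >= 1) else -1.0

def sigma_elementwise(b):
    """Apply sigma to each element of the relaxed code vector b_ij."""
    return np.array([sigma(x) for x in b])
```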
Related Issues (14)
- The code reports errors during make test HOT 7
- Must use GPU? HOT 4
- A question about hashing_image_data_layer.cpp HOT 2
- At test time, is the input a single image or two images? HOT 5
- HashLoss Layer question HOT 3
- mAP becomes worse when I change the dimension of the hash code to 24 or larger. HOT 2
- Error generating at 64% HOT 3
- What is 'hashing_image_data_layer' used for? HOT 6
- Is mAP computed by querying only within the test set? HOT 2
- Nothing in the distribute folder after make all -j4; compared to the original Caffe, there is no profile. Why? HOT 1
- error: ‘class caffe::LayerParameter’ has no member named ‘hashing_loss_param’ HOT 1
- About NUS-WIDE HOT 12
- How to read the file ‘code.dat’? HOT 1