zhenye-na / image-similarity-using-deep-ranking

🖼️ PyTorch implementation of "Learning Fine-grained Image Similarity with Deep Ranking" (arXiv:1404.4661)
The accuracy is 0.0 after I finish running accuracy.py. There are two places where I made changes: batch_size_train and batch_size_test were changed to 5 and 1, and I do not use "torch.nn.DataParallel(net)". Could that be the cause? I don't think so, but I don't know what the problem is. Any suggestions you could offer? Many thanks!
The final result just looks like this:
Get embedded_features, Done ... | Time elapsed 13727.054944753647s
0.0
1
Test accuracy 0.0%
Just wanted to know: are the triplets generally generated automatically, or manually by the user?
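For reference, in deep-ranking pipelines the triplets are usually sampled automatically from class labels (this repo ships a precomputed triplets.txt). A minimal sketch of automatic sampling; the dictionary layout and helper name here are hypothetical, not this repo's code:

```python
import random

def sample_triplet(images_by_class):
    """Sample an (anchor, positive, negative) triplet from a
    {class_label: [image_path, ...]} mapping (hypothetical structure).
    Anchor and positive share a class; negative comes from another class."""
    pos_class, neg_class = random.sample(list(images_by_class), 2)
    anchor, positive = random.sample(images_by_class[pos_class], 2)
    negative = random.choice(images_by_class[neg_class])
    return anchor, positive, negative
```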
Try having a look at the resources below:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model.to(device)
From the error message, it appears your model was not trained on the GPU.
The net seems to require 3 images as input.
Is there any way to use just 2 images?
Thank you!
mini Batch Loss: 0.838141679763794
mini Batch Loss: 0.038405612111091614
mini Batch Loss: 0.28175681829452515
mini Batch Loss: 1.1555051803588867
mini Batch Loss: 1.0039609670639038
mini Batch Loss: 0.43094536662101746
mini Batch Loss: 1.021866798400879
mini Batch Loss: 0.7752935886383057
mini Batch Loss: 0.23081330955028534
mini Batch Loss: 0.0
mini Batch Loss: 0.24504965543746948
mini Batch Loss: 0.8028221130371094
mini Batch Loss: 0.511269748210907
mini Batch Loss: 0.6132020354270935
mini Batch Loss: 1.1050782203674316
mini Batch Loss: 0.0
mini Batch Loss: 0.0
mini Batch Loss: 0.016318656504154205
mini Batch Loss: 0.0
mini Batch Loss: 0.0
mini Batch Loss: 0.014867536723613739
mini Batch Loss: 0.6881022453308105
mini Batch Loss: 0.0
mini Batch Loss: 0.08096975088119507
mini Batch Loss: 0.0
mini Batch Loss: 0.0
mini Batch Loss: 0.05993702635169029
mini Batch Loss: 0.8831809759140015
mini Batch Loss: 0.0
mini Batch Loss: 0.5886887311935425
mini Batch Loss: 0.241916224360466
mini Batch Loss: 0.29788437485694885
mini Batch Loss: 0.0
mini Batch Loss: 0.0
mini Batch Loss: 0.00650769891217351
mini Batch Loss: 1.1349914073944092
mini Batch Loss: 0.13306227326393127
mini Batch Loss: 0.0
I use the default settings and the provided triplets.txt to fine-tune the pretrained ResNet-34 from torchvision.
But as the figure above shows, there are a lot of zeros. I'm not sure whether it is normal that the loss starts from 3, since the loss visualization in your README.md starts around 0.9.
Also, my loss distribution is not as smooth as yours.
Any suggestion?
Thank you!!
Hi there,
I'm a bit confused. How can I use (run inference with) my trained model after training?
When I load my trained model and set it to eval mode, how can I generate the embedding vector for a single image?
Thanks!
==> Preparing Tiny ImageNet dataset ...
==> Retrieve model parameters ...
Get all test image classes, Done ... | Time elapsed 0.015627145767211914s
Get all training images, Done ... | Time elapsed 1.7184438705444336s
Get embedded_features, Done ... | Time elapsed 2935.702290058136s
Now processing 0th test image
Traceback (most recent call last):
File "D:\deep-ranking\model7\accuracy.py", line 206, in <module>
main()
File "D:\deep-ranking\model7\accuracy.py", line 202, in main
calculate_accuracy(trainloader, testloader, args.is_gpu)
File "D:\deep-ranking\model7\accuracy.py", line 112, in calculate_accuracy
embedded_test_numpy, (embedded_features_train.shape[0], 1))
File "C:\Users\DoDo\Anaconda3\envs\deep-ranking\lib\site-packages\numpy\lib\shape_base.py", line 1241, in tile
c = c.reshape(-1, n).repeat(nrep, 0)
MemoryError
What is the problem?
Can anyone help me, please?
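The MemoryError comes from `np.tile` materializing a huge (n_test × n_train) copy of the embeddings. One way around it (a sketch, not this repo's code) is to compute squared distances in small batches via the dot-product identity, so only a (batch × n_train) block exists at a time:

```python
import numpy as np

def nearest_train_indices(train_emb, test_emb, batch=256):
    """For each test embedding, index of the nearest training embedding,
    computed without tiling the full test-by-train array."""
    train_sq = (train_emb ** 2).sum(axis=1)          # (n_train,)
    nearest = np.empty(len(test_emb), dtype=np.int64)
    for start in range(0, len(test_emb), batch):
        chunk = test_emb[start:start + batch]        # (b, d)
        # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2 -> (b, n_train) block only
        d2 = (chunk ** 2).sum(axis=1)[:, None] - 2.0 * chunk @ train_emb.T + train_sq
        nearest[start:start + batch] = d2.argmin(axis=1)
    return nearest
```

Shrinking `batch` trades speed for memory if the (batch × n_train) block is still too large.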
Hi,
Great project! Would you mind adding a license?
Thanks!
I think the code just saves a model every epoch; there is a flag called "is_best", but it is never used.
Hello, I'm receiving the following error while trying to execute this command:
python3 acc_knn.py --predict_similar_images "../tiny-imagenet-200/test/images/test_9970.JPEG" --predict_top_N 5
==> Preparing Tiny ImageNet dataset ...
Get all training images, Done ... | Time elapsed 74.98976492881775s
Traceback (most recent call last):
File "acc_knn.py", line 238, in <module>
main()
File "acc_knn.py", line 229, in main
embedding_space = load_train_embedding()
File "acc_knn.py", line 33, in load_train_embedding
embedding_space = np.fromfile("../embedded_features_train.txt", dtype=np.float32)
FileNotFoundError: [Errno 2] No such file or directory: '../embedded_features_train.txt'
I looked for the file, but it is not in the repo. Can someone help me sort this out? Or can the file be obtained from somewhere?
Thanks
Why does this error occur?
==> Preparing Tiny ImageNet dataset ...
==> Retrieve model parameters ...
Get all test image classes, Done ... | Time elapsed 0.014259099960327148s
Get all training images, Done ... | Time elapsed 0.10345053672790527s
Get embedded_features, Done ... | Time elapsed 5874.124089002609s
Now processing 0th test image
Traceback (most recent call last):
File "D:\deep-ranking\model7\accuracy.py", line 206, in <module>
main()
File "D:\deep-ranking\model7\accuracy.py", line 202, in main
calculate_accuracy(trainloader, testloader, args.is_gpu)
File "D:\deep-ranking\model7\accuracy.py", line 115, in calculate_accuracy
embedding_diff = embedded_features_train - embedded_features_test
ValueError: operands could not be broadcast together with shapes (100000,128) (200000,128)
Could you provide the testing code? Thanks!
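For context on the broadcast error above: NumPy only broadcasts arrays whose dimensions match or are 1, so embeddings of shapes (100000, 128) and (200000, 128) cannot be subtracted elementwise. One workaround (a sketch, assuming the goal is nearest-neighbor lookup) is to compare one test embedding at a time against all training embeddings:

```python
import numpy as np

def nearest_index(train_emb, test_vec):
    """Index of the training embedding closest to a single test vector.
    (n_train, d) - (d,) broadcasts cleanly to (n_train, d)."""
    d2 = ((train_emb - test_vec) ** 2).sum(axis=1)   # (n_train,)
    return int(d2.argmin())
```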