Is the purpose of [update_opt_from_json] to exclude the parameters that need to be updated manually, and then to update the remaining parameters that share a name with the command-line parameters but whose saved values differ from the command-line defaults? Is that right?
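In code terms, is the behavior something like the sketch below? The EXCLUDE set and the option names here are my own placeholders, not the actual code:

```python
import json

# Hypothetical sketch of the behavior described above; the EXCLUDE set and
# option names are illustrative, not the repository's actual code.
EXCLUDE = {'resume_path', 'phase'}  # parameters that must be updated manually

def update_opt_from_json(opt, parser, json_path):
    with open(json_path) as f:
        saved = json.load(f)
    for key, value in saved.items():
        if key in EXCLUDE:
            continue  # leave these to manual updating
        # Update options that share a name with a command-line parameter
        # but whose saved value differs from the command-line default.
        if hasattr(opt, key) and value != parser.get_default(key):
            setattr(opt, key, value)
    return opt
```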
Great work! I have been using your code, and I noticed that the best results are achieved around the third or fourth epoch of training. Is this due to some special training settings?
Dear authors, I am trying to use vis_results.py to evaluate STUN (teacher and student) trained from scratch, but I found that no embs.pickle file is generated. How can I generate the embs.pickle file during training and evaluation so that vis_results.py can use it?
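For reference, this is the kind of dump I would expect to produce such a file; model, eval_loader, and the pickle layout here are my guesses, not the repository's actual interface:

```python
import pickle
import torch

@torch.no_grad()
def dump_embeddings(model, eval_loader, out_path='embs.pickle'):
    # Run the trained network over the evaluation set and save the stacked
    # embeddings so a visualization script can load them later.
    model.eval()
    embs = []
    for images, _ in eval_loader:
        embs.append(model(images).cpu())
    with open(out_path, 'wb') as f:
        pickle.dump(torch.cat(embs).numpy(), f)
```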
STUN is great work. I have read the source code you shared and benefited a lot from it. Thank you for sharing.
When can you publish your training code?
Thank you for your great work! I have a question about your pitts.py code. Since self.cache = None on line 253, doesn't with h5py.File(self.cache, mode='r') as h5 on line 270 fail to open any file? What is that with h5py.File(self.cache, mode='r') as h5 block meant to do?
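My guess is that self.cache is supposed to be assigned elsewhere before the dataset is indexed, following the usual descriptor-caching pattern; here is a self-contained sketch of that pattern (the workflow is assumed, not taken from pitts.py):

```python
import h5py
import numpy as np

class TinyDataset:
    def __init__(self):
        self.cache = None  # mirrors line 253: no cache file yet

    def __getitem__(self, index):
        # Mirrors line 270: this raises if self.cache was never assigned.
        with h5py.File(self.cache, mode='r') as h5:
            return h5['features'][index]

ds = TinyDataset()

# Assumed workflow: a caching step first writes descriptors to an HDF5
# file and then points the dataset at that file before any item is read.
cache_path = 'descriptors_cache.h5'
with h5py.File(cache_path, mode='w') as h5:
    h5.create_dataset('features', data=np.random.rand(100, 256))
ds.cache = cache_path

feat = ds[0]  # succeeds only because ds.cache was set first
```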
I have one question about the experimental results in this paper.
The original NetVLAD reports a recall@1 of about 86% on Pitts30k, whereas the recall@1 of the network proposed in this paper measures at about 61%.
I think much of this difference can be attributed to the pooling layer: this paper uses GeM pooling instead of the VLAD pooling layer.
Could you share any personal experience or insight into the reason for this choice?
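For concreteness, GeM (generalized-mean) pooling raises activations to a power p, averages them spatially, and takes the p-th root; here is a minimal PyTorch sketch of the standard formulation (not necessarily the paper's exact module):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized-mean pooling: ((1/HW) * sum_i x_i^p)^(1/p) per channel."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))  # learnable exponent
        self.eps = eps  # clamp floor so the power is well defined

    def forward(self, x):                       # x: (B, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.avg_pool2d(x, kernel_size=(x.size(-2), x.size(-1)))
        return x.pow(1.0 / self.p).flatten(1)   # (B, C) global descriptor
```

With p = 1 this reduces to average pooling, and large p approaches max pooling; either way it yields a C-dimensional descriptor, whereas NetVLAD aggregates residuals against K learned cluster centers into a much higher-dimensional (K x C) descriptor, which may partly explain the recall gap.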
Dear authors, I got the following warning while training with your code via python main.py --phase=train_tea --loss=cont. Did you see the same warning during training, and how did you solve it?
/root/miniconda3/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:129: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
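For reference, the warning is only about call order; since PyTorch 1.1.0 the expected pattern is optimizer.step() before scheduler.step(), as in this generic sketch (not the repository's actual training loop):

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5)

for epoch in range(10):
    for _ in range(3):  # a few dummy batches
        optimizer.zero_grad()
        loss = model(torch.randn(4, 10)).sum()
        loss.backward()
        optimizer.step()   # update the weights first...
    scheduler.step()       # ...then advance the learning-rate schedule
```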