
cal's People

Contributors

chengy12, raoyongming


cal's Issues

Failed to run

Hi! After pulling the code and reading the README, is it enough to place the dataset in the corresponding location and then run it? I tried this, but it did not succeed.

Output Feature During Inference Stage

I quickly checked the model script baseline.py and noticed that you use the cls_score as the output during inference. I am wondering whether your published results were generated from this, rather than from the features before the classifier, which is the usual practice in popular reid frameworks.
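
For context, the usual reid convention looks like the following minimal sketch; the module structure and attribute names here are hypothetical placeholders, not the actual baseline.py from this repo:

    import torch
    import torch.nn as nn

    class Baseline(nn.Module):
        def __init__(self, num_classes, feat_dim=2048):
            super().__init__()
            # Stand-in backbone; a real model would use a ResNet etc.
            self.backbone = nn.Sequential(nn.Conv2d(3, feat_dim, 1),
                                          nn.AdaptiveAvgPool2d(1))
            self.bnneck = nn.BatchNorm1d(feat_dim)
            self.classifier = nn.Linear(feat_dim, num_classes, bias=False)

        def forward(self, x):
            feat = self.backbone(x).flatten(1)    # global feature (B, feat_dim)
            feat_bn = self.bnneck(feat)
            if self.training:
                return self.classifier(feat_bn)   # cls_score, used only for the loss
            return feat_bn                        # test time: return the feature itself

Under this convention, retrieval is performed by comparing query and gallery features with cosine or Euclidean distance, rather than by class scores.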

About the counterfactual

Hi, I read your paper and learned a lot from it. I have a question about counterfactual attention: you use random attention sampled from a uniform distribution U(0, 2). I wonder why you do not sample from a Gaussian distribution instead, which would seem more common and intuitive. Or is there a reference for this design choice?
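
For reference, a minimal sketch contrasting the two sampling choices; the shapes are illustrative and this is not the repo's actual implementation:

    import torch

    def counterfactual_attention(shape, dist="uniform"):
        if dist == "uniform":
            # U(0, 2): bounded, non-negative, mean 1 (as in the paper)
            return torch.rand(shape) * 2.0
        # Gaussian N(1, 1): same mean, but unbounded and possibly negative,
        # so it would need clamping before use as attention weights.
        return torch.randn(shape) + 1.0

    fake_att = counterfactual_attention((8, 32, 14, 14))  # (B, M, H, W), illustrative

One practical point in favour of U(0, 2): it is bounded and non-negative with mean 1, whereas Gaussian samples can be negative and would need clamping before being used as attention weights.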

Some questions about the inference stage

Hello, your work is very nice. Looking at the relevant figure in the paper, I would like to confirm: during inference, is it unnecessary to compute the counterfactual attention, i.e. is only the actual-attention branch used to produce the output $\hat{y}$?
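
For reference, a minimal sketch of this training/inference split, with placeholder function names (attend and classify are illustrative, not the repo's actual API):

    import torch

    def forward(model, x, training=False):
        # Actual attention branch: always computed.
        att = model.attend(x)                  # attention maps (placeholder call)
        y_hat = model.classify(x, att)         # prediction with actual attention
        if not training:
            return y_hat                       # inference: counterfactual branch skipped
        # Training only: counterfactual attention sampled from U(0, 2).
        fake_att = torch.rand_like(att) * 2.0
        y_fake = model.classify(x, fake_att)
        return y_hat, y_hat - y_fake           # effect term used in the training loss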

cannot reproduce the accuracy on CUB

I am trying to reproduce the result on the CUB dataset, i.e. 90.6% accuracy (Table 1 in the paper). I used the same config and startup script as the repo, but only got 90.03% accuracy at the last epoch. I notice that the total number of training epochs for the FGVC task is not reported in the paper. What is the proper number of epochs to reach 90.6%? Are there any other factors that could affect reproducing this accuracy?

Please see the attached training log (train.log). Thanks.

FGVC pre-trained models

Hi! Could you provide the pre-trained models for the FGVC tasks? Sorry to bother you!

Pre-trained weights

Hello everyone :)
thanks for your nice work! Do you provide pre-trained weights for the person re-identification datasets somewhere?
Thanks in advance! :)

Accuracy

Hi, I made improvements on top of your paper; the training accuracy is 90.43%, but running infer.py gives an accuracy of 90.28%. May I ask what causes this gap?

About WSDAN-CAL in the train_distributed.py

I have tried training with this code, but I am still confused by parts of the train_distributed.py file:

1. In this code, the aug variable uses the crop method and the aux variable uses the drop method.

2. But why does the aug_images variable use both methods (crop and drop), not just crop? (See the sketch below.)

Thank you
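
For reference, a simplified sketch of the two attention-guided operations; this is a reimplementation under stated assumptions (att_map assumed to be (B, 1, h, w)), not the repo's exact batch_augment:

    import torch
    import torch.nn.functional as F

    def attention_augment(images, att_map, mode, theta=0.5):
        # images: (B, 3, H, W); att_map: (B, 1, h, w); mode: "crop" or "drop"
        B, _, H, W = images.shape
        att = F.interpolate(att_map, size=(H, W), mode="bilinear",
                            align_corners=False)
        out = []
        for i in range(B):
            a = att[i, 0]
            mask = a >= a.max() * theta          # high-attention region
            if mode == "drop":
                # Drop: erase the most-attended region to force other cues.
                out.append(images[i] * (~mask).float().unsqueeze(0))
            else:
                # Crop: zoom into the bounding box of the attended region.
                ys, xs = torch.nonzero(mask, as_tuple=True)
                crop = images[i:i + 1, :, ys.min():ys.max() + 1,
                              xs.min():xs.max() + 1]
                out.append(F.interpolate(crop, size=(H, W), mode="bilinear",
                                         align_corners=False)[0])
        return torch.stack(out)

Under this reading, applying both modes would simply give the network two complementary views per step: one zoomed into the discriminative part and one with that part erased.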

WSDAN_CAL

Hello, could you explain the WSDAN_CAL architecture, ideally with a clearer visualisation? I have read the paper, and I have also run the model and obtained results, but I am still quite confused about how it works. Thank you for your help; I would be very happy to receive it.

demo

@raoyongming Hi, could you provide a demo, i.e. a Python script that runs detection on a single image? Thanks.
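
Until an official demo exists, a minimal single-image inference sketch along these lines may help; the input size, normalization values, and the guard for a tuple-returning forward pass are assumptions to verify against the repo's config:

    import torch
    from PIL import Image
    from torchvision import transforms

    # Typical ImageNet preprocessing; adjust size/stats to the repo's config.
    preprocess = transforms.Compose([
        transforms.Resize((448, 448)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def predict(model, image_path, device="cuda"):
        model.eval().to(device)
        x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
        with torch.no_grad():
            out = model(x)
            # Some forward passes return tuples; keep the logits tensor.
            logits = out[0] if isinstance(out, tuple) else out
        return logits.softmax(-1).argmax(-1).item()  # predicted class index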

Batch Size and Accuracy

Hi, I read your paper and found it very inspiring, and I reproduced your code on a GPU. I noticed that the code sets batch_size=4 while the paper uses batch_size=16; does this affect the accuracy?

TypeError: linear(): argument 'input' (position 1) must be Tensor, not tuple

When I run infer.py in fgvc, I get this error.

File "infer.py", line 100, in visualize
attention_maps = net.visualize(X)
File "/kaggle/working/CAL/fgvc/models/cal.py", line 180, in visualize
p = self.fc(feature_matrix * 100)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 96, in forward
return F.linear(input, self.weight, self.bias)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 1847, in linear
return torch._C._nn.linear(input, weight, bias)
TypeError: linear(): argument 'input' (position 1) must be Tensor, not tuple

Can you help me fix this? Thank you!
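
One hedged guess, judging from the traceback: feature_matrix reaches self.fc as a tuple, so unpacking the tensor first inside visualize() in cal.py may resolve it (attribute names follow the traceback; verify against your local copy):

    if isinstance(feature_matrix, tuple):
        feature_matrix = feature_matrix[0]  # keep the tensor, drop any extra outputs
    p = self.fc(feature_matrix * 100)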

Reproduce Result for MSMT dataset

I am trying to reproduce the results reported in the paper for MSMT, namely mAP@64% and Rank-1@…%; however, I could not. May I ask which backbone you are using to get these results? Is it the same as the code in your repository, or do you use a different approach? And do you take the best checkpoint during training, or the result after training for the full 160 epochs? I am sorry if these questions bother you. Thanks in advance!
