
shengcailiao / transmatcher

23 stars · 1 watcher · 3 forks · 5.41 MB

[NeurIPS 2021] TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification

Home Page: https://arxiv.org/abs/2105.14432

License: MIT License

Languages: Python 97.12%, MATLAB 2.63%, Shell 0.24%
correspondence deep-metric-learning domain-generalization generalizability generalization image-matching interpretability interpretable-deep-learning metric-learning person-re-identification

transmatcher's People

Contributors

shengcailiao


transmatcher's Issues

Trained models

Hello,

I'm trying to just run the test portion of the code using the following command:
python main.py --dataset market --testset duke[,market] --evaluate

But I get the error:
No checkpoint found at '/TransMatcher/Exp/Market/TransMatcher/res50-ibnb-layer3/checkpoint.pth.tar'

I'd like to just test on the market dataset since I had trouble getting/downloading the msmt and cuhk03-np datasets. Would you be able to provide the checkpoint.pth.tar file, or is there some way to generate it?
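A small sanity check may help here (a sketch under assumptions: the path is taken from the error message above, not from documented usage, and the checkpoint keys are hypothetical):

    import os
    import torch

    # Confirm a trained checkpoint exists at the path the evaluation run expects.
    ckpt_path = 'Exp/Market/TransMatcher/res50-ibnb-layer3/checkpoint.pth.tar'
    if os.path.isfile(ckpt_path):
        ckpt = torch.load(ckpt_path, map_location='cpu')
        print(type(ckpt), list(ckpt.keys()) if isinstance(ckpt, dict) else '')
    else:
        print('No checkpoint at', ckpt_path, '- it is normally produced by a prior training run.')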

Thanks!

optimizer can only optimize Tensors, but one of the params is str

Hello, author! I ran into some problems while running TransMatcher.
Since I cannot connect to GitHub, I found the download link for the resnet50_ibn_b model in the IBN-Net repository. After loading it with torch.load(), another problem occurred:
AttributeError: 'collections.OrderedDict' object has no attribute 'parameters'. I guessed that changing model.base.parameters() to model.base might help, but then I got another error: TypeError: optimizer can only optimize Tensors, but one of the params is str
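A minimal sketch of the usual fix (a torchvision ResNet-50 stands in for model.base here; names and hyperparameters are assumptions, not the repository's exact code): torch.load() on the downloaded file returns an OrderedDict of weights, which should be loaded into the backbone module, while the optimizer keeps receiving the module's parameters:

    import torch
    import torchvision

    # Stand-in backbone (hypothetical); in the repository this would be model.base.
    backbone = torchvision.models.resnet50()

    # The downloaded file is a state dict (OrderedDict of tensors), not a module.
    state_dict = torch.load('resnet50_ibn_b.pth', map_location='cpu')

    # Keep only entries whose names and shapes match the backbone, then load them.
    model_dict = backbone.state_dict()
    compatible = {k: v for k, v in state_dict.items()
                  if k in model_dict and v.shape == model_dict[k].shape}
    backbone.load_state_dict(compatible, strict=False)

    # The optimizer must receive parameters (tensors), never the module itself
    # or the raw state dict.
    optimizer = torch.optim.SGD(backbone.parameters(), lr=0.0005, momentum=0.9)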

Questions about input of TransMatcher and Variable Naming in implementation

Hi @ShengcaiLiao ,

Thank you so much for the impressive work!

I'm not familiar with person re-identification tasks, but I found another of your papers that says, "The detection sub-task is to determine the presence of the probe subject in the gallery, and the identification sub-task is to determine which person in the gallery has the same identity as the accepted probe." So I assume the memory here (in the TransMatcher instance initialization) should be the gallery features.

Let's look at the forward function of TransMatcher:

    def forward(self, features):
        score = self.decoder(self.memory, features)
        return score

The first input is memory, and the second is features. However, the TransformerDecoder definition goes as follows:

    def forward(self, tgt: Tensor, memory: Tensor) -> Tensor:
        r"""Pass the inputs through the decoder layer in turn.
        Args:
            tgt: the sequence to the decoder (required).
            memory: the sequence from the last layer of the encoder (required).
        Shape:
            tgt: [q, h, w, d*n], where q is the query length, d is d_model, n is num_layers, and (h, w) is feature map size
            memory: [k, h, w, d*n], where k is the memory length
        """

The tgt and memory variables here confuse me. Which should be the probe (query) features, and which should be the gallery features?
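To make the argument binding concrete, here is a toy stand-in (hypothetical class and shapes, not the repository's decoder) showing only that the registered self.memory arrives as the first parameter (tgt) and the call-time features as the second (memory):

    import torch

    # Toy stand-in mirroring the two signatures quoted above; it illustrates
    # the positional binding, not the real matching computation.
    class ToyDecoder(torch.nn.Module):
        def forward(self, tgt, memory):
            # tgt:    first positional argument, here [q, h, w, d]
            # memory: second positional argument, here [k, h, w, d]
            return torch.einsum('qhwd,khwd->qk', tgt, memory)

    decoder = ToyDecoder()
    registered = torch.randn(4, 24, 8, 64)   # plays the role of self.memory
    features = torch.randn(6, 24, 8, 64)     # plays the role of the call-time features
    score = decoder(registered, features)    # same pattern as self.decoder(self.memory, features)
    print(score.shape)                       # torch.Size([4, 6])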

Thank you for your reply in advance.

Model in eval mode during training

Hello @ShengcaiLiao,
thank you very much for your work.
I do have a question: what are the reasons for keeping your model (the feature extractor part) in eval mode during training? In the train epoch you do:

    self.model.eval()
    self.criterion.train()
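For context, here is a minimal sketch of what eval mode changes in PyTorch (this illustrates standard PyTorch behavior with hypothetical modules, not necessarily the authors' motivation): eval() freezes BatchNorm running statistics and disables Dropout, but gradients still flow and the backbone weights can still be updated:

    import torch

    backbone = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, 3, padding=1),
        torch.nn.BatchNorm2d(16),
        torch.nn.ReLU(),
    )
    head = torch.nn.Linear(16, 1)  # stands in for the trainable criterion/matcher

    backbone.eval()   # BatchNorm uses stored running stats and stops updating them
    head.train()

    x = torch.randn(2, 3, 8, 8)
    feat = backbone(x).mean(dim=(2, 3))          # [2, 16]
    loss = head(feat).sum()
    loss.backward()                              # gradients still reach backbone weights
    print(backbone[0].weight.grad is not None)   # True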

Thank you again.

about normalization

First of all, thanks for your brilliant work! I have a question: why didn't you apply L2 normalization on the channel dimension in TransMatcher before entering the decoder, while you did so in QAConv?
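For reference, channel-wise L2 normalization of a feature map is a one-liner in PyTorch (a generic sketch, not taken from either repository):

    import torch
    import torch.nn.functional as F

    # Normalize each spatial location's feature vector to unit L2 norm along
    # the channel dimension of a [batch, channels, height, width] map.
    feat = torch.randn(8, 512, 24, 8)
    feat = F.normalize(feat, p=2, dim=1)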

Out-Of-Memory regardless of setting batch_size

I ran your code in main.py of TransMatcher, but the out-of-memory issue is still there regardless of the batch_size setting. Do you know how to modify it to make it more memory-efficient? The code line where the issue occurs:

score = einsum('q t d, k s d -> q k s t', query, key) * self.score_embed.sigmoid()
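One common workaround (a sketch under assumptions: query, key, and score_embed stand for the tensors in the quoted line, and neither the chunking nor the max-pooling reduction is part of the repository) is to compute the score in chunks over the gallery/key dimension and reduce each chunk before moving on, so the full [q, k, s, t] tensor is never held in memory at once:

    import torch
    from torch import einsum

    def chunked_matching_scores(query, key, score_embed, chunk_size=64):
        # query: [q, t, d], key: [k, s, d] -> returns [q, k]
        parts = []
        for start in range(0, key.size(0), chunk_size):
            key_chunk = key[start:start + chunk_size]
            score = einsum('q t d, k s d -> q k s t', query, key_chunk)
            score = score * score_embed.sigmoid()
            # Placeholder reduction over s and t; in the real code whatever
            # follows the einsum would be applied per chunk instead.
            parts.append(score.amax(dim=(2, 3)))
        return torch.cat(parts, dim=1)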

About the inference speed of TransMatcher

Thank you for your excellent work! Regarding the inference speed of TransMatcher, have you compared it with other models? When I ran inference with TransMatcher on the MSMT17 dataset myself, it took about 10 minutes (a single NVIDIA GeForce RTX 2080 Ti GPU, an 8700K CPU, and 32 GB of RAM).
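For comparable measurements, GPU timing usually needs explicit synchronization; here is a small generic sketch (not the repository's benchmarking code, and fn is a hypothetical callable wrapping the matching step):

    import time
    import torch

    # Synchronize before reading timestamps so asynchronous kernel launches
    # do not distort the measured matching time.
    def time_gpu(fn, *args, warmup=3, iters=10):
        for _ in range(warmup):
            fn(*args)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            fn(*args)
        torch.cuda.synchronize()
        return (time.time() - start) / iters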
