
lmpt's People

Contributors

richard-peng-xia


lmpt's Issues

Why do I need to pass class weights as class counts into the hinge loss?

class_weights = sfm(torch.from_numpy(np.asarray(mmcv.load(freq_file)['neg_class_freq'])).to(torch.float32).cuda())
hinge_loss = SoftMarginHingeEmbeddingLoss(margin=0.2, class_counts=class_weights)

Why do I need to pass class weights as class counts into the hinge loss? Also, the class weights are two-dimensional, which makes them unsuitable for the hinge_loss.
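For reference, the two-dimensional shape can be reproduced in isolation. This is a sketch with made-up counts; the (1, n_cls) storage layout is an assumption based on the `squeeze(dim=0)` later in csel.py:

```python
import numpy as np
import torch

# Assumption: the frequency file stores counts as a (1, n_cls) array, which
# the later squeeze(dim=0) in csel.py suggests. Made-up values for 3 classes.
neg_class_freq = np.asarray([[100.0, 10.0, 1.0]])

sfm = torch.nn.Softmax(dim=-1)
class_weights = sfm(torch.from_numpy(neg_class_freq).to(torch.float32))

# Softmax preserves the shape, so class_weights is still two-dimensional.
# Note also that the large spread in counts drives most entries to
# (numerically) zero after softmax.
print(class_weights.shape)  # torch.Size([1, 3])
```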

About data preparation

Thank you for your excellent work. Could you please provide the class_freq.pkl file for VOC and COCO? Looking forward to your reply!

Loss bug in lmpt training

Hi everyone, could you please help figure out this error? Thanks!

Traceback (most recent call last):
File "train.py", line 281, in
main(args)
File "train.py", line 223, in main
loss_2 = hinge_loss(x, y)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/host/home/yong/Documents/LMPT/lmpt/csel.py", line 25, in forward
class_weights = (1 / self.class_counts) ** self.gamma / torch.sum((1 / self.class_counts) ** self.gamma, dim=2)
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2)
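The traceback can be reproduced standalone with a tensor of the [batch_size, n_cls] shape that csel.py produces just before its line 25 (counts below are made up):

```python
import torch

# Made-up class counts with the [batch_size, n_cls] shape that csel.py
# produces just before its line 25.
class_counts = torch.tensor([[10.0, 5.0, 1.0],
                             [10.0, 5.0, 1.0]])
gamma = 0.5

# Summing over dim=2 is out of range for a 2-D tensor:
try:
    torch.sum((1 / class_counts) ** gamma, dim=2)
    raised = False
except IndexError as err:
    raised = True
    print(err)  # Dimension out of range (expected to be in range of [-2, 1], but got 2)

# Summing over the class dimension (dim=1) works:
denom = torch.sum((1 / class_counts) ** gamma, dim=1, keepdim=True)
print(denom.shape)  # torch.Size([2, 1])
```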

Problem with the implementation of lmpt/csel.py.

Some of the code is different from the paper.

  1. Computing $n_i$
    In the paper, $n_i$ denotes the number of training examples that contain class i. In the code, however, neg_class_freq (the number of negative samples) is loaded.

  2. Computing μ
    In the paper: (equation shown as an image)
    In the code: (equation shown as an image)
    The code applies an additional softmax operation.

Cannot execute.
image
because self.class_counts contains zeros.

I wonder whether there are any issues with the implementation of the CSE loss function. Thanks!
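A minimal illustration of why a zero count breaks the computation, plus one possible workaround (clamping is my assumption, not the repository's fix):

```python
import torch

# A zero in class_counts makes 1 / class_counts infinite, and the infinity
# then propagates through the weight and margin computation. Made-up counts.
class_counts = torch.tensor([4.0, 0.0, 1.0])
gamma = 0.5

weights = (1 / class_counts) ** gamma
print(weights)  # tensor([0.5000, inf, 1.0000])

# Possible workaround (an assumption, not the repository's fix): clamp the
# counts to a small positive floor before taking the reciprocal. The
# zero-count class then gets a very large but finite weight.
safe = (1 / class_counts.clamp(min=1e-8)) ** gamma
print(torch.isfinite(safe).all())  # tensor(True)
```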

What is this label?

No such file or directory: '../data/voc/voc_labels.txt'.

Question about the calculation of prediction probability

Hi,

Thanks for your excellent work on long-tailed multi-label visual recognition.

I am a little confused about the calculation of the prediction probability shown below (formula 2). For example, suppose an image contains three labels (i = a, b, or c). We want the prediction probabilities of all three labels to be high. However, the prediction probability is normalized by the sum over all label classes, like a softmax, which can significantly lower each label's probability when several labels are present. This seems strange. Could you please give me some guidance?

image

Best regards
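The concern can be illustrated numerically (made-up logits, not the paper's actual model outputs): with softmax-style normalization, several confident positive labels compete for a total probability mass of 1, whereas per-class sigmoids do not.

```python
import torch

# Made-up logits for 5 classes; the first three are all positive labels.
logits = torch.tensor([2.0, 2.0, 2.0, -1.0, -1.0])

# Softmax-style normalization: the three positives share the probability
# mass, so each ends up well below 0.5.
softmax_probs = torch.softmax(logits, dim=0)
print(softmax_probs[:3])  # each ~0.32

# Per-class sigmoid, the usual multi-label choice: labels do not compete.
sigmoid_probs = torch.sigmoid(logits)
print(sigmoid_probs[:3])  # each ~0.88
```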

Questions about CSE Loss

Hello, I'm interested in your excellent work, and I have two questions from reproducing it.

  1. In line 210 of lmpt/train.py:
    class_weights = sfm(torch.from_numpy(np.asarray(mmcv.load(freq_file)['class_freq'])).to(torch.float32).cuda()),
    Why apply softmax to class_counts? After applying it, the result contains zeros, which affects the calculation of the margin and class_weights in csel.py (class_counts appears in the denominator). For example, for voc-lt, class_counts after softmax looks like this:
    image

  2. In line 20 of lmpt/csel.py:
    self.class_counts = self.class_counts.squeeze(dim=0).expand(labels.shape[0], self.class_counts.shape[-1]),
    after this line, self.class_counts has two dimensions: [batch_size, n_cls]. But line 25 requires self.class_counts to have a third dimension for torch.sum(dim=2):
    class_weights = (1 / self.class_counts) ** self.gamma / torch.sum((1 / self.class_counts) ** self.gamma, dim=2)
    I think line 25 should instead be:
    class_weights = (1 / self.class_counts) ** self.gamma / torch.sum((1 / self.class_counts) ** self.gamma, dim=1, keepdim=True)
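A quick standalone check (with made-up counts) that the dim=1, keepdim=True version runs and yields weights that normalize over classes for each sample:

```python
import torch

# Made-up per-class counts expanded to [batch_size, n_cls], mirroring the
# squeeze/expand in line 20 of csel.py.
class_counts = torch.tensor([[8.0, 2.0, 1.0]]).expand(4, 3)
gamma = 0.5

inv = (1 / class_counts) ** gamma
class_weights = inv / torch.sum(inv, dim=1, keepdim=True)

print(class_weights.shape)       # torch.Size([4, 3])
print(class_weights.sum(dim=1))  # each row sums to 1
```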

I would appreciate it if you could answer my questions.
