
zs3's Issues

The use of unseen labels during training!

Hi there,
I have read your paper and the code; brilliant idea. I have a few questions about the implementation:

  1. Are unseen target labels being used to train the classifier (line 264 of train_context_GMMN.py)? If so, how is this method different from simply training the DeepLabv3+ backbone on both seen and unseen target labels?
  2. When fine-tuning the classifier to predict unseen labels, how do you place and arrange the generated unseen visual features so that the classifier can exploit the spatial correlation between visual features? Could you please elaborate on the fine-tuning procedure for the classifier?
  3. The GCN takes in a graph built from both the seen and unseen labels. Aren't we supposed to avoid using any information about the unseen labels during training in ZSL?
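For context, the generator in question is a GMMN-style network that maps a class word embedding plus random noise to a synthetic visual feature. A minimal sketch in PyTorch, with illustrative dimensions rather than the paper's exact values:

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """GMMN-style generator: word embedding + noise -> synthetic visual
    feature for an (unseen) class. Dimensions are illustrative."""

    def __init__(self, embed_dim=300, noise_dim=300, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + noise_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, feat_dim),
        )

    def forward(self, class_embedding, noise):
        # class_embedding: (N, embed_dim), noise: (N, noise_dim)
        return self.net(torch.cat([class_embedding, noise], dim=-1))
```

Question 2 above then amounts to asking where such per-class synthetic features are placed within the spatial feature map before the classifier is fine-tuned.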

Pascal Context splits

Hi,

I found that the training and validation splits for Pascal Context are not provided on its website. Could someone please tell me where I can get them?

The usage of unseen-class segmentation annotations

Dear Dr. maximebucher:

We would like to thank you for your enlightening paper, "Zero-Shot Semantic Segmentation". Having read your released code carefully, we have some confusion about the step-2 training process (train_pascal_GMMN.py, line 265). The line "loss = self.criterion(output, target)" means that the unseen-class segmentation annotations are inevitably involved in training the classifier. However, the annotations of the unseen classes should not be used as supervision, since zero-shot learning requires that the unseen classes' annotations remain unavailable. We would therefore like to know how the classifier's training process can achieve recognition of both seen and unseen classes.

Thanks again; we look forward to your response.

Using unseen-class GT labels makes no sense in the zero-shot setting

As discussed in other issues, some unseen-class GT labels are used in this code. However, this is unsuitable in the zero-shot setting, since we are not supposed to know anything about the unseen classes except their names. Using unseen-class GT labels makes this more of a semi-supervised task than a zero-shot one.

Reproduction issue

Hi there,

I tried training the model from scratch using train_context and got fair results (31%), but the train_context_GMMN model does not go above 17% pixel accuracy for 2 unseen categories. I also downloaded the pre-trained weights you provided for 2 and 4 unseen classes on Pascal Context and ran eval_context, but the results are inexplicable:
Seen: pixel accuracy 4.9%, mIoU 0.5%
Unseen: pixel accuracy 1%, mIoU 0.6%

Could you please provide the correct pre-trained weights or shed some light on how to train and evaluate the model?

Dimensions of output tensors in eval_pascal.py do not match

Hi,
When I was running eval_pascal.py, an error occurred:

Traceback (most recent call last):
  File "eval_pascal.py", line 166, in validation
    all_target = np.concatenate(all_target)
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 2, the array at index 0 has size 700 and the array at index 1 has size 765

I printed target.shape for every element of all_target, and the dimensions vary a lot.
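Pascal images come in different sizes, so the raw target maps cannot be concatenated into a single array. One possible workaround, assuming the goal is only to compute pixel accuracy and mIoU, is to accumulate a running confusion matrix image by image instead (a sketch, not the repo's actual evaluation code):

```python
import numpy as np

NUM_CLASSES = 21      # Pascal VOC; adjust for Pascal Context
IGNORE_INDEX = 255

def update_confusion(conf, pred, target):
    # Accumulate per-pixel counts so that images of different sizes
    # never need to be concatenated into one array.
    mask = target != IGNORE_INDEX
    idx = NUM_CLASSES * target[mask].astype(np.int64) + pred[mask].astype(np.int64)
    conf += np.bincount(idx, minlength=NUM_CLASSES ** 2).reshape(NUM_CLASSES, NUM_CLASSES)
    return conf

conf = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
# for pred, target in per_image_predictions:
#     conf = update_confusion(conf, pred, target)
# mIoU = np.nanmean(np.diag(conf) / (conf.sum(0) + conf.sum(1) - np.diag(conf)))
```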

You cannot use unseen label info in the training process

Thanks for sharing this repo.
The idea is creative, but there are some logic errors in the implementation. In short, you have used unseen-label information during training.

  1. has_unseen_class: In the real world you don't know whether an image contains an unseen class or not. You only know the whole set of classes, the set of labeled classes, and the set of unlabeled classes.
    You should use has_unseen_class = len(unique_class) < 21 for VOC.
  2. unique_class: In the real world you don't know the unique_class of an image. You only know the unique labeled classes in that image, the whole set of unseen classes, and the whole set of classes.
    You should generate features for all classes, seen and unseen, here, and use the unique seen classes in an image to train your generator.
  3. loss = self.criterion(output, target): You cannot use the unseen labels present in the target to train your classifier. It is not fair. You should add a few lines of code (see the sketch below this list):
    for unseen_label in whole_unseen_label: target[target == unseen_label] = 255
    Then set the ignore_index parameter of the cross-entropy loss to 255, so the unseen labels are ignored and the real-world application is simulated faithfully.
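A minimal PyTorch sketch of that masking (the whole_unseen_label values here are illustrative; the real indices depend on the chosen unseen split):

```python
import torch
import torch.nn as nn

IGNORE_INDEX = 255
whole_unseen_label = [16, 17, 18, 19, 20]  # illustrative unseen-class indices

criterion = nn.CrossEntropyLoss(ignore_index=IGNORE_INDEX)

def masked_loss(output, target):
    # Map every unseen-class pixel to the ignore index so the classifier
    # never receives supervision from unseen-class annotations.
    target = target.clone()
    for unseen_label in whole_unseen_label:
        target[target == unseen_label] = IGNORE_INDEX
    return criterion(output, target)
```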

Despite all of the above, I still believe your idea works. I hope to see your updated results.

How can I reproduce the results for the baseline model described in the experiments section?

Hello,

I read your paper and checked the repository, but I couldn't find the code responsible for training and evaluating the baseline model discussed in the experiments section. That baseline is said to replace the classification layer of DeepLabv3+ with a projection layer that maps extracted visual features onto the semantic embedding space, where classification is performed by cosine similarity. The paper states that this baseline is based on DeViSE for zero-shot image classification. Could you please point me to the location in the repository where this experiment is conducted? If it is not available in this repository, could you please add the code needed to replicate it?
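For clarity, here is a minimal sketch of what I understand that baseline head to be: a 1x1 projection into the word-embedding space followed by per-pixel cosine similarity against fixed class embeddings (PyTorch, illustrative dimensions; not code from this repository):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineProjectionHead(nn.Module):
    """DeViSE-style replacement for the classification layer: project
    visual features into the semantic embedding space and score each
    pixel by cosine similarity to the class embeddings."""

    def __init__(self, class_embeddings, feat_dim=256):
        super().__init__()
        embed_dim = class_embeddings.shape[1]
        self.proj = nn.Conv2d(feat_dim, embed_dim, kernel_size=1)
        # Fixed semantic embeddings, e.g. word2vec vectors, shape (C, embed_dim)
        self.register_buffer("class_embeddings", class_embeddings)

    def forward(self, feats):
        # feats: (B, feat_dim, H, W) -> normalized (B, embed_dim, H, W)
        z = F.normalize(self.proj(feats), dim=1)
        w = F.normalize(self.class_embeddings, dim=1)  # (C, embed_dim)
        return torch.einsum("bdhw,cd->bchw", z, w)     # cosine scores (B, C, H, W)

# e.g. head = CosineProjectionHead(torch.randn(21, 300))
```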

Thank you in advance.

Try out on own example?

Hi, I am currently trying to get this to run. Is there an easy way to just try out your pretrained model on my own set of images? Appreciate any help!
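A generic inference sketch for trying a segmentation model on your own images; note that it uses torchvision's off-the-shelf DeepLabv3 as a stand-in, not this repo's DeepLabv3+, so the released zs3 checkpoints would need the repo's own model definition to load (the image path is illustrative):

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Stand-in model: replace with the repo's DeepLabv3+ construction and
# load_state_dict on the downloaded checkpoint to use the zs3 weights.
model = deeplabv3_resnet50(num_classes=21)
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("my_image.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    logits = model(img)["out"]     # (1, num_classes, H, W)
pred = logits.argmax(dim=1)[0]     # per-pixel class indices
```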
