The official project website of "Augmentation-free Dense Contrastive Distillation for Efficient Semantic Segmentation" (Af-DCD for short, accepted to NeurIPS 2023).
I'd like to debug your OmniContrastiveFeatureLoss and understand its details, but I can't find the teacher model weights, such as deeplabv3_resnet101_camvid_best_model.pth (or perhaps I simply missed them). I would be extremely grateful if you could provide them.
Thanks to the authors for their contribution! However, during reproduction I noticed that some test data splits were not provided, e.g. for the PASCAL VOC dataset. Could the authors please provide the complete test splits and the corresponding preparation procedure?
I would like to request the model weights and training script for the PSPNet-Res101 teacher used in the Table 1(b) experiment (PSPNet-Res101 -> DeepLabV3-Res18).
Thanks to the authors for their contribution. I am having difficulty reproducing the Transformer-based distillation experiments at the reported performance. Could you provide more training details for reference? It would be ideal if the corresponding .sh files were also provided.