
corda's Introduction

CorDA

Code for our ICCV 2021 paper "Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation".

Prerequisite

Please create and activate the following conda environment. To reproduce our results, please use this exact environment.

# It may take several minutes for conda to solve the environment
conda update conda
conda env create -f environment.yml
conda activate corda 

The code was tested on a V100 GPU with 16 GB of memory.

Train a CorDA model

# Train for the SYNTHIA2Cityscapes task
bash run_synthia_stereo.sh
# Train for the GTA2Cityscapes task
bash run_gta.sh

Test a trained CorDA model

bash shells/eval_syn2city.sh 
bash shells/eval_gta2city.sh

Pre-trained models are provided (Google Drive). Please put them in ./checkpoint.

  • The provided SYNTHIA2Cityscapes model achieves 56.3 mIoU (16 classes) at the end of the training.
  • The provided GTA2Cityscapes model achieves 57.7 mIoU (19 classes) at the end of the training.

Reported results on SYNTHIA2Cityscapes (averaged over 5 runs rather than the best run):

Method   mIoU* (13 classes)   mIoU (16 classes)
CBST     48.9                 42.6
FDA      52.5                 -
DADA     49.8                 42.6
DACS     54.8                 48.3
CorDA    62.8                 55.0

Citation

Please cite our work if you find it useful.

@inproceedings{wang2021domain,
  title={Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation},
  author={Wang, Qin and Dai, Dengxin and Hoyer, Lukas and Van Gool, Luc and Fink, Olga},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}

Acknowledgement

  • DACS is used as our codebase and as our domain adaptation baseline (official repository)
  • SFSU is the source of the stereo Cityscapes depth estimation (official repository)

Data links

For questions regarding the code, please contact [email protected].

corda's People

Contributors: qinenergy

corda's Issues

How to continue training?

When I resume training with a script like

CUDA_VISIBLE_DEVICES=0 python3 -u trainUDA_gta.py --config ./configs/configUDA_gta2city.json --name UDA-gta --resume /saved/DeepLabv2-depth-gtamono-cityscapestereo/05-03_02-13-UDA-gta/checkpoint-iter95000.pth | tee ./gta-corda.log

it runs from the beginning again, although the new checkpoints are saved.
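
For reference, resuming usually requires restoring the optimizer state and the iteration counter in addition to the model weights; below is a minimal PyTorch sketch. The checkpoint keys ("model", "optimizer", "iteration") are assumptions for illustration and may differ from the keys this repo actually saves.

import torch

# Sketch only: key names are assumed, not taken from this repo.
def resume(model, optimizer, path, device="cuda"):
    checkpoint = torch.load(path, map_location=device)
    model.load_state_dict(checkpoint["model"])          # restore weights
    optimizer.load_state_dict(checkpoint["optimizer"])  # restore momentum/LR state
    return checkpoint.get("iteration", 0)               # where training should continue

# The training loop must then start at the returned iteration rather than 0;
# otherwise training runs again from the beginning while still saving checkpoints.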

About intrinsics used in GTA depth estimation

Thanks a lot for your fantastic work. When I followed the depth estimation procedure mentioned in issue #7, I went to https://playing-for-benchmarks.org. However, its camera calibration does not include the intrinsic matrix directly, which Monodepth2 needs for depth estimation. Would you kindly share the GTA intrinsics you used? Or is there a way to convert GTA's projection matrix to an intrinsic matrix?
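
For a symmetric, OpenGL-style perspective projection matrix, the pixel intrinsics can typically be recovered as in the sketch below. This is a generic conversion under a symmetric-frustum, centered-principal-point assumption, not necessarily what the authors did:

import numpy as np

# Generic sketch: pixel intrinsics from a symmetric OpenGL-style projection
# matrix P (clip coordinates in [-1, 1]); assumes the principal point sits
# at the image center. Not the authors' exact recipe.
def intrinsics_from_projection(P, width, height):
    fx = P[0, 0] * width / 2.0   # since P[0, 0] = 2 * fx / W
    fy = P[1, 1] * height / 2.0  # since P[1, 1] = 2 * fy / H
    return np.array([[fx, 0.0, width / 2.0],
                     [0.0, fy, height / 2.0],
                     [0.0, 0.0, 1.0]], dtype=np.float32)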

Question about the pretrained parameters of backbone

Thanks for sharing the code; it brings an amazing improvement to this field.

I notice that you use a backbone pretrained on MS COCO, the same as DACS. Have you tried a backbone pretrained on ImageNet? If so, could you please provide the corresponding results?

gta2city

When I reproduced your GTA2City experiment, the mIoU only reached about 54.8 after 250,000 iterations. I didn't change anything except using CUDA 10.2. Could you please provide the training log of your GTA2City run?
Thanks a lot!

Warning: optimizer contains a parameter group with duplicate parameters

I followed your code and trained a model, but the results do not match the reported numbers.

I evaluated the model you shared:

bash shells/eval_syn2city.sh

Your shared model:
syn2city, 19 classes: meanIoU 0.4771
The model I trained:
syn2city, 19 classes: meanIoU only 0.467

During training I see the warning below, so I want to know whether it may cause the drop in results.

/home/ailab/anaconda3/envs/yy_CORDA/lib/python3.7/site-packages/torch/optim/sgd.py:68: UserWarning: optimizer contains a parameter group with duplicate parameters; in future, this will cause an error; see github.com/pytorch/pytorch/issues/40967 for more information
  super(SGD, self).__init__(params, defaults)
D_init tensor(134.8489, device='cuda:0', grad_fn=<DivBackward0>) D tensor(134.5171, device='cuda:0', grad_fn=<DivBackward0>)
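
The warning itself usually means the same tensor was passed to the optimizer more than once. A generic way to deduplicate parameter groups before constructing the optimizer (a sketch, not code from this repository):

import torch

# Sketch: drop duplicate tensors across parameter groups so torch.optim.SGD
# stops warning about duplicates. Not code from this repository.
def dedup_param_groups(param_groups):
    seen = set()
    for group in param_groups:
        unique = []
        for p in group["params"]:
            if id(p) not in seen:
                seen.add(id(p))
                unique.append(p)
        group["params"] = unique
    return param_groups

# Example: optimizer = torch.optim.SGD(dedup_param_groups(groups),
#                                      lr=2.5e-4, momentum=0.9)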

Checkpoint links fail

I can't download the checkpoint files from your links. When I click through to Google Drive, the file size is shown as 2 GB, but the downloaded file is only 0 B.

How to use DACS

Your paper uses DACS as the baseline. However, DACS does not propose a SYNTHIA->Cityscapes training setup, so I want to ask how you trained DACS on it. Reading your code: should I delete all the depth-related code during training? I need DACS results on SYNTHIA->Cityscapes for my own work.

Another question: when training finishes, a predictions directory is created and some test images are saved there. Where can I find the code responsible for that?

This may be a lot to ask since this work was finished two years ago and you may have forgotten some details. I would really appreciate a reply!

Possible extra indentation in deeplabv2_synthia.py

In the forward code, return out seems to have an extra level of indentation:

   def forward(self, x):
        out = self.conv2d_list[0](x)
        for i in range(len(self.conv2d_list)-1):
            out += self.conv2d_list[i+1](x)
            return out

This is the code in your repository:

class Classifier_Module(nn.Module):

    def __init__(self, dilation_series, padding_series, num_classes):
        super(Classifier_Module, self).__init__()
        self.conv2d_list = nn.ModuleList()
        for dilation, padding in zip(dilation_series, padding_series):
            self.conv2d_list.append(nn.Conv2d(256, num_classes, kernel_size=3, stride=1, padding=padding, dilation=dilation, bias = True))

        for m in self.conv2d_list:
            m.weight.data.normal_(0, 0.01)

    def forward(self, x):
        out = self.conv2d_list[0](x)
        for i in range(len(self.conv2d_list)-1):
            out += self.conv2d_list[i+1](x)
            return out

I think this forward is buggy: conv2d_list contains four branches, but because return out is indented inside the loop, the function returns after the first iteration, so only the first two branches are summed and the remaining two are never used. Dedenting the return fixes it:

   def forward(self, x):
        out = self.conv2d_list[0](x)
        for i in range(len(self.conv2d_list)-1):
            out += self.conv2d_list[i+1](x)
        return out
For reference, the classifier is built via:

   self._make_pred_layer(Classifier_Module, [6, 12, 18, 24], [6, 12, 18, 24], NUM_OUTPUT[task])

   def _make_pred_layer(self, block, dilation_series, padding_series, num_classes):
        return block(dilation_series, padding_series, num_classes)

Why does the class train have 0 IoU? What could be happening?

I downloaded your pretrained model and ran the demo, but I find the train IoU is 0.0:

(yy_corda) ailab@ailab:/media/ailab/data/yy/corda$ bash shells/eval_gta2city.sh
./checkpoint/gta
Found 500 val images
Evaluating, found 500 batches.
100 processed
200 processed
300 processed
400 processed
500 processed
class  0 road         IU 94.81
class  1 sidewalk     IU 62.18
class  2 building     IU 88.03
class  3 wall         IU 33.09
class  4 fence        IU 43.51
class  5 pole         IU 39.93
class  6 traffic_light IU 49.46
class  7 traffic_sign IU 54.68
class  8 vegetation   IU 88.01
class  9 terrain      IU 47.67
class 10 sky          IU 89.22
class 11 person       IU 68.22
class 12 rider        IU 39.21
class 13 car          IU 90.25
class 14 truck        IU 51.43
class 15 bus          IU 58.37
class 16 train        IU 0.00
class 17 motorcycle   IU 40.38
class 18 bicycle      IU 57.42
meanIOU: 0.5767768805758403

I trained my own model and evaluated it with eval_syn2city.sh; there, 3 classes have 0.0 IoU because those classes are missing in the source domain.
But when I download the pretrained model and run eval_gta2city.sh, the class train is still missing.
So I want to know why. Is it that the train class is never predicted correctly on the Cityscapes validation set, so its IoU is 0?
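
For context, per-class IoU is TP / (TP + FP + FN). If a class is present in the ground truth but the model never predicts it correctly (TP = 0), its IoU is exactly 0, which is a common failure mode for rare classes such as train; a tiny illustration:

def iou(tp, fp, fn):
    # Zero true positives on a class that exists in the ground truth
    # gives an IoU of exactly 0 and drags down the 19-class mean.
    denom = tp + fp + fn
    return tp / float(denom) if denom > 0 else float("nan")

print(iou(tp=0, fp=12, fn=340))  # 0.0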

Training on a custom dataset without ground-truth labels

From what I understand after reading your paper, you do not need ground-truth labels on the target domain to train with pseudo-labels. However, when I look at cityscapes_loader, it seems I need to supply the ground-truth segmentation maps as well.

I am trying to train the network on a custom dataset (which has only depth maps, with ground-truth segmentation maps only on the source domain), but it looks like I cannot get away without providing them. Do you have any thoughts on this?
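
A generic workaround, assuming labels are only needed to satisfy the loader interface at training time (an assumption about this codebase, not confirmed guidance), is to return a dummy map filled with the ignore index for unlabeled target images:

import numpy as np
import cv2

IGNORE_INDEX = 255  # assumption: check the ignore index this repo's loss uses

# Sketch: fall back to an all-ignore dummy label when a target-domain image
# has no annotation, so the loader runs without real target labels.
def load_label(label_path, height, width):
    if label_path is None:
        return np.full((height, width), IGNORE_INDEX, dtype=np.uint8)
    return cv2.imread(label_path, cv2.IMREAD_GRAYSCALE)

Note that evaluation still requires real labels; this only unblocks training-time loading.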

How to obtain your depth datasets?

Hi, thanks for your great work!

It would be great if you could elaborate more on how you obtained the monocular depth estimation.

I understand that you've uploaded the dataset, but it would be really helpful to know exactly how you produced it.

From your paper, in the ablation study part: "We would like to highlight that for both stereo and monocular depth estimations, only stereo pairs or image sequences from the same dataset are used to train and generate the pseudo depth estimation model. As no data from external datasets is used, and stereo pairs and image sequences are relatively easy to obtain, our proposal of using self-supervised depth have the potential to be effectively realized in real-world applications."

So I imagine you obtain your monocular depth pseudo ground truth by:

  1. Downloading target-domain videos (here Cityscapes; by the way, where do you get the Cityscapes videos?)
  2. Training a Monodepth2 model on those videos (for how long?)
  3. Using the model to generate the pseudo ground truth
  4. Repeating the process for the source domain (GTA 5 or SYNTHIA)

Am I getting this right? And are there any other important points you want to highlight about producing such depth labels?

Regards,
Tu

Confusion about the 'depth' of Cityscapes

Hello, nice work, but I have some questions.

In data/cityscapes_loader.py, lines 181-183:

depth = cv2.imread(depth_path, flags=cv2.IMREAD_ANYDEPTH).astype(np.float32) / 256. + 1.
if depth.shape != lbl.shape:
    depth = cv2.resize(depth, lbl.shape[::-1], interpolation=cv2.INTER_NEAREST)

Monocular depth: in disparity form, 0-65535

(1) Why is the depth computed as x / 256 + 1?
(2) Is this the depth or the disparity? The official Cityscapes documentation says disparity = (x - 1) / 256.

Thank you!
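
For reference, the official Cityscapes stereo disparity PNGs store d_png = 256 * disparity + 1 for valid pixels and 0 for invalid ones, which matches the (x - 1) / 256 decoding in the documentation; the loader's x / 256 + 1 is a different mapping, presumably because it reads this repo's own pseudo-depth PNGs rather than the official maps. A minimal sketch of the official decoding and the conversion to metric depth (the baseline and focal length below are approximate values for the standard Cityscapes rig; in practice read them from the per-city camera JSON):

import numpy as np

# Official Cityscapes encoding: valid pixels store d_png = 256 * disparity + 1,
# and d_png == 0 marks invalid pixels.
def decode_disparity(d_png):
    disparity = (d_png.astype(np.float32) - 1.0) / 256.0
    disparity[d_png == 0] = 0.0
    return disparity

def disparity_to_depth(disparity, baseline=0.209313, focal=2262.52):
    # depth = baseline * focal / disparity; the defaults are approximate
    # values for the standard Cityscapes camera (assumption: verify them
    # against the camera calibration files).
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = baseline * focal / disparity[valid]
    return depth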
