
Model Adaptation: Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data

Updates

Fixed several bugs:

  1. Added the file HCL/hcl_target/model/deeplab_advent_no_p.py.
  2. In HCL/hcl_target/evaluate_cityscapes_advent.py and HCL/hcl_target/generate_plabel_cityscapes_advent.py, changed the import from model.deeplab_advent import get_deeplab_v2 to from model.deeplab_advent_no_p import get_deeplab_v2.
  3. Changed the pre-trained model path to ../pretrained_models/HCL_source_only_426/GTA5_HCL_source.pth.
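The import change in fix 2 can also be applied with a one-line substitution. This is a convenience sketch, not part of the repository; run it from the directory containing HCL/:

```shell
# Swap the old import for the patched no_p variant in both scripts.
# The guard makes this a no-op if a file is absent.
for f in HCL/hcl_target/evaluate_cityscapes_advent.py \
         HCL/hcl_target/generate_plabel_cityscapes_advent.py; do
  [ -f "$f" ] && sed -i \
    's/from model.deeplab_advent import/from model.deeplab_advent_no_p import/' \
    "$f" || true
done
```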

Paper

Model Adaptation: Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data
Jiaxing Huang, Dayan Guan, Aoran Xiao, Shijian Lu
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Thirty-fifth Conference on Neural Information Processing Systems.

If you find this code/paper useful for your research, please cite our paper:

@inproceedings{huang2021model,
  title={Model adaptation: Historical contrastive learning for unsupervised domain adaptation without source data},
  author={Huang, Jiaxing and Guan, Dayan and Xiao, Aoran and Lu, Shijian},
  booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
  year={2021}
}

Abstract

Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled target domain, but it requires access to the source data, which often raises concerns about data privacy, data portability and data transmission efficiency. We study unsupervised model adaptation (UMA), also called Unsupervised Domain Adaptation without Source Data, an alternative setting that aims to adapt source-trained models towards target distributions without accessing source data. To this end, we design an innovative historical contrastive learning (HCL) technique that exploits the historical source hypothesis to make up for the absence of source data in UMA. HCL addresses the UMA challenge from two perspectives. First, it introduces historical contrastive instance discrimination (HCID) that learns from target samples by contrasting their embeddings which are generated by the currently adapted model and the historical models. With the historical models, HCID encourages UMA to learn instance-discriminative target representations while preserving the source hypothesis. Second, it introduces historical contrastive category discrimination (HCCD) that pseudo-labels target samples to learn category-discriminative target representations. Specifically, HCCD re-weights pseudo labels according to their prediction consistency across the current and historical models. Extensive experiments show that HCL outperforms state-of-the-art methods consistently across a variety of visual tasks and setups.
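The two components can be illustrated numerically. Below is a minimal NumPy sketch under assumed forms of the objectives, not the authors' implementation: hcid_loss contrasts a sample's current-model embedding against the same sample's historical-model embedding (positive) and historical embeddings of other samples (negatives) in InfoNCE style, and hccd_weights is one plausible consistency-based re-weighting, scoring each pseudo-label by the historical model's probability for the currently predicted class.

```python
import numpy as np

def hcid_loss(q, k_pos, k_negs, tau=0.07):
    """InfoNCE-style historical contrastive instance loss (sketch).
    q:      current-model embedding of a target sample, shape (d,)
    k_pos:  historical-model embedding of the same sample, shape (d,)
    k_negs: historical-model embeddings of other samples, shape (n, d)
    """
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    k_negs = k_negs / np.linalg.norm(k_negs, axis=1, keepdims=True)
    logits = np.concatenate([[q @ k_pos], k_negs @ q]) / tau
    logits -= logits.max()  # numerical stability before the softmax
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

def hccd_weights(p_cur, p_hist):
    """Re-weight pseudo-labels by current/historical consistency (sketch).
    p_cur, p_hist: softmax outputs of the current and a historical model,
    each of shape (n, num_classes). A sample whose predicted class flips
    between the two models receives a small weight.
    """
    labels = p_cur.argmax(axis=1)
    return p_hist[np.arange(len(labels)), labels]
```

The exact re-weighting and momentum/ensembling of historical models in the paper may differ; this only shows the contrastive and consistency ideas in isolation.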

Installation

  1. Conda environment:
conda create -n hcl python=3.6
conda activate hcl
conda install -c menpo opencv
pip install torch==1.0.0 torchvision==0.2.1
  2. Clone ADVENT:
git clone https://github.com/valeoai/ADVENT.git
pip install -e ./ADVENT
  3. Clone this repository:
git clone https://github.com/jxhuang0508/HCL.git
pip install -e ./HCL
  4. Install the hcl_target environment:
conda env create -f hcl_target.yml

Prepare Dataset

  • GTA5: Please follow the instructions here to download images and semantic segmentation annotations. The GTA5 dataset directory should have this basic structure:
HCL/data/GTA5/                               % GTA dataset root
HCL/data/GTA5/images/                        % GTA images
HCL/data/GTA5/labels/                        % Semantic segmentation labels
...
  • Cityscapes: Please follow the instructions on the Cityscapes website to download the images and validation ground-truths. The Cityscapes dataset directory should have this basic structure:
HCL/data/Cityscapes/                         % Cityscapes dataset root
HCL/data/Cityscapes/leftImg8bit              % Cityscapes images
HCL/data/Cityscapes/leftImg8bit/val
HCL/data/Cityscapes/gtFine                   % Semantic segmentation labels
HCL/data/Cityscapes/gtFine/val
...
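As a quick sanity check of the layouts above (a convenience sketch, not part of the repository; check_layout is a hypothetical helper):

```shell
# Verify that the expected dataset sub-directories exist before training.
check_layout() {
  local root="$1"; shift
  for d in "$@"; do
    [ -d "$root/$d" ] || { echo "missing: $root/$d"; return 1; }
  done
  echo "layout ok: $root"
}

# In practice point these at HCL/data; demonstrated here on a scaffold:
mkdir -p demo/GTA5/images demo/GTA5/labels
check_layout demo/GTA5 images labels   # prints "layout ok: demo/GTA5"
```

For the real datasets the calls would be check_layout HCL/data/GTA5 images labels and check_layout HCL/data/Cityscapes leftImg8bit/val gtFine/val.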

Pre-trained models

Pre-trained models can be downloaded here. Put GTA5_HCL_source.pth into HCL/pretrained_models/HCL_source_only_426 and GTA5_HCL_target.pth into HCL/pretrained_models/HCL_target_482.
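Concretely, after downloading the two checkpoints into the current directory, they can be placed as follows (the download step itself is manual):

```shell
# Create the expected checkpoint directories and move the files in;
# the guards make this a no-op if a checkpoint has not been downloaded.
mkdir -p HCL/pretrained_models/HCL_source_only_426 \
         HCL/pretrained_models/HCL_target_482
[ -f GTA5_HCL_source.pth ] && \
  mv GTA5_HCL_source.pth HCL/pretrained_models/HCL_source_only_426/ || true
[ -f GTA5_HCL_target.pth ] && \
  mv GTA5_HCL_target.pth HCL/pretrained_models/HCL_target_482/ || true
```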

Training

To train GTA5_HCL_source:

conda activate hcl
cd HCL/hcl/scripts
python train.py --cfg configs/hcl_source.yml

To evaluate trained GTA5_HCL_source:

conda activate hcl
cd HCL/hcl/scripts
python test.py --cfg configs/hcl_source.yml

To train GTA5_HCL_target:

conda activate hcl_target
cd HCL/hcl_target
python generate_plabel_cityscapes_advent.py  --restore-from ../pretrained_models/HCL_source_only_426/GTA5_HCL_source.pth
python train_ft_advent_hcl.py --snapshot-dir ./snapshots/HCL_target \
--restore-from ../pretrained_models/HCL_source_only_426/GTA5_HCL_source.pth \
--drop 0.2 --warm-up 5000 --batch-size 9 --learning-rate 1e-4 --crop-size 512,256 --lambda-seg 0.5 --lambda-adv-target1 0 \
--lambda-adv-target2 0 --lambda-me-target 0 --lambda-kl-target 0 --norm-style gn --class-balance --only-hard-label 80 \
--max-value 7 --gpu-ids 0,1,2 --often-balance  --use-se  --input-size 1280,640  --train_bn  --autoaug False --save-pred-every 300

To evaluate trained GTA5_HCL_target:

conda activate hcl_target
cd HCL/hcl_target
./test.sh

Evaluation over Pretrained models

To evaluate GTA5_HCL_source.pth:

conda activate hcl
cd HCL/hcl/scripts
python test.py --cfg ./configs/hcl_source_pretrained.yml

To evaluate GTA5_HCL_target.pth:

conda activate hcl_target
cd HCL/hcl_target
python evaluate_cityscapes_advent_best.py --restore-from ../../pretrained_models/GTA5_HCL_target.pth

Related Works

We would also like to thank the following great works:

Contact

If you have any questions, please contact: [email protected]


