

WMMD-Caffe

This is an implementation of the CVPR 2017 paper "Mind the Class Weight Bias: Weighted Maximum Mean Discrepancy for Unsupervised Domain Adaptation". The repository is forked from mmd-caffe, with the following modifications:

  • mmd layer: the backward function in mmd_layer.cu is adjusted to replace the conventional MMD with the weighted MMD specified in the paper.
  • softmax loss layer: instead of ignoring the empirical loss on the target domain, the function is modified so that a logistic loss based on pseudo-labels is added.
  • data layer: a parameter is added to the data label so that the number of classes is conveyed to the mmd layer.
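To make the mmd-layer modification concrete, here is a minimal NumPy sketch of a weighted MMD with an RBF kernel, following the paper's idea of re-weighting source samples by per-class weights (the estimated target/source class-probability ratio). The function names and the single-kernel setup are our illustration, not the repo's actual CUDA implementation:

```python
# Hedged sketch of squared weighted MMD with a single RBF kernel.
# alpha[c] is an assumed per-class weight for source class c; the real
# mmd_layer.cu implementation differs in kernel choice and gradients.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), computed pairwise
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def weighted_mmd2(xs, ys, xt, alpha, gamma=1.0):
    """Squared weighted MMD between source samples xs (labels ys) and
    target samples xt; source samples of class c are weighted by alpha[c]."""
    w = alpha[ys].astype(float)
    w = w / w.sum()                      # normalized source weights
    v = np.full(len(xt), 1.0 / len(xt))  # uniform target weights
    k_ss = rbf_kernel(xs, xs, gamma)
    k_tt = rbf_kernel(xt, xt, gamma)
    k_st = rbf_kernel(xs, xt, gamma)
    return w @ k_ss @ w + v @ k_tt @ v - 2.0 * (w @ k_st @ v)
```

With identical source and target samples and uniform class weights the value is zero; it grows as the two distributions move apart.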

The configuration of the machine used to run the experiments is specified below:

  • OS: UNIX
  • GPU: Titan X
  • CUDA: version 7.5
  • CUDNN: version 3.0

Slightly different configurations should not cause instability.

Usage

  • prepare model: The bvlc_reference_caffenet and bvlc_googlenet models are used as initialization for AlexNet and GoogleNet, respectively. They can be downloaded from here. The model structure is specified in the relative path model/task_name/train_val.prototxt; e.g., when transferring from amazon to caltech in the office-10+caltech-10 dataset, replace task_name with amazon_to_caltech.
  • prepare data: Since the raw images are read from disk during training, all image file paths need to be written into .txt files kept in the data/task_name/ directory, e.g., data/amazon_to_caltech/train.txt and val.txt. To construct such files, a Python script is provided in image/code/data_constructor.py. The main steps are:
    • change into the directory image/code/: cd image/code
    • run python data_constructor.py; you will be asked to input the source and target domain names, e.g., amazon and caltech, respectively.
    • the generated files traina2c.txt and testa2c.txt can be found in the parent directory; copy them into ../data/amazon2caltech/: cp *a2c.txt ../amazon2caltech.
  • fine-tune a model: To fine-tune the model parameters, run the shell script ./model/amazon2caltech/train.sh. The accuracy results are printed once the script finishes; detailed results are stored in files under ./log, and the tuned model is stored in model/amazon2caltech/.
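The image-list files in the data-preparation step presumably follow Caffe's usual ImageData format, one "path label" line per image. A minimal sketch of writing such a list (the file names and class indices here are hypothetical; the real script is image/code/data_constructor.py):

```python
# Hedged sketch of producing a Caffe-style image list: one "path label"
# line per image. Paths and labels below are illustrative only.
def write_image_list(samples, out_path):
    """samples: iterable of (image_path, class_index) pairs."""
    with open(out_path, "w") as f:
        for path, label in samples:
            f.write(f"{path} {label}\n")

samples = [
    ("amazon/back_pack/img_0001.jpg", 0),
    ("amazon/bike/img_0002.jpg", 1),
]
write_image_list(samples, "traina2c.txt")
```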

Fine tune the model parameters

Three model parameters are tuned in our experiments, i.e., $\lambda$, $\beta$, and $lr$; please refer to the paper for their meaning. Manually tuning them in the net definition file, i.e., train_val.prototxt, can be rather tedious. We provide Python scripts to help tune these parameters, which can be found in the directory script_fine_tune/. To run these scripts correctly, please follow the instructions to install the Python runtime library of Protobuf. In addition, Caffe's Python interface should be compiled: make pycaffe.
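As a rough illustration of what such a tuning script does, the sketch below rewrites a hyper-parameter value inside prototxt text with a plain substitution. The field name loss_weight is illustrative; the repo's scripts in script_fine_tune/ instead work through the Protobuf Python runtime:

```python
# Hedged sketch: rewrite `field: <number>` occurrences in prototxt text.
# Field names and the snippet below are illustrative, not the repo's files.
import re

def set_param(prototxt_text, field, value):
    """Replace every `field: <number>` occurrence with the new value."""
    pattern = rf"({field}\s*:\s*)[-+0-9.eE]+"
    return re.sub(pattern, rf"\g<1>{value}", prototxt_text)

proto = "layer {\n  mmd_param {\n    loss_weight: 1.0\n  }\n}\n"
tuned = set_param(proto, "loss_weight", 0.3)
```

Parsing with the Protobuf runtime, as the provided scripts do, is more robust than text substitution, since it validates the net definition while editing it.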

Citation

Contributors

  • yhldhit
