
asymmetricgan's Introduction

  • 👯 We are looking for self-motivated researchers to join or visit our group.


Hao Tang

[Homepage] [Google Scholar] [Twitter]

I am currently a postdoctoral researcher at the Computer Vision Lab, ETH Zurich, Switzerland.

⚡ News

We released the code of:

  • XingVTON and CIT for virtual try-on
  • TransDA for source-free domain adaptation using a Transformer
  • IEPGAN for 3D pose transfer
  • TransDepth for monocular depth prediction using a Transformer
  • GLANet for unpaired image-to-image translation
  • MHFormer for 3D human pose estimation

🌱 My Repositories

3D-Aware Image/Video Generation

3D Human Pose Estimation

Text-to-Image Synthesis

3D Object Generation

Monocular Depth Prediction

Face Anonymisation

Person Image Generation

Scene Image Generation

Unsupervised Image Translation

Deep Dictionary Learning

Virtual Try-On

Hand Gesture Recognition

Source-Free Domain Adaptation

asymmetricgan's People

Contributors

ha0tang


asymmetricgan's Issues

SSIM and lambdas

Hi,

I'm trying to reproduce your style-transfer results on the photo2monet, photo2vangogh, photo2ukiyoe, and photo2cezanne datasets (collection style transfer).
I wanted to know whether you used specific lambda values for this particular use case, as the showcased results are really good.
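
By lambda values I mean the per-term weights in the joint generator objective. A minimal sketch of what I have in mind, with illustrative names and common CycleGAN-style defaults rather than your actual settings:

```python
import torch

def total_generator_loss(loss_gan: torch.Tensor,
                         loss_cycle: torch.Tensor,
                         loss_identity: torch.Tensor,
                         lambda_cycle: float = 10.0,
                         lambda_identity: float = 5.0) -> torch.Tensor:
    # Weighted sum of the generator terms. The defaults are the common
    # CycleGAN choices (identity weighted by 0.5 * lambda_cycle); they are
    # shown only as a reference point, not as the values used in this repo.
    return loss_gan + lambda_cycle * loss_cycle + lambda_identity * loss_identity
```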

I have another question regarding the conditional identity preserving loss mentioned in the paper:

removing the conditional identity preserving loss, multi-scale SSIM loss and color cycle-consistency loss substantially degrades the performance, meaning that the proposed joint optimization objectives are particularly important to stabilize the training process and thus produce much better generation performance

Maybe I missed it, but I haven't found this loss anywhere in the code. Is there a practical reason for its absence?
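
For reference, my understanding of the conditional identity preserving loss from the paper is an L1 term asking the generator to reproduce a real target-domain image when conditioned on that image's own label. Roughly (my own sketch, assuming a label-conditioned generator G(x, c)):

```python
import torch.nn.functional as F

def conditional_identity_loss(G, real_y, label_y):
    # The generator should act as an identity map when given a real
    # target-domain image together with that image's own domain label:
    # G(real_y, label_y) should reconstruct real_y (L1 distance).
    return F.l1_loss(G(real_y, label_y), real_y)
```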

Finally, I noticed that you did not use the LSGAN loss mentioned in the paper, but rather the WGAN-GP (Wasserstein GAN with gradient penalty) loss. What was the reason behind this choice?
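
Concretely, the two discriminator objectives I am contrasting look like this; a sketch of the standard formulations, not your exact code (D is any discriminator, images assumed NCHW):

```python
import torch

def lsgan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # Least-squares GAN: push real scores toward 1 and fake scores toward 0.
    return ((d_real - 1) ** 2).mean() + (d_fake ** 2).mean()

def wgan_gp_d_loss(D, real: torch.Tensor, fake: torch.Tensor,
                   lambda_gp: float = 10.0) -> torch.Tensor:
    # Wasserstein GAN with gradient penalty: maximise the score gap between
    # real and fake, and penalise the gradient norm on interpolated samples
    # to enforce an approximately 1-Lipschitz discriminator.
    d_loss = D(fake).mean() - D(real).mean()
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    gp = ((grad.view(grad.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()
    return d_loss + lambda_gp * gp
```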

Thanks
