
Comments (8)

yuval-alaluf commented on June 15, 2024

Hi @TomatoBoy90 ,
We used the default procedure used in InterFaceGAN. If I remember correctly, this means we used 500,000 randomly sampled w vectors. We then took the 10,000 samples that got the highest attribute score to be the positive samples and 10,000 samples with the lowest scores to be our negative samples.

from encoder4editing.
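The selection step described above can be sketched as follows. `scores` is a placeholder for the attribute classifier's outputs, and the sample counts are scaled down from the 500,000 / 10,000 used in practice so the sketch runs quickly:

```python
import numpy as np

# Minimal sketch of the positive/negative selection procedure described above.
# Random placeholder data stands in for real w vectors and attribute scores;
# in practice num_samples = 500_000 and k = 10_000.
rng = np.random.default_rng(0)
num_samples, latent_dim, k = 5_000, 512, 100

w = rng.standard_normal((num_samples, latent_dim)).astype(np.float32)
scores = rng.standard_normal(num_samples)  # stand-in for classifier scores

order = np.argsort(scores)
negative_w = w[order[:k]]    # k lowest-scoring samples -> negative set
positive_w = w[order[-k:]]   # k highest-scoring samples -> positive set
```

The two sets then serve as the labeled training data for the SVM that defines the attribute boundary.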

omertov commented on June 15, 2024

Hi @TomatoBoy90!
To obtain new editing directions, you should follow the official InterFaceGAN repository's guidelines. In this issue we described how we obtained our 3 editing directions, so feel free to check it out (note that you will need an attribute classification network to label each style vector).

The boundary files from the InterFaceGAN repository were obtained for the pretrained StyleGAN1's latent space, while we trained the boundaries for the pretrained StyleGAN2 (hence the difference you are experiencing).


TomatoBoy90 commented on June 15, 2024

Thank you for your quick reply. I read the InterFaceGAN repository, but it seems I need to train an SVM model. Since that is supervised learning, it needs labels. How do I get the labels?


TomatoBoy90 commented on June 15, 2024

To train the SVM model, how does the machine know which face attribute we want it to learn?


omertov commented on June 15, 2024

We used a pretrained network to label each StyleGAN image.
For example, after sampling a style vector w, we generated the corresponding image I=Generator(w) and used a pretrained age classification network to obtain the label age_I = AGE_NET(I) to train the SVM for the age direction.

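The labeling and SVM step described above can be sketched end to end. The pretrained generator and age network are hypothetical stand-ins here, replaced by a toy scoring function so the example is self-contained; the SVM is a tiny linear one trained by subgradient descent on the hinge loss rather than the scikit-learn classifier InterFaceGAN uses in practice:

```python
import numpy as np

# Illustrative sketch of: label w vectors with a classifier, train a linear
# SVM, and take the hyperplane normal as the editing direction.
rng = np.random.default_rng(0)
n, latent_dim = 2_000, 512

# Hidden ground-truth direction, only used to build the toy labels below.
true_direction = rng.standard_normal(latent_dim)
true_direction /= np.linalg.norm(true_direction)

w = rng.standard_normal((n, latent_dim))

# Stand-in for age_I = AGE_NET(Generator(w)): a noisy linear attribute score.
ages = w @ true_direction + 0.1 * rng.standard_normal(n)
y = np.where(ages > np.median(ages), 1.0, -1.0)  # binary labels in {-1, +1}

# Tiny linear SVM: subgradient descent on the L2-regularized hinge loss.
wvec = np.zeros(latent_dim)
lr, lam = 0.05, 1e-4
for _ in range(200):
    margins = y * (w @ wvec)
    viol = margins < 1                                  # margin violators
    grad = lam * wvec - (y[viol][:, None] * w[viol]).sum(axis=0) / n
    wvec -= lr * grad

# The unit normal of the separating hyperplane is the editing direction.
boundary = wvec / np.linalg.norm(wvec)
```

On this toy data the recovered `boundary` aligns strongly with `true_direction`; with a real generator and attribute classifier, the same recipe yields an attribute boundary in the W latent space.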

TomatoBoy90 commented on June 15, 2024

> We used a pretrained network to label each StyleGAN image.
> For example, after sampling a style vector w, we generated the corresponding image I=Generator(w) and used a pretrained age classification network to obtain the label age_I = AGE_NET(I) to train the SVM for the age direction.

Thanks! How many style vectors w should I sample? How many samples did you use?


TomatoBoy90 commented on June 15, 2024

Thank you for such a detailed and generous reply. Wish you a happy life!


omertov commented on June 15, 2024

Good luck with your experiments!
I am closing the issue :)
