
Comments (10)

k-l-lambda avatar k-l-lambda commented on June 15, 2024 1

Ha, I know a bit about this guy. He has a lot of compute resources, but the research isn't first-class yet. Yes, I envy him.

from stylegan-web.

k-l-lambda avatar k-l-lambda commented on June 15, 2024

Nice work.

@johndpope, I guess you have tried GenEdi - how accurate do you think these feature vectors are?


johndpope avatar johndpope commented on June 15, 2024

I think these are just out of the box.
https://github.com/GreenLimeSia/GenEdi/tree/master/latent_directions

https://github.com/anvoynov/GANLatentDiscovery

https://github.com/GreenLimeSia/GenEdi/blob/master/latent_directions/eyes_open.npy

https://github.com/search?q=latent_directions&type=code

@Gvanderl - maybe there's interest in collaborating, using this web layout with your FaceMaker code?
https://github.com/Gvanderl/FaceMaker

UPDATE
Also noteworthy: @blubs
https://github.com/blubs/stylegan2_playground


k-l-lambda avatar k-l-lambda commented on June 15, 2024

Very nice. I'm curious how these latent directions were obtained. StyleGAN is not trained with feature labels, so the latent space is very likely highly entangled. Did these latent directions come from manual labeling or from unsupervised learning?
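One common answer is supervised: sample many latents, label the corresponding images with an attribute classifier or by hand, and derive a linear direction from the labels (Puzer's stylegan-encoder notebook fits a logistic regression; the even simpler variant below takes the difference of class means). A minimal sketch with synthetic stand-in data - the labels here are a toy assumption, not real attribute annotations:

```python
import numpy as np

# Synthetic stand-in for labeled StyleGAN latents: 1000 codes of
# dimension 512, with a binary attribute that (for this toy example)
# depends mostly on coordinate 0.
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 512))
labels = latents[:, 0] + 0.1 * rng.normal(size=1000) > 0

# Difference of class means, normalized to unit length: a crude
# linear "direction" for the attribute in latent space.
direction = latents[labels].mean(axis=0) - latents[~labels].mean(axis=0)
direction /= np.linalg.norm(direction)
```

Because the toy attribute depends on coordinate 0, the recovered direction points almost entirely along that axis; with real latents the direction is a dense vector you would save as an .npy file.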


johndpope avatar johndpope commented on June 15, 2024

I think this guy @a312863063 created them https://github.com/a312863063/generators-with-stylegan2

English version:
https://github.com/a312863063/generators-with-stylegan2/blob/master/README_EN.md


johndpope avatar johndpope commented on June 15, 2024

Amazon's take on StyleGAN2: GAN-Control
https://arxiv.org/pdf/2101.02477v1.pdf

https://alonshoshan10.github.io/gan_control/


k-l-lambda avatar k-l-lambda commented on June 15, 2024

Amazon's take on StyleGAN2: GAN-Control
https://arxiv.org/pdf/2101.02477v1.pdf

https://alonshoshan10.github.io/gan_control/

Interesting.


johndpope avatar johndpope commented on June 15, 2024

related - https://github.com/johndpope/ALAE/blob/master/interactive_demo.py


Gvanderl avatar Gvanderl commented on June 15, 2024

(quoting @johndpope's comment above about the GenEdi latent directions and a FaceMaker collaboration)

I would love to collaborate; I'm just not certain what exactly should be implemented.


johndpope avatar johndpope commented on June 15, 2024

Hi @Gvanderl
nice repo! https://github.com/Gvanderl/StyleSegments

There seem to be three parts (though maybe more).

Python API

  • expose the directory contents as file names via the API / could be done via the initial spec call
  • DRAFTED:
{
    "image_shape": [null, 3, 1024, 1024],
    "latent_directions": [
        "latent_directions/emotion_angry.npy",
        "latent_directions/angle_vertical.npy",
        "latent_directions/emotion_fear.npy",
        "latent_directions/lip_ratio.npy",
        "latent_directions/pitch.npy",
        "latent_directions/exposure.npy",
        "latent_directions/roll.npy",
        "latent_directions/eyes_open.npy",
        "latent_directions/beauty.npy",
        "latent_directions/nose_ratio.npy",
        "latent_directions/glasses.npy",
        "latent_directions/eye_eyebrow_distance.npy",
        "latent_directions/face_shape.npy",
        "latent_directions/mouth_open.npy",
        "latent_directions/nose_tip.npy",
        "latent_directions/eye_distance.npy",
        "latent_directions/race_yellow.npy",
        "latent_directions/mouth_ratio.npy",
        "latent_directions/smile.npy",
        "latent_directions/emotion_surprise.npy",
        "latent_directions/race_black.npy",
        "latent_directions/angle_horizontal.npy",
        "latent_directions/gender.npy",
        "latent_directions/emotion_happy.npy",
        "latent_directions/race_white.npy",
        "latent_directions/width.npy",
        "latent_directions/emotion_disgust.npy",
        "latent_directions/camera_rotation.npy",
        "latent_directions/age.npy",
        "latent_directions/height.npy",
        "latent_directions/yaw.npy",
        "latent_directions/nose_mouth_distance.npy",
        "latent_directions/emotion_sad.npy",
        "latent_directions/eye_ratio.npy",
        "latent_directions/emotion_easy.npy"
    ],
    "latents_dimensions": 512,
    "model": "cat",
    "synthesis_input_shape": [null, 18, 512]
}
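The spec above could be assembled server-side by scanning the directory. A sketch of that idea - the function name and defaults are assumptions, not the actual stylegan-web API:

```python
import json
from pathlib import Path

def build_spec(directions_dir="latent_directions", model="cat"):
    # Hypothetical helper: list every .npy direction file so the
    # frontend can build its sliders from the initial spec call.
    files = sorted(str(p) for p in Path(directions_dir).glob("*.npy"))
    return {
        "image_shape": [None, 3, 1024, 1024],
        "latent_directions": files,
        "latents_dimensions": 512,
        "model": model,
        "synthesis_input_shape": [None, 18, 512],
    }

# json.dumps(build_spec()) would serve as the spec response body.
```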

ML

  • when the app changes the values, update the backend

Vue.js

  • surface the latent_directions with sliders

1st step: import the latent_directions directory - DONE

I pushed a TensorFlow 2 branch. Work in progress.
https://github.com/johndpope/stylegan-web

It seems each latent direction is a vector of shape (18, 512).

That may make things more complicated.
I'm thinking we could have a drop-down - then when the user selects one, show the 512 latent values associated with, say, age.
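Given that shape, applying a direction is just scaled vector addition in W+ space. A minimal numpy sketch - the layer slice and coefficient values are assumptions for illustration, and the random arrays stand in for the real .npy files:

```python
import numpy as np

def apply_direction(latent, direction, coeff, layers=slice(0, 8)):
    # latent and direction are (18, 512) W+ arrays. Only the chosen
    # layer range is edited: coarse/mid layers control pose and
    # geometry, later layers mostly texture.
    edited = latent.copy()
    edited[layers] = latent[layers] + coeff * direction[layers]
    return edited

# Stand-ins for a real face latent and a real direction file:
latent = np.zeros((18, 512))
direction = np.ones((18, 512))
older = apply_direction(latent, direction, coeff=2.0)
```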

This looks promising - I need to prototype the code to hotwire the http_server (without the UI):

import numpy as np
# config.latents_dir and move_and_show come from the surrounding project.

def change_face(image="maface_01", direction="gender", coeffs=None):
    if coeffs is None:
        coeffs = [-2, 0, 2]

    directions = {
        "smile": 'ffhq_dataset/latent_directions/smile.npy',
        "gender": 'ffhq_dataset/latent_directions/gender.npy',
        "age": 'ffhq_dataset/latent_directions/age.npy'
    }
    direction = np.load(directions[direction])  # (18, 512) direction vector
    face_latent = np.load(config.latents_dir / (image + ".npy"))
    move_and_show(face_latent, direction, coeffs)

If we can hardcode a change of smile, that would progress things toward extending the UI.
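move_and_show above comes from the Puzer/GenEdi lineage; a hedged sketch of roughly what it does (the synthesize callable is a stand-in for Gs.components.synthesis.run, which needs a live TF session - here it is optional so the latent math can be tested on its own):

```python
import numpy as np

def move_and_show(latent, direction, coeffs, synthesize=None):
    # For each coefficient, shift the first 8 W+ layers along the
    # direction and (if a synthesizer is supplied) render the result;
    # otherwise return the edited latents themselves.
    frames = []
    for coeff in coeffs:
        edited = latent.copy()
        edited[:8] = (latent + coeff * direction)[:8]
        frames.append(synthesize(edited) if synthesize else edited)
    return frames

frames = move_and_show(np.zeros((18, 512)), np.ones((18, 512)), [-2, 0, 2])
```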

(screenshot: Screen Shot 2021-01-16 at 5 18 07 pm)

UPDATE
@Puzer (Dmitry Nikitko) originally found the directions, according to this tweet.
UPDATE: they're different / I've added them all in.

https://hostb.org/NCM - official download here
https://twitter.com/robertluxemburg/status/1207087801344372736
https://github.com/Puzer/stylegan-encoder/blob/master/Learn_direction_in_latent_space.ipynb

UPDATE
I added a killServer.sh to take out the http_server, and a restart.sh script that also runs the build to recompile the Vue.js code.

Got the drop down showing latent directions in project.
(screenshot: Screen Shot 2021-01-16 at 10 24 17 pm)

It seems the final piece would be to wire up the drop-down change events to load the corresponding pkl and then make that the active generator.
I've wired up the onChange event so it calls the backend and selects the respective pkl to load / this still needs work.
The generator would either need to be extended to include a selected blended model - or... I don't know.

UPDATE
It seems we can just add the pre-canned latent directions together with some simple scaling, as done by @jasonlbx13:

https://github.com/jasonlbx13/FaceHack/blob/master/face_gan/flask_app.py

if len(request.form) != 0:
    smile = float(request.form['smile'])
    age = float(request.form['age'])
    gender = float(request.form['gender'])
    beauty = float(request.form['beauty'])
    angleh = float(request.form['angleh'])
    anglep = float(request.form['anglep'])
    raceblack = float(request.form['raceblack'])
    raceyellow = float(request.form['raceyellow'])
    racewhite = float(request.form['racewhite'])
    feature_book = [smile, age, gender, beauty, angleh, anglep, raceblack, raceyellow, racewhite]
else:
    feature_book = [0, 0, 0, 0, 0, 0, 0, 0, 0]


def move_latent(self, npy_dir, Gs_network, Gs_syn_kwargs, *args):
    latent_vector = np.load(npy_dir)[np.newaxis, :]
    smile, age, gender, beauty, angleh, anglep, raceblack, raceyellow, racewhite = args
    new_latent_vector = latent_vector.copy()
    # Only the first 8 (coarse/mid) layers are edited: each stored
    # direction is scaled by its slider value and added to the base latent.
    new_latent_vector[0][:8] = (latent_vector[0] + smile * self.smile_drt + age * self.age_drt
                                + gender * self.gender_drt + beauty * self.beauty_drt
                                + angleh * self.angleh_drt + anglep * self.anglep_drt
                                + raceblack * self.raceblack_drt + raceyellow * self.raceyellow_drt
                                + racewhite * self.racewhite_drt)[:8]
    with self.graph.as_default():
        with self.session.as_default():
            images = Gs_network.components.synthesis.run(new_latent_vector, **Gs_syn_kwargs)
    PIL.Image.fromarray(images[0], 'RGB').save(
        dnnlib.make_run_dir_path('./static/img/edit_face.jpg'))

I guess once the code is dropped in, the coefficients will be important to get this working correctly.
https://github.com/dream80/StyleganEncoder/blob/d9c8fe41a5b3ccdebd6af9d234a45c97416af3af/show1.py

#move_and_show(generator, target, "age", [-6, -4, -3, -2, 0, 2, 3, 4, 6])
#move_and_show(generator, target, "angle_horizontal", [-6, -4, -3, -2, 0, 2, 3, 4, 6])
#move_and_show(generator, target, "gender", [-6, -4, -3, -2, 0, 2, 3, 4, 6])
#move_and_show(generator, target, "eyes_open", [-3, -2, -1, -0.5, 0, 0.5, 1, 2, 3])
#move_and_show(generator, target, "glasses", [-6, -4, -3, -2, 0, 2, 3, 4, 6])
#move_and_show(generator, target, "smile", [-3, -2, -1, -0.5, 0, 0.5, 1, 2, 3])
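One reason the usable coefficient ranges differ per direction is that the stored vectors have different norms; normalizing each direction on load would let a single slider range behave comparably across attributes. A sketch of that idea - this is an assumption, not how the linked repos actually handle it:

```python
import numpy as np

def load_normalized(path):
    # Assumption: the .npy file holds an (18, 512) latent direction.
    # Dividing by the Frobenius norm gives every direction unit length,
    # so one slider range (e.g. -3..3) moves all attributes comparably.
    d = np.load(path)
    return d / np.linalg.norm(d)
```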

Also a notable mention: @swapp1990 seems to have achieved something similar already.
https://github.com/swapp1990/my-stylegan2-face-editing-app/blob/dev/client/src/views/FaceEditing.vue

