marekkowalski / faceswap
3D face swapping implemented in Python
License: MIT License
Hello, I was trying to launch the code, but I got stuck with this error: "Unexpected version found while deserializing dlib::shape_predictor." I have tried the model from the dlib website, but it didn't work out. What version of dlib did you use for this app? Maybe that would help.
The speed is very slow, and the images lag behind the camera.
How can I optimize it to speed it up?
Hi,
Great job and thank you for sharing!
I have some follow-up questions after working with your project:
I was trying the DAN version, but I was unable to make it work with the GPU. Should it be enough to tell Theano to use the GPU via THEANO_FLAGS='device=cuda,floatX=float32'?
http://deeplearning.net/software/theano/tutorial/using_gpu.html
In order to get the best results I want to edit the 3D mesh model. Should the model match the source face or the target's?
I want to implement eye blinking. Could you advise on the best approach for that?
Thanks in advance!
The code is working really nicely, thanks for making it available! My problem is that I'm not able to interrupt or cancel the script. I'm using Anaconda to execute the zad2.py file, and Ctrl+C doesn't work.
My solution so far is to exit Anaconda, but that feels a bit heavy-handed. There must be an easier solution?
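One workaround (a sketch, assuming the main loop shows frames with cv2.imshow as zad2.py does): poll cv2.waitKey every frame and break out of the loop on a chosen key, since Ctrl+C is often swallowed while the OpenCV window has focus:

```python
def should_exit(key):
    # mask to 8 bits: cv2.waitKey can return -1 or carry platform-dependent
    # high bits on some systems
    key &= 0xFF
    return key == ord('q') or key == 27  # 'q' or the Esc key

# Sketch of how it would slot into zad2.py's capture loop
# (variable names below are assumptions):
# while True:
#     ret, cameraImg = cap.read()
#     cv2.imshow("frame", cameraImg)
#     if should_exit(cv2.waitKey(1)):
#         break
```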
glMatrixMode(GL_PROJECTION)
File "C:\Users\vallem.balu\Anaconda3\envs\python_apis\lib\site-packages\OpenGL\error.py", line 234, in glCheckError
baseOperation = baseOperation,
OpenGL.error.GLError: GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glMatrixMode,
cArguments = (GL_PROJECTION,)
)
Has anyone seen this type of error?
Hi, I was experimenting with some images and found weird results: sometimes the faces in the final image are not aligned.
I am using the newer version, which is based on the Deep Alignment Network method.
https://www.dropbox.com/sh/yfq0hcu0we2hed0/AABXckFonehfgLfdzicjqqJYa?dl=0
I saw the function you wrote to read the parameters from candide.npz:
def load3DFaceModel(filename):
    faceModelFile = np.load(filename)
    mean3DShape = faceModelFile["mean3DShape"]
    mesh = faceModelFile["mesh"]
    idxs3D = faceModelFile["idxs3D"]
    idxs2D = faceModelFile["idxs2D"]
    blendshapes = faceModelFile["blendshapes"]
    mesh = fixMeshWinding(mesh, mean3DShape)
    return mean3DShape, blendshapes, mesh, idxs3D, idxs2D
It seems that the content of candide.npz differs from the Candide file at http://www.icg.isy.liu.se/candide/candide3.wfm.
Also, can you explain the meaning of the mean3DShape, mesh, idxs3D, idxs2D, and blendshapes arrays?
Any plans for a Google Colab notebook that works headless? Colab also offers a free GPU in 12-hour cycles.
Hello @MarekKowalski, Thank you for this good work it is really impressive.
Now I want to try making only part of the face appear instead of the whole face. For example, I want to crop from the nose down to the jaw to show only the moustache and beard. Is there a way to do that? I found myself stuck and might need some help.
Thank you in advance.
Hi there, I am getting the above error while trying to use the program. Any help on this would be great.
Hi @MarekKowalski, congratulations on the excellent work done (not to mention livescan3d, that is something indescribable), even if Angelina Jolie is not so attractive with my beard.
Now I have to do something similar to what you've done here: I have to add Angelina Jolie's hair to every single frame.
What do you think the best approach is?
Do you think updating your candide.npz with hair information makes sense?
I'm a little confused, and I want to avoid hair recognition because of the obvious risk of false positives and negatives.
Thank you in advance
Regards
Thanks for your work. I want to use gesture control to swap faces; is that possible?
And can it swap faces the way opera masks do?
If it can, how would I implement that? Could you point me to a paper about this software? Thank you.
@MarekKowalski will this work in real time for both pictures and short videos? What would the training time requirements be for the model to generate optimal results?
Hi Marek,
great work! Finally we got your code running. Stunning results.
Unfortunately it's quite slow. The rendered image always shows up a few seconds late, and it's not smooth at all. We localized the main performance drop to the dlib.get_frontal_face_detector() call. We tried on both a Mac and a Windows machine.
Any ideas what we could do to improve the performance?
Thanks in advance.
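Two common mitigations when the HOG detector dominates frame time: run detection on a downscaled copy of the frame, and re-run detection only every few frames, reusing the last rectangles in between (the landmark tracker usually tolerates a slightly stale box). A minimal sketch of the second idea; the dlib wiring in the trailing comment is an assumption, not tested here:

```python
class CachedDetector:
    """Run an expensive per-frame detector only every `every` frames and
    reuse the previous result in between (sketch; tune `every` to taste)."""
    def __init__(self, detect_fn, every=5):
        self.detect_fn = detect_fn
        self.every = every
        self.frame_idx = 0
        self.last = []

    def __call__(self, frame):
        # only pay the detection cost on every `every`-th frame
        if self.frame_idx % self.every == 0:
            self.last = self.detect_fn(frame)
        self.frame_idx += 1
        return self.last

# With dlib this might be wrapped as (assumption):
# detector = dlib.get_frontal_face_detector()
# cached = CachedDetector(lambda img: detector(img, 0), every=5)
```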
Thanks for the great work; it has been a great help to me.
Although I saw the other issue about eye blinking, I haven't implemented it yet.
What you suggested was removing the eye parts of the mesh, i.e. the triangles that correspond to the eye regions.
I don't know which of the 175 triangles are the eye parts. Could you give me advice, or an edited candide.npz model with the eye parts removed?
Also, len(mesh) is different from the 'FACE LIST' in candide3.wfm. Why is that?
My candide3.wfm is the latest version of the Candide model (v3.1.6):
len(mesh) = 175
'FACE LIST' in candide3.wfm = 184
What do you think? Are you deleting some parts of the 'FACE LIST'?
You said the model is a 'processed version', but your code doesn't include the conversion from .txt to .npz.
Hello Mr. Kowalski,
First of all I have to say you made great work.
I have a question: Is there a way to make the overlay more transparent? What I would like to achieve is not a hard overlay of another face onto the viewer, but instead making the person look older. I thought of making the overlay a bit more transparent so that the wrinkles etc. are still visible, giving a more morphed version blended with the original face.
I would be very thankful for any support.
Best regards
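One way to get that effect (a sketch, separate from the repo's blendImages, which does feathered blending): mix the rendered face and the camera frame inside the face mask at reduced opacity. A plain alpha blend restricted to the mask would look like this; cv2.addWeighted does the same arithmetic for whole images:

```python
import numpy as np

def alpha_blend(rendered, camera, mask, alpha=0.5):
    """Blend the rendered face onto the camera frame with partial opacity.
    `mask` is the 0/1 face mask; alpha=1.0 reproduces a hard overlay,
    smaller values let the original skin (wrinkles etc.) show through."""
    out = camera.astype(np.float32)
    m = mask > 0
    out[m] = alpha * rendered.astype(np.float32)[m] + (1 - alpha) * out[m]
    return out.astype(camera.dtype)
```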
Hey!
Tracking breaks when attempting to track multiple faces on the stable/fast FaceSwap-DAN. To get multiple faces to work properly you have to recalculate every scene. (I'm attempting to swap a face.jpg with all of the faces in a scene.)
Here are the modifications I made to get multiple faces working (and swapping), although it becomes extremely slow, definitely not real time:
https://gist.github.com/samhains/648ec70aab3d5a47920c95c5e0960ee3
Does anyone have ideas on fixing this in a more performant way?
Maybe a silly question, but how did you calculate mean3DShape? Its values are not the same as the vertices in the .wfm file.
I'm new to OpenGL and pygame. The server is a CentOS 6 machine with no desktop installed, and installing pygame failed despite many attempts.
When I comment out the pygame parts and run python zad2.py, it goes wrong and renderedImg is an all-zero array.
Is there any way I can run it successfully?
Hi! I'm very impressed by the quality of your work! I wonder, is it possible to put a more abstract texture on the face?
Hello, MarekKowalski. Thank you for this good work.
Now I want to swap the source face onto the target one. Should I create a candide.npz of the target face? Maybe 3D face reconstruction technology could provide useful information, such as the vertices, the mesh, and so on.
Thank you in advance.
Hi there,
Can I swap two faces from two images with zad2.py, without using the camera?
I mean, I have two images named im1 and im2, and I want to swap the faces between them.
If I can do that using zad2.py, how?
My target video (or source, for that matter) has two people in it. I found that running the scripts swaps only one person's face, and in my case it was the wrong person. May I ask what the best way is to filter out unwanted faces and only swap the chosen person's face?
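One approach, sketched below with plain (x, y, w, h) boxes (dlib rectangles would need converting first): pick one detection either by size, or by its distance to a point you specify once for the person you want:

```python
def pick_face(rects, point=None):
    """Choose one detection from a list of (x, y, w, h) boxes.
    If `point` (x, y) is given, take the box whose centre is closest to
    it (e.g. click once on the person you want to swap); otherwise take
    the largest box. A sketch, not the repo's own selection logic."""
    if not rects:
        return None
    if point is None:
        return max(rects, key=lambda r: r[2] * r[3])
    px, py = point
    return min(rects, key=lambda r: (r[0] + r[2] / 2 - px) ** 2 +
                                    (r[1] + r[3] / 2 - py) ** 2)
```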
Hi,
I'm actually testing FaceSwap with the following environment:
➜ FaceSwap lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal
➜ FaceSwap python --version
Python 3.8.5
➜ FaceSwap pip show tensorflow
Name: tensorflow
Version: 2.6.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: [email protected]
License: Apache 2.0
Location: ~/.local/lib/python3.8/site-packages
Requires: google-pasta, flatbuffers, wrapt, opt-einsum, termcolor, h5py, keras-preprocessing, grpcio, numpy, wheel, absl-py, tensorflow-estimator, gast, astunparse, typing-extensions, protobuf, six, tensorboard, keras-nightly
Required-by: fawkes
➜ FaceSwap apt show opencv
Package: opencv
Version: 4.5.2-6
Status: install ok installed
Priority: extra
Section: checkinstall
Maintainer: root@myVision
Installed-Size: 252 MB
Provides: build
Download-Size: unknown
APT-Manual-Installed: yes
APT-Sources: /var/lib/dpkg/status
Description: Package created with checkinstall 1.6.3
However, I got the following ERROR message.
➜ FaceSwap python zad2.py
pygame 2.0.1 (SDL 2.0.14, Python 3.8.5)
Hello from the pygame community. https://www.pygame.org/contribute.html
Press T to draw the keypoints and the 3D model
Press R to start recording to a video file
Loading network...
Input shape: (None, 1, 112, 112)
WARNING (theano.tensor.blas): We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.
[ WARN:0] global ....../opencv/modules/videoio/src/cap_gstreamer.cpp (1081) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Traceback (most recent call last):
File "zad2.py", line 66, in <module>
cameraImg = ImageProcessing.blendImages(renderedImg, cameraImg, mask)
File "....../face/FaceSwap/FaceSwap-DAN-public/FaceSwap/ImageProcessing.py", line 17, in blendImages
dists[i] = cv2.pointPolygonTest(hull, (maskPts[i, 0], maskPts[i, 1]), True)
cv2.error: OpenCV(4.5.2-dev) :-1: error: (-5:Bad argument) in function 'pointPolygonTest'
> Overload resolution failed:
> - Can't parse 'pt'. Sequence item with index 0 has a wrong type
> - Can't parse 'pt'. Sequence item with index 0 has a wrong type
Any suggestions?
Cheers
Pei
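This looks like OpenCV >= 4.5's stricter argument parsing rejecting the numpy scalars inside the point tuple. A sketch of a fix: convert the point to plain Python floats before the call (the blendImages wiring in the comment comes from the traceback; the helper name is mine):

```python
import numpy as np

def as_point(p):
    """Convert a numpy point row to a tuple of plain Python floats,
    which OpenCV >= 4.5's overload resolution accepts."""
    return (float(p[0]), float(p[1]))

# In ImageProcessing.py's blendImages this would become (sketch):
# dists[i] = cv2.pointPolygonTest(hull, as_point(maskPts[i]), True)
```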
Hello @MarekKowalski,
I asked you a month ago about how to blend a portion of the face, and you responded that I need to save and modify the mesh using MeshLab. In order to do that, which mesh do I need to save: blendshapes or mesh? And how am I supposed to save it to an .obj file? I tried to open the mesh from candide.npz, but MeshLab couldn't open it. Can you give me a quick guide? I'm new to manipulating meshes and working with 3D software.
Thanks.
Hi,
I've noticed a behavior where the whole face moves when I'm blinking.
Can the blinking be ignored? What is causing this behavior?
Also, when the face on camera is stable and not moving, the output is not stable and moves a little.
I assume this is because face detection runs on every frame.
How can I improve it and produce more stable output for the same input?
Thanks!
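A common remedy for per-frame detection jitter is to low-pass filter the landmarks between frames, e.g. with an exponential moving average (a sketch; the default alpha is a tuning assumption, not a value from the repo):

```python
import numpy as np

class LandmarkSmoother:
    """Exponential moving average over landmark positions; damps the
    frame-to-frame jitter caused by re-detecting on every frame.
    alpha near 1.0 follows the detector closely, near 0.0 smooths hard."""
    def __init__(self, alpha=0.4):
        self.alpha = alpha
        self.prev = None

    def __call__(self, landmarks):
        pts = np.asarray(landmarks, dtype=np.float64)
        # reset when the tracked point count changes (new face, lost track)
        if self.prev is None or self.prev.shape != pts.shape:
            self.prev = pts
        else:
            self.prev = self.alpha * pts + (1 - self.alpha) * self.prev
        return self.prev
```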
Although you provided some details about the code and the parameters, I can't fully understand the principles behind it. Could you recommend some related papers? Thank you very much!
Hi,
First of all, great work, man! Hats off!
The point I want to make is that when I run the face swap on videos, the face is kind of vibrating or moving. I want to resolve that issue; could you please tell me how?
Thanks in advance
Why not just use the 68 landmarks of the real face from the camera, use Delaunay triangulation to form the face mesh to draw, and use the input face image's 68 landmarks as texture coordinates?
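That 2D approach can work for near-frontal faces, though it loses the 3D pose and blendshape fitting that lets this project handle head rotation. The core step of such a Delaunay warp is a per-triangle affine map; a sketch of computing one (applying it per triangle would then be e.g. cv2.warpAffine, which is an assumption about the surrounding pipeline):

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """Return the 2x3 affine matrix mapping one triangle's vertices onto
    another's -- the building block of a Delaunay-based 2D face warp,
    where each mesh triangle gets its own affine transform."""
    src = np.asarray(src_tri, dtype=np.float64)
    dst = np.asarray(dst_tri, dtype=np.float64)
    # Solve [x y 1] @ M = [x' y'] for M (3x2), then return M.T as 2x3
    ones = np.ones((3, 1))
    M, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)
    return M.T
```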
Hi! I'm trying to create a headless version of this by replacing PyOpenGL with ModernGL. I'm planning to use shaders to transfer the texture to the 3D face model. However, it's difficult since I'm new to 3D, and I'm only familiar with dlib. I hope you can answer these questions about the parameters you used so I can understand more.
Thank you so much!
Can the mouth not be swapped?
May I know how to run this code? Which file should I run?