
faceswap's Introduction

deepfakes_faceswap


FaceSwap is a tool that utilizes deep learning to recognize and swap faces in pictures and videos.

    


Emma Stone/Scarlett Johansson FaceSwap using the Phaze-A model


Jennifer Lawrence/Steve Buscemi FaceSwap using the Villain model


Make sure you check out INSTALL.md before getting started.

Manifesto

FaceSwap has ethical uses.

When faceswapping was first developed and published, the technology was groundbreaking: it was a huge step in AI development. It was also completely ignored outside of academia, because the code was confusing and fragmentary. It required a thorough understanding of complicated AI techniques and took a lot of effort to figure out, until one individual brought it together into a single, cohesive collection. It ran, it worked, and, as is so often the way with new technology emerging on the internet, it was immediately used to create inappropriate content. Despite the inappropriate uses the software was put to originally, it was the first AI code that anyone could download, run and learn by experimentation without having a Ph.D. in math, computer theory, psychology, and more. Before "deepfakes", these techniques were like black magic, practiced only by those who could understand all of the inner workings as described in esoteric and endlessly complicated books and papers.

"Deepfakes" changed all that and anyone could participate in AI development. To us, developers, the release of this code opened up a fantastic learning opportunity. It allowed us to build on ideas developed by others, collaborate with a variety of skilled coders, experiment with AI whilst learning new skills and ultimately contribute towards an emerging technology which will only see more mainstream use as it progresses.

Are there some out there doing horrible things with similar software? Yes. And because of this, the developers have been following strict ethical standards. Many of us don't even use it to create videos, we just tinker with the code to see what it does. Sadly, the media concentrates only on the unethical uses of this software. That is, unfortunately, the nature of how it was first exposed to the public, but it is not representative of why it was created, how we use it now, or what we see in its future. Like any technology, it can be used for good or it can be abused. It is our intention to develop FaceSwap in a way that its potential for abuse is minimized whilst maximizing its potential as a tool for learning, experimenting and, yes, for legitimate faceswapping.

We are not trying to denigrate celebrities or to demean anyone. We are programmers, we are engineers, we are Hollywood VFX artists, we are activists, we are hobbyists, we are human beings. To this end, we feel that it's time to come out with a standard statement of what this software is and isn't as far as us developers are concerned.

  • FaceSwap is not for creating inappropriate content.
  • FaceSwap is not for changing faces without consent or with the intent of hiding its use.
  • FaceSwap is not for any illicit, unethical, or questionable purposes.
  • FaceSwap exists to experiment and discover AI techniques, for social or political commentary, for movies, and for any number of ethical and reasonable uses.

We are very troubled by the fact that FaceSwap can be used for unethical and disreputable things. However, we support the development of tools and techniques that can be used ethically as well as provide education and experience in AI for anyone who wants to learn it hands-on. We will take a zero tolerance approach to anyone using this software for any unethical purposes and will actively discourage any such uses.

How To setup and run the project

FaceSwap is a Python program that will run on multiple operating systems, including Windows, Linux, and macOS.

See INSTALL.md for full installation instructions. You will need a modern GPU with CUDA support for best performance. Many AMD GPUs are supported through DirectML (Windows) and ROCm (Linux).

Overview

The project has multiple entry points. You will have to:

  • Gather photos and/or videos
  • Extract faces from your raw photos
  • Train a model on the faces extracted from the photos/videos
  • Convert your sources with the model

Check out USAGE.md for more detailed instructions.

Extract

From your setup folder, run python faceswap.py extract. This will take photos from the src folder and extract faces into the extract folder.
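If your data lives elsewhere, the input and output folders can be passed explicitly; for example (the -i/-o flags also appear in the extract command quoted in the issues further down this page, and python faceswap.py extract -h lists the rest; the paths are only illustrative):

python faceswap.py extract -i ~/faceswap/src/personA -o ~/faceswap/faces/personA
python faceswap.py extract -i ~/faceswap/src/personB -o ~/faceswap/faces/personB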

Train

From your setup folder, run python faceswap.py train. This will take photos from two folders, each containing pictures of one of the faces, and train a model that will be saved inside the models folder.
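To point the trainer at specific folders, pass the two face folders and the model folder explicitly; for example (the -A/-B/-m/-p flags match the training command quoted in the issues further down this page, with -p opening the preview window; the paths are only illustrative):

python faceswap.py train -A ~/faceswap/faces/personA -B ~/faceswap/faces/personB -m ~/faceswap/models -p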

Convert

From your setup folder, run python faceswap.py convert. This will take photos from the original folder and apply the new faces, saving the results into the modified folder.
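A typical invocation looks something like the line below; the exact flags can vary between versions, so check python faceswap.py convert -h before relying on them (the paths are only illustrative):

python faceswap.py convert -i ~/faceswap/src/personA -o ~/faceswap/converted -m ~/faceswap/models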

GUI

Alternatively, you can run the GUI by running python faceswap.py gui.

General notes:

  • All of the scripts mentioned have -h/--help options with arguments that they will accept. You're smart, you can figure out how this works, right?!

NB: there is a conversion tool for video. This can be accessed by running python tools.py effmpeg -h. Alternatively, you can use ffmpeg to convert a video into photos, process the images, and convert the images back into a video.
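For example, two standard ffmpeg invocations will split a clip into frames and then reassemble the converted frames (the paths, frame pattern and frame rate here are only illustrative):

ffmpeg -i input.mp4 frames/frame-%05d.png
ffmpeg -framerate 25 -i converted/frame-%05d.png -c:v libx264 -pix_fmt yuv420p output.mp4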

Some tips:

Reusing existing models will train much faster than starting from nothing. If there is not enough training data, start with someone who looks similar, then switch the data.

Help I need support!

Discord Server

Your best bet is to join the FaceSwap Discord server where there are plenty of users willing to help. Please note that, like this repo, this is a SFW Server!

FaceSwap Forum

Alternatively, you can post questions in the FaceSwap Forum. Please do not post general support questions in this repo as they are liable to be deleted without response.

Donate

The developers work tirelessly to improve and develop FaceSwap. Many hours have been put in to provide the software as it is today, but this is an extremely time-consuming process with no financial reward. If you enjoy using the software, please consider donating to the devs, so they can spend more time implementing improvements.

Patreon

The best way to support us is through our Patreon page:

Become a Patron

One time Donations

Alternatively, you can give a one-off donation to any of our devs:

@torzdf

There is very little FaceSwap code that hasn't been touched by torzdf. He is responsible for implementing the GUI, FAN aligner, MTCNN detector and porting the Villain, DFL-H128 and DFaker models to FaceSwap, as well as significantly improving many areas of the code.

Bitcoin: bc1qpm22suz59ylzk0j7qk5e4c7cnkjmve2rmtrnc6

Ethereum: 0xd3e954dC241B87C4E8E1A801ada485DC1d530F01

Monero: 45dLrtQZ2pkHizBpt3P3yyJKkhcFHnhfNYPMSnz3yVEbdWm3Hj6Kr5TgmGAn3Far8LVaQf1th2n3DJVTRkfeB5ZkHxWozSX

Paypal: torzdf

@andenixa

Creator of the Unbalanced and OHR models, as well as expanding various capabilities within the training process. Andenixa is currently working on new models and will take requests for donations.

Paypal: andenixa

How to contribute

For people interested in the generative models

  • Go to the 'faceswap-model' repo to discuss/suggest/commit alternatives to the current algorithm.

For devs

  • Read this README entirely
  • Fork the repo
  • Play with it
  • Check issues with the 'dev' tag
  • For devs more interested in computer vision and OpenCV, look at issues with the 'opencv' tag. Also feel free to add your own alternatives/improvements

For non-dev advanced users

  • Read this README entirely
  • Clone the repo
  • Play with it
  • Check issues with the 'advuser' tag
  • Also go to the 'faceswap Forum' and help others.

For end-users

  • Get the code here and play with it if you can
  • You can also go to the faceswap Forum and help or get help from others.
  • Be patient. This is a relatively new technology for developers as well. Much effort is already being put into making this program easy to use for the average user. It just takes time!
  • Notice: Any issue related to running the code has to be opened in the faceswap Forum!

For haters

Sorry, no time for that.

About github.com/deepfakes

What is this repo?

It is a community repository for active users.

Why this repo?

The joshua-wu repo does not seem to be active. Simple bugs, like a missing http:// in front of URLs, have gone unfixed for days.

Why is it named 'deepfakes' if it is not /u/deepfakes?

  1. Because a typosquat would have happened sooner or later as the project grew
  2. Because we wanted to recognize the original author
  3. Because it will better federate contributors and users

What if /u/deepfakes feels bad about that?

This is a friendly typosquat, and it is fully dedicated to the project. If /u/deepfakes wants to take over this repo/user and drive the project, he is welcome to do so (raise an issue, and he will be contacted on Reddit). Please do not send /u/deepfakes messages for help with the code you find here.

About machine learning

How does a computer know how to recognize/shape faces? How does machine learning work? What is a neural network?

It's complicated. Here's a good video that makes the process understandable: How Machines Learn

Here's a slightly more in-depth video that tries to explain the basic functioning of a neural network: How Machines Learn

tl;dr: training data + trial and error

faceswap's People

Contributors

50mkw, abysmalbiscuit, andenixa, andykdy, babilio, bryanlyon, clorr, coldstacks, czfhhh, daniellivingston, deepfakes, dfaker, facepainter, ganonmaster, gdunstone, geewiz94, iperov, jayantpythonlover, joshua-wu, kilroythethird, kvrooman, leftler, lorjuo, mpuels, oatssss, torzdf, tvde1, wallopthecat, wauner, yutsa


faceswap's Issues

Can't install dlib or boost

Hi all,

This is not a bug; I am just having problems with my install. I've installed everything from this GitHub repo except dlib and face_recognition. I keep getting this when I try pip install dlib:

C:\Users\Michael Nguyen>pip install dlib
Collecting dlib
Using cached dlib-19.8.1.tar.gz
Building wheels for collected packages: dlib
Running setup.py bdist_wheel for dlib ... error
Complete output from command "c:\users\michael nguyen\appdata\local\programs\python\python35\python.exe" -u -c "import setuptools, tokenize;file='C:\Users\MICHAE1\AppData\Local\Temp\pip-build-bkwfh9da\dlib\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" bdist_wheel -d C:\Users\MICHAE1\AppData\Local\Temp\tmp_sxp8unwpip-wheel- --python-tag cp35:
running bdist_wheel
running build
Detected Python architecture: 64bit
Detected platform: win32
Configuring cmake ...
-- Building for: Visual Studio 14 2015
-- Selecting Windows SDK version to target Windows 10.0.16299.
-- The C compiler identification is MSVC 19.0.24210.0
-- The CXX compiler identification is MSVC 19.0.24210.0
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning at C:/Program Files/CMake/share/cmake-3.10/Modules/FindBoost.cmake:1610 (message):
No header defined for python-py34; skipping header check
Call Stack (most recent call first):
C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:66 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
CMake Warning at C:/Program Files/CMake/share/cmake-3.10/Modules/FindBoost.cmake:1610 (message):
No header defined for python-py35; skipping header check
Call Stack (most recent call first):
C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:68 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
CMake Warning at C:/Program Files/CMake/share/cmake-3.10/Modules/FindBoost.cmake:1610 (message):
No header defined for python3; skipping header check
Call Stack (most recent call first):
C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:71 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
-- Could NOT find Boost
-- Found PythonLibs: C:/Users/Michael Nguyen/AppData/Local/Programs/Python/Python35/libs/python35.lib (found suitable version "3.5.4", minimum required is "3.4")
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Enabling SSE4 instructions
-- Searching for BLAS and LAPACK
-- Searching for BLAS and LAPACK
-- Looking for pthread.h
-- Looking for pthread.h - not found
-- Found Threads: TRUE
-- A library with BLAS API not found. Please specify library location.
-- LAPACK requires BLAS
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0 (found suitable version "8.0", minimum required is "7.5")
CMake Warning at C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/CMakeLists.txt:535 (message):
You have CUDA installed, but we can't use it unless you put visual studio
in 64bit mode.
-- Disabling CUDA support for dlib. DLIB WILL NOT USE CUDA
-- C++11 activated.
-- *****************************************************************************************************
-- We couldn't find the right version of boost python. If you installed boost and you are still getting this error then you might have installed a version of boost that was compiled with a different version of visual studio than the one you are using. So you have to make sure that the version of visual studio is the same version that was used to compile the copy of boost you are using.

-- You will likely need to compile boost yourself rather than using one of the precompiled
-- windows binaries. Do this by going to the folder tools\build\ within boost and running
-- bootstrap.bat. Then run the command:
-- b2 install
-- And then add the output bin folder to your PATH. Usually this is the C:\boost-build-engine\bin
-- folder. Finally, go to the boost root and run a command like this:
-- b2 -a --with-python address-model=64 toolset=msvc runtime-link=static
-- Note that you will need to set the address-model based on if you want a 32 or 64bit python library.
-- When it completes, set the BOOST_LIBRARYDIR environment variable equal to wherever b2 put the
-- compiled libraries. You will also need to set BOOST_ROOT to the root folder of the boost install.
-- E.g. Something like this:
-- set BOOST_ROOT=C:\local\boost_1_57_0
-- set BOOST_LIBRARYDIR=C:\local\boost_1_57_0\stage\lib

-- Next, if you aren't using python setup.py then you will be invoking cmake to compile dlib.
-- In this case you may have to use cmake's -G option to set the 64 vs. 32bit mode of visual studio.
-- Also, if you want a Python3 library you will need to add -DPYTHON3=1. You do this with a statement like:
-- cmake -G "Visual Studio 14 2015 Win64" -DPYTHON3=1 ....\tools\python
-- Rather than:
-- cmake ....\tools\python
-- Which will build a 32bit Python2 module by default on most systems.

-- *****************************************************************************************************
CMake Error at C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:149 (message):
Boost python library not found.
Call Stack (most recent call first):
CMakeLists.txt:9 (include)
-- Configuring incomplete, errors occurred!
See also "C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/tools/python/build/CMakeFiles/CMakeOutput.log".
See also "C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/tools/python/build/CMakeFiles/CMakeError.log".
error: cmake configuration failed!


Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib
Installing collected packages: dlib
Running setup.py install for dlib ... error
Complete output from command "c:\users\michael nguyen\appdata\local\programs\python\python35\python.exe" -u -c "import setuptools, tokenize;file='C:\Users\MICHAE1\AppData\Local\Temp\pip-build-bkwfh9da\dlib\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\MICHAE1\AppData\Local\Temp\pip-p539ppox-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
Detected Python architecture: 64bit
Detected platform: win32
Removing build directory C:\Users\MICHAE~1\AppData\Local\Temp\pip-build-bkwfh9da\dlib./tools/python/build
Configuring cmake ...
-- Building for: Visual Studio 14 2015
-- Selecting Windows SDK version to target Windows 10.0.16299.
-- The C compiler identification is MSVC 19.0.24210.0
-- The CXX compiler identification is MSVC 19.0.24210.0
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning at C:/Program Files/CMake/share/cmake-3.10/Modules/FindBoost.cmake:1610 (message):
No header defined for python-py34; skipping header check
Call Stack (most recent call first):
C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:66 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
CMake Warning at C:/Program Files/CMake/share/cmake-3.10/Modules/FindBoost.cmake:1610 (message):
No header defined for python-py35; skipping header check
Call Stack (most recent call first):
C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:68 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
CMake Warning at C:/Program Files/CMake/share/cmake-3.10/Modules/FindBoost.cmake:1610 (message):
No header defined for python3; skipping header check
Call Stack (most recent call first):
C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:71 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
-- Could NOT find Boost
-- Found PythonLibs: C:/Users/Michael Nguyen/AppData/Local/Programs/Python/Python35/libs/python35.lib (found suitable version "3.5.4", minimum required is "3.4")
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Enabling SSE4 instructions
-- Searching for BLAS and LAPACK
-- Searching for BLAS and LAPACK
-- Looking for pthread.h
-- Looking for pthread.h - not found
-- Found Threads: TRUE
-- A library with BLAS API not found. Please specify library location.
-- LAPACK requires BLAS
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0 (found suitable version "8.0", minimum required is "7.5")
CMake Warning at C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/CMakeLists.txt:535 (message):
You have CUDA installed, but we can't use it unless you put visual studio
in 64bit mode.
-- Disabling CUDA support for dlib. DLIB WILL NOT USE CUDA
-- C++11 activated.
-- *****************************************************************************************************
-- We couldn't find the right version of boost python. If you installed boost and you are still getting this error then you might have installed a version of boost that was compiled with a different version of visual studio than the one you are using. So you have to make sure that the version of visual studio is the same version that was used to compile the copy of boost you are using.
--
-- You will likely need to compile boost yourself rather than using one of the precompiled
-- windows binaries. Do this by going to the folder tools\build\ within boost and running
-- bootstrap.bat. Then run the command:
-- b2 install
-- And then add the output bin folder to your PATH. Usually this is the C:\boost-build-engine\bin
-- folder. Finally, go to the boost root and run a command like this:
-- b2 -a --with-python address-model=64 toolset=msvc runtime-link=static
-- Note that you will need to set the address-model based on if you want a 32 or 64bit python library.
-- When it completes, set the BOOST_LIBRARYDIR environment variable equal to wherever b2 put the
-- compiled libraries. You will also need to set BOOST_ROOT to the root folder of the boost install.
-- E.g. Something like this:
-- set BOOST_ROOT=C:\local\boost_1_57_0
-- set BOOST_LIBRARYDIR=C:\local\boost_1_57_0\stage\lib
--
-- Next, if you aren't using python setup.py then you will be invoking cmake to compile dlib.
-- In this case you may have to use cmake's -G option to set the 64 vs. 32bit mode of visual studio.
-- Also, if you want a Python3 library you will need to add -DPYTHON3=1. You do this with a statement like:
-- cmake -G "Visual Studio 14 2015 Win64" -DPYTHON3=1 ....\tools\python
-- Rather than:
-- cmake ....\tools\python
-- Which will build a 32bit Python2 module by default on most systems.
--
-- *****************************************************************************************************
CMake Error at C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:149 (message):
Boost python library not found.
Call Stack (most recent call first):
CMakeLists.txt:9 (include)
-- Configuring incomplete, errors occurred!
See also "C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/tools/python/build/CMakeFiles/CMakeOutput.log".
See also "C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/tools/python/build/CMakeFiles/CMakeError.log".
error: cmake configuration failed!

----------------------------------------

Command ""c:\users\michael nguyen\appdata\local\programs\python\python35\python.exe" -u -c "import setuptools, tokenize;file='C:\Users\MICHAE1\AppData\Local\Temp\pip-build-bkwfh9da\dlib\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\MICHAE1\AppData\Local\Temp\pip-p539ppox-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\MICHAE~1\AppData\Local\Temp\pip-build-bkwfh9da\dlib\

C:\Users\Michael Nguyen>

Does anyone use Windows 10 and know how to tackle this?

License?

How can we use this amazing piece of ML art?
Having a license would greatly improve the value of this project.

The MIT license is one of the most permissive.
The GNU GPL is the way to go if you want to force all derivative projects to be open source.

Improve usage for non technical people

It would be great to facilitate use for non technical people. IE by making this an .exe with PyInstaller, or something similar. Any idea is welcome....

Training does not stop when pressing "q"

I had been training the model for the past 36 hours on macOS High Sierra.
The issue I am facing is that when I press "q" to stop the training process, nothing happens; it continues training the model.
Please help, as I don't want to terminate the program with Ctrl+C: the trained model would be lost and all my time/electricity/etc. would be wasted!

Improve Dockerfile

The Dockerfile contains the basic dependencies needed to run the program.

It currently creates bugs with os.scandir and the mkdir parameter exist_ok.

Please feel free to improve it and provide better support for the program.

Please take care of CPU-only users if you update TensorFlow, and provide a Dockerfile.gpu for optimised usage.

Training works fine, but merge / convert does not

Can anyone determine why this issue may be occurring? The model is trained to a very low loss (<0.02) and the preview window looks fine, but at the convert / merge stage the output turns into a mess.

Please forgive me if this is an intrusion, as strictly speaking this concerns reddit.com/r/FakeApp, but I noticed user Clorr1's offer of helping out and pointing to this repo, which FakeApp presumably implements. I have seen several people with this issue on /r/fakeapp and /r/deepfakes.

[Links removed]

It's really frustrating because the training seems to be fine in the earlier stages (judging from early merge test runs), but once things start getting accurate in the preview window, merge (now convert) turns the whole thing into zombie mode. So far I can't see anything in common between users like myself who have this issue. I have tried a lot of different troubleshooting steps, but they are fairly random, to be honest:

  • Downgrading the graphics driver to the one shipped with CUDA 8.0
  • Switching A and B to match the file format
  • Running the app in admin mode
  • Uninstalling a coincidentally installed Python

I'm really sure about the quality of the celeb images; there is nothing to indicate they are problematic. After all, why does it start fine and end up a mess while the preview window shows good results?

Thanks for any help

Nvidia 1070
Win 10

Installation manual step by step

Hello everyone. For more than a week I have been trying to run this project, but I am having trouble. I even deleted my Python and PyCharm installs to reinstall them, but still no success. Right now, I can't install the tensorflow-gpu package; it says "binascii.Error: Incorrect padding". Before that the trouble was with dlib, but reinstalling Python and PyCharm helped with it.
I am begging someone to help with an installation manual, step by step... thank you a lot! I will not give up, this project is awesome!

ImportError: No module named pathlib

I have already installed pathlib for Python 3.6: Requirement already satisfied: pathlib in /usr/local/lib/python3.6/dist-packages

Command executed: python3 faceswap.py extract -i ~/faceswap/photo/trump -o ~/faceswap/data/trump

Traceback (most recent call last):
File "faceswap.py", line 3, in <module>
from scripts.extract import ExtractTrainingData
File "/home/ubuntu/data/faceswap/scripts/extract.py", line 2, in <module>
from lib.cli import DirectoryProcessor
File "/home/ubuntu/data/faceswap/lib/cli.py", line 6, in <module>
from lib.utils import get_image_paths, get_folder, load_images, stack_images
File "/home/ubuntu/data/faceswap/lib/utils.py", line 4, in <module>
from pathlib import Path
ImportError: No module named pathlib

Can anyone help me out with this issue?

Create pre-built packages

PyInstaller or some other means to create pre-built packages for the most common OSes. Alternatively, look for other similar tools that would allow us to manage most of the dependencies. Perhaps Conda or others.

Target OSes:

  • Windows
  • macOS

For Linux users the manual setup with virtualenv or Dockerfile will most likely suffice.

GPU training does not work

Limit:                  1823465472
InUse:                  1823465472
MaxInUse:               1823465472
NumAllocs:                     229
MaxAllocSize:             94452992

2018-01-28 16:25:05.010972: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:277] ***************************************************************************************************x
2018-01-28 16:25:05.011266: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[2048]

Other relevant information

  • Operating system and version: Windows 8.1
  • Python version: 3.6.4
  • Faceswap version: a799f769e4c48908c3efd64792384403392f2e82
  • Faceswap method: GPU

Two tricks to improve results

Adjust the new face's average color to match the old face's.

Use the old face's edges to apply a smooth mask to the new one.

(before/after example images)
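A minimal OpenCV/NumPy sketch of the two tricks, assuming the new and old faces are already aligned crops of the same size and that a rough face mask is available (function and variable names are illustrative, not part of the faceswap code):

import cv2
import numpy as np

def match_mean_color(new_face, old_face, mask):
    # Trick 1: shift each colour channel of the new face so its average inside
    # the mask matches the old face's average.
    new_face = new_face.astype(np.float32)
    old_face = old_face.astype(np.float32)
    region = mask > 0
    for channel in range(3):
        shift = old_face[..., channel][region].mean() - new_face[..., channel][region].mean()
        new_face[..., channel] += shift
    return np.clip(new_face, 0, 255).astype(np.uint8)

def feathered_blend(new_face, old_face, mask, blur=15):
    # Trick 2: blur the mask so the edge of the swap fades smoothly into the old face.
    soft = cv2.GaussianBlur(mask.astype(np.float32), (blur, blur), 0)
    soft = (soft / soft.max())[..., None]
    blended = new_face.astype(np.float32) * soft + old_face.astype(np.float32) * (1.0 - soft)
    return blended.astype(np.uint8)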

Configurable internal image size

It'd be nice if the internal image size was easily configurable through the various steps to something other than 64x64. With upcoming improved plugins, or just someone willing to put in a lot more training time, going higher resolution seems inevitable.

I've tried hacking around a bit but I'm new to python and deep learning. I'm sure I'll get it eventually but this is probably a trivial change for someone familiar with these libraries.

It doesn't need to be piped all the way to the command line, but if it could be pulled up into a single define, it would be as easy a change for people as changing ENCODER_DIM is.
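As a rough illustration of the requested change (the names below are hypothetical, not the actual faceswap variables), a single module-level constant could drive the input shape everywhere 64 is currently hard-coded, just as ENCODER_DIM already does for the bottleneck size:

# hypothetical model_config.py
IMAGE_SIZE = 64                      # change to 128 here and everything that
ENCODER_DIM = 1024                   # imports the constant follows along
INPUT_SHAPE = (IMAGE_SIZE, IMAGE_SIZE, 3)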

Add a config file? Wouldn't it be better than arg parsing?

Hi guys,

The more I think about it, the more I think complex arg parsing will be a problem.

Also if we want to move on to a GUI, it would be better to have a config file to set parameters durably.

We can still have an override for params through the command line, so we can easily customize just the things we want. Something like ConfigArgParse does this, for example.

What do you think about it?
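A small sketch of what that could look like with the ConfigArgParse package, where command-line flags override values read from an optional config file (the file name and options are only examples):

import configargparse

parser = configargparse.ArgumentParser(default_config_files=["faceswap.ini"])
parser.add_argument("-c", "--config", is_config_file=True, help="path to a config file")
parser.add_argument("--batch-size", type=int, default=64, help="training batch size")
parser.add_argument("--model-dir", default="models", help="where to save the model")
args = parser.parse_args()

# Precedence: command line > config file > defaults
print(args.batch_size, args.model_dir)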

Don't reload models every time `convert_one_image` is called

Expected behavior

Use the convert command to convert a directory. convert_one_image loads the model once.

Actual behavior

Use the convert command to convert a directory. convert_one_image loads the model every time that it is called.
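One way to fix this is to cache the loaded model outside the per-image call, e.g. with functools.lru_cache; a minimal sketch, where load_model_weights is a hypothetical stand-in for whatever the convert script currently does on every call:

from functools import lru_cache

@lru_cache(maxsize=1)
def get_converter(model_dir):
    # Build the network and load its weights only on the first call;
    # every later call with the same model_dir returns the cached object.
    return load_model_weights(model_dir)   # hypothetical loader

def convert_one_image(image, model_dir):
    converter = get_converter(model_dir)   # no reload per image
    return converter(image)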

FakeApp uses little GPU

I have a question about GPU usage. I have a Ryzen 1600X and an Nvidia 1060. When I run training, the CPU runs at about 30% and the GPU core at about 10-20%, while the 5 GB of VRAM is full. Is this normal? Training has been running for a few hours at the same percentages. Thanks for any answer, and sorry for my English.

Handle better shapes for face replacement (Not just a rectangle)

The rectangle gives the generated image an artificial look, so it would be a nice feature to use a softer shape.

This page shows a couple of interesting tricks, like landmark detection, hull detection, and seamless cloning.

(Note: If needed, landmarks are already handled in the aligner class)
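A rough OpenCV sketch of the hull + seamless-cloning idea, assuming the new face has already been warped into the frame's coordinate space and that the landmark points are available (for example from the aligner class mentioned above):

import cv2
import numpy as np

def paste_face(new_face, frame, landmarks):
    # Convex hull of the landmarks gives a face-shaped region instead of a rectangle.
    hull = cv2.convexHull(np.array(landmarks, dtype=np.int32))

    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, hull, 255)

    # Seamless (Poisson) cloning blends colours and lighting along the hull boundary.
    x, y, w, h = cv2.boundingRect(hull)
    center = (x + w // 2, y + h // 2)
    return cv2.seamlessClone(new_face, frame, mask, center, cv2.NORMAL_CLONE)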

Allow a user to gracefully cancel training when there is no preview

In its current state, the tool does not allow users to hit q to quit unless the preview window is active and has focus. The tool only saves after a certain number of iterations, so it would be great if we could enable "non-previewers" or Docker users to quit out this way as well and preserve their work.
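One possible approach (a sketch, not the tool's current behaviour): watch stdin from a daemon thread and set a flag that the training loop checks, so typing q and pressing Enter saves and exits even without a preview window. train_one_batch and save_model below are hypothetical stand-ins for the real training code:

import threading

stop_requested = threading.Event()

def watch_for_quit():
    # Runs in a daemon thread; typing "q" + Enter requests a graceful stop.
    try:
        while True:
            if input().strip().lower() == "q":
                stop_requested.set()
                return
    except EOFError:
        return  # no interactive stdin (e.g. a detached Docker container)

threading.Thread(target=watch_for_quit, daemon=True).start()

while not stop_requested.is_set():
    train_one_batch()   # hypothetical: one training iteration
save_model()            # hypothetical: persist the weights before exiting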

Cannot install tensorflow-gpu requirement

I tried installing requirements-gpu.txt and got this error:

Collecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) Cache entry deserialization failed, entry ignored Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: ) No matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))

I went here to troubleshoot the issue: tensorflow/tensorflow#8251
Installed Python 64bit. Opened new command prompt window and typed in: pip3 install --upgrade tensorflow-gpu

Successfully uninstalled setuptools-28.8.0
Successfully installed bleach-1.5.0 enum34-1.1.6 html5lib-0.9999999 markdown-2.6.11 numpy-1.13.3 protobuf-3.5.1 setuptools-38.4.0 six-1.11.0 tensorflow-gpu-1.4.0 tensorflow-tensorboard-0.4.0rc3 werkzeug-0.14.1 wheel-0.30.0

I went back to my faceswap env to install requirements-gpu.txt and still got the same error:
(faceswap) C:\faceswap>pip install -r requirements-gpu.txt
Collecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))
Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: )
No matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))

Other relevant information

  • Operating system and version: Windows 10
    Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32
  • Faceswap version: 1/5/2018
  • Faceswap method: CPU/GPU (only the CPU method works)
  • ...

"Easy install" on windows

Installing on Windows is not as easy as it seems. You can go the hard way, which requires compiling some sources and therefore installing the compilation tools. Or you can try an easier way, at the cost of not being totally up to date with the tools.

My experience with it is as follows:

  • Make sure you install Python 3.6.4 64bit!
  • Got to https://www.lfd.uci.edu/~gohlke/pythonlibs/#scikit-image and download the scikit_image‑0.13.1‑cp36‑cp36m‑win_amd64.whl file
  • Run pip install scikit_image‑0.13.1‑cp36‑cp36m‑win_amd64.whl
  • Then run pip install dlib==19.7.0 (this is not the latest, but it is precompiled)
  • Then run pip install -r .\requirements-gpu.txt

The requirements should install without problems, as the two big painful dependencies are already in place. If you get "No matching distribution found for tensorflow-gpu", it means you have the 32-bit version!

Problem "Resource exhausted: OOM when allocating tensor"

Hi, I tested training using the CPU and everything worked fine. The preview window appeared, and it saved the training data in the models directory.
But when I changed to training on the GPU, there was no preview window. It finished training in 2 minutes, and there was no training data in the models directory.
I don't know what happened; maybe it is the video card I use.
I use a GTX 660, which supports CUDA 3.0.
Any help with this? Thank you.

These are screenshots from training on the GPU.
Resource exhausted: OOM when allocating tensor with shape[3, 3, 128, 256]

  • Operating system and version: Windows10
  • Python version: 3.6.4
  • Faceswap version: the latest
  • Faceswap method: GPU with GTX660 installed cuda toolkit 8.0

Error when launching train

Output:

[zoulock@zoulock-desktop faceswap]$ python3 faceswap.py train -A faceA -B faceB -m models -p
/usr/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
WARNING:tensorflow:From /usr/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:1264: calling reduce_prod (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From /usr/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Model A Directory: /home/zoulock/faceswap/faceA
Model B Directory: /home/zoulock/faceswap/faceB
Training data directory: /home/zoulock/faceswap/models
Not loading existing training data.
Unable to open file (unable to open file: name = '/home/zoulock/faceswap/models/encoder.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
Starting, this may take a while...
usage: faceswap.py [-h] {extract,train,convert} ...

positional arguments:
{extract,train,convert}
extract Extract the faces from a pictures.
train This command trains the model for the two faces A and
B.
convert Convert a source image to a new one with the face
swapped.

optional arguments:
-h, --help show this help message and exit

Releasing Desktop App w/ GUI

If anyone here is interested I've created a simple desktop app w/ GUI to distribute the deepfakes toolkit without the need to install python or other dependencies. Here is a screenshot of it working. The download and more info are in this thread.

Currently this app runs on the original scripts because I wasn't aware of this repo, but I think I'm eventually going to migrate it to the improved scripts here.

Migrate to Python3 only

The Dockerfile is now cleaner, but still relying on Python2 dependencies.

It would be great to move everything to Python3 so we get rid of legacy, and can take advantage of Python3 features.

I'm not a regular Python user, so I'm not sure what to do. I can try to tackle that at some point, but for now I prefer to focus on new features and algorithm improvements.

Special request to make things easy.

I request step-by-step instructions for working with this project.
I am a Java and Node.js guy, but I am interested in AI/ML and a noob at Python.
I am sure there are many people like me around the globe, so I request more details: my environment is set up perfectly, but the project files throw errors when run.
Once I am familiar with how it works, I will assist with this project.
Thanks in advance.

Prompt to create new directories if they do not exist

The current iteration of the tool will simply error out or quit if the model and output directories do not exist. To improve this behavior for new users, we can prompt to create these directories automatically.
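A minimal sketch of the suggested behaviour using pathlib (the prompt wording is just an example):

from pathlib import Path

def ensure_dir(path):
    # Ask before creating a missing model/output directory instead of erroring out.
    path = Path(path)
    if path.is_dir():
        return path
    answer = input(f"{path} does not exist. Create it? [y/N] ")
    if answer.strip().lower() == "y":
        path.mkdir(parents=True, exist_ok=True)
        return path
    raise SystemExit(f"Aborted: {path} is required but was not created.")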

Improve documentation

Improve documentation covering the following topics:

  • Installation/requirements
  • Finding and extracting training data
  • Training your model with your training data
  • Conversion/swapping

Do not print help text after cancelling

At this time, all the scripts will print help text after exiting in an unexpected fashion. For example, when cancelling a command, it will still print the help text before quitting.

Cluster faces during extract using dlib.chinese_whispers_clustering

I have had some success hacking together a pre-processing script to run over my training images. It uses dlib.chinese_whispers_clustering to group the faces found in the training data based on likeness. I think one of the keys to good results is good training sets, and this helps prevent polluting the training data with other people's faces, as tends to happen with Google image search sets or images with multiple faces.

There are a couple of ways I think this could be integrated into the project:

  1. during extract, when generating face chips, discard non-target faces (all faces not in the largest cluster)
  2. during convert, where frames have multiple faces, identify only the target face for replacement.

Here's the script; sorry, it's a bit hacky. I just wanted something that worked and haven't cleaned it up. I'm not sure where I would begin to integrate it into the project, perhaps as an alternative plugin?
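The script itself is not reproduced here, but a minimal sketch of the approach, assuming the standard dlib landmark and face-recognition model files and a list of image paths, looks roughly like this:

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
facerec = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

descriptors, sources = [], []
for path in image_paths:                                  # image_paths: your training images
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    for box in detector(img, 1):
        shape = predictor(img, box)
        descriptors.append(facerec.compute_face_descriptor(img, shape))
        sources.append(path)

# Descriptors closer than the threshold end up in the same cluster.
labels = dlib.chinese_whispers_clustering(descriptors, 0.5)
largest = max(set(labels), key=labels.count)
target_images = [src for src, label in zip(sources, labels) if label == largest]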

Amount of RAM needed?

I'm trying to train the network with the example provided, and it crashes with this error:

"Resource exhausted: OOM when allocating tensor with shape"

Python reached 3.7 GB, and my computer has 8 GB; usage peaked at 9.8 GB.

I'm using TensorFlow on the CPU. How much RAM is needed?

Improved dependency handling

Ensure that all dependencies are met prior to execution and provide helpful messages to the user in case they are missing.

The convert step shows error

It shows this error with no picture output!

contrib/shape_predictor_68_face_landmarks.dat file not found.
Landmark file can be found in http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
Unzip it in the contrib/ folder.
OpenCV Error: Assertion failed (ssize.width > 0 && ssize.height > 0) in resize, file /io/opencv/modules/imgproc/src/resize.cpp, line 4044
Failed to extract from image:~path~/~photoname~.jpg. Reason: /io/opencv/modules/imgproc/src/resize.cpp:4044: error: (-215) ssize.width > 0 && ssize.height > 0 in function resize

I don't know why, and it still shows the error on another computer running Linux.

Web UI?

I think it would be nice to have a web UI for this project. It may be very helpful when running the script in the cloud, but it may also be convenient for local usage. Users could set up everything by executing one simple docker run / nvidia-docker run command, which would start the web UI server.

I made a few sketches to better understand the idea: https://www.figma.com/file/LCHSW0lMj8OAUo8dLOljPc9b/Deepfakes


What do you think about it?
