
DeepFaceLab's Introduction

DeepFaceLab

https://arxiv.org/abs/2005.05535

The leading software for creating deepfakes.

More than 95% of deepfake videos are created with DeepFaceLab.

DeepFaceLab is used by popular YouTube channels such as:

deeptomcruise 1facerussia arnoldschwarzneggar
mariahcareyathome? diepnep mr__heisenberg deepcaprio
VFXChris Ume Sham00k
Collider videos iFake NextFace
Futuring Machine RepresentUS Corridor Crew
DeepFaker DeepFakes in movie
DeepFakeCreator Jarkan

What can I do using DeepFaceLab?

Replace the face

De-age the face

https://www.youtube.com/watch?v=Ddx5B-84ebo

Replace the head

https://www.youtube.com/watch?v=xr5FHd0AdlQ

https://www.youtube.com/watch?v=RTjgkhMugVw

https://www.youtube.com/watch?v=R9f7WD0gKPo

Manipulate politicians' lips

(Voice replacement is not included! This also requires skill with video editors such as Adobe After Effects or DaVinci Resolve.)

https://www.youtube.com/watch?v=IvY-Abd2FfM

https://www.youtube.com/watch?v=ERQlaJ_czHU

Deepfake native resolution progress

Unfortunately, there is no "make everything ok" button in DeepFaceLab. You should spend time studying the workflow and growing your skills. Skill with programs such as After Effects or DaVinci Resolve is also desirable.

Mini tutorial

Releases

Windows (magnet link) Latest release. Use a torrent client to download.
Windows (Mega.nz) Contains new and previous releases.
Windows (yandex.ru) Contains new and previous releases.
Google Colab (github) by @chervonij . You can train fakes for free using Google Colab.
Linux (github) by @nagadit
CentOS Linux (github) May be outdated. By @elemantalcode

Links

Guides and tutorials

DeepFaceLab guide Main guide
Faceset creation guide How to create the right faceset
Google Colab guide How to train a fake on Google Colab
Compositing To achieve the highest quality, composite the deepfake manually in video editors such as DaVinci Resolve or Adobe After Effects
Discussion and suggestions

Supplementary material

Ready-to-use facesets Celebrity facesets made by the community
Pretrained models Pretrained models made by the community

Communication groups

Discord Official discord channel. English / Russian.
Telegram group Official telegram group. English / Russian. For anonymous communication. Don't forget to hide your phone number.
Russian forum
mrdeepfakes the biggest NSFW English community
reddit r/DeepFakesSFW/ Post your deepfakes there!
reddit r/RUdeepfakes/ Post Russian deepfakes there!
QQ group 124500433 Chinese-language QQ group; contact the group owner for business cooperation
dfldata.cc Chinese-language forum: free software tutorials, models, and face datasets
deepfaker.xyz Chinese-language learning site (unofficial)

Related works

DeepFaceLive Real-time face swap for PC streaming or video calls
neuralchen/SimSwap Swapping a face using ONE single photo (training-free face swap from a single image)
deepfakes/faceswap Something that came before DeepFaceLab and still remains in the past

How can I help the project?

Sponsor deepfake research and DeepFaceLab development.

Donate via Yoomoney
bitcoin:bc1qkhh7h0gwwhxgg6h6gpllfgstkd645fefrd5s6z

Collect facesets

You can collect a faceset of any celebrity that can be used in DeepFaceLab and share it with the community.

Star this repo

Register a GitHub account and push the "Star" button.

Meme zone

#deepfacelab #deepfakes #faceswap #face-swap #deep-learning #deeplearning #deep-neural-networks #deepface #deep-face-swap #fakeapp #fake-app #neural-networks #neural-nets #tensorflow #cuda #nvidia

DeepFaceLab's People

Contributors

andenixa, andy-ger, auroir, cclauss, christopherta54321, fakerdaker, geekjosh, imiyou, iperov, jakob6174, kcimit, lbfs, maksv79, nemirovd, niemenjoki, pluckypan, sergeevii123, toomuchfun, vxfemboy


DeepFaceLab's Issues

File Exists Error

D:\DeepFaceLabTorrent>"6) train MIAEF128 best GPU.bat"
Running trainer.

Loading model...
Loading: 100%|███████████████████████████████████████████████████████████████████| 2828/2828 [00:01<00:00, 2299.73it/s]
Loading: 100%|████████████████████████████████████████████████████████████████████| 7456/7456 [00:08<00:00, 919.10it/s]
===== Model summary =====
== Model name: MIAEF128

== Current epoch: 0

== Options:
== |== batch_size : 24
== |== multi_gpu : False
== |== created_vram_gb : 11.0
== Running on:
== |== [0 : GeForce GTX 1080 Ti]

Saving...
Starting. Press "Enter" to stop training and save model.
Saving...[#515][1074ms] loss_src:0.073 loss_dst:0.065
Saving...[#1027][1085ms] loss_src:0.066 loss_dst:0.059
Saving...[#1536][1092ms] loss_src:0.060 loss_dst:0.048
Saving...[#2046][1074ms] loss_src:0.052 loss_dst:0.044
Saving...[#2556][1079ms] loss_src:0.055 loss_dst:0.039
Error: [WinError 183] Cannot create a file when that file already exists: 'D:\DeepFaceLabTorrent\workspace\model\MIAEF128_decoderCommonB.h5.tmp' -> 'D:\DeepFaceLabTorrent\workspace\model\MIAEF128_decoderCommonB.h5'
Traceback (most recent call last):
File "D:\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Trainer.py", line 84, in trainerThread
model_save()
File "D:\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Trainer.py", line 47, in model_save
model.save()
File "D:\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ModelBase.py", line 221, in save
self.onSave()
File "D:\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\Model_MIAEF128\Model.py", line 100, in onSave
[self.inter_B, self.get_strpath_storage_for_file(self.inter_BH5)]] )
File "D:\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ModelBase.py", line 245, in save_weights_safe
source_filename.rename ( str(target_filename) )
File "pathlib.py", line 1307, in rename
File "pathlib.py", line 393, in wrapped
FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'D:\DeepFaceLabTorrent\workspace\model\MIAEF128_decoderCommonB.h5.tmp' -> 'D:\DeepFaceLabTorrent\workspace\model\MIAEF128_decoderCommonB.h5'
Press any key to continue . . .
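
The traceback points at pathlib.Path.rename, which on Windows raises FileExistsError when the target file already exists. A minimal sketch of the usual fix for a write-temp-then-swap save like the one in the log (illustrative names, not DFL's actual code; os.replace overwrites atomically on both Windows and POSIX):

    import os
    from pathlib import Path

    def save_weights_safe(write_fn, target_path):
        """Write to a .tmp sibling first, then atomically swap it into place.
        os.replace overwrites an existing target, unlike Path.rename on Windows."""
        target = Path(target_path)
        tmp = target.with_suffix(target.suffix + '.tmp')
        write_fn(str(tmp))                  # e.g. model.save_weights(tmp)
        os.replace(str(tmp), str(target))   # no FileExistsError on rerun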

Question for avatar model

I tried some training with the avatar model, result here:
https://youtu.be/Rk5um-dtqQc

Have you planned any further development to convert the swapped face back into the original frames/video? Is that even possible at the moment, since the swapped face has different head movement than the original face?

Regards

Full Face vs. Half Face

Expected behavior

The main Python script says "Default 'full_face'. Don't change this option, currently all models uses 'full_face'", so when training the H128 model I expect the full face to be used.

Actual behavior

It seems only a 'half face' is used.
README.md says "DF (5GB+) - @dfaker model. As H128, but fullface model.", which indicates there is some difference between those two models.

Steps to reproduce

Train H128 model with default settings.

Extraction not working with RTX card

Extracting faces was working fine with my old 1080, but not with the 2080 Ti.

If I run either dlib or MT extraction, it fails with this message:

"Running on GeForce RTX 2080 Ti.
You have no capable GPUs. Try to close programs which can consume VRAM, and run again."

With dlib I get a bit more info:

"Exception while initialization: Traceback (most recent call last):
File "L:\Archives\deepfake\Peri\vengance_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 215, in subprocess
fail_message = self.onClientInitialize(client_dict)
File "L:\Archives\deepfake\Peri\vengance_internal\bin\DeepFaceLab\mainscripts\Extractor.py", line 249, in onClientInitialize
self.dlib = gpufmkmgr.import_dlib( self.device_idx )
File "L:\Archives\deepfake\Peri\vengance_internal\bin\DeepFaceLab\gpufmkmgr\gpufmkmgr.py", line 14, in import_dlib
import dlib
ImportError: DLL load failed: The specified module could not be found."

No problem training or converting, just extracting.

2DFAN-4.h5

Hi,
thanks for your work!
How can I get the 2DFAN-4.h5 file?

No training data provided

Expected behavior

A: 745 png
B: 552 png

python main.py train --training-data-src-dir A\ --training-data-dst-dir B\ --model-dir M\ --model LIAEF128

Actual behavior

00747.png - no embedded faceswap info found required for training
Loading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 745/745 [00:01<00:00, 509.29it/s]
Loading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 552/552 [00:00<00:00, 569.36it/s]
Traceback (most recent call last):
File "C:\Python36\lib\multiprocessing\process.py", line 258, in _bootstrap
self.run()
File "C:\Python36\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "D:\AI\fake\DeepFaceLab-master\utils\iter_utils.py", line 39, in process_func
gen_data = next (self.generator_func)
File "D:\AI\fake\DeepFaceLab-master\models\TrainingDataGeneratorBase.py", line 73, in batch_func
raise ValueError('No training data provided.')
ValueError: No training data provided.

Steps to reproduce

1. python main.py extract --input-dir input --output-dir output --detector mt
   copy output/*.png to sort/*.png
2. python main.py sort --input-dir sort --by hist-blur
   delete some blurry faces
   copy sort/*.png to A/*.png
3. python main.py train --training-data-src-dir A\ --training-data-dst-dir B\ --model-dir M\ --model LIAEF128

Other relevant information

  • Command lined used (if not specified in steps to reproduce): main.py ...
  • Operating system and version: Windows
  • Python version: 3.6
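
The "no embedded faceswap info found" line suggests the alignment metadata that extract embeds into each PNG was stripped somewhere during the copy/sort steps (tools that re-encode PNGs drop ancillary chunks). A generic chunk lister to compare a freshly extracted file against a copied one (this does not assume DFL's specific chunk name):

    import struct
    import sys

    def png_chunk_types(path):
        """List chunk types in a PNG (8-byte signature, then
        length/type/data/CRC chunks, per the PNG spec)."""
        types = []
        with open(path, 'rb') as f:
            assert f.read(8) == b'\x89PNG\r\n\x1a\n', 'not a PNG file'
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, ctype = struct.unpack('>I4s', header)
                types.append(ctype.decode('ascii'))
                f.seek(length + 4, 1)  # skip chunk data + CRC
                if ctype == b'IEND':
                    break
        return types

    if __name__ == '__main__':
        print(png_chunk_types(sys.argv[1]))

If the copied files show fewer chunks than the originals, something in the pipeline re-encoded them and the faces need to be re-extracted.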

How can I swap a face directly in a video, and do I need to train the model from scratch if I want to swap faces from dst to src? Thanks~


No module named 'keras_contrib'

Expected behavior

Train success

Actual behavior

Using TensorFlow backend.
Error: No module named 'keras_contrib'
Traceback (most recent call last):
File "D:\AI\fake\DeepFaceLab-master\mainscripts\Trainer.py", line 41, in trainerThread
**in_options)
File "D:\AI\fake\DeepFaceLab-master\models\ModelBase.py", line 106, in init
self.keras_contrib = gpufmkmgr.import_keras_contrib()
File "D:\AI\fake\DeepFaceLab-master\gpufmkmgr\gpufmkmgr.py", line 107, in import_keras_contrib
import keras_contrib
ModuleNotFoundError: No module named 'keras_contrib'

Steps to reproduce

keras is already installed and imports fine, but keras_contrib cannot be found (it is a separate package).

Other relevant information

  • Command lined used (if not specified in steps to reproduce): python main.py train --training-data-src-dir A --training-data-dst-dir B --model-dir M --model DF
  • Operating system and version: Windows
  • Python version: 3.6.4

Editing landmarks

First, thanks for doing a great job.

I have tested several models, and some of the results are mindblowing.

I notice that it all boils down to how accurately the landmarks are placed. I have used the manual tool on all images, which greatly improved results on difficult angles.

But it would be very helpful to be able to manually edit landmarks point by point when they are misaligned. Also, it would be great if it were possible to reload a manual edit session instead of having to start all over every time. Maybe I just misunderstood how it works :)

AVATAR conversion throws error

Hey, I get the following error when trying to convert an AVATAR model trained on 1500 and 3000 face images respectively:

Exception while process data [undefined]: Traceback (most recent call last):
  File "C:\deepfakes\DeepFaceLab_06_07\_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 232, in subprocess
    result = self.onClientProcessData (data)
  File "C:\deepfakes\DeepFaceLab_06_07\_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 150, in onClientProcessData
    image = self.converter.convert_image(image, image_landmarks, self.debug)
  File "C:\deepfakes\DeepFaceLab_06_07\_internal\bin\DeepFaceLab\models\Model_AVATAR\Model.py", line 244, in convert_image
    face_mat            = LandmarksProcessor.get_transform_mat (img_face_landmarks, self.predictor_input_size, face_type=FaceType.HALF )
  File "C:\deepfakes\DeepFaceLab_06_07\_internal\bin\DeepFaceLab\facelib\LandmarksProcessor.py", line 58, in get_transform_mat
    mat = umeyama(image_landmarks[17:], landmarks_2D, True)[0:2]
IndexError: too many indices for array

The error is thrown multiple times; I have 40 CPUs/threads working on the conversion in parallel.

I'm using the binaries from https://rutracker.org/forum/viewtopic.php?p=75318742, 6.7.2018.

How should I proceed?
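
image_landmarks[17:] assumes a full (68, 2) landmark array; when a frame's landmarks come back empty or malformed, the slice raises exactly this "too many indices for array" IndexError. A hedged guard one could place before the transform (illustrative only, not the project's code):

    import numpy as np

    def inner_face_landmarks(image_landmarks):
        """Return the 51 inner-face points used for alignment,
        or None when the array is not the expected (68, 2) shape."""
        if image_landmarks is None:
            return None
        lm = np.asarray(image_landmarks)
        if lm.ndim != 2 or lm.shape[0] != 68 or lm.shape[1] != 2:
            return None  # caller skips the frame instead of crashing
        return lm[17:]   # points 17..67: eyebrows, nose, eyes, mouth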

Upside down side faces

Sometimes, for some reason, the dlib extractor puts the face upside down. It does this for some side faces; I guess it depends on the exact angle. I get things like this:

(example image: ginny00521_0)

Question: should I keep them for conversion? I understand deleting them from the training set, but at convert time, will they be converted correctly?

Model training stagnates and other error messages

Openfaceswap model training issues

LowMem training, batch size 64: ran for the first 23 hours, paused, then ran 8 more hours.
Still running at this time.


Expected behavior

I'm trying to train a model that will be used by the OpenFaceSwap program to convert faces in a video. The loss value should decrease over training time, which means a better model and a better-quality faceswap.

Actual behavior

The loss value for both faces stagnates, even after 23 + 8 hours of training, and I've got some error messages about keepdims being deprecated and about memory that could be used better, but I don't know how to do that.

Steps to reproduce

For the memory, I tried the Original model training with batch sizes 64, 32, 16, 8, 4, and 2, but I got OOMed every time; LowMem model training works fine with batch size 64. I don't know what keepdims is at all, and for the loss value I can't do anything except wait.


Other relevant information

  • Windows 10, 64-bit

Specs:

  • GPU: Nvidia GeForce GTX 1050 Ti, 4 GB
  • 16 GB RAM (usage never goes over 7 GB of 16 GB, I don't know why; maybe this influences the model training!)
  • Intel 7500-something CPU

Useful images + log
log training model faceswap lowmem.txt
openfaceswap model train settings
memory usage
gpu usage


Sorry for the French text in the screenshots, but I think it's understandable. If help is needed, I can translate it.

Add option of extraction and conversion on CPU

Expected behavior

I am trying to extract and convert on my local CPU and do the training on a GPU cloud, since moving a large number of images to the GPU cloud costs time each time I want to convert a new clip.

Actual behavior

Currently I am working on a regular macOS machine without any Nvidia GPUs, so running extract causes an "NVML Shared Library Not Found" error. Is it possible to add a CPU-only option for extract and convert?
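
Until a CPU flag exists, a generic TensorFlow workaround is to hide all GPUs before TensorFlow is first imported. Whether this also bypasses DeepFaceLab's own NVML probe depends on the build, so treat it as a sketch:

    import os

    # Must run before TensorFlow is imported anywhere in the process.
    os.environ['CUDA_VISIBLE_DEVICES'] = ''

    import tensorflow as tf  # enumerates no GPUs, falls back to CPU kernels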

Step 8 - Convert to AVI or MP4

After completing steps 1-7, I have a folder called Merged with frames of SRC overlaid onto frames of the DST video.

However, when running either of the step 8 commands to convert to AVI or MP4, the Command Prompt window pops up for a split second and then closes.

Any idea why this is?
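
A Command Prompt window that closes instantly usually means the .bat script hit an error; launching it from an already-open cmd window keeps the message on screen. As a cross-check, the MP4 step boils down to an ffmpeg call over the Merged frames, roughly like this (the frame pattern and rate are assumptions, adjust to your workspace):

    import subprocess

    subprocess.run([
        'ffmpeg',
        '-r', '25',                          # frame rate of the source clip (assumption)
        '-i', r'workspace\merged\%05d.png',  # numbered frames from the Merged folder (assumption)
        '-c:v', 'libx264',
        '-pix_fmt', 'yuv420p',               # widest player compatibility
        'result.mp4',
    ], check=True)                           # check=True keeps failures visible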

Error with operands cannot be broadcasted

Super excited to try this one out. I just set up everything as described in the docs. Once I did, I started by extracting my own face.

But after 1 hour of extracting (passing the first 2 steps), I get this error on the third pass:

ValueError: operands could not be broadcast together with shapes (5,2) (20,2)

Here is a screenshot of the error: https://d.pr/free/i/cFqKtL

Any idea what could be the reason for this?

I am running this on Tesla K80 GPU. Ubuntu 16.04 LTS.

Update: I created a new instance, re-installed all the scripts, and tried different images, but I still get the same error. I also tried both dlib and mt, and both show the same error. So any help would be greatly appreciated. Thanks in advance.

AttributeError: module 'tensorflow.python.ops.image_ops' has no attribute 'ssim'

Expected behavior

Training should work

Actual behavior

Training fails with this error:

Running trainer.

Loading model...
C:\Users\Singularity\AppData\Local\Programs\Python\Python36\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated
as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
2018-08-14 19:42:44.666474: I C:\tf_jenkins\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Using TensorFlow backend.
Error: module 'tensorflow.python.ops.image_ops' has no attribute 'ssim'
Traceback (most recent call last):
  File "Z:\DeepFaceLab\mainscripts\Trainer.py", line 41, in trainerThread
    **in_options)
  File "Z:\DeepFaceLab\models\ModelBase.py", line 108, in __init__
    self.onInitialize(**in_options)
  File "Z:\DeepFaceLab\models\Model_DF\Model.py", line 41, in onInitialize
    self.autoencoder_src.compile(optimizer=optimizer, loss=[dssimloss, 'mse'] )
  File "C:\Users\Singularity\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py", line 830, in compile
    sample_weight, mask)
  File "C:\Users\Singularity\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py", line 429, in weighted
    score_array = fn(y_true, y_pred)
  File "Z:\DeepFaceLab\nnlib\__init__.py", line 34, in __call__
    loss = (1.0 - tf.image.ssim (y_true*mask, y_pred*mask, 1.0)) / 2.0
AttributeError: module 'tensorflow.python.ops.image_ops' has no attribute 'ssim'
PS Z:\DeepFaceLab

Steps to reproduce

Latest repo; CUDA 9, everything updated, GTX 1080

Other relevant information

Windows 10
python 3.6.4
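
tf.image.ssim was added around TensorFlow 1.8, so this error means the installed TensorFlow predates the loss the model wants to build; upgrading TensorFlow is the fix. For reference, the DSSIM loss at nnlib\__init__.py line 34 amounts to (a sketch, with images scaled to [0, 1]):

    import tensorflow as tf  # needs a TF version providing tf.image.ssim (~1.8+)

    def dssim_loss(y_true, y_pred):
        """Structural dissimilarity: DSSIM = (1 - SSIM) / 2."""
        return (1.0 - tf.image.ssim(y_true, y_pred, max_val=1.0)) / 2.0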

Missing binary

It says there is a prebuilt standalone Windows binary, but after downloading the zip there is no Windows binary anywhere. There is a link to a website at the bottom, but unfortunately it is in Russian.

Converting does not swap the (detected) faces.

Expected behavior

Converting frames with a pretrained model should result in output frames with faces swapped (or at least modified, depending on the quality of the trained network) for all frames where a face was detected.

Actual behavior

I have a pretrained H128 model. I have some new images from a video and extract them first. Around half of the frames have a face detected, which is ok. When trying to convert the frames, the output frames do not have the face swapped. The face region in the output frame looks completely untouched.

Steps to reproduce

  1. Pretrained H128 model available.
  2. I downloaded a youtube video:
    youtube-dl https://www.youtube.com/watch?v=g8nkbusv5sY
  3. Extract frames:
    ffmpeg -i input.mkv frame_%06d.jpg
  4. Align
    python main.py extract --input-dir /mnt/in --output-dir /mnt/in/aligned --detector mt
  5. Convert:
    python main.py convert --model-dir models --model H128 --input-dir /mnt/in --output-dir /mnt/out --mode seamless --aligned-dir /mnt/in/aligned

Other relevant information

System: Ubuntu 16.04, requirements installed as described in Linux.md

Noise appeared on white background when using seamless converter

I use LIAEF128 for training; with the seamless converter, noise appears on the white background.

sample1

https://gyazo.com/2946e4665176d4212078688c78836184

sample2

https://gyazo.com/29d965c2702c933a46463044ee241d20

convert command:

$ python3 main.py convert --input-dir ./outputs/targets --output-dir outputs/out03_seamless --aligned-dir ./outputs/aligned_targets --model-dir workspace/model --model LIAEF128 --ask-for-params
Choose mode: (1) hist match, (2) hist match bw, (3) seamless (default), (4) seamless hist match : 3
Choose erode mask modifier [-100..100] (default 0) :
Choose blur mask modifier [-100..200] (default 0) :
Export png with alpha channel? [0..1] (default 0) :
Transfer color from original DST image? [0..1] (default 0) :
Running converter.

When I use the hist converter, the noise does not appear, but I want to use the seamless converter because of its quality.

Could you tell me how to remove the noise when converting seamlessly, or which parameters to change?

mask appear at the test output video

Expected behavior

I am using the trained model to output a faceswapped video, but the video keeps the mask at the bottom. Could you tell me where to uncomment or modify the code in order to drop the mask? Thanks~

is there any way to have the dfaker model work with a 4GB GPU?


Model won't train with tensorflow 1.11.0

I am receiving this error message:
"terminate called after throwing an instance of 'std::bad_alloc'"

This happened after I upgraded TensorFlow to 1.11.0 and Keras to 2.2.4. I upgraded to solve this error: "importerror: cannot import name 'normalize_data_format'"
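
That import error is a known Keras 2.2.x breakage: normalize_data_format moved from keras.utils.conv_utils to keras.backend, while older keras_contrib builds still import it from the old location. Pinning matching keras/keras_contrib versions is the clean fix; a commonly posted shim (assuming your keras_contrib is an unpatched build) looks like:

    import keras.backend
    import keras.utils.conv_utils

    # Re-expose the helper where old keras_contrib expects to find it.
    if not hasattr(keras.utils.conv_utils, 'normalize_data_format'):
        keras.utils.conv_utils.normalize_data_format = keras.backend.normalize_data_format

    import keras_contrib  # import only after the patch is applied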

Landmarks get corrupted after holding '.' or ',' in the manual fix extractor

Expected behavior

It advances a lot of frames when holding the '.' key

Actual behavior

It advances a lot of frames, but past frames' landmarks get corrupted

Steps to reproduce

Use the manual fix option and hold '.' on the keyboard, then go back a few frames to see the landmark corruption

Other relevant information

I tried to fix it with a mutex, but I cannot figure out how.
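
For reference, the usual shape of that mutex fix is to route every read and write of the shared per-frame landmark store through a single lock, so the key handler can never observe a half-written entry. A generic sketch (the names are hypothetical, not the extractor's actual structures):

    import threading

    landmarks_lock = threading.Lock()
    frame_landmarks = {}  # frame index -> landmark array (shared state)

    def store_landmarks(idx, lm):
        # Writer (extraction worker): take the lock for every mutation.
        with landmarks_lock:
            frame_landmarks[idx] = lm

    def read_landmarks(idx):
        # Reader (UI thread handling '.'/',' navigation): same lock.
        with landmarks_lock:
            return frame_landmarks.get(idx)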

convert(seamless) has Exception

Expected behavior

Conversion completes successfully

Actual behavior

Converting: 35%|███████████████████████▌ | 208/601 [00:11<00:22, 17.57it/s]Exception while process data [undefined]: Traceback (most recent call last):
File "D:\AI\fake\DeepFaceLab-master\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\AI\fake\DeepFaceLab-master\mainscripts\Converter.py", line 159, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\AI\fake\DeepFaceLab-master\models\ConverterMasked.py", line 169, in convert_face
out_img = cv2.seamlessClone( (out_img255).astype(np.uint8), (img_bgr255).astype(np.uint8), (img_face_mask_flatten_aaa*255).astype(np.uint8), (masky,maskx) , cv2.NORMAL_CLONE )
cv2.error: OpenCV(3.4.3) C:\projects\opencv-python\opencv\modules\core\src\matrix.cpp:465: error: (-215:Assertion failed) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function 'cv::Mat::Mat'

Exception while process data [undefined]: Traceback (most recent call last):
File "D:\AI\fake\DeepFaceLab-master\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\AI\fake\DeepFaceLab-master\mainscripts\Converter.py", line 159, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\AI\fake\DeepFaceLab-master\models\ConverterMasked.py", line 169, in convert_face
out_img = cv2.seamlessClone( (out_img255).astype(np.uint8), (img_bgr255).astype(np.uint8), (img_face_mask_flatten_aaa*255).astype(np.uint8), (masky,maskx) , cv2.NORMAL_CLONE )
cv2.error: OpenCV(3.4.3) C:\projects\opencv-python\opencv\modules\core\src\matrix.cpp:465: error: (-215:Assertion failed) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function 'cv::Mat::Mat'

Steps to reproduce

convert(hist-match) is OK, but convert(seamless) and convert(seamless-hist-match) throw the exception at about 32% of processing (always on different src files).

python main.py convert --input-dir input\ --output-dir porn\ --aligned-dir sort\ --model-dir M\ --model LIAEF128 --mode seamless

Other relevant information

  • Command lined used (if not specified in steps to reproduce): python main.py convert --input-dir input\ --output-dir porn\ --aligned-dir sort\ --model-dir M\ --model LIAEF128 --mode seamless
  • Operating system and version: Windows
  • Python version: 3.6
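
The OpenCV assertion says the clone ROI (the mask's bounding box placed at the given center) fell partly outside the destination frame, which happens when the face sits near a frame edge. A hedged pre-clamp sketch, not DFL's code:

    import cv2
    import numpy as np

    def clamped_seamless_clone(src, dst, mask, center):
        """Clamp the clone center so the mask's bounding box stays inside dst;
        cv2.seamlessClone asserts that the ROI fits within the destination."""
        ys, xs = np.where(mask > 0)
        if len(xs) == 0:
            return dst  # empty mask: nothing to blend
        w = int(xs.max() - xs.min()) + 1
        h = int(ys.max() - ys.min()) + 1
        cx = int(np.clip(center[0], w // 2 + 1, dst.shape[1] - w // 2 - 1))
        cy = int(np.clip(center[1], h // 2 + 1, dst.shape[0] - h // 2 - 1))
        return cv2.seamlessClone(src, dst, mask, (cx, cy), cv2.NORMAL_CLONE)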

LIAEF208

Modified LIAEF128 -> LIAEF208:

  • now a 13×13 deepest layer (up from 8×8); at the net's overall ×16 upscale this gives a total resolution of 208×208 (versus 8×8 → 128×128)
  • reduced number of total filters (mostly in the decoder mask)
  • runs on a minimum 8 GB GPU at batch size 3
  • 180k+ iterations necessary

Results:
(sample image: LIAF208_2)

Benefits:

  • Teeth definition
  • Eye pupils mostly working

No clever engineering and unwieldy to run, but I can post the model if wanted.

Running extraction

When running option 3, or any option that uses the just-extracted data, it gives me an error that it cannot find the utils module.

I'm using Python 2.7 and have also tried downloading 3.6.6.

  File "C:/Python27/Scripts/DeepFaceLab-master/main.py", line 4, in <module>
    from utils import Path_utils
ImportError: No module named utils

It might have been asked before, but I cannot find it in the issues.

NVML shared directory not found

Hello,
I downloaded the prebuilt torrent and tried to run it. I was able to create frames, but I'm stuck at extracting faces.
It throws an error saying "NVML shared library not found". I have an Nvidia 1060 and installed CUDA 9.
I have tried OpenFaceSwap, and it worked with half face, but I need to model using the DFaker full face. Please help.
Thanks.

You are trying to load a weight file containing 1 layers into a model with 9 layers.

Ubuntu 16.04

avatar

Using TensorFlow backend.
Error: You are trying to load a weight file containing 1 layers into a model with 9 layers.
Traceback (most recent call last):
File "/home/oracle/DeepFaceLab/mainscripts/Trainer.py", line 41, in trainerThread
**in_options)
File "/home/oracle/DeepFaceLab/models/ModelBase.py", line 108, in init
self.onInitialize(**in_options)
File "/home/oracle/DeepFaceLab/models/Model_AVATAR/Model.py", line 31, in onInitialize
self.encoder64.load_weights (self.get_strpath_storage_for_file(self.encoder64H5))
File "/home/oracle/anaconda3/envs/deepfacelab/lib/python3.6/site-packages/keras/engine/topology.py", line 2667, in load_weights
f, self.layers, reshape=reshape)
File "/home/oracle/anaconda3/envs/deepfacelab/lib/python3.6/site-packages/keras/engine/topology.py", line 3365, in load_weights_from_hdf5_group
str(len(filtered_layers)) + ' layers.')
ValueError: You are trying to load a weight file containing 1 layers into a model with 9 layers.

Quick question

Hi! Before I leave my hardware running for a couple of days to process a video, I'd like to ask: your build works better than https://github.com/deepfakes/faceswap, right? I saw your example video and there are none of the blocky artifacts on the face that faceswap produces.

How to improve the resolution of faces and models

Thanks to the author for providing us with excellent software. Two questions:
1. How to increase the faces of data_dst and data_src from 256×256 to 512×512.
2. How to increase the trained DF model from 128 to 256, or from 100 to 200.
The goal is to double the resolution and improve clarity.
Thank you

Crash while extracting with full_face and --manual-fix

C:\Users\Kirin>python c:\users\kirin\deepfacelab\main.py extract --input-dir H:\fakes\harper --output-dir h:\fakes\harper\aligned.ip --detector dlib --face-type full_face --manual-fix
Running extractor.

Performing 1st pass...
Running on GeForce GTX 1060 6GB.
100%|██████████████████████████████████████████| 64/64 [00:11<00:00,  5.60it/s]
Performing 2nd pass...
Running on GeForce GTX 1060 6GB.
C:\Program Files\Python36\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
2018-06-05 20:43:25.681946: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7715
pciBusID: 0000:01:00.0
totalMemory: 6.00GiB freeMemory: 5.58GiB
2018-06-05 20:43:25.690947: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-05 20:43:27.605056: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-05 20:43:27.611057: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929]      0
2018-06-05 20:43:27.616057: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0:   N
2018-06-05 20:43:27.621057: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5365 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
Using TensorFlow backend.
100%|██████████████████████████████████████████| 64/64 [00:12<00:00, 10.21it/s]
Performing manual fix...
Running on GeForce GTX 1060 6GB.
C:\Program Files\Python36\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
2018-06-05 20:43:57.946792: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7715
pciBusID: 0000:01:00.0
totalMemory: 6.00GiB freeMemory: 5.58GiB
2018-06-05 20:43:57.957792: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-05 20:43:58.596829: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-05 20:43:58.603829: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929]      0
2018-06-05 20:43:58.607830: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0:   N
2018-06-05 20:43:58.612830: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5366 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
Using TensorFlow backend.
 88%|████████████████████████████████████▌     | 56/64 [00:46<00:44,  5.53s/it]Exception while process data [H:\fakes\harper\279209_09big.jpg]: Traceback (most recent call last):
  File "c:\users\kirin\deepfacelab\utils\SubprocessorBase.py", line 232, in subprocess
    result = self.onClientProcessData (data)
  File "c:\users\kirin\deepfacelab\mainscripts\Extractor.py", line 225, in onClientProcessData
    landmarks = self.e.extract_from_bgr (image, rects)
  File "c:\users\kirin\deepfacelab\facelib\LandmarksExtractor.py", line 123, in extract_from_bgr
    image = crop(input_image, center, scale).transpose ( (2,0,1) ).astype(np.float32) / 255.0
  File "c:\users\kirin\deepfacelab\facelib\LandmarksExtractor.py", line 36, in crop
    newImg[newY[0] - 1:newY[1], newX[0] - 1:newX[1] ] = image[oldY[0] - 1:oldY[1], oldX[0] - 1:oldX[1], :]
ValueError: could not broadcast input array from shape (184,0,3) into shape (184,167,3)

This happened with a face that was not upright. I had just validated a manual fix, and it never processed the next image.

Pic set if you want to test : https://owncloud.dspnet.fr/index.php/s/DFRV1UTnrH2uLmA/download (NSFW)

It doesn't crash without --manual-fix, but then it skips the pics where the face is at a 90° angle.
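
The broadcast error shows the crop window ended up with zero width in the source image: for a sideways face the computed window falls partly outside the frame. A generic clamp that keeps the source and destination slices the same shape (illustrative, not the extractor's code):

    import numpy as np

    def safe_crop(image, x0, x1, y0, y1):
        """Clamp a crop window to the image bounds; returns None when the
        window lies entirely outside the frame."""
        h, w = image.shape[:2]
        x0, x1 = max(0, x0), min(w, x1)
        y0, y1 = max(0, y0), min(h, y1)
        if x1 <= x0 or y1 <= y0:
            return None
        return image[y0:y1, x0:x1]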

Please update to support CUDA 10 and CUDNN 7.4.1

I own an RTX 2080 and I am unable to extract using my GPU. After trying numerous things on numerous configurations and operating systems, and after reviewing the Nvidia documentation, I've come to the conclusion that the versions currently required by DeepFaceLab are simply incompatible with Turing GPUs. I have no trouble using DeepFaceLab with my GTX 1070 on the same configurations.
This link explains it: https://docs.nvidia.com/deeplearning/sdk/cudnn-support-matrix/index.html

The good news is that at least old cards will still work if you update for the new hardware.

What are the full names of LIAEF and MIAEF?

Hi, I'm an AI researcher too. I'm very curious about the full names of LIAEF and MIAEF.
As far as I know, AE means autoencoder; does IAE refer to Implicit Autoencoders?
Also, what do the letters L, M, and F stand for?

Thanks for the reply~

Converting stalls when using "Transfer Color from original image"

I've tried this with the DF converter and the MIAEF128 converter. When I select color transfer, the converter stalls and converts only the first handful of images. Then I get a large error printout, which I'll paste below. I am using a 1080 Ti with driver version 417.22. My CPU is a Ryzen 7 1800X. When I do not select the color transfer option, my conversions succeed.

Has anyone else run into this issue? I have found that when using the DF model this feature is very nice for transferring makeup. Hopefully I can get this working as intended.

=========================================
Choose mode: (1) hist match, (2) hist match bw, (3) seamless (default), (4) seamless hist match : 1
Masked hist match? [0..1] (default - model choice) : 1
Choose erode mask modifier [-100..100] (default 0) : 20
Choose blur mask modifier [-100..200] (default 0) : 15
Choose output face scale modifier [-50..50] (default 0) : 20
Transfer color from original DST image? [0..1] (default 0) : 1
Degrade color power of final image [0..100] (default 0) : 15
Export png with alpha channel? [0..1] (default 0) : 0
Running converter.

Loading model...
===== Model summary =====
== Model name: DF

== Current epoch: 266132

== Options:
== |== batch_size : 32
== |== multi_gpu : False
== |== created_vram_gb : 11.0
== Running on:
== |== [0 : GeForce GTX 1080 Ti]

Collecting alignments: 100%|███████████████████████████████████████████████████| 18670/18670 [00:06<00:00, 3071.04it/s]
Running on CPU0.
Running on CPU1.
Running on CPU2.
Running on CPU3.
Running on CPU4.
Running on CPU5.
Running on CPU6.
Running on CPU7.
Running on CPU8.
Running on CPU9.
Running on CPU10.
Running on CPU11.
Running on CPU12.
Running on CPU13.
Running on CPU14.
Running on CPU15.
Converting: 0%| | 0/25251 [00:00<?, ?it/s]no faces found for 00114.png, copying without faces
no faces found for 00115.png, copying without faces
no faces found for 00116.png, copying without faces
no faces found for 00117.png, copying without faces
2018-12-08 00:50:45.675791: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.675793: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.678253: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.678255: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.678267: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.675825: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
Exception while process data [undefined]: Traceback (most recent call last):
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1292, in _do_call
return fn(*args)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1277, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1367, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/truediv}} = Mul[T=DT_FLOAT, _grappler:ArithmeticOptimizer:MinimizeBroadcasts=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ConstantFolding/rgb_to_lab/srgb_to_xyz/truediv_recip, rgb_to_lab/Reshape)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 207, in convert_face
img_lab_l, img_lab_a, img_lab_b = np.split ( self.TFLabConverter.bgr2lab (img_bgr), 3, axis=-1 )
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 296, in bgr2lab
return self.tf_session.run(self.lab_output_tensor, feed_dict={self.bgr_input_tensor: bgr})
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 887, in run
run_metadata_ptr)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1110, in _run
feed_dict_tensor, options, run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1286, in _do_run
run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1308, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/truediv}} = Mul[T=DT_FLOAT, _grappler:ArithmeticOptimizer:MinimizeBroadcasts=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ConstantFolding/rgb_to_lab/srgb_to_xyz/truediv_recip, rgb_to_lab/Reshape)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Caused by op 'rgb_to_lab/srgb_to_xyz/truediv', defined at:
File "", line 1, in
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 118, in _main
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 205, in convert_face
self.TFLabConverter = image_utils.TFLabConverter()
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 291, in init
self.lab_output_tensor = self.rgb_to_lab(self.tf_module, self.bgr_input_tensor)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 308, in rgb_to_lab
rgb_pixels = (srgb_pixels / 12.92 * linear_mask) + (((srgb_pixels + 0.055) / 1.055) ** 2.4) * exponential_mask
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\math_ops.py", line 874, in binary_op_wrapper
return func(x, y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\math_ops.py", line 970, in _truediv_python3
return gen_math_ops.real_div(x, y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6370, in real_div
"RealDiv", x=x, y=y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 3272, in create_op
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 1768, in init
self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/truediv}} = Mul[T=DT_FLOAT, _grappler:ArithmeticOptimizer:MinimizeBroadcasts=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ConstantFolding/rgb_to_lab/srgb_to_xyz/truediv_recip, rgb_to_lab/Reshape)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Exception while process data [undefined]: Traceback (most recent call last):
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1292, in _do_call
return fn(*args)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1277, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1367, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/Greater}} = Greater[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 207, in convert_face
img_lab_l, img_lab_a, img_lab_b = np.split ( self.TFLabConverter.bgr2lab (img_bgr), 3, axis=-1 )
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 296, in bgr2lab
return self.tf_session.run(self.lab_output_tensor, feed_dict={self.bgr_input_tensor: bgr})
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 887, in run
run_metadata_ptr)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1110, in _run
feed_dict_tensor, options, run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1286, in _do_run
run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1308, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/Greater}} = Greater[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Caused by op 'rgb_to_lab/srgb_to_xyz/Greater', defined at:
File "", line 1, in
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 118, in _main
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 205, in convert_face
self.TFLabConverter = image_utils.TFLabConverter()
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 291, in init
self.lab_output_tensor = self.rgb_to_lab(self.tf_module, self.bgr_input_tensor)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 307, in rgb_to_lab
exponential_mask = tf.cast(srgb_pixels > 0.04045, dtype=tf.float32)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 3426, in greater
"Greater", x=x, y=y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 3272, in create_op
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 1768, in init
self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/Greater}} = Greater[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

2018-12-08 00:50:45.820336: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.820376: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.820360: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.820386: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.845414: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
Exception while process data [undefined]: Traceback (most recent call last):
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1292, in _do_call
return fn(*args)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1277, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1367, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/add}} = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/add/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 207, in convert_face
img_lab_l, img_lab_a, img_lab_b = np.split ( self.TFLabConverter.bgr2lab (img_bgr), 3, axis=-1 )
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 296, in bgr2lab
return self.tf_session.run(self.lab_output_tensor, feed_dict={self.bgr_input_tensor: bgr})
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 887, in run
run_metadata_ptr)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1110, in _run
feed_dict_tensor, options, run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1286, in _do_run
run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1308, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/add}} = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/add/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Caused by op 'rgb_to_lab/srgb_to_xyz/add', defined at:
File "", line 1, in
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 118, in _main
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 205, in convert_face
self.TFLabConverter = image_utils.TFLabConverter()
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 291, in init
self.lab_output_tensor = self.rgb_to_lab(self.tf_module, self.bgr_input_tensor)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 308, in rgb_to_lab
rgb_pixels = (srgb_pixels / 12.92 * linear_mask) + (((srgb_pixels + 0.055) / 1.055) ** 2.4) * exponential_mask
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\math_ops.py", line 874, in binary_op_wrapper
return func(x, y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 311, in add
"Add", x=x, y=y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 3272, in create_op
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 1768, in init
self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/add}} = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/add/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu

2018-12-08 00:50:45.845425: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.845443: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
2018-12-08 00:50:45.845460: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[408960,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
Exception while process data [undefined]: Traceback (most recent call last):
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1292, in _do_call
return fn(*args)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1277, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1367, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/LessEqual}} = LessEqual[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 207, in convert_face
img_lab_l, img_lab_a, img_lab_b = np.split ( self.TFLabConverter.bgr2lab (img_bgr), 3, axis=-1 )
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 296, in bgr2lab
return self.tf_session.run(self.lab_output_tensor, feed_dict={self.bgr_input_tensor: bgr})
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 887, in run
run_metadata_ptr)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1110, in _run
feed_dict_tensor, options, run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1286, in _do_run
run_metadata)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\client\session.py", line 1308, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/LessEqual}} = LessEqual[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Caused by op 'rgb_to_lab/srgb_to_xyz/LessEqual', defined at:
File "", line 1, in
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 118, in _main
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Converter.py", line 170, in onClientProcessData
image = self.converter.convert_face(image, image_landmarks, self.debug)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\models\ConverterMasked.py", line 205, in convert_face
self.TFLabConverter = image_utils.TFLabConverter()
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 291, in init
self.lab_output_tensor = self.rgb_to_lab(self.tf_module, self.bgr_input_tensor)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\image_utils.py", line 306, in rgb_to_lab
linear_mask = tf.cast(srgb_pixels <= 0.04045, dtype=tf.float32)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 4336, in less_equal
"LessEqual", x=x, y=y, name=name)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 3272, in create_op
op_def=op_def)
File "D:\DeepFaceLab_build_02_12_2018\DeepFaceLabTorrent_internal\bin\lib\site-packages\tensorflow\python\framework\ops.py", line 1768, in init
self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[408960,3] and type bool on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node rgb_to_lab/srgb_to_xyz/LessEqual}} = LessEqual[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rgb_to_lab/Reshape, rgb_to_lab/srgb_to_xyz/LessEqual/y)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
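
The hint at the end of the log is actionable: this build ships TensorFlow 1.x, where you can pass report_tensor_allocations_upon_oom through RunOptions so that an OOM error also lists the tensors that were alive at the time. A minimal sketch, using a stand-in graph rather than DeepFaceLab's real converter tensors:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x, as bundled with this build

# Stand-in graph; substitute the converter's real input/output tensors.
bgr_input = tf.placeholder(tf.float32, shape=[None, 3])
lab_like = tf.add(bgr_input, 0.055)

# Ask TensorFlow to report live allocations if this run hits an OOM.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

with tf.Session() as sess:
    out = sess.run(lab_like,
                   feed_dict={bgr_input: np.zeros((4, 3), np.float32)},
                   options=run_options)

This does not fix the OOM itself; the shape[408960,3] tensor suggests the converter is pushing a whole frame (e.g. 852 x 480 = 408960 pixels) through the Lab conversion at once, so reducing the frame resolution is the most direct workaround.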

`Error: Sorry, this model works only on 2GB+ GPU` on a GPU with 2 GB of dedicated VRAM

Expected behavior

Start training.

Actual behavior

The script shows this output:

Running trainer.

Loading model...
/home/giovanni/virtualenv/faceswap/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
2018-06-05 12:48:16.247626: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-06-05 12:48:16.644955: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-06-05 12:48:16.645801: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties: 
name: GeForce 840M major: 5 minor: 0 memoryClockRate(GHz): 1.124
pciBusID: 0000:01:00.0
totalMemory: 1.96GiB freeMemory: 1.80GiB
2018-06-05 12:48:16.645834: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-05 12:48:30.618264: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-05 12:48:30.618302: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929]      0 
2018-06-05 12:48:30.618312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0:   N 
2018-06-05 12:48:30.661743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1562 MB memory) -> physical GPU (device: 0, name: GeForce 840M, pci bus id: 0000:01:00.0, compute capability: 5.0)
Using TensorFlow backend.
Error: Sorry, this model works only on 2GB+ GPU
Traceback (most recent call last):
  File "/home/giovanni/git_repos/DeepFaceLab/mainscripts/Trainer.py", line 41, in trainerThread
    **in_options)
  File "/home/giovanni/git_repos/DeepFaceLab/models/ModelBase.py", line 108, in __init__
    self.onInitialize(**in_options)
  File "/home/giovanni/git_repos/DeepFaceLab/models/Model_H64/Model.py", line 18, in onInitialize
    self.set_vram_batch_requirements( {2:2,3:4,4:8,5:16,6:32,7:32,8:32,9:48} )
  File "/home/giovanni/git_repos/DeepFaceLab/models/ModelBase.py", line 323, in set_vram_batch_requirements
    raise Exception ('Sorry, this model works only on %dGB+ GPU' % ( keys[0] ) )
Exception: Sorry, this model works only on 2GB+ GPU

However, my GPU does have 2 GB of dedicated VRAM: it is an Nvidia GeForce 840M, and the script reports totalMemory: 1.96GiB, which is about 2.1 GB (GiB and GB differ by a factor of ~1.074), so it does meet the 2 GB mark. I suspect the problem is that only 1.80 GiB are free, but I have no idea how to free more memory. I don't know whether it is possible to drive my laptop's display from the integrated Intel GPU and reserve the Nvidia GPU exclusively for TensorFlow; I haven't found anything so far, but I'm still researching this.
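
For context, here is a hedged sketch of the kind of check that produces this message; it is a guess at the logic, not DeepFaceLab's actual code. If the detected VRAM, floored to whole gigabytes, falls below the smallest key in the requirements dict, the model refuses to start, so 1.80 GiB of free memory floors to 1 GB and fails the 2 GB minimum:

import math

def set_vram_batch_requirements(detected_vram_gb, requirements):
    # Hypothetical reconstruction: floor detected VRAM to whole GB
    # and compare it against the smallest supported tier.
    keys = sorted(requirements.keys())
    vram_gb = math.floor(detected_vram_gb)  # 1.80 -> 1
    if vram_gb < keys[0]:
        raise Exception('Sorry, this model works only on %dGB+ GPU' % keys[0])
    # Otherwise use the batch size of the largest tier that fits.
    return max(requirements[k] for k in keys if k <= vram_gb)

try:
    set_vram_batch_requirements(1.80, {2:2, 3:4, 4:8, 5:16, 6:32, 7:32, 8:32, 9:48})
except Exception as e:
    print(e)  # Sorry, this model works only on 2GB+ GPU

If that is what is happening, freeing VRAM (for example by driving the display from the integrated GPU, as suggested above) could push the floored value to 2 and let the model start.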

Steps to reproduce

$ python main.py train --training-data-src-dir ./data-old/person1 --training-data-dst-dir ./data-old/person2 --model-dir ./face-models --model H64 --write-preview-history --save-interval-min 30

Other relevant information

  • Operating system and version: elementary OS 0.4.1 Loki, Built on "Ubuntu 16.04.3 LTS", Linux 4.13.0-43-generic
  • Python version: 3.5.2

Why does the HALF-FACE model need a mask layer?

A mask seems necessary for full-face models (to exclude the background). But why do H64 and H128 need one? It is slow, it eats VRAM, and the mask layer does not noticeably improve quality.

Is it needed for the convert process?

I would appreciate it if someone could explain this.

Thank you

Question: Any flag option to sharpen on Step 7?

Is there any way to sharpen the converted mask prior to merging it with the destination video frames? OpenFaceSwap has an -sh flag on its convert.py script that sharpens the mask before merging it with the original frame and writing the result to a folder ahead of the final video output step. Sharpening greatly enhances the final output, especially around the eyebrows and eye details.
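
For reference, sharpening of this kind is typically an unsharp mask; a minimal OpenCV sketch follows (this is not an existing DeepFaceLab flag, and the file names are hypothetical):

import cv2

def unsharp(img, amount=0.5, sigma=3.0):
    # Unsharp mask: add back the difference between the image and a blur.
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)

frame = cv2.imread('merged_00001.png')  # a converted frame from the output folder
cv2.imwrite('merged_00001_sharp.png', unsharp(frame))

Running this over every converted frame before the final video step approximates what the -sh flag does in OpenFaceSwap.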

ValueError: operands could not be broadcast together with shapes

Expected behavior

Face data is extracted from data_src.mp4 without errors.

Actual behavior

Running extractor.

Performing 1st pass...
Running on GeForce GTX 1070.
100%|████████████████████████████████████████████████████████████████████████████████| 655/655 [01:37<00:00, 6.83it/s]
Performing 2nd pass...
Running on GeForce GTX 1070.
100%|████████████████████████████████████████████████████████████████████████████████| 655/655 [01:36<00:00, 6.82it/s]
Performing 3rd pass...
Running on CPU0.
Running on CPU1.
Running on CPU2.
Running on CPU3.
Running on CPU4.
Running on CPU5.
Running on CPU6.
Running on CPU7.
Running on CPU8.
Running on CPU9.
Running on CPU10.
Running on CPU11.
0%| | 0/655 [00:00<?, ?it/s]Exception while process data [D:\boba\DeepFaceLabTorrent\workspace\data_src\00005.png]: Traceback (most recent call last):
File "D:\boba\DeepFaceLabTorrent_internal\bin\DeepFaceLab\utils\SubprocessorBase.py", line 233, in subprocess
result = self.onClientProcessData (data)
File "D:\boba\DeepFaceLabTorrent_internal\bin\DeepFaceLab\mainscripts\Extractor.py", line 302, in onClientProcessData
facelib.LandmarksProcessor.draw_rect_landmarks (debug_image, rect, image_landmarks, self.image_size, self.face_type)
File "D:\boba\DeepFaceLabTorrent_internal\bin\DeepFaceLab\facelib\LandmarksProcessor.py", line 203, in draw_rect_landmarks
draw_landmarks(image, image_landmarks, (0,255,0) )
File "D:\boba\DeepFaceLabTorrent_internal\bin\DeepFaceLab\facelib\LandmarksProcessor.py", line 195, in draw_landmarks
for x, y in right_eyebrow+left_eyebrow+mouth+right_eye+left_eye+nose:
ValueError: operands could not be broadcast together with shapes (5,2) (20,2)

The same traceback then repeats for frames 00002.png through 00012.png, always with shapes (5,2) and (20,2); the duplicates are omitted here.
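
What the traceback shows: draw_landmarks combines the landmark groups with +. That concatenates Python lists, but if the groups are NumPy arrays of different lengths, + attempts element-wise addition instead and raises exactly the (5,2) vs (20,2) broadcast error above. A minimal illustration (not DeepFaceLab code):

import numpy as np

right_eyebrow = np.zeros((5, 2))  # 5 landmark points
mouth = np.zeros((20, 2))         # 20 landmark points

try:
    right_eyebrow + mouth  # element-wise addition, not concatenation
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (5,2) (20,2)

# Concatenation is presumably what the drawing code intends:
combined = np.concatenate([right_eyebrow, mouth], axis=0)  # shape (25, 2)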

Steps to reproduce

I ran the batch file for step 4) data_src extract faces DLIB all GPU debug.

Other relevant information

  • Operating system and version: Windows
  • Python version: 3.7
    Maybe my version of Python is too new?

Best GPU (device 1) is detected, but training still runs on the less powerful GPU (device 0)?

Expected behavior

I have a system with a GTX 1070 in PCIe slot 0 and a GTX 1080 Ti in PCIe slot 1. When I run the training batch files and select the "best" GPU (the GTX 1080 Ti), it is detected and the program reports that training is running on device 1 (the GTX 1080 Ti), but created_vram_gb is 8 GB rather than 11 GB, and training actually runs on the GTX 1070, with no GPU activity on the GTX 1080 Ti. Is there a way to force training onto GPU 1, for example with the --force-best-gpu-idx parameter?
Can CUDA_VISIBLE_DEVICES=1 force the use of GPU 1, and if so, in which script and on which line should it be set? (See the sketch at the end of this issue.)

Actual behavior

The best GPU is not used; training runs only on GPU device 0.

  • Operating system and version: Windows 10
  • Python version: 3.6.5
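
Regarding the CUDA_VISIBLE_DEVICES question above: it is a standard CUDA environment variable rather than a DeepFaceLab option, and it must be set before TensorFlow initializes. A minimal sketch:

import os

# Hide every GPU except physical device 1; must run before importing TF.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import tensorflow as tf
# Within this process the GTX 1080 Ti is now the only visible GPU,
# enumerated as /device:GPU:0.

On the Windows build the same effect can likely be achieved by adding `set CUDA_VISIBLE_DEVICES=1` to the training .bat file before the python call.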
