Comments (13)
Your GPU does not have enough memory. Try decreasing ENCODER_DIM to 512.
from faceswap.
I tried tuning ENCODER_DIM and BATCH_SIZE but did not find a working combination.
For ENCODER_DIM=64 and BATCH_SIZE=1 I get one valid output (0.15305635 0.11783895),
but then it just prints "save model weights"
and quits.
2018-01-28 20:05:43.939681: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
2018-01-28 20:05:44.539616: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Found device 0 with properties:
name: Quadro K2000M major: 3 minor: 0 memoryClockRate(GHz): 0.745
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 1.92GiB
2018-01-28 20:05:44.540210: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Quadro K2000M, pci bus id: 0000:01:00.0, compute capability: 3.0)
2018-01-28 20:05:47.504003: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.14GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:47.566201: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.07GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:47.603648: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:47.730635: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:47.838415: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.03GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:47.860781: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.07GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:48.012480: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.06GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:48.050733: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.14GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:48.051200: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:48.108983: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
0.15305635 0.11783895
save model weights
usage: faceswap.py [-h] {extract,train,convert} ...

positional arguments:
  {extract,train,convert}
    extract   Extract the faces from a pictures.
    train     This command trains the model for the two faces A and B.
    convert   Convert a source image to a new one with the face swapped.

optional arguments:
  -h, --help  show this help message and exit
from faceswap.
Setting batch size below 14 can break the preview window, which is hard-coded to display a grid of preview images. It's possible to edit that too and run batches of size 1, but it's kind of a pain.
How much memory does your GPU have? 64 is really low for ENCODER_DIM; you'll probably have a hard time getting good results.
from faceswap.
I did not enable the preview window.
It's just 2 GB of memory.
Just wanted to have a look at this!
from faceswap.
Did you figure it out? I have a 2GB card too and I'm hitting the ran-out-of-memory error as well.
from faceswap.
My GPU has 4GB of memory, but I hit the same error too.
from faceswap.
Had the same issue with a 4GB GPU. I edited ENCODER_DIM to 512 in model.py and it works for me. Iterations are 20x faster on GPU now.
see also #40 (comment)
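For a sense of why shrinking ENCODER_DIM helps so much: the two fully-connected layers around the bottleneck dominate the model's size. A rough back-of-the-envelope sketch; the 4x4x1024 flattened feature-map shape below is an assumption about the original model, not something stated in this thread:

```python
# Approximate weight count of the two dense layers around the bottleneck,
# assuming a 4x4x1024 feature map is flattened before the Dense(ENCODER_DIM).
def dense_params(encoder_dim, flat=4 * 4 * 1024):
    down = flat * encoder_dim + encoder_dim  # Dense(encoder_dim) + bias
    up = encoder_dim * flat + flat           # Dense back up to flat + bias
    return down + up

print(dense_params(1024))  # -> 33571840, about 33.6M weights
print(dense_params(512))   # -> 16794112, roughly halved
```

So halving ENCODER_DIM roughly halves the biggest layers, which is why it makes such a difference on a small card.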
from faceswap.
Also, there is a LowMem model plugin now. You can edit scripts/convert.py
and replace "Original" in the line variant = "Original"
with "LowMem". But be careful: since it has one layer fewer, your current model will not reload if you have already started training it!
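A sketch of that one-line edit; the variable name "variant" comes from this comment and its exact surroundings in scripts/convert.py may differ between versions:

```python
# In scripts/convert.py, switch the model plugin selection:
variant = "LowMem"  # was: variant = "Original"
print(variant)
```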
from faceswap.
I've got an NVIDIA GeForce 940MX with 2GB of dedicated VRAM.
I edited ENCODER_DIM in Model_LowMem.py to 128 (it sets the number of nodes in the bottleneck, I believe) and commented out this line (one of the network's conv layers):
#x = self.conv(512)(x)
It works for me.
It could be useful to add CLI parameters for ENCODER_DIM and the convolution layers.
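As a rough estimate of what commenting out that conv block saves, assuming faceswap-style 5x5 stride-2 conv blocks and a 256-to-512 filter jump (both are assumptions about the model, not stated in this thread):

```python
# Weight count of a single 5x5 conv block going from 256 to 512 filters.
kernel, in_ch, out_ch = 5, 256, 512
weights = kernel * kernel * in_ch * out_ch + out_ch  # kernel weights + biases
print(weights)  # -> 3277312, about 3.3M parameters removed
```

On top of the weights, dropping the deepest block also skips its activations, which is often the bigger VRAM win on a 2GB card.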
from faceswap.
I'm not much in favor of adding CLI params for that: if you later provide incorrect params, you may well end up with unusable model files. They are not parameters you can change freely from time to time; they are intrinsic to the model being trained.
from faceswap.
If you train with LowMem you will also need to use the LowMem model when converting; otherwise you will get a layer count mismatch.
I saw a TODO in the code, so I've put in a pull request to fix this. Hope that's OK; please just disregard it if not :)
from faceswap.
Any pull request is welcome ;-)
from faceswap.
@MarcSN311, has this solved your problem?
from faceswap.