Comments (13)

rifterbater commented on April 28, 2024

Your GPU does not have enough memory. Try decreasing ENCODER_DIM to 512.
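As a rough sketch of why ENCODER_DIM dominates memory use: in the original-style architecture the encoder flattens a convolutional feature map into Dense(ENCODER_DIM), and a second Dense expands it back, so both weight matrices scale linearly with ENCODER_DIM. The 4x4x1024 feature-map shape below is an assumption for illustration, not read from model.py:

```python
# Rough estimate of how ENCODER_DIM drives parameter count and VRAM.
# Assumes the encoder flattens a 4x4x1024 feature map into
# Dense(ENCODER_DIM) and expands back with Dense(4*4*1024).
# These shapes are assumptions, not taken from the repo.

def bottleneck_params(encoder_dim, flat=4 * 4 * 1024):
    down = flat * encoder_dim + encoder_dim   # Dense(ENCODER_DIM): weights + bias
    up = encoder_dim * flat + flat            # Dense(4*4*1024): weights + bias
    return down + up

def vram_mb(params, bytes_per_param=4):
    return params * bytes_per_param / 2**20   # float32 weights only

for dim in (1024, 512, 64):
    p = bottleneck_params(dim)
    print(f"ENCODER_DIM={dim:4d}: {p:,} params, ~{vram_mb(p):.0f} MB fp32 weights")
```

Halving ENCODER_DIM roughly halves the bottleneck weights, and during training the optimizer keeps extra per-parameter state on top of that, which is why the cut helps so much on a 2 GB card.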

from faceswap.

MarcSN311 commented on April 28, 2024

I tried tuning ENCODER_DIM and BATCH_SIZE but did not find a working combination.
With ENCODER_DIM=64 and BATCH_SIZE=1 I get one valid loss output (0.15305635 0.11783895), but then it just prints "save model weights" and quits.

2018-01-28 20:05:43.939681: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
2018-01-28 20:05:44.539616: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Found device 0 with properties:
name: Quadro K2000M major: 3 minor: 0 memoryClockRate(GHz): 0.745
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 1.92GiB
2018-01-28 20:05:44.540210: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Quadro K2000M, pci bus id: 0000:01:00.0, compute capability: 3.0)
2018-01-28 20:05:47.504003: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.14GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:47.566201: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.07GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:47.603648: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:47.730635: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:47.838415: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.03GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:47.860781: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.07GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:48.012480: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.06GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:48.050733: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.14GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:48.051200: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-01-28 20:05:48.108983: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
0.15305635 0.11783895
save model weights
usage: faceswap.py [-h] {extract,train,convert} ...

positional arguments:
  {extract,train,convert}
    extract             Extract the faces from a pictures.
    train               This command trains the model for the two faces A and
                        B.
    convert             Convert a source image to a new one with the face
                        swapped.

optional arguments:
  -h, --help            show this help message and exit

rifterbater commented on April 28, 2024

Setting batch size <14 can break the preview window, which is hard-coded to display a fixed grid of preview images. It's possible to edit that too and run batches of 1, but it's kind of a pain.
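To illustrate the failure mode described above, here is a toy stand-in for a preview grid that slices a fixed number of images out of each batch (the count 14 comes from the comment; the function names are illustrative, not faceswap's code):

```python
# Why a hard-coded preview grid fails for small batches: it indexes a
# fixed number of images out of the batch, regardless of batch size.

PREVIEW_COUNT = 14  # number of tiles the preview window expects

def build_preview(batch):
    # Hard-coded behaviour: assumes the batch has >= PREVIEW_COUNT images.
    return [batch[i] for i in range(PREVIEW_COUNT)]

def build_preview_safe(batch):
    # Tolerant variant: show however many images the batch actually has.
    return batch[:min(len(batch), PREVIEW_COUNT)]

small_batch = ["face_0"]          # BATCH_SIZE=1
try:
    build_preview(small_batch)
except IndexError:
    print("preview crashed with batch size 1")
print(build_preview_safe(small_batch))
```

The "pain" of editing it away amounts to clamping the slice to the actual batch length, as in the tolerant variant.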

How much memory does your GPU have? 64 is really low for ENCODER_DIM; you'll probably have a hard time getting good results.

MarcSN311 commented on April 28, 2024

I did not enable the preview window.
It's just 2 GB of memory.
I just wanted to have a look at this!

shadowzoom commented on April 28, 2024

Did you figure it out? I have a 2GB card too and I'm also running into the out-of-memory error.

duoyu5555 commented on April 28, 2024

My GPU has 4GB of memory, but I got the same error.

schmunk42 commented on April 28, 2024

Had the same issue with a 4GB GPU.

I changed ENCODER_DIM to 512 in model.py and it works for me. Iterations are 20x faster on the GPU now.

see also #40 (comment)

Clorr commented on April 28, 2024

Also, there is a LowMem model plugin now. You can edit scripts/convert.py and replace "Original" in the line variant = "Original" with "LowMem". But be careful: the LowMem model has one layer less, so a model you have already started training will not reload with it!
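The one-word edit described above works because the converter picks a model class by name. The class and dictionary names below are illustrative stand-ins, not the repository's actual plugin code:

```python
# Sketch of a name-based model plugin switch: changing the variant
# string swaps in a different architecture. Names are illustrative.

class OriginalModel:
    ENCODER_DIM = 1024        # full-size bottleneck

class LowMemModel:
    ENCODER_DIM = 512         # assumed smaller bottleneck, one fewer layer

MODELS = {"Original": OriginalModel, "LowMem": LowMemModel}

variant = "LowMem"            # the one-word edit in scripts/convert.py
model = MODELS[variant]()
print(type(model).__name__, model.ENCODER_DIM)
```

Because the selected class defines the architecture, weights saved under one variant generally cannot be loaded under the other, which is the caveat in the comment above.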

lnora commented on April 28, 2024

I have an NVIDIA GeForce 940MX with 2GB of dedicated VRAM.

I set ENCODER_DIM in Model_LowMem.py to 128 (the number of nodes in the bottleneck layer?) and commented out the line

#x = self.conv(512)(x)   (one of the convolution layers?)

It works for me.
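To see what that edit changes, here is a stdlib-only sketch. The 64x64 input size, stride-2 convolutions, and the 128/256/512/1024 filter counts are assumptions for illustration, not read from Model_LowMem.py:

```python
# Effect of removing one stride-2 conv on the tensor that feeds the
# Dense bottleneck. Input size and filter counts are assumptions.

def flatten_size(side, conv_filters):
    for _ in conv_filters:
        side //= 2                    # each stride-2 conv halves H and W
    return side * side * conv_filters[-1]

full = flatten_size(64, [128, 256, 512, 1024])   # all four convs
trimmed = flatten_size(64, [128, 256, 1024])     # conv(512) commented out

# Dense bottleneck weight count: flatten_size * ENCODER_DIM
print(full, "->", full * 1024)       # original: ENCODER_DIM=1024
print(trimmed, "->", trimmed * 128)  # this edit: ENCODER_DIM=128
```

Under these assumptions, dropping the conv actually quadruples the flattened tensor (one fewer halving of height and width), so it is the ENCODER_DIM cut from 1024 to 128 that shrinks the Dense weights overall; the removed conv mainly saves its own activations and filters.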

It could be useful to add CLI parameters for ENCODER_DIM and the convolution layers.
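Such flags might look like the sketch below; --encoder-dim and --drop-conv are hypothetical names, not existing faceswap options:

```python
# Hypothetical CLI flags for the model hyperparameters discussed above.
import argparse

parser = argparse.ArgumentParser(prog="faceswap.py train")
parser.add_argument("--encoder-dim", type=int, default=1024,
                    help="width of the Dense bottleneck layer")
parser.add_argument("--drop-conv", action="store_true",
                    help="skip one downsampling convolution layer")

# Parse a sample command line instead of sys.argv for demonstration.
args = parser.parse_args(["--encoder-dim", "128", "--drop-conv"])
print(args.encoder_dim, args.drop_conv)
```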

Clorr commented on April 28, 2024

I'm not much in favor of adding CLI params for that, because if you later pass different values you may well end up with unusable model files. These are not parameters you can change freely from run to run; they are intrinsic to the model being trained.
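One way to make such intrinsic parameters safe would be to write them next to the weights and refuse to load under different settings. The file layout below is illustrative, not how faceswap actually stores models:

```python
# Persist model hyperparameters alongside the weights and verify them
# on load, so a mismatched run fails loudly instead of corrupting state.
import json
import os
import tempfile

def save_meta(weights_dir, encoder_dim):
    with open(os.path.join(weights_dir, "model_meta.json"), "w") as f:
        json.dump({"encoder_dim": encoder_dim}, f)

def check_meta(weights_dir, encoder_dim):
    with open(os.path.join(weights_dir, "model_meta.json")) as f:
        saved = json.load(f)["encoder_dim"]
    if saved != encoder_dim:
        raise ValueError(f"model was trained with ENCODER_DIM={saved}, "
                         f"refusing to load with ENCODER_DIM={encoder_dim}")

with tempfile.TemporaryDirectory() as d:
    save_meta(d, 512)
    check_meta(d, 512)          # same settings: fine
    try:
        check_meta(d, 1024)     # mismatched run: rejected
    except ValueError as e:
        print(e)
```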

facepainter commented on April 28, 2024

If you train with LowMem you will also need to use the LowMem model when converting; if not, you will get a layer count mismatch.

I saw a todo in the code, so I have put a pull request in to fix this. Hope that is OK. Please just disregard if not :)
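The mismatch described above could be caught before loading by comparing per-layer weight shapes. This is an illustrative stdlib sketch with toy shapes, not faceswap's actual loading code:

```python
# Cheap guard against loading weights into the wrong architecture:
# compare layer counts and per-layer weight shapes first.
# The shapes below are toy stand-ins for real conv weight tensors.

def weights_compatible(model_shapes, checkpoint_shapes):
    return (len(model_shapes) == len(checkpoint_shapes)
            and all(a == b for a, b in zip(model_shapes, checkpoint_shapes)))

original = [(5, 5, 3, 128), (5, 5, 128, 256), (5, 5, 256, 512), (5, 5, 512, 1024)]
lowmem   = [(5, 5, 3, 128), (5, 5, 128, 256), (5, 5, 256, 1024)]  # one conv fewer

print(weights_compatible(original, original))  # same architecture
print(weights_compatible(lowmem, original))    # layer count mismatch
```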

Clorr commented on April 28, 2024

Any pull request is welcome ;-)

gdunstone commented on April 28, 2024

@MarcSN311 has this solved your problem?
