Comments (2)
A100-SXM4-40GB
- GPU=31142/40536MiB, 32814 after first save, 33302 after 2nd save
- 1.03s/it training, 3.30s/it inference
- BATCH_SIZE=4
- TRAIN_TEXT_ENCODER
- USE_8BIT_ADAM
- FP16
- GRADIENT_CHECKPOINTING
- GRADIENT_ACCUMULATION_STEPS=1
- USE_EMA=False
- RESOLUTION=512
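The settings above map roughly onto the flags of the diffusers `train_dreambooth.py` example script. A minimal sketch of assembling that launch command (the script name and flag names are assumptions based on the diffusers 0.9 examples, not quoted from the comment):

```python
# Sketch: assemble the accelerate launch invocation these settings correspond
# to. Flag names assume the diffusers train_dreambooth.py example script.
settings = {
    "--train_batch_size": "4",
    "--resolution": "512",
    "--gradient_accumulation_steps": "1",
    "--mixed_precision": "fp16",
}
switches = ["--train_text_encoder", "--use_8bit_adam", "--gradient_checkpointing"]

cmd = ["accelerate", "launch", "train_dreambooth.py"]
for flag, value in settings.items():
    cmd += [flag, value]
cmd += switches  # boolean flags take no value

print(" ".join(cmd))
```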
- Warnings with xformers-0.0.15.dev0+4c06c79 (compiled on A10G):

WARNING:xformers:WARNING: /usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py:433: UserWarning: Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
  warnings.warn(
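As the warning shows, a wheel built for a different GPU/PyTorch can import yet still lack the CUTLASS kernel, so diffusers silently falls back and warns. A hedged sketch of guarding against this before training (assumes a diffusers-style pipeline exposing `enable_xformers_memory_efficient_attention`; the operator probe mirrors the "No such operator" check in the warning text):

```python
def try_enable_xformers(pipe):
    """Enable xformers attention on `pipe` if the kernel is usable; else no-op."""
    try:
        import torch
        import xformers.ops  # noqa: F401 -- fails if _C.so is broken
        # Probe the exact operator diffusers calls; a wheel compiled against a
        # different GPU or PyTorch can import fine yet lack this kernel.
        _ = torch.ops.xformers.efficient_attention_forward_cutlass
    except Exception as exc:
        print(f"xformers unusable, keeping default attention: {exc}")
        return False
    pipe.enable_xformers_memory_efficient_attention()
    return True
```

This avoids the per-forward-pass UserWarning by deciding once, up front, whether the extension is actually usable.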
diffusers==0.9.0
accelerate==0.14.0
torchvision @ https://download.pytorch.org/whl/cu116/torchvision-0.14.0%2Bcu116-cp38-cp38-linux_x86_64.whl
transformers==4.25.1
xformers @ https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl
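Since both comments hinge on exact wheel/version combinations, a small stdlib-only sketch for checking installed versions against the pins above (`importlib.metadata` is standard on Python 3.8+; the package set is taken from the list above):

```python
# Sketch: verify the environment matches the version pins listed above.
import importlib.metadata as md  # stdlib on Python 3.8+


def check_pins(pins):
    """Return {package: 'ok' | 'missing' | actual-installed-version}."""
    result = {}
    for pkg, want in pins.items():
        try:
            have = md.version(pkg)
        except md.PackageNotFoundError:
            result[pkg] = "missing"
            continue
        result[pkg] = "ok" if have == want else have
    return result


print(check_pins({"diffusers": "0.9.0", "accelerate": "0.14.0",
                  "transformers": "4.25.1"}))
```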
Copy-and-paste the text below in your GitHub issue

- `Accelerate` version: 0.14.0
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.15
- Numpy version: 1.21.6
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- `Accelerate` default config: Not found
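The report above comes from `accelerate env`; for anyone regenerating it by hand, the fields reduce to standard environment probes. A stdlib-only sketch (the torch import is guarded since it may be absent):

```python
# Sketch: reproduce the environment fields `accelerate env` reports above.
import platform

print(f"- Platform: {platform.platform()}")
print(f"- Python version: {platform.python_version()}")
try:
    import torch
    print(f"- PyTorch version (GPU?): {torch.__version__} "
          f"({torch.cuda.is_available()})")
except ImportError:
    print("- PyTorch version (GPU?): not installed")
```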
from dreambooth.
A100-SXM4-40GB
- GPU=16168/40536MiB
- 1.23s/it training, 5.83 it/s inference
- BATCH_SIZE=4
- TRAIN_TEXT_ENCODER
- USE_8BIT_ADAM
- FP16
- GRADIENT_CHECKPOINTING
- GRADIENT_ACCUMULATION_STEPS=1
- USE_EMA=False
- RESOLUTION=512
- No errors or warnings with xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl
Description: Ubuntu 18.04.6 LTS
diffusers==0.9.0
torchvision @ https://download.pytorch.org/whl/cu116/torchvision-0.14.0%2Bcu116-cp38-cp38-linux_x86_64.whl
transformers==4.25.1
xformers @ https://github.com/brian6091/xformers-wheels/releases/download/0.0.15.dev0%2B4c06c79/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl
2022-12-08 10:21:20.344739: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0
Copy-and-paste the text below in your GitHub issue

- `Accelerate` version: 0.14.0
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.15
- Numpy version: 1.21.6
- PyTorch version (GPU?): 1.13.0+cu116 (True)