Comments (6)
@valdecircarvalho Can you access the (error) logs from the server where MRE is running, by any chance?
from fooocus-mre.
Hello @MoonRide303
Yes, I have full access to the server. Could you please point me to where the Fooocus-MRE logs are stored?
Below is the output from program start through a successfully generated image, accessed via ip:port.
When I try to access it from fooocus.domain.com, no error is displayed on the console; the error appears only in the web browser.
Here is the link if you want to try it: https://fooocus.localhostcloud.com/
(fooocus-mre) ubuntu@sd-linux:~/ai/Fooocus-MRE$ python launch.py --listen 0.0.0.0 --port 7866
Python 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0]
Fooocus version: 1.0.41 MRE
Inference Engine exists.
Inference Engine checkout finished.
Total VRAM 16151 MB, total RAM 181123 MB
xformers version: 0.0.21
Set vram state to: NORMAL_VRAM
Device: cuda:0 Tesla V100-SXM2-16GB : cudaMallocAsync
Using xformers cross attention
Running on local URL: http://0.0.0.0:7866
To create a public link, set `share=True` in `launch()`.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
[... the two lines above repeat for every attention block of the base model, with query dim 640 (10 heads) or 1280 (20 heads) ...]
model_type EPS
adm 2816
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: sd_xl_base_1.0_0.9vae.safetensors
App started successful. Use the app with http://localhost:7866/ or 0.0.0.0:7866
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is None and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is 1280 and using 12 heads.
[... the two lines above repeat for every attention block of the refiner model, with query dim 768 (12 heads) or 1536 (24 heads) ...]
model_type EPS
adm 2560
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Refiner model loaded: sd_xl_refiner_1.0_0.9vae.safetensors
LoRAs loaded: [('sd_xl_offset_example-lora_1.0.safetensors', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5)]
loading new
loading new
/home/ubuntu/anaconda3/envs/fooocus-mre/lib/python3.10/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.get(instance, owner)()
loading new
0%| | 0/30 [00:00<?, ?it/s]
/home/ubuntu/anaconda3/envs/fooocus-mre/lib/python3.10/site-packages/torchsde/_brownian/brownian_interval.py:594: UserWarning: Should have tb<=t1 but got tb=14.614643096923828 and t1=14.614643.
warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")
67%|████████████████████████████████████████████████████████████████████████████▋ | 20/30 [00:07<00:02, 3.55it/s]
loading new
Refiner swapped.
93%|███████████████████████████████████████████████████████████████████████████████████████████████████████████▎ | 28/30 [00:11<00:00, 2.85it/s]
/home/ubuntu/anaconda3/envs/fooocus-mre/lib/python3.10/site-packages/torchsde/_brownian/brownian_interval.py:585: UserWarning: Should have ta>=t0 but got ta=0.02916753850877285 and t0=0.029168.
warnings.warn(f"Should have ta>=t0 but got ta={ta} and t0={self._start}.")
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:11<00:00, 2.54it/s]
Image generated with private log at: /home/ubuntu/ai/Fooocus-MRE/outputs/2023-08-30/log.html
loading new
67%|████████████████████████████████████████████████████████████████████████████▋ | 20/30 [00:05<00:02, 3.53it/s]
loading new
Refiner swapped.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:10<00:00, 2.88it/s]
Image generated with private log at: /home/ubuntu/ai/Fooocus-MRE/outputs/2023-08-30/log.html
/home/ubuntu/ai/Fooocus-MRE/outputs/2023-08-30/
@valdecircarvalho I observed a similar issue when trying to use the refiner model in Colab. Please try setting the refiner model to None, or use the updated Colab file from my repository (the updated version starts with the refiner disabled).
@MoonRide303
Thanks for the prompt response.
I'm running it on a bare-metal server at a cloud provider. I can't / don't know how to run it on Colab (I will do some research).
Could you please explain how to set the refiner model to None? When I access it from domain:port and click the Advanced checkbox, the error pops up.
You can check it here: https://fooocus.localhostcloud.com/
@valdecircarvalho You can disable the refiner by default by copying settings-no-refiner.json to settings.json.
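A minimal shell sketch of that copy step, assuming the repository root shown in the log above (~/ai/Fooocus-MRE) and that settings-no-refiner.json sits in that root:

```shell
# Run from the Fooocus-MRE checkout (path as shown in the log above).
cd ~/ai/Fooocus-MRE
# Overwrite the active settings with the refiner-disabled defaults.
cp settings-no-refiner.json settings.json
# Restart the app so the new settings take effect.
python launch.py --listen 0.0.0.0 --port 7866
```

The paths and filenames are taken from this thread; adjust them if your checkout lives elsewhere.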
Using Colab is pretty simple: use this link (also available in the readme) to open the Fooocus-MRE notebook file in Colab, then press run (it can take a few minutes to set up and download the models); when it finishes, you should see a Gradio live link where you'll be able to access it.