disco-diffusion-1's People

Contributors

aletts, cansakirt, entmike, msftserver, nebulatgs, njbbaer, somnai-dreams, thegenerativegeneration, twmmason, zhl146, zippy731

disco-diffusion-1's Issues

Windows Anaconda Installation Docs missing modules

I've been following the instructions to install on Windows with Anaconda, but a step seems to be missing: once the conda environment is activated and I run python disco.py, I get a variety of "module not found" errors. I eventually realized I had to run pip install -r requirements.txt inside the environment before it would work. Also, step 4 ("execute the test run") contains another conda activate discodiffusion line, even though the environment is already activated at that point; perhaps that line was meant to be pip install -r requirements.txt. Sorry, Python is new to me.

I also had to run this command because I was getting CUDA errors:
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
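
For reference, combining the fixes above, the sequence that ended up working was roughly this (as described here; not verified against the official docs):

conda activate discodiffusion
pip install -r requirements.txt
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
python disco.py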

There's a problem in disco diffusion

When I change the prompt in my Disco Diffusion notebook and run the "1. Diffuse!" cell, it immediately prints the following:
Starting Run: TimeToDisco(3) at frame 0
Prepping model...
---------------------------------------------------------------------------
EOFError                                  Traceback (most recent call last)
[<ipython-input-18-1c79d8e6a9d5>](https://localhost:8080/#) in <module>()
    205     model.load_state_dict(torch.load(custom_path, map_location='cpu'))
    206 else:
--> 207     model.load_state_dict(torch.load(f'{model_path}/{get_model_filename(diffusion_model)}', map_location='cpu'))
    208 model.requires_grad_(False).eval().to(device)
    209 for name, param in model.named_parameters():

1 frames
[/usr/local/lib/python3.7/dist-packages/torch/serialization.py](https://localhost:8080/#) in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
    918             "functionality.")
    919 
--> 920     magic_number = pickle_module.load(f, **pickle_load_args)
    921     if magic_number != MAGIC_NUMBER:
    922         raise RuntimeError("Invalid magic number; corrupt file?")

EOFError: Ran out of input

I don't know how to fix it. I copied a fresh Disco Diffusion notebook and the same thing happened again.

Does anyone know what causes this?
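
For anyone hitting this: "EOFError: Ran out of input" from torch.load generally means the checkpoint file on disk is empty or was only partially downloaded. A minimal, hypothetical check (the filename below is assumed; adjust it to the model your run is configured to use):

import os
import torch

# Assumed checkpoint path - adjust to whatever model the notebook is configured to load.
ckpt = "models/512x512_diffusion_uncond_finetune_008100.pt"

print("size on disk:", os.path.getsize(ckpt), "bytes")
try:
    torch.load(ckpt, map_location="cpu")
    print("checkpoint loads cleanly")
except EOFError:
    print("checkpoint is truncated or empty - delete it and let the notebook re-download it")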

Turbo mode not working

I mentioned this on the Discord but figured it was worth filing a proper issue. I have now tested this on both Colab and Lambda Labs: when the run reaches frame 10 (the point where turbo mode kicks in), it stops working with the error below (an illustrative guard sketch follows the screenshot):

Traceback (most recent call last):
  File "disco.py", line 30, in <module>
    dd.start_run(pargs=pargs, folders=folders, device=device, is_colab=dd.detectColab())
  File "/workspace/disco-diffusion-1/dd.py", line 2157, in start_run
    processBatch(pargs=job, folders=folders, device=device, is_colab=is_colab, session_id=session_id)
  File "/workspace/disco-diffusion-1/dd.py", line 2399, in processBatch
    do_run(args=args, device=device, is_colab=is_colab, batchNum=batchNum, start_frame=start_frame, folders=folders)
  File "/workspace/disco-diffusion-1/dd.py", line 1446, in do_run
    old_frame = do_3d_step(
  File "/workspace/disco-diffusion-1/dd.py", line 491, in do_3d_step
    translation_x = translations.translation_x_series[frame_num]
AttributeError: 'NoneType' object has no attribute 'translation_x_series'

[Screenshot: 2022-05-19, 2:58:50 PM]
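
The traceback shows translations is None when turbo mode starts indexing the keyframe series at frame 10. A purely illustrative guard (hypothetical, not the repo's actual code) that would surface the failure more clearly:

def get_translation_x(translations, frame_num):
    # Hypothetical helper: raise a descriptive error instead of an
    # AttributeError when the keyframe series was never built.
    if translations is None:
        raise RuntimeError(
            "translation keyframe series is missing - the turbo-mode path "
            "reached do_3d_step before the keyframes were parsed"
        )
    return translations.translation_x_series[frame_num]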

YAML config:

RN101: false
RN50: true
RN50x16: false
RN50x4: false
RN50x64: false
ViTB16: true
ViTB32: true
ViTL14: false
ViTL14_336: false
angle: 0:(0)
animation_mode: 3D
batch_name: AnimTest7
check_model_SHA: false
clamp_grad: true
clamp_max: 0.26
clip_denoised: false
clip_guidance_scale: 51000
console_preview: false
console_preview_width: 80
cuda_device: cuda:0
cut_ic_pow: 1
cut_icgray_p: '[0.2]*30+[0]*970'
cut_innercut: '[8]*30+[16]*970'
cut_overview: '[8]*30+[0]*970'
cutn_batches: 1
cutout_debug: false
# db: /content/gdrive/MyDrive/disco-diffusion-1/disco.db
#diffusion_model: 512x512_diffusion_uncond_finetune_008100
#diffusion_sampling_mode: ddim
display_rate: 50
eta: 0.2
extract_nth_frame: 2
#f: /root/.local/share/jupyter/runtime/kernel-d6f56a62-75e3-41ba-b009-d513f3dc9e61.json
far_plane: 10000
fov: 120
frames_scale: 35000
frames_skip_steps: 70%
fuzzy_prompt: false
image_prompts: {}
images_out: images_out
init_image:
init_images: init_images
init_scale: 55
intermediate_saves: 0
intermediates_in_subfolder: true
interp_spline: Linear
key_frames: true
max_frames: 10000
midas_depth_model: dpt_large
midas_weight: 0.3
model_path: models
modifiers: {}
multipliers: {}
n_batches: 1
near_plane: 200
padding_mode: border
#per_job_kills: false
#perlin_init: false
#perlin_mode: mixed
rand_mag: 0.1
randomize_class: true
range_scale: 1666
resume_from_frame: latest
resume_run: true
retain_overwritten_frames: false
rotation_3d_x: '0: (0)'
rotation_3d_y: '0: (0)'
rotation_3d_z: '0: (-0.002)'
run_to_resume: latest
sampling_mode: bicubic
sat_scale: 90000
save_metadata: false
set_seed: random_seed
simple_nvidia_smi_display: true
skip_augs: false
skip_steps: 10
skip_video_for_run_all: false
steps: 110
symmetry_loss: true
symmetry_loss_scale: 1500
symmetry_switch: 40
text_prompts:
  0:
  - "a cybernetic elephant walking through a cyberpunk city by Juan P. Osorio and thomas kinkade, 4k ultra, Trending on artstation."
  1000:
  - "a cybernetic DJ performing in a cyberpunk city by Juan P. Osorio and syd mead, album cover, beautiful modern colors, 4k ultra, Trending on artstation."
translation_x: '0: (0)'
translation_y: '0: (0)'
translation_z: '0: (15.0)'
turbo_mode: true
turbo_preroll: 10
turbo_steps: 2
tv_scale: 75
twilio_account_sid: null
twilio_auth_token: null
twilio_from: null
twilio_to: null
useCPU: false
use_checkpoint: true
use_secondary_model: true
v_symmetry_loss: true
v_symmetry_loss_scale: 1500
v_symmetry_switch: 40
video_init_path: training.mp4
video_init_seed_continuity: true
vr_eye_angle: 0.5
vr_ipd: 5.0
vr_mode: false
width_height:
- 912
- 512
zoom: '0: (1)'

parameters for a single image output?

I'm actually trying to use this on replicate.com:
https://replicate.com/nightmareai/disco-diffusion

but I get the zeroth frame, which is just Perlin noise, and then nothing for the follow-on images.

Is it safe to assume that if

        'steps': 250,
        'display_rate': 250,

match, then we should just get the final rendered frame?

'prompt': "An octopus riding a bicycle",
'batch_size': 5,
'width': 256,
'height': 256,
'steps': 250,
'display_rate': 250,
'diffusion_model': '256x256_diffusion_uncond',

Animation Mode: Video Input doesn't work properly in colab +ubuntu

Tested using the YAML version with this config:

https://gist.github.com/seandearnaley/2859612e30c76ad2a13e4c9e5ec99353

[Screenshot]

UPDATE 5/30/22: Some findings. Strangely, the Colab YAML run wasn't actually pulling down repo updates; I don't know whether my gdrive copy of the repo was in some kind of locked state, but after deleting disco-diffusion-1 from my gdrive and running Colab again (which rebuilt the folder), I now have up-to-date code running on Colab. Even then it didn't work the first time: looking in the videoFrames folder, it had skipped frames, leaving gaps about 500 frames apart. Stranger still, running it a second time wrote the remaining frames and the process started working, so it definitely seems like a first-run bug. I tested this on Lambda Labs and it did the same thing: the first run doesn't write all the necessary videoFrames, but a second run in the same running container fills in the remaining frames and the job runs.
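
As a quick sanity check (hypothetical helper, not part of the repo; it assumes the extracted frames are .jpg files with a frame number in the name), the videoFrames folder can be scanned for gaps before starting a run:

import re
from pathlib import Path

def missing_frames(frames_dir="videoFrames"):
    # Collect the numeric part of each extracted frame filename.
    nums = []
    for p in sorted(Path(frames_dir).glob("*.jpg")):
        m = re.search(r"(\d+)", p.stem)
        if m:
            nums.append(int(m.group(1)))
    if not nums:
        return []
    present = set(nums)
    return [n for n in range(min(nums), max(nums) + 1) if n not in present]

print(missing_frames())  # an empty list means every frame was written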

2D+3D Animation Mode: not observing text_prompt keyframes

I have a YAML file with keyframes for the text_prompts parameter, but simplified DD doesn't seem to observe them in animation mode. In the following example I set another prompt for keyframe 20, but as you can see in the screenshot, the prompt never changes from the one at keyframe 0. I have tested this both on Colab and in Docker on Lambda Labs (a sketch of the expected keyframe expansion follows the YAML below):
[Screenshot: 2022-05-19, 3:20:40 PM]

YAML (note: keyframe 20)

RN101: false
RN50: true
RN50x16: false
RN50x4: false
RN50x64: false
ViTB16: true
ViTB32: true
ViTL14: false
ViTL14_336: false
angle: 0:(0)
animation_mode: 3D
batch_name: AnimTest7
check_model_SHA: false
clamp_grad: true
clamp_max: 0.26
clip_denoised: false
clip_guidance_scale: 51000
console_preview: false
console_preview_width: 80
cuda_device: cuda:0
cut_ic_pow: 1
cut_icgray_p: '[0.2]*30+[0]*970'
cut_innercut: '[8]*30+[16]*970'
cut_overview: '[8]*30+[0]*970'
cutn_batches: 1
cutout_debug: false
# db: /content/gdrive/MyDrive/disco-diffusion-1/disco.db
#diffusion_model: 512x512_diffusion_uncond_finetune_008100
#diffusion_sampling_mode: ddim
display_rate: 50
eta: 0.2
extract_nth_frame: 2
#f: /root/.local/share/jupyter/runtime/kernel-d6f56a62-75e3-41ba-b009-d513f3dc9e61.json
far_plane: 10000
fov: 120
frames_scale: 35000
frames_skip_steps: 70%
fuzzy_prompt: false
image_prompts: {}
images_out: images_out
init_image:
init_images: init_images
init_scale: 55
intermediate_saves: 0
intermediates_in_subfolder: true
interp_spline: Linear
key_frames: true
max_frames: 10000
midas_depth_model: dpt_large
midas_weight: 0.3
model_path: models
modifiers: {}
multipliers: {}
n_batches: 1
near_plane: 200
padding_mode: border
#per_job_kills: false
#perlin_init: false
#perlin_mode: mixed
rand_mag: 0.1
randomize_class: true
range_scale: 1666
resume_from_frame: latest
resume_run: true
retain_overwritten_frames: false
rotation_3d_x: '0: (0)'
rotation_3d_y: '0: (0)'
rotation_3d_z: '0: (-0.002)'
run_to_resume: latest
sampling_mode: bicubic
sat_scale: 90000
save_metadata: false
set_seed: random_seed
simple_nvidia_smi_display: true
skip_augs: false
skip_steps: 10
skip_video_for_run_all: false
steps: 110
symmetry_loss: true
symmetry_loss_scale: 1500
symmetry_switch: 40
text_prompts:
  0:
  - "a cybernetic elephant walking through a cyberpunk city by Juan P. Osorio and thomas kinkade, 4k ultra, Trending on artstation."
  20:
  - "a cybernetic DJ performing in a cyberpunk city by Juan P. Osorio and syd mead, album cover, beautiful modern colors, 4k ultra, Trending on artstation."
translation_x: '0: (0)'
translation_y: '0: (0)'
translation_z: '0: (15.0)'
turbo_mode: false
turbo_preroll: 10
turbo_steps: 3
tv_scale: 75
twilio_account_sid: null
twilio_auth_token: null
twilio_from: null
twilio_to: null
useCPU: false
use_checkpoint: true
use_secondary_model: true
v_symmetry_loss: true
v_symmetry_loss_scale: 1500
v_symmetry_switch: 40
video_init_path: training.mp4
video_init_seed_continuity: true
vr_eye_angle: 0.5
vr_ipd: 5.0
vr_mode: false
width_height:
- 912
- 512
zoom: '0: (1)'
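
For reference, keyframed text_prompts are normally expanded into a per-frame series, so frames 0-19 should use the first prompt and frame 20 onward the second. A minimal sketch of that expected expansion (illustrative only, using pandas; not the repo's implementation):

import pandas as pd

def expand_keyframed_prompts(prompts, max_frames):
    # Place each keyframed prompt at its frame index, then forward-fill so
    # every frame up to the next keyframe reuses the previous prompt.
    series = pd.Series([None] * max_frames, dtype=object)
    for frame, prompt in prompts.items():
        series[int(frame)] = prompt
    return series.ffill()

prompts = {0: "cybernetic elephant ...", 20: "cybernetic DJ ..."}
expanded = expand_keyframed_prompts(prompts, 30)
print(expanded[19])  # still the elephant prompt
print(expanded[20])  # switches to the DJ prompt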

docker container can not be transferred

Hi @entmike,
Since downloading the models is tedious, I want to download them once and bake them into a Docker container that I can transfer between machines locally. But when I transferred the container to a new machine, the Python environment simply failed.
Please help!
Thanks

Multipliers feature seems to have broken Modifiers feature

Replication:

python disco.py --config_file=./examples/configs/artstudy2.yml --width_height="[256, 256]"

Problem 1:

The first problem shows up immediately: split_prompts errors out because, for some reason, it always expects a multi-valued array of prompts. This can be fixed with something that doesn't require the post-init assignment, for example:

import pandas as pd

def split_prompts(prompts, max_frames=None):
    prompt_series = pd.Series([v for k, v in prompts.items()])
    prompt_series = prompt_series.ffill().bfill()
    return prompt_series

Problem 2 (main one):

It seems that after the multipliers feature went in, using modifiers on its own no longer works.
multargs is being passed as the first positional argument to processModifiers():

disco-diffusion-1/dd.py

Lines 2218 to 2219 in da528b4

multargs = processMultipliers(args=pargs)
jobs = processModifiers(multargs)

However, in the implementation, processModifiers takes two arguments and only the second is iterated over, so this call always returns just one job.

disco-diffusion-1/dd.py

Lines 2123 to 2125 in da528b4

def processModifiers(mods=[], args=[]):
    for p in range(len(args)):
        # Deep copy

I tried the simple workaround of changing this to

multargs = processMultipliers(args=pargs)
jobs = processModifiers(args=multargs)

but this wasn't enough, so the problem must run deeper than the argument placement alone. The only workaround I got working locally was to comment out processMultipliers() and revert processModifiers() to the version from 1f5af86.
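
For what it's worth, here is a hypothetical sketch (not the repo's actual implementation) of what processModifiers would need to do to cooperate with the multipliers output, namely iterate over every job coming out of processMultipliers rather than only the modifier list:

import copy

def processModifiers(args=None, mods=None):
    # Illustrative only: emit one job per multiplier-expanded input job,
    # deep-copied so modifier substitutions don't leak between jobs.
    args = args or []
    mods = mods or []
    jobs = []
    for base in args:
        job = copy.deepcopy(base)
        # ...apply each entry in mods to the copied job here...
        jobs.append(job)
    return jobs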

make the notebook work w/ Kaggle

The free tier of Colab is too weak, and Kaggle offers a much better GPU (a P100) for free, so it would be nice to make the notebook work with Kaggle. I'm stuck at installing dependencies.

'pip install -r requirements.txt' gives the following:

  WARNING: Did not find branch or tag 'packagify', assuming revision or ref.
  Running command git checkout -q packagify
  error: pathspec 'packagify' did not match any file(s) known to git
  error: subprocess-exited-with-error
  
  × git checkout -q packagify did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× git checkout -q packagify did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

I pip-installed 'packagify' before this too, but it didn't work. Any ideas?

OMP: Error when `import lpips`

  • Installed via docker pull entmike/disco-diffusion-1
  • When running test job
docker run --rm -it \
    -v $(echo ~)/disco-diffusion/images_out:/workspace/code/images_out \
    -v $(echo ~)/disco-diffusion/init_images:/workspace/code/init_images \
    -v $(echo ~)/disco-diffusion/models:/workspace/disco-diffusion-1/models \
    -v $(echo ~)/disco-diffusion/configs:/workspace/disco-diffusion/configs \
    --gpus=all \
    --name="disco-diffusion" --ipc=host \
    --user $(id -u):$(id -g) \
disco-diffusion python disco.py

I get the following errors, which trace back to import lpips when dd is imported:

OMP: Error #179: Function Can't open SHM2 failed:
OMP: System error #13: Permission denied
Aborted

Environment is Windows 11 WSL2 Ubuntu. Any help is greatly appreciated!

PyTorch3D fix

Hi, I'm not sure whether the notice is still relevant, but I have a temporary workaround for this.

First, downgrade torch:

!pip install torch==1.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

Then just install pytorch3d normally:

!pip install --no-cache pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py37_cu113_pyt1110/download.html

Colab Notebook setup not working

Running the Colab notebook setup in Simplified_Disco_Diffusion.ipynb yields the following error:

ModuleNotFoundError Traceback (most recent call last)
in ()
107 # Import DD helper modules
108 sys.path.append(PROJECT_DIR)
--> 109 import dd, dd_args
110
111 # Unsure about these:

/content/gdrive/MyDrive/disco-diffusion-1/dd.py in ()
37 from deepdiff import DeepHash
38 import sqlite3
---> 39 from torchmetrics import RetrievalFallOut
40 from tqdm.notebook import tqdm
41 from twilio.rest import Client

ModuleNotFoundError: No module named 'torchmetrics'


NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.

run_to_resume: latest not working as expected

On DD5 this parameter normally picks up the last run you were working on, but on Simplified it starts a new run. If you explicitly specify the run to resume it works, but that isn't very convenient because you have to edit the YAML file rather than just hit run again.

Docker build fails

cd docker
docker build -t disco-diffusion .
Sending build context to Docker daemon  19.97kB
Step 1/39 : ARG base_image
Step 2/39 : FROM ${base_image} as base
base name (${base_image}) should not be blank
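
The build output shows the Dockerfile declares ARG base_image with no default, so the value has to be supplied explicitly at build time; for example (the base image the repo actually expects isn't shown above, so the value here is just a placeholder):

docker build --build-arg base_image=<your-cuda-base-image> -t disco-diffusion .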

Random black frames from Video Input Animation

I'm observing seemingly random black frames when running video-input animation with the aforementioned workaround (running the command twice in the running Docker container). Unfortunately I've seen the same thing in all three test environments: Windows Anaconda, Colab, and Ubuntu on Lambda Labs. I'd never observed this with other DD forks (actually, I did see it with DD 5.1, so I wonder whether it's related to seeds or something). When I re-run the affected frame it generates fine, so something odd is happening here; see the screenshots. This ruined a couple of expensive large jobs, but fortunately I can regenerate them with the seed values and some manual file edits.

[Screenshot: 2022-05-30, 22:25:46]

[Screenshot: 2022-05-30, 22:29:46]
