Comments (5)
Glad you've made some progress. Since there's another issue opened now in #5091, shall we close this one?
Thanks,
Lev
from deepspeed.
This is fixed on the master branch of DeepSpeed and will be included in the next release (see #5065).
@mikob what version of diffusers do you have installed? You may need to update to a more recent version.
Hello @mikob,
This is due to the diffusers library changing the path to vae in diffusers version >=0.25.0. See this PR here.
What this means is that the latest versions of DeepSpeed support diffusers version >=0.25.0, as seen in the requirements.txt in the PR linked above.
Can you please try updating your diffusers library to the latest version and running again?
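Since the fix hinges on whether the installed diffusers release is at or above 0.25.0 (the version that moved the vae path), a quick stdlib-only check can confirm whether an upgrade is needed. This is an illustrative sketch; `version_at_least` is a hypothetical helper, not a DeepSpeed or diffusers API, and it assumes simple `X.Y.Z` version strings:

```python
# Hypothetical helper: returns True if `version` is at least `minimum`.
# diffusers >= 0.25.0 is where the vae path changed.
def version_at_least(version: str, minimum: str = "0.25.0") -> bool:
    # Compare the first three numeric components as tuples.
    parse = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return parse(version) >= parse(minimum)

print(version_at_least("0.24.0"))  # False: older release, upgrade needed
print(version_at_least("0.26.1"))  # True: new vae path applies
```

In practice you would pass `diffusers.__version__` as the first argument and, if the check returns False, upgrade with `pip install -U diffusers`.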
Thanks,
Lev
I've also created a PR adding DeepSpeed Stable Diffusion backwards compatibility here:
#5083
Please feel free to try running from that branch as well if the PR isn't merged yet.
Hi @lekurile, yes, I tried your fix_sd_cli branch and it got me closer to working. Along with another change it was working, but then I hit another roadblock in inference: #5091