We achieve image composition with the Stable Diffusion model. Applications include virtual furniture try-on.
Demo: Virtual Furniture Try-on
- Requirements

```bash
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
```
- Initialize an 🤗Accelerate environment with

```bash
accelerate config default
```
Please provide at least one image in .jpg format and an instance prompt; see, for example, the images in ./data/sofa. Then set the environment variables:
```bash
export MODEL_NAME="runwayml/stable-diffusion-inpainting"
export INSTANCE_DIR="dataset/sofa"
export Test_DIR="test/sofa_test"
export MODEL_DIR="logs/sofa"
export OUT_DIR="out/sofa"
export INSTANCE_PROMPT="sofa"
```
The preprocess.py script generates captions and instance masks for the instance images:

```bash
python preprocess.py --instance_data_dir $INSTANCE_DIR \
    --instance_prompt $INSTANCE_PROMPT
```
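As a rough illustration of what a preprocessing step like this does, the sketch below pairs each .jpg with a caption file built from the instance prompt. This is a minimal assumption-laden sketch, not the actual preprocess.py: the real script also generates instance masks (e.g. with a segmentation model), which we omit here, and the function name is ours.

```python
from pathlib import Path

def prepare_instance_data(instance_data_dir, instance_prompt):
    """Pair each .jpg instance image with a simple caption file.

    Sketch only: the real preprocess.py also produces instance masks,
    which would require a segmentation model and is omitted here.
    """
    data_dir = Path(instance_data_dir)
    records = []
    for img in sorted(data_dir.glob("*.jpg")):
        caption = f"a photo of {instance_prompt}"
        img.with_suffix(".txt").write_text(caption)  # caption file next to the image
        records.append((img.name, caption))
    return records
```

Each image ends up with a sibling .txt caption that the training step can consume alongside the image.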
We then fine-tune the model to embed the instance images and prompt into Stable Diffusion:
```bash
accelerate launch --num_processes 1 finetune_train.py \
    --pretrained_model_name_or_path=$MODEL_NAME \
    --instance_data_dir=$INSTANCE_DIR \
    --output_dir=$MODEL_DIR \
    --instance_prompt=$INSTANCE_PROMPT \
    --resolution=512 \
    --train_batch_size=1 \
    --gradient_accumulation_steps=1 \
    --learning_rate=5e-6 \
    --lr_scheduler="constant" \
    --lr_warmup_steps=0 \
    --max_train_steps=1000
```
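The fine-tuning above trains low-rank (LoRA) adapters, as the inference_lora.py script name suggests. A toy NumPy sketch of the LoRA idea, with illustrative shapes and rank that are not taken from the training script:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha):
    """Forward pass with a low-rank (LoRA) update: W_eff = W + (alpha/r) * A^T B^T.

    Shapes: x (n, d_in), frozen base weight W (d_in, d_out),
    trainable "down" matrix A (r, d_in), trainable "up" matrix B (d_out, r).
    The low-rank product is applied without materializing the full
    d_in x d_out update matrix.
    """
    r = A.shape[0]
    return x @ W + (alpha / r) * (x @ A.T) @ B.T
```

Only A and B are trained, which is why LoRA fine-tuning is far cheaper than updating the full weight matrices; with B initialized to zero, the adapted model starts out identical to the base model.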
Finally, you can provide new images to achieve image composition.
```bash
python inference_lora.py --image_path $Test_DIR \
    --model_path $MODEL_DIR \
    --out_path $OUT_DIR \
    --instance_prompt $INSTANCE_PROMPT
```
Alternatively, use the end-to-end scripts:

```bash
bash run_lora.sh
bash run_gradio.sh
```
We tested the code on an RTX 3090 GPU. If you encounter an out-of-memory error, consider low-memory training options such as increasing `--gradient_accumulation_steps` while lowering the batch size.
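To make the gradient-accumulation trade-off concrete, here is a plain-Python sketch with scalar weights (the function and names are ours, purely for illustration): averaging gradients over several micro-batches before each update gives the effect of a larger batch while only one micro-batch needs to fit in memory at a time.

```python
def sgd_with_accumulation(w, micro_grads, accum_steps, lr):
    """Scalar-SGD sketch of gradient accumulation.

    Gradients from `accum_steps` micro-batches are averaged before each
    parameter update, mimicking a batch `accum_steps` times larger
    without holding it in memory all at once.
    """
    acc = 0.0
    for i, g in enumerate(micro_grads, start=1):
        acc += g
        if i % accum_steps == 0:          # one update per accumulation window
            w -= lr * acc / accum_steps   # average, so the learning rate keeps its meaning
            acc = 0.0
    return w
```

With `accum_steps=1` this reduces to plain SGD; larger values trade extra forward/backward passes per update for lower peak memory.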