
tcd's Introduction

Trajectory Consistency Distillation

Arxiv Project page Hugging Face Model Hugging Face Space

Official Repository of the paper: Trajectory Consistency Distillation

A Solemn Statement Regarding the Plagiarism Allegations.

We regret to hear about the serious accusations from the CTM team.

We sadly found out our CTM paper (ICLR24) was plagiarized by TCD! It's unbelievable😢—they not only stole our idea of trajectory consistency but also comitted "verbatim plagiarism," literally copying our proofs word for word! Please help me spread this. pic.twitter.com/aR6pRjhj5X

— Dongjun Kim (@gimdong58085414) March 25, 2024

Before this post, we had already exchanged several rounds of communication with the CTM authors. We elucidate the situation here.

We regret to hear about the serious accusations from the CTM team @gimdong58085414. I shall proceed to elucidate the situation and make an archive here. We already have several rounds of communication with CTM's authors. https://t.co/BKn3w1jXuh

— Michael (@Merci0318) March 26, 2024
  1. In the first arXiv version, we provided citations and a discussion in A. Related Works:

    Kim et al. (2023) proposes a universal framework for CMs and DMs. The core design is similar to ours, with the main differences being that we focus on reducing error in CMs, subtly leverage the semi-linear structure of the PF ODE for parameterization, and avoid the need for adversarial training.

  2. In the first arXiv version, we indicated in D.3 Proof of Theorem 4.2:

    In this section, our derivation mainly borrows the proof from (Kim et al., 2023; Chen et al., 2022).

    and we never intended to claim credit.

    As we have mentioned in our email, we would like to extend a formal apology to the CTM authors for the clearly inadequate level of referencing in our paper. We will provide more credits in the revised manuscript.

  3. In the updated second arXiv version, we have expanded our discussion to elucidate the relationship with the CTM framework. Additionally, we have removed some proofs that were previously included for completeness.

  4. CTM and TCD differ in motivation, method, and experiments. TCD is founded on the principles of the Latent Consistency Model (LCM) and aims to design an effective consistency function by leveraging exponential integrators.

  5. The experimental results also cannot be obtained with any variant of the CTM algorithm.

    5.1 Here is a simple check: use our sampler to sample from the checkpoint released by CTM, or vice versa.

    5.2 CTM has also released a training script. We welcome anyone to reproduce the experiments on SDXL or LDM using the CTM algorithm.

We believe the assertion of plagiarism is not only severe but also detrimental to the academic integrity of the involved parties. We earnestly hope that everyone involved gains a more comprehensive understanding of this matter.

📣 News

  • (🔥New) 2024/3/28 A ComfyUI plugin, ComfyUI-TCD, is now available. Thanks to @JettHu and @dfl.
  • (🔥New) 2024/3/18 We have integrated the official TCDScheduler into the 🧨 Diffusers library! See the official documentation here, and the minimal sketch after this list. Thanks to the Diffusers team!
  • (🔥New) 2024/3/11 We released TCD-SD15-LoRA for SDv1.5, and TCD-SD21-base-LoRA for SDv2.1-base.
  • (🔥New) 2024/2/29 We provided a demo of TCD on 🤗 Hugging Face Space. Try it out here.
  • (🔥New) 2024/2/29 We released our model TCD-SDXL-Lora in 🤗 Hugging Face.
  • (🔥New) 2024/2/29 TCD is now integrated into the 🧨 Diffusers library. Please refer to the Usage for more information.
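
Since the official TCDScheduler now ships with 🧨 Diffusers, the local scheduling_tcd.py used in the examples below can optionally be replaced by the built-in class. A minimal sketch (assuming a Diffusers release recent enough to export TCDScheduler; the prompt is only a placeholder):

import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler  # TCDScheduler is available in recent Diffusers releases

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")
pipe.fuse_lora()

image = pipe(
    "a photo of a corgi wearing a party hat",
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,  # `gamma` in the paper; controls per-step stochasticity
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]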

Introduction

TCD, inspired by Consistency Models, is a novel distillation technique that transfers knowledge from pre-trained diffusion models into a few-step sampler. In this repository, we release the inference code and our model named TCD-SDXL, which is distilled from SDXL Base 1.0. We provide the LoRA checkpoint in this 🔥repository.

⭐ TCD has the following advantages:

  • Flexible NFEs: With TCD, the number of function evaluations (NFEs) can be varied at will (unlike SDXL Turbo) without adversely affecting result quality (unlike LCM, which shows a notable decline in quality at high NFEs).
  • Better than Teacher: TCD maintains superior generative quality at high NFEs, even exceeding the performance of DPM-Solver++(2S) with the original SDXL. Notably, no additional discriminator or LPIPS supervision is used during training.
  • Freely Change the Detailing: During inference, the level of detail in the image can be modified simply by adjusting one hyperparameter, gamma (see the sketch after this list). This option does not introduce any additional parameters.
  • Versatility: Integrated with LoRA technology, TCD can be directly applied to various models (including custom community models, styled LoRAs, ControlNet, and IP-Adapter) that share the same backbone, as demonstrated in the Usage section.
  • Avoiding Mode Collapse: TCD achieves few-step generation without the need for adversarial training, thus circumventing mode collapse caused by the GAN objective. In contrast to the concurrent work SDXL-Lightning, which relies on Adversarial Diffusion Distillation, TCD can synthesize results that are more realistic and slightly more diverse, without the presence of "Janus" artifacts.
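
As a quick illustration of the gamma (eta) control mentioned in the list above, the sketch below renders the same prompt and seed at several eta values so the resulting level of detail can be compared directly. This is a sketch only; it assumes the same setup as the Text-to-Image example in the Usage section, and the prompt and eta values are arbitrary.

import torch
from diffusers import StableDiffusionXLPipeline
from scheduling_tcd import TCDScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")
pipe.fuse_lora()

prompt = "a portrait of an old sailor, detailed skin, dramatic lighting"
for eta in (0.0, 0.3, 0.8):
    image = pipe(
        prompt,
        num_inference_steps=8,
        guidance_scale=0,
        eta=eta,  # `gamma` in the paper; the only thing varied here
        generator=torch.Generator("cuda").manual_seed(0),
    ).images[0]
    image.save(f"tcd_eta_{eta}.png")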

For more information, please refer to our paper Trajectory Consistency Distillation.

Usage

To run the model yourself, you can leverage the 🧨 Diffusers library.

pip install diffusers transformers accelerate peft

Then clone this repository:

git clone https://github.com/jabir-zheng/TCD.git
cd TCD

Here, we demonstrate the applicability of our TCD LoRA to various models, including SDXL, SDXL Inpainting, the community model Animagine XL, the styled LoRA Papercut, the pretrained Depth and Canny ControlNets, and IP-Adapter, to accelerate high-quality image generation in a few steps.

Text-to-Image generation

import torch
from diffusers import StableDiffusionXLPipeline
from scheduling_tcd import TCDScheduler 

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "Beautiful woman, bubblegum pink, lemon yellow, minty blue, futuristic, high-detail, epic composition, watercolor."

image = pipe(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=0,
    # Eta (referred to as `gamma` in the paper) is used to control the stochasticity in every step.
    # A value of 0.3 often yields good results.
    # We recommend using a higher eta when increasing the number of inference steps.
    eta=0.3, 
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]

Inpainting

import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid
from scheduling_tcd import TCDScheduler 

device = "cuda"
base_model_id = "diffusers/stable-diffusion-xl-1.0-inpainting-0.1"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = AutoPipelineForInpainting.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url).resize((1024, 1024))
mask_image = load_image(mask_url).resize((1024, 1024))

prompt = "a tiger sitting on a park bench"

image = pipe(
  prompt=prompt,
  image=init_image,
  mask_image=mask_image,
  num_inference_steps=8,
  guidance_scale=0,
  eta=0.3, # Eta (referred to as `gamma` in the paper) is used to control the stochasticity in every step. A value of 0.3 often yields good results.
  strength=0.99,  # make sure to use `strength` below 1.0
  generator=torch.Generator(device=device).manual_seed(0),
).images[0]

grid_image = make_image_grid([init_image, mask_image, image], rows=1, cols=3)

Versatile for Community Models

import torch
from diffusers import StableDiffusionXLPipeline
from scheduling_tcd import TCDScheduler 

device = "cuda"
base_model_id = "cagliostrolab/animagine-xl-3.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "A man, clad in a meticulously tailored military uniform, stands with unwavering resolve. The uniform boasts intricate details, and his eyes gleam with determination. Strands of vibrant, windswept hair peek out from beneath the brim of his cap."

image = pipe(
    prompt=prompt,
    num_inference_steps=8,
    guidance_scale=0,
    # Eta (referred to as `gamma` in the paper) is used to control the stochasticity in every step.
    # A value of 0.3 often yields good results.
    # We recommend using a higher eta when increasing the number of inference steps.
    eta=0.3, 
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]

Combine with styled LoRA

import torch
from diffusers import StableDiffusionXLPipeline
from scheduling_tcd import TCDScheduler 

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"
styled_lora_id = "TheLastBen/Papercut_SDXL"

pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id, adapter_name="tcd")
pipe.load_lora_weights(styled_lora_id, adapter_name="style")
pipe.set_adapters(["tcd", "style"], adapter_weights=[1.0, 1.0])

prompt = "papercut of a winter mountain, snow"

image = pipe(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=0,
    # Eta (referred to as `gamma` in the paper) is used to control the stochasticity in every step.
    # A value of 0.3 often yields good results.
    # We recommend using a higher eta when increasing the number of inference steps.
    eta=0.3, 
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]

Compatibility with ControlNet

Depth ControlNet

import torch
import numpy as np
from PIL import Image
from transformers import DPTFeatureExtractor, DPTForDepthEstimation
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image, make_image_grid
from scheduling_tcd import TCDScheduler 

device = "cuda"
depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to(device)
feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas")

def get_depth_map(image):
    image = feature_extractor(images=image, return_tensors="pt").pixel_values.to(device)
    with torch.no_grad(), torch.autocast(device):
        depth_map = depth_estimator(image).predicted_depth

    depth_map = torch.nn.functional.interpolate(
        depth_map.unsqueeze(1),
        size=(1024, 1024),
        mode="bicubic",
        align_corners=False,
    )
    depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_map = (depth_map - depth_min) / (depth_max - depth_min)
    image = torch.cat([depth_map] * 3, dim=1)

    image = image.permute(0, 2, 3, 1).cpu().numpy()[0]
    image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8))
    return image

base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet_id = "diffusers/controlnet-depth-sdxl-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

controlnet = ControlNetModel.from_pretrained(
    controlnet_id,
    torch_dtype=torch.float16,
    variant="fp16",
).to(device)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    base_model_id,
    controlnet=controlnet,
    torch_dtype=torch.float16,
    variant="fp16",
).to(device)
pipe.enable_model_cpu_offload()

pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "stormtrooper lecture, photorealistic"

image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png")
depth_image = get_depth_map(image)

controlnet_conditioning_scale = 0.5  # recommended for good generalization

image = pipe(
    prompt, 
    image=depth_image, 
    num_inference_steps=4, 
    guidance_scale=0,
    eta=0.3, # A parameter (referred to as `gamma` in the paper) is used to control the stochasticity in every step. A value of 0.3 often yields good results.
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]

grid_image = make_image_grid([depth_image, image], rows=1, cols=2)

Canny ControlNet

import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image, make_image_grid
from scheduling_tcd import TCDScheduler 

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet_id = "diffusers/controlnet-canny-sdxl-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

controlnet = ControlNetModel.from_pretrained(
    controlnet_id,
    torch_dtype=torch.float16,
    variant="fp16",
).to(device)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    base_model_id,
    controlnet=controlnet,
    torch_dtype=torch.float16,
    variant="fp16",
).to(device)
pipe.enable_model_cpu_offload()

pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "ultrarealistic shot of a furry blue bird"

canny_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png")

controlnet_conditioning_scale = 0.5  # recommended for good generalization

image = pipe(
    prompt, 
    image=canny_image, 
    num_inference_steps=4, 
    guidance_scale=0,
    eta=0.3, # A parameter (referred to as `gamma` in the paper) is used to control the stochasticity in every step. A value of 0.3 often yields good results.
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]

grid_image = make_image_grid([canny_image, image], rows=1, cols=2)

Compatibility with IP-Adapter

⚠️ Please refer to the official repository for instructions on installing dependencies for IP-Adapter.

import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image, make_image_grid

from ip_adapter import IPAdapterXL
from scheduling_tcd import TCDScheduler 

device = "cuda"
base_model_path = "stabilityai/stable-diffusion-xl-base-1.0"
image_encoder_path = "sdxl_models/image_encoder"
ip_ckpt = "sdxl_models/ip-adapter_sdxl.bin"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_path, 
    torch_dtype=torch.float16, 
    variant="fp16"
)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

ip_model = IPAdapterXL(pipe, image_encoder_path, ip_ckpt, device)

ref_image = load_image("https://raw.githubusercontent.com/tencent-ailab/IP-Adapter/main/assets/images/woman.png").resize((512, 512))

prompt = "best quality, high quality, wearing sunglasses"

image = ip_model.generate(
    pil_image=ref_image, 
    prompt=prompt,
    scale=0.5,
    num_samples=1, 
    num_inference_steps=4, 
    guidance_scale=0,
    eta=0.3, # A parameter (referred to as `gamma` in the paper) is used to control the stochasticity in every step. A value of 0.3 often yields good results.
    seed=0,
)[0]

grid_image = make_image_grid([ref_image, image], rows=1, cols=2)

Local Gradio Demo

Install the gradio library first,

pip install gradio

then the local Gradio demo can be launched with:

python gradio_app.py

Colab Demo

Open In Colab

We provide a Colab demo for text-to-image generation with TCD-LoRA.

Related and Concurrent Works

  • Luo S, Tan Y, Huang L, et al. Latent consistency models: Synthesizing high-resolution images with few-step inference. arXiv preprint arXiv:2310.04378, 2023.
  • Luo S, Tan Y, Patil S, et al. LCM-LoRA: A universal stable-diffusion acceleration module. arXiv preprint arXiv:2311.05556, 2023.
  • Lu C, Zhou Y, Bao F, et al. DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. Advances in Neural Information Processing Systems, 2022, 35: 5775-5787.
  • Lu C, Zhou Y, Bao F, et al. DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models. arXiv preprint arXiv:2211.01095, 2022.
  • Zhang Q, Chen Y. Fast sampling of diffusion models with exponential integrator. ICLR 2023, Kigali, Rwanda, May 1-5, 2023.
  • Kim D, Lai C H, Liao W H, et al. Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion. ICLR 2024.

Citation

@misc{zheng2024trajectory,
      title={Trajectory Consistency Distillation}, 
      author={Jianbin Zheng and Minghui Hu and Zhongyi Fan and Chaoyue Wang and Changxing Ding and Dacheng Tao and Tat-Jen Cham},
      year={2024},
      eprint={2402.19159},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgments

This codebase heavily relies on the 🤗Diffusers library and LCM. Additionally, we employ several finetuned models of Stable Diffusion from the community to evaluate the versatility of TCD, including SDXL Inpainting Model, Animagine XL, Papercut LoRA, Depth Controlnet, Canny Controlnet, IP-Adapter. We also thank @hysts for creating Hugging Face Space and providing free GPU resources to launch our online demo.

Thanks for their contributions!

tcd's People

Contributors

jabir-zheng, jetthu, mhh0318


tcd's Issues

About Training Schedule

Hi,

Thanks for your work. In the training phase, which interval is used to sample 'n' in Algorithm 1: 1 or 20? In other words, can 'n' be 0, 1, 2, ..., 978, 979 as in CM, or only 0, 19, 39, ..., 959, 979 as in LCM?

implementation in ComfyUI framework

Thanks for your excellent contribution!

I am attempting to implement this for the ComfyUI community; however, there are some differences in structure between Diffusers (more explicit time scheduling) and ComfyUI (an array of sigmas and abstracted scheduling).

The tricky part is how to do the adjusted timestep calculation using eta/gamma.
Something is incorrect, so the image just goes black after the first step when eta > 0.

I wonder whether someone on your team would be able to look at my code, please (or anyone else here on GitHub who is good with Diffusers and ComfyUI)?
https://github.com/dfl/comfyui-tcd-scheduler
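
For anyone following this thread, here is a rough, illustrative sketch of the gamma-sampling step as described in the paper: denoise from the current timestep down to an intermediate timestep s = (1 - eta) * t_prev, then diffuse back up to t_prev. This is an assumption-laden simplification (epsilon parameterization, a precomputed alpha_bar table, integer timesteps), not the exact TCDScheduler or ComfyUI implementation.

import torch

def tcd_gamma_step(model_output, sample, t, t_prev, alpha_bar, eta, generator=None):
    # Predict x0 from the current noisy sample (epsilon-prediction assumed).
    x0_pred = (sample - (1 - alpha_bar[t]).sqrt() * model_output) / alpha_bar[t].sqrt()

    # Consistency jump to the intermediate timestep s = (1 - eta) * t_prev.
    s = int((1 - eta) * t_prev)
    x_s = alpha_bar[s].sqrt() * x0_pred + (1 - alpha_bar[s]).sqrt() * model_output

    if eta > 0 and t_prev > 0:
        # Stochastic part: diffuse forward again from s to t_prev with fresh noise.
        noise = torch.randn(sample.shape, generator=generator, dtype=sample.dtype, device=sample.device)
        ratio = alpha_bar[t_prev] / alpha_bar[s]
        return ratio.sqrt() * x_s + (1 - ratio).sqrt() * noise
    return x_s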

Any plans for SD 1.5 LoRA?

Thank you for publishing this. The results are quite awesome, and it's great to see such a 10x improvement in detail!

A typo error

Hi, I find that "eta=0" means deterministic sampling? Maybe it should be swapped?

TCD LoRA is higher rank than it needs to be

Thanks for this amazing sampler, it's way better than LCM in my estimation.

However, the file size of the LoRA could be much smaller with no ill effect.

  • Full rank: file size 375.6 MB
  • Resized to rank 4: file size 23.8 MB, average Frobenius norm retention 91.92% (std 0.101)
  • Resized to rank 2: file size 12.1 MB, average Frobenius norm retention 88.57% (std 0.140)
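
For reference, this kind of rank reduction amounts to a truncated SVD of each LoRA delta. Below is a generic, illustrative sketch (the function name, the assumed (up, down) layout, and the retention metric are hypothetical; existing LoRA resize utilities operate per layer on the actual state dict):

import torch

def truncate_lora_pair(up, down, rank):
    # up: (out_dim, r), down: (r, in_dim); the LoRA delta is delta_W = up @ down.
    delta = up.float() @ down.float()
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)

    # Keep only the top-`rank` singular directions and split sqrt(S) between the factors.
    new_up = U[:, :rank] * S[:rank].sqrt()
    new_down = S[:rank].sqrt().unsqueeze(1) * Vh[:rank]

    # Fraction of the Frobenius norm retained by the truncated delta.
    retention = (S[:rank].square().sum() / S.square().sum()).sqrt()
    return new_up, new_down, retention.item()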


Inpainting results are blurry

Hi,
The text-to-image results look great, but the inpainting results are extremely blurry (using the default parameters).

I faced the same effect when I tried to train LCM for the inpainting task.

Do you have any intuition as to why this happens, or any suggestions to mitigate it?


Number of ODE Solver Steps in TCD training

Hi, awesome work! I had a question with regards to the distillation algorithm for TCD (Algorithm 1/2 in the paper, particularly w.r.t. Eq. 21). In the original LCM paper (to my understanding), the skipping step $k$ denotes the size of the single-step ODE solve used by the teacher model to solve from $t_{n+k}$ to $t_n$ (e.g. solving from 950 to 930 using a single step that is sized $k=20$). However, in the paragraph before equation 21 it is noted that $\Phi^{k}$ denotes $k$ "discretization steps" of a one-step ODE solver. Thus, my question is: do you use multiple calls of the ODE solver (with the teacher model) to solve to some timestep between $t_{n+k}$ and $t_m$ (e.g. solving the integral with 2 single-step ODE solves thus two calls to the teacher model), or are you still only using a single ODE solver call across that interval (similar to LCM)? If so, how many? Thank you!
