
ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing

This is the official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing". The project page is available here.

Overview

ControlVideo incorporates three key components: visual conditions on all frames to amplify the source video's guidance, key-frame attention that aligns all frames with a selected key frame, and temporal attention modules followed by a zero-initialized convolutional layer, for temporal consistency and faithfulness. These components and the corresponding fine-tuned parameters were chosen through a systematic empirical study. Built upon the trained ControlVideo, inference applies DDIM inversion to the source video and then generates the edited video from the target prompt via DDIM sampling.

[Method overview figure]
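
As an illustration of two of these components, here is a minimal PyTorch sketch (not the repository's actual modules): key-frame attention takes its keys and values from one selected frame, and the temporal-attention branch ends in a zero-initialized projection so it initially leaves the pretrained features unchanged. All class and parameter names below are hypothetical.

```python
import torch
import torch.nn as nn


class KeyFrameAttention(nn.Module):
    """Spatial attention where K/V come from a single selected key frame."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor, key_frame: int = 0) -> torch.Tensor:
        # x: (frames, tokens, dim) latent features of one video
        kv = x[key_frame : key_frame + 1].expand_as(x)  # broadcast the key frame
        out, _ = self.attn(query=x, key=kv, value=kv)   # align every frame with it
        return out


class ZeroInitTemporalAttention(nn.Module):
    """Temporal attention followed by a zero-initialized projection
    (the analogue of a 1x1 'zero convolution'), added as a residual."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.zero_proj = nn.Linear(dim, dim)
        nn.init.zeros_(self.zero_proj.weight)
        nn.init.zeros_(self.zero_proj.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (frames, tokens, dim); attend across frames for each spatial token
        t = x.permute(1, 0, 2)                      # (tokens, frames, dim)
        t, _ = self.attn(t, t, t)                   # temporal self-attention
        t = self.zero_proj(t).permute(1, 0, 2)      # zero-initialized output branch
        return x + t                                # starts as identity at init
```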

Main Results

[Qualitative editing results figure]

To Do List

  • Multi-control code organization
  • Support ControlNet 1.1
  • Support attention control
  • More applications, such as image-guided video generation
  • Hugging Face demo
  • More samplers

Environment

conda env create -f environment.yml

The environment is similar to that of Tune-A-Video.

Prepare Pretrained Text-to-Image Diffusion Model

Download Stable Diffusion 1.5 and the ControlNet 1.0 models for canny, HED, depth, and pose, and put them in ./ .
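
If the checkpoints are not yet on disk, one possible way to fetch them is with huggingface_hub; the repository IDs and local directory names below are assumptions and may need to be adapted to the layout main.py expects.

```python
# Hedged download sketch: repo IDs and target directories are assumptions.
from huggingface_hub import snapshot_download

# Base text-to-image model (Stable Diffusion 1.5)
snapshot_download("runwayml/stable-diffusion-v1-5",
                  local_dir="./stable-diffusion-v1-5")

# ControlNet 1.0 checkpoints for the supported control types
for control in ["canny", "hed", "depth", "openpose"]:
    snapshot_download(f"lllyasviel/sd-controlnet-{control}",
                      local_dir=f"./sd-controlnet-{control}")
```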

Quick Start

python main.py --control_type hed --video_path videos/car10.mp4 --source 'a car' --target 'a red car' --out_root outputs/ --max_step 300 

Here, control_type is the type of control, chosen from canny/hed/depth/pose; video_path is the path to the input video; source is the prompt describing the source video; target is the prompt describing the desired edit; max_step is the number of training steps; and out_root is the directory where results are saved.
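
For example, to edit the same clip with depth conditioning instead (the target prompt here is purely illustrative):

python main.py --control_type depth --video_path videos/car10.mp4 --source 'a car' --target 'a car driving in the snow' --out_root outputs/ --max_step 300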

Run More Demos

Download the demo data and put it in videos/.

python run_demos.py

References

If you find this repository helpful, please cite as:

@article{zhao2023controlvideo,
  title={ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing},
  author={Zhao, Min and Wang, Rongzhen and Bao, Fan and Li, Chongxuan and Zhu, Jun},
  journal={arXiv preprint arXiv:2305.17098},
  year={2023}
}

This implementation is based on Tune-A-Video and Video-P2P.
