Instead of loading all of the frames into VRAM, this fork breaks the video into chunks and sends one chunk at a time to the video card for processing. All frames are still loaded into regular RAM, but most people have far more system RAM than VRAM. The only significant change is in `test.py`.
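Conceptually, the change looks something like the sketch below. This is a minimal illustration of the chunking idea, not the actual code in `test.py`; the names `model`, `frames`, and `chunk_size` are hypothetical, and the real script also has to carry temporal context (neighboring and reference frames) across chunk boundaries.

```python
import torch

def inpaint_in_chunks(model, frames, chunk_size=20, device="cuda"):
    """Run inference one chunk at a time so only `chunk_size` frames
    ever occupy VRAM; the full frame list stays in system RAM."""
    results = []
    for start in range(0, len(frames), chunk_size):
        chunk = frames[start:start + chunk_size]            # frames still in CPU memory
        batch = torch.stack(chunk).unsqueeze(0).to(device)  # move only this chunk to the GPU
        with torch.no_grad():
            out = model(batch)
        results.extend(out.squeeze(0).cpu())                # copy outputs back to system RAM
        del batch, out
        torch.cuda.empty_cache()                            # release VRAM before the next chunk
    return results
```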
This repository contains a fork of the official implementation of the following paper:
Towards An End-to-End Framework for Flow-Guided Video Inpainting
Zhen Li#, Cheng-Ze Lu#, Jianhua Qin, Chun-Le Guo*, Ming-Ming Cheng
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
[Paper] [Demo Video (YouTube)] [Demo Video (Bilibili)] [MindSpore Implementation] [Project Page (TBD)] [Poster (TBD)]
- 2022.05.15: We release E2FGVI-HQ, which can handle videos of arbitrary resolution. The model generalizes well to much higher resolutions even though it was trained only on 432x240 videos, and it also outperforms our original model on both the PSNR and SSIM metrics. :link: Download links: [Google Drive] [Baidu Disk] 🎥 Demo video: [YouTube] [Bilibili]
- 2022.04.06: Our code is publicly available.
- SOTA performance: The proposed E2FGVI achieves significant improvements on all quantitative metrics in comparison with SOTA methods.
- High efficiency: Our method processes 432 × 240 videos at 0.12 seconds per frame on a Titan XP GPU, which is nearly 15× faster than previous flow-based methods. Besides, our method has the lowest FLOPs among all compared SOTA methods.
- Clone repo

  ```shell
  git clone https://github.com/MCG-NKU/E2FGVI.git
  ```

- Create conda environment and install dependencies

  ```shell
  conda create -n e2fgvi python=3.9
  conda activate e2fgvi
  pip install mmcv==2.1.0 -f https://download.openmmlab.com/mmcv/dist/cu117/torch2.0/index.html
  pip install .
  ```
- Python >= 3.7
- PyTorch >= 1.5
- mmcv-full (installed following the pipeline above)
- CUDA 11.1 (it will also work on CUDA 9.2+, but you will have to update the torch and mmcv wheel sources to a version compatible with your graphics card's CUDA)
Before performing the following steps, please download the pretrained models from the original readme [Original Readme]. The following links are copied from that readme for your convenience. There is no guarantee that they will still be valid when you read this, which is why we ask you to download the models from the original authors' readme.
| Model | 🔗 Download Links | Support Arbitrary Resolution? | PSNR / SSIM / VFID (DAVIS) |
| --- | --- | --- | --- |
| E2FGVI | [Google Drive] [Baidu Disk] | ❌ | 33.01 / 0.9721 / 0.116 |
| E2FGVI-HQ | [Google Drive] [Baidu Disk] | ⭕ | 33.06 / 0.9722 / 0.117 |
Then, unzip the file and place the models into the `release_model` directory.
The directory structure will be arranged as:
```
release_model
   |- E2FGVI-CVPR22.pth
   |- E2FGVI-HQ-CVPR22.pth
   |- i3d_rgb_imagenet.pt (for evaluating VFID metric)
   |- README.md
```
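If you want to sanity-check the downloaded checkpoints before running inference, a minimal sketch like the following should work. It assumes the layout above, and it assumes the two model files are plain state dicts that `torch.load` can deserialize:

```python
import os
import torch

# Hypothetical sanity check: paths assume the release_model layout above.
for name in ["E2FGVI-CVPR22.pth", "E2FGVI-HQ-CVPR22.pth"]:
    path = os.path.join("release_model", name)
    state = torch.load(path, map_location="cpu")  # load on CPU; no GPU required
    print(f"{name}: loaded, {len(state)} top-level entries")
```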
We provide two examples in the `examples` directory. Run the following commands to enjoy them:
```shell
# The first example (using split video frames)
python test.py --model e2fgvi (or e2fgvi_hq) --video examples/tennis --mask examples/tennis_mask --ckpt release_model/E2FGVI-CVPR22.pth (or release_model/E2FGVI-HQ-CVPR22.pth)

# The second example (using mp4 format video)
python test.py --model e2fgvi (or e2fgvi_hq) --video examples/schoolgirls.mp4 --mask examples/schoolgirls_mask --ckpt release_model/E2FGVI-CVPR22.pth (or release_model/E2FGVI-HQ-CVPR22.pth)
```
The inpainted video will be saved in the `results` directory.
Please prepare your own mp4 video (or split frames) and frame-wise masks if you want to test more cases.
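If you are preparing inputs from scratch, the sketch below shows one possible way to split a video into frames and write frame-wise masks with OpenCV. Everything in it is an assumption for illustration: the file names, the output directories, and especially the fixed rectangle, which you would replace with real per-frame masks covering the object to remove.

```python
import os
import cv2
import numpy as np

video = cv2.VideoCapture("my_video.mp4")  # hypothetical input video
os.makedirs("my_video_frames", exist_ok=True)
os.makedirs("my_video_mask", exist_ok=True)

idx = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    # Placeholder: fill a fixed rectangle in every frame.
    # In practice, draw a per-frame mask around the object instead.
    cv2.rectangle(mask, (w // 4, h // 4), (w // 2, h // 2), 255, thickness=-1)
    cv2.imwrite(f"my_video_frames/{idx:05d}.png", frame)
    cv2.imwrite(f"my_video_mask/{idx:05d}.png", mask)
    idx += 1
video.release()
```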
Note: E2FGVI always rescales the input video to a fixed resolution (432x240), while E2FGVI-HQ does not change the resolution of the input video. If you want to customize the output resolution, please use the `--set_size` flag and set the values of `--width` and `--height`.
Example:
```shell
# Use this command to output a 720p video
python test.py --model e2fgvi_hq --video <video_path> --mask <mask_path> --ckpt release_model/E2FGVI-HQ-CVPR22.pth --set_size --width 1280 --height 720
```
| Dataset | YouTube-VOS | DAVIS |
| --- | --- | --- |
| Details | For training (3,471) and evaluation (508) | For evaluation (50 in 90) |
| Images | [Official Link] (download train and test all frames) | [Official Link] (2017, 480p, TrainVal) |
| Masks | [Google Drive] [Baidu Disk] (for reproducing paper results) | |
### Evaluation
Run one of the following commands for evaluation:
```shell
# For evaluating E2FGVI model
python evaluate.py --model e2fgvi --dataset <dataset_name> --data_root datasets/ --ckpt release_model/E2FGVI-CVPR22.pth
# For evaluating E2FGVI-HQ model
python evaluate.py --model e2fgvi_hq --dataset <dataset_name> --data_root datasets/ --ckpt release_model/E2FGVI-HQ-CVPR22.pth
```
If you evaluate E2FGVI, you will get the scores as reported in the paper. The scores of E2FGVI-HQ can be found in [Prepare pretrained models].
The scores will also be saved in the `results/<model_name>_<dataset_name>` directory.
Please add the `--save_results` flag for further evaluating the temporal warping error.
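For intuition, the temporal warping error measures how consistent consecutive output frames remain once motion is compensated: lower is smoother. The sketch below is a simplified, hedged illustration using OpenCV's Farneback flow; the official evaluation uses a different flow estimator and protocol, so treat this as a way to build intuition rather than as the repo's metric.

```python
import cv2
import numpy as np

def warping_error(prev_frame, next_frame):
    """Warp next_frame back onto prev_frame using dense optical flow,
    then return the mean squared error (a simplified warping error)."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(next_frame, map_x, map_y, cv2.INTER_LINEAR)
    diff = warped.astype(np.float32) - prev_frame.astype(np.float32)
    return float(np.mean(diff ** 2))
```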
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@inproceedings{liCvpr22vInpainting,
  title={Towards An End-to-End Framework for Flow-Guided Video Inpainting},
  author={Li, Zhen and Lu, Cheng-Ze and Qin, Jianhua and Guo, Chun-Le and Cheng, Ming-Ming},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```
If you have any questions, please feel free to contact us via zhenli1031ATgmail.com or czlu919AToutlook.com.
Licensed under a Creative Commons Attribution-NonCommercial 4.0 International License for non-commercial use only. Any commercial use requires formal permission first.
The original repository is maintained by Zhen Li and Cheng-Ze Lu. This fork is maintained by Teravus.
This code is based on STTN, FuseFormer, Focal-Transformer, and MMEditing.