vchitect / vbench
[CVPR2024 Highlight] VBench - We Evaluate Video Generation
Home Page: https://vchitect.github.io/VBench-project/
License: Apache License 2.0
Hello, do you have a plan to release the code for the leaderboard?
Hi, great work!
Will the sampled videos from different video generation models be released? Especially the videos generated by Gen-2 and Pika.
Thanks!
Hi,
Thank you so much for your efforts in putting together the comprehensive benchmarks!
Could you provide detailed instructions for submitting evaluation results? I obtained 16 *eval_results.json
files after evaluating all the dimensions, but it seems that I cannot submit these individual JSON
files to the leaderboard.
Thanks,
Jiachen
Could you give an example of the sample_per_video function? Thanks!
Hi,
When the input directory contains a single video asset, this line will throw an exception related to division by 0:
sim_per_video = sim / (len(video_list) - 1)
I believe it should be
sim_per_video = sim / len(video_list)
Can you confirm that's the case?
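A guarded version of that averaging step might look like the sketch below (the names `sim` and `video_list` are taken from the issue; the single-video guard is an assumed fix, not the repository's actual code, and whether the correct denominator is `len(video_list)` or `len(video_list) - 1` depends on how `sim` is accumulated upstream):

```python
def average_pairwise_sim(sim, video_list):
    """Average a summed pairwise similarity over the other videos.

    With a single video there are no pairs to compare against, so
    return 0.0 instead of dividing by zero (assumed fix, not VBench code).
    """
    if len(video_list) <= 1:
        return 0.0
    return sim / (len(video_list) - 1)
```
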
Hello, can you provide the model weights and project for reproducing the results in leaderboard?
Thank you so much.
When installing vbench with pip install, this error popped up:
ERROR: Could not find a version that satisfies the requirement detectron2==0.6 (from versions: none)
ERROR: No matching distribution found for detectron2==0.6
can't pip install detectron2==0.6
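The "no matching distribution" error happens because detectron2 is not published on PyPI, so pip cannot resolve any version of it. The workaround recommended by detectron2's own installation docs is to install it from the GitHub source (PyTorch must already be installed):

```shell
# detectron2 is not on PyPI; install it from source instead
# (requires torch to be installed in the environment first)
pip install 'git+https://github.com/facebookresearch/detectron2.git'
```
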
AssertionError: dimensions : {'background_consistency'} not supported for custom input
Currently, VBench can evaluate the following list of dimensions:
['subject_consistency', 'background_consistency', 'temporal_flickering', 'motion_smoothness', 'dynamic_degree', 'aesthetic_quality', 'imaging_quality', 'object_class', 'multiple_objects', 'human_action', 'color', 'spatial_relationship', 'scene', 'temporal_style', 'appearance_style', 'overall_consistency']
I ran some of them on my own videos; however, the score range for each dimension is different.
For one of my videos, I got 5 scores:
subject consistency: 10.982122957706451
motion_smoothness: 0.9960492387493192
dynamic degree: false
aesthetic_quality: 0.6582092642784119
imaging_quality: 72.89873886108398
while the overall scores are as follows:
subject consistency: 0.9861730885776606
motion_smoothness: 0.9909714810295909
dynamic degree: 0.16666666666666666
aesthetic_quality: 0.6556713245809078
imaging_quality: 0.7093512528141342
Can you explain more about the score range for each video and what it means?
It would be helpful to provide some example videos as score anchors for each dimension.
For example, if the range of aesthetic_quality is 0-1,
I would like to know what 0.1, 0.5, and 0.9 look like respectively.
Thanks!
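The per-video numbers above (e.g. imaging_quality near 72.9 versus an overall 0.709) suggest the leaderboard values are normalized into [0, 1] per dimension. A plausible min-max normalization sketch follows; the `lo`/`hi` anchors here are illustrative placeholders, not VBench's actual normalization constants, so this will not reproduce the leaderboard numbers exactly:

```python
def normalize(raw, lo, hi):
    """Min-max normalize a raw dimension score into [0, 1], clamped.

    lo/hi are illustrative anchors (e.g. 0-100 for a quality metric),
    not VBench's real per-dimension constants.
    """
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))
```

For instance, a raw imaging_quality of 72.9 on an assumed 0-100 scale would map to roughly 0.729.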
Is there a model.txt file missing in the umt_model folder? Do I need to download it?
I understand we can run inference on customized video (as in #7).
However, can we run inference on both customized video and prompt?
I wrote a script for evaluation, but I encountered an issue:
1. Script:
from vbench import VBench
import torch
device = torch.device("cuda")
my_VBench = VBench(device, "VBench_full_info.json", "evaluation_results")
my_VBench.evaluate(
videos_path = "./videocrafter/spatial_relationship", #there are several videos in this directory
name = "spatial_relationship",
dimension_list = ["spatial_relationship"],
)
2. Issue:
RuntimeWarning: invalid value encountered in scalar divide
ret = ret.dtype.type(ret / rcount)
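That RuntimeWarning is what NumPy emits when it averages an empty array, which usually means no videos in the directory were matched for the dimension (for example, because the filenames do not match the expected prompts). A minimal sketch of the failure mode and a guard (my illustration, not VBench code):

```python
import numpy as np

def safe_mean(scores):
    """Return the mean of per-video scores, or None when nothing matched.

    np.mean([]) yields NaN and raises the 'invalid value encountered in
    scalar divide' RuntimeWarning seen above, so an empty matched-video
    list is almost certainly the real problem.
    """
    if len(scores) == 0:
        return None
    return float(np.mean(scores))
```
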
Hi, thank you for building this benchmark. I wonder why SVD is not evaluated?
Hi, the path "prompts/prompts_per_dimension" contains prompts for only some of the evaluation dimensions. If I want to evaluate the "background_consistency" dimension, there is no corresponding prompt text file (e.g. background_consistency.txt) for me to generate videos from. What can I do to perform evaluation on "background_consistency"?
Hi,
Does VBench have limitations on the video size and the number of frames for generated videos? What settings for video size and number of frames were used in the evaluation shown in Table 1 of your paper?
{
  "imaging_quality": [
    0.6686933368155237,
    [
      { "video_path": "/data/*.mp4", "video_results": 56.49358892440796 },
This is part of the result. I can see that video_results is the imaging_quality score, but what does '0.6686933368155237' means?
When running !vbench evaluate --videos_path "..." --dimension "....", I encounter a file-not-found error,
and the link you put in the repo for VBench_full_info.json turns into a 404.
Thank you very much!
Hi,
Thanks for your great work. I like the way you present the results and the style of the drawings. So I am wondering if you could share the code that generated Figure 6?
Huge thanks.
First of all, thank you for creating this benchmark. It is an important contribution. I see both Pika and Gen-2 results are quite old. I'm wondering if it's possible to update those numbers, just to see how far open-source models are behind the closed-source ones. Thanks!
As the title says.
I ran the following:
vbench evaluate --videos_path ./demo/videos --dimension temporal_flickering
and got the following:
args: Namespace(func=<function evaluate at 0x7f3b34fd6950>, output_path='./evaluation_results/', full_json_dir='./VBench_full_info.json', videos_path='./demo/videos', dimension='temporal_flickering', load_ckpt_from_local=None, read_frame=None)
start evaluation
Evaluation meta data saved to ./evaluation_results/temporal_flickering_full_info.json
0it [00:00, ?it/s]
It seems it cannot find the videos under the directory.
It's a great effort from the authors. Can anyone give a step-by-step explanation of how to run the project? Your effort will be highly appreciated.
Hello, i'm trying to run inference. I've installed the package. I've tried the code here.
I have a new directory of videos, called "videos".
(venv) (base) yonatan:~/VBench$ ls videos/
Iron_Man.mp4 birthday.mp4 lavie_human_action_full_info.json skateboarding_dog.mp4
When I try to run
vbench evaluate --videos_path "videos" --dimension "human_action"
I receive an error indicating that no data is found.
When I try through the code, I also get a long list of missing videos:
>>> from vbench import VBench
>>> my_VBench = VBench('cuda', 'VBench_full_info.json', 'videos')
>>> my_VBench.evaluate(videos_path='videos', name='my_test', dimension_list=['human_action'])
WARNING!!! This required video is not found! Missing benchmark videos can lead to unfair evaluation result. The missing video is: A person is riding a bike-0.mp4
...
...
I guess that your code only supports your own pre-evaluated videos, and the instructions do not yet cover inference on new videos?
The question is how I can run the evaluation for a new video + description. Thank you very much 🙏
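The missing-video warnings above suggest VBench matches files by prompt-derived names such as "A person is riding a bike-0.mp4". A sketch for copying your own clips into that pattern follows; the "{prompt}-{index}.mp4" convention is inferred from the warning message, not from documented API, so verify it against the repository's sampling instructions:

```python
import os
import shutil

def rename_for_vbench(src_dir, dst_dir, prompt, filenames):
    """Copy videos into dst_dir named '{prompt}-{i}.mp4' so VBench can
    match them against prompts in VBench_full_info.json (inferred
    convention, based on the missing-video warning format)."""
    os.makedirs(dst_dir, exist_ok=True)
    renamed = []
    for i, name in enumerate(filenames):
        dst = os.path.join(dst_dir, f"{prompt}-{i}.mp4")
        shutil.copy(os.path.join(src_dir, name), dst)
        renamed.append(dst)
    return renamed
```
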
I ran this command:
vbench evaluate --videos_path "/home/notebook/code/group/hkx/video_tasks/dover/DOVER/demo" --dimension "motion_smoothness"
and it produced the output below:
args: Namespace(func=<function evaluate at 0x7fba11448f40>, output_path='./evaluation_results/', full_json_dir='./VBench_full_info.json', videos_path='/home/notebook/code/group/hkx/video_tasks/dover/DOVER/demo', dimension='motion_smoothness', load_ckpt_from_local=None, read_frame=None)
start evaluation
Evaluation meta data saved to ./evaluation_results/motion_smoothness_full_info.json
Loading [networks.AMT-S.Model] from [/home/oppoer/.cache/vbench/amt_model/amt-s.pth]...
0it [00:00, ?it/s]
/home/notebook/data/group/hkx/vbench/lib/python3.12/site-packages/numpy/core/fromnumeric.py:3504: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/home/notebook/data/group/hkx/vbench/lib/python3.12/site-packages/numpy/core/_methods.py:129: RuntimeWarning: invalid value encountered in scalar divide
  ret = ret.dtype.type(ret / rcount)
Evaluation results saved to ./evaluation_results/motion_smoothness_eval_results.json
done
But I got this result in motion_smoothness_eval_results.json:
{ "motion_smoothness": [ NaN, [] ] }
Can you tell me where the problem is?
Hello, author. After reading your paper, I have some doubts about certain details. May I directly input a video clip (without any prompts) for evaluation? Is your method applicable to videos generated from images? Looking forward to your response.
Hello, is there a unified config for the evaluation data (mp4 files), such as fps, duration, resolution, etc.?
And do different settings (such as duration and resolution) of the mp4 files influence the final evaluation results from VBench? I'm not sure about this.
Hi, there is an error using python script for evaluation:
python3.10/site-packages/vbench/imaging_quality.py", line 30, in technical_quality
preprocess_mode = kwargs['imaging_quality_preprocessing_mode']
KeyError: 'imaging_quality_preprocessing_mode'
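The KeyError means the imaging_quality dimension expects an imaging_quality_preprocessing_mode argument to be passed through evaluate's kwargs. The mode names I recall from the VBench README include values like 'longer' and 'shorter', but verify them against the repo; a defensive lookup that mirrors the failing line might look like this (my sketch, not the library's code):

```python
def get_preprocess_mode(kwargs):
    """Defensive version of the failing lookup: fall back to a default
    instead of raising KeyError when the kwarg is absent.

    'longer' is an assumed default; check VBench's README for the
    actual accepted mode names.
    """
    return kwargs.get('imaging_quality_preprocessing_mode', 'longer')
```
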
Thanks for your great work, I want to know where can I find those generated videos mentioned in the paper?
When evaluating aesthetic_quality on my own videos, I always get the error 'CUDA out of memory', which prevents me from batch testing. Is there any problem? According to my understanding, the model should only load one video at a time.
Example:
python evaluate.py --dimension aesthetic_quality --videos_path ../test --mode=custom_input
The first video is OK. However, for the third one, torch.cuda.OutOfMemoryError: CUDA out of memory
occurs.