Comments (8)
Hi @NIneeeeeem,
I appreciate your interest in our work. Please share the exact steps you followed to reproduce our results. For example, which model weights are you using, and what command are you running for inference and evaluation?
Further, note that the scripts provided in our repository are not tested for batch sizes > 1 and are not guaranteed to work properly. I would highly recommend keeping batch_size = 1. Thank you.
from videogpt-plus.
Hi, here is the command:
CUDA_VISIBLE_DEVICES=0 python eval/mvbench/inference/infer.py --model-path weights/VideoGPT-plus_Phi3-mini-4k/mvbench --model-base weights/Phi-3-mini-128k-instruct --video-folder MVBench/video --question-dir MVBench/json --output-dir MVBench/dual_result3
Weights:
video encoder: VideoGPT-plus/OpenGVLab/InternVideo2-Stage2_1B-224p-f4/InternVideo2-stage2_1b-224p-f4.pt
image encoder: VideoGPT-plus/openai/clip-vit-large-patch14-336
llm: Phi-3-mini-128k-instruct
ckpt: VideoGPT-plus_Phi3-mini-4k/mvbench
Building OpenGVLab/InternVideo2-Stage2_1B-224p-f4/InternVideo2-stage2_1b-224p-f4.pt
missing_keys=[]
Building openai/clip-vit-large-patch14-336
Building mlp2x_gelu
projector_type: mlp2x_gelu
Building mlp2x_gelu
projector_type: mlp2x_gelu
Loading additional VideoGPT+ weights...
Loading LoRA weights...
Merging LoRA weights...
Model is loaded...
load_state_dict: _IncompatibleKeys(missing_keys=[]
Hi @NIneeeeeem,
Thank you for providing the inference command. Please note that our experiments use the Phi-3-mini-4k-instruct base model, not Phi-3-mini-128k-instruct.
Please try replacing the 128K-context model with the 4K-context model; this should resolve the issue. Good luck!
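For reference, the fix amounts to changing only the --model-base argument in the command posted above. A sketch of the corrected invocation (the weights/Phi-3-mini-4k-instruct path is an assumption; point it at wherever the Phi-3-mini-4k-instruct weights are downloaded locally):

```shell
# Same invocation as before; only --model-base is changed from the
# 128K-context model to the 4K-context model (local path assumed).
CUDA_VISIBLE_DEVICES=0 python eval/mvbench/inference/infer.py \
  --model-path weights/VideoGPT-plus_Phi3-mini-4k/mvbench \
  --model-base weights/Phi-3-mini-4k-instruct \
  --video-folder MVBench/video \
  --question-dir MVBench/json \
  --output-dir MVBench/dual_result3
```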
Thank you for your reply!
With Phi-3-mini-4k-instruct, the total Acc achieved is 58.14%.
I have another question: I noticed that in the instruction-tuning phase, subsets from multiple datasets were mixed, such as K710 and SSV2. Is there a regular pattern in how the subsets were divided, or were they randomly selected?
Hi @NIneeeeeem,
These design choices are selected to optimize the training time and performance for both benchmarks.
@mmaaz60 Thank you for your reply. I think I didn't state my question clearly.
For example, SSV2 contains 220,847 videos, of which 168,913 samples form the training set, and 40,000 of these were selected for the instruction-tuning dataset in VideoGPT-plus. I am curious about the basis for this selection.
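Purely as an illustration of the "randomly selected" possibility being asked about, here is a minimal, reproducible way to draw a fixed-size subset with a seeded RNG. The IDs and sizes are stand-ins, and this is not VideoGPT-plus's actual procedure (the authors reuse the MVBench/VideoChat2 splits):

```python
import random

# Stand-in IDs for the 168,913 SSV2 training samples (illustrative only).
train_ids = list(range(168_913))

# A fixed seed makes the 40,000-sample subset reproducible across runs.
rng = random.Random(42)
subset = rng.sample(train_ids, 40_000)  # sampling without replacement

print(len(subset))       # 40000
print(len(set(subset)))  # 40000 (no duplicates)
```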
Hi @NIneeeeeem,
Thanks for the clarification. We follow the splits proposed in MVBench for training VideoChat2. I hope that answers your question.
Thank you, my issue has been resolved.