Comments (5)
I see that mm_mlp_adapter is saved in /home/develop/fyy/Video-ChatGPT-main/Video-ChatGPT_7B-1.1_Checkpoints/mm_projector. Is there anything else that needs to be saved, or did I download the wrong package?
from video-chatgpt.
I also have the same problem.
Hi @tianguang2525 @HaotianLiu123,
Thank you for your interest in our work and apologies for the late reply. Were you able to solve the issue?
If not, please provide some more information about the issue, such as the command you are running and where exactly the error occurs. This information would help me reproduce the error on my side and provide further assistance. Thanks.
I had the same error.
27%|██▋ | 3000/11214 [4:55:11<13:22:58, 5.87s/it]Traceback (most recent call last):
File "/home/usr/Video-ChatGPT/video_chatgpt/train/train_mem.py", line 9, in <module>
train()
File "/home/usr/Video-ChatGPT/video_chatgpt/train/train.py", line 828, in train
trainer.train()
File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/transformers/trainer.py", line 1932, in train
return inner_training_loop(
File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/transformers/trainer.py", line 2345, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/transformers/trainer.py", line 2796, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/transformers/trainer.py", line 2879, in _save_checkpoint
self._save_optimizer_and_scheduler(output_dir)
File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/transformers/trainer.py", line 2995, in _save_optimizer_and_scheduler
torch.save(self.optimizer.state_dict(), os.path.join(output_dir, OPTIMIZER_NAME))
File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/torch/serialization.py", line 627, in save
with _open_zipfile_writer(f) as opened_zipfile:
File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/torch/serialization.py", line 501, in _open_zipfile_writer
return container(name_or_buffer)
File "/home/usr/my-envs/vid-chatgpt/lib/python3.10/site-packages/torch/serialization.py", line 472, in __init__
super().__init__(torch._C.PyTorchFileWriter(self.name))
RuntimeError: Parent directory ./Video-ChatGPT_7B-1.1_Checkpoints_Vids/checkpoint-3000 does not exist.
27%|██▋ | 3000/11214 [4:55:15<13:28:24, 5.91s/it]
I am not using the suggested transformers version, but rather transformers==4.42.3. I wonder if that could be causing this issue.
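For what it's worth, the failing line itself is easy to reproduce in isolation: torch.save does not create missing parent directories, so if the checkpoint folder is absent (or gets removed) at save time, it raises exactly the RuntimeError in the traceback. A minimal, self-contained sketch (paths are made up for illustration):

```python
import os
import tempfile

import torch

with tempfile.TemporaryDirectory() as tmp:
    ckpt_dir = os.path.join(tmp, "checkpoint-3000")  # intentionally never created

    # torch.save does not create missing parent directories; this raises the
    # same RuntimeError the traceback shows at the checkpoint step.
    try:
        torch.save({"step": 3000}, os.path.join(ckpt_dir, "optimizer.pt"))
    except RuntimeError as err:
        print("save failed:", err)

    # Creating the directory first makes the identical call succeed.
    os.makedirs(ckpt_dir, exist_ok=True)
    torch.save({"step": 3000}, os.path.join(ckpt_dir, "optimizer.pt"))
    print(os.path.exists(os.path.join(ckpt_dir, "optimizer.pt")))  # True
```

So whatever is going wrong, it is the Trainer's checkpoint-directory handling (which differs across transformers versions) rather than the save call itself, which makes the version mismatch a plausible culprit.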
I have started a fresh environment with the original package versions from requirements.txt and still experience the same issue. This is the script I use for training:
export PYTHONPATH="./:$PYTHONPATH"
python video_chatgpt/train/train_mem.py \
--model_name_or_path /home/usr/Video-ChatGPT/LLaVA-7B-Lightening-v1-1 \
--version v1 \
--data_path /home/usr/Video-ChatGPT/qa_video.json \
--video_folder /home/usr/pkls \
--tune_mm_mlp_adapter True \
--mm_use_img_start_end \
--lazy_preprocess True \
--bf16 True \
--output_dir /home/usr/Video-ChatGPT/Video-ChatGPT_7B-1.1_Checkpoints_Vids_Start_End \
--num_train_epochs 3 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 3000 \
--save_total_limit 3 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 100 \
--tf32 True \
--model_max_length 2048 \
    --gradient_checkpointing True
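Until the underlying cause is found, one stopgap (a sketch, not a root-cause fix) is to monkeypatch torch.save near the top of video_chatgpt/train/train_mem.py so that a missing parent directory is created instead of crashing the run hours in:

```python
import functools
import os

import torch

_original_save = torch.save

@functools.wraps(_original_save)
def _save_with_mkdir(obj, f, *args, **kwargs):
    # When the target is a filesystem path whose parent directory is missing,
    # create the directory instead of letting torch.save raise RuntimeError.
    if isinstance(f, (str, os.PathLike)):
        parent = os.path.dirname(os.fspath(f))
        if parent:
            os.makedirs(parent, exist_ok=True)
    return _original_save(obj, f, *args, **kwargs)

torch.save = _save_with_mkdir
```

This only papers over the symptom (it would also mask a checkpoint folder being deleted mid-save, e.g. by save_total_limit rotation), so matching the transformers version from requirements.txt is still the cleaner path.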
Related Issues (20)
- Question about the semi-automatic dataset creation process HOT 1
- Why is the <video> tag needed in the training json? HOT 1
- How to load the tuned backbone? HOT 1
- Why does "No module named 'video chatgpt'" appear? What should I do next? HOT 3
- How to download the videos? HOT 1
- Single Node Training HOT 3
- GoogleDrive of Clip-Features HOT 3
- Question about the test json of msvd and msrvtt dataset HOT 1
- Can I use bfloat16 when training? HOT 1
- evaluate_activitynet_qa HOT 5
- License HOT 1
- Inference Code and Possible Utilization of Prompts HOT 1
- wondering if the speed is right about activitynet_qa eval
- Inquiry about Costs Associated with Video LLM Benchmarks
- why i use A100 80G to inference so slow?
- How to get the 100k original videos? HOT 2
- Cannot understand choice of mm_hidden_size 1024
- Is the linear layer initialized by llava's linear layer?