Comments (6)
The problem was solved after I changed the version of Transformers.
Hi!
I faced the same problem. Could you tell me which version of Transformers you used?
Thanks!
from video-chatgpt.
The root cause can be seen in this issue: huggingface/transformers#24130
Actually, I was wrong. The problem is that the flash_attn monkey patch was not updated to reflect the breaking changes in transformers. To fix this, update the llama_flash_attn_monkey_patch.py
in this repository to match this one: https://github.com/lm-sys/FastChat/blob/dd84d166d7694f0cc0c766e5a811d995f5801c77/fastchat/train/llama_flash_attn_monkey_patch.py
The specific commit with this fix is this one: lm-sys/FastChat@daa9c11
After that, you also need to add a kwarg, padding_mask: Optional[torch.LongTensor] = None,
to the forward signature, like this (in case the FastChat repo hasn't added it yet when you read this):
# ...video_chatgpt/train/llama_flash_attn_monkey_patch.py
from typing import Optional, Tuple

import torch

...
def forward(
    self,
    hidden_states: torch.Tensor,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.Tensor] = None,
    past_key_value: Optional[Tuple[torch.Tensor]] = None,
    output_attentions: bool = False,
    use_cache: bool = False,
    padding_mask: Optional[torch.LongTensor] = None,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
    if output_attentions:
        ...
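For context, a monkey patch like this works by rebinding the attention class's forward method at import time, which is why a signature mismatch with a newer transformers release breaks it. Here is a minimal, self-contained illustration of the technique using a toy class (not the real transformers API; all names here are made up for the example):

```python
from typing import Optional

class Attention:
    """Toy stand-in for transformers' LlamaAttention."""
    def forward(self, hidden_states, attention_mask=None):
        return "vanilla attention"

# Replacement forward that also accepts the padding_mask kwarg
# that newer transformers versions pass to attention modules.
def flash_attn_forward(self, hidden_states, attention_mask=None,
                       padding_mask: Optional[object] = None):
    return "flash attention"

def replace_attn_with_flash_attn():
    # The monkey patch: rebind the class attribute so that every
    # instance, existing or future, picks up the new forward.
    Attention.forward = flash_attn_forward

replace_attn_with_flash_attn()
print(Attention().forward(None, padding_mask=None))  # flash attention
```

If the patched forward is missing a kwarg that the caller passes (here, padding_mask), you get exactly the kind of TypeError this thread describes.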
Hi @GZHU-DVL,
Thank you for your interest in our work. Please make sure that you have followed the documented environment setup process and are using the correct versions of the libraries.
If the issue persists, please share the script and command you are running so we can understand the issue.
I hope this helps. Thanks!
The versions of the libraries are as follows:
torch~=2.0.0
tqdm~=4.65.0
transformers
numpy~=1.23
Pillow~=9.5.0
decord~=0.6.0
gradio~=3.23.0
markdown2~=2.4.8
einops~=0.6.1
requests~=2.30.0
sentencepiece~=0.1.99
protobuf~=4.23.2
accelerate~=0.20.3
accelerate==0.19.0
tokenizers>=0.13.3
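Note that the list above pins accelerate twice, with conflicting constraints (~=0.20.3 and ==0.19.0), so it is worth confirming which versions actually ended up installed. A small stdlib-only sketch for checking that (the package names are just the ones from the list above):

```python
from importlib import metadata

def installed_versions(packages):
    """Map each package name to its installed version, or None if absent."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions

# Check the libraries the flash_attn patch is sensitive to:
print(installed_versions(["transformers", "accelerate", "torch"]))
```

Comparing this output against the pinned requirements makes version-mismatch problems like the one in this thread much easier to spot.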
The command is as follows:
torchrun video_chatgpt/train/train_mem.py \
  --model_name_or_path /gemini/data-2/7b/ \
  --version v1 \
  --data_path /gemini/code/Video-ChatGPT/scripts/video_chatgpt_training.json \
  --video_folder /gemini/data-2/ActivityNet_Train_Video-ChatGPT_Clip-L14_Features/activity_clip-14L_spatio_temporal_356/ \
  --tune_mm_mlp_adapter True \
  --mm_use_vid_start_end \
  --bf16 True \
  --output_dir ./Video-ChatGPT_7B-1.1_Checkpoints \
  --num_train_epochs 3 \
  --per_device_train_batch_size 1 \
  --per_device_eval_batch_size 1 \
  --gradient_accumulation_steps 1 \
  --evaluation_strategy "no" \
  --save_strategy "steps" \
  --save_steps 3000 \
  --save_total_limit 3 \
  --learning_rate 2e-5 \
  --weight_decay 0. \
  --warmup_ratio 0.03 \
  --lr_scheduler_type "cosine" \
  --logging_steps 100 \
  --tf32 True \
  --model_max_length 2048 \
  --gradient_checkpointing True \
  --lazy_preprocess True