Comments (4)
Hi @joshmyersdean,
Thank you for your interest in our work! I'm glad to hear you're considering fine-tuning our model for open-vocabulary and referring segmentation tasks. Given that GLaMM has been pre-trained across a variety of data types, including object detection, region-level captions, object segmentation, and referring expression segmentation, it's well-suited for your applications.
To proceed, ensure your dataset is formatted according to the specifications for the relevant dataset type (for example, see this for open-vocabulary segmentation and this for referring segmentation). If you need specific guidance on preparing your data or have any other questions, please feel free to reach out. We're here to help.
Apologies for the delayed response, and we'll make sure to be more prompt moving forward.
from groundinglmm.
Thank you so much! Do you happen to have a script for fine-tuning or recommendations on which layers to freeze/train?
Hi @joshmyersdean,
Thank you for following up! I'll provide some further details on fine-tuning our model for your specific needs.
We offer two scripts for training: train.py and train_ft.py.
train.py is ideal when dealing with a mix of dataset types, such as a combination of region/bbox data, segmentation data, and captioning data. This script manages the diverse forward passes by sampling data accordingly, facilitating a balanced training regime across different types.
On the other hand, train_ft.py is tailored for focusing on a single data type at a time. It doesn't set steps_per_epoch explicitly but estimates the steps based on the dataset's length, ensuring the model iterates through the entire dataset effectively.
For fine-tuning on open-vocabulary segmentation and referring segmentation tasks, train_ft.py would be your go-to script, since these datasets are primarily segmentation-oriented and share the same forward pass.
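To make the steps-per-epoch estimate concrete, here is a minimal sketch of that calculation. The function name, arguments, and defaults are hypothetical; the actual train_ft.py may compute this differently, so treat it as an illustration of the idea rather than the script's implementation:

```python
import math

def estimate_steps_per_epoch(dataset_len, batch_size,
                             grad_accum_steps=1, world_size=1):
    # One optimizer step consumes batch_size * grad_accum_steps samples
    # on each of the world_size GPUs.
    samples_per_step = batch_size * grad_accum_steps * world_size
    return math.ceil(dataset_len / samples_per_step)

# e.g. a ~17k-sample referring-segmentation set, per-GPU batch of 4,
# gradient accumulation of 2, running on 4 GPUs:
print(estimate_steps_per_epoch(17000, 4, 2, 4))  # 532
```

Rounding up with `math.ceil` ensures the final partial batch still counts as a step, so the model sees the entire dataset once per epoch.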
Regarding layer management during fine-tuning:
We generally freeze the global image encoder (CLIP), the grounding image encoder (SAM encoder), and the LLM during pre-training and fine-tuning. The trainable layers include the Region-encoder, Vision-Language (V-L) projection layers, LoRA LLM layers, the Language-to-Prompt (L-P) projection layer, and the mask decoder.
You can explore several configurations for fine-tuning:
- Freeze everything except for the V-L projection layer and LLM LoRA layers. This approach focuses on adapting the core interaction between vision and language components to your specific segmentation tasks.
- Train only the L-P projection layer and the mask decoder. This method is particularly useful for refining the model's output and improving segmentation performance directly.
- Train both the V-L and L-P projection layers. This strategy allows for adjustments in both the initial processing of visual-language information and the final generation of segmentations.
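The three configurations above can be expressed as sets of parameter-name patterns and applied via `requires_grad`. All prefix strings below are hypothetical placeholders; inspect `model.named_parameters()` on your GLaMM checkpoint to find the real names. A minimal sketch:

```python
# Hypothetical parameter-name prefixes -- check model.named_parameters()
# on your checkpoint for the actual names used by the codebase.
FT_CONFIGS = {
    "vl_and_lora":      ("vl_projection", "lora_"),         # option 1
    "lp_and_decoder":   ("lp_projection", "mask_decoder"),  # option 2
    "both_projections": ("vl_projection", "lp_projection"), # option 3
}

def is_trainable(param_name, config):
    """True if this parameter stays trainable under the chosen config."""
    return any(prefix in param_name for prefix in FT_CONFIGS[config])

# With PyTorch this would be applied as:
#   for name, param in model.named_parameters():
#       param.requires_grad = is_trainable(name, "vl_and_lora")

print(is_trainable("lora_A.default.weight", "vl_and_lora"))     # True
print(is_trainable("sam_encoder.blocks.0.attn", "vl_and_lora")) # False
```

Substring matching on parameter names keeps the freeze/unfreeze logic in one place, so switching between the three experiments is a one-line change.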
Feel free to experiment with these configurations to find the best fit for your tasks. If you have further questions or need assistance in setting up your fine-tuning process, don't hesitate to reach out. Thank you!
Best Regards,
Hanoona.
This is extremely helpful! Thank you so much for taking the time.