
Comments (4)

hanoonaR commented on August 10, 2024

Hi @joshmyersdean,

Thank you for your interest in our work! I'm glad to hear you're considering fine-tuning our model for open-vocabulary and referring segmentation tasks. Given that GLaMM has been pre-trained across a variety of data types, including object detection, region-level captions, object segmentation, and referring expression segmentation, it's well-suited for your applications.

To proceed, ensure your dataset is formatted according to the specifications for the relevant dataset type (for example, see this for open-vocabulary segmentation and this for referring segmentation). If you need specific guidance on preparing your data or have any other questions, please feel free to reach out. We're here to help.

Apologies for the delayed response, and we'll make sure to be more prompt moving forward.


joshmyersdean commented on August 10, 2024

Thank you so much! Do you happen to have a script for fine-tuning or recommendations on which layers to freeze/train?


hanoonaR commented on August 10, 2024

Hi @joshmyersdean,

Thank you for following up! Here are some further details on fine-tuning our model for your specific needs.

We offer two scripts for training: train.py and train_ft.py.

train.py is ideal when dealing with a mix of dataset types, such as a combination of region/bbox data, segmentation data, and captioning data. This script manages the diverse forward passes by sampling data accordingly, facilitating a balanced training regime across different types.

On the other hand, train_ft.py is tailored for focusing on a single data type at a time. It doesn't set steps_per_epoch explicitly but estimates the steps based on the dataset's length, ensuring the model iterates through the entire dataset effectively.
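To give a feel for that estimate (this is a rough sketch, not the exact logic in train_ft.py; all values below are placeholders you would replace with your own dataset and launch settings):

```python
import math

# Placeholder values; substitute your own dataset and launch settings.
dataset_len = 120_000           # e.g., len(train_dataset)
per_device_batch_size = 2       # samples per GPU per forward pass
grad_accum_steps = 10           # gradient accumulation steps
world_size = 8                  # number of GPUs

# Samples consumed per optimizer step across all devices.
effective_batch = per_device_batch_size * grad_accum_steps * world_size

# Steps needed to cover the dataset roughly once per epoch.
steps_per_epoch = math.ceil(dataset_len / effective_batch)
print(f"steps_per_epoch = {steps_per_epoch}")  # 750 with these numbers
```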

For fine-tuning on open-vocabulary segmentation and referring segmentation tasks, train_ft.py would be your go-to script, since both datasets are segmentation-oriented and share the same forward pass.

Regarding layer management during fine-tuning:

We generally freeze the global image encoder (CLIP), the grounding image encoder (SAM encoder), and the LLM during pre-training and fine-tuning. The trainable layers include the Region-encoder, Vision-Language (V-L) projection layers, LoRA LLM layers, the Language-to-Prompt (L-P) projection layer, and the mask decoder.

You can explore several configurations for fine-tuning (a short sketch of toggling these parameter groups follows the list):

  1. Freeze everything except for the V-L projection layer and LLM LoRA layers. This approach focuses on adapting the core interaction between vision and language components to your specific segmentation tasks.
  2. Train only the L-P projection layer and the mask decoder. This method is particularly useful for refining the model's output and improving segmentation performance directly.
  3. Train both the V-L and L-P projection layers. This strategy allows for adjustments in both the initial processing of visual-language information and the final generation of segmentations.
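Here is a minimal PyTorch sketch of switching between these configurations with requires_grad. The attribute names (model.global_image_encoder, model.vl_projection, model.lp_projection, etc.) are placeholders and will not match the repository's actual module names, so check the model definition before adapting it:

```python
import torch.nn as nn

def set_trainable(module: nn.Module, trainable: bool) -> None:
    """Enable or disable gradients for every parameter in a module."""
    for param in module.parameters():
        param.requires_grad = trainable

# `model` is assumed to be the loaded GLaMM model (placeholder attribute names).
# Kept frozen during both pre-training and fine-tuning.
set_trainable(model.global_image_encoder, False)     # CLIP image encoder
set_trainable(model.grounding_image_encoder, False)  # SAM encoder
set_trainable(model.llm, False)                      # base LLM weights

# Configuration 1: adapt the V-L projection plus the LoRA adapters.
set_trainable(model.vl_projection, True)
for name, param in model.llm.named_parameters():
    if "lora_" in name:                              # train LoRA layers only
        param.requires_grad = True

# Configuration 2 (alternative): refine the output path instead.
# set_trainable(model.lp_projection, True)
# set_trainable(model.mask_decoder, True)
```

Whichever configuration you pick, it can help to print the trainable parameter count before launching training to confirm the intended groups are actually unfrozen.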

Feel free to experiment with these configurations to find the best fit for your tasks. If you have further questions or need assistance in setting up your fine-tuning process, don't hesitate to reach out. Thank you!

Best Regards,
Hanoona.


joshmyersdean commented on August 10, 2024

This is extremely helpful! Thank you so much for taking the time.

