Comments (10)
@mmaaz60 Sorry for the late response; the caption result is quite simple.
from groundinglmm.
Thank you for your interest in our work. Could you please share the image and corresponding input prompt that you tried to better assist you?
Thanks
@hanoonaR @mmaaz60 Why was this issue closed? You did not give a response.
Please share the original images, not screenshots; our model's responses vary with different images. The paper's examples come from several models, including the full-scope GLaMM model and the GCG model, which is fine-tuned for grounded interleaved captioning. We have not set a limit on response length.
@hanoonaR I have tried many images and am uploading one as an example, including an image from your paper. Perhaps you should try to reproduce the paper's figure with the online demo.
Hi @trouble-maker007 ,
We are not facing any issues in reproducing the results in the paper with the live demo.
@hanoonaR Here is the demo result with ballon.jpg, while the result in the paper is very detailed; I think there is a big gap. In the segmentation example, the river is missing from the output.
@hanoonaR I just changed a few words in the prompt. The meaning barely changed, I think, but the result is quite different; it looks like overfitting.
Let me clarify a few points:
- The GLaMM paper covers a broad range of contributions, including detailed analyses of tasks such as image-level captioning and segmentation. Our open-source release includes both a full-scope model and models fine-tuned for specific applications. The image-captioning results shown in the figure you reference are from a fine-tuned model.
- You are trying phrase grounding in the second example, but the demo model does not support this feature. We have not released that specific model, and accordingly the demo's instructions do not cover its use.
- The generative model is trained on a diverse set of prompts for each task, chosen randomly. This can lead to variations in the output. We encourage you to review the quantitative results presented in both the paper and the codebase to better understand the model's capabilities.

Before raising concerns about reproducibility, please make diligent use of the codebase and the documentation. We can confidently reproduce the results and are happy to assist with any genuine issues.
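The prompt-sampling behavior described above can be sketched roughly as follows. This is a minimal illustration, not GLaMM's actual training code; the template strings and function names here are hypothetical. The point is that several surface phrasings are randomly sampled during training and so map to the same task, while paraphrases outside the sampled pool may be less well covered and yield different outputs.

```python
import random

# Hypothetical prompt templates for the detailed-captioning task.
# During training, one phrasing is sampled at random per example,
# so multiple surface forms are associated with the same task.
CAPTION_TEMPLATES = [
    "Could you please give me a detailed description of the image?",
    "Can you provide a thorough description of this image?",
    "Describe the image in detail.",
]

def sample_training_prompt(rng: random.Random) -> str:
    """Pick one template at random, mimicking training-time sampling."""
    return rng.choice(CAPTION_TEMPLATES)
```

Under this scheme, a user prompt that exactly or closely matches a trained template sits in well-covered input space, whereas a novel paraphrase (even one differing by a single word) may not, which is one plausible source of the output variation observed in the demo.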
@hanoonaR I still don't fully agree with your explanation for the second case. I only changed one word in your demo example from "this" to "the", and the result changed significantly. This seems to indicate that your model has simply memorized the results for this specific prompt and image, rather than having generalization capabilities.