Great observation, I see the discrepancy you are referring to.
The key question is: what do we do when "no-object" is the most probable class prediction? Do we suppress/filter out such outputs, or do we treat them as predictions for the second most probable class, which is the most probable class that isn't "no-object"?
I'll start by saying that, personally, I don't think there is an obvious answer as to which convention is the "right" one. The choice may ultimately be an empirical/practical question.
The COCO evaluation convention appears to follow the second convention: keep all predictions and assign each prediction the most probable object class label. Note that the assigned class's probability will always be less than or equal to 0.5 whenever "no-object" is actually the most probable class, since the runner-up probability can be at most min(p, 1 − p) ≤ 0.5 under a softmax. We use the code created by the original DETR authors to follow this convention when computing COCO metrics.
In our inference code, we follow the other convention: always assign the most probable class to the prediction, even if it is "no-object". As a result, we can filter out predictions labeled "no-object" rather than treat them as low-probability object class predictions. This is intentional, but we have not empirically studied whether it makes any difference in practice relative to the other convention.
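To make the contrast concrete, here is a minimal sketch of the two conventions over a single prediction's class logits. This is illustrative code, not the actual table-transformer or DETR implementation: the `NO_OBJECT` index, function names, and the assumption that "no-object" is the last logit are all mine.

```python
import math

NO_OBJECT = 3  # hypothetical: index of the "no-object" class (last logit)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def coco_convention(logits):
    """Keep every prediction; label it with the most probable *object* class.

    If "no-object" was actually the argmax, the returned score is <= 0.5.
    """
    probs = softmax(logits)
    obj_probs = probs[:NO_OBJECT]  # drop the "no-object" entry
    label = max(range(len(obj_probs)), key=obj_probs.__getitem__)
    return label, obj_probs[label]

def inference_convention(logits):
    """Label with the overall argmax; return None when that is "no-object",
    signalling the prediction should be filtered out entirely."""
    probs = softmax(logits)
    label = max(range(len(probs)), key=probs.__getitem__)
    if label == NO_OBJECT:
        return None  # suppressed prediction
    return label, probs[label]
```

With logits like `[0.1, 0.2, 0.3, 2.0]`, where "no-object" dominates, the first convention keeps the prediction with a low-confidence object label, while the second discards it, which is exactly the discrepancy discussed above.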
Do you have a preference or an argument for always favoring one convention over the other?
Best,
Brandon
from table-transformer.