Comments (2)
-
The framework relies on the 3D and 2D inputs describing exactly the same scene, so I expect mismatched data to produce less precise tracking results. However, if the time difference is small, it should still perform very well. The problem with different sensor frequencies is misalignment of the 2D and 3D detection boxes during fusion and the 2nd-stage association - nothing else relies on the sensors being synchronised. One way to accommodate this could be to expand the 2D detection boxes - make them bigger in all directions, so they can be matched to 3D boxes that were captured earlier/later. Another option is to change the IoU thresholds for fusion and the 2nd association - the parameters `fusion_iou_threshold` and `leftover_matching_thres` in `configs/params.py`.
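As a concrete illustration of the box-expansion workaround, here is a minimal sketch; `expand_bbox_2d` and the padding value are hypothetical helpers for illustration, not part of the EagerMot codebase.

```python
def expand_bbox_2d(bbox, padding):
    """Grow a 2D box [x1, y1, x2, y2] by `padding` pixels in every
    direction, so it can still overlap a 3D box projected from a
    slightly earlier/later sensor capture."""
    x1, y1, x2, y2 = bbox
    return [x1 - padding, y1 - padding, x2 + padding, y2 + padding]

# A 10x10 box padded by 5 pixels on each side becomes 20x20:
print(expand_bbox_2d([10, 10, 20, 20], 5))  # → [5, 5, 25, 25]
```

Loosening `fusion_iou_threshold` / `leftover_matching_thres` in `configs/params.py` achieves a similar effect without modifying the boxes themselves.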
-
If you are talking about not having both 3D and 2D detections for each frame, then it is possible. 3D-only is enough to update the position of a track; 2D-only is enough to keep the track alive and rely on the Kalman filter predictions. That's one of the major points of this work - given two sources of detections, expect to be able to update tracks with at least one of them. More details are in the paper, section III-B Matching.
Just remember that detections are consumed as a dictionary for each frame, so make sure you return empty lists for frames where only one of the sensors is used. For example, `MOTSequence.load_detections_3d` could return a `defaultdict(list)`.
from eagermot.
No reply to the answer, so I assume there is no follow-up.
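The `defaultdict(list)` suggestion can be sketched as follows; the loader signature and frame names are hypothetical, and only the return type follows the comment above.

```python
from collections import defaultdict

def load_detections_3d(raw_detections):
    """Sketch of a loader in the spirit of MOTSequence.load_detections_3d:
    maps frame name -> list of 3D detections. Because the result is a
    defaultdict(list), a frame with no 3D detections yields an empty list
    instead of raising KeyError, so 2D-only frames are handled."""
    detections = defaultdict(list)
    for frame_name, det in raw_detections:
        detections[frame_name].append(det)
    return detections

# Hypothetical input where frame "0001" has no 3D detections:
dets = load_detections_3d([("0000", "car_a"), ("0002", "ped_b")])
print(dets["0001"])  # → [] (track survives on 2D + Kalman prediction)
```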