qcraftai / simtrack
Exploring Simple 3D Multi-Object Tracking for Autonomous Driving (ICCV 2021)
License: Other
Hello, where is the code for the "Motion Updating Branch" module proposed in the paper? I could not find it. Could you point me to it?
I have a GPU with only 12 GB of video memory, and the process fails with an out-of-memory error when I run with the default config. What should I change in the default config to run your model normally?
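Not an official answer from the authors, just a common workaround: reduce the per-GPU batch size in the config before training. The field name `samples_per_gpu` is an assumption based on det3d-style configs; check your config file for the actual key. A minimal sketch:

```python
def shrink_batch(cfg_data, factor=2):
    """Reduce samples_per_gpu (never below 1) so activations fit on a
    smaller GPU. Halving the batch roughly halves activation memory;
    you may also want to scale the learning rate down accordingly."""
    cfg_data["samples_per_gpu"] = max(1, cfg_data["samples_per_gpu"] // factor)
    return cfg_data

# Example: a default config asking for 4 samples per GPU.
print(shrink_batch({"samples_per_gpu": 4, "workers_per_gpu": 8}))
# {'samples_per_gpu': 2, 'workers_per_gpu': 8}
```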
File "/mnt/data02/wzy/simtrack/det3d/models/detectors/__init__.py", line 3, in <module>
from .point_pillars_tracking import PointPillarsTracking
ModuleNotFoundError: No module named 'det3d.models.detectors.point_pillars_tracking'
I have looked through det3d and can't find it. What should I do?
Will you consider releasing this part of the code? Thanks a lot.
Can you provide the code for drawing continuous tracking results on an image for the nuScenes dataset? Thank you!
Can you tell me where the motion updating happens in your code? I only found the ego-motion update in your val_nusc_tracking.py.
When I train the model using the command line from the tutorial, I get this:
Traceback (most recent call last):
  File "./tools/train.py", line 13, in <module>
    from det3d.models import build_detector
  File "/home/lz/task3/simtrack/simtrack/det3d/models/__init__.py", line 13, in <module>
    from .detectors import *  # noqa: F401,F403
  File "/home/lz/task3/simtrack/simtrack/det3d/models/detectors/__init__.py", line 3, in <module>
    from .point_pillars_tracking import PointPillarsTracking
ModuleNotFoundError: No module named 'det3d.models.detectors.point_pillars_tracking'
The error occurs because there is no point_pillars_tracking.py in /home/lz/task3/simtrack/simtrack/det3d/models/detectors, only a voxelnet.py, which has been commented out. Can anybody help me?
Do you use just the keyframes of nuScenes, or all of trainval?
Nice work. I noticed that your DynamicPillarFeatureNet implementation takes the raw points as input rather than the output of the voxelizer. I wonder how to train using this pipeline?
Thanks!
I notice that in simtrack/det3d/models/detectors/single_stage.py, line 32, init_weights is commented out. Why? How do you initialize the parameters?
I have one question: is the model structure of SimTrack exactly the same as CenterPoint's, with the only difference being the post-processing? Am I right, or am I missing something important?
Thanks a lot for your fantastic work. Could you also upload ./tools/val_nusc_tracking.py
as specified in the README?
The metric results are not deterministic.
cmd:
python tools/val_nusc_tracking.py examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py --checkpoint model_zoo/simtrack_pillar.pth --work_dir word_dirs/baseline
For speed, I just test on the nuScenes v1.0-mini set. Running the above command repeatedly returns different metric results.
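In case it helps debugging, one usual source of non-determinism is an unseeded RNG somewhere in the pipeline. A minimal seeding sketch (an assumption about the cause, not the repo's code; with PyTorch you would additionally call `torch.manual_seed(seed)` and set `torch.backends.cudnn.deterministic = True`):

```python
import random
import numpy as np

def seed_everything(seed=0):
    """Seed the RNGs the evaluation pipeline might touch.
    (torch seeding is omitted so this sketch runs without PyTorch.)"""
    random.seed(seed)
    np.random.seed(seed)

seed_everything(0)
a = np.random.rand(3)
seed_everything(0)
b = np.random.rand(3)
assert np.allclose(a, b)  # identical draws after re-seeding
```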
Dear authors:
I really appreciate your great work, and I'm trying to use it in some of my projects. I am wondering how to visualize the results like the first GIF in your README.
Thanks for your answer!
Hello, when I tried to train a model I got this error; the traceback follows:
2022-04-24 18:16:11,886 - INFO - Distributed training: False
2022-04-24 18:16:11,886 - INFO - torch.backends.cudnn.benchmark: False
2022-04-24 18:16:11,900 - INFO - Backup source files to SAVE_DIR/det3d
Traceback (most recent call last):
File "./tools/train.py", line 146, in <module>
main()
File "./tools/train.py", line 119, in main
model = build_detector(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
File "/simtrack/det3d/models/builder.py", line 53, in build_detector
return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
File "/simtrack/det3d/models/builder.py", line 21, in build
return build_from_cfg(cfg, registry, default_args)
File "/simtrack/det3d/utils/registry.py", line 66, in build_from_cfg
"{} is not in the {} registry".format(obj_type, registry.name)
KeyError: 'PointPillars is not in the detector registry'
Traceback (most recent call last):
File "/anaconda3/envs/simtrack/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/anaconda3/envs/simtrack/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/anaconda3/envs/simtrack/lib/python3.6/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/anaconda3/envs/simtrack/lib/python3.6/site-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/anaconda3/envs/simtrack/bin/python', '-u', './tools/train.py', '--local_rank=0', 'examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py', '--work_dir', 'SAVE_DIR']' returned non-zero exit status 1.
The command was: python -m torch.distributed.launch --nproc_per_node=1 ./tools/train.py examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py --work_dir SAVE_DIR
Thanks for your fantastic work! I have a question from reading the code: do you do any motion compensation or undistortion of the point cloud? I see that you use the transform between two lidar sweeps to concatenate the points, but I did not see any motion processing when the two sweeps are concatenated (maybe I missed it). If not, is there a distortion problem for objects with high velocity?
Looking forward to your kind reply. Thank you!
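For context, ego-motion compensation between two sweeps is just a rigid transform of the older points into the current lidar frame; a minimal sketch (not the repo's implementation, and it deliberately shows that only ego motion, not object motion, is removed):

```python
import numpy as np

def transform_sweep(points, T_prev_to_curr):
    """Map an (N, 3) previous-sweep point cloud into the current lidar
    frame with a 4x4 homogeneous transform (ego-motion compensation).
    Note: this only removes *ego* motion; fast-moving objects still
    smear across sweeps unless object motion is compensated separately."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homo @ T_prev_to_curr.T)[:, :3]

# Example: the ego vehicle moved 1 m forward along x between sweeps.
T = np.eye(4)
T[0, 3] = 1.0
pts = np.array([[5.0, 0.0, 0.0]])
print(transform_sweep(pts, T))  # [[6. 0. 0.]]
```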
when training:
https://github.com/qcraftai/simtrack/blob/main/det3d/datasets/pipelines/preprocess.py#L321
radius = max(cfg.min_radius, int(radius))
when inference:
https://github.com/qcraftai/simtrack/blob/main/det3d/datasets/pipelines/preprocess.py#L321
radius = min(cfg.min_radius, int(radius))
Is there any special consideration or just a bug?
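For comparison, CenterNet-style heatmap generation clamps the Gaussian radius from below with max, so min_radius acts as a floor; using min instead caps every radius at min_radius. A small sketch (hypothetical helper names) showing how differently the two quoted lines behave:

```python
def radius_train(radius, min_radius=2):
    # CenterNet-style: never let the Gaussian shrink below min_radius.
    return max(min_radius, int(radius))

def radius_infer(radius, min_radius=2):
    # The inference-side line quoted above: caps every radius at
    # min_radius, which looks more like a typo than a design choice.
    return min(min_radius, int(radius))

print(radius_train(5.7), radius_infer(5.7))  # 5 2
print(radius_train(0.4), radius_infer(0.4))  # 2 0
```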
I am now trying to reproduce this work, and I am curious about the motion update branch, especially how the update is implemented when multiple frames are lost. The update here seems to only predict the detected targets from the previous frame, so how do you predict when multiple frames are lost? The results in the paper show predictions when a vehicle is lost for multiple frames. Can you tell me how this works, or where that part of the code is?
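For anyone else reproducing this, one plausible way to keep predicting a track that has been missed for several frames is constant-velocity extrapolation from its last known state; this is an assumption about the mechanism, not code from the paper:

```python
def extrapolate(center, velocity, frames_lost, dt=0.5):
    """Constant-velocity extrapolation of a lost track's (x, y) center.
    dt is the frame interval (nuScenes keyframes are 0.5 s apart)."""
    x, y = center
    vx, vy = velocity
    t = frames_lost * dt
    return (x + vx * t, y + vy * t)

# A track last seen at (10, 0) moving at 4 m/s along x, lost for 3 frames:
print(extrapolate((10.0, 0.0), (4.0, 0.0), frames_lost=3))  # (16.0, 0.0)
```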
I used the default model to evaluate, but it triggered a warning and an AssertionError; I sincerely look forward to your answer. Also, when will you release the full version of the code?
python ./tools/val_nusc_tracking.py examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py --checkpoint model_zoo/simtrack_pillar.pth --work_dir /data/simtrack_output/
/data/simtrack/det3d/core/bbox/geometry.py:160: NumbaWarning:
Compilation is falling back to object mode WITH looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: No implementation of function Function() found for signature:
getitem(array(float64, 3d, C), Tuple(slice<a:b>, list(int64)<iv=None>, slice<a:b>))
There are 22 candidate implementations:
During: typing of intrinsic-call at /data/simtrack/det3d/core/bbox/geometry.py (179)
File "det3d/core/bbox/geometry.py", line 179:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
@numba.jit
/data/simtrack/det3d/core/bbox/geometry.py:160: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "points_in_convex_polygon_jit" failed type inference due to: Cannot determine Numba type of <class 'numba.core.dispatcher.LiftedLoop'>
File "det3d/core/bbox/geometry.py", line 196: def points_in_convex_polygon_jit(points, polygon, clockwise=True):
For more information visit https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit
File "det3d/core/bbox/geometry.py", line 171:
def points_in_convex_polygon_jit(points, polygon, clockwise=True):
# first convert polygon to directed lines
num_points_of_polygon = polygon.shape[1]
^
Reverse indexing ...
Done reverse indexing in 8.5 seconds. ======
Initializing nuScenes tracking evaluation
Loaded results from /data/simtrack_output/tracking_results.json. Found detections for 6019 samples.
Loading annotations for val split from nuScenes version: v1.0-trainval
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6019/6019 [00:06<00:00, 871.16it/s]
Loaded ground truth annotations for 6019 samples.
Filtering tracks
=> Original number of boxes: 227984
=> After distance based filtering: 190099
=> After LIDAR points based filtering: 190099
=> After bike rack filtering: 189972
Filtering ground truth tracks
=> Original number of boxes: 142261
=> After distance based filtering: 103564
=> After LIDAR points based filtering: 93885
=> After bike rack filtering: 93875
Accumulating metric data...
Computing metrics for class bicycle...
Computed thresholds
            MOTAR  MOTP   Recall  Frames  GT    GT-Mtch  GT-Miss  GT-IDS  Pred  Pred-TP  Pred-FP  Pred-IDS
thr_0.1681  0.000  0.278  0.507   1923    1993  971      982      40      2431  971      1420     40
thr_0.1975  0.000  0.271  0.488   1769    1993  939      1021     33      1999  939      1027     33
thr_0.2212  0.127  0.266  0.462   1686    1993  893      1073     27      1700  893      780      27
thr_0.2547  0.450  0.262  0.441   1546    1993  857      1114     22      1350  857      471      22
thr_0.2824  0.535  0.262  0.414   1514    1993  804      1167     22      1200  804      374      22
thr_0.2922  0.551  0.260  0.395   1501    1993  766      1205     22      1132  766      344      22
thr_0.3012  0.538  0.277  0.368   1490    1993  712      1260     21      1062  712      329      21
thr_0.3335  0.603  0.266  0.346   1470    1993  673      1303     17      957   673      267      17
thr_0.3816  0.741  0.267  0.316   1422    1993  617      1364     12      789   617      160      12
thr_0.4070  0.769  0.248  0.293   1413    1993  575      1410     8       716   575      133      8
thr_0.4231  0.781  0.241  0.276   1407    1993  544      1443     6       669   544      119      6
thr_0.4741  0.841  0.231  0.243   1385    1993  479      1508     6       561   479      76       6
thr_0.4873  0.837  0.223  0.221   1385    1993  435      1553     5       511   435      71       5
thr_0.5002  0.860  0.206  0.199   1378    1993  394      1596     3       452   394      55       3
thr_0.5331  0.926  0.202  0.183   1351    1993  363      1628     2       392   363      27       2
thr_0.5464  0.944  0.199  0.153   1347    1993  303      1688     2       322   303      17       2
thr_0.5668  0.940  0.206  0.134   1347    1993  266      1726     1       283   266      16       1
thr_0.5879  0.956  0.207  0.104   1343    1993  206      1786     1       216   206      9        1
Traceback (most recent call last):
File "./tools/val_nusc_tracking.py", line 202, in <module>
tracking()
File "./tools/val_nusc_tracking.py", line 148, in tracking
dataset.evaluation_tracking(copy.deepcopy(predictions), output_dir=args.work_dir, testset=False)
File "/data/simtrack/det3d/datasets/nuscenes/nuscenes.py", line 382, in evaluation_tracking
metrics_summary = nusc_eval.main()
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/nuscenes/eval/tracking/evaluate.py", line 205, in main
metrics, metric_data_list = self.evaluate()
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/nuscenes/eval/tracking/evaluate.py", line 135, in evaluate
accumulate_class(class_name)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/nuscenes/eval/tracking/evaluate.py", line 131, in accumulate_class
curr_md = curr_ev.accumulate()
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/nuscenes/eval/tracking/algo.py", line 156, in accumulate
assert unachieved_thresholds + duplicate_thresholds + len(thresh_metrics) == self.num_thresholds
AssertionError
2022-03-17 14:21:36,906 - INFO - Start running, host: yangjinrong@tracking-q5x64-32246-worker-0, work_dir: /data/simtrack_output
2022-03-17 14:21:36,907 - INFO - workflow: [('train', 1), ('val', 1)], max: 20 epochs
Traceback (most recent call last):
File "./tools/train.py", line 141, in <module>
main()
File "./tools/train.py", line 136, in main
logger=logger,
File "/data/simtrack/det3d/torchie/apis/train.py", line 206, in train_detector
trainer.run(data_loaders, cfg.workflow, cfg.total_epochs, local_rank=cfg.local_rank)
File "/data/simtrack/det3d/torchie/trainer/trainer.py", line 527, in run
epoch_runner(data_loaders[i], self.epoch, **kwargs)
File "/data/simtrack/det3d/torchie/trainer/trainer.py", line 393, in train
self.model, data_batch, train_mode=True, **kwargs
File "/data/simtrack/det3d/torchie/trainer/trainer.py", line 356, in batch_processor
losses = model(example, return_loss=True)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 511, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/simtrack/det3d/models/detectors/point_pillars.py", line 48, in forward
x = self.extract_feat(data)
File "/data/simtrack/det3d/models/detectors/point_pillars.py", line 29, in extract_feat
x = self.neck(x)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/simtrack/det3d/models/necks/rpn.py", line 142, in forward
ups.append(self.deblocks[i - self._upsample_start_idx](x))
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/simtrack/det3d/models/utils/misc.py", line 82, in forward
input = module(input)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/modules/activation.py", line 102, in forward
return F.relu(input, inplace=self.inplace)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/nn/functional.py", line 1119, in relu
result = torch.relu(input)
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 2; 10.76 GiB total capacity; 9.76 GiB already allocated; 47.44 MiB free; 9.88 GiB reserved in total by PyTorch)
^CProcess Process-10:
^CProcess Process-9:
Process Process-9:
Process Process-3:
Process Process-2:
Process Process-5:
Process Process-1:
Process Process-7:
Process Process-4:
Process Process-8:
Process Process-6:
Traceback (most recent call last):
  File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/subprocess.py", line 1019, in wait
return self._wait(timeout=timeout)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/subprocess.py", line 1653, in _wait
(pid, sts) = self._try_wait(0)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/subprocess.py", line 1611, in _try_wait
(pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module>
main()
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/site-packages/torch/distributed/launch.py", line 254, in main
process.wait()
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/subprocess.py", line 1032, in wait
self._wait(timeout=sigint_timeout)
File "/home/yangjinrong/miniconda3/envs/det3d/lib/python3.7/subprocess.py", line 1647, in _wait
time.sleep(delay)
KeyboardInterrupt
Hi, I'm trying to download the pretrained model using the link in INSTALL.md. That link seems to point to an Azure storage service. I tried many accounts and all got this error:
AADSTS50177: User account '***' from identity provider 'live.com' does not exist in tenant 'Massachusetts Institute of Technology' and cannot access the application '00000003-0000-0ff1-ce00-000000000000'(Office 365 SharePoint Online) in that tenant. The account needs to be added as an external user in the tenant first. Sign out and sign in again with a different Azure Active Directory user account.
Have you made this file public?
Dear author, I still have some questions.
@qcraftai @chenxuluo @xiaodongyang
Thanks for your fantastic work! Could you give some instructions on how to create the elegant example.gif using nuScenes dataset?
Does anyone have suggestions for this problem? Maybe just keep one object when several appear at the same heatmap location?
I followed INSTALL.md; does anyone know how to install det3d? Thanks.
(simtrack) z@z:~/dev/simtrack$ python ./tools/val_nusc_tracking.py examples/point_pillars/configs/nusc_all_pp_centernet_tracking.py --checkpoint model_zoo/simtrack_pillar.pth --work_dir work_dirs/
Traceback (most recent call last):
File "./tools/val_nusc_tracking.py", line 8, in <module>
from det3d.datasets import build_dataloader, build_dataset
ModuleNotFoundError: No module named 'det3d'
How do I use v1.0-mini of the nuScenes dataset? I can train on v1.0-mini, but it is not supported during testing; is there a solution? How long did you spend training on the nuScenes dataset? I only have one GPU; should I set --nproc_per_node to 1? Also, loading ./model_zoo/simtrack_pillar.pth gives the following:
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: version <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1579061855666/work/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at /opt/conda/conda-bld/pytorch_1579061855666/work/caffe2/serialize/inline_container.cc:132)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7f24ac0a7627 in /home/ubuntu/anaconda3/lib/python3.8/site-packages/torch/lib/libc10.so)
Can you provide a trained (pillar-based) model that is not tied to a specific PyTorch version, along with the exact environment versions? Thank you very much for your help.
I can't find "nuscenes.utils" in your code. Where does it come from?
Hi, author!
tracking_batch_hm = (batch_hm + prev_hm[task_id]) / 2.0
I don't understand the actual physical meaning of tracking_batch_hm, and I also don't understand why it is computed this way instead of directly using batch_hm or prev_hm.
I have another question: if the displacement of an object is relatively large, its position in the previous centerness map will be far from its position in the current centerness map (in other words, the object's responses in the two maps do not intersect).
So after the NMS operation, will this object be considered a new object?
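For what it's worth, averaging the two maps means a strong peak survives only where the detector response and the previous-frame (tracked) response overlap; a toy 1-D illustration (my own reading of the mechanics, not the repo's code):

```python
import numpy as np

# Current-frame detector heatmap and previous-frame (tracked) heatmap
# over six cells of a toy 1-D BEV grid.
batch_hm = np.array([0.1, 0.9, 0.1, 0.0, 0.0, 0.0])  # detection at cell 1
prev_hm  = np.array([0.1, 0.8, 0.1, 0.0, 0.0, 0.0])  # track expects cell 1

tracking_hm = (batch_hm + prev_hm) / 2.0
print(tracking_hm.argmax())  # 1: peaks overlap, so the identity is kept

# If the object moved far between frames, the peaks no longer overlap
# and the averaged response is halved at both locations:
prev_far = np.array([0.0, 0.0, 0.0, 0.0, 0.8, 0.1])
tracking_far = (batch_hm + prev_far) / 2.0
print(round(tracking_far.max(), 2))  # 0.45: a much weaker peak
```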
Using the provided config (exactly the same), I get this result:
Running the same test on the provided checkpoint, I get this result:
There is a 5-point gap between them. @chenxuluo @xiaodongyang
Has anyone else met this problem?
Can you share the log.json from training on the nuScenes dataset? By comparing it, I can tell whether there is a problem with my code. Thanks!