
kitti-object-eval-python's People

Contributors

erickwan, traveller59

kitti-object-eval-python's Issues

How to evaluate for a limited range of experiment

Hi,
Thank you for sharing your code.
If my detections are restricted to a limited range, for example 0-50 meters in the x-direction, do I have to manually remove ground-truth boxes beyond that range before evaluating?
Thanks in advance.
Best Regards,
Ansab
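If it helps, a minimal sketch of such a pre-filter, assuming the repo's per-frame annotation dicts where 'location' holds an (N, 3) array; the axis index depends on whether your boxes are in camera or lidar coordinates, and the 50 m cut is this issue's example, not a standard:

```python
import numpy as np

# Hedged sketch: drop annotations beyond a distance cut before evaluation.
# Axis 2 is forward in camera coordinates; use axis 0 for lidar coordinates,
# where x points forward.
def clip_annos_by_range(annos, axis=2, max_dist=50.0):
    keep = annos["location"][:, axis] <= max_dist
    return {k: (v[keep] if isinstance(v, np.ndarray) and v.shape[0] == keep.shape[0] else v)
            for k, v in annos.items()}

frame = {"location": np.array([[1.0, 0.0, 10.0], [2.0, 0.0, 60.0]]),
         "name": np.array(["Car", "Car"])}
clipped = clip_annos_by_range(frame)
```

Applying the same filter to both gt and dt annotations keeps the matching consistent.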

Sanity Check failure

Hello,
I was testing the evaluation code using the same directory for both label_path and result_path. Since the detections are identical to the annotations, I expected 100.0 for all metrics. Instead, I get:

Car AP@0.70, 0.70, 0.70:
bbox AP: 100.00, 100.00, 100.00
bev  AP: 0.50, 0.47, 0.47
3d   AP: 0.50, 0.47, 0.47
aos  AP: 100.00, 100.00, 100.00
Car AP@0.70, 0.50, 0.50:
bbox AP: 100.00, 100.00, 100.00
bev  AP: 0.50, 0.47, 0.47
3d   AP: 0.50, 0.47, 0.47
aos  AP: 100.00, 100.00, 100.00

Any suggestions as to why this is the case?
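One frequently reported cause (an assumption here, not a confirmed diagnosis) is the score column: KITTI label files carry 15 fields, while result files add a 16th confidence score, so feeding raw labels as results can break the score-dependent parts of the eval. A quick sanity check is to copy the labels and append a constant score:

```python
# Hypothetical helper: turn a 15-field KITTI label line into a 16-field
# result line by appending a constant confidence score.
def label_line_to_result_line(line, score=1.0):
    fields = line.strip().split(" ")
    if len(fields) == 15:  # plain label, no score column
        fields.append("%.2f" % score)
    return " ".join(fields)

label = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
result = label_line_to_result_line(label)
```

If bev/3d AP stays near zero even with scores present, the rotated-IoU path is worth checking separately.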

import second

Hello,
I am trying to use this codebase to evaluate my model. However, eval.py imports a function from second. Should I clone that repository? If so, how should I structure the directories? When I tried, it required spconv as well. Could you please clarify this requirement?
Thanks.
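For what it's worth, "No module named 'second'" is usually just a search-path issue: eval.py expects a clone of the second repository to be importable. A self-contained demonstration of the mechanism, using a stub package instead of the real repo (which would also pull in spconv):

```python
import os
import sys
import tempfile

# Create a stand-in 'second' package to show that the import error is a
# sys.path problem, the same way a real clone would be exposed.
repo_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(repo_dir, "second"))
with open(os.path.join(repo_dir, "second", "__init__.py"), "w") as f:
    f.write("VERSION = 'stub'\n")

sys.path.insert(0, repo_dir)  # equivalent: export PYTHONPATH=/path/to/clone
import second
```

With the real repository, the equivalent is cloning it and adding the clone's parent directory to PYTHONPATH; whether spconv is then needed depends on which submodules eval.py actually pulls in.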

Precision and Recall

Hello, is there a way to display the precision and recall values?
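The repo does not print them directly, but the quantities AP is built from are easy to reproduce. A generic sketch (not the repo's exact API): given detections sorted by descending score and a true-positive flag per detection,

```python
import numpy as np

# Precision/recall curve from score-sorted TP flags; AP summarizes this curve.
def precision_recall(tp_flags, num_gt):
    tp_flags = np.asarray(tp_flags, dtype=bool)
    tp = np.cumsum(tp_flags)       # running true-positive count
    fp = np.cumsum(~tp_flags)      # running false-positive count
    recall = tp / num_gt
    precision = tp / (tp + fp)
    return precision, recall

prec, rec = precision_recall([True, True, False, True], num_gt=5)
```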

3D results are not right

In fact, when the gt and dt data are identical, the 3D evaluation results are still not 100:

--label_path=kitti/label_2
--result_path=kitti/label_2
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:100.00, 100.00, 100.00
bev  AP:0.07, 0.11, 0.20
3d   AP:0.07, 0.11, 0.20
aos  AP:100.00, 100.00, 100.00
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:100.00, 100.00, 100.00
bev  AP:0.07, 0.11, 0.20
3d   AP:0.07, 0.11, 0.20
aos  AP:100.00, 100.00, 100.00

Different confidence thresholds give the same mAP

Here is my code:
for confidence in range(1, 10):
    label_filename = "./evaluate/label_2"
    result_path = "./results/exp%d" % 49
    split_file = "./evaluate/lists/val.txt"
    print(0.9 + confidence / 100)
    evaluate(label_filename, result_path, split_file, score_thresh=0.9 + confidence / 100)

output:

0.91

Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:27.05, 23.71, 24.48
bev AP:35.93, 34.25, 32.38
3d AP:8.86, 7.92, 7.50
aos AP:12.04, 10.18, 10.61
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:27.05, 23.71, 24.48
bev AP:61.88, 61.37, 57.36
3d AP:48.05, 43.41, 41.65
aos AP:12.04, 10.18, 10.61

0.92
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:27.05, 23.71, 24.50
bev AP:35.93, 34.25, 32.38
3d AP:8.86, 7.92, 7.50
aos AP:12.04, 10.18, 10.62
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:27.05, 23.71, 24.50
bev AP:61.88, 61.37, 57.36
3d AP:48.05, 43.41, 41.65
aos AP:12.04, 10.18, 10.62

0.93
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:27.05, 23.71, 24.52
bev AP:35.93, 34.25, 32.38
3d AP:8.86, 7.92, 7.50
aos AP:12.04, 10.18, 10.63
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:27.05, 23.71, 24.52
bev AP:61.88, 61.37, 57.36
3d AP:48.05, 43.43, 41.65
aos AP:12.04, 10.18, 10.63

0.9400000000000001
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:27.05, 23.71, 24.54
bev AP:35.93, 34.25, 32.38
3d AP:8.86, 7.92, 7.50
aos AP:12.04, 10.18, 10.64
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:27.05, 23.71, 24.54
bev AP:61.88, 61.37, 57.36
3d AP:48.05, 43.48, 41.65
aos AP:12.04, 10.18, 10.64

0.9500000000000001
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:27.05, 23.71, 24.58
bev AP:35.93, 34.25, 32.38
3d AP:8.86, 7.92, 7.50
aos AP:12.04, 10.18, 10.66
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:27.05, 23.71, 24.58
bev AP:61.88, 61.37, 57.36
3d AP:48.05, 43.53, 41.65
aos AP:12.04, 10.18, 10.66

0.96
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:27.05, 23.71, 24.62
bev AP:35.93, 34.25, 32.38
3d AP:8.86, 7.92, 7.50
aos AP:12.04, 10.18, 10.67
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:27.05, 23.71, 24.62
bev AP:61.88, 61.37, 57.36
3d AP:48.05, 43.59, 41.65
aos AP:12.04, 10.18, 10.67

0.97
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:27.05, 23.71, 24.69
bev AP:35.93, 34.25, 32.38
3d AP:8.86, 7.92, 7.50
aos AP:12.04, 10.18, 10.70
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:27.05, 23.71, 24.69
bev AP:61.88, 61.37, 57.36
3d AP:48.05, 43.67, 41.65
aos AP:12.04, 10.18, 10.70

0.98
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:27.05, 23.71, 24.75
bev AP:35.93, 34.25, 32.38
3d AP:8.86, 7.94, 7.50
aos AP:12.04, 10.18, 10.73
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:27.05, 23.71, 24.75
bev AP:61.88, 61.37, 57.36
3d AP:48.05, 43.76, 41.65
aos AP:12.04, 10.18, 10.73

0.99
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:27.05, 23.71, 24.87
bev AP:35.93, 34.25, 32.38
3d AP:8.86, 8.03, 7.50
aos AP:12.04, 10.18, 10.79
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:27.05, 23.71, 24.87
bev AP:61.88, 61.37, 57.36
3d AP:48.05, 43.98, 41.65
aos AP:12.04, 10.18, 10.79
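A plausible explanation (an assumption about the cause, not a confirmed diagnosis): the KITTI protocol computes AP by sampling precision at a fixed set of recall positions (41 in the current devkit), so raising the score threshold only trims the low-precision tail of the curve and often leaves most sampled points, and hence the reported AP, nearly unchanged. A toy version of the sampling:

```python
import numpy as np

# Sampled AP in the KITTI style: average of the maximum precision achieved
# at or beyond each of `num_points` equally spaced recall positions.
def sampled_ap(recall, precision, num_points=41):
    ap = 0.0
    for r in np.linspace(0.0, 1.0, num_points):
        prec_at_r = precision[recall >= r]
        ap += (prec_at_r.max() if prec_at_r.size else 0.0) / num_points
    return ap
```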

No module named 'second'

I am sure that skimage, numpy, fire, numba, and scipy are installed in my Python environment, and I have already run 'conda install -c numba cudatoolkit=10.0' to match my CUDA environment. But when I try to import the module 'second' in a terminal, "ModuleNotFoundError: No module named 'second'" is raised. However, 'second' cannot be installed from pip, so I'd appreciate a resolution. Thanks!

Parameter passing

Hi,
First of all, thank you for sharing your code.
I am trying to evaluate my algorithm, and on line 468 I found you are calling the function with the arguments:
rets = calculate_iou_partly(dt_annos, gt_annos, metric, num_parts)
while on line 335 the function is defined as:
def calculate_iou_partly(gt_annos, dt_annos, metric, num_parts=50):
so gt_annos receives dt_annos and vice versa; kindly have a look.

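In case the swapped order is intentional elsewhere in eval.py (e.g. to get a transposed overlap matrix), keyword arguments would make it explicit either way. A toy illustration of the hazard, with a stub that only mimics the signature:

```python
# Stub with the same parameter order as calculate_iou_partly in eval.py.
def calculate_iou_partly(gt_annos, dt_annos, metric, num_parts=50):
    return len(gt_annos), len(dt_annos)

gt, dt = [1], [1, 2, 3]
# Positional call with the arguments reversed silently swaps the roles:
swapped = calculate_iou_partly(dt, gt, 0)
# Keyword call is order-independent and unambiguous:
explicit = calculate_iou_partly(gt_annos=gt, dt_annos=dt, metric=0)
```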

Bug in eval.py while computing fp statistics. [with proposed solution]

Line https://github.com/traveller59/kitti-object-eval-python/blob/master/eval.py#L277 only applies the nstuff filter when metric == 0, i.e., when evaluating 2D image bboxes. The current implementation gives poor results when evaluation is done with bev or 3D IoU.

Also, the official KITTI evaluate_object.cpp (download their dev kit) does not restrict the fp filtering to metric == 0; rather, it uses the box_overlap method corresponding to the respective metric (image 2D / bev / 3D).

So I suggest the simple fix shown below.

if compute_fp:
    # count fp
    for i in range(det_size):
        if (not (assigned_detection[i] or ignored_det[i] == -1
                 or ignored_det[i] == 1 or ignored_threshold[i])):
            fp += 1
    nstuff = 0
    # do not consider detections falling under neutral zones as fp
    if metric==0:
        overlaps_dt_dc = image_box_overlap(dt_bboxes, dc_bboxes, 0)
    elif metric==1:
        overlaps_dt_dc = bev_box_overlap(dt_bboxes, dc_bboxes, 0)
    else:   # metric==2:
        overlaps_dt_dc = d3_box_overlap(dt_bboxes, dc_bboxes, 0)
    for i in range(dc_bboxes.shape[0]):
        for j in range(det_size):
            if (assigned_detection[j]):
                continue
            if (ignored_det[j] == -1 or ignored_det[j] == 1):
                continue
            if (ignored_threshold[j]):
                continue
            if overlaps_dt_dc[j, i] > min_overlap:
                assigned_detection[j] = True
                nstuff += 1
    fp -= nstuff

Can't understand eval metrics

Hi there, many thanks for sharing this code! I'm new to KITTI and just successfully ran the code on my detection results. However, I don't quite understand the output. Why are there two sets of results for each class? Also, I understand that values like 0.7 and 0.5 are IoU thresholds, but why do we consider multiple thresholds per class? For instance,
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:100.00, 100.00, 100.00
bev AP:2.69, 1.97, 2.40
3d AP:1.82, 1.23, 1.46
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:100.00, 100.00, 100.00
bev AP:12.77, 12.40, 11.82
3d AP:11.19, 8.34, 10.00

Thanks in advance for your attention!
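One reading of those headers (my interpretation of the repo's min_overlaps array, not official documentation): the three numbers after '@' are the minimum IoU required by the bbox, bev, and 3d metrics respectively, the three AP values on each row correspond to the easy/moderate/hard difficulty levels, and the two blocks are a strict and a relaxed overlap setting:

```python
import numpy as np

# Two overlap settings for the Car class: each row holds the IoU
# thresholds applied to the bbox / bev / 3d metrics in turn.
min_overlaps = np.array([
    [0.70, 0.70, 0.70],  # strict: 0.7 IoU for all three metrics
    [0.70, 0.50, 0.50],  # relaxed: bev and 3d only need 0.5 IoU
])
```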

TypeError: can't unbox array from PyObject into native value.

Something goes wrong in calculate_iou_partly, and I don't know how to fix it:
File "/workspace/OpenPCDet-master/pcdet/datasets/kitti/kitti_dataset.py", line 363, in evaluation
ap_result_str, ap_dict = kitti_eval.get_official_eval_result(eval_gt_annos, eval_det_annos, class_names)
File "/workspace/OpenPCDet-master/pcdet/datasets/kitti/kitti_object_eval_python/eval.py", line 675, in get_official_eval_result
gt_annos, dt_annos, current_classes, min_overlaps, compute_aos, PR_detail_dict=PR_detail_dict)
File "/workspace/OpenPCDet-master/pcdet/datasets/kitti/kitti_object_eval_python/eval.py", line 588, in do_eval
min_overlaps, compute_aos)
File "/workspace/OpenPCDet-master/pcdet/datasets/kitti/kitti_object_eval_python/eval.py", line 473, in eval_class
rets = calculate_iou_partly(dt_annos, gt_annos, metric, num_parts)
File "/workspace/OpenPCDet-master/pcdet/datasets/kitti/kitti_object_eval_python/eval.py", line 363, in calculate_iou_partly
overlap_part = image_box_overlap(gt_boxes, dt_boxes)
TypeError: can't unbox array from PyObject into native value. The object maybe of a different type
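A commonly suggested workaround for numba's unboxing error (an assumption, not a confirmed fix for this exact traceback) is to make sure the box arrays handed to the jitted overlap functions are plain, contiguous float arrays rather than object-dtype or ragged arrays:

```python
import numpy as np

# Coerce boxes to a uniform, contiguous float64 layout before they reach
# numba-compiled code such as image_box_overlap.
def as_numba_friendly(boxes):
    return np.ascontiguousarray(np.asarray(boxes, dtype=np.float64))

gt_boxes = as_numba_friendly([[0, 0, 10, 10], [5, 5, 15, 15]])
```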

Question about bbox AP for KITTI

Is the bbox AP related only to the 2D bbox results, or does it also depend on other outputs such as orientation and 3D localization?

After I re-split the dataset, I met this problem

I re-split the KITTI training set into train and val, about 7:3. But when I evaluate the network, the results are very low, even though the result txt files look correct to me. Can someone help me?

2020-12-30 22:48:50,569 INFO total roi bbox recall(thresh=0.100): 8340 / 8588 = 0.971122
2020-12-30 22:48:50,569 INFO total roi bbox recall(thresh=0.300): 8272 / 8588 = 0.963204
2020-12-30 22:48:50,569 INFO total roi bbox recall(thresh=0.500): 8086 / 8588 = 0.941546
2020-12-30 22:48:50,569 INFO total roi bbox recall(thresh=0.700): 6593 / 8588 = 0.767699
2020-12-30 22:48:50,569 INFO total roi bbox recall(thresh=0.900): 132 / 8588 = 0.015370
2020-12-30 22:48:50,569 INFO total bbox recall(thresh=0.100): 8336 / 8588 = 0.970657
2020-12-30 22:48:50,569 INFO total bbox recall(thresh=0.300): 8274 / 8588 = 0.963437
2020-12-30 22:48:50,569 INFO total bbox recall(thresh=0.500): 8159 / 8588 = 0.950047
2020-12-30 22:48:50,569 INFO total bbox recall(thresh=0.700): 7443 / 8588 = 0.866674
2020-12-30 22:48:50,569 INFO total bbox recall(thresh=0.900): 2002 / 8588 = 0.233116
2020-12-30 22:48:50,570 INFO Averate Precision:
2020-12-30 22:49:08,986 INFO Car AP@0.70, 0.70, 0.70:
bbox AP:0.0063, 0.0103, 0.0155
bev AP:0.0023, 0.0031, 0.0034
3d AP:0.0025, 0.0014, 0.0014
aos AP:0.00, 0.01, 0.01
Car AP@0.70, 0.50, 0.50:
bbox AP:0.0063, 0.0103, 0.0155
bev AP:0.0088, 0.0082, 0.0094
3d AP:0.0044, 0.0048, 0.0048
aos AP:0.00, 0.01, 0.01
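For reference, a minimal sketch of the 7:3 re-split described above, writing index lists over the 7481 frames of the standard KITTI training set (an assumption; adjust the count if your copy differs). A broken or mismatched split file is a common cause of near-zero AP:

```python
import random

ids = ["%06d" % i for i in range(7481)]  # standard KITTI training frame ids
random.seed(0)                           # fixed seed for a reproducible split
random.shuffle(ids)
cut = int(len(ids) * 0.7)
train, val = sorted(ids[:cut]), sorted(ids[cut:])
```

The resulting lists would then be written to train.txt / val.txt, and the same val list must be used both when generating detections and as the eval's split file.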

numba.core.errors.TypingError: Failed in nopython mode pipeline

Hello, this error occurred while running the code. I don't know where the problem is and hope I can get help. Thank you.

numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Untyped global name 'devRotateIoUEval': Cannot determine Numba type of <class 'function'>
