
saod's Introduction

Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration

The official implementation of Self-aware Object Detectors. Our implementation is based on mmdetection.

Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration, Kemal Oksuz, Tom Joy, Puneet K. Dokania, CVPR 2023 (Appendices)

What is Self-aware Object Detection?

The standard approach to evaluating an object detector assumes that the test images are drawn from the same distribution as the training examples, and the common metric of such evaluation is Average Precision, which indicates how accurate a detector is. In practical applications, however, a test sample can differ substantially from the training ones. For example, a scene may contain objects similar to those in the training set but in a very different environment, known as domain shift; or the scene may differ completely from the training set, referred to here as an out-of-distribution scene. Considering these, we design the Self-aware Object Detection (SAOD) task. As illustrated in the figure below, a self-aware object detector first decides whether it can reliably operate on a scene, represented by the binary variable a. If it accepts the image, it produces accurate and calibrated detections. We evaluate such detectors considering:

  • accuracy, measured by the LRP Error [1],
  • calibration, measured by the Localisation-aware Expected Calibration Error (LaECE) proposed in this paper (see the sketch after this list),
  • image-level out-of-distribution detection, and
  • domain shift.
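
As a rough illustration of the idea behind LaECE (a simplified sketch, not the repository's exact implementation): detections are binned by their confidence scores, and within each bin the average confidence should match the precision multiplied by the average IoU of the true positives, so that a confidence of 0.8 ideally means precisely localised boxes at 80% precision, rather than 80% precision alone. The helper below assumes per-detection numpy arrays; the bin count is an illustrative choice:

import numpy as np

def laece_single_class(confs, ious, is_tp, num_bins=25):
    # Sketch of Localisation-aware ECE (LaECE) for a single class.
    # confs: (N,) detection confidence scores
    # ious:  (N,) IoU with the matched ground truth (0 for false positives)
    # is_tp: (N,) boolean, True if the detection is a true positive
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    laece, n = 0.0, len(confs)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confs > lo) & (confs <= hi)
        if not in_bin.any():
            continue
        precision = is_tp[in_bin].mean()
        tp_in_bin = in_bin & is_tp
        mean_iou = ious[tp_in_bin].mean() if tp_in_bin.any() else 0.0
        # Confidence should match precision x average IoU of the TPs.
        laece += (in_bin.sum() / n) * abs(confs[in_bin].mean() - precision * mean_iou)
    return laece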

To enable this task, we introduce datasets and performance measures, and investigate uncertainty quantification and calibration of object detectors. Accordingly, this repository provides the tools needed both to evaluate self-aware object detection and to build self-aware object detectors as described in our paper.
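
Conceptually, a self-aware detector wraps a conventional detector with an image-level accept/reject decision followed by calibration. The control flow can be sketched as follows; the aggregation rule and threshold below are simplified placeholders rather than this repository's API:

def self_aware_detect(detector, image, unc_threshold=0.95):
    # 1. Run the underlying detector to obtain raw detections.
    dets = detector(image)  # list of dicts: {'bbox', 'label', 'score'}
    # 2. Aggregate detection-level uncertainties into an image-level score
    #    (here simply the mean of 1 - score; the paper studies alternatives).
    uncs = [1.0 - d['score'] for d in dets]
    image_unc = sum(uncs) / len(uncs) if uncs else 1.0
    # 3. Reject the image (a = 0) if the detector cannot operate reliably on it.
    if image_unc > unc_threshold:
        return {'accept': 0, 'detections': []}
    # 4. Otherwise accept (a = 1) and return the (to-be-calibrated) detections.
    return {'accept': 1, 'detections': dets}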

1. Specification of Dependencies and Preparation

Preparing MMDetection

Please see get_started.md for requirements and installation of mmdetection.

Additional Dependencies and Preparation for this Repository

Having completed the standard preparation of mmdetection, please make the following additional changes:

  • Please create a detections directory in the root of the repository. It will contain the json files with the attached uncertainties produced during inference.
  • Please create a results directory in the root of the repository. It will contain the evaluation logs.
  • Please replace the standard cocoapi with the version that includes the LRP Error from this repository. Specifically, run the following commands:
# Remove the standard pycocotools
pip uninstall pycocotools

# Install pycocotools with LRP Error
pip install "git+https://github.com/kemaloksuz/LRP-Error.git#subdirectory=pycocotools"

Preparing Datasets

Please see SAOD datasets for configuration of the datasets.

2. Conventional Object Detectors Used in This Project

Here, we provide the models that we use in this project. You can either download and use the trained models, or train them yourself using the provided configuration files.

Using Trained Detectors

Conventional Detectors Trained using COCO training set (General Object Detection Use-case)

Method               AP    LRP [1]   Config   Model
Faster R-CNN         39.9  59.5      config   model
RS R-CNN             42.0  58.1      config   model
ATSS                 42.8  58.5      config   model
Deformable DETR      44.3  55.9      config   model
NLL R-CNN            40.1  59.5      config   model
Energy-Score R-CNN   40.3  59.4      config   model

Conventional Detectors Trained using nuImages training set (AV Object Detection Use-case)

Method               AP    LRP [1]   Config   Model
Faster R-CNN         55.0  43.6      config   model
ATSS                 56.9  43.2      config   model

Note: While AP is a higher-is-better measure, LRP is an error measure, hence lower is better.

All models are included here. After downloading a model, please place it under the work_dirs directory. For example, for Faster R-CNN, the model should be placed at work_dirs/faster_rcnn_r50_fpn_straug_3x_coco/epoch_36.pth.

Training the Detectors

Alternatively, the models can be trained. The configuration files of all models listed above can be found in the configs/saod/training folder. As an example, to train Faster R-CNN on 8 GPUs, use the following command:

 bash tools/dist_train.sh configs/saod/training/faster_rcnn_r50_fpn_straug_3x_coco.py 8

This repository also includes implementations of probabilistic object detectors that minimize the Negative Log-Likelihood [2] or the Energy Score [3].
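
For orientation, the core idea behind the NLL variant is that the box regression head predicts a distribution (for example a Gaussian per box coordinate, with a mean and a log-variance) instead of a point estimate, and is trained by minimising the negative log-likelihood of the ground-truth box. A minimal sketch of such a loss in PyTorch, illustrative rather than the exact loss used in this repository:

import torch

def gaussian_nll_box_loss(pred_mean, pred_log_var, target):
    # pred_mean:    predicted box deltas (the usual regression output)
    # pred_log_var: predicted log-variance per box coordinate
    # Minimising the NLL lets the network assign larger variances
    # (higher uncertainty) to boxes it cannot localise precisely.
    return 0.5 * (torch.exp(-pred_log_var) * (pred_mean - target) ** 2
                  + pred_log_var).mean()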

3. Inference with Detection-level Uncertainties Attached

configs/saod/test includes all of the configuration files that we use for testing. In that directory, there is a separate subdirectory for each detector, containing the test configuration files needed to make that detector self-aware and to evaluate it. Specifically, there are five configuration files for each detector. To illustrate on our general object detection setting using Faster R-CNN:

  • faster_rcnn_r50_fpn_straug_3x_coco.py: Standard evaluation on the validation set
  • faster_rcnn_r50_fpn_straug_3x_coco_pseudoid.py: Validation set restricted to images that include objects
  • faster_rcnn_r50_fpn_straug_3x_coco_pseudoood.py: Validation set with the images in which the objects are padded
  • faster_rcnn_r50_fpn_straug_3x_coco_obj45k.py: Obj45K test set and its corrupted versions for evaluating the SAOD
  • faster_rcnn_r50_fpn_straug_3x_coco_sinobj110kood.py: OOD test set for evaluating the SAOD

Obtaining SAODets and evaluating them require COCO-style json outputs with detection-level uncertainties attached, which can be produced with the configuration files above. To illustrate again on Faster R-CNN, you will find entropy and Dempster-Shafer estimates for each detection following this configuration file. As a result, each detection in the resulting json file is represented by a bounding box, a class id, a detection confidence score and a set of pre-defined uncertainty values. Note that 1 - p_i is obtained directly from the detection confidence score, hence it is not explicitly listed as an uncertainty type in the configuration file. The uncertainty estimates supported by this repository are implemented in this script.
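
For concreteness, a single entry in the resulting json looks roughly like the following Python dict; image_id, category_id, bbox and score follow the COCO detection format, while the uncertainty field names here are illustrative assumptions:

{
    'image_id': 42,
    'category_id': 1,
    'bbox': [13.2, 40.5, 120.0, 88.7],  # COCO convention: [x, y, width, height]
    'score': 0.87,                      # detection confidence, so 1 - p_i = 0.13
    'entropy': 0.34,                    # classification uncertainty estimate
    'dempster_shafer': 0.21,            # Dempster-Shafer uncertainty estimate
}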

To obtain the desired json files, we provide a bash script template that can be used as follows:

tools/dist_test_for_saod.sh dir_name model_path num_gpus

Continuing with the Faster R-CNN example, the following command generates the required 8 json files of detections (using the 5 configuration files above; the Obj45K split also includes its corrupted versions) under the detections directory:

tools/dist_test_for_saod.sh faster_rcnn_r50_fpn_straug_3x_coco work_dirs/faster_rcnn_r50_fpn_straug_3x_coco/epoch_36.pth 2

4. Making Object Detectors Self-aware and Evaluating Them

Given the detection-level uncertainties on the eight necessary data splits, we can now make object detectors self-aware and evaluate them. To do so with the configuration recommended in our paper, please run the following command:

 python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --cls_unc_type 0 --calibrate linear_regression --benchmark True

Note: The resulting DAQ might differ by 0.1-0.2 points from our results in Table 6. This is because we generate the corruptions on the fly and have fixed a minor bug in the code.

Furthermore, the saod_evaluation script has several optional arguments that facilitate reproducing most of the ablation experiments and analyses in the paper. Please check the parse_args() function in this script for the specifications of the arguments. To illustrate some of them:

  • One can use the average determinant of the covariance matrices of the top-2 detections in NLL-Faster R-CNN and calibrate the detections with isotonic regression (a conceptual sketch of such a calibrator follows this list):
 python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --loc_unc_type 3 --max_det_num 2 --calibrate isotonic_regression --benchmark True
  • Alternatively, the baseline method in Table 7 of the paper (a detection confidence threshold of 0.50, an image-level uncertainty threshold of 0.95, and no calibrator) that we use for ablations can be obtained by:
 python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --cls_unc_type 0 --image_level_threshold 0.95 --detection_level_threshold 0.50 --calibrate identity --benchmark True
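
As a rough sketch of what such a post-hoc calibrator does, assuming scikit-learn (this conveys the idea rather than the repository's exact implementation): it is fitted on a held-out split to map raw confidences towards the LaECE target, i.e. the IoU of the matched box for true positives and 0 for false positives, and is then applied to the test-time confidences:

import numpy as np
from sklearn.isotonic import IsotonicRegression

# Illustrative calibration split: raw confidences and their targets
# (IoU of the matched box for TPs, 0.0 for FPs).
val_confs = np.array([0.95, 0.80, 0.60, 0.30])
val_targets = np.array([0.88, 0.70, 0.00, 0.15])

calibrator = IsotonicRegression(out_of_bounds='clip')
calibrator.fit(val_confs, val_targets)
calibrated = calibrator.predict(np.array([0.90, 0.50]))  # calibrated confidences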

5. Other Features Provided in this Repository

Evaluate only OOD performance using AUROC

 python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --ood_evaluate True
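
For reference, image-level OOD detection reduces to a binary classification problem over images, so AUROC can be computed with standard tooling; a sketch with illustrative variable names:

import numpy as np
from sklearn.metrics import roc_auc_score

# labels: 1 for in-distribution images, 0 for OOD images
# scores: image-level confidence, e.g. 1 - image-level uncertainty
labels = np.array([1, 1, 0, 0])
scores = np.array([0.9, 0.7, 0.4, 0.2])
auroc = roc_auc_score(labels, scores)  # 1.0 for this toy example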

Evaluate only accuracy and calibration using LRP Error and LaECE

python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --calibrate isotonic_regression

Plot Reliability Diagrams

python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --calibrate isotonic_regression --plot_reliability_diagram True

Standard COCO Style Evaluation using AP and LRP Error

python tools/analysis_tools/saod_evaluation.py faster_rcnn_r50_fpn_straug_3x_coco --evaluate_top_100 True

How to Cite

Please cite the paper if you benefit from our work or this repository:

@inproceedings{saod,
       title = {Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration},
       author = {Kemal Oksuz and Tom Joy and Puneet K. Dokania},
       booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
       year = {2023}
}

References

[1] One Metric to Measure Them All: Localisation Recall Precision (LRP) for Evaluating Visual Detection Tasks, TPAMI, 2022 (earlier version in ECCV 2018)
[2] Bounding Box Regression with Uncertainty for Accurate Object Detection, CVPR 2019
[3] Estimating and Evaluating Regression Predictive Uncertainty in Deep Object Detectors, ICLR 2021


saod's Issues

'dtIoUs' key error

Hi, thank you for sharing the code.
During evaluation, in saod_evaluation.py > Calibration > prepare_input, I get an error at the following line (line 101 of the script):

dtIoU = np.concatenate(
    [e['dtIoUs'][:, 0:maxDet] for e in E], axis=1)[:, inds]

Can you please help me with this? What am I missing?

How to compute LaECE for an already calibrated model?

Hi,
I have two questions about reporting the LaECE value for an already calibrated model. As I am new to this domain, your guidance would be greatly appreciated.

  1. I aim to compute only the LaECE of an object detector that has already been calibrated using another method. What values should be set for the TP validation threshold and the detection-level threshold in general?

  2. The LRP value computed by the 'COCO_evaluation' option differs from the one computed by the 'Evaluate the Calibration Performance and Accuracy' option. Currently, I am using '--detection_level_threshold' = -1 and '--tau' = 0.50, and comparing the results with the COCO optimal LRP@IoU=0.5.

Many thanks.
