https://github.com/ethanhe42/KL-Loss
Deprecated. Please see the link above.
This project was forked from facebookresearch/detectron.
Bounding Box Regression with Uncertainty for Accurate Object Detection (CVPR'19)
Home Page: https://github.com/ethanhe42/KL-Loss
License: Apache License 2.0
Thanks for your public code; I can run it on my machine!
But I am actually confused about the goal of the learned standard deviation...
If only the KL loss is used, without softer-NMS, I think σ can accelerate training, because a small σ enlarges the value of the loss function.
I also think σ should end up as close to 1 as possible, though the reason is still unclear to me. I think it cannot influence the loss function much, because when σ = 1 the loss function becomes an L1 loss, which can still train the bounding boxes. And when computing the bounding box, if σ is too large, the algorithm will put too much attention on one of the relevant proposals and ignore the others.
The KL loss is very important in the paper, but I cannot find its implementation in the code.
Thank you.
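For reference, the regression loss described in the paper (its smooth-L1 variant) can be sketched in a few lines of NumPy. This is a minimal sketch written from the paper's equations, not the repo's Caffe2 operator; here `alpha` stands for the predicted log-variance log(σ²). The α/2 term is also why a logged loss like `bbox_pred_std_abs_logw_loss` can go negative: when the network predicts σ < 1, log σ² < 0.

```python
import numpy as np

def kl_loss(x_pred, x_gt, alpha):
    """KL regression loss per coordinate, with alpha = log(sigma^2).

    Smooth-L1-style variant: quadratic branch for small errors,
    linear branch for large ones, plus the log-variance term.
    """
    diff = np.abs(np.asarray(x_gt) - np.asarray(x_pred))
    loss = np.where(
        diff <= 1.0,
        np.exp(-alpha) * 0.5 * diff ** 2,   # small-error branch
        np.exp(-alpha) * (diff - 0.5),      # large-error branch
    )
    return loss + 0.5 * alpha               # log-variance penalty (can be < 0)
```

With `alpha = 0` (σ = 1) this reduces to a plain smooth-L1 loss, which matches the observation above; with `alpha < 0` the total loss can dip below zero even at zero error.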
Hello. Earlier you said: "Because of the log, the bbox_pred_std_abs_logw_loss term can be negative."
But during training the absolute value of this term keeps growing, from "bbox_pred_std_abs_logw_loss": -0.083365 to "bbox_pred_std_abs_logw_loss": -1.142597, and the final mAP dropped. For the first 90k iterations, with only the XYXY option set to True, mAP was 0.75. After setting all of the additional options to True and continuing training to 180k iterations, mAP is only 0.74???
Thanks!!
Hi, I am trying to train an SSD using xyxy coordinates and the KL loss. May I ask what the final loss was when you trained your model using xyxy and the KL loss? It seems location_loss can be negative; is that normal?
What did you expect to see?
What are std1 and std0 in py_cpu_nms.py, and how can I calculate them?
What did you observe instead?
py_cpu_nms.py, lines 41-42:

```python
std1 = confidence[i, j]
std0 = confidence[:, j]
```

test.py, lines 794-796:

```python
confs_j = confs[inds, j * 4:(j + 1) * 4]
nms_dets, _ = py_nms_utils.soft(dets_j,
                                confidence=confs_j)
```

test.py, line 82:

```python
scores, boxes, cls_boxes = box_results_with_nms_and_limit(scores, boxes, confs=confs)
```

test.py, line 67:

```python
scores, boxes, im_scale, confs = im_detect_bbox(
    model, im, cfg.TEST.SCALE, cfg.TEST.MAX_SIZE, boxes=box_proposals
)
```

test.py, line 176:

```python
confs = workspace.FetchBlob(core.ScopedName('bbox_pred_std_abs'))
```
What is "bbox_pred_std_abs", and what is the content of confs?
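From the snippets above, `bbox_pred_std_abs` appears to hold the predicted per-coordinate standard deviations, which are passed into the NMS as `confidence` for variance voting. Below is a minimal NumPy sketch of variance voting as described in the softer-NMS paper; the function names and the `sigma_t` default are illustrative assumptions, not the repo's exact values.

```python
import numpy as np

def iou_1_to_many(box, boxes):
    """IoU of one box [x1, y1, x2, y2] against an (n, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def var_vote(box, boxes, stds, sigma_t=0.02):
    """Refine `box` as a weighted average of overlapping `boxes`.

    Each neighbor's weight combines IoU closeness to `box` with the
    inverse of its predicted per-coordinate variance (std ** 2), so
    low-uncertainty boxes pull the result harder.
    """
    ious = iou_1_to_many(box, boxes)
    keep = ious > 0
    p = np.exp(-(1.0 - ious[keep]) ** 2 / sigma_t)   # (n,) IoU weight
    w = p[:, None] / (stds[keep] ** 2)               # (n, 4) per-coordinate weight
    return (w * boxes[keep]).sum(axis=0) / w.sum(axis=0)
```

A box voted only with itself comes back unchanged; distant, high-variance neighbors contribute almost nothing.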
A few days ago I ran another repo: https://github.com/LeonJWH/detectron-cascade-rcnn-exp
That repo uploads just two folders, config and detectron, so its structure is clear to me. But how should I use your repo? Where can I find the implementation of softer-NMS (I think it may be in the detectron folder)? I just renamed the detectron folder to detectron_nms, and when I want to train a dataset I only change the yaml file. Is that correct? Thanks.
Do you just change a small bit of code in soft NMS? Could you please explain in detail?
Hello, how do the two loss functions in the paper, Lcls and Lreg, train the network? Are they used to train separate networks, or are the two functions combined into one to train the network?
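For what it's worth, in detectors of this family the two losses are not trained separately; they are summed into a single multi-task objective and one backward pass updates both heads. A toy NumPy sketch of that idea (the helper names and the weighting factor `lam` are illustrative, not the repo's code):

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single sample (stand-in for Lcls)."""
    z = logits - logits.max()                 # stabilize the softmax
    return -(z[label] - np.log(np.exp(z).sum()))

def smooth_l1(pred, target):
    """Smooth-L1 regression loss (stand-in for Lreg)."""
    d = np.abs(pred - target)
    return np.where(d <= 1.0, 0.5 * d ** 2, d - 0.5).sum()

def total_loss(logits, label, box_pred, box_target, lam=1.0):
    """One scalar objective: L = Lcls + lam * Lreg.

    A single backward pass through this sum trains the classification
    and regression heads jointly, sharing the backbone features.
    """
    return cross_entropy(logits, label) + lam * smooth_l1(box_pred, box_target)
```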
Dear author,
Have you tried softer-NMS in RetinaNet or any other one-stage detector?
Does this method also perform well in one-stage detectors?
```
json_stats: {"accuracy_cls": 0.944484, "bbox_pred_std_abs_logw_loss": NaN, "bbox_pred_std_abs_mulw_loss": NaN, "eta": "9:15:45", "iter": 2740, "loss": NaN, "loss_bbox": NaN, "loss_cls": NaN, "loss_rpn_bbox_fpn2": NaN, "loss_rpn_bbox_fpn3": NaN, "loss_rpn_bbox_fpn4": NaN, "loss_rpn_bbox_fpn5": 0.000000, "loss_rpn_bbox_fpn6": 0.000000, "loss_rpn_cls_fpn2": NaN, "loss_rpn_cls_fpn3": NaN, "loss_rpn_cls_fpn4": NaN, "loss_rpn_cls_fpn5": 0.004753, "loss_rpn_cls_fpn6": 0.000000, "lr": 0.002500, "mb_qsize": 64, "mem": 7410, "time": 0.382141}
CRITICAL train.py: 98: Loss is NaN
```
Compared with e2e_faster_rcnn_R-50-FPN_1x.yaml, your init.yaml adds [PROPOSAL_FILES]. Training works well on the coco2014 dataset, but it errors when I train on my own dataset. How can I train my own dataset?
```
I0928 10:56:50.334882 20043 context_gpu.cu:306] Total: 306 MB
INFO train.py: 192: Loading dataset: ('bdd100k_train_detection',)
Traceback (most recent call last):
  File "tools/train_net.py", line 132, in
    main()
  File "tools/train_net.py", line 114, in main
    checkpoints = detectron.utils.train.train_model()
  File "/home/test/code/softer-NMS/detectron/utils/train.py", line 58, in train_model
    setup_model_for_training(model, weights_file, output_dir)
  File "/home/test/code/softer-NMS/detectron/utils/train.py", line 170, in setup_model_for_training
    add_model_training_inputs(model)
  File "/home/test/code/softer-NMS/detectron/utils/train.py", line 194, in add_model_training_inputs
    cfg.TRAIN.DATASETS, cfg.TRAIN.PROPOSAL_FILES
  File "/home/test/code/softer-NMS/detectron/datasets/roidb.py", line 60, in combined_roidb_for_training
    assert len(dataset_names) == len(proposal_files)
AssertionError
```
I have already set all three options to True.
In the softer-NMS paper:

> simply training with x1, y1, x2, y2 coordinates instead of the standard x, y, w, h coordinates improves the AP by 0.7%

My question is: are the x1, y1, x2, y2 coordinates used only in the Faster R-CNN head, or in both the Faster R-CNN and RPN heads? The paper does not make this clear.
Hoping for your timely response:)
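For anyone comparing the two parameterizations the paper mentions, here is a hypothetical NumPy sketch of the regression targets in the standard (x, y, w, h) encoding versus a corner-based (x1, y1, x2, y2) encoding; the exact normalization used in the repo may differ.

```python
import numpy as np

def xywh_targets(anchor, gt):
    """Standard Faster R-CNN targets (dx, dy, dw, dh) from center/size."""
    ax, ay = (anchor[0] + anchor[2]) / 2.0, (anchor[1] + anchor[3]) / 2.0
    aw, ah = anchor[2] - anchor[0], anchor[3] - anchor[1]
    gx, gy = (gt[0] + gt[2]) / 2.0, (gt[1] + gt[3]) / 2.0
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    return np.array([(gx - ax) / aw, (gy - ay) / ah,
                     np.log(gw / aw), np.log(gh / ah)])

def xyxy_targets(anchor, gt):
    """Corner targets: each corner offset, normalized by anchor size."""
    aw, ah = anchor[2] - anchor[0], anchor[3] - anchor[1]
    return np.array([(gt[0] - anchor[0]) / aw, (gt[1] - anchor[1]) / ah,
                     (gt[2] - anchor[2]) / aw, (gt[3] - anchor[3]) / ah])
```

In the corner form each of the four outputs regresses one coordinate independently, which is what makes attaching a per-coordinate σ to each output natural.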
HELLO, I want to add softer-NMS to YOLO's NMS. What should I do to change the code? Thanks.
Your project is adapted from the original FAIR Detectron, so I ran `make` directly in the folder. After `make`, it said:

```
Installed /media/hh/D/softer-NMS-master
Processing dependencies for Detectron==0.0.0
Finished processing dependencies for Detectron==0.0.0
```

But when I run the test `python detectron/tests/test_spatial_narrow_as_op.py`, there is an error:

```
File "detectron/tests/test_spatial_narrow_as_op.py", line 88, in
  c2_utils.import_detectron_ops()
File "/media/XX/D/softer-NMS-master/detectron/utils/c2.py", line 43, in import_detectron_ops
  detectron_ops_lib = envu.get_detectron_ops_lib()
File "/media/XX/D/softer-NMS-master/detectron/utils/env.py", line 72, in get_detectron_ops_lib
  ('Detectron ops lib not found; make sure that your Caffe2 '
AssertionError: Detectron ops lib not found; make sure that your Caffe2 version includes Detectron module
```

I have tried many times but still cannot solve this problem. Is it necessary to download the source code of Detectron?