
queryinst's People

Contributors

simonjjj, vealocia, xinggangw, yuxin-cv

queryinst's Issues

CUDA OOM on 2080Ti

I tried queryinst_r50_fpn_100_proposals_mstrain_480-800_3x_coc and queryinst_r101_fpn_300_proposals_crop_mstrain_480-800_ with batch size 1, but both ran out of memory roughly half an hour after training started.

There seems to be a memory leak somewhere.
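If the OOM appears only after training has been running for a while, activation memory from the proposal queries is the usual suspect rather than a true leak. A minimal sketch of overrides that typically reduce memory in MMDetection-based repos; the key names follow mmdet 2.x conventions and are assumptions about this codebase, not a confirmed fix:

```python
# Hedged sketch: memory-reducing overrides in MMDetection 2.x style.
# Whether these keys apply verbatim to this repo's configs is an assumption.
memory_overrides = dict(
    # Fewer proposal queries shrink per-stage activation memory.
    model=dict(rpn_head=dict(num_proposals=100)),
    # Mixed-precision training roughly halves activation memory.
    fp16=dict(loss_scale=512.0),
    # One image per GPU is already the minimum; keep workers modest too.
    data=dict(samples_per_gpu=1, workers_per_gpu=2),
)
```

On an 11 GB card such as the 2080 Ti, the 100-proposal config is generally the more realistic starting point than the 300-proposal one.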

Training Log?

Dear authors,

Do you have training log files available? Also, I got the three warnings shown below during training. Did you see them too, and are they safe to ignore?

  1. queryinst/mmdet/models/backbones/resnet.py:400: UserWarning: DeprecationWarning: pretrained is a deprecated, please use "init_cfg" instead warnings.warn('DeprecationWarning: pretrained is a deprecated

  2. python3.7/site-packages/mmcv/cnn/bricks/conv_module.py:107: UserWarning: ConvModule has norm and bias at the same time
    warnings.warn('ConvModule has norm and bias at the same time')

  3. [W reducer.cpp:346] Warning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is
    not an error, but may impair performance. grad.sizes() = [80, 256, 1, 1], strides() = [256, 1, 256, 256]

Training Memory Usage

Thanks for sharing the implementation!

I got an out-of-memory error with this config: https://github.com/hustvl/QueryInst/blob/main/configs/queryinst/queryinst_swin_large_patch4_window7_fpn_300_proposals_crop_mstrain_400-1200_50e_coco.py. I used 8 GPUs, each with 32 GB of memory.

What GPUs did you use to conduct your experiments?


mAP = 0 all the time

In the training log, the mAP is 0 all the time:

bbox_mAP: 0.0000, bbox_mAP_50: 0.0000, bbox_mAP_75: 0.0000, bbox_mAP_s: -1.0000, bbox_mAP_m: -1.0000, bbox_mAP_l: 0.0000, bbox_mAP_copypaste: 0.000 0.000 0.000 -1.000 -1.000 0.000, segm_mAP: 0.0000, segm_mAP_50: 0.0000, segm_mAP_75: 0.0000, segm_mAP_s: -1.0000, segm_mAP_m: -1.0000, segm_mAP_l: 0.0000, segm_mAP_copypaste: 0.000 0.000 0.000 -1.000 -1.000 0.000

Do you know how to solve it? Thank you
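A constant 0 mAP often traces back to a mismatch between the dataset's category ids and the `num_classes` set in the config. A self-contained diagnostic sketch for COCO-style annotation files; this is illustrative stdlib code, not part of this repo:

```python
import json

def check_num_classes(ann_path, num_classes):
    """Compare the categories in a COCO-style annotation file with the
    num_classes configured in the model. A mismatch is a common cause of
    mAP staying at 0. Returns (matches, sorted category ids)."""
    with open(ann_path) as f:
        ann = json.load(f)
    cat_ids = sorted(c['id'] for c in ann['categories'])
    return len(cat_ids) == num_classes, cat_ids
```

Also worth verifying: that the dataset's `classes` tuple in the config matches the category names in the annotation file, in the same order.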

Welcome update to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 2.0 repos branches:

Library          | OpenMMLab 1.0 branch | OpenMMLab 2.0 branch
MMEngine         | –                    | 0.x
MMCV             | 1.x                  | 2.x
MMDetection      | 0.x, 1.x, 2.x        | 3.x
MMAction2        | 0.x                  | 1.x
MMClassification | 0.x                  | 1.x
MMSegmentation   | 0.x                  | 1.x
MMDetection3D    | 0.x                  | 1.x
MMEditing        | 0.x                  | 1.x
MMPose           | 0.x                  | 1.x
MMDeploy         | 0.x                  | 1.x
MMTracking       | 0.x                  | 1.x
MMOCR            | 0.x                  | 1.x
MMRazor          | 0.x                  | 1.x
MMSelfSup        | 0.x                  | 1.x
MMRotate         | 1.x                  | 1.x
MMYOLO           | –                    | 0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.

Loss value when training QueryInst

Hi all,
I'm really impressed with your work and the performance of QueryInst, so I tried to train it on my custom dataset, one I have already trained successfully with Swin Transformer. Unfortunately, when training QueryInst, all loss values are zero except loss_cls. Can you help me solve this? I'd really appreciate it.

lr: 1.499e-05, eta: 17:44:38, time: 0.620, data_time: 0.004, memory: 10924, stage0_loss_cls: 125712.0807, stage0_pos_acc: 0.0000, stage0_loss_bbox: 0.0000, stage0_loss_iou: 0.0000, stage0_loss_mask: 0.0000, stage1_loss_cls: 90232.6852, stage1_pos_acc: 0.0000, stage1_loss_bbox: 0.0000, stage1_loss_iou: 0.0000, stage1_loss_mask: 0.0000, stage2_loss_cls: 46718.8418, stage2_pos_acc: 0.0000, stage2_loss_bbox: 0.0000, stage2_loss_iou: 0.0000, stage2_loss_mask: 0.0000, stage3_loss_cls: 32931.4105, stage3_pos_acc: 0.0000, stage3_loss_bbox: 0.0000, stage3_loss_iou: 0.0000, stage3_loss_mask: 0.0000, stage4_loss_cls: 41905.0223, stage4_pos_acc: 0.0000, stage4_loss_bbox: 0.0000, stage4_loss_iou: 0.0000, stage4_loss_mask: 0.0000, stage5_loss_cls: 47268.3891, stage5_pos_acc: 0.0000, stage5_loss_bbox: 0.0000, stage5_loss_iou: 0.0000, stage5_loss_mask: 0.0000, loss: 384768.4313, grad_norm: 1482184.3400

Looking forward to your reply!
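The grad_norm of ~1.5e6 in the log above suggests the loss exploded early, after which the matcher finds no positives (pos_acc is 0 everywhere) and only loss_cls stays non-zero. Two common mitigations in MMDetection-style configs are gradient clipping and a gentler warmup; the key names follow mmdet 2.x conventions, and their exact values for this dataset are assumptions:

```python
# Clip gradients so a single bad batch cannot blow up the weights.
optimizer_config = dict(grad_clip=dict(max_norm=1.0, norm_type=2))

# A longer, gentler linear warmup; query-based detectors are sensitive
# to the early learning rate. Step epochs here are illustrative.
lr_config = dict(policy='step', warmup='linear',
                 warmup_iters=1000, warmup_ratio=0.001, step=[27, 33])
```

It is also worth re-checking `num_classes` against the custom dataset, since a label mismatch produces the same zero-positive pattern.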

pytorch2onnx failed

Thank you for your nice work!

I was trying to convert the pre-trained model to ONNX but failed.
Would you be able to direct me a bit? Many thanks.

$ python tools/deployment/pytorch2onnx.py
configs/queryinst/queryinst_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py
work_dirs/queryinst_r50_300_queries-6b5ca732.pth
--output-file work_dirs/queryinst_r50_300_queries.onnx
--shape 1333 800
--input-img ./mmdetection/tests/data/color.jpg
--verify

apex is not installed
./.pyenv/versions/QueryInst/lib/python3.7/site-packages/mmcv/utils/misc.py:324: UserWarning: "dropout" is deprecated in FFN.__init__, please use "ffn_drop" instead
f'"{src_arg_name}" is deprecated in '
./.pyenv/versions/QueryInst/lib/python3.7/site-packages/mmcv/cnn/bricks/conv_module.py:151: UserWarning: Unnecessary conv bias before batch/instance norm
'Unnecessary conv bias before batch/instance norm')
Use load_from_local loader
Use load_from_local loader
./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
./mmdet/models/dense_heads/embedding_rpn_head.py:76: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
num_imgs = len(imgs[0])
./mmdet/core/bbox/transforms.py:70: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if bboxes.size(0) > 0:
./.pyenv/versions/QueryInst/lib/python3.7/site-packages/mmcv/ops/roi_align.py:80: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert rois.size(1) == 5, 'RoI must be (idx, x1, y1, x2, y2)!'
./mmdet/models/roi_heads/query_roi_head.py:163: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
rois.new_zeros(len(rois)), # dummy arg
./mmdet/models/roi_heads/bbox_heads/bbox_head.py:511: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert rois.size(1) == 4 or rois.size(1) == 5, repr(rois.shape)
./mmdet/models/roi_heads/bbox_heads/bbox_head.py:517: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert bbox_pred.size(1) == 4
./mmdet/models/roi_heads/bbox_heads/bbox_head.py:519: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if rois.size(1) == 4:
./mmdet/core/bbox/coder/delta_xywh_bbox_coder.py:87: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert pred_bboxes.size(0) == bboxes.size(0)
./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at /pytorch/aten/src/ATen/native/BinaryOps.cpp:467.)
return torch.floor_divide(self, other)
./mmdet/models/roi_heads/bbox_heads/bbox_head.py:492: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
keep_inds[:len(pos_is_gts)] = pos_keep
./mmdet/core/bbox/transforms.py:110: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if bboxes.shape[0] == 0:
./mmdet/core/bbox/transforms.py:114: TracerWarning: Converting a tensor to a NumPy array might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
bboxes = bboxes.detach().cpu().numpy()
./mmdet/core/bbox/transforms.py:115: TracerWarning: Converting a tensor to a NumPy array might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
labels = labels.detach().cpu().numpy()
Traceback (most recent call last):
File "tools/deployment/pytorch2onnx.py", line 275, in
dynamic_export=args.dynamic_export)
File "tools/deployment/pytorch2onnx.py", line 77, in pytorch2onnx
dynamic_axes=dynamic_axes)
File "./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/onnx/__init__.py", line 280, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/onnx/utils.py", line 94, in export
use_external_data_format=use_external_data_format)
File "./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/onnx/utils.py", line 695, in _export
dynamic_axes=dynamic_axes)
File "./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/onnx/utils.py", line 459, in _model_to_graph
_retain_param_name)
File "./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/onnx/utils.py", line 422, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/onnx/utils.py", line 373, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/jit/_trace.py", line 1160, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/jit/_trace.py", line 132, in forward
self._force_outplace,
File "./.pyenv/versions/QueryInst/lib/python3.7/site-packages/torch/jit/_trace.py", line 121, in wrapper
out_vars, _ = _flatten(outs)
RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs.
Dictionaries and strings are also accepted, but their usage is not recommended.
Here, received an input of unsupported type: numpy.ndarray

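The traceback bottoms out at "received an input of unsupported type: numpy.ndarray": as the transforms.py warnings above show, the test path detaches results to numpy (`bboxes.detach().cpu().numpy()`), which torch.jit's tracer cannot record. The usual workaround is to wrap the forward so numpy leaves are mapped back to tensors before they reach the tracer. A minimal, self-contained sketch of that mapping helper; the name and its use are illustrative, not this repo's API:

```python
def deep_map(value, predicate, convert):
    """Recursively apply `convert` to leaves matching `predicate` inside
    nested lists/tuples, leaving other values untouched. For the ONNX
    export case one would call it on the model outputs with, e.g.,
    predicate=lambda v: isinstance(v, np.ndarray) and
    convert=torch.from_numpy, so the tracer only ever sees tensors."""
    if predicate(value):
        return convert(value)
    if isinstance(value, (list, tuple)):
        return type(value)(deep_map(v, predicate, convert) for v in value)
    return value
```

Note that the stock `tools/deployment/pytorch2onnx.py` in mmdet of this era only supported a subset of detectors, so query-based heads may need such per-model patching.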

cityscapes config

Hi, I'm a graduate student and I've been reproducing your algorithm recently. Could you share the details of the Cityscapes config file? Thanks a lot!

How do you handle the loss_match of track_head in QueryTrack?

Dear authors:
In MaskTrack, when calculating the loss_match of the track_head, a picture with only one instance does not participate in the calculation. So if every picture in a batch has only one instance, the loss_match of that batch will be 0. How do you solve this problem in QueryTrack?
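One hedged way to sketch the guard being asked about (this is illustrative code, not the authors' implementation): average the match loss only over images that have more than one instance, and fall back to zero when no image in the batch qualifies, so the batch simply contributes no match gradient instead of an undefined mean.

```python
def safe_match_loss(per_image_losses, instance_counts):
    """Average per-image match losses over images with >1 instance.
    When no image qualifies (the all-single-instance batch described
    above), return 0.0 so the batch contributes no match gradient
    rather than producing NaN from an empty mean."""
    losses = [l for l, n in zip(per_image_losses, instance_counts) if n > 1]
    if not losses:
        return 0.0
    return sum(losses) / len(losses)
```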

KeyError: 'QueryInst is not in the models registry'

Hi, I tried to train on my own dataset on Windows 10, but got the following error.

Training command:
python tools/train.py configs/queryinst/queryinst_r50_fpn_1x_coco_scratch.py
I changed num_classes in queryinst_r50_fpn_1x_coco_scratch.py to the number of classes in my dataset.

After running, the following error occurred:
Traceback (most recent call last):
File "tools/train.py", line 188, in
main()
File "tools/train.py", line 161, in main
test_cfg=cfg.get('test_cfg'))
File "C:\Users\RTX3090.conda\envs\open-mmlab\lib\site-packages\mmdet\models\builder.py", line 58, in build_detector
cfg, default_args=dict(train_cfg=train_cfg, test_cfg=test_cfg))
File "d:\lbq\code\swin_tsf\mmsegmentation-master\mmcv\mmcv\utils\registry.py", line 210, in build
return self.build_func(*args, **kwargs, registry=self)
File "d:\lbq\code\swin_tsf\mmsegmentation-master\mmcv\mmcv\cnn\builder.py", line 26, in build_model_from_cfg
return build_from_cfg(cfg, registry, default_args)
File "d:\lbq\code\swin_tsf\mmsegmentation-master\mmcv\mmcv\utils\registry.py", line 44, in build_from_cfg
f'{obj_type} is not in the {registry.name} registry')
KeyError: 'QueryInst is not in the models registry'
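Note that the traceback resolves mmcv from d:\lbq\code\swin_tsf\mmsegmentation-master\mmcv rather than from the installed package, so the registry being searched is likely not the one this repo's modules register into; installing this repo's mmdet (`pip install -v -e .`) in a clean environment usually fixes it. A related config-side fallback is mmcv's `custom_imports`, which forces the registering modules to be imported when the config loads; whether it is needed here is an assumption:

```python
# Force the modules that register QueryInst to be imported at config
# load time (mmcv convention; the module path is an assumption).
custom_imports = dict(imports=['mmdet.models'],
                      allow_failed_imports=False)
```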

Ground Truth Not Found

My command for training:
./tools/dist_train.sh configs/queryinst/queryinst_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py 4

Training got stuck when the model encountered an empty cropped ground truth.
My log is shown below:

2021-08-26 10:39:36,940 - mmdet - INFO - workflow: [('train', 1)], max: 36 epochs
2021-08-26 10:41:30,531 - mmdet - INFO - Epoch [1][50/14659]	lr: 1.249e-06, eta: 13 days, 20:51:37, time: 2.271, data_time: 0.978, memory: 9683, stage0_loss_cls: 2.2784, stage0_pos_acc: 2.0437, stage0_loss_bbox: 2.9789, stage0_loss_iou: 1.6788, stage0_loss_mask: 5.9337, stage1_loss_cls: 2.3385, stage1_pos_acc: 2.0824, stage1_loss_bbox: 4.2721, stage1_loss_iou: 1.8811, stage1_loss_mask: 6.2975, stage2_loss_cls: 2.3301, stage2_pos_acc: 0.9363, stage2_loss_bbox: 2.9707, stage2_loss_iou: 1.9285, stage2_loss_mask: 5.8670, stage3_loss_cls: 2.1712, stage3_pos_acc: 3.0306, stage3_loss_bbox: 3.1920, stage3_loss_iou: 2.2616, stage3_loss_mask: 6.0022, stage4_loss_cls: 2.2627, stage4_pos_acc: 1.7393, stage4_loss_bbox: 2.8403, stage4_loss_iou: 2.3233, stage4_loss_mask: 5.8660, stage5_loss_cls: 2.2882, stage5_pos_acc: 1.5126, stage5_loss_bbox: 2.8825, stage5_loss_iou: 2.3851, stage5_loss_mask: 5.8457, loss: 81.0761, grad_norm: 9063.7358
2021-08-26 10:42:34,279 - mmdet - INFO - Epoch [1][100/14659]	lr: 2.498e-06, eta: 10 days, 19:53:52, time: 1.276, data_time: 0.029, memory: 9683, stage0_loss_cls: 2.2928, stage0_pos_acc: 2.2991, stage0_loss_bbox: 2.4200, stage0_loss_iou: 1.5812, stage0_loss_mask: 5.5926, stage1_loss_cls: 2.3128, stage1_pos_acc: 1.5530, stage1_loss_bbox: 2.8068, stage1_loss_iou: 1.7299, stage1_loss_mask: 5.6873, stage2_loss_cls: 2.2738, stage2_pos_acc: 1.3359, stage2_loss_bbox: 2.0444, stage2_loss_iou: 1.7757, stage2_loss_mask: 5.1849, stage3_loss_cls: 2.0966, stage3_pos_acc: 3.8337, stage3_loss_bbox: 1.9937, stage3_loss_iou: 1.9307, stage3_loss_mask: 5.0764, stage4_loss_cls: 2.2168, stage4_pos_acc: 2.2209, stage4_loss_bbox: 1.8457, stage4_loss_iou: 1.9443, stage4_loss_mask: 4.9774, stage5_loss_cls: 2.1422, stage5_pos_acc: 2.2032, stage5_loss_bbox: 1.8217, stage5_loss_iou: 1.9223, stage5_loss_mask: 4.9423, loss: 68.6125, grad_norm: 5779.0945
2021-08-26 10:43:39,684 - mmdet - INFO - Epoch [1][150/14659]	lr: 3.746e-06, eta: 9 days, 21:06:05, time: 1.307, data_time: 0.030, memory: 9683, stage0_loss_cls: 2.1774, stage0_pos_acc: 2.4055, stage0_loss_bbox: 1.7417, stage0_loss_iou: 1.5327, stage0_loss_mask: 5.2476, stage1_loss_cls: 2.1912, stage1_pos_acc: 1.9714, stage1_loss_bbox: 1.5765, stage1_loss_iou: 1.6106, stage1_loss_mask: 4.8875, stage2_loss_cls: 2.1093, stage2_pos_acc: 3.1017, stage2_loss_bbox: 1.4834, stage2_loss_iou: 1.7319, stage2_loss_mask: 4.6163, stage3_loss_cls: 1.9555, stage3_pos_acc: 9.2308, stage3_loss_bbox: 1.4211, stage3_loss_iou: 1.7616, stage3_loss_mask: 4.4452, stage4_loss_cls: 2.0144, stage4_pos_acc: 13.1486, stage4_loss_bbox: 1.3939, stage4_loss_iou: 1.7237, stage4_loss_mask: 4.5496, stage5_loss_cls: 1.9050, stage5_pos_acc: 18.0571, stage5_loss_bbox: 1.4082, stage5_loss_iou: 1.6820, stage5_loss_mask: 4.6783, loss: 59.8446, grad_norm: 2041.8116
2021-08-26 10:44:43,974 - mmdet - INFO - Epoch [1][200/14659]	lr: 4.995e-06, eta: 9 days, 8:55:17, time: 1.286, data_time: 0.027, memory: 9683, stage0_loss_cls: 2.1020, stage0_pos_acc: 2.5095, stage0_loss_bbox: 1.4044, stage0_loss_iou: 1.4558, stage0_loss_mask: 4.7985, stage1_loss_cls: 2.0902, stage1_pos_acc: 3.7327, stage1_loss_bbox: 1.2853, stage1_loss_iou: 1.5602, stage1_loss_mask: 4.4117, stage2_loss_cls: 1.9464, stage2_pos_acc: 14.1383, stage2_loss_bbox: 1.2960, stage2_loss_iou: 1.5834, stage2_loss_mask: 4.4650, stage3_loss_cls: 1.9040, stage3_pos_acc: 18.7631, stage3_loss_bbox: 1.3171, stage3_loss_iou: 1.5608, stage3_loss_mask: 4.4906, stage4_loss_cls: 1.9529, stage4_pos_acc: 19.2329, stage4_loss_bbox: 1.4226, stage4_loss_iou: 1.6032, stage4_loss_mask: 4.6667, stage5_loss_cls: 1.8324, stage5_pos_acc: 21.7396, stage5_loss_bbox: 1.4392, stage5_loss_iou: 1.6493, stage5_loss_mask: 4.6460, loss: 56.8839, grad_norm: 1372.3742
2021-08-26 10:45:50,304 - mmdet - INFO - Epoch [1][250/14659]	lr: 6.244e-06, eta: 9 days, 2:47:51, time: 1.327, data_time: 0.028, memory: 9694, stage0_loss_cls: 2.0222, stage0_pos_acc: 5.2418, stage0_loss_bbox: 1.2542, stage0_loss_iou: 1.4832, stage0_loss_mask: 4.7497, stage1_loss_cls: 1.9436, stage1_pos_acc: 10.3808, stage1_loss_bbox: 1.1948, stage1_loss_iou: 1.5693, stage1_loss_mask: 4.5709, stage2_loss_cls: 1.9026, stage2_pos_acc: 18.4786, stage2_loss_bbox: 1.2399, stage2_loss_iou: 1.5680, stage2_loss_mask: 4.6608, stage3_loss_cls: 1.8653, stage3_pos_acc: 21.0621, stage3_loss_bbox: 1.2361, stage3_loss_iou: 1.5910, stage3_loss_mask: 4.5865, stage4_loss_cls: 1.8833, stage4_pos_acc: 22.2822, stage4_loss_bbox: 1.2228, stage4_loss_iou: 1.5849, stage4_loss_mask: 4.6193, stage5_loss_cls: 1.8216, stage5_pos_acc: 20.0130, stage5_loss_bbox: 1.2325, stage5_loss_iou: 1.5996, stage5_loss_mask: 4.6086, loss: 56.0107, grad_norm: 612.7333
2021-08-26 10:46:54,223 - mmdet - INFO - Epoch [1][300/14659]	lr: 7.493e-06, eta: 8 days, 21:30:13, time: 1.277, data_time: 0.028, memory: 9704, stage0_loss_cls: 1.9121, stage0_pos_acc: 13.8155, stage0_loss_bbox: 1.2645, stage0_loss_iou: 1.4868, stage0_loss_mask: 4.5622, stage1_loss_cls: 1.8526, stage1_pos_acc: 22.7398, stage1_loss_bbox: 1.2319, stage1_loss_iou: 1.5474, stage1_loss_mask: 4.4435, stage2_loss_cls: 1.8383, stage2_pos_acc: 23.9177, stage2_loss_bbox: 1.2318, stage2_loss_iou: 1.5391, stage2_loss_mask: 4.4818, stage3_loss_cls: 1.7926, stage3_pos_acc: 25.6336, stage3_loss_bbox: 1.2685, stage3_loss_iou: 1.5620, stage3_loss_mask: 4.5198, stage4_loss_cls: 1.8395, stage4_pos_acc: 25.4535, stage4_loss_bbox: 1.2195, stage4_loss_iou: 1.5786, stage4_loss_mask: 4.4744, stage5_loss_cls: 1.7601, stage5_pos_acc: 26.6780, stage5_loss_bbox: 1.2468, stage5_loss_iou: 1.5819, stage5_loss_mask: 4.4257, loss: 54.6615, grad_norm: 539.8647
2021-08-26 10:47:59,050 - mmdet - INFO - Epoch [1][350/14659]	lr: 8.741e-06, eta: 8 days, 18:07:15, time: 1.297, data_time: 0.027, memory: 10247, stage0_loss_cls: 1.8577, stage0_pos_acc: 21.1812, stage0_loss_bbox: 1.1807, stage0_loss_iou: 1.4646, stage0_loss_mask: 4.5113, stage1_loss_cls: 1.8366, stage1_pos_acc: 22.9034, stage1_loss_bbox: 1.1725, stage1_loss_iou: 1.5198, stage1_loss_mask: 4.4354, stage2_loss_cls: 1.8283, stage2_pos_acc: 24.0835, stage2_loss_bbox: 1.1699, stage2_loss_iou: 1.5183, stage2_loss_mask: 4.4518, stage3_loss_cls: 1.7548, stage3_pos_acc: 25.1505, stage3_loss_bbox: 1.1876, stage3_loss_iou: 1.5402, stage3_loss_mask: 4.3562, stage4_loss_cls: 1.7753, stage4_pos_acc: 25.4366, stage4_loss_bbox: 1.1960, stage4_loss_iou: 1.5559, stage4_loss_mask: 4.4155, stage5_loss_cls: 1.7162, stage5_pos_acc: 25.3584, stage5_loss_bbox: 1.1832, stage5_loss_iou: 1.5339, stage5_loss_mask: 4.4058, loss: 53.5674, grad_norm: 324.1903
2021-08-26 10:49:02,791 - mmdet - INFO - Epoch [1][400/14659]	lr: 9.990e-06, eta: 8 days, 15:11:47, time: 1.276, data_time: 0.029, memory: 10247, stage0_loss_cls: 1.8611, stage0_pos_acc: 25.3944, stage0_loss_bbox: 1.1805, stage0_loss_iou: 1.4688, stage0_loss_mask: 4.4686, stage1_loss_cls: 1.8132, stage1_pos_acc: 24.9371, stage1_loss_bbox: 1.1515, stage1_loss_iou: 1.4919, stage1_loss_mask: 4.3721, stage2_loss_cls: 1.7563, stage2_pos_acc: 27.1302, stage2_loss_bbox: 1.1182, stage2_loss_iou: 1.4570, stage2_loss_mask: 4.3585, stage3_loss_cls: 1.7043, stage3_pos_acc: 28.2956, stage3_loss_bbox: 1.1452, stage3_loss_iou: 1.4661, stage3_loss_mask: 4.3051, stage4_loss_cls: 1.7711, stage4_pos_acc: 28.3810, stage4_loss_bbox: 1.1635, stage4_loss_iou: 1.5052, stage4_loss_mask: 4.2992, stage5_loss_cls: 1.6963, stage5_pos_acc: 27.6587, stage5_loss_bbox: 1.2358, stage5_loss_iou: 1.5787, stage5_loss_mask: 4.3334, loss: 52.7017, grad_norm: 268.2526
2021-08-26 10:50:07,148 - mmdet - INFO - Epoch [1][450/14659]	lr: 1.124e-05, eta: 8 days, 13:06:45, time: 1.288, data_time: 0.028, memory: 10247, stage0_loss_cls: 1.8659, stage0_pos_acc: 21.0187, stage0_loss_bbox: 1.1456, stage0_loss_iou: 1.4673, stage0_loss_mask: 4.4616, stage1_loss_cls: 1.8124, stage1_pos_acc: 20.8431, stage1_loss_bbox: 1.1410, stage1_loss_iou: 1.5024, stage1_loss_mask: 4.3939, stage2_loss_cls: 1.7415, stage2_pos_acc: 22.2469, stage2_loss_bbox: 1.0791, stage2_loss_iou: 1.4552, stage2_loss_mask: 4.3218, stage3_loss_cls: 1.6844, stage3_pos_acc: 23.0229, stage3_loss_bbox: 1.0540, stage3_loss_iou: 1.4710, stage3_loss_mask: 4.2206, stage4_loss_cls: 1.7414, stage4_pos_acc: 23.0507, stage4_loss_bbox: 1.0820, stage4_loss_iou: 1.4799, stage4_loss_mask: 4.2415, stage5_loss_cls: 1.6787, stage5_pos_acc: 22.9782, stage5_loss_bbox: 1.1479, stage5_loss_iou: 1.5137, stage5_loss_mask: 4.2180, loss: 51.9207, grad_norm: 213.7757
2021-08-26 10:51:12,068 - mmdet - INFO - Epoch [1][500/14659]	lr: 1.249e-05, eta: 8 days, 11:35:48, time: 1.298, data_time: 0.028, memory: 10247, stage0_loss_cls: 1.8008, stage0_pos_acc: 26.0289, stage0_loss_bbox: 1.1152, stage0_loss_iou: 1.4964, stage0_loss_mask: 4.5369, stage1_loss_cls: 1.7177, stage1_pos_acc: 26.9426, stage1_loss_bbox: 1.0998, stage1_loss_iou: 1.4901, stage1_loss_mask: 4.4598, stage2_loss_cls: 1.6178, stage2_pos_acc: 28.2533, stage2_loss_bbox: 1.0045, stage2_loss_iou: 1.4458, stage2_loss_mask: 4.2719, stage3_loss_cls: 1.5983, stage3_pos_acc: 28.3737, stage3_loss_bbox: 0.9817, stage3_loss_iou: 1.4316, stage3_loss_mask: 4.1732, stage4_loss_cls: 1.6095, stage4_pos_acc: 28.0076, stage4_loss_bbox: 1.0100, stage4_loss_iou: 1.4404, stage4_loss_mask: 4.1736, stage5_loss_cls: 1.5825, stage5_pos_acc: 27.7242, stage5_loss_bbox: 1.0086, stage5_loss_iou: 1.4421, stage5_loss_mask: 4.1600, loss: 50.6682, grad_norm: 161.7756
2021-08-26 10:52:17,094 - mmdet - INFO - Epoch [1][550/14659]	lr: 1.374e-05, eta: 8 days, 10:22:02, time: 1.299, data_time: 0.031, memory: 10247, stage0_loss_cls: 1.7570, stage0_pos_acc: 26.8588, stage0_loss_bbox: 1.1083, stage0_loss_iou: 1.4725, stage0_loss_mask: 4.3113, stage1_loss_cls: 1.6231, stage1_pos_acc: 28.9210, stage1_loss_bbox: 1.0594, stage1_loss_iou: 1.4476, stage1_loss_mask: 4.2218, stage2_loss_cls: 1.5860, stage2_pos_acc: 28.7443, stage2_loss_bbox: 0.9882, stage2_loss_iou: 1.3822, stage2_loss_mask: 4.0420, stage3_loss_cls: 1.5382, stage3_pos_acc: 29.1749, stage3_loss_bbox: 0.9445, stage3_loss_iou: 1.3533, stage3_loss_mask: 3.9423, stage4_loss_cls: 1.5706, stage4_pos_acc: 29.3126, stage4_loss_bbox: 0.9957, stage4_loss_iou: 1.3977, stage4_loss_mask: 3.9466, stage5_loss_cls: 1.5491, stage5_pos_acc: 30.3116, stage5_loss_bbox: 1.0134, stage5_loss_iou: 1.4122, stage5_loss_mask: 3.9807, loss: 48.6438, grad_norm: 165.5006
2021-08-26 10:53:22,121 - mmdet - INFO - Epoch [1][600/14659]	lr: 1.499e-05, eta: 8 days, 9:22:05, time: 1.302, data_time: 0.029, memory: 10247, stage0_loss_cls: 1.8014, stage0_pos_acc: 24.0576, stage0_loss_bbox: 1.0937, stage0_loss_iou: 1.4583, stage0_loss_mask: 4.4055, stage1_loss_cls: 1.6502, stage1_pos_acc: 25.7283, stage1_loss_bbox: 1.0336, stage1_loss_iou: 1.4023, stage1_loss_mask: 4.2823, stage2_loss_cls: 1.6193, stage2_pos_acc: 25.9719, stage2_loss_bbox: 0.9329, stage2_loss_iou: 1.3439, stage2_loss_mask: 4.0294, stage3_loss_cls: 1.5818, stage3_pos_acc: 26.1097, stage3_loss_bbox: 0.9050, stage3_loss_iou: 1.3209, stage3_loss_mask: 3.9482, stage4_loss_cls: 1.6150, stage4_pos_acc: 25.8387, stage4_loss_bbox: 0.9309, stage4_loss_iou: 1.3462, stage4_loss_mask: 3.9348, stage5_loss_cls: 1.5781, stage5_pos_acc: 26.1663, stage5_loss_bbox: 0.9603, stage5_loss_iou: 1.3778, stage5_loss_mask: 3.9012, loss: 48.4532, grad_norm: 135.0550
2021-08-26 10:54:27,236 - mmdet - INFO - Epoch [1][650/14659]	lr: 1.623e-05, eta: 8 days, 8:30:35, time: 1.301, data_time: 0.029, memory: 10247, stage0_loss_cls: 1.7764, stage0_pos_acc: 24.2415, stage0_loss_bbox: 1.0727, stage0_loss_iou: 1.4545, stage0_loss_mask: 4.3936, stage1_loss_cls: 1.6191, stage1_pos_acc: 26.6580, stage1_loss_bbox: 0.9463, stage1_loss_iou: 1.4043, stage1_loss_mask: 4.0678, stage2_loss_cls: 1.5642, stage2_pos_acc: 26.5745, stage2_loss_bbox: 0.8532, stage2_loss_iou: 1.3563, stage2_loss_mask: 3.7875, stage3_loss_cls: 1.5627, stage3_pos_acc: 25.8296, stage3_loss_bbox: 0.8517, stage3_loss_iou: 1.3410, stage3_loss_mask: 3.7530, stage4_loss_cls: 1.5756, stage4_pos_acc: 27.2208, stage4_loss_bbox: 0.8791, stage4_loss_iou: 1.3547, stage4_loss_mask: 3.7740, stage5_loss_cls: 1.5621, stage5_pos_acc: 27.6374, stage5_loss_bbox: 0.8854, stage5_loss_iou: 1.3647, stage5_loss_mask: 3.7638, loss: 46.9637, grad_norm: 121.3612
2021-08-26 10:55:31,504 - mmdet - INFO - Epoch [1][700/14659]	lr: 1.748e-05, eta: 8 days, 7:37:41, time: 1.287, data_time: 0.026, memory: 10247, stage0_loss_cls: 1.7435, stage0_pos_acc: 26.5202, stage0_loss_bbox: 1.0671, stage0_loss_iou: 1.4283, stage0_loss_mask: 4.2096, stage1_loss_cls: 1.6122, stage1_pos_acc: 26.1169, stage1_loss_bbox: 0.9308, stage1_loss_iou: 1.3396, stage1_loss_mask: 3.8757, stage2_loss_cls: 1.5824, stage2_pos_acc: 28.1921, stage2_loss_bbox: 0.8364, stage2_loss_iou: 1.2766, stage2_loss_mask: 3.7162, stage3_loss_cls: 1.5652, stage3_pos_acc: 27.4108, stage3_loss_bbox: 0.8119, stage3_loss_iou: 1.2662, stage3_loss_mask: 3.6062, stage4_loss_cls: 1.5751, stage4_pos_acc: 28.9362, stage4_loss_bbox: 0.8734, stage4_loss_iou: 1.3164, stage4_loss_mask: 3.6785, stage5_loss_cls: 1.5723, stage5_pos_acc: 29.3925, stage5_loss_bbox: 0.9343, stage5_loss_iou: 1.3846, stage5_loss_mask: 3.6585, loss: 45.8611, grad_norm: 108.1211
2021-08-26 10:56:35,840 - mmdet - INFO - Epoch [1][750/14659]	lr: 1.873e-05, eta: 8 days, 6:51:16, time: 1.286, data_time: 0.025, memory: 10247, stage0_loss_cls: 1.7365, stage0_pos_acc: 25.9305, stage0_loss_bbox: 1.1240, stage0_loss_iou: 1.4425, stage0_loss_mask: 4.2803, stage1_loss_cls: 1.6124, stage1_pos_acc: 27.2223, stage1_loss_bbox: 0.9277, stage1_loss_iou: 1.3490, stage1_loss_mask: 3.8425, stage2_loss_cls: 1.5820, stage2_pos_acc: 27.2568, stage2_loss_bbox: 0.8401, stage2_loss_iou: 1.2861, stage2_loss_mask: 3.6427, stage3_loss_cls: 1.5647, stage3_pos_acc: 28.4669, stage3_loss_bbox: 0.8065, stage3_loss_iou: 1.2604, stage3_loss_mask: 3.5016, stage4_loss_cls: 1.5462, stage4_pos_acc: 28.9539, stage4_loss_bbox: 0.8224, stage4_loss_iou: 1.2827, stage4_loss_mask: 3.5201, stage5_loss_cls: 1.5790, stage5_pos_acc: 28.0625, stage5_loss_bbox: 0.8625, stage5_loss_iou: 1.2923, stage5_loss_mask: 3.5762, loss: 45.2801, grad_norm: 99.4212
2021-08-26 10:57:40,493 - mmdet - INFO - Epoch [1][800/14659]	lr: 1.998e-05, eta: 8 days, 6:14:34, time: 1.294, data_time: 0.029, memory: 10247, stage0_loss_cls: 1.7001, stage0_pos_acc: 27.6462, stage0_loss_bbox: 1.0656, stage0_loss_iou: 1.4303, stage0_loss_mask: 4.2057, stage1_loss_cls: 1.5513, stage1_pos_acc: 28.8365, stage1_loss_bbox: 0.8659, stage1_loss_iou: 1.3150, stage1_loss_mask: 3.6463, stage2_loss_cls: 1.5046, stage2_pos_acc: 29.7402, stage2_loss_bbox: 0.7838, stage2_loss_iou: 1.2468, stage2_loss_mask: 3.4932, stage3_loss_cls: 1.4943, stage3_pos_acc: 30.3968, stage3_loss_bbox: 0.7764, stage3_loss_iou: 1.2411, stage3_loss_mask: 3.4376, stage4_loss_cls: 1.4802, stage4_pos_acc: 31.3031, stage4_loss_bbox: 0.8314, stage4_loss_iou: 1.2847, stage4_loss_mask: 3.5074, stage5_loss_cls: 1.4886, stage5_pos_acc: 33.1703, stage5_loss_bbox: 0.8857, stage5_loss_iou: 1.3279, stage5_loss_mask: 3.5649, loss: 44.1290, grad_norm: 99.3313
2021-08-26 10:58:45,254 - mmdet - INFO - Epoch [1][850/14659]	lr: 2.123e-05, eta: 8 days, 5:42:30, time: 1.295, data_time: 0.028, memory: 10247, stage0_loss_cls: 1.7266, stage0_pos_acc: 25.7188, stage0_loss_bbox: 1.0323, stage0_loss_iou: 1.4061, stage0_loss_mask: 4.1405, stage1_loss_cls: 1.5495, stage1_pos_acc: 26.3889, stage1_loss_bbox: 0.8425, stage1_loss_iou: 1.2886, stage1_loss_mask: 3.5787, stage2_loss_cls: 1.5065, stage2_pos_acc: 28.2313, stage2_loss_bbox: 0.7561, stage2_loss_iou: 1.2177, stage2_loss_mask: 3.4587, stage3_loss_cls: 1.4781, stage3_pos_acc: 29.3448, stage3_loss_bbox: 0.7380, stage3_loss_iou: 1.2051, stage3_loss_mask: 3.3795, stage4_loss_cls: 1.4777, stage4_pos_acc: 29.7030, stage4_loss_bbox: 0.7641, stage4_loss_iou: 1.2305, stage4_loss_mask: 3.4189, stage5_loss_cls: 1.4723, stage5_pos_acc: 30.1511, stage5_loss_bbox: 0.7887, stage5_loss_iou: 1.2514, stage5_loss_mask: 3.3970, loss: 43.1052, grad_norm: 91.7964
2021-08-26 10:59:50,234 - mmdet - INFO - Epoch [1][900/14659]	lr: 2.248e-05, eta: 8 days, 5:16:40, time: 1.300, data_time: 0.028, memory: 10247, stage0_loss_cls: 1.6663, stage0_pos_acc: 27.5676, stage0_loss_bbox: 1.0314, stage0_loss_iou: 1.3817, stage0_loss_mask: 4.0167, stage1_loss_cls: 1.5113, stage1_pos_acc: 29.0569, stage1_loss_bbox: 0.8031, stage1_loss_iou: 1.2455, stage1_loss_mask: 3.4317, stage2_loss_cls: 1.4575, stage2_pos_acc: 30.5629, stage2_loss_bbox: 0.7264, stage2_loss_iou: 1.1641, stage2_loss_mask: 3.2658, stage3_loss_cls: 1.4436, stage3_pos_acc: 32.9249, stage3_loss_bbox: 0.7004, stage3_loss_iou: 1.1511, stage3_loss_mask: 3.1839, stage4_loss_cls: 1.4497, stage4_pos_acc: 32.7798, stage4_loss_bbox: 0.7274, stage4_loss_iou: 1.1699, stage4_loss_mask: 3.2156, stage5_loss_cls: 1.4516, stage5_pos_acc: 32.8581, stage5_loss_bbox: 0.7502, stage5_loss_iou: 1.1905, stage5_loss_mask: 3.2463, loss: 41.3817, grad_norm: 82.7916
2021-08-26 11:00:55,297 - mmdet - INFO - Epoch [1][950/14659]	lr: 2.373e-05, eta: 8 days, 4:53:37, time: 1.301, data_time: 0.029, memory: 10247, stage0_loss_cls: 1.6926, stage0_pos_acc: 24.8302, stage0_loss_bbox: 1.0278, stage0_loss_iou: 1.4474, stage0_loss_mask: 4.1217, stage1_loss_cls: 1.5174, stage1_pos_acc: 26.3150, stage1_loss_bbox: 0.7981, stage1_loss_iou: 1.3047, stage1_loss_mask: 3.5163, stage2_loss_cls: 1.4542, stage2_pos_acc: 29.7413, stage2_loss_bbox: 0.6940, stage2_loss_iou: 1.2135, stage2_loss_mask: 3.3908, stage3_loss_cls: 1.4345, stage3_pos_acc: 30.6539, stage3_loss_bbox: 0.6771, stage3_loss_iou: 1.1896, stage3_loss_mask: 3.3288, stage4_loss_cls: 1.4294, stage4_pos_acc: 31.8011, stage4_loss_bbox: 0.6830, stage4_loss_iou: 1.1847, stage4_loss_mask: 3.3438, stage5_loss_cls: 1.4342, stage5_pos_acc: 30.7768, stage5_loss_bbox: 0.6908, stage5_loss_iou: 1.1846, stage5_loss_mask: 3.3271, loss: 42.0862, grad_norm: 80.4314
2021-08-26 11:01:59,821 - mmdet - INFO - Exp name: queryinst_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py
2021-08-26 11:01:59,821 - mmdet - INFO - Epoch [1][1000/14659]	lr: 2.498e-05, eta: 8 days, 4:28:13, time: 1.290, data_time: 0.029, memory: 10247, stage0_loss_cls: 1.7185, stage0_pos_acc: 23.4859, stage0_loss_bbox: 1.0140, stage0_loss_iou: 1.3786, stage0_loss_mask: 3.8208, stage1_loss_cls: 1.5292, stage1_pos_acc: 27.5284, stage1_loss_bbox: 0.8249, stage1_loss_iou: 1.2268, stage1_loss_mask: 3.2860, stage2_loss_cls: 1.4810, stage2_pos_acc: 28.5340, stage2_loss_bbox: 0.7470, stage2_loss_iou: 1.1554, stage2_loss_mask: 3.1576, stage3_loss_cls: 1.4664, stage3_pos_acc: 31.6244, stage3_loss_bbox: 0.7178, stage3_loss_iou: 1.1216, stage3_loss_mask: 3.0968, stage4_loss_cls: 1.4632, stage4_pos_acc: 31.3865, stage4_loss_bbox: 0.7182, stage4_loss_iou: 1.1125, stage4_loss_mask: 3.1103, stage5_loss_cls: 1.4789, stage5_pos_acc: 31.1255, stage5_loss_bbox: 0.7303, stage5_loss_iou: 1.1205, stage5_loss_mask: 3.1161, loss: 40.5925, grad_norm: 91.0885
2021-08-26 11:03:05,687 - mmdet - INFO - Epoch [1][1050/14659]	lr: 2.500e-05, eta: 8 days, 4:16:47, time: 1.318, data_time: 0.028, memory: 10260, stage0_loss_cls: 1.6991, stage0_pos_acc: 26.2447, stage0_loss_bbox: 1.0155, stage0_loss_iou: 1.4066, stage0_loss_mask: 3.8848, stage1_loss_cls: 1.4980, stage1_pos_acc: 27.5659, stage1_loss_bbox: 0.7647, stage1_loss_iou: 1.2471, stage1_loss_mask: 3.2741, stage2_loss_cls: 1.4364, stage2_pos_acc: 29.7703, stage2_loss_bbox: 0.6777, stage2_loss_iou: 1.1569, stage2_loss_mask: 3.1341, stage3_loss_cls: 1.4320, stage3_pos_acc: 30.3810, stage3_loss_bbox: 0.6392, stage3_loss_iou: 1.1158, stage3_loss_mask: 3.0552, stage4_loss_cls: 1.4235, stage4_pos_acc: 30.6780, stage4_loss_bbox: 0.6377, stage4_loss_iou: 1.1067, stage4_loss_mask: 3.0394, stage5_loss_cls: 1.4280, stage5_pos_acc: 30.8946, stage5_loss_bbox: 0.6443, stage5_loss_iou: 1.1061, stage5_loss_mask: 3.0546, loss: 39.8778, grad_norm: 81.4806
2021-08-26 11:04:09,375 - mmdet - INFO - Epoch [1][1100/14659]	lr: 2.500e-05, eta: 8 days, 3:48:30, time: 1.274, data_time: 0.026, memory: 10260, stage0_loss_cls: 1.6987, stage0_pos_acc: 23.8894, stage0_loss_bbox: 1.0367, stage0_loss_iou: 1.3789, stage0_loss_mask: 3.7901, stage1_loss_cls: 1.4886, stage1_pos_acc: 28.3727, stage1_loss_bbox: 0.7877, stage1_loss_iou: 1.2206, stage1_loss_mask: 3.2159, stage2_loss_cls: 1.4322, stage2_pos_acc: 30.4321, stage2_loss_bbox: 0.6857, stage2_loss_iou: 1.1131, stage2_loss_mask: 3.1053, stage3_loss_cls: 1.4224, stage3_pos_acc: 32.5551, stage3_loss_bbox: 0.6554, stage3_loss_iou: 1.0726, stage3_loss_mask: 3.0383, stage4_loss_cls: 1.4080, stage4_pos_acc: 33.1763, stage4_loss_bbox: 0.6439, stage4_loss_iou: 1.0641, stage4_loss_mask: 3.0504, stage5_loss_cls: 1.4335, stage5_pos_acc: 32.6161, stage5_loss_bbox: 0.6368, stage5_loss_iou: 1.0557, stage5_loss_mask: 3.0440, loss: 39.4782, grad_norm: 78.5361
2021-08-26 11:05:14,090 - mmdet - INFO - Epoch [1][1150/14659]	lr: 2.500e-05, eta: 8 days, 3:30:28, time: 1.294, data_time: 0.029, memory: 10260, stage0_loss_cls: 1.6715, stage0_pos_acc: 27.3082, stage0_loss_bbox: 1.0007, stage0_loss_iou: 1.4006, stage0_loss_mask: 3.7686, stage1_loss_cls: 1.4621, stage1_pos_acc: 29.3495, stage1_loss_bbox: 0.7468, stage1_loss_iou: 1.2150, stage1_loss_mask: 3.1715, stage2_loss_cls: 1.4031, stage2_pos_acc: 32.7003, stage2_loss_bbox: 0.6451, stage2_loss_iou: 1.1141, stage2_loss_mask: 2.9907, stage3_loss_cls: 1.3903, stage3_pos_acc: 34.3246, stage3_loss_bbox: 0.6216, stage3_loss_iou: 1.0744, stage3_loss_mask: 2.9547, stage4_loss_cls: 1.3798, stage4_pos_acc: 34.3448, stage4_loss_bbox: 0.6325, stage4_loss_iou: 1.0721, stage4_loss_mask: 2.9729, stage5_loss_cls: 1.4005, stage5_pos_acc: 33.8655, stage5_loss_bbox: 0.6101, stage5_loss_iou: 1.0476, stage5_loss_mask: 2.9774, loss: 38.7239, grad_norm: 78.3448
2021-08-26 11:06:18,885 - mmdet - INFO - Epoch [1][1200/14659]	lr: 2.500e-05, eta: 8 days, 3:14:17, time: 1.295, data_time: 0.024, memory: 10260, stage0_loss_cls: 1.6741, stage0_pos_acc: 26.5082, stage0_loss_bbox: 1.0016, stage0_loss_iou: 1.3997, stage0_loss_mask: 3.8476, stage1_loss_cls: 1.4358, stage1_pos_acc: 29.2393, stage1_loss_bbox: 0.7487, stage1_loss_iou: 1.2229, stage1_loss_mask: 3.2792, stage2_loss_cls: 1.3819, stage2_pos_acc: 30.4406, stage2_loss_bbox: 0.6568, stage2_loss_iou: 1.1234, stage2_loss_mask: 3.1402, stage3_loss_cls: 1.3637, stage3_pos_acc: 33.2124, stage3_loss_bbox: 0.6305, stage3_loss_iou: 1.0950, stage3_loss_mask: 3.1075, stage4_loss_cls: 1.3658, stage4_pos_acc: 32.1867, stage4_loss_bbox: 0.6289, stage4_loss_iou: 1.0902, stage4_loss_mask: 3.0865, stage5_loss_cls: 1.3765, stage5_pos_acc: 33.1551, stage5_loss_bbox: 0.6318, stage5_loss_iou: 1.1011, stage5_loss_mask: 3.1301, loss: 39.5195, grad_norm: 76.9653
2021-08-26 11:07:25,167 - mmdet - INFO - Epoch [1][1250/14659]	lr: 2.500e-05, eta: 8 days, 3:10:12, time: 1.326, data_time: 0.035, memory: 10260, stage0_loss_cls: 1.6800, stage0_pos_acc: 25.1354, stage0_loss_bbox: 0.9629, stage0_loss_iou: 1.3814, stage0_loss_mask: 3.5957, stage1_loss_cls: 1.4227, stage1_pos_acc: 28.0095, stage1_loss_bbox: 0.7004, stage1_loss_iou: 1.1653, stage1_loss_mask: 3.0698, stage2_loss_cls: 1.3556, stage2_pos_acc: 31.2645, stage2_loss_bbox: 0.6103, stage2_loss_iou: 1.0676, stage2_loss_mask: 2.9250, stage3_loss_cls: 1.3398, stage3_pos_acc: 33.5218, stage3_loss_bbox: 0.5740, stage3_loss_iou: 1.0287, stage3_loss_mask: 2.8877, stage4_loss_cls: 1.3407, stage4_pos_acc: 34.1780, stage4_loss_bbox: 0.5645, stage4_loss_iou: 1.0191, stage4_loss_mask: 2.8721, stage5_loss_cls: 1.3556, stage5_pos_acc: 33.6427, stage5_loss_bbox: 0.5609, stage5_loss_iou: 1.0110, stage5_loss_mask: 2.8753, loss: 37.3662, grad_norm: 75.3087
2021-08-26 11:08:30,769 - mmdet - INFO - Epoch [1][1300/14659]	lr: 2.500e-05, eta: 8 days, 3:01:29, time: 1.312, data_time: 0.031, memory: 10260, stage0_loss_cls: 1.6479, stage0_pos_acc: 28.9296, stage0_loss_bbox: 0.9668, stage0_loss_iou: 1.4114, stage0_loss_mask: 3.8106, stage1_loss_cls: 1.3825, stage1_pos_acc: 32.6764, stage1_loss_bbox: 0.6959, stage1_loss_iou: 1.2048, stage1_loss_mask: 3.2734, stage2_loss_cls: 1.3125, stage2_pos_acc: 35.9537, stage2_loss_bbox: 0.6092, stage2_loss_iou: 1.1096, stage2_loss_mask: 3.1411, stage3_loss_cls: 1.3076, stage3_pos_acc: 37.4952, stage3_loss_bbox: 0.5831, stage3_loss_iou: 1.0698, stage3_loss_mask: 3.0958, stage4_loss_cls: 1.2939, stage4_pos_acc: 38.1969, stage4_loss_bbox: 0.5723, stage4_loss_iou: 1.0563, stage4_loss_mask: 3.0967, stage5_loss_cls: 1.3119, stage5_pos_acc: 37.5753, stage5_loss_bbox: 0.5783, stage5_loss_iou: 1.0544, stage5_loss_mask: 3.1193, loss: 38.7049, grad_norm: 69.7411
2021-08-26 11:09:37,150 - mmdet - INFO - Epoch [1][1350/14659]	lr: 2.500e-05, eta: 8 days, 2:58:17, time: 1.327, data_time: 0.031, memory: 10260, stage0_loss_cls: 1.6179, stage0_pos_acc: 29.8083, stage0_loss_bbox: 0.9617, stage0_loss_iou: 1.4000, stage0_loss_mask: 3.6221, stage1_loss_cls: 1.3624, stage1_pos_acc: 32.9784, stage1_loss_bbox: 0.7101, stage1_loss_iou: 1.1954, stage1_loss_mask: 3.0473, stage2_loss_cls: 1.3282, stage2_pos_acc: 34.8804, stage2_loss_bbox: 0.6097, stage2_loss_iou: 1.0835, stage2_loss_mask: 2.9072, stage3_loss_cls: 1.3201, stage3_pos_acc: 36.4125, stage3_loss_bbox: 0.5731, stage3_loss_iou: 1.0404, stage3_loss_mask: 2.8688, stage4_loss_cls: 1.3002, stage4_pos_acc: 37.3596, stage4_loss_bbox: 0.5613, stage4_loss_iou: 1.0244, stage4_loss_mask: 2.8589, stage5_loss_cls: 1.3196, stage5_pos_acc: 37.2872, stage5_loss_bbox: 0.5531, stage5_loss_iou: 1.0206, stage5_loss_mask: 2.8500, loss: 37.1359, grad_norm: 74.0133
2021-08-26 11:10:42,106 - mmdet - INFO - Epoch [1][1400/14659]	lr: 2.500e-05, eta: 8 days, 2:46:29, time: 1.299, data_time: 0.027, memory: 10260, stage0_loss_cls: 1.6639, stage0_pos_acc: 24.5354, stage0_loss_bbox: 0.9730, stage0_loss_iou: 1.3973, stage0_loss_mask: 3.6086, stage1_loss_cls: 1.4075, stage1_pos_acc: 27.8628, stage1_loss_bbox: 0.7055, stage1_loss_iou: 1.1755, stage1_loss_mask: 3.0696, stage2_loss_cls: 1.3575, stage2_pos_acc: 31.3315, stage2_loss_bbox: 0.6009, stage2_loss_iou: 1.0746, stage2_loss_mask: 2.9750, stage3_loss_cls: 1.3346, stage3_pos_acc: 35.5107, stage3_loss_bbox: 0.5721, stage3_loss_iou: 1.0375, stage3_loss_mask: 2.9445, stage4_loss_cls: 1.3261, stage4_pos_acc: 34.3865, stage4_loss_bbox: 0.5639, stage4_loss_iou: 1.0271, stage4_loss_mask: 2.9395, stage5_loss_cls: 1.3379, stage5_pos_acc: 34.6709, stage5_loss_bbox: 0.5691, stage5_loss_iou: 1.0290, stage5_loss_mask: 2.9497, loss: 37.6400, grad_norm: 71.6296
2021-08-26 11:11:47,091 - mmdet - INFO - Epoch [1][1450/14659]	lr: 2.500e-05, eta: 8 days, 2:35:03, time: 1.298, data_time: 0.026, memory: 10260, stage0_loss_cls: 1.6694, stage0_pos_acc: 25.8043, stage0_loss_bbox: 0.9478, stage0_loss_iou: 1.3652, stage0_loss_mask: 3.5024, stage1_loss_cls: 1.4135, stage1_pos_acc: 30.3336, stage1_loss_bbox: 0.6791, stage1_loss_iou: 1.1441, stage1_loss_mask: 2.9723, stage2_loss_cls: 1.3526, stage2_pos_acc: 33.2251, stage2_loss_bbox: 0.6031, stage2_loss_iou: 1.0438, stage2_loss_mask: 2.8801, stage3_loss_cls: 1.3430, stage3_pos_acc: 35.2084, stage3_loss_bbox: 0.5677, stage3_loss_iou: 1.0006, stage3_loss_mask: 2.8446, stage4_loss_cls: 1.3277, stage4_pos_acc: 35.9694, stage4_loss_bbox: 0.5671, stage4_loss_iou: 0.9964, stage4_loss_mask: 2.8584, stage5_loss_cls: 1.3488, stage5_pos_acc: 35.4904, stage5_loss_bbox: 0.5653, stage5_loss_iou: 0.9986, stage5_loss_mask: 2.8676, loss: 36.8592, grad_norm: 73.8468
2021-08-26 11:12:52,427 - mmdet - INFO - Epoch [1][1500/14659]	lr: 2.500e-05, eta: 8 days, 2:27:06, time: 1.308, data_time: 0.028, memory: 10260, stage0_loss_cls: 1.6548, stage0_pos_acc: 25.4055, stage0_loss_bbox: 0.9172, stage0_loss_iou: 1.3403, stage0_loss_mask: 3.5027, stage1_loss_cls: 1.3948, stage1_pos_acc: 30.2240, stage1_loss_bbox: 0.6886, stage1_loss_iou: 1.1475, stage1_loss_mask: 2.9771, stage2_loss_cls: 1.3370, stage2_pos_acc: 33.6155, stage2_loss_bbox: 0.5852, stage2_loss_iou: 1.0394, stage2_loss_mask: 2.8556, stage3_loss_cls: 1.3124, stage3_pos_acc: 37.1978, stage3_loss_bbox: 0.5461, stage3_loss_iou: 0.9921, stage3_loss_mask: 2.8452, stage4_loss_cls: 1.3030, stage4_pos_acc: 38.8616, stage4_loss_bbox: 0.5495, stage4_loss_iou: 0.9866, stage4_loss_mask: 2.8609, stage5_loss_cls: 1.3085, stage5_pos_acc: 38.3760, stage5_loss_bbox: 0.5476, stage5_loss_iou: 0.9827, stage5_loss_mask: 2.8690, loss: 36.5439, grad_norm: 70.3628
2021-08-26 11:13:57,001 - mmdet - INFO - Epoch [1][1550/14659]	lr: 2.500e-05, eta: 8 days, 2:15:13, time: 1.292, data_time: 0.027, memory: 10260, stage0_loss_cls: 1.6851, stage0_pos_acc: 24.5149, stage0_loss_bbox: 0.9397, stage0_loss_iou: 1.3625, stage0_loss_mask: 3.4770, stage1_loss_cls: 1.4110, stage1_pos_acc: 28.2324, stage1_loss_bbox: 0.6584, stage1_loss_iou: 1.1252, stage1_loss_mask: 2.8640, stage2_loss_cls: 1.3431, stage2_pos_acc: 32.4330, stage2_loss_bbox: 0.5522, stage2_loss_iou: 1.0091, stage2_loss_mask: 2.7325, stage3_loss_cls: 1.3222, stage3_pos_acc: 34.9220, stage3_loss_bbox: 0.5196, stage3_loss_iou: 0.9702, stage3_loss_mask: 2.7097, stage4_loss_cls: 1.3133, stage4_pos_acc: 35.7501, stage4_loss_bbox: 0.5125, stage4_loss_iou: 0.9522, stage4_loss_mask: 2.7075, stage5_loss_cls: 1.3295, stage5_pos_acc: 35.7375, stage5_loss_bbox: 0.5176, stage5_loss_iou: 0.9538, stage5_loss_mask: 2.7439, loss: 35.7117, grad_norm: 72.5540
2021-08-26 11:15:02,440 - mmdet - INFO - Epoch [1][1600/14659]	lr: 2.500e-05, eta: 8 days, 2:08:33, time: 1.309, data_time: 0.031, memory: 10260, stage0_loss_cls: 1.6480, stage0_pos_acc: 26.2355, stage0_loss_bbox: 0.9417, stage0_loss_iou: 1.3586, stage0_loss_mask: 3.4764, stage1_loss_cls: 1.3551, stage1_pos_acc: 31.0848, stage1_loss_bbox: 0.6607, stage1_loss_iou: 1.1333, stage1_loss_mask: 2.8353, stage2_loss_cls: 1.2772, stage2_pos_acc: 35.2232, stage2_loss_bbox: 0.5610, stage2_loss_iou: 1.0170, stage2_loss_mask: 2.7561, stage3_loss_cls: 1.2661, stage3_pos_acc: 36.9457, stage3_loss_bbox: 0.5299, stage3_loss_iou: 0.9758, stage3_loss_mask: 2.7287, stage4_loss_cls: 1.2547, stage4_pos_acc: 38.4415, stage4_loss_bbox: 0.5165, stage4_loss_iou: 0.9627, stage4_loss_mask: 2.7384, stage5_loss_cls: 1.2716, stage5_pos_acc: 39.2369, stage5_loss_bbox: 0.5176, stage5_loss_iou: 0.9582, stage5_loss_mask: 2.7461, loss: 35.4867, grad_norm: 70.7494
2021-08-26 11:16:07,147 - mmdet - INFO - Epoch [1][1650/14659]	lr: 2.500e-05, eta: 8 days, 1:58:19, time: 1.294, data_time: 0.027, memory: 10260, stage0_loss_cls: 1.6747, stage0_pos_acc: 25.6228, stage0_loss_bbox: 0.9274, stage0_loss_iou: 1.3638, stage0_loss_mask: 3.3953, stage1_loss_cls: 1.3673, stage1_pos_acc: 29.8191, stage1_loss_bbox: 0.6477, stage1_loss_iou: 1.1207, stage1_loss_mask: 2.8112, stage2_loss_cls: 1.3010, stage2_pos_acc: 33.2384, stage2_loss_bbox: 0.5509, stage2_loss_iou: 1.0006, stage2_loss_mask: 2.6553, stage3_loss_cls: 1.2719, stage3_pos_acc: 36.5612, stage3_loss_bbox: 0.5095, stage3_loss_iou: 0.9532, stage3_loss_mask: 2.6669, stage4_loss_cls: 1.2616, stage4_pos_acc: 38.9800, stage4_loss_bbox: 0.5050, stage4_loss_iou: 0.9380, stage4_loss_mask: 2.6746, stage5_loss_cls: 1.2729, stage5_pos_acc: 38.0502, stage5_loss_bbox: 0.4997, stage5_loss_iou: 0.9334, stage5_loss_mask: 2.6741, loss: 34.9768, grad_norm: 71.5673
2021-08-26 11:17:11,737 - mmdet - INFO - Epoch [1][1700/14659]	lr: 2.500e-05, eta: 8 days, 1:48:00, time: 1.292, data_time: 0.028, memory: 10260, stage0_loss_cls: 1.6531, stage0_pos_acc: 24.8506, stage0_loss_bbox: 0.9111, stage0_loss_iou: 1.3768, stage0_loss_mask: 3.3537, stage1_loss_cls: 1.3289, stage1_pos_acc: 29.6029, stage1_loss_bbox: 0.6354, stage1_loss_iou: 1.1256, stage1_loss_mask: 2.7265, stage2_loss_cls: 1.2492, stage2_pos_acc: 35.0890, stage2_loss_bbox: 0.5676, stage2_loss_iou: 1.0286, stage2_loss_mask: 2.6394, stage3_loss_cls: 1.2338, stage3_pos_acc: 40.2179, stage3_loss_bbox: 0.5471, stage3_loss_iou: 0.9936, stage3_loss_mask: 2.5983, stage4_loss_cls: 1.2161, stage4_pos_acc: 41.3352, stage4_loss_bbox: 0.5345, stage4_loss_iou: 0.9823, stage4_loss_mask: 2.5976, stage5_loss_cls: 1.2298, stage5_pos_acc: 40.0981, stage5_loss_bbox: 0.5292, stage5_loss_iou: 0.9787, stage5_loss_mask: 2.6037, loss: 34.6408, grad_norm: 74.2182
2021-08-26 11:18:17,203 - mmdet - INFO - Epoch [1][1750/14659]	lr: 2.500e-05, eta: 8 days, 1:42:19, time: 1.308, data_time: 0.029, memory: 10260, stage0_loss_cls: 1.6493, stage0_pos_acc: 27.4200, stage0_loss_bbox: 0.9352, stage0_loss_iou: 1.3797, stage0_loss_mask: 3.4129, stage1_loss_cls: 1.3249, stage1_pos_acc: 33.7146, stage1_loss_bbox: 0.6363, stage1_loss_iou: 1.1272, stage1_loss_mask: 2.7844, stage2_loss_cls: 1.2501, stage2_pos_acc: 38.5490, stage2_loss_bbox: 0.5377, stage2_loss_iou: 1.0149, stage2_loss_mask: 2.6762, stage3_loss_cls: 1.2411, stage3_pos_acc: 42.5941, stage3_loss_bbox: 0.5090, stage3_loss_iou: 0.9733, stage3_loss_mask: 2.6403, stage4_loss_cls: 1.2254, stage4_pos_acc: 42.4388, stage4_loss_bbox: 0.4923, stage4_loss_iou: 0.9571, stage4_loss_mask: 2.6582, stage5_loss_cls: 1.2398, stage5_pos_acc: 42.0586, stage5_loss_bbox: 0.4944, stage5_loss_iou: 0.9531, stage5_loss_mask: 2.6798, loss: 34.7925, grad_norm: 70.7024
2021-08-26 11:19:22,128 - mmdet - INFO - Epoch [1][1800/14659]	lr: 2.500e-05, eta: 8 days, 1:34:55, time: 1.300, data_time: 0.030, memory: 10379, stage0_loss_cls: 1.6151, stage0_pos_acc: 28.5230, stage0_loss_bbox: 0.8962, stage0_loss_iou: 1.3379, stage0_loss_mask: 3.3015, stage1_loss_cls: 1.3039, stage1_pos_acc: 33.2951, stage1_loss_bbox: 0.6194, stage1_loss_iou: 1.0892, stage1_loss_mask: 2.7000, stage2_loss_cls: 1.2418, stage2_pos_acc: 38.5387, stage2_loss_bbox: 0.5380, stage2_loss_iou: 0.9809, stage2_loss_mask: 2.5582, stage3_loss_cls: 1.2145, stage3_pos_acc: 42.3104, stage3_loss_bbox: 0.5076, stage3_loss_iou: 0.9360, stage3_loss_mask: 2.5354, stage4_loss_cls: 1.1984, stage4_pos_acc: 43.3533, stage4_loss_bbox: 0.5003, stage4_loss_iou: 0.9203, stage4_loss_mask: 2.5300, stage5_loss_cls: 1.2125, stage5_pos_acc: 42.7439, stage5_loss_bbox: 0.4876, stage5_loss_iou: 0.9110, stage5_loss_mask: 2.5421, loss: 33.6781, grad_norm: 74.8392
2021-08-26 11:20:26,881 - mmdet - INFO - Epoch [1][1850/14659]	lr: 2.500e-05, eta: 8 days, 1:26:36, time: 1.295, data_time: 0.026, memory: 10379, stage0_loss_cls: 1.6402, stage0_pos_acc: 26.3269, stage0_loss_bbox: 0.9378, stage0_loss_iou: 1.3596, stage0_loss_mask: 3.3698, stage1_loss_cls: 1.3408, stage1_pos_acc: 31.4008, stage1_loss_bbox: 0.6353, stage1_loss_iou: 1.0938, stage1_loss_mask: 2.7399, stage2_loss_cls: 1.2689, stage2_pos_acc: 35.9966, stage2_loss_bbox: 0.5364, stage2_loss_iou: 0.9779, stage2_loss_mask: 2.6163, stage3_loss_cls: 1.2561, stage3_pos_acc: 37.8304, stage3_loss_bbox: 0.5033, stage3_loss_iou: 0.9412, stage3_loss_mask: 2.5846, stage4_loss_cls: 1.2425, stage4_pos_acc: 38.1376, stage4_loss_bbox: 0.5033, stage4_loss_iou: 0.9371, stage4_loss_mask: 2.6020, stage5_loss_cls: 1.2553, stage5_pos_acc: 40.5602, stage5_loss_bbox: 0.4952, stage5_loss_iou: 0.9264, stage5_loss_mask: 2.6149, loss: 34.3788, grad_norm: 69.8435
2021-08-26 11:21:31,601 - mmdet - INFO - Epoch [1][1900/14659]	lr: 2.500e-05, eta: 8 days, 1:18:42, time: 1.295, data_time: 0.029, memory: 10379, stage0_loss_cls: 1.6763, stage0_pos_acc: 23.5548, stage0_loss_bbox: 0.9613, stage0_loss_iou: 1.3327, stage0_loss_mask: 3.2624, stage1_loss_cls: 1.3659, stage1_pos_acc: 28.5197, stage1_loss_bbox: 0.6521, stage1_loss_iou: 1.0761, stage1_loss_mask: 2.6724, stage2_loss_cls: 1.3015, stage2_pos_acc: 33.7910, stage2_loss_bbox: 0.5524, stage2_loss_iou: 0.9623, stage2_loss_mask: 2.5465, stage3_loss_cls: 1.2769, stage3_pos_acc: 37.5364, stage3_loss_bbox: 0.5105, stage3_loss_iou: 0.9124, stage3_loss_mask: 2.5400, stage4_loss_cls: 1.2598, stage4_pos_acc: 38.8722, stage4_loss_bbox: 0.5056, stage4_loss_iou: 0.8961, stage4_loss_mask: 2.5365, stage5_loss_cls: 1.2736, stage5_pos_acc: 40.1570, stage5_loss_bbox: 0.4917, stage5_loss_iou: 0.8863, stage5_loss_mask: 2.5463, loss: 33.9976, grad_norm: 70.5757
2021-08-26 11:22:36,294 - mmdet - INFO - Epoch [1][1950/14659]	lr: 2.500e-05, eta: 8 days, 1:10:51, time: 1.293, data_time: 0.027, memory: 10379, stage0_loss_cls: 1.6400, stage0_pos_acc: 26.8736, stage0_loss_bbox: 0.9084, stage0_loss_iou: 1.3418, stage0_loss_mask: 3.4105, stage1_loss_cls: 1.3184, stage1_pos_acc: 31.7751, stage1_loss_bbox: 0.6120, stage1_loss_iou: 1.0903, stage1_loss_mask: 2.7483, stage2_loss_cls: 1.2402, stage2_pos_acc: 37.1692, stage2_loss_bbox: 0.5336, stage2_loss_iou: 0.9890, stage2_loss_mask: 2.6649, stage3_loss_cls: 1.2257, stage3_pos_acc: 41.1638, stage3_loss_bbox: 0.5075, stage3_loss_iou: 0.9536, stage3_loss_mask: 2.6287, stage4_loss_cls: 1.2063, stage4_pos_acc: 42.5924, stage4_loss_bbox: 0.5062, stage4_loss_iou: 0.9484, stage4_loss_mask: 2.6098, stage5_loss_cls: 1.2333, stage5_pos_acc: 43.1168, stage5_loss_bbox: 0.5024, stage5_loss_iou: 0.9465, stage5_loss_mask: 2.6365, loss: 34.4023, grad_norm: 68.9732
2021-08-26 11:23:41,363 - mmdet - INFO - Exp name: queryinst_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py
2021-08-26 11:23:41,364 - mmdet - INFO - Epoch [1][2000/14659]	lr: 2.500e-05, eta: 8 days, 1:05:07, time: 1.302, data_time: 0.033, memory: 10379, stage0_loss_cls: 1.6157, stage0_pos_acc: 28.0797, stage0_loss_bbox: 0.8930, stage0_loss_iou: 1.3461, stage0_loss_mask: 3.2833, stage1_loss_cls: 1.2790, stage1_pos_acc: 33.3812, stage1_loss_bbox: 0.5989, stage1_loss_iou: 1.0778, stage1_loss_mask: 2.6179, stage2_loss_cls: 1.1971, stage2_pos_acc: 40.0282, stage2_loss_bbox: 0.5161, stage2_loss_iou: 0.9692, stage2_loss_mask: 2.4923, stage3_loss_cls: 1.1864, stage3_pos_acc: 41.1632, stage3_loss_bbox: 0.4747, stage3_loss_iou: 0.9215, stage3_loss_mask: 2.4593, stage4_loss_cls: 1.1722, stage4_pos_acc: 44.4289, stage4_loss_bbox: 0.4639, stage4_loss_iou: 0.9089, stage4_loss_mask: 2.4508, stage5_loss_cls: 1.1835, stage5_pos_acc: 44.9918, stage5_loss_bbox: 0.4557, stage5_loss_iou: 0.8996, stage5_loss_mask: 2.4687, loss: 32.9316, grad_norm: 67.7723
2021-08-26 11:24:46,291 - mmdet - INFO - Epoch [1][2050/14659]	lr: 2.500e-05, eta: 8 days, 0:58:58, time: 1.299, data_time: 0.025, memory: 10379, stage0_loss_cls: 1.6159, stage0_pos_acc: 28.6850, stage0_loss_bbox: 0.9238, stage0_loss_iou: 1.3584, stage0_loss_mask: 3.3194, stage1_loss_cls: 1.2727, stage1_pos_acc: 34.6746, stage1_loss_bbox: 0.6231, stage1_loss_iou: 1.1023, stage1_loss_mask: 2.7153, stage2_loss_cls: 1.1760, stage2_pos_acc: 40.8597, stage2_loss_bbox: 0.5384, stage2_loss_iou: 0.9938, stage2_loss_mask: 2.5986, stage3_loss_cls: 1.1673, stage3_pos_acc: 42.9596, stage3_loss_bbox: 0.5067, stage3_loss_iou: 0.9520, stage3_loss_mask: 2.5876, stage4_loss_cls: 1.1475, stage4_pos_acc: 46.5867, stage4_loss_bbox: 0.4939, stage4_loss_iou: 0.9400, stage4_loss_mask: 2.5573, stage5_loss_cls: 1.1630, stage5_pos_acc: 44.7366, stage5_loss_bbox: 0.4903, stage5_loss_iou: 0.9357, stage5_loss_mask: 2.5647, loss: 33.7436, grad_norm: 69.3526
2021-08-26 11:25:52,366 - mmdet - INFO - Epoch [1][2100/14659]	lr: 2.500e-05, eta: 8 days, 0:57:52, time: 1.322, data_time: 0.031, memory: 10379, stage0_loss_cls: 1.5835, stage0_pos_acc: 29.9282, stage0_loss_bbox: 0.8743, stage0_loss_iou: 1.3925, stage0_loss_mask: 3.4292, stage1_loss_cls: 1.2144, stage1_pos_acc: 36.4214, stage1_loss_bbox: 0.6154, stage1_loss_iou: 1.1387, stage1_loss_mask: 2.7780, stage2_loss_cls: 1.1328, stage2_pos_acc: 42.5968, stage2_loss_bbox: 0.5381, stage2_loss_iou: 1.0282, stage2_loss_mask: 2.6629, stage3_loss_cls: 1.1280, stage3_pos_acc: 46.1657, stage3_loss_bbox: 0.5008, stage3_loss_iou: 0.9818, stage3_loss_mask: 2.6260, stage4_loss_cls: 1.1079, stage4_pos_acc: 47.8904, stage4_loss_bbox: 0.4936, stage4_loss_iou: 0.9680, stage4_loss_mask: 2.6379, stage5_loss_cls: 1.1114, stage5_pos_acc: 49.2550, stage5_loss_bbox: 0.4914, stage5_loss_iou: 0.9609, stage5_loss_mask: 2.6372, loss: 34.0325, grad_norm: 67.0531
2021-08-26 11:26:55,927 - mmdet - INFO - Epoch [1][2150/14659]	lr: 2.500e-05, eta: 8 days, 0:46:29, time: 1.271, data_time: 0.022, memory: 10379, stage0_loss_cls: 1.6465, stage0_pos_acc: 27.2861, stage0_loss_bbox: 0.9112, stage0_loss_iou: 1.3809, stage0_loss_mask: 3.2773, stage1_loss_cls: 1.2806, stage1_pos_acc: 32.4545, stage1_loss_bbox: 0.5916, stage1_loss_iou: 1.0747, stage1_loss_mask: 2.6087, stage2_loss_cls: 1.2013, stage2_pos_acc: 37.1091, stage2_loss_bbox: 0.5088, stage2_loss_iou: 0.9662, stage2_loss_mask: 2.4953, stage3_loss_cls: 1.1934, stage3_pos_acc: 40.7695, stage3_loss_bbox: 0.4824, stage3_loss_iou: 0.9193, stage3_loss_mask: 2.4814, stage4_loss_cls: 1.1760, stage4_pos_acc: 41.7712, stage4_loss_bbox: 0.4698, stage4_loss_iou: 0.9015, stage4_loss_mask: 2.4674, stage5_loss_cls: 1.1843, stage5_pos_acc: 42.3751, stage5_loss_bbox: 0.4690, stage5_loss_iou: 0.8993, stage5_loss_mask: 2.4923, loss: 33.0790, grad_norm: 72.1277
2021-08-26 11:28:00,678 - mmdet - INFO - Epoch [1][2200/14659]	lr: 2.500e-05, eta: 8 days, 0:40:20, time: 1.295, data_time: 0.028, memory: 10379, stage0_loss_cls: 1.6484, stage0_pos_acc: 26.1184, stage0_loss_bbox: 0.9199, stage0_loss_iou: 1.3046, stage0_loss_mask: 3.1031, stage1_loss_cls: 1.2997, stage1_pos_acc: 33.3425, stage1_loss_bbox: 0.6182, stage1_loss_iou: 1.0203, stage1_loss_mask: 2.4607, stage2_loss_cls: 1.2138, stage2_pos_acc: 39.4199, stage2_loss_bbox: 0.5175, stage2_loss_iou: 0.9011, stage2_loss_mask: 2.3768, stage3_loss_cls: 1.1978, stage3_pos_acc: 42.3183, stage3_loss_bbox: 0.4882, stage3_loss_iou: 0.8593, stage3_loss_mask: 2.3531, stage4_loss_cls: 1.1829, stage4_pos_acc: 43.8389, stage4_loss_bbox: 0.4770, stage4_loss_iou: 0.8437, stage4_loss_mask: 2.3511, stage5_loss_cls: 1.1914, stage5_pos_acc: 43.3682, stage5_loss_bbox: 0.4723, stage5_loss_iou: 0.8356, stage5_loss_mask: 2.3603, loss: 31.9969, grad_norm: 71.5349
2021-08-26 11:29:05,539 - mmdet - INFO - Epoch [1][2250/14659]	lr: 2.500e-05, eta: 8 days, 0:34:48, time: 1.297, data_time: 0.026, memory: 10379, stage0_loss_cls: 1.6389, stage0_pos_acc: 26.4171, stage0_loss_bbox: 0.8738, stage0_loss_iou: 1.3478, stage0_loss_mask: 3.2712, stage1_loss_cls: 1.2597, stage1_pos_acc: 36.5073, stage1_loss_bbox: 0.5825, stage1_loss_iou: 1.0741, stage1_loss_mask: 2.5992, stage2_loss_cls: 1.1765, stage2_pos_acc: 42.4178, stage2_loss_bbox: 0.5101, stage2_loss_iou: 0.9813, stage2_loss_mask: 2.5105, stage3_loss_cls: 1.1682, stage3_pos_acc: 44.9137, stage3_loss_bbox: 0.4719, stage3_loss_iou: 0.9277, stage3_loss_mask: 2.4942, stage4_loss_cls: 1.1524, stage4_pos_acc: 47.3806, stage4_loss_bbox: 0.4631, stage4_loss_iou: 0.9198, stage4_loss_mask: 2.4841, stage5_loss_cls: 1.1615, stage5_pos_acc: 47.7888, stage5_loss_bbox: 0.4551, stage5_loss_iou: 0.9104, stage5_loss_mask: 2.5115, loss: 32.9455, grad_norm: 70.3203
2021-08-26 11:30:10,491 - mmdet - INFO - Epoch [1][2300/14659]	lr: 2.500e-05, eta: 8 days, 0:29:51, time: 1.299, data_time: 0.025, memory: 10379, stage0_loss_cls: 1.6181, stage0_pos_acc: 27.4700, stage0_loss_bbox: 0.8644, stage0_loss_iou: 1.3378, stage0_loss_mask: 3.2031, stage1_loss_cls: 1.2365, stage1_pos_acc: 34.6672, stage1_loss_bbox: 0.5978, stage1_loss_iou: 1.0615, stage1_loss_mask: 2.6021, stage2_loss_cls: 1.1551, stage2_pos_acc: 41.1225, stage2_loss_bbox: 0.5164, stage2_loss_iou: 0.9623, stage2_loss_mask: 2.5070, stage3_loss_cls: 1.1470, stage3_pos_acc: 43.9128, stage3_loss_bbox: 0.4866, stage3_loss_iou: 0.9210, stage3_loss_mask: 2.4700, stage4_loss_cls: 1.1324, stage4_pos_acc: 45.1144, stage4_loss_bbox: 0.4751, stage4_loss_iou: 0.9079, stage4_loss_mask: 2.4605, stage5_loss_cls: 1.1379, stage5_pos_acc: 45.8629, stage5_loss_bbox: 0.4707, stage5_loss_iou: 0.9008, stage5_loss_mask: 2.4844, loss: 32.6565, grad_norm: 66.8825
Ground Truth Not Found!
Ground Truth Not Found!
Ground Truth Not Found!
Ground Truth Not Found!
Ground Truth Not Found!
Ground Truth Not Found!
^CTraceback (most recent call last):
  File "/mnt/home1/programs/miniconda3/envs/usd/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/mnt/home1/programs/miniconda3/envs/usd/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/mnt/home1/programs/miniconda3/envs/usd/lib/python3.8/site-packages/torch/distributed/launch.py", line 260, in <module>
    main()
  File "/mnt/home1/programs/miniconda3/envs/usd/lib/python3.8/site-packages/torch/distributed/launch.py", line 253, in main
    process.wait()
  File "/mnt/home1/programs/miniconda3/envs/usd/lib/python3.8/subprocess.py", line 1083, in wait
    return self._wait(timeout=timeout)
  File "/mnt/home1/programs/miniconda3/envs/usd/lib/python3.8/subprocess.py", line 1806, in _wait
    (pid, sts) = self._try_wait(0)
  File "/mnt/home1/programs/miniconda3/envs/usd/lib/python3.8/subprocess.py", line 1764, in _try_wait
    (pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt

Base config file does not exist

Hi,
thanks for the great work!
I tried to use the QueryInst_Swin_L_300_queries (single-scale testing) COCO model with mmdetection.
I downloaded the linked config file queryinst_swin_large_patch4_window7_fpn_300_proposals_crop_mstrain_400-1200_50e_coco.py,

but unfortunately the base config it depends on is missing, and I cannot find a download link for it.

Error:
queryinst_swin_large_patch4_window7_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py does not exist

Any chance to get this from somewhere?
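For what it's worth, mmcv resolves `_base_` entries relative to the directory of the config that references them, so the missing file is expected to sit next to the downloaded one. A minimal sketch of that resolution (the helper `resolve_base` is hypothetical, just to illustrate the lookup):

```python
import os.path as osp

def resolve_base(cfg_path, base_rel):
    """Mimic how mmcv.Config resolves a relative _base_ entry:
    it is joined against the directory of the referring config file."""
    return osp.normpath(osp.join(osp.dirname(cfg_path), base_rel))

base = resolve_base(
    'configs/queryinst/'
    'queryinst_swin_large_patch4_window7_fpn_300_proposals_crop_mstrain_400-1200_50e_coco.py',
    'queryinst_swin_large_patch4_window7_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py')
print(base)
```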

Test crashes when applying tta

I tried to test with multi-scale TTA (MultiScaleFlipAug), but the test crashed.
Are there any examples of running TTA? Thank you!
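For reference, this is the general shape of a MultiScaleFlipAug test pipeline in mmdetection configs. The scales below are illustrative, not the paper's setting; also note that query-based heads may simply not implement `aug_test` in some mmdet versions, which would produce a crash regardless of the pipeline:

```python
# Sketch of a multi-scale + flip test pipeline for mmdetection (illustrative values).
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=[(1333, 640), (1333, 800)],
        flip=True,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```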

Question about the overall pipeline of QueryTrack.

Dear author:
In the Contrastive Tracking Head, Track_Dynconv utilizes q*_{t-1}, as shown in Equation (4).
Why isn't this shown in Figure 1 (the overall pipeline of QueryTrack)? There is no arrow connecting them.
Thank you!

Bias and BN are set at the same time.

Dear author:
Why are bias and BN both enabled in the 4 convs after Mask_DynamicConv in QueryInst? During training, the system emits a warning about this.
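The warning points at a redundancy rather than a bug: a constant bias added before a batch-norm layer is cancelled by the mean subtraction, which is why `bias=False` is the usual choice for a conv followed by BN. A small numpy sketch of why the bias has no effect:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))  # 8 samples, 16 channels

def bn(z):
    # Per-channel batch normalization without learnable affine parameters.
    return (z - z.mean(axis=0)) / z.std(axis=0)

# Adding a constant bias before BN changes nothing after normalization.
print(np.allclose(bn(x), bn(x + 0.7)))
```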

test stage mAP(box)=0, mAP(seg)=0.71

Hi, I just downloaded your code and pretrained parameters and tested the model on a subset of the COCO dataset. I find that the bbox mAP is 0 even though the mask results look fine. I have checked that there are no problems with the dataset.

here is the result:

Evaluating bbox...
Loading and preparing results...
DONE (t=1.75s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=35.44s).
Accumulating evaluation results...
DONE (t=12.73s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.009
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.026
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.005
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.032
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.049
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.049
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.049
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.003
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.026
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.135

Evaluating segm...
Loading and preparing results...
UserWarning: The key "bbox" is deleted for more accurate mask AP of small/medium/large instances since v2.12.0. This does not change the overall mAP calculation.
warnings.warn(
DONE (t=4.94s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type segm
DONE (t=40.55s).
Accumulating evaluation results...
DeprecationWarning: np.float is a deprecated alias for the builtin float. To silence this warning, use float by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.float64 here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float)
DONE (t=13.33s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.465
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.716
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.502
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.304
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.513
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.694
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.619
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.620
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.620
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.480
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.672
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.802
OrderedDict([('bbox_mAP', 0.009), ('bbox_mAP_50', 0.026), ('bbox_mAP_75', 0.005), ('bbox_mAP_s', 0.0), ('bbox_mAP_m', 0.002), ('bbox_mAP_l', 0.032), ('bbox_mAP_copypaste', '0.009 0.026 0.005 0.000 0.002 0.032'), ('segm_mAP', 0.465), ('segm_mAP_50', 0.716), ('segm_mAP_75', 0.502), ('segm_mAP_s', 0.304), ('segm_mAP_m', 0.513), ('segm_mAP_l', 0.694), ('segm_mAP_copypaste', '0.465 0.716 0.502 0.304 0.513 0.694')])
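One possibility worth ruling out (my speculation, not something the authors have confirmed): bbox AP collapsing to ~0 while mask AP stays normal can happen if the bbox and segm halves of the result tuple get swapped or malformed when dumping results. In mmdet-style testing, each image yields `(bbox_results, segm_results)`, where `bbox_results` is a per-class list of `(N, 5)` arrays holding `x1, y1, x2, y2, score`. A minimal structural sanity check (`check_result_format` is a hypothetical helper, not part of the repo):

```python
import numpy as np

def check_result_format(result, num_classes=80):
    """Sanity-check one image's detection result in mmdet format.

    Expects a (bbox_results, segm_results) tuple where bbox_results is a
    list of num_classes arrays shaped (N, 5): x1, y1, x2, y2, score.
    Returns True if the bbox half looks well-formed.
    """
    bbox_results, segm_results = result
    if len(bbox_results) != num_classes:
        return False
    for per_cls in bbox_results:
        arr = np.asarray(per_cls)
        # Empty classes are fine; non-empty arrays must be (N, 5).
        if arr.size and (arr.ndim != 2 or arr.shape[1] != 5):
            return False
    return True

# A tiny fake result: one class with one detection, 79 empty classes.
fake_bboxes = [np.array([[10., 20., 50., 60., 0.9]])] + \
              [np.zeros((0, 5)) for _ in range(79)]
fake_segms = [[] for _ in range(80)]
print(check_result_format((fake_bboxes, fake_segms)))  # True
```

If this check fails on your dumped results, the evaluation will silently score garbage boxes, which would explain bbox mAP near zero despite good masks.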

The weight of match_loss in QueryTrack.

Dear authors,
In QueryInst the box head loss weights are 2, 5, and 2, and the mask head loss weight is 8. Could you tell me the loss weights for the box head, mask head, and track head in QueryTrack? Sincere thanks ^-^

How to get QueryInst to train on an external dataset

Hi,

I am trying to train QueryInst with a Swin backbone on my own medical imaging temporal dataset which contains 3 classes - background + 2 anatomical landmarks. The reason being I would then like to initialise a TeViT model with Swin-QueryInst weights to help the temporal model.

To initialise the Swin backbone in the QueryInst training procedure, I pretrained a segmentation model using Swin + some output segmentation layers then initialise the backbone with the pretrained weights. I set the learning rate of the backbone to be 0.1 * lr of the ROI head + Bbox heads. Additionally - I use a StepLR schedulers and similar AdamW parameters to those published in the paper. Moreover, I used gradient clipping with similar values (norm=1, type=2)

However, the performance of the QueryInst model is really poor, and the segmentation performance, measured by per-class IoU, degrades significantly from the baseline I trained.

The mAP and mAP_0.5 on the training set seem to converge nicely. However, the model fails to learn a robust function for the instance masks.

Do you have any suggestions?

Which learning rate is correct?

Hi,

I noticed that the latest configuration file in the official mmdetection release differs from the one here.
Which one worked better in your experiments?

here:

optimizer = dict(_delete_=True, type='AdamW', lr=0.000025, weight_decay=0.0001)

or official released version from mmdetection:
https://github.com/open-mmlab/mmdetection/blob/master/configs/queryinst/queryinst_r50_fpn_1x_coco.py#L127

looking for your reply!
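Not an authoritative answer, but one common reason two releases of the same config carry different AdamW learning rates is a different assumed total batch size: learning rates in this family of models are usually scaled linearly with total batch size. A quick sketch of the linear scaling rule (the 8 GPUs × 2 images baseline is an assumption; verify it against each config's `data` section):

```python
def scale_lr(base_lr, base_batch_size, batch_size):
    """Linear scaling rule: lr grows proportionally with total batch size.

    base_lr / base_batch_size are whatever the reference config was tuned
    for (assumed here: 2.5e-5 at 8 GPUs x 2 imgs); batch_size is yours.
    """
    return base_lr * batch_size / base_batch_size

# Same batch size -> same lr; 4x the batch size -> 4x the lr.
print(scale_lr(2.5e-5, 16, 16))  # 2.5e-05
print(scale_lr(2.5e-5, 16, 64))  # 1e-04
```

So before comparing the two numbers directly, check whether the two configs assume the same `samples_per_gpu` × GPU count.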

DynamicMaskHead of MMDistributedDataParallel does not matches the length of `CLASSES` in CocoDataset

While training a custom dataset having 2 coco classes with the config configs/queryinst/queryinst_swin_large_patch4_window7_fpn_300_proposals_crop_mstrain_400-1200_50e_coco.py

the model shows an error like this -
Traceback (most recent call last):
  File "tools/train.py", line 188, in <module>
    main()
  File "tools/train.py", line 184, in main
    meta=meta)
  File "/content/QueryInst/mmdet/apis/train.py", line 193, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/usr/local/lib/python3.7/dist-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/mmcv/runner/epoch_based_runner.py", line 45, in train
    self.call_hook('before_train_epoch')
  File "/usr/local/lib/python3.7/dist-packages/mmcv/runner/base_runner.py", line 307, in call_hook
    getattr(hook, fn_name)(self)
  File "/content/QueryInst/mmdet/datasets/utils.py", line 155, in before_train_epoch
    self._check_head(runner)
  File "/content/QueryInst/mmdet/datasets/utils.py", line 142, in _check_head
    (f'The `num_classes` ({module.num_classes}) in '
AssertionError: The `num_classes` (80) in DynamicMaskHead of MMDistributedDataParallel does not matches the length of `CLASSES` (2) in CocoDataset

The environment is google colab with Tesla-K80 GPU enabled at GPU:0

I've updated the classes in /content/QueryInst/mmdet/datasets/coco.py , /content/QueryInst/mmdet/core/evaluation/class_names.py and also in the base files of the corresponding config file.
Please help !!
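QueryInst has six cascaded stages, each with its own bbox head and mask head, so `num_classes` must be overridden in every stage, not just once. A hedged sketch of the relevant config fragment (written as plain dicts so it runs standalone; in a real config these sit under `model.roi_head`, and the other head parameters from the base config are omitted here):

```python
num_stages = 6
num_classes = 2  # your custom dataset

# Each of the 6 stages carries its own bbox_head and mask_head,
# so num_classes must be set in every one of them.
roi_head = dict(
    type='QueryRoIHead',
    num_stages=num_stages,
    bbox_head=[
        dict(type='DIIHead', num_classes=num_classes)
        for _ in range(num_stages)
    ],
    mask_head=[
        dict(type='DynamicMaskHead', num_classes=num_classes)
        for _ in range(num_stages)
    ],
)
print(len(roi_head['bbox_head']), len(roi_head['mask_head']))  # 6 6
```

Also make sure the dataset's `classes` tuple (length 2 here) is passed to every `data.train/val/test` entry; otherwise the `_check_head` assertion in `mmdet/datasets/utils.py` will still fire.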

Multi-GPU training gets stuck after "Groudtruth Not Founded!"

I'm training on my own dataset and got the log message `Groudtruth Not Founded!`. It does not look like a code bug, but the training process just gets stuck there with no further progress.

Groudtruth Not Founded!
Groudtruth Not Founded!
Groudtruth Not Founded!
Groudtruth Not Founded!
Groudtruth Not Founded!

There is no further log output after that.

I tried the same setting on a single GPU; I still got the same message, but training kept running and seemed to work fine:

Groudtruth Not Founded!
Groudtruth Not Founded!
Groudtruth Not Founded!
Groudtruth Not Founded!
Groudtruth Not Founded!
Groudtruth Not Founded!
Groudtruth Not Founded!
Groudtruth Not Founded!
Groudtruth Not Founded!
2021-06-01 11:03:43,099 - mmdet - INFO - Epoch [1][300/72815]   lr: 7.493e-06, eta: 58 days, 7:54:12, time: 1.307, data_time: 0.009, memory: 19179, stage0_loss_cls: 1.3532, stage0_pos_acc: 94.0000, stage0_loss_bbox: 0.7737, stage0_loss_iou: 0.9962, stage0_loss_mask: 4.7144, stage1_loss_cls: 1.5701, stage1_pos_acc: 94.0000, stage1_loss_bbox: 0.7169, stage1_loss_iou: 0.9065, stage1_loss_mask: 4.1660, stage2_loss_cls: 1.2717, stage2_pos_acc: 94.0000, stage2_loss_bbox: 0.6249, stage2_loss_iou: 0.8356, stage2_loss_mask: 3.8767, stage3_loss_cls: 1.3196, stage3_pos_acc: 94.0000, stage3_loss_bbox: 0.6106, stage3_loss_iou: 0.8257, stage3_loss_mask: 4.3968, stage4_loss_cls: 1.2100, stage4_pos_acc: 94.0000, stage4_loss_bbox: 0.5954, stage4_loss_iou: 0.7888, stage4_loss_mask: 4.1171, stage5_loss_cls: 1.2037, stage5_pos_acc: 94.0000, stage5_loss_bbox: 0.6028, stage5_loss_iou: 0.7866, stage5_loss_mask: 4.1907, loss: 42.4537

Learned Proposal Boxes?

I took a look at `self.init_proposal_bboxes.weight` from your trained model, but I found the box coordinates were not learned and stayed around their initial values of (0.5, 0.5, 1, 1). Is this a problem? Thanks

Instance Segmentation using CPU fails on certain images when Swin Transformer backbone was used

The error I encountered when running inference with the Swin Transformer backbone on CPU was: ERROR - upper bound and larger bound inconsistent with step sign

The error disappears when inference is performed on GPU.

After some investigation, I found that when running on CPU, the bbox (batch size 1) passed to the function _do_paste_mask in mmdet/models/roi_heads/mask_heads/fcn_mask_head.py has negative coordinates, causing it to fail.
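A possible CPU-side workaround (an assumption on my part, not a confirmed fix) is to clamp the box coordinates to the image bounds before they reach `_do_paste_mask`, so it never sees negative or out-of-range values. `clamp_boxes` below is a hypothetical helper, sketched with NumPy for clarity; the real tensors are torch, where `Tensor.clamp` does the same job:

```python
import numpy as np

def clamp_boxes(boxes, img_h, img_w):
    """Clamp (N, 4) x1,y1,x2,y2 boxes into [0, W] x [0, H] so that
    downstream mask-pasting code never sees negative coordinates."""
    boxes = np.asarray(boxes, dtype=float).copy()
    boxes[:, 0::2] = np.clip(boxes[:, 0::2], 0, img_w)  # x1, x2
    boxes[:, 1::2] = np.clip(boxes[:, 1::2], 0, img_h)  # y1, y2
    return boxes

b = [[-5.0, -3.0, 40.0, 900.0]]
# x clamped into [0, 800], y into [0, 600]:
print(clamp_boxes(b, img_h=600, img_w=800))
```

Whether clamping (versus fixing the upstream box decoding) is the right place to intervene is a design judgment; this only suppresses the symptom on CPU.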

The parameter of the tracking loss

Thank you for your nice work!

Since the training code of QueryTrack is not released, I hope you can share the following training details with me:

  1. What are the meanings of $\alpha_t$ and $\gamma$ in formula (7) (the definition of the tracking loss)?
  2. Is there any documentation about your contrastive focal loss?

Thanks again.

"GroundTruth not found" error

For the crop augmentation, since the following negative-crop setting is allowed, has anyone else hit the "GroundTruth not found" error?
'allow_negative_crop': True

How to use this with custom Dataset

Hi,
first of all, thanks for publishing this. This may be a very basic question.
In your README you provide the command:
python tools/train.py configs/queryinst/queryinst_r50_fpn_1x_coco.py

1) I opened it but was not able to understand how it starts training.
2) How can I use the model for fine-tuning on my custom dataset?

QueryInst is not in the models registry

Hi,
When I try running a demo with the pre-trained weights of queryinst_r50_300 and the config "queryinst_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py", it reports "QueryInst is not in the models registry". I've checked __init__.py in mmdet/models/detectors, and it seems fine. I get the same error when running test.
Any advice on how I can run an image demo of QueryInst?
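A common cause (an assumption, since the environment is not shown) is that Python is importing an official pip-installed mmdet rather than the mmdet package shipped inside this repo, so the repo-only classes are never registered. Two things to try, sketched below; the module paths in `custom_imports` are my guesses at this repo's layout and should be checked against the actual file names:

```python
# 1) Verify which mmdet Python is actually importing. It should resolve
#    to the mmdet/ directory inside the cloned QueryInst repo:
#
#        import mmdet; print(mmdet.__file__)
#
#    If it points into site-packages, install the repo's package with
#    `pip install -v -e .` from the repo root.
#
# 2) Alternatively, force registration from the config file via mmcv's
#    custom-imports mechanism (module paths below are assumptions):
custom_imports = dict(
    imports=['mmdet.models.detectors.queryinst'],
    allow_failed_imports=False,
)
print(custom_imports['imports'][0])
```

Copying single files between installations, as tried in the issue below, tends not to help because registration happens at package import time, not at file level.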

Learning rate shows an abnormal changing trend and AP =0 while training queryinst model with my own dataset

Hi,

Thanks for your work!

Recently, I ran into two issues while training the 'queryinst_r50_fpn_1x_coco' model on my own dataset (samples_per_gpu=2, workers_per_gpu=2, optimizer = dict(type='AdamW', lr=2.5e-05, weight_decay=0.0001), lr_config = dict(policy='step', step=[27, 33]), i.e. the default settings).

I noticed that the learning rate shows an unusual trend:

  1. It starts by INCREASING during the first epoch (with all values greater than the configured lr of 2.5e-5).
  2. It then decreases to, and stays at, the configured value (2.5e-5) for the next few epochs.
  3. It decreases again and settles at a value such as 2.5e-07.

This pattern looks abnormal compared to conventional lr schedules, in which the lr usually stays constant or keeps decreasing during training.

The second issue is that all AP and AR values stay at zero throughout training. I attached the training log here for review.
20210819_092034.log

Could you help me with this? Thanks a lot!
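For what it's worth, the described lr curve (rising early, plateauing at 2.5e-5, then dropping to values like 2.5e-7) is exactly the signature of linear warmup followed by step decay with step=[27, 33], so the schedule itself may well be normal; the AP=0 problem is likely separate (worth checking `num_classes` and the annotations). A small simulation of mmcv's step policy with linear warmup, under assumed defaults (warmup_iters=1000, warmup_ratio=0.001, gamma=0.1; check your own `lr_config`):

```python
def lr_at(epoch, iter_in_epoch, iters_per_epoch,
          base_lr=2.5e-5, steps=(27, 33), gamma=0.1,
          warmup_iters=1000, warmup_ratio=0.001):
    """Reproduce mmcv-style step lr schedule with linear warmup.

    All defaults here are assumptions; check lr_config in your config.
    """
    it = epoch * iters_per_epoch + iter_in_epoch
    # Step decay by epoch first ...
    lr = base_lr
    for s in steps:
        if epoch >= s:
            lr *= gamma
    # ... then linear warmup overrides the first warmup_iters iterations.
    if it < warmup_iters:
        k = (1 - it / warmup_iters) * (1 - warmup_ratio)
        lr = lr * (1 - k)
    return lr

# Rising during warmup, flat at base lr, then stepped drops:
print(lr_at(0, 0, 500))   # 2.5e-08 (warmup start)
print(lr_at(2, 0, 500))   # 2.5e-05 (base lr)
print(lr_at(30, 0, 500))  # 2.5e-06 (after epoch 27)
print(lr_at(35, 0, 500))  # 2.5e-07 (after epoch 33)
```

If the logged values during warmup really are above 2.5e-5 rather than below it, that would not match this schedule and would be worth a closer look at the log.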

Details about QueryTrack

Thank you for your nice work!

Since the training code of QueryTrack is not released, I hope you can share the following training details with me:

  1. How is the instance embedding extracted from the reference frame? Is the process the same as that in the target frame? In the target frame, the corresponding embeddings of the ground-truth are extracted through the DynamicConv and HungarianAssigner.
  2. Besides the instance segmentation loss of the target frame, do you also calculate it for the reference frame like CrossVIS?
  3. During inference, the matching factor is changed. Therefore, I wonder if the association process is changed compared to MaskTrackRCNN?

Thanks again.

MMCV version

I used mmcv 1.3.3 and 1.4, and both raise an assertion: ca_forward missing in module _ext.
So, which version of mmcv did you use with test.py?
Or could there be some other reason?

QueryInst with SwinT backbone

Dear authors,

I have found on the repo a config of QueryInst with SwinTiny backbone.
However, I see no results or checkpoints for this QueryInst variant.
Did you try running it? If so, do you have the results and a checkpoint saved?

Code on Youtube-VIS

Hi there authors,

Thank you for your great open-source work! May I know when you are going to release the code for the YouTube-VIS dataset? And, by the way, what image resolution are you using for YouTube-VIS?

Thanks and look forward to your reply!

KeyError: "QueryInst: 'QueryRoIHead is not in the models registry'"

Hi,

I have this error coming up when I try to train QueryInst.
I see that the models registry in the installed mmdet toolbox does not contain QueryRoIHead, but it is present in the registry of the mmdet package inside this cloned QueryInst repo.

I tried simply copying the file into the required registry, but it did not help.

Could you please help me in fixing this?

gpu

Can the model be trained on a single V100?

Training code for TrackQuery

Hi,

Thank you for your great work. I have noticed that the repo contains the description about TrackQuery.
Could you please release the code for TrackQuery at your convenience?
Thank you!

Best,
Fan
