
vfa's Introduction

VFA

Few-Shot Object Detection via Variational Feature Aggregation (AAAI 2023)
Jiaming Han, Yuqiang Ren, Jian Ding, Ke Yan, Gui-Song Xia.
arXiv preprint.

Our code is based on mmfewshot.

Setup

  • Installation

Here is a from-scratch setup script:

conda create -n vfa python=3.8 -y
conda activate vfa

conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=11.0 -c pytorch

pip install openmim
mim install mmcv-full==1.3.12

# install mmclassification and mmdetection
mim install mmcls==0.15.0
mim install mmdet==2.16.0

# install mmfewshot
mim install mmfewshot==0.1.0

# install VFA
python setup.py develop
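
As a quick post-install sanity check, a minimal sketch (the printed versions should match the pins above):

# Verify that the pinned packages import and report the expected versions.
import mmcv, mmcls, mmdet, mmfewshot
print(mmcv.__version__, mmcls.__version__, mmdet.__version__, mmfewshot.__version__)
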
  • Prepare Datasets

Please refer to mmfewshot's detection data preparation.
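
As a rough check that the data is in place, a minimal sketch assuming mmfewshot's default layout (the paths data/VOCdevkit, data/coco and data/few_shot_ann are assumptions; see mmfewshot's documentation for the authoritative structure):

# Hedged sketch: check for the directories mmfewshot's detection configs
# typically expect under the project root; adjust paths to your setup.
from pathlib import Path

for d in ('data/VOCdevkit', 'data/coco', 'data/few_shot_ann'):
    print(d, 'ok' if Path(d).is_dir() else 'missing')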

Model Zoo

All pretrained models can be found in the GitHub release.

Results on PASCAL VOC dataset

  • Base Training
Split | Base AP50 | config | ckpt
1     | 78.6      | config | ckpt
2     | 79.5      | config | ckpt
3     | 79.8      | config | ckpt
  • Few Shot Fine-tuning
Split | Shot | nAP50 | config | ckpt
1     | 1    | 57.5  | config | ckpt
1     | 2    | 65.0  | config | ckpt
1     | 3    | 64.3  | config | ckpt
1     | 5    | 67.1  | config | ckpt
1     | 10   | 67.4  | config | ckpt
2     | 1    | 40.8  | config | ckpt
2     | 2    | 45.9  | config | ckpt
2     | 3    | 51.1  | config | ckpt
2     | 5    | 51.8  | config | ckpt
2     | 10   | 51.8  | config | ckpt
3     | 1    | 49.0  | config | ckpt
3     | 2    | 54.9  | config | ckpt
3     | 3    | 56.6  | config | ckpt
3     | 5    | 59.0  | config | ckpt
3     | 10   | 58.5  | config | ckpt

Results on COCO dataset

  • Base Training
Base mAP | config | ckpt
36.0     | config | ckpt
  • Few Shot Fine-tuning
Shot | nAP  | config | ckpt
10   | 16.8 | config | ckpt
30   | 19.5 | config | ckpt

Train and Test

  • Testing
# single-GPU test (use --eval mAP for VOC, --eval bbox for COCO)
python test.py ${CONFIG} ${CHECKPOINT} --eval mAP|bbox

# multi-GPU test
bash dist_test.sh ${CONFIG} ${CHECKPOINT} ${NUM_GPU} --eval mAP|bbox

For example:

  • To test VFA on VOC split1 1-shot with a single GPU, run:
python test.py configs/vfa/voc/vfa_split1/vfa_r101_c4_8xb4_voc-split1_1shot-fine-tuning.py \
work_dirs/vfa_r101_c4_8xb4_voc-split1_1shot-fine-tuning/iter_400.pth \
--eval mAP
  • To test VFA on COCO 10-shot with 8 GPUs, run:
bash dist_test.sh configs/vfa/coco/vfa_r101_c4_8xb4_coco_10shot-fine-tuning.py \
work_dirs/vfa_r101_c4_8xb4_coco_10shot-fine-tuning/iter_10000.pth \
8 --eval bbox
  • Training
# single-GPU training
python train.py ${CONFIG}

# multi-GPU training
bash dist_train.sh ${CONFIG} ${NUM_GPU}

For example, to train VFA on VOC:

# Stage I: base training (run once per split).
for split in 1 2 3; do
    bash dist_train.sh configs/vfa/voc/vfa_split${split}/vfa_r101_c4_8xb4_voc-split${split}_base-training.py 8
done

# Stage II: few-shot fine-tuning on all splits and shots.
voc_config_dir=configs/vfa/voc/
for split in 1 2 3; do
    for shot in 1 2 3 5 10; do
        config_path=${voc_config_dir}/vfa_split${split}/vfa_r101_c4_8xb4_voc-split${split}_${shot}shot-fine-tuning.py
        echo $config_path
        bash dist_train.sh $config_path 8
    done
done

Note: All our configs and models are trained with 8 GPUs. If you use fewer or more GPUs, adjust the learning rate or batch size accordingly (e.g., via the linear scaling rule sketched below).
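
For example, a minimal sketch of the linear scaling rule (the baseline of 8 GPUs x 4 images per GPU with lr 0.02 matches the released config names and training logs; the other optimizer fields are illustrative):

# Hedged sketch: scale the learning rate linearly with the effective batch size.
base_lr, base_batch = 0.02, 8 * 4                          # released setup
num_gpus, samples_per_gpu = 1, 4                           # your setup
lr = base_lr * (num_gpus * samples_per_gpu) / base_batch   # -> 0.0025 here
optimizer = dict(type='SGD', lr=lr, momentum=0.9, weight_decay=0.0001)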

Citation

If you find our work useful for your research, please consider citing:

@InProceedings{han2023vfa,
    title     = {Few-Shot Object Detection via Variational Feature Aggregation},
    author    = {Han, Jiaming and Ren, Yuqiang and Ding, Jian and Yan, Ke and Xia, Gui-Song},
    booktitle = {Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI-23)},
    year      = {2023}
}


vfa's Issues

about Meta R-CNN++

Hello. Thanks for your great work!
In your paper Meta R-CNN++ was presented as a stronger baseline. But it seems that Meta R-CNN++ is similar to the Meta R-CNN implementation in mmfewshot. I would like to ask whether they are the same.

Error during training

Environment: installed as configured in the README
System: Linux
Training command: python train.py configs/vfa/voc/vfa_split1/vfa_r101_c4_8xb4_voc-split1_base-training.py
Error: When running base training on the VOC dataset, training starts normally and the checkpoint at iteration 3000 is saved successfully, but the subsequent inference on the validation set fails with the following error:
[ ] 0/4952, elapsed: 0s, ETA:
Traceback (most recent call last):
  File "train.py", line 252, in <module>
    main()
  File "train.py", line 241, in main
    train_detector(
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmfewshot/detection/apis/train.py", line 197, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmcv/runner/iter_based_runner.py", line 133, in run
    iter_runner(iter_loaders[i], **kwargs)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmcv/runner/iter_based_runner.py", line 66, in train
    self.call_hook('after_train_iter')
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
    getattr(hook, fn_name)(self)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmcv/runner/hooks/evaluation.py", line 232, in after_train_iter
    self._do_evaluate(runner)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmfewshot/detection/core/evaluation/eval_hooks.py", line 47, in _do_evaluate
    results = single_gpu_test(runner.model, self.dataloader, show=False)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmfewshot/detection/apis/test.py", line 45, in single_gpu_test
    result = model(mode='test', rescale=True, **data)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py", line 42, in forward
    return super().forward(*inputs, **kwargs)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 159, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmfewshot/detection/models/detectors/query_support_detector.py", line 173, in forward
    return self.forward_test(img, img_metas, **kwargs)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmdet/models/detectors/base.py", line 147, in forward_test
    return self.simple_test(imgs[0], img_metas[0], **kwargs)
  File "/home/hdhcy/opt/vfa-main-new/vfa/vfa_detector.py", line 100, in simple_test
    bbox_results = super().simple_test(img, img_metas, proposals, rescale)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmfewshot/detection/models/detectors/meta_rcnn.py", line 176, in simple_test
    return self.roi_head.simple_test(
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmfewshot/detection/models/roi_heads/meta_rcnn_roi_head.py", line 272, in simple_test
    det_bboxes, det_labels = self.simple_test_bboxes(
  File "/home/hdhcy/opt/vfa-main-new/vfa/vfa_roi_head.py", line 281, in simple_test_bboxes
    det_bbox, det_label = self.bbox_head.get_bboxes(
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 186, in new_func
    return old_func(*args, **kwargs)
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmdet/models/roi_heads/bbox_heads/bbox_head.py", line 369, in get_bboxes
    det_bboxes, det_labels = multiclass_nms(bboxes, scores,
  File "/home/hdhcy/anaconda3/envs/vfa/lib/python3.8/site-packages/mmdet/core/post_processing/bbox_nms.py", line 38, in multiclass_nms
    bboxes = multi_bboxes.view(multi_scores.size(0), -1, 4)
RuntimeError: cannot reshape tensor of 0 elements into shape [0, -1, 4] because the unspecified dimension size -1 can be any value and is ambiguous

Has the author encountered this problem before? Please advise, thank you.

Evaluation error during fine-tuning

When I ran base training on my own dataset (which runs fine with Meta R-CNN), training and evaluation worked without any problems. In the fine-tuning stage the model can be trained and checkpoints can be saved, but evaluation fails; switching to a different environment did not help.

2023-06-01 16:46:07,131 - mmdet - INFO - starting model initialization...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 30/30, 7.4 task/s, elapsed: 4s, ETA:     0s
2023-06-01 16:46:11,222 - mmdet - INFO - model initialization done.
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 139/139, 10.9 task/s, elapsed: 13s, ETA:     0s
---------------iou_thr: 0.5---------------
Traceback (most recent call last):
  File "train.py", line 252, in <module>
    main()
  File "train.py", line 248, in main
    meta=meta)
  File "/root/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmfewshot-0.1.0-py3.7.egg/mmfewshot/detection/apis/train.py", line 206, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/root/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 134, in run
    iter_runner(iter_loaders[i], **kwargs)
  File "/root/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/iter_based_runner.py", line 67, in train
    self.call_hook('after_train_iter')
  File "/root/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
    getattr(hook, fn_name)(self)
  File "/root/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/hooks/evaluation.py", line 262, in after_train_iter
    self._do_evaluate(runner)
  File "/root/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmfewshot-0.1.0-py3.7.egg/mmfewshot/detection/core/evaluation/eval_hooks.py", line 50, in _do_evaluate
    key_score = self.evaluate(runner, results)
  File "/root/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmcv/runner/hooks/evaluation.py", line 362, in evaluate
    results, logger=runner.logger, **self.eval_kwargs)
  File "/root/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmfewshot-0.1.0-py3.7.egg/mmfewshot/detection/datasets/voc.py", line 470, in evaluate
    use_legacy_coordinate=True)
  File "/root/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmfewshot-0.1.0-py3.7.egg/mmfewshot/detection/core/evaluation/mean_ap.py", line 153, in eval_map
    mean_ap, eval_results, classes, area_ranges, logger=logger)
  File "/root/miniconda3/envs/openmmlab/lib/python3.7/site-packages/mmdet/core/evaluation/mean_ap.py", line 502, in print_map_summary
    label_names[j], num_gts[i, j], results[j]['num_dets'],
IndexError: tuple index out of range

ZeroDivisionError: integer division or modulo by zero

Hello dear author, I ran into this problem when running few-shot training on a single GPU. I look forward to your reply. Thank you.

Traceback (most recent call last):
  File "train.py", line 255, in <module>
    main()
  File "train.py", line 225, in main
    build_dataset(
  File "/home/tcs/anaconda3/envs/mmdet/lib/python3.8/site-packages/mmfewshot/detection/datasets/builder.py", line 95, in build_dataset
    dataset = NWayKShotDataset(
  File "/home/tcs/anaconda3/envs/mmdet/lib/python3.8/site-packages/mmfewshot/detection/datasets/dataset_wrappers.py", line 332, in __init__
    self.prepare_support_shots()
  File "/home/tcs/anaconda3/envs/mmdet/lib/python3.8/site-packages/mmfewshot/detection/datasets/dataset_wrappers.py", line 388, in prepare_support_shots
    self.num_support_shots // num_gts + 1)
ZeroDivisionError: integer division or modulo by zero

Problem in base_training

Thank you for your work, but when I ran python train.py configs/vfa/voc/vfa_split1/vfa_r101_c4_8xb4_voc-split1_base-training.py, the code reported an error. Could you help me find the problem?
Here is the error:

2023-04-23 20:57:54,654 - mmfewshot - INFO - Exp name: vfa_r101_c4_8xb4_voc-split1_base-training.py
2023-04-23 20:57:54,654 - mmfewshot - INFO - Iter [3000/18000] lr: 2.000e-02, eta: 1:13:18, time: 0.344, data_time: 0.015, memory: 12341, loss_rpn_cls: nan, loss_rpn_bbox: nan, loss_cls: nan, loss_bbox: nan, acc: 5.6250, loss_meta_cls: nan, meta_acc: 6.6667, loss_vae: nan, loss: nan
2023-04-23 20:57:54,656 - mmdet - INFO - starting model initialization...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3000/3000, 300.0 task/s, elapsed: 10s, ETA: 0s
2023-04-23 20:58:04,709 - mmdet - INFO - model initialization done.
[ ] 0/4952, elapsed: 0s, ETA:
Traceback (most recent call last):
  File "train.py", line 252, in <module>
    main()
  File "train.py", line 241, in main
    train_detector(
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmfewshot/detection/apis/train.py", line 197, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmcv/runner/iter_based_runner.py", line 133, in run
    iter_runner(iter_loaders[i], **kwargs)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmcv/runner/iter_based_runner.py", line 66, in train
    self.call_hook('after_train_iter')
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
    getattr(hook, fn_name)(self)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmcv/runner/hooks/evaluation.py", line 232, in after_train_iter
    self._do_evaluate(runner)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmfewshot/detection/core/evaluation/eval_hooks.py", line 47, in _do_evaluate
    results = single_gpu_test(runner.model, self.dataloader, show=False)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmfewshot/detection/apis/test.py", line 45, in single_gpu_test
    result = model(mode='test', rescale=True, **data)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py", line 42, in forward
    return super().forward(*inputs, **kwargs)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 159, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmfewshot/detection/models/detectors/query_support_detector.py", line 173, in forward
    return self.forward_test(img, img_metas, **kwargs)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmdet/models/detectors/base.py", line 147, in forward_test
    return self.simple_test(imgs[0], img_metas[0], **kwargs)
  File "/media/hdd0/xiaopeng/zyn/VFA-source/VFA-1.0.0/vfa/vfa_detector.py", line 100, in simple_test
    bbox_results = super().simple_test(img, img_metas, proposals, rescale)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmfewshot/detection/models/detectors/meta_rcnn.py", line 176, in simple_test
    return self.roi_head.simple_test(
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmfewshot/detection/models/roi_heads/meta_rcnn_roi_head.py", line 272, in simple_test
    det_bboxes, det_labels = self.simple_test_bboxes(
  File "/media/hdd0/xiaopeng/zyn/VFA-source/VFA-1.0.0/vfa/vfa_roi_head.py", line 281, in simple_test_bboxes
    det_bbox, det_label = self.bbox_head.get_bboxes(
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 186, in new_func
    return old_func(*args, **kwargs)
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmdet/models/roi_heads/bbox_heads/bbox_head.py", line 369, in get_bboxes
    det_bboxes, det_labels = multiclass_nms(bboxes, scores,
  File "/home/xiaopeng/anaconda3/envs/zyn-vfa/lib/python3.8/site-packages/mmdet/core/post_processing/bbox_nms.py", line 38, in multiclass_nms
    bboxes = multi_bboxes.view(multi_scores.size(0), -1, 4)
RuntimeError: cannot reshape tensor of 0 elements into shape [0, -1, 4] because the unspecified dimension size -1 can be any value and is ambiguous

the experimental results did not meet the benchmarks reported in the paper

Thank you for your contribution to this amazing work.
When I used the provided code for training, the mAP of base training was 74.1 and the novel AP of 1-shot fine-tuning was only 30, which does not match the benchmarks reported in the paper.
I did not modify any configuration; I only changed warmup_iters to 500.
Has anyone encountered this situation? Thank you for any responses or solutions.

PCB class and TestMixins class

Thank you for your contribution. May I ask what the PCB class and the TestMixins class in the utils.py file do?

# Imports needed to run this snippet stand-alone (COCO_SPLIT is defined in
# mmfewshot's detection dataset module).
import numpy as np
import torch
import torch.nn.functional as F
import torchvision.transforms as trans
from PIL import Image


class PCB:
    def __init__(self, class_names, model="RN101", templates="a photo of a {}"):
        super().__init__()
        self.device = torch.cuda.current_device()

        # image transforms
        self.expand_ratio = 0.1
        self.trans = trans.Compose([
            trans.Resize([224, 224], interpolation=3),
            trans.ToTensor(),
            trans.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
        
        # CLIP configs
        import clip
        self.class_names = class_names
        self.clip, _ = clip.load(model, device=self.device)
        self.prompts = clip.tokenize([
            templates.format(cls_name) 
            for cls_name in self.class_names
        ]).to(self.device)
        with torch.no_grad():
            text_features = self.clip.encode_text(self.prompts)
            self.text_features = F.normalize(text_features, dim=-1, p=2)
        

    def load_image_by_box(self, img_path, boxes):
        image = Image.open(img_path).convert("RGB")
        image_list = []
        for box in boxes:
            x1, y1, x2, y2 = box
            h, w = y2-y1, x2-x1
            x1 = max(0, x1 - w*self.expand_ratio)
            y1 = max(0, y1 - h*self.expand_ratio)
            x2 = x2 + w*self.expand_ratio
            y2 = y2 + h*self.expand_ratio
            sub_image = image.crop((int(x1), int(y1), int(x2), int(y2))) 
            sub_image = self.trans(sub_image).to(self.device)
            image_list.append(sub_image)
        return torch.stack(image_list)
        
    @torch.no_grad()
    def __call__(self, img_path, boxes):
        images = self.load_image_by_box(img_path, boxes)

        image_features = self.clip.encode_image(images)
        image_features = F.normalize(image_features, dim=-1, p=2)
        logit_scale = self.clip.logit_scale.exp()
        logits_per_image = logit_scale * image_features @ self.text_features.t()
        return logits_per_image.softmax(dim=-1)


class TestMixins:
    def __init__(self):
        self.pcb = None

    def refine_test(self, results, img_metas):
        # Build the PCB lazily on first use (self.pcb may be unset or None
        # depending on how the mixin is initialized).
        if getattr(self, 'pcb', None) is None:
            self.pcb = PCB(COCO_SPLIT['ALL_CLASSES'], model='ViT-B/32')
            # class ids excluded from calibration (COCO base classes)
            self.exclude_ids = [7, 9, 10, 11, 12, 13, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
                                30, 31, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 44, 45,
                                46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 59, 61, 63, 64, 65,
                                66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79]

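        # Flatten the per-class detection results into parallel arrays of
        # boxes, scores, and labels.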
        boxes_list, scores_list, labels_list = [], [], []
        for cls_id, result in enumerate(results[0]):
            if len(result) == 0:
                continue
            boxes_list.append(result[:, :4])
            scores_list.append(result[:, 4])
            labels_list.append([cls_id] * len(result))

        if len(boxes_list) == 0:
            return results
        
        boxes_list = np.concatenate(boxes_list, axis=0)
        scores_list = np.concatenate(scores_list, axis=0)
        labels_list = np.concatenate(labels_list, axis=0)

        logits = self.pcb(img_metas[0]['filename'], boxes_list)

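        # Blend detector confidence with the CLIP class probability
        # (equal weights), skipping classes in the exclude list.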
        for i, prob in enumerate(logits):
            if labels_list[i] not in self.exclude_ids:
                scores_list[i] = scores_list[i] * 0.5 + prob[labels_list[i]] * 0.5

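        # Write the calibrated scores back into the per-class results,
        # preserving the original order.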
        j = 0
        for i in range(len(results[0])):
            num_dets = len(results[0][i])
            if num_dets == 0:
                continue
            for k in range(num_dets):
                results[0][i][k, 4] = scores_list[j]
                j += 1
        
        return results
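
For context, a minimal usage sketch (an assumption on my part: results follows mmdet's per-class list format for one image, and img_metas[0]['filename'] points to that image on disk):

# Hedged usage sketch, not from the repo: re-score one image's detections
# with the CLIP-based PCB before evaluation.
refiner = TestMixins()
refined_results = refiner.refine_test(results, img_metas)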


Where to modify batch_size?

Hi, thank you for your excellent work! I only have one GPU. I have tried adjusting the learning rate, but I cannot reach the accuracy you report; where should I modify the batch size?
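
For orientation, a minimal sketch assuming standard mmdet/mmfewshot config keys (the 8xb4 in the config names suggests 8 GPUs x 4 images per GPU; the values below are illustrative):

# Hedged sketch: the per-GPU batch size lives in the config's data dict;
# the effective batch size is num_gpus * samples_per_gpu, so rescale the
# learning rate to match when training on fewer GPUs.
data = dict(
    samples_per_gpu=4,   # images per GPU
    workers_per_gpu=2,   # dataloader workers per GPU (illustrative)
)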

about PCB in utils.py

Hi, I appreciate your great work.
I noticed you have a PCB block in utils.py; I assume it does the same thing as in DeFRCN, but with CLIP.
I wonder whether your main result on COCO used the refined results, and why you think a PCB block is necessary.

question about fig6

Hello, thank you very much for presenting such a meaningful work. I would like to ask about Fig. 6 in the paper: how did you obtain the real class centers? I am confused and look forward to your reply.
