
damo-yolo's Introduction

English | 简体中文

Contributing README-cn ThirdParty IndustryModels

Introduction

Welcome to DAMO-YOLO! It is a fast and accurate object detection method developed by the TinyML Team from Alibaba DAMO Data Analytics and Intelligence Lab, and it achieves higher performance than the state-of-the-art YOLO series. DAMO-YOLO extends YOLO with several new techniques, including Neural Architecture Search (NAS) backbones, an efficient Reparameterized Generalized-FPN (RepGFPN), a lightweight head with AlignedOTA label assignment, and distillation enhancement. For more details, please refer to our Arxiv Report. Moreover, here you can find not only powerful models, but also highly efficient training strategies and complete tools from training to deployment.
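At a high level, the pipeline composes a searched backbone, a RepGFPN neck and a lightweight head. A minimal conceptual sketch follows (illustrative only; the real classes live under damo/ in this repo):

import torch
import torch.nn as nn

class DetectorSketch(nn.Module):
    """Conceptual DAMO-YOLO-style composition, not the repo's actual Detector class."""
    def __init__(self, backbone: nn.Module, neck: nn.Module, head: nn.Module):
        super().__init__()
        self.backbone = backbone  # e.g., a NAS-searched TinyNAS network
        self.neck = neck          # e.g., an efficient RepGFPN
        self.head = head          # lightweight head, trained with AlignedOTA assignment

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)  # multi-scale feature maps
        feats = self.neck(feats)       # fused multi-scale features
        return self.head(feats)        # classification and box-regression outputs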

Updates

  • [2023/04/12: We release DAMO-YOLO v0.3.1!]
    • Add a 701-category DAMO-YOLO-S model, which covers more application scenarios and serves as a high-quality pre-trained model to improve performance on downstream tasks.
    • Upgrade the DAMO-YOLO-Nano series, which achieves 32.3/38.2/40.5 mAP with only 1.56/3.69/6.04 GFlops and runs in real time at 4.08/5.05/6.69 ms on an Intel CPU.
    • Add the DAMO-YOLO-L model, which achieves 51.9 mAP with 7.95 ms latency on a T4 GPU.
  • [2023/03/13: We release DAMO-YOLO v0.3.0!]
    • Release DAMO-YOLO-Nano, which achieves 35.1 mAP with only 3.02 GFlops.
    • Upgrade the optimizer builder: by editing the optimizer config, you can use any optimizer supported by PyTorch.
    • Upgrade the data loading pipeline and training parameters, leading to significant improvements of the DAMO-YOLO models; e.g., the mAP of DAMO-YOLO-T/S/M increased from 43.0/46.8/50.0 to 43.6/47.7/50.2, respectively.
  • [2023/02/15: Baseline for The 3rd Anti-UAV Challenge.]
  • [2023/01/07: We release DAMO-YOLO v0.2.1!]
  • [2022/12/15: We release DAMO-YOLO v0.1.1!]

Web Demo

Model Zoo

General Models

| Model | size | mAP val 0.5:0.95 | Latency (ms) T4 TRT-FP16-BS1 | FLOPs (G) | Params (M) | AliYun Download | Google Download |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DAMO-YOLO-T | 640 | 42.0 | 2.78 | 18.1 | 8.5 | torch, onnx | -- |
| DAMO-YOLO-T* | 640 | 43.6 | 2.78 | 18.1 | 8.5 | torch, onnx | -- |
| DAMO-YOLO-S | 640 | 46.0 | 3.83 | 37.8 | 16.3 | torch, onnx | -- |
| DAMO-YOLO-S* | 640 | 47.7 | 3.83 | 37.8 | 16.3 | torch, onnx | -- |
| DAMO-YOLO-M | 640 | 49.2 | 5.62 | 61.8 | 28.2 | torch, onnx | -- |
| DAMO-YOLO-M* | 640 | 50.2 | 5.62 | 61.8 | 28.2 | torch, onnx | -- |
| DAMO-YOLO-L | 640 | 50.8 | 7.95 | 97.3 | 42.1 | torch, onnx | -- |
| DAMO-YOLO-L* | 640 | 51.9 | 7.95 | 97.3 | 42.1 | torch, onnx | -- |
Legacy models

| Model | size | mAP val 0.5:0.95 | Latency (ms) T4 TRT-FP16-BS1 | FLOPs (G) | Params (M) | AliYun Download | Google Download |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DAMO-YOLO-T | 640 | 41.8 | 2.78 | 18.1 | 8.5 | torch, onnx | torch, onnx |
| DAMO-YOLO-T* | 640 | 43.0 | 2.78 | 18.1 | 8.5 | torch, onnx | torch, onnx |
| DAMO-YOLO-S | 640 | 45.6 | 3.83 | 37.8 | 16.3 | torch, onnx | torch, onnx |
| DAMO-YOLO-S* | 640 | 46.8 | 3.83 | 37.8 | 16.3 | torch, onnx | torch, onnx |
| DAMO-YOLO-M | 640 | 48.7 | 5.62 | 61.8 | 28.2 | torch, onnx | torch, onnx |
| DAMO-YOLO-M* | 640 | 50.0 | 5.62 | 61.8 | 28.2 | torch, onnx | torch, onnx |
  • We report the mAP of models on the COCO2017 validation set, with multi-class NMS.
  • The latency in this table is measured without post-processing (NMS).
  • * denotes a model trained with distillation.
  • We use S as the teacher to distill T, M as the teacher to distill S, and L as the teacher to distill M, while L is distilled by itself; a conceptual distillation-loss sketch follows below.
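For illustration of the teacher-student setup above, here is a generic feature-distillation loss (hedged: a conceptual sketch, not the exact loss used by this repo; see the Arxiv report for the real distillation design):

import torch
import torch.nn.functional as F

def feature_distill_loss(student_feat: torch.Tensor,
                         teacher_feat: torch.Tensor,
                         tau: float = 1.0) -> torch.Tensor:
    # KL divergence between spatial softmax distributions of the (frozen)
    # teacher's and the student's feature maps, computed per channel.
    b, c, h, w = student_feat.shape
    s = F.log_softmax(student_feat.view(b, c, -1) / tau, dim=-1)
    t = F.softmax(teacher_feat.detach().view(b, c, -1) / tau, dim=-1)
    return F.kl_div(s, t, reduction='batchmean') * tau * tau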

Light Models

| Model | size | mAP val 0.5:0.95 | Latency (ms) CPU OpenVINO-Intel8163 | FLOPs (G) | Params (M) | AliYun Download | Google Download |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DAMO-YOLO-Ns | 416 | 32.3 | 4.08 | 1.56 | 1.41 | torch, onnx | -- |
| DAMO-YOLO-Nm | 416 | 38.2 | 5.05 | 3.69 | 2.71 | torch, onnx | -- |
| DAMO-YOLO-Nl | 416 | 40.5 | 6.69 | 6.04 | 5.69 | torch, onnx | -- |
  • We report the mAP of models on the COCO2017 validation set, with multi-class NMS.
  • The latency in this table is measured without post-processing, following PicoDet.
  • The latency is evaluated with OpenVINO-2022.3.0, using the commands below (a Python-API variant is sketched after them):
    # onnx export, enable --benchmark to ignore postprocess
    python tools/converter.py -f configs/damoyolo_tinynasL18_Ns.py -c ../damoyolo_tinynasL18_Ns.pth --batch_size 1  --img_size 416 --benchmark
    # model transform
    mo --input_model damoyolo_tinynasL18_Ns.onnx --data_type FP16
    # latency benchmark
    ./benchmark_app -m damoyolo_tinynasL18_Ns.xml -i ./assets/dog.jpg -api sync -d CPU -b 1 -hint latency 
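For reference, the same measurement can be approximated from Python with the OpenVINO runtime API. A minimal sketch, assuming OpenVINO 2022.3 and the damoyolo_tinynasL18_Ns.xml produced above (benchmark_app remains the authoritative tool):

import time
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model('damoyolo_tinynasL18_Ns.xml'), 'CPU')
request = compiled.create_infer_request()

x = np.random.rand(1, 3, 416, 416).astype(np.float32)  # Nano models use 416x416 inputs
for _ in range(20):                                    # warm-up runs
    request.infer([x])
start, runs = time.perf_counter(), 100
for _ in range(runs):
    request.infer([x])
print('mean latency: %.2f ms' % ((time.perf_counter() - start) / runs * 1000))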

701-Category DAMO-YOLO Model

We provide a DAMO-YOLO-S model covering 701 categories for general object detection, trained on a large dataset combining COCO, Objects365 and OpenImages. It can also serve as a pre-trained model for fine-tuning on downstream tasks, enabling better performance with ease; a quick checkpoint sanity check is sketched after the table below.

| Pretrained Model | Downstream Task | mAP val 0.5:0.95 | AliYun Download | Google Download |
| --- | --- | --- | --- | --- |
| 80-categories-DAMO-YOLO-S | VisDrone | 24.6 | torch, onnx | - |
| 701-categories-DAMO-YOLO-S | VisDrone | 26.6 | torch, onnx | - |
  • Note: The downloadable model is pretrained on the 701-category dataset. We report the VisDrone results to show that this pretrained model can enhance the performance of downstream tasks.
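Before fine-tuning, it can help to sanity-check the downloaded weights. A minimal sketch, assuming a standard PyTorch checkpoint (the filename below is a placeholder for whichever file you downloaded from the table):

import torch

# Placeholder filename: use the actual file you downloaded from the table above.
ckpt = torch.load('your_downloaded_701_categories_model.pth', map_location='cpu')
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # inspect the checkpoint layout before fine-tuning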

Quick Start

Installation

Step1. Install DAMO-YOLO.

git clone https://github.com/tinyvision/DAMO-YOLO.git
cd DAMO-YOLO/
conda create -n DAMO-YOLO python=3.7 -y
conda activate DAMO-YOLO
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
export PYTHONPATH=$PWD:$PYTHONPATH

Step2. Install pycocotools.

pip install cython;
pip install git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI # for Linux
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI # for Windows
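To verify that pycocotools built correctly, a quick check against any COCO-format annotation file (the path below is only an example):

from pycocotools.coco import COCO

coco = COCO('datasets/coco/annotations/instances_val2017.json')
print(len(coco.getImgIds()), 'images,', len(coco.getCatIds()), 'categories')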
Demo

Step1. Download a pretrained torch model, ONNX model or TensorRT engine from the benchmark table, e.g., damoyolo_tinynasL25_S.pth, damoyolo_tinynasL25_S.onnx, damoyolo_tinynasL25_S.trt.

Step2. Use -f (config filename) to specify your detector's config and --path to specify the input data path; image, video and camera inputs are supported. For example:

# torch engine with image
python tools/demo.py image -f ./configs/damoyolo_tinynasL25_S.py --engine ./damoyolo_tinynasL25_S.pth --conf 0.6 --infer_size 640 640 --device cuda --path ./assets/dog.jpg

# onnx engine with video
python tools/demo.py video -f ./configs/damoyolo_tinynasL25_S.py --engine ./damoyolo_tinynasL25_S.onnx --conf 0.6 --infer_size 640 640 --device cuda --path your_video.mp4

# tensorRT engine with camera
python tools/demo.py camera -f ./configs/damoyolo_tinynasL25_S.py --engine ./damoyolo_tinynasL25_S.trt --conf 0.6 --infer_size 640 640 --device cuda --camid 0
Reproduce our results on COCO

Step1. Prepare COCO dataset

cd <DAMO-YOLO Home>
ln -s /path/to/your/coco ./datasets/coco
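The configs assume the standard COCO layout under datasets/coco (an assumption worth verifying before training); a small Python check:

from pathlib import Path

root = Path('datasets/coco')  # the symlink created above
for rel in ('annotations/instances_train2017.json',
            'annotations/instances_val2017.json',
            'train2017', 'val2017'):
    print(rel, '->', 'OK' if (root / rel).exists() else 'MISSING')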

Step 2. Reproduce our results on COCO by specifying -f (config filename):

python -m torch.distributed.launch --nproc_per_node=8 tools/train.py -f configs/damoyolo_tinynasL25_S.py
Finetune on your data

Please refer to custom dataset tutorial for details.

Evaluation
python -m torch.distributed.launch --nproc_per_node=8 tools/eval.py -f configs/damoyolo_tinynasL25_S.py --ckpt /path/to/your/damoyolo_tinynasL25_S.pth
Customize TinyNAS backbone

Step1. If you want to customize your own backbone, please refer to the [MAE-NAS Tutorial for DAMO-YOLO](https://github.com/alibaba/lightweight-neural-architecture-search/blob/main/scripts/damo-yolo/Tutorial_NAS_for_DAMO-YOLO_cn.md). It is a detailed tutorial on how to obtain an optimal backbone under a latency/FLOPs budget.

Step2. After the search completes, you can replace the structure text in the config with the searched one. Finally, you can get your own custom ResNet-like or CSPNet-like backbone by setting the backbone name to TinyNAS_res or TinyNAS_csp. Please note the difference in out_indices between TinyNAS_res and TinyNAS_csp:

structure = self.read_structure('tinynas_customize.txt')
TinyNAS = {'name': 'TinyNAS_res',   # ResNet-like TinyNAS backbone
           'out_indices': (2, 4, 5)}
TinyNAS = {'name': 'TinyNAS_csp',   # CSPNet-like TinyNAS backbone
           'out_indices': (2, 3, 4)}

Deploy

Installation

Step1. Install ONNX.

pip install onnx==1.8.1
pip install onnxruntime==1.8.0
pip install onnx-simplifier==0.3.5

Step2. Install CUDA, cuDNN, TensorRT and PyCUDA.

2.1 CUDA

wget https://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda_10.2.89_440.33.01_linux.run
sudo sh cuda_10.2.89_440.33.01_linux.run
export PATH=$PATH:/usr/local/cuda-10.2/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.2/lib64
source ~/.bashrc

2.2 CuDNN

sudo cp cuda/include/* /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn.h
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*

2.3 TensorRT

cd TensorRT-7.2.1.6/python
pip install tensorrt-7.2.1.6-cp37-none-linux_x86_64.whl
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:TensorRT-7.2.1.6/lib

2.4 pycuda

pip install pycuda==2022.1
Model Convert

We now support TensorRT INT8 quantization: specify trt_type as int8 to export an INT8 TensorRT engine. You can also try partial quantization to achieve a good compromise between accuracy and latency. Refer to partial_quantization for more details.

Step.1 Convert the torch model to an ONNX model or TensorRT engine; the output file is generated in ./deploy. end2end means exporting the TensorRT engine with NMS included. trt_eval means evaluating the exported TensorRT engine on the COCO validation set after the export completes.

# onnx export 
python tools/converter.py -f configs/damoyolo_tinynasL25_S.py -c damoyolo_tinynasL25_S.pth --batch_size 1 --img_size 640

# trt export
python tools/converter.py -f configs/damoyolo_tinynasL25_S.py -c damoyolo_tinynasL25_S.pth --batch_size 1 --img_size 640 --trt --end2end --trt_eval

Step.2 Evaluate the TensorRT engine on the COCO validation set. end2end means evaluating the engine with NMS included.

python tools/trt_eval.py -f configs/damoyolo_tinynasL25_S.py -trt deploy/damoyolo_tinynasL25_S_end2end_fp16_bs1.trt --batch_size 1 --img_size 640 --end2end

Step.3 Run the ONNX or TensorRT inference demo, specifying the test image/video with --path. end2end means running inference with NMS included.

# onnx inference
python tools/demo.py image -f ./configs/damoyolo_tinynasL25_S.py --engine ./damoyolo_tinynasL25_S.onnx --conf 0.6 --infer_size 640 640 --device cuda --path ./assets/dog.jpg

# trt inference
python tools/demo.py image -f ./configs/damoyolo_tinynasL25_S.py --engine ./deploy/damoyolo_tinynasL25_S_end2end_fp16_bs1.trt --conf 0.6 --infer_size 640 640 --device cuda --path ./assets/dog.jpg --end2end
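For plain onnxruntime inference without this repo's demo script, a minimal sketch; assumptions: the ONNX file exported above to ./deploy, a 640x640 input, and raw outputs that still need score filtering and NMS, so inspect the actual input/output names rather than trusting the placeholders here:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('deploy/damoyolo_tinynasL25_S.onnx',
                            providers=['CPUExecutionProvider'])
inp = sess.get_inputs()[0]
print('input:', inp.name, inp.shape)
for out in sess.get_outputs():
    print('output:', out.name, out.shape)

x = np.random.rand(1, 3, 640, 640).astype(np.float32)  # replace with a preprocessed image
outputs = sess.run(None, {inp.name: x})
print([o.shape for o in outputs])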

Industry Application Models:

We provide DAMO-YOLO models for real-world applications, listed below. More powerful models are coming; please stay tuned.

Human Detection  Helmet Detection  Head Detection  Smartphone Detection
Facemask Detection  Cigarette Detection  Traffic Sign Detection  NFL-Helmet Detection

Third Party Resources

In order to promote communication among DAMO-YOLO users, we collect third-party resources in this section. If you have original content about DAMO-YOLO, please feel free to contact us at [email protected].

Cite DAMO-YOLO

If you use DAMO-YOLO in your research, please cite our work by using the following BibTeX entry:

 @article{damoyolo,
   title={DAMO-YOLO: A Report on Real-Time Object Detection Design},
   author={Xianzhe Xu and Yiqi Jiang and Weihua Chen and Yilun Huang and Yuan Zhang and Xiuyu Sun},
   journal={arXiv preprint arXiv:2211.15444v2},
   year={2022},
 }

 @inproceedings{sun2022mae,
   title={Mae-det: Revisiting maximum entropy principle in zero-shot nas for efficient object detection},
   author={Sun, Zhenhong and Lin, Ming and Sun, Xiuyu and Tan, Zhiyu and Li, Hao and Jin, Rong},
   booktitle={International Conference on Machine Learning},
   pages={20810--20826},
   year={2022},
   organization={PMLR}
 }

@inproceedings{jiang2022giraffedet,
  title={GiraffeDet: A Heavy-Neck Paradigm for Object Detection},
  author={Yiqi Jiang and Zhiyu Tan and Junyan Wang and Xiuyu Sun and Ming Lin and Hao Li},
  booktitle={International Conference on Learning Representations},
  year={2022},
}

damo-yolo's People

Contributors

andrewjywang, cwhgn, fujistoo, hylcool, jyqi, mucunwuxian, ofekp, oracle4444, wonbeomjang, xianzhexu, xiuyu-sxy


damo-yolo's Issues

How to implement the ignore mechanism in AlignOTA

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

The assigner in the source code receives gt_bboxes_ignore, but there is no code that actually handles it. How should AlignOTAAssigner be modified so that ignore labels are assigned to the corresponding predicted boxes?

Additional

No response

About AlignOTA

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

The code in the assigner seems inconsistent with the operations described in the paper (or perhaps I misunderstood): for example, when computing the regression cost, the code uses the negative log IoU while the paper uses the negative IoU; and when computing scale_factor, the code uses abs().pow(2.0) instead of the absolute value. What is the reason for this?

Additional

No response

ModuleNotFoundError: No module named 'damo'

Could someone help me with this? I am quite confused: the imports resolve fine in PyCharm, but the error occurs at training time.
python -m torch.distributed.launch --nproc_per_node=1 tools/train.py -f configs/damoyolo_tinynasL20_T_fire_smoke_xy_calling_body.py
/media/extend/anaconda3/envs/yolov5_2.0/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects --local_rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

warnings.warn(
Traceback (most recent call last):
File "tools/train.py", line 13, in
from damo.apis import Trainer
ModuleNotFoundError: No module named 'damo'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 46733) of binary: /media/extend/anaconda3/envs/yolov5_2.0/bin/python
Traceback (most recent call last):
File "/media/extend/anaconda3/envs/yolov5_2.0/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/media/extend/anaconda3/envs/yolov5_2.0/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/media/extend/anaconda3/envs/yolov5_2.0/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in
main()
File "/media/extend/anaconda3/envs/yolov5_2.0/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/media/extend/anaconda3/envs/yolov5_2.0/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/media/extend/anaconda3/envs/yolov5_2.0/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/media/extend/anaconda3/envs/yolov5_2.0/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/media/extend/anaconda3/envs/yolov5_2.0/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

tools/train.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2022-12-01_17:44:43
host : bova
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 46733)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Traceback (most recent call last): File "tools/torch_inference.py", line 13, in <module> from damo.base_models.core.ops import RepConv ModuleNotFoundError: No module named 'damo'

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

Traceback (most recent call last):
File "tools/torch_inference.py", line 13, in
from damo.base_models.core.ops import RepConv
ModuleNotFoundError: No module named 'damo'

Additional

Traceback (most recent call last):
File "tools/torch_inference.py", line 13, in
from damo.base_models.core.ops import RepConv
ModuleNotFoundError: No module named 'damo'

Why does it say this module is missing? The path looks correct. Hoping for an answer.

About DAMO-YOLO's model speed

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

DAMO-YOLO is great work!
I noticed that DAMO-YOLO's speed evaluation is done with batch size = 1. Have you tried comparing speeds at larger batch sizes (e.g., batch size = 32)?

Additional

No response

Tutorial about onnxruntime

Hi, thanks for your awesome project! Could you provide a detailed step-by-step onnxruntime inference example when you have free time?

What is the difference between ZeroHead and the YOLOv5 head?

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

DAMO-YOLO is impressive new work. How does ZeroHead differ from the YOLOv5 head in network structure? Are they very similar?
ZeroHead:https://github.com/tinyvision/DAMO-YOLO/blob/master/damo/base_models/heads/zero_head.py#L146-L167

self.cls_convs = nn.ModuleList()
self.reg_convs = nn.ModuleList()

for i in range(len(self.strides)):
    cls_convs, reg_convs = self._build_not_shared_convs(
        self.in_channels[i], self.feat_channels[i])
    self.cls_convs.append(cls_convs)
    self.reg_convs.append(reg_convs)

self.gfl_cls = nn.ModuleList([
    nn.Conv2d(self.feat_channels[i],
              self.cls_out_channels,
              3,
              padding=1) for i in range(len(self.strides))
])

self.gfl_reg = nn.ModuleList([
    nn.Conv2d(self.feat_channels[i],
              4 * (self.reg_max + 1),
              3,
              padding=1) for i in range(len(self.strides))
])

YOLOv5 Head: https://github.com/ultralytics/yolov5/blob/master/models/yolo.py#L53
self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
The detection head class:

class Detect(nn.Module):
    stride = None  # strides computed during build
    onnx_dynamic = False  # ONNX export parameter
    export = False  # export mode

    def __init__(self, nc=80, anchors=(), ch=(), inplace=True):  # detection layer
        super().__init__()
        self.nc = nc  # number of classes
        self.no = nc + 5  # number of outputs per anchor
        self.nl = len(anchors)  # number of detection layers
        self.na = len(anchors[0]) // 2  # number of anchors
        self.grid = [torch.zeros(1)] * self.nl  # init grid
        self.anchor_grid = [torch.zeros(1)] * self.nl  # init anchor grid
        self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2))  # shape(nl,na,2)
        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv
        self.inplace = inplace  # use inplace ops (e.g. slice assignment)

Additional

No response

ModuleNotFoundError: No module named 'cuda'

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

Which package do I need to install to fix this error?

Additional

No response

How do I train on my own dataset?

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

Could you provide a detailed tutorial on training a custom dataset?
Also, a suggestion: please gather all the parameters that need manual adjustment in one place. Right now tuning requires searching through several files, and some settings are hard to find.

Additional

No response

Possible path-related bugs in the code (on Windows)

For example, in self.read_structure(
'./damo/base_models/backbones/nas_backbones/tinynas_L20_k1kx.txt'),
the path must be changed to an absolute path to avoid an error. Several other places that use relative paths have the same file-not-found problem.

Real-time video detection

I see that inference can handle a single image; can it also do real-time video detection, like yolov5's detect.py?

About flops of yolov7-tiny

yolov7-tiny's GFlops is 5.2 in Chien-Yao's paper, but it is 13.7 in DAMO-YOLO's Table 8.
I want to know the reason for the difference.

ValueError: array of sample points is empty

The same environment trains on the coco_2017 dataset without problems. When training on my own dataset, this error appears; the config file has already been changed to 5 classes.
2022-12-05 11:03:31 | INFO | damo.apis.detector_trainer:266 - Training start...
2022-12-05 11:03:31 | ERROR | main:67 - An error has been caught in function '', process 'MainProcess' (5218), thread 'MainThread' (139953455899264):
Traceback (most recent call last):

File "tools/train.py", line 67, in
main()
└ <function main at 0x7f4882bdeb00>

File "tools/train.py", line 63, in main
trainer.train(args.local_rank)
│ │ │ └ 0
│ │ └ Namespace(config_file='configs/damoyolo_tinynasL20_T_fire_smoke_xy_calling_body.py', local_rank=0, opts=[], tea_ckpt=None, te...
│ └ <function Trainer.train at 0x7f4875990680>
└ <damo.apis.detector_trainer.Trainer object at 0x7f48759934d0>

File "/home/Documents/DAMO-YOLO/damo/apis/detector_trainer.py", line 272, in train
for data_iter, (inps, targets, ids) in enumerate(self.train_loader):
│ └ <torch.utils.data.dataloader.DataLoader object at 0x7f47e234c750>
└ <damo.apis.detector_trainer.Trainer object at 0x7f48759934d0>

File "/home/anaconda3/envs/DAMO-YOLO/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in next
data = self._next_data()
│ └ <function _MultiProcessingDataLoaderIter._next_data at 0x7f4882c3aef0>
└ <torch.utils.data.dataloader._MultiProcessingDataLoaderIter object at 0x7f47e22b3050>
File "/home/anaconda3/envs/DAMO-YOLO/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
│ │ └ <torch._utils.ExceptionWrapper object at 0x7f480e26f1d0>
│ └ <function _MultiProcessingDataLoaderIter._process_data at 0x7f4882c3e050>
└ <torch.utils.data.dataloader._MultiProcessingDataLoaderIter object at 0x7f47e22b3050>
File "/home/anaconda3/envs/DAMO-YOLO/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
│ └ <function ExceptionWrapper.reraise at 0x7f497338cdd0>
└ <torch._utils.ExceptionWrapper object at 0x7f480e26f1d0>
File "/home/anaconda3/envs/DAMO-YOLO/lib/python3.7/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
│ │ └ 'Caught ValueError in DataLoader worker process 0.\nOriginal Traceback (most recent call last):\n File "/home/anaconda3...
│ └ <class 'ValueError'>
└ <torch._utils.ExceptionWrapper object at 0x7f480e26f1d0>

ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/anaconda3/envs/DAMO-YOLO/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/anaconda3/envs/DAMO-YOLO/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/anaconda3/envs/DAMO-YOLO/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/Documents/DAMO-YOLO/damo/dataset/datasets/mosaic_wrapper.py", line 308, in getitem
shear=self.shear,
File "/home/Documents/DAMO-YOLO/damo/dataset/datasets/mosaic_wrapper.py", line 142, in random_affine
segments = resample_segments(segments) # upsample
File "/home/Documents/DAMO-YOLO/damo/dataset/datasets/mosaic_wrapper.py", line 29, in resample_segments
np.interp(x, xp, s[:, i]) for i in range(2)
File "/home/Documents/DAMO-YOLO/damo/dataset/datasets/mosaic_wrapper.py", line 29, in
np.interp(x, xp, s[:, i]) for i in range(2)
File "<array_function internals>", line 6, in interp
File "/home/anaconda3/envs/DAMO-YOLO/lib/python3.7/site-packages/numpy/lib/function_base.py", line 1439, in interp
return interp_func(x, xp, fp, left, right)
ValueError: array of sample points is empty

Version problem

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

Hello, after setting up the environment following the tutorial, I ran the command below and hit a problem:
python tools/torch_inference.py -f configs/damoyolo_tinynasL25_S.py --ckpt models/damoyolo_tinynasL25_S.pth --path assets/dog.jpg

The exact error message:
=>NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))

My analysis: since I am using an RTX 3090 with CUDA 11.4 installed, the error is mainly caused by the following line,
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.2 -c pytorch

So I uninstalled the packages above and reinstalled them with:
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

Running again, it reports:
=>AttributeError: module 'torch._six' has no attribute 'PY37'

Could you help me take a look?

Additional

No response

Installation error on Windows 10

Test environment:
win10 x64
VS2019
cuda11.1+cudnn8.2
torch==1.9.0+cu111
torchvision==0.10.0+cu111

Running python setup.py install reports an error:

C:\Users\fut\Desktop\DAMO-YOLO-20221202>python setup.py install
running install
running bdist_egg
running egg_info
writing damo.egg-info\PKG-INFO
writing dependency_links to damo.egg-info\dependency_links.txt
writing top-level names to damo.egg-info\top_level.txt
reading manifest file 'damo.egg-info\SOURCES.txt'
writing manifest file 'damo.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
running build_ext
D:\anaconda3\lib\site-packages\torch\utils\cpp_extension.py:274: UserWarning: Error checking compiler version for cl: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte
warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
building 'damo._C' extension
Emitting ninja build file C:\Users\fut\Desktop\DAMO-YOLO-20221202\build\temp.win-amd64-3.8\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: error: 'C:/Users/fut/Desktop/DAMO-YOLO-20221202/damo/layers/csrc/vision.cpp', needed by 'C:/Users/fut/Desktop/DAMO-YOLO-20221202/build/temp.win-amd64-3.8/Release/Users/fut/Desktop/DAMO-YOLO-20221202/damo/layers/csrc/vision.obj', missing and no known rule to make it
Traceback (most recent call last):
File "D:\anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1516, in _run_ninja_build
subprocess.run(
File "D:\anaconda3\lib\subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "setup.py", line 50, in
setuptools.setup(
File "D:\anaconda3\lib\site-packages\setuptools_init_.py", line 153, in setup
return distutils.core.setup(**attrs)
File "D:\anaconda3\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "D:\anaconda3\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "D:\anaconda3\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "D:\anaconda3\lib\site-packages\setuptools\command\install.py", line 67, in run
self.do_egg_install()
File "D:\anaconda3\lib\site-packages\setuptools\command\install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "D:\anaconda3\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\anaconda3\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "D:\anaconda3\lib\site-packages\setuptools\command\bdist_egg.py", line 164, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "D:\anaconda3\lib\site-packages\setuptools\command\bdist_egg.py", line 150, in call_command
self.run_command(cmdname)
File "D:\anaconda3\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\anaconda3\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "D:\anaconda3\lib\site-packages\setuptools\command\install_lib.py", line 11, in run
self.build()
File "D:\anaconda3\lib\distutils\command\install_lib.py", line 107, in build
self.run_command('build_ext')
File "D:\anaconda3\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\anaconda3\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "D:\anaconda3\lib\site-packages\setuptools\command\build_ext.py", line 79, in run
_build_ext.run(self)
File "D:\anaconda3\lib\site-packages\Cython\Distutils\old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "D:\anaconda3\lib\distutils\command\build_ext.py", line 340, in run
self.build_extensions()
File "D:\anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 653, in build_extensions
build_ext.build_extensions(self)
File "D:\anaconda3\lib\site-packages\Cython\Distutils\old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "D:\anaconda3\lib\distutils\command\build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "D:\anaconda3\lib\distutils\command\build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "D:\anaconda3\lib\site-packages\setuptools\command\build_ext.py", line 196, in build_extension
_build_ext.build_extension(self, ext)
File "D:\anaconda3\lib\distutils\command\build_ext.py", line 528, in build_extension
objects = self.compiler.compile(sources,
File "D:\anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 626, in win_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "D:\anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1233, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "D:\anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1538, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension

Customize tinynas backbone tutorial doesn't work with custom parameters

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

As described in the Customize tinynas backbone tutorial, a file like conv_data.out.fp16.damoyolo is needed. How do I generate this file after changing the parameters?

Additional

No response

About the number of Fusion Blocks in GiraffeNeckV2

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

Hello, I have a question. In the GiraffeNeckV2 code I can only find 5 Fusion Blocks, corresponding to self.merge_x (x=3,4,5,6,7). Where is the last Fusion Block of the largest-feature-map branch from the paper? Also, I cannot find the input to self.merge_7 coming from the largest-feature-map branch anywhere in the code. (screenshot omitted)

Additional

No response

Environment problem

When running pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI', I keep getting this error:
(damo-yolo) masterqkk@QKK:~/workspace/pycharm/DAMO-YOLO$ pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI
Cloning https://github.com/cocodataset/cocoapi.git to /tmp/pip-req-build-lg51638m
Running command git clone --filter=blob:none --quiet https://github.com/cocodataset/cocoapi.git /tmp/pip-req-build-lg51638m

error: RPC failed; curl 56 GnuTLS recv error (-110): The TLS connection was non-properly terminated.
fatal: the remote end hung up unexpectedly
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'

error: subprocess-exited-with-error

× git clone --filter=blob:none --quiet https://github.com/cocodataset/cocoapi.git /tmp/pip-req-build-lg51638m did not run successfully.
│ exit code: 128
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× git clone --filter=blob:none --quiet https://github.com/cocodataset/cocoapi.git /tmp/pip-req-build-lg51638m did not run successfully.
│ exit code: 128
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

Could you help me take a look?

Why do so many warnings appear during training?

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

Why do all these warnings appear? (screenshot omitted)

Additional

No response

Error when running inference directly in PyCharm with tools/torch_inference.py

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

The argument-parsing part:
def make_parser():
    parser = argparse.ArgumentParser('damo eval')

    parser.add_argument(
        '-f',
        '--config_file',
        default=r'configs/damoyolo_tinynasL25_S.py',
        type=str,
        help='pls input your config file',
    )
    parser.add_argument('-p',
                        '--path',
                        default=r'D:\hb\DAMO-YOLO\DAMO-YOLO-master\assets\dog.jpg',
                        type=str,
                        help='path to image')
    parser.add_argument('-c',
                        '--ckpt',
                        default=r'path/to/your/damoyolo_tinynasL25_S.pth',
                        type=str,
                        help='ckpt for eval')
    parser.add_argument('--conf',
                        default=0.6,
                        type=float,
                        help='conf of visualization')
    parser.add_argument('--img_size',
                        default=640,
                        type=int,
                        help='test img size')
    return parser

The error when running:
Traceback (most recent call last):

File "D:\hb\DAMO-YOLO\DAMO-YOLO-master\tools\torch_inference.py", line 125, in
main()
└ <function main at 0x000001861B046DC8>

File "D:\hb\DAMO-YOLO\DAMO-YOLO-master\tools\torch_inference.py", line 73, in main
config = parse_config(args.config_file)
│ │ └ 'configs/damoyolo_tinynasL25_S.py'
│ └ Namespace(ckpt='path/to/your/damoyolo_tinynasL25_S.pth', conf=0.6, config_file='configs/damoyolo_tinynasL25_S.py', img_size=6...
└ <function parse_config at 0x000001861B0469D8>

File "D:\hb\DAMO-YOLO\DAMO-YOLO-master\damo\config\base.py", line 134, in parse_config
return get_config_by_file(config_file)
│ └ 'configs/damoyolo_tinynasL25_S.py'
└ <function get_config_by_file at 0x0000018619256AF8>

File "D:\hb\DAMO-YOLO\DAMO-YOLO-master\damo\config\base.py", line 122, in get_config_by_file
"{} doesn't contains class named 'Config'".format(config_file))
└ 'configs/damoyolo_tinynasL25_S.py'

ImportError: configs/damoyolo_tinynasL25_S.py doesn't contains class named 'Config'
How should I solve this?
Looking forward to a reply, thanks.

Additional

No response

An error occurs when evaluation starts at epoch 10 during training; how can I solve it?

Hardware: RTX 3090
Software: Ubuntu 18.04, miniconda virtual environment, python=3.8, pytorch=1.8.0, torchvision=0.9.0, cudatoolkit=11.1.1

2022-12-03 17:30:24.814 | INFO     | damo.apis.detector_trainer:train:358 - epoch: 10/300, iter: 328/358, mem: 4212Mb, iter_time: 0.314s, model_time: 0.209s, total_loss: 1.4, loss_cls: 0.6, loss_bbox: 0.5, loss_dfl: 0.3, lr: 2.498e-03, size: (640, 640), ETA: 9:17:39
2022-12-03 17:30:34.112 | INFO     | damo.apis.detector_trainer:save_ckpt:389 - Save weights to ./workdirs/damoyolo_tinynasL20_T
2022-12-03 17:30:34.229 | INFO     | damo.apis.detector_inference:inference:75 - Start evaluation on coco_2017_val dataset(570 images).
2022-12-03 17:30:40.252 | ERROR    | __main__:<module>:64 - An error has been caught in function '<module>', process 'MainProcess' (16584), thread 'MainThread' (140681432760896):
Traceback (most recent call last):

> File "tools/train.py", line 64, in <module>
    main()
    └ <function main at 0x7ff1f4393d30>

  File "tools/train.py", line 59, in main
    trainer.train(args.local_rank)
    │       │     │    └ 0
    │       │     └ Namespace(config_file='configs/damoyolo_tinynasL20_T.py', local_rank=0, opts=[], tea_ckpt=None, tea_config=None)
    │       └ <function Trainer.train at 0x7ff1e97c5c10>
    └ <damo.apis.detector_trainer.Trainer object at 0x7ff1e977ffa0>

  File "/home/pc/wanghe/DAMO-YOLO-master/damo/apis/detector_trainer.py", line 374, in train
    self.evaluate(local_rank, self.cfg.dataset.val_ann)
    │    │        │           │    │   │       └ ['coco_2017_val']
    │    │        │           │    │   └ {'paths_catalog': '/home/pc/wanghe/DAMO-YOLO-master/damo/config/paths_catalog.py', 'train_ann': ['coco_2017_train'], 'val_ann...
    │    │        │           │    └ ╒═════════╤════════════════════════════════════════════════════════════════════════════════════╕
    │    │        │           │      │ keys    │ values          ...
    │    │        │           └ <damo.apis.detector_trainer.Trainer object at 0x7ff1e977ffa0>
    │    │        └ 0
    │    └ <function Trainer.evaluate at 0x7ff1e97c5dc0>
    └ <damo.apis.detector_trainer.Trainer object at 0x7ff1e977ffa0>

  File "/home/pc/wanghe/DAMO-YOLO-master/damo/apis/detector_trainer.py", line 438, in evaluate
    inference(
    └ <function inference at 0x7ff1e9811430>

  File "/home/pc/wanghe/DAMO-YOLO-master/damo/apis/detector_inference.py", line 80, in inference
    predictions = compute_on_dataset(model, data_loader, device,
                  │                  │      │            └ device(type='cuda')
                  │                  │      └ <torch.utils.data.dataloader.DataLoader object at 0x7ff191a4b2b0>
                  │                  └ Detector(
                  │                      (backbone): TinyNAS(
                  │                        (block_list): ModuleList(
                  │                          (0): Focus(
                  │                            (conv): ConvBNAct(
                  │                              (conv):...
                  └ <function compute_on_dataset at 0x7ff1f43b00d0>

  File "/home/pc/wanghe/DAMO-YOLO-master/damo/apis/detector_inference.py", line 23, in compute_on_dataset
    output = model(images.to(device))
             │     │      │  └ device(type='cuda')
             │     │      └ <function ImageList.to at 0x7ff1e9867a60>
             │     └ <damo.structures.image_list.ImageList object at 0x7ff191a4b580>
             └ Detector(
                 (backbone): TinyNAS(
                   (block_list): ModuleList(
                     (0): Focus(
                       (conv): ConvBNAct(
                         (conv):...

  File "/home/pc/miniconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
             │    │        │        └ {}
             │    │        └ (<damo.structures.image_list.ImageList object at 0x7ff191a32190>,)
             │    └ <function Detector.forward at 0x7ff1e97c5550>
             └ Detector(
                 (backbone): TinyNAS(
                   (block_list): ModuleList(
                     (0): Focus(
                       (conv): ConvBNAct(
                         (conv):...

  File "/home/pc/wanghe/DAMO-YOLO-master/damo/detectors/detector.py", line 60, in forward
    outputs = self.head(
              └ Detector(
                  (backbone): TinyNAS(
                    (block_list): ModuleList(
                      (0): Focus(
                        (conv): ConvBNAct(
                          (conv):...

  File "/home/pc/miniconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
             │    │        │        └ {'imgs': <damo.structures.image_list.ImageList object at 0x7ff191a32190>}
             │    │        └ ((tensor([[[[9.0678e-01, 4.4305e-01, 0.0000e+00,  ..., 0.0000e+00,
             │    │                     0.0000e+00, 0.0000e+00],
             │    │                    [2.0511e+00,...
             │    └ <function ZeroHead.forward at 0x7ff1e97c05e0>
             └ ZeroHead(
                 (integral): Integral()
                 (loss_dfl): DistributionFocalLoss()
                 (loss_cls): QualityFocalLoss()
                 (loss_bbox): GIoU...

  File "/home/pc/wanghe/DAMO-YOLO-master/damo/base_models/heads/zero_head.py", line 190, in forward
    return self.forward_eval(xin=xin, labels=labels, imgs=imgs)
           │    │                │           │            └ <damo.structures.image_list.ImageList object at 0x7ff191a32190>
           │    │                │           └ None
           │    │                └ (tensor([[[[9.0678e-01, 4.4305e-01, 0.0000e+00,  ..., 0.0000e+00,
           │    │                             0.0000e+00, 0.0000e+00],
           │    │                            [2.0511e+00, ...
           │    └ <function ZeroHead.forward_eval at 0x7ff1e97c0700>
           └ ZeroHead(
               (integral): Integral()
               (loss_dfl): DistributionFocalLoss()
               (loss_cls): QualityFocalLoss()
               (loss_bbox): GIoU...

  File "/home/pc/wanghe/DAMO-YOLO-master/damo/base_models/heads/zero_head.py", line 272, in forward_eval
    output = postprocess(cls_scores, bbox_preds, self.num_classes,
             │           │           │           │    └ 4
             │           │           │           └ ZeroHead(
             │           │           │               (integral): Integral()
             │           │           │               (loss_dfl): DistributionFocalLoss()
             │           │           │               (loss_cls): QualityFocalLoss()
             │           │           │               (loss_bbox): GIoU...
             │           │           └ tensor([[[-32.3044, -23.7261,  44.9907,  40.6942],
             │           │                      [-10.5998,  -4.7541,  33.7747,  30.0459],
             │           │                      [ -7.4818,  -2...
             │           └ tensor([[[0.0160, 0.0152, 0.0171, 0.0109],
             │                      [0.0226, 0.0271, 0.0206, 0.0123],
             │                      [0.0199, 0.0295, 0.0191, 0.011...
             └ <function postprocess at 0x7ff1f3d88d30>

  File "/home/pc/wanghe/DAMO-YOLO-master/damo/utils/boxes.py", line 125, in postprocess
    detections, scores, labels = multiclass_nms(bbox_preds[i],
                                 │              │          └ 0
                                 │              └ tensor([[[-32.3044, -23.7261,  44.9907,  40.6942],
                                 │                         [-10.5998,  -4.7541,  33.7747,  30.0459],
                                 │                         [ -7.4818,  -2...
                                 └ <function multiclass_nms at 0x7ff1f43b0550>

  File "/home/pc/wanghe/DAMO-YOLO-master/damo/utils/boxes.py", line 79, in multiclass_nms
    keep = torchvision.ops.batched_nms(bboxes, scores, labels, iou_thr)
           │           │   │           │       │       │       └ 0.7
           │           │   │           │       │       └ tensor([1, 0, 0,  ..., 2, 2, 2], device='cuda:0')
           │           │   │           │       └ tensor([0.0526, 0.0590, 0.0560,  ..., 0.0722, 0.0689, 0.0566], device='cuda:0')
           │           │   │           └ tensor([[-9.2591e-01, -4.2191e-01,  2.7347e+01,  2.4332e+01],
           │           │   │                     [ 9.0118e+01,  2.7858e-01,  1.4205e+02,  2.4722e+01],
           │           │   │              ...
           │           │   └ <function batched_nms at 0x7ff1f4329ca0>
           │           └ <module 'torchvision.ops' from '/home/pc/miniconda3/envs/yolov5/lib/python3.8/site-packages/torchvision/ops/__init__.py'>
           └ <module 'torchvision' from '/home/pc/miniconda3/envs/yolov5/lib/python3.8/site-packages/torchvision/__init__.py'>

  File "/home/pc/miniconda3/envs/yolov5/lib/python3.8/site-packages/torch/jit/_trace.py", line 1091, in wrapper
    return fn(*args, **kwargs)
           │   │       └ {}
           │   └ (tensor([[-9.2591e-01, -4.2191e-01,  2.7347e+01,  2.4332e+01],
           │             [ 9.0118e+01,  2.7858e-01,  1.4205e+02,  2.4722e+01],
           │     ...
           └ <function batched_nms at 0x7ff1f4329c10>
  File "/home/pc/miniconda3/envs/yolov5/lib/python3.8/site-packages/torchvision/ops/boxes.py", line 75, in batched_nms
    keep = nms(boxes_for_nms, scores, iou_threshold)
           │   │              │       └ 0.7
           │   │              └ tensor([0.0526, 0.0590, 0.0560,  ..., 0.0722, 0.0689, 0.0566], device='cuda:0')
           │   └ tensor([[ 6.8040e+02,  6.8090e+02,  7.0867e+02,  7.0566e+02],
           │             [ 9.0118e+01,  2.7858e-01,  1.4205e+02,  2.4722e+01],
           │      ...
           └ <function nms at 0x7ff1f4329790>
  File "/home/pc/miniconda3/envs/yolov5/lib/python3.8/site-packages/torchvision/ops/boxes.py", line 35, in nms
    _assert_has_ops()
    └ <function _assert_has_ops at 0x7ff1f43b0b80>
  File "/home/pc/miniconda3/envs/yolov5/lib/python3.8/site-packages/torchvision/extension.py", line 62, in _assert_has_ops
    raise RuntimeError(

RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.

How should I solve this?

I also have a suggestion: the RTX 3090 seems to support only CUDA 11.1 and above. Does your code only support pytorch 1.7.0 and cuda 10.2? That feels a bit old; I suggest that future code support newer PyTorch and CUDA versions. Thanks.

Question: Is there no CPU-only version?

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

Can it only run on machines with a GPU? Can it not run on a CPU-only machine?

Additional

No response

Training on my own dataset

Hi everyone, how do I train on my own dataset, in COCO format or YOLO format?

Loss is always 0

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

When training on my own dataset, the loss is always 0. What could be the reason? Do I need to change the label categories somewhere? (screenshot omitted)

Additional

No response

Since this is SOTA, would you mind running the paperswithcode COCO real-time leaderboard for easier comparison?

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

Real-Time Object Detection on COCO
https://paperswithcode.com/sota/real-time-object-detection-on-coco?p=yolov7-trainable-bag-of-freebies-sets-new

Additional

No response

[Bug]: Installing the cocoapi package in a virtual environment

Before Reporting

  • I have pulled the latest code of the main branch to run again and the bug still exists.

  • I have read the README carefully and no error occurred during the installation process. (Otherwise, we recommend asking a question using the Question template.)

Search before reporting

  • I have searched the DAMO-YOLO issues and found no similar bug reports.

OS

Windows10

Device

cpu

CUDA version

No response

TensorRT version

No response

Python version

3.8

PyTorch version

1.7

torchvision version

0.8

Describe the bug

Windows, conda virtual environment: installing cocoapi reports an error.
The correct installation command is:
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI

Tested and working; please consider adopting it.

To Reproduce

The package installation command is wrong; see above for details.

Hyper-parameters/Configs

No response

Logs

No response

Screenshots

No response

Additional

No response

Environment problem

Before Asking

  • I have read the README carefully.

  • I want to train my custom dataset, and I have read the tutorial for finetuning on custom data carefully and organized my dataset correctly.

  • I have pulled the latest code of the main branch to run again and the problem still exists.

Search before asking

  • I have searched the DAMO-YOLO issues and found no similar questions.

Question

After setting up the environment as required, I ran:
python tools/torch_inference.py -f configs/damoyolo_tinynasL25_S.py --ckpt /path/to/your/damoyolo_tinynasL25_S.pth --path assets/dog.jpg
and got the following error:
Traceback (most recent call last):
File "tools/torch_inference.py", line 13, in
from damo.base_models.core.ops import RepConv
ModuleNotFoundError: No module named 'damo.base_models'
I tried to install the missing package, but pip says no such package exists.
Has anyone run into this? How can I solve this error? Could you help me take a look?

Additional

No response
