
paddlepaddle / paddledetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.

License: Apache License 2.0

Python 84.56% Shell 2.04% CMake 0.57% C++ 11.61% Cuda 0.98% Makefile 0.16% C 0.08%
object-detection instance-segmentation faster-rcnn mask-rcnn yolov3 blazeface face-detection fcos pp-yolo fairmot

paddledetection's Introduction


💌Contents

🌈Introduction

PaddleDetection is an end-to-end object detection development kit based on PaddlePaddle. Alongside rich model components and benchmarks, it focuses on end-to-end industrial deployment: through industry-grade featured models and tools and industrial application examples, it helps developers complete the full workflow of data preparation, model selection, model training, and model deployment, enabling fast real-world adoption.

The main model capabilities are illustrated below (click a title to jump to the section):

General Object Detection | Small Object Detection | Rotated Box Detection | 3D Object Detection
Face Detection | 2D Keypoint Detection | Multi-Object Tracking | Instance Segmentation
Vehicle Analysis: License Plate Recognition | Vehicle Analysis: Traffic Flow Counting | Vehicle Analysis: Violation Detection | Vehicle Analysis: Attribute Analysis
Pedestrian Analysis: Intrusion Detection | Pedestrian Analysis: Behavior Analysis | Pedestrian Analysis: Attribute Analysis | Pedestrian Analysis: People Counting

PaddleDetection also provides an online demo, where users can run inference on their own data.

Note: to limit server load, all online inference runs on CPU. For complete model development examples and industrial deployment code, see 🎗️Featured Industrial Models & Tools.

Quick link: online model demo

📣Latest News

🔥Surpassing YOLOv8: PaddlePaddle releases RT-DETR, the highest-accuracy real-time detector!

👫Open-Source Community

  • 📑Project cooperation: If you are an enterprise developer with a concrete vertical object-detection use case, scan the QR code below to join the group and contact the group admin (AI) to start cooperation with the official team at different levels, free of charge.
  • 🏅️Community contributions: PaddleDetection warmly welcomes you to join the open-source development of the PaddlePaddle community; see the open-source project development guide for how to contribute.
  • 💻Live tutorials: PaddleDetection regularly hosts live streams in the PaddlePaddle studio (Bilibili: 飞桨PaddlePaddle; WeChat: 飞桨PaddlePaddle) covering new releases, industrial examples, and usage tutorials.
  • 🎁Join the community: Scan the QR code with WeChat and fill in the questionnaire to receive:
    • Announcements of the latest community articles, live courses, and other events
    • Recordings and slides of past live streams
    • 30+ high-performance pretrained models for verticals such as pedestrians and vehicles
    • A curated list of download links for open datasets across seven tasks
    • 40+ state-of-the-art detection algorithms from top conferences
    • 15+ video courses on object detection theory and practice, from scratch
    • 10+ hands-on end-to-end projects in industry, security, and transportation (with source code)

QR code of the official PaddleDetection discussion group

📖 Technical Exchange & Cooperation

✨Key Features

🧩Modular Design

PaddleDetection decouples detection models into modular components. By combining custom components, users can build detection models quickly and efficiently. Quick link: 🧩Module Components
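The modular design described above can be sketched generically as a component registry plus a config-driven builder. The following is a hypothetical, simplified illustration of the pattern, not PaddleDetection's actual implementation (all class and function names here are made up):

```python
# Hypothetical sketch of registry-based modular model assembly.
REGISTRY = {}

def register(cls):
    """Add a component class to the global registry under its class name."""
    REGISTRY[cls.__name__] = cls
    return cls

@register
class ResNet:
    def __init__(self, depth=50):
        self.depth = depth

@register
class FPN:
    def __init__(self, out_channels=256):
        self.out_channels = out_channels

@register
class YOLOHead:
    def __init__(self, num_classes=80):
        self.num_classes = num_classes

def build_detector(cfg):
    """Instantiate each component named in the config with its kwargs."""
    return {part: REGISTRY[spec["type"]](**spec.get("args", {}))
            for part, spec in cfg.items()}

model = build_detector({
    "backbone": {"type": "ResNet", "args": {"depth": 50}},
    "neck": {"type": "FPN"},
    "head": {"type": "YOLOHead", "args": {"num_classes": 4}},
})
```

PaddleDetection's real configs express the same idea declaratively in YAML, naming the backbone, neck, and head components of an architecture.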

📱Rich Model Zoo

PaddleDetection supports a large number of up-to-date mainstream algorithm benchmarks and pretrained models, covering 2D/3D object detection, instance segmentation, face detection, keypoint detection, multi-object tracking, semi-supervised learning, and more. Quick links: 📱Model Zoo, ⚖️Model Performance Comparison

🎗️Featured Industrial Models & Tools

PaddleDetection builds industry-grade featured models and analysis tools such as PP-YOLOE+, PP-PicoDet, PP-TinyPose, PP-HumanV2, and PP-Vehicle, providing deeply optimized solutions and highly integrated analysis tools for common, high-frequency vertical scenarios. These reduce developers' trial-and-error and selection costs and enable fast deployment for business scenarios. Quick link: 🎗️Featured Industrial Models & Tools

💡🏆Industrial Deployment Practice

PaddleDetection curates AI application examples across industry, agriculture, forestry, transportation, healthcare, finance, and energy, covering the full workflow of data annotation, model training, model tuning, and inference deployment, continuously lowering the barrier to industrial adoption of object detection. Quick links: 💡Industrial Practice Examples, 🏆Enterprise Application Cases

🍱Installation

Follow the installation guide to install.

🔥Tutorials

Introduction to deep learning

Quick start

Data preparation

Configuration file guide

Model development

Deployment and inference

🔑FAQ

🧩Module Components

Backbones Necks Loss Common Data Augmentation
  • Post-processing
  • Training
  • Common
  • 📱Model Zoo

    2D Detection Multi Object Tracking KeyPoint Detection Others
  • Instance Segmentation
  • Face Detection
  • Semi-Supervised Detection
  • 3D Detection
  • Vehicle Analysis Toolbox
  • Human Analysis Toolbox
  • Sport Analysis Toolbox
  • ⚖️Model Performance Comparison

    🖥️Server-Side Model Performance Comparison

    Comparison of COCO mAP accuracy and inference speed (FPS, single Tesla V100) for representative models of each architecture and backbone.

    Test notes (click to expand)
    • ViT denotes the ViT-Cascade-Faster-RCNN model, reaching 55.7% mAP on COCO
    • Cascade-Faster-RCNN denotes Cascade-Faster-RCNN-ResNet50vd-DCN, which PaddleDetection optimizes to 47.8% COCO mAP at 20 FPS inference
    • PP-YOLOE is a further optimization of PP-YOLOv2; the L variant reaches 51.6% COCO mAP at 78.1 FPS on Tesla V100
    • PP-YOLOE+ is a further optimization of PP-YOLOE; the L variant reaches 53.3% COCO mAP at 78.1 FPS on Tesla V100
    • YOLOX and YOLOv5 are algorithms reproduced on PaddleDetection; the YOLOv5 code lives in PaddleYOLO, see PaddleYOLO_MODEL
    • All models in the figure are available in the 📱Model Zoo

    ⌚️Mobile Model Performance Comparison

    Comparison of COCO mAP accuracy and inference speed (FPS, Qualcomm Snapdragon 865) for mobile models.

    Test notes (click to expand)
    • All tests use a Qualcomm Snapdragon 865 (4xA77 + 4xA55) with batch size 1 and 4 threads, using the NCNN inference library; see MobileDetBenchmark for the test script
    • PP-PicoDet and PP-YOLO-Tiny are PaddleDetection in-house models available in the 📱Model Zoo; the other models are not currently provided by PaddleDetection

    🎗️Featured Industrial Models & Tools

    Featured industrial models and tools are models and toolboxes that PaddleDetection builds for high-frequency industrial scenarios, balancing accuracy and speed. They emphasize an end-to-end workflow from data processing through model training, tuning, and deployment, and ship with example code for real production environments, helping developers with similar needs bring products to production efficiently.

    All models and tools in this series are named with the PP prefix. Their details, pretrained models, and industrial example code follow.

    💎PP-YOLOE: High-Accuracy Object Detection Model

    Introduction (click to expand)

    PP-YOLOE is an excellent single-stage anchor-free model built on PP-YOLOv2, surpassing many popular YOLO models. PP-YOLOE avoids special operators such as Deformable Convolution and Matrix NMS so that it deploys easily on a wide range of hardware. It is pretrained on the large-scale Objects365 dataset, allowing fast fine-tuning and convergence on datasets from different scenarios.

    Quick link: PP-YOLOE documentation

    Quick link: arXiv paper
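The anchor-free idea behind heads like PP-YOLOE's can be illustrated with FCOS-style box decoding: each grid point predicts distances to the four box edges, so no anchor boxes are needed. This is a minimal sketch of the general technique, not PaddleDetection's actual code:

```python
# FCOS-style anchor-free box decoding (illustrative sketch).
def decode_box(cx, cy, l, t, r, b):
    """Turn a grid-point location (cx, cy) and predicted distances to the
    left/top/right/bottom edges into an (x1, y1, x2, y2) box."""
    return (cx - l, cy - t, cx + r, cy + b)

# A point at (100, 80) predicting distances 30/20/10/40:
box = decode_box(100.0, 80.0, 30.0, 20.0, 10.0, 40.0)
```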

    Pretrained models (click to expand)
    Model | COCO mAP | V100 TensorRT FP16 speed (FPS) | Recommended hardware | Config | Download
    PP-YOLOE+_l | 53.3 | 149.2 | Server | link | download

    Quick link: all pretrained models

    Industrial application code examples (click to expand)
    Industry | Category | Highlights | Documentation | Download
    Agriculture | Crop detection | Image-based monitoring and field robotics in viticulture, with field instances from 5 grape varieties | PP-YOLOE+ downstream task | download link
    General | Low-light detection | Uses the ExDark low-light dataset, with images under 10 lighting conditions from extremely low light to twilight | PP-YOLOE+ downstream task | download link
    Industrial | PCB defect detection | Uses the PKU-Market-PCB dataset for printed circuit board (PCB) defect detection, covering 6 common PCB defects | PP-YOLOE+ downstream task | download link

    💎PP-YOLOE-R: High-Performance Rotated Box Detection Model

    Introduction (click to expand)

    PP-YOLOE-R is an efficient single-stage anchor-free rotated-box detection model that builds on PP-YOLOE+ with a series of improvements to boost accuracy. To match different accuracy and speed requirements across hardware, PP-YOLOE-R comes in four sizes: s/m/l/x. On the DOTA 1.0 dataset, PP-YOLOE-R-l and PP-YOLOE-R-x reach 78.14 and 78.28 mAP with single-scale training and testing, surpassing almost all rotated-box detectors under single-scale evaluation. With multi-scale training and testing, their accuracy improves further to 80.02 and 80.73 mAP, surpassing all anchor-free methods and nearly matching state-of-the-art anchor-based two-stage models. While maintaining high accuracy, PP-YOLOE-R avoids special operators such as Deformable Convolution and Rotated RoI Align, so it deploys easily on diverse hardware.

    Quick link: PP-YOLOE-R documentation

    Quick link: arXiv paper
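Rotated boxes like those PP-YOLOE-R predicts are typically parameterized as (cx, cy, w, h, angle). Converting that representation to the four corner points used for visualization and evaluation is a small geometric exercise; the sketch below illustrates the general conversion and is not PaddleDetection's code:

```python
import math

# Convert a rotated box (cx, cy, w, h, angle) to its four corner points.
def rbox_to_corners(cx, cy, w, h, angle_rad):
    """Rotate the axis-aligned corners of a w*h box by angle_rad around (cx, cy)."""
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]:
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners

# With angle 0 the corners are just the axis-aligned box corners.
corners = rbox_to_corners(0.0, 0.0, 4.0, 2.0, 0.0)
```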

    Pretrained models (click to expand)
    Model | Backbone | mAP | V100 TRT FP16 (FPS) | RTX 2080 Ti TRT FP16 (FPS) | Params (M) | FLOPs (G) | LR schedule | Angle representation | Data augmentation | GPUs | Images/GPU | Download | Config
    PP-YOLOE-R-l | CRN-l | 80.02 | 69.7 | 48.3 | 53.29 | 281.65 | 3x | oc | MS+RR | 4 | 2 | model | config

    Quick link: all pretrained models

    Industrial application code examples (click to expand)
    Industry | Category | Highlights | Documentation | Download
    General | Rotated box detection | A hands-on guide to PP-YOLOE-R rotated-box detection: train a spine dataset to 95 mAP in 10 minutes | Rotated-box detection with PP-YOLOE-R | download link

    💎PP-YOLOE-SOD: High-Accuracy Small Object Detection Model

    Introduction (click to expand)

    PP-YOLOE-SOD (Small Object Detection) is PaddleDetection's solution for small-object detection, reaching SOTA single-model accuracy of 38.5 mAP on the VisDrone-DET dataset. It offers two approaches: a slicing-based pipeline (cutting and stitching images) and an original-image pipeline with model-level optimizations. It also ships an automatic dataset analysis script: given only a dataset annotation file, it produces statistics that help judge whether the dataset is a small-object dataset and whether a slicing strategy is needed, along with suggested network hyperparameter values.

    Quick link: PP-YOLOE-SOD small object detection models
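The dataset-analysis idea can be sketched as follows: given COCO-style annotations, estimate the fraction of boxes below the COCO small-object threshold (32x32 px) to judge whether a slicing strategy may help. This is hypothetical, simplified logic for illustration, not the actual PaddleDetection script:

```python
# Estimate how "small-object heavy" a COCO-style annotation list is.
def small_object_ratio(annotations, thresh=32 * 32):
    """Fraction of boxes whose w*h area falls below thresh (COCO 'small')."""
    areas = [w * h for (_, _, w, h) in (a["bbox"] for a in annotations)]
    if not areas:
        return 0.0
    return sum(a < thresh for a in areas) / len(areas)

anns = [{"bbox": [0, 0, 10, 10]},    # 100 px^2  -> small
        {"bbox": [0, 0, 20, 30]},    # 600 px^2  -> small
        {"bbox": [0, 0, 100, 50]}]   # 5000 px^2 -> not small
ratio = small_object_ratio(anns)
```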

    Pretrained models (click to expand) - VisDrone dataset pretrained models
    Model | COCOAPI mAP val 0.5:0.95 | COCOAPI mAP val 0.5 | COCOAPI mAP test_dev 0.5:0.95 | COCOAPI mAP test_dev 0.5 | MatlabAPI mAP test_dev 0.5:0.95 | MatlabAPI mAP test_dev 0.5 | Download | Config
    PP-YOLOE+_SOD-l | 31.9 | 52.1 | 25.6 | 43.5 | 30.25 | 51.18 | download link | config file

    Quick link: all pretrained models

    Industrial application code examples (click to expand)
    Industry | Category | Highlights | Documentation | Download
    General | Small object detection | End-to-end walkthrough of drone aerial image detection with PP-YOLOE-SOD | Drone aerial image detection with PP-YOLOE-SOD | download link

    💫PP-PicoDet: Ultra-Lightweight Real-Time Object Detection Model

    Introduction (click to expand)

    PP-PicoDet is a new family of lightweight models with outstanding performance on mobile devices, setting a new SOTA among lightweight models.

    Quick link: PP-PicoDet documentation

    Quick link: arXiv paper

    Pretrained models (click to expand)
    Model | COCO mAP | Snapdragon 865 4-thread speed (FPS) | Recommended hardware | Config | Download
    PicoDet-L | 36.1 | 39.7 | Mobile, embedded | link | download

    Quick link: all pretrained models

    Industrial application code examples (click to expand)
    Industry | Category | Highlights | Documentation | Download
    Smart city | Road litter detection | Cameras mounted on municipal sanitation vehicles detect and analyze litter on the road, enabling monitoring, logging, and notification of sanitation staff, greatly improving efficiency | Road litter detection with PP-PicoDet | download link

    📡PP-Tracking: Real-Time Multi-Object Tracking System

    Introduction (click to expand)

    PP-Tracking, provided by the PaddleDetection team, is the industry's first open-source real-time multi-object tracking system built on the PaddlePaddle deep learning framework, with three key strengths: rich models, broad applications, and efficient deployment. PP-Tracking supports two modes, single-camera tracking (MOT) and multi-camera multi-target tracking (MTMCT). Targeting real business pain points, it provides pedestrian tracking, vehicle tracking, multi-class tracking, small-object tracking, flow counting, and cross-camera tracking. Deployment supports API calls and a GUI, in Python and C++, on platforms including Linux and NVIDIA Jetson.

    Quick link: PP-Tracking documentation
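At the heart of SDE trackers such as ByteTrack and OC-SORT is associating per-frame detections with existing tracks. The sketch below shows a deliberately simplified greedy-IoU version of that association step for illustration only; real trackers use Hungarian matching plus motion models such as a Kalman filter:

```python
# Greedy IoU-based detection-to-track association (illustrative sketch).
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedily match each track to its best unclaimed detection by IoU."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, thresh
        for i, dbox in enumerate(detections):
            if i not in used and iou(tbox, dbox) > best_iou:
                best, best_iou = i, iou(tbox, dbox)
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

tracks = {1: (0, 0, 10, 10), 2: (50, 50, 60, 60)}
dets = [(51, 50, 61, 60), (1, 0, 11, 10)]
matches = associate(tracks, dets)  # track 1 -> det 1, track 2 -> det 0
```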

    Pretrained models (click to expand)
    Model | Description | Accuracy | Speed (FPS) | Recommended hardware | Config | Download
    ByteTrack | SDE multi-object tracker, detection model only | MOT-17 test: 78.4 | - | Server, mobile, embedded | link | download
    FairMOT | JDE multi-object tracker, multi-task joint learning | MOT-16 test: 75.0 | - | Server, mobile, embedded | link | download
    OC-SORT | SDE multi-object tracker, detection model only | MOT-17 half val: 75.5 | - | Server, mobile, embedded | link | download
    Industrial application code examples (click to expand)
    Industry | Category | Highlights | Documentation | Download
    General | Multi-object tracking | Quick start for single-camera and multi-camera tracking | Hands-on multi-object tracking with PP-Tracking | download link

    ⛷️PP-TinyPose: Human Keypoint Detection

    Introduction (click to expand)

    The keypoint detection part of PaddleDetection keeps up with state-of-the-art algorithms, covering both Top-Down and Bottom-Up approaches to meet different user needs. PaddleDetection also provides PP-TinyPose, an in-house real-time keypoint detection model optimized for mobile devices.

    Quick link: PP-TinyPose documentation
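The Top-Down scheme mentioned above runs a person detector first and then a pose model on each detected crop. The sketch below shows only that control flow; the detector and pose model are stand-in stubs, not real PaddleDetection APIs:

```python
# Control flow of a Top-Down keypoint pipeline (stub models for illustration).
def detect_people(image):
    """Stub detector: returns person boxes as (x1, y1, x2, y2)."""
    return [(10, 10, 50, 100), (60, 20, 90, 110)]

def estimate_keypoints(image, box):
    """Stub pose model: returns one dummy keypoint at the box center."""
    x1, y1, x2, y2 = box
    return [((x1 + x2) / 2, (y1 + y2) / 2)]

def top_down_pose(image):
    """Run the detector once, then the pose model once per detected person."""
    return [estimate_keypoints(image, box) for box in detect_people(image)]

poses = top_down_pose(image=None)  # one keypoint list per detected person
```

Bottom-Up methods invert this order: they predict all keypoints in the image first and then group them into people, which avoids the per-person pose pass.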

    Pretrained models (click to expand)
    Model | Description | COCO AP | Speed (FPS) | Recommended hardware | Config | Download
    PP-TinyPose | Lightweight keypoint model, 256x192 input | 68.8 | Snapdragon 865, 4 threads: 158.7 FPS | Mobile, embedded | link | download

    Quick link: all pretrained models

    Industrial application code examples (click to expand)
    Industry | Category | Highlights | Documentation | Download
    Sports | Fitness | A reusable end-to-end solution covering model selection, data preparation, model training and tuning, post-processing logic, and deployment, effectively recognizing complex fitness movements to build an AI virtual fitness coach | Smart fitness action recognition with enhanced PP-TinyPose | download link

    🏃🏻PP-Human: Real-Time Pedestrian Analysis Tool

    Introduction (click to expand)

    Diving into high-frequency scenarios of core industries, PaddleDetection provides an out-of-the-box pedestrian analysis tool. It supports image, single-camera video, multi-camera video, and online video stream inputs, and is widely used in smart transportation, smart city, and industrial inspection. It supports server-side deployment with TensorRT acceleration and achieves real-time performance on a T4 server. PP-Human offers four industrial-grade capabilities: recognition of five abnormal behaviors, analysis of 26 human attributes, real-time people counting, and cross-camera (ReID) tracking.

    Quick link: PP-Human pedestrian analysis tool guide

    Pretrained models (click to expand)
    Task | T4 TensorRT FP16 speed (FPS) | Recommended hardware | Model download | Model size
    Pedestrian detection (high accuracy) | 39.8 | Server | Object detection | 182M
    Pedestrian tracking (high accuracy) | 31.4 | Server | Multi-object tracking | 182M
    Attribute recognition (high accuracy) | 117.6 (single person) | Server | Object detection + attribute recognition | detection: 182M, attribute recognition: 86M
    Fall detection | 100 (single person) | Server | Multi-object tracking + keypoint detection + keypoint-based action recognition | MOT: 182M, keypoints: 101M, action recognition: 21.8M
    Intrusion detection | 31.4 | Server | Multi-object tracking | 182M
    Fight detection | 50.8 | Server | Video classification | 90M
    Smoking detection | 340.1 | Server | Object detection + human-ID-based object detection | detection: 182M, ID-based detection: 27M
    Phone-call detection | 166.7 | Server | Object detection + human-ID-based image classification | detection: 182M, ID-based classification: 45M

    Quick link: full pretrained models

    Industrial application code examples (click to expand)
    Industry | Category | Highlights | Documentation | Download
    Smart security | Fall detection | PP-Human's fall detection uses keypoints plus a spatio-temporal graph convolutional network, with no constraints on fall pose or background | Fall detection with PP-Human v2 | download link
    Smart security | Fight detection | Trains a fight-recognition model with the PaddleVideo toolkit, then integrates it into PaddleDetection's PP-Human to support pedestrian behavior analysis | Fight detection with PP-Human | download link
    Smart security | Visitor analysis | An end-to-end visitor analysis workflow built on PP-Human, covering two common scenarios: 1. visitor attribute recognition (single-camera and cross-camera visualization); 2. visitor behavior recognition (fall detection) | Visitor analysis case tutorial with PP-Human | download link

    🏎️PP-Vehicle: Real-Time Vehicle Analysis Tool

    Introduction (click to expand)

    Diving into high-frequency scenarios of core industries, PaddleDetection provides an out-of-the-box vehicle analysis tool. It supports image, single-camera video, multi-camera video, and online video stream inputs, and is widely used in smart transportation, smart city, and industrial inspection. It supports server-side deployment with TensorRT acceleration and achieves real-time performance on a T4 server. PP-Vehicle covers four core traffic-scenario capabilities: license plate recognition, attribute recognition, traffic flow counting, and violation detection.

    Quick link: PP-Vehicle vehicle analysis tool guide

    Pretrained models (click to expand)
    Task | T4 TensorRT FP16 speed (FPS) | Recommended hardware | Model scheme | Model size
    Vehicle detection (high accuracy) | 38.9 | Server | Object detection | 182M
    Vehicle tracking (high accuracy) | 25 | Server | Multi-object tracking | 182M
    License plate recognition | 213.7 | Server | Plate detection + plate recognition | plate detection: 3.9M, plate character recognition: 12M
    Vehicle attributes | 136.8 | Server | Attribute recognition | 7.2M

    Quick link: full pretrained models

    Industrial application code examples (click to expand)
    Industry | Category | Highlights | Documentation | Download
    Smart transportation | Traffic monitoring vehicle analysis | Demonstrates the three most in-demand smart-transportation scenarios with PP-Vehicle: traffic flow monitoring, illegal parking detection, and vehicle structured analysis (plate, model, color) | Traffic monitoring analysis system with PP-Vehicle | download link

    💡Industrial Practice Examples

    Industrial practice examples are end-to-end development examples that PaddleDetection provides for high-frequency object detection scenarios, helping developers cover the full workflow of data annotation, model training, model tuning, and inference deployment. For each example we provide project code and documentation on AI Studio so users can run it themselves.

    Quick link: full list of industrial practice examples

    🏆Enterprise Application Cases

    Enterprise application cases present the solution approaches of companies deploying PaddleDetection in real production environments. Compared with the industrial practice examples, they emphasize overall solution design, and can serve as references for developers' own project designs.

    Quick link: full list of enterprise application cases

    📝License

    This project is released under the Apache 2.0 license.

    📌Citation

    @misc{ppdet2019,
      title={PaddleDetection, Object detection and instance segmentation toolkit based on PaddlePaddle.},
      author={PaddlePaddle Authors},
      howpublished = {\url{https://github.com/PaddlePaddle/PaddleDetection}},
      year={2019}
    }
    

    paddledetection's People

    Contributors

    baiyfbupt, channingss, co63oc, fdinsky, flyingqianmm, ghostxsl, heavengate, jerrywgz, ldoublev, littletomatodonkey, lokezhou, lyuwenyu, mmglove, nemonameless, noplz, pkhk-1, qingqing01, slf12, sunahong1993, thinkthinking, wanghaoshuang, wangxinxin08, willthefrog, wjm202, xyz-916, yanhuidua, yghstill, yixinkristy, zhiboniu, zoooo0820


    paddledetection's Issues

    Error when running the detection model convolution channel pruning example

    When I run

    python compress.py \
        -s yolov3_mobilenet_v1_slim.yaml \
        -c ../../configs/yolov3_mobilenet_v1.yml \
        -o max_iters=20 \
        num_classes=4 \
        YoloTrainFeed.batch_size=32 \
        pretrain_weights=/home/aistudio/PaddleDetection/output/yolov3_mobilenet_v1/best_model \
        -d "/home/aistudio/work/coco"

    it keeps failing with the error below. I traced the problem to eval_utils.py, which converts a gt_box value into im_id. The error output is:

    loading annotations into memory...
    Done (t=0.01s)
    creating index...
    index created!
    [ 16. 115. 218. 374.]
    Traceback (most recent call last):
    File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/contrib/slim/core/compressor.py", line 593, in run
    self._eval(context)
    File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/contrib/slim/core/compressor.py", line 542, in _eval
    func(self.eval_graph.program, self.scope))
    File "compress.py", line 207, in eval_func
    FLAGS.output_eval)
    File "../../ppdet/utils/eval_utils.py", line 205, in eval_results
    is_bbox_normalized=is_bbox_normalized)
    File "../../ppdet/utils/coco_eval.py", line 86, in bbox_eval
    results, clsid2catid, is_bbox_normalized=is_bbox_normalized)
    File "../../ppdet/utils/coco_eval.py", line 215, in bbox2out
    im_id = int(im_ids[i][0])
    TypeError: only size-1 arrays can be converted to Python scalars
    2019-11-20 20:32:00,491-ERROR: None
    2019-11-20 20:32:00,491-ERROR: None
    2019-11-20 20:32:01,633-INFO: epoch:1; batch_id:0; odict_keys(['loss', 'lr']) = [117.678, 0.0]

    Further investigation showed that compress.py passes the gt_box value in place of im_id, causing the error.
    Adding outs.append(data['im_id']) at line 79 fixed the problem.
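The reporter's one-line fix can be sketched as follows. The surrounding function here is hypothetical; in the real compress.py the loop builds the output list that is fed to eval_results. The point is to append im_id alongside the other outputs so coco_eval.bbox2out receives real image ids instead of gt_box arrays:

```python
# Simplified sketch of the reported fix (surrounding structure is hypothetical).
def collect_eval_outputs(data):
    outs = [data['bbox'], data['gt_box']]
    outs.append(data['im_id'])  # the added line: pass im_id through explicitly
    return outs

sample = {'bbox': [[0, 0, 5, 5]], 'gt_box': [[1, 1, 4, 4]], 'im_id': [[7]]}
outs = collect_eval_outputs(sample)
```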


    The same program on different servers: one machine trains stably for days, the other crashes within 3 hours

    As in the title: the same program runs on two different servers; one machine trains stably for days, while the other fails within 3 hours of training with:
    PaddleCheckError: Expected index.dims()[0] > 0, but received index.dims()[0]:0 <= 0:0. The index of gather_op should not be empty when the index's rank is 1. at [/paddle/paddle/fluid/operators/gather.cu.h:82]

    2019-12-08 09:36:34,183-INFO: 1076 samples in file dataset/coco/annotations/instances_val2017.json
    2019-12-08 09:36:34,186-INFO: places would be ommited when DataLoader is not iterable
    W1208 09:36:36.099746   317 device_context.cc:235] Please NOTE: device: 0, CUDA Capability: 60, Driver API Version: 10.0, Runtime API Version: 10.0
    W1208 09:36:36.412878   317 device_context.cc:243] device: 0, cuDNN Version: 7.6.
    2019-12-08 09:36:40,108-INFO: Loading checkpoint from output/faster_rcnn_dcn_x101_vd_64x4d_fpn_1x/27000...
    2019-12-08 09:36:46,786-INFO: 20764 samples in file dataset/coco/annotations/instances_train2017.json
    2019-12-08 09:36:46,893-INFO: places would be ommited when DataLoader is not iterable
    I1208 09:36:47.311914   317 parallel_executor.cc:421] The number of CUDAPlace, which is used in ParallelExecutor, is 2. And the Program will be copied 2 copies
    I1208 09:36:49.598440   317 graph_pattern_detector.cc:96] ---  detected 40 subgraphs
    I1208 09:36:49.672873   317 graph_pattern_detector.cc:96] ---  detected 37 subgraphs
    W1208 09:36:49.861197   317 fuse_all_reduce_op_pass.cc:72] Find all_reduce operators: 183. To make the speed faster, some all_reduce ops are fused during training, after fusion, the number of all_reduce ops is 153.
    I1208 09:36:49.873278   317 build_strategy.cc:363] SeqOnlyAllReduceOps:0, num_trainers:1
    I1208 09:36:50.447363   317 parallel_executor.cc:285] Inplace strategy is enabled, when build_strategy.enable_inplace = True
    I1208 09:36:50.538653   317 parallel_executor.cc:368] Garbage collection strategy is enabled, when FLAGS_eager_delete_tensor_gb = 0
    2019-12-08 09:37:11,792-INFO: iter: 27020, lr: 0.002500, 'loss_cls': '0.107255', 'loss_bbox': '0.027132', 'loss_rpn_cls': '0.029164', 'loss_rpn_bbox': '0.004501', 'loss': '0.162709', time: 1.196, eta: 9 days, 14:16:56
    2019-12-08 09:37:33,380-INFO: iter: 27040, lr: 0.002500, 'loss_cls': '0.160085', 'loss_bbox': '0.037842', 'loss_rpn_cls': '0.029905', 'loss_rpn_bbox': '0.005318', 'loss': '0.248189', time: 1.069, eta: 8 days, 13:42:18
    2019-12-08 09:37:56,325-INFO: iter: 27060, lr: 0.002500, 'loss_cls': '0.111756', 'loss_bbox': '0.042657', 'loss_rpn_cls': '0.022942', 'loss_rpn_bbox': '0.003554', 'loss': '0.176668', time: 1.153, eta: 9 days, 5:55:00
    2019-12-08 09:38:18,266-INFO: iter: 27080, lr: 0.002500, 'loss_cls': '0.117590', 'loss_bbox': '0.037847', 'loss_rpn_cls': '0.028765', 'loss_rpn_bbox': '0.007621', 'loss': '0.183682', time: 1.103, eta: 8 days, 20:15:40
    2019-12-08 09:38:38,424-INFO: iter: 27100, lr: 0.002500, 'loss_cls': '0.109035', 'loss_bbox': '0.036450', 'loss_rpn_cls': '0.020142', 'loss_rpn_bbox': '0.002935', 'loss': '0.167680', time: 1.001, eta: 8 days, 0:35:44
    2019-12-08 09:39:01,073-INFO: iter: 27120, lr: 0.002500, 'loss_cls': '0.112554', 'loss_bbox': '0.030803', 'loss_rpn_cls': '0.027392', 'loss_rpn_bbox': '0.007306', 'loss': '0.210294', time: 1.125, eta: 9 days, 0:33:28
    2019-12-08 09:39:22,075-INFO: iter: 27140, lr: 0.002500, 'loss_cls': '0.119142', 'loss_bbox': '0.032270', 'loss_rpn_cls': '0.024191', 'loss_rpn_bbox': '0.004826', 'loss': '0.176259', time: 1.051, eta: 8 days, 10:21:23
    2019-12-08 09:39:43,539-INFO: iter: 27160, lr: 0.002500, 'loss_cls': '0.146193', 'loss_bbox': '0.047863', 'loss_rpn_cls': '0.028642', 'loss_rpn_bbox': '0.007621', 'loss': '0.238028', time: 1.064, eta: 8 days, 12:47:36
    2019-12-08 09:40:05,350-INFO: iter: 27180, lr: 0.002500, 'loss_cls': '0.095980', 'loss_bbox': '0.035866', 'loss_rpn_cls': '0.022334', 'loss_rpn_bbox': '0.004785', 'loss': '0.186832', time: 1.113, eta: 8 days, 22:10:17
    2019-12-08 09:40:27,442-INFO: iter: 27200, lr: 0.002500, 'loss_cls': '0.103658', 'loss_bbox': '0.030443', 'loss_rpn_cls': '0.023747', 'loss_rpn_bbox': '0.005142', 'loss': '0.174824', time: 1.100, eta: 8 days, 19:36:49
    2019-12-08 09:40:48,639-INFO: iter: 27220, lr: 0.002500, 'loss_cls': '0.123171', 'loss_bbox': '0.041430', 'loss_rpn_cls': '0.026343', 'loss_rpn_bbox': '0.007074', 'loss': '0.197138', time: 1.065, eta: 8 days, 12:56:42
    2019-12-08 09:41:09,553-INFO: iter: 27240, lr: 0.002500, 'loss_cls': '0.126446', 'loss_bbox': '0.030707', 'loss_rpn_cls': '0.021942', 'loss_rpn_bbox': '0.004218', 'loss': '0.185318', time: 1.044, eta: 8 days, 8:53:24
    2019-12-08 09:41:29,916-INFO: iter: 27260, lr: 0.002500, 'loss_cls': '0.109329', 'loss_bbox': '0.035236', 'loss_rpn_cls': '0.029849', 'loss_rpn_bbox': '0.008709', 'loss': '0.188167', time: 1.012, eta: 8 days, 2:44:33
    2019-12-08 09:41:51,827-INFO: iter: 27280, lr: 0.002500, 'loss_cls': '0.117947', 'loss_bbox': '0.035055', 'loss_rpn_cls': '0.022669', 'loss_rpn_bbox': '0.004247', 'loss': '0.183069', time: 1.100, eta: 8 days, 19:38:23
    2019-12-08 09:42:13,266-INFO: iter: 27300, lr: 0.002500, 'loss_cls': '0.104928', 'loss_bbox': '0.032062', 'loss_rpn_cls': '0.031705', 'loss_rpn_bbox': '0.006268', 'loss': '0.186234', time: 1.072, eta: 8 days, 14:19:22
    2019-12-08 09:42:35,486-INFO: iter: 27320, lr: 0.002500, 'loss_cls': '0.110162', 'loss_bbox': '0.032131', 'loss_rpn_cls': '0.037423', 'loss_rpn_bbox': '0.009977', 'loss': '0.179894', time: 1.113, eta: 8 days, 22:07:25
    2019-12-08 09:42:57,989-INFO: iter: 27340, lr: 0.002500, 'loss_cls': '0.098853', 'loss_bbox': '0.036495', 'loss_rpn_cls': '0.022839', 'loss_rpn_bbox': '0.007259', 'loss': '0.177353', time: 1.103, eta: 8 days, 20:13:59
    2019-12-08 09:43:18,957-INFO: iter: 27360, lr: 0.002500, 'loss_cls': '0.136675', 'loss_bbox': '0.037647', 'loss_rpn_cls': '0.024854', 'loss_rpn_bbox': '0.005528', 'loss': '0.243505', time: 1.051, eta: 8 days, 10:15:43
    2019-12-08 09:43:40,055-INFO: iter: 27380, lr: 0.002500, 'loss_cls': '0.107223', 'loss_bbox': '0.036117', 'loss_rpn_cls': '0.025308', 'loss_rpn_bbox': '0.005180', 'loss': '0.191671', time: 1.074, eta: 8 days, 14:33:44
    2019-12-08 09:44:01,801-INFO: iter: 27400, lr: 0.002500, 'loss_cls': '0.114322', 'loss_bbox': '0.032498', 'loss_rpn_cls': '0.021028', 'loss_rpn_bbox': '0.003865', 'loss': '0.180020', time: 1.088, eta: 8 days, 17:13:38
    2019-12-08 09:44:23,235-INFO: iter: 27420, lr: 0.002500, 'loss_cls': '0.101029', 'loss_bbox': '0.029264', 'loss_rpn_cls': '0.028364', 'loss_rpn_bbox': '0.007907', 'loss': '0.181565', time: 1.067, eta: 8 days, 13:19:37
    2019-12-08 09:44:44,491-INFO: iter: 27440, lr: 0.002500, 'loss_cls': '0.097345', 'loss_bbox': '0.030726', 'loss_rpn_cls': '0.029091', 'loss_rpn_bbox': '0.005249', 'loss': '0.170476', time: 1.069, eta: 8 days, 13:35:30
    2019-12-08 09:45:06,253-INFO: iter: 27460, lr: 0.002500, 'loss_cls': '0.107105', 'loss_bbox': '0.038378', 'loss_rpn_cls': '0.025624', 'loss_rpn_bbox': '0.005551', 'loss': '0.181381', time: 1.087, eta: 8 days, 17:10:51
    2019-12-08 09:45:28,042-INFO: iter: 27480, lr: 0.002500, 'loss_cls': '0.108549', 'loss_bbox': '0.029579', 'loss_rpn_cls': '0.024967', 'loss_rpn_bbox': '0.006547', 'loss': '0.215194', time: 1.064, eta: 8 days, 12:39:42
    2019-12-08 09:45:49,157-INFO: iter: 27500, lr: 0.002500, 'loss_cls': '0.112655', 'loss_bbox': '0.038301', 'loss_rpn_cls': '0.025386', 'loss_rpn_bbox': '0.006977', 'loss': '0.210186', time: 1.081, eta: 8 days, 15:58:03
    2019-12-08 09:46:10,362-INFO: iter: 27520, lr: 0.002500, 'loss_cls': '0.115814', 'loss_bbox': '0.031053', 'loss_rpn_cls': '0.022635', 'loss_rpn_bbox': '0.004697', 'loss': '0.195093', time: 1.056, eta: 8 days, 11:02:29
    2019-12-08 09:46:32,177-INFO: iter: 27540, lr: 0.002500, 'loss_cls': '0.116692', 'loss_bbox': '0.036010', 'loss_rpn_cls': '0.027288', 'loss_rpn_bbox': '0.005695', 'loss': '0.193081', time: 1.066, eta: 8 days, 13:02:31
    2019-12-08 09:46:54,590-INFO: iter: 27560, lr: 0.002500, 'loss_cls': '0.128424', 'loss_bbox': '0.034904', 'loss_rpn_cls': '0.025524', 'loss_rpn_bbox': '0.004873', 'loss': '0.208800', time: 1.151, eta: 9 days, 5:23:29
    2019-12-08 09:47:16,635-INFO: iter: 27580, lr: 0.002500, 'loss_cls': '0.107845', 'loss_bbox': '0.033880', 'loss_rpn_cls': '0.027985', 'loss_rpn_bbox': '0.005409', 'loss': '0.170375', time: 1.102, eta: 8 days, 19:58:25
    2019-12-08 09:47:37,165-INFO: iter: 27600, lr: 0.002500, 'loss_cls': '0.096917', 'loss_bbox': '0.027007', 'loss_rpn_cls': '0.023711', 'loss_rpn_bbox': '0.005217', 'loss': '0.149012', time: 1.022, eta: 8 days, 4:34:20
    2019-12-08 09:47:59,157-INFO: iter: 27620, lr: 0.002500, 'loss_cls': '0.148022', 'loss_bbox': '0.041588', 'loss_rpn_cls': '0.034595', 'loss_rpn_bbox': '0.008284', 'loss': '0.243836', time: 1.097, eta: 8 days, 18:56:47
    2019-12-08 09:48:21,268-INFO: iter: 27640, lr: 0.002500, 'loss_cls': '0.102161', 'loss_bbox': '0.033482', 'loss_rpn_cls': '0.022656', 'loss_rpn_bbox': '0.003083', 'loss': '0.176066', time: 1.113, eta: 8 days, 22:04:28
    2019-12-08 09:48:42,021-INFO: iter: 27660, lr: 0.002500, 'loss_cls': '0.116623', 'loss_bbox': '0.031174', 'loss_rpn_cls': '0.021008', 'loss_rpn_bbox': '0.006698', 'loss': '0.181974', time: 1.032, eta: 8 days, 6:25:45
    
    2019-12-08 10:03:30,895-INFO: iter: 28260, lr: 0.002500, 'loss_cls': '0.120618', 'loss_bbox': '0.036356', 'loss_rpn_cls': '0.028862', 'loss_rpn_bbox': '0.006762', 'loss': '0.196524', time: 1.062, eta: 8 days, 12:08:16
    2019-12-08 10:03:54,389-INFO: iter: 28280, lr: 0.002500, 'loss_cls': '0.116200', 'loss_bbox': '0.037455', 'loss_rpn_cls': '0.026497', 'loss_rpn_bbox': '0.006574', 'loss': '0.179571', time: 1.176, eta: 9 days, 9:52:23
    2019-12-08 10:04:16,664-INFO: iter: 28300, lr: 0.002500, 'loss_cls': '0.090389', 'loss_bbox': '0.022946', 'loss_rpn_cls': '0.028943', 'loss_rpn_bbox': '0.005928', 'loss': '0.150511', time: 1.106, eta: 8 days, 20:34:48
    2019-12-08 10:04:39,204-INFO: iter: 28320, lr: 0.002500, 'loss_cls': '0.099024', 'loss_bbox': '0.030282', 'loss_rpn_cls': '0.027065', 'loss_rpn_bbox': '0.006674', 'loss': '0.180380', time: 1.125, eta: 9 days, 0:10:30
    2019-12-08 10:05:00,137-INFO: iter: 28340, lr: 0.002500, 'loss_cls': '0.104252', 'loss_bbox': '0.031640', 'loss_rpn_cls': '0.023287', 'loss_rpn_bbox': '0.004962', 'loss': '0.179577', time: 1.055, eta: 8 days, 10:42:35
    2019-12-08 10:05:20,870-INFO: iter: 28360, lr: 0.002500, 'loss_cls': '0.106998', 'loss_bbox': '0.036490', 'loss_rpn_cls': '0.027223', 'loss_rpn_bbox': '0.006033', 'loss': '0.196109', time: 1.037, eta: 8 days, 7:08:30
    
    2019-12-08 11:26:26,361-INFO: iter: 32040, lr: 0.002500, 'loss_cls': '0.093410', 'loss_bbox': '0.025801', 'loss_rpn_cls': '0.020566', 'loss_rpn_bbox': '0.004328', 'loss': '0.155523', time: 1.130, eta: 8 days, 23:52:36
    2019-12-08 11:26:48,282-INFO: iter: 32060, lr: 0.002500, 'loss_cls': '0.116218', 'loss_bbox': '0.030774', 'loss_rpn_cls': '0.028761', 'loss_rpn_bbox': '0.008732', 'loss': '0.198504', time: 1.111, eta: 8 days, 20:15:33
    2019-12-08 11:27:10,345-INFO: iter: 32080, lr: 0.002500, 'loss_cls': '0.130258', 'loss_bbox': '0.032691', 'loss_rpn_cls': '0.036417', 'loss_rpn_bbox': '0.007845', 'loss': '0.220228', time: 1.090, eta: 8 days, 16:15:38
    2019-12-08 11:27:32,476-INFO: iter: 32100, lr: 0.002500, 'loss_cls': '0.108925', 'loss_bbox': '0.037187', 'loss_rpn_cls': '0.021106', 'loss_rpn_bbox': '0.007838', 'loss': '0.178534', time: 1.119, eta: 8 days, 21:54:37
    2019-12-08 11:27:54,745-INFO: iter: 32120, lr: 0.002500, 'loss_cls': '0.112486', 'loss_bbox': '0.036119', 'loss_rpn_cls': '0.026041', 'loss_rpn_bbox': '0.005284', 'loss': '0.181879', time: 1.102, eta: 8 days, 18:36:12
    2019-12-08 11:28:17,569-INFO: iter: 32140, lr: 0.002500, 'loss_cls': '0.105735', 'loss_bbox': '0.034030', 'loss_rpn_cls': '0.024859', 'loss_rpn_bbox': '0.003535', 'loss': '0.172869', time: 1.147, eta: 9 days, 3:05:29
    2019-12-08 11:28:38,749-INFO: iter: 32160, lr: 0.002500, 'loss_cls': '0.097556', 'loss_bbox': '0.032315', 'loss_rpn_cls': '0.027183', 'loss_rpn_bbox': '0.004453', 'loss': '0.168935', time: 1.063, eta: 8 days, 11:03:53
    2019-12-08 11:29:01,167-INFO: iter: 32180, lr: 0.002500, 'loss_cls': '0.106844', 'loss_bbox': '0.027867', 'loss_rpn_cls': '0.027673', 'loss_rpn_bbox': '0.007006', 'loss': '0.180671', time: 1.103, eta: 8 days, 18:40:46
    2019-12-08 11:29:22,644-INFO: iter: 32200, lr: 0.002500, 'loss_cls': '0.111045', 'loss_bbox': '0.039628', 'loss_rpn_cls': '0.023468', 'loss_rpn_bbox': '0.004395', 'loss': '0.191487', time: 1.086, eta: 8 days, 15:34:46
    2019-12-08 11:29:44,434-INFO: iter: 32220, lr: 0.002500, 'loss_cls': '0.124690', 'loss_bbox': '0.027931', 'loss_rpn_cls': '0.019735', 'loss_rpn_bbox': '0.005108', 'loss': '0.175699', time: 1.097, eta: 8 days, 17:33:22
    2019-12-08 11:30:05,626-INFO: iter: 32240, lr: 0.002500, 'loss_cls': '0.095456', 'loss_bbox': '0.035033', 'loss_rpn_cls': '0.022073', 'loss_rpn_bbox': '0.003227', 'loss': '0.156538', time: 1.059, eta: 8 days, 10:13:40
    2019-12-08 11:30:26,897-INFO: iter: 32260, lr: 0.002500, 'loss_cls': '0.119746', 'loss_bbox': '0.046080', 'loss_rpn_cls': '0.019381', 'loss_rpn_bbox': '0.004442', 'loss': '0.182005', time: 1.066, eta: 8 days, 11:36:56
    2019-12-08 11:30:48,440-INFO: iter: 32280, lr: 0.002500, 'loss_cls': '0.115690', 'loss_bbox': '0.028565', 'loss_rpn_cls': '0.028244', 'loss_rpn_bbox': '0.005486', 'loss': '0.200766', time: 1.057, eta: 8 days, 9:58:26
    2019-12-08 11:31:09,633-INFO: iter: 32300, lr: 0.002500, 'loss_cls': '0.107902', 'loss_bbox': '0.033771', 'loss_rpn_cls': '0.021195', 'loss_rpn_bbox': '0.004134', 'loss': '0.175219', time: 1.079, eta: 8 days, 14:09:52
    2019-12-08 11:31:30,260-INFO: iter: 32320, lr: 0.002500, 'loss_cls': '0.098927', 'loss_bbox': '0.037162', 'loss_rpn_cls': '0.021459', 'loss_rpn_bbox': '0.005403', 'loss': '0.158387', time: 1.029, eta: 8 days, 4:37:34
    2019-12-08 11:31:53,241-INFO: iter: 32340, lr: 0.002500, 'loss_cls': '0.107896', 'loss_bbox': '0.033144', 'loss_rpn_cls': '0.033767', 'loss_rpn_bbox': '0.004966', 'loss': '0.197591', time: 1.142, eta: 9 days, 2:11:20
    
    2019-12-08 11:32:13,843-INFO: iter: 32360, lr: 0.002500, 'loss_cls': '0.107879', 'loss_bbox': '0.041093', 'loss_rpn_cls': '0.033133', 'loss_rpn_bbox': '0.011063', 'loss': '0.211041', time: 1.035, eta: 8 days, 5:37:19
    2019-12-08 11:32:34,803-INFO: iter: 32380, lr: 0.002500, 'loss_cls': '0.110426', 'loss_bbox': '0.039289', 'loss_rpn_cls': '0.027911', 'loss_rpn_bbox': '0.005713', 'loss': '0.184945', time: 1.052, eta: 8 days, 8:53:31
    2019-12-08 11:32:55,430-INFO: iter: 32400, lr: 0.002500, 'loss_cls': '0.128478', 'loss_bbox': '0.041886', 'loss_rpn_cls': '0.026493', 'loss_rpn_bbox': '0.006556', 'loss': '0.188470', time: 1.031, eta: 8 days, 4:53:08
    2019-12-08 11:33:17,293-INFO: iter: 32420, lr: 0.002500, 'loss_cls': '0.110536', 'loss_bbox': '0.032399', 'loss_rpn_cls': '0.030574', 'loss_rpn_bbox': '0.007749', 'loss': '0.206734', time: 1.086, eta: 8 days, 15:30:35
    2019-12-08 11:33:38,599-INFO: iter: 32440, lr: 0.002500, 'loss_cls': '0.095937', 'loss_bbox': '0.035684', 'loss_rpn_cls': '0.020276', 'loss_rpn_bbox': '0.007348', 'loss': '0.159013', time: 1.070, eta: 8 days, 12:17:27
    2019-12-08 11:34:00,011-INFO: iter: 32460, lr: 0.002500, 'loss_cls': '0.120759', 'loss_bbox': '0.040102', 'loss_rpn_cls': '0.025354', 'loss_rpn_bbox': '0.005780', 'loss': '0.201859', time: 1.072, eta: 8 days, 12:43:34
    2019-12-08 11:34:20,885-INFO: iter: 32480, lr: 0.002500, 'loss_cls': '0.091575', 'loss_bbox': '0.030783', 'loss_rpn_cls': '0.027161', 'loss_rpn_bbox': '0.004689', 'loss': '0.153618', time: 1.039, eta: 8 days, 6:27:44
    2019-12-08 11:34:41,995-INFO: iter: 32500, lr: 0.002500, 'loss_cls': '0.131100', 'loss_bbox': '0.042119', 'loss_rpn_cls': '0.024955', 'loss_rpn_bbox': '0.005969', 'loss': '0.209270', time: 1.054, eta: 8 days, 9:20:55
    2019-12-08 11:35:03,078-INFO: iter: 32520, lr: 0.002500, 'loss_cls': '0.108701', 'loss_bbox': '0.025954', 'loss_rpn_cls': '0.019844', 'loss_rpn_bbox': '0.004977', 'loss': '0.180352', time: 1.053, eta: 8 days, 9:05:40
    2019-12-08 11:35:24,807-INFO: iter: 32540, lr: 0.002500, 'loss_cls': '0.100763', 'loss_bbox': '0.031137', 'loss_rpn_cls': '0.025281', 'loss_rpn_bbox': '0.004916', 'loss': '0.175427', time: 1.068, eta: 8 days, 11:56:12
    2019-12-08 11:35:47,071-INFO: iter: 32560, lr: 0.002500, 'loss_cls': '0.103370', 'loss_bbox': '0.027407', 'loss_rpn_cls': '0.025815', 'loss_rpn_bbox': '0.007054', 'loss': '0.167267', time: 1.132, eta: 9 days, 0:06:07
    2019-12-08 11:36:09,177-INFO: iter: 32580, lr: 0.002500, 'loss_cls': '0.093084', 'loss_bbox': '0.023440', 'loss_rpn_cls': '0.023047', 'loss_rpn_bbox': '0.006587', 'loss': '0.151249', time: 1.100, eta: 8 days, 18:06:41
    2019-12-08 11:36:30,414-INFO: iter: 32600, lr: 0.002500, 'loss_cls': '0.092012', 'loss_bbox': '0.028003', 'loss_rpn_cls': '0.021767', 'loss_rpn_bbox': '0.006627', 'loss': '0.159122', time: 1.050, eta: 8 days, 8:25:08
    2019-12-08 11:36:52,008-INFO: iter: 32620, lr: 0.002500, 'loss_cls': '0.118538', 'loss_bbox': '0.029654', 'loss_rpn_cls': '0.031362', 'loss_rpn_bbox': '0.007302', 'loss': '0.238050', time: 1.106, eta: 8 days, 19:08:26
    2019-12-08 11:37:13,449-INFO: iter: 32640, lr: 0.002500, 'loss_cls': '0.072994', 'loss_bbox': '0.026526', 'loss_rpn_cls': '0.022744', 'loss_rpn_bbox': '0.004115', 'loss': '0.134926', time: 1.055, eta: 8 days, 9:23:13
    2019-12-08 11:37:34,177-INFO: iter: 32660, lr: 0.002500, 'loss_cls': '0.097421', 'loss_bbox': '0.024360', 'loss_rpn_cls': '0.023935', 'loss_rpn_bbox': '0.006053', 'loss': '0.161697', time: 1.053, eta: 8 days, 9:03:48
    2019-12-08 11:37:55,721-INFO: iter: 32680, lr: 0.002500, 'loss_cls': '0.104007', 'loss_bbox': '0.027913', 'loss_rpn_cls': '0.023525', 'loss_rpn_bbox': '0.008149', 'loss': '0.181776', time: 1.053, eta: 8 days, 9:06:54
    2019-12-08 11:38:17,213-INFO: iter: 32700, lr: 0.002500, 'loss_cls': '0.115015', 'loss_bbox': '0.037233', 'loss_rpn_cls': '0.026973', 'loss_rpn_bbox': '0.007489', 'loss': '0.198169', time: 1.099, eta: 8 days, 17:47:08
    2019-12-08 11:38:37,801-INFO: iter: 32720, lr: 0.002500, 'loss_cls': '0.107442', 'loss_bbox': '0.030250', 'loss_rpn_cls': '0.026453', 'loss_rpn_bbox': '0.006128', 'loss': '0.170408', time: 1.023, eta: 8 days, 3:16:08
    2019-12-08 11:38:58,985-INFO: iter: 32740, lr: 0.002500, 'loss_cls': '0.115347', 'loss_bbox': '0.037755', 'loss_rpn_cls': '0.022134', 'loss_rpn_bbox': '0.003413', 'loss': '0.182344', time: 1.065, eta: 8 days, 11:21:58
    2019-12-08 11:39:20,945-INFO: iter: 32760, lr: 0.002500, 'loss_cls': '0.100408', 'loss_bbox': '0.030180', 'loss_rpn_cls': '0.031180', 'loss_rpn_bbox': '0.006408', 'loss': '0.176443', time: 1.091, eta: 8 days, 16:18:49
    2019-12-08 11:39:42,146-INFO: iter: 32780, lr: 0.002500, 'loss_cls': '0.090289', 'loss_bbox': '0.019610', 'loss_rpn_cls': '0.024948', 'loss_rpn_bbox': '0.003633', 'loss': '0.158710', time: 1.066, eta: 8 days, 11:26:46
    2019-12-08 11:40:03,092-INFO: iter: 32800, lr: 0.002500, 'loss_cls': '0.096883', 'loss_bbox': '0.028336', 'loss_rpn_cls': '0.028111', 'loss_rpn_bbox': '0.007136', 'loss': '0.154402', time: 1.048, eta: 8 days, 7:57:52
    2019-12-08 11:40:25,261-INFO: iter: 32820, lr: 0.002500, 'loss_cls': '0.127951', 'loss_bbox': '0.041875', 'loss_rpn_cls': '0.027269', 'loss_rpn_bbox': '0.006420', 'loss': '0.202579', time: 1.100, eta: 8 days, 17:53:07
    2019-12-08 11:40:47,515-INFO: iter: 32840, lr: 0.002500, 'loss_cls': '0.105165', 'loss_bbox': '0.026979', 'loss_rpn_cls': '0.026087', 'loss_rpn_bbox': '0.006410', 'loss': '0.165277', time: 1.106, eta: 8 days, 19:01:51
    2019-12-08 11:41:10,274-INFO: iter: 32860, lr: 0.002500, 'loss_cls': '0.113844', 'loss_bbox': '0.035535', 'loss_rpn_cls': '0.021415', 'loss_rpn_bbox': '0.005199', 'loss': '0.179511', time: 1.148, eta: 9 days, 3:02:05
    2019-12-08 11:41:32,032-INFO: iter: 32880, lr: 0.002500, 'loss_cls': '0.107139', 'loss_bbox': '0.034098', 'loss_rpn_cls': '0.023092', 'loss_rpn_bbox': '0.005283', 'loss': '0.178474', time: 1.094, eta: 8 days, 16:51:14
    2019-12-08 11:41:53,913-INFO: iter: 32900, lr: 0.002500, 'loss_cls': '0.107068', 'loss_bbox': '0.033079', 'loss_rpn_cls': '0.025516', 'loss_rpn_bbox': '0.005883', 'loss': '0.179267', time: 1.096, eta: 8 days, 17:05:30
    2019-12-08 11:42:15,265-INFO: iter: 32920, lr: 0.002500, 'loss_cls': '0.084352', 'loss_bbox': '0.025512', 'loss_rpn_cls': '0.026543', 'loss_rpn_bbox': '0.005927', 'loss': '0.174963', time: 1.060, eta: 8 days, 10:17:00
    2019-12-08 11:42:35,988-INFO: iter: 32940, lr: 0.002500, 'loss_cls': '0.094424', 'loss_bbox': '0.026548', 'loss_rpn_cls': '0.028392', 'loss_rpn_bbox': '0.004338', 'loss': '0.166381', time: 1.043, eta: 8 days, 6:58:18
    2019-12-08 11:42:57,952-INFO: iter: 32960, lr: 0.002500, 'loss_cls': '0.077417', 'loss_bbox': '0.029750', 'loss_rpn_cls': '0.025627', 'loss_rpn_bbox': '0.006614', 'loss': '0.165685', time: 1.098, eta: 8 days, 17:34:56
    2019-12-08 11:43:19,212-INFO: iter: 32980, lr: 0.002500, 'loss_cls': '0.104759', 'loss_bbox': '0.029914', 'loss_rpn_cls': '0.024789', 'loss_rpn_bbox': '0.005739', 'loss': '0.170735', time: 1.056, eta: 8 days, 9:36:00
    2019-12-08 11:43:39,756-INFO: iter: 33000, lr: 0.002500, 'loss_cls': '0.116220', 'loss_bbox': '0.036502', 'loss_rpn_cls': '0.019375', 'loss_rpn_bbox': '0.004633', 'loss': '0.182570', time: 1.028, eta: 8 days, 4:06:34
    2019-12-08 11:43:39,761-INFO: Save model to output/faster_rcnn_dcn_x101_vd_64x4d_fpn_1x/33000.
    2019-12-08 11:43:55,919-INFO: Test iter 0
    2019-12-08 11:44:14,859-INFO: Test iter 100
    2019-12-08 11:44:33,713-INFO: Test iter 200
    2019-12-08 11:44:53,128-INFO: Test iter 300
    2019-12-08 11:45:12,317-INFO: Test iter 400
    2019-12-08 11:45:30,595-INFO: Test iter 500
    2019-12-08 11:45:48,932-INFO: Test iter 600
    2019-12-08 11:46:07,444-INFO: Test iter 700
    2019-12-08 11:46:25,915-INFO: Test iter 800
    2019-12-08 11:46:44,161-INFO: Test iter 900
    2019-12-08 11:47:02,660-INFO: Test iter 1000
    2019-12-08 11:47:16,320-INFO: Test finish iter 1076
    2019-12-08 11:47:16,321-INFO: Total number of images: 1076, inference time: 5.363484210675358 fps.
    2019-12-08 11:47:16,698-INFO: Start evaluate...
    2019-12-08 11:47:19,054-INFO: Best test box ap: 0.03341993076262028, in iter: 30000
    2019-12-08 11:47:40,249-INFO: iter: 33020, lr: 0.002500, 'loss_cls': '0.108531', 'loss_bbox': '0.043173', 'loss_rpn_cls': '0.017900', 'loss_rpn_bbox': '0.004340', 'loss': '0.169767', time: 12.031, eta: 95 days, 15:54:32
    2019-12-08 11:48:01,339-INFO: iter: 33040, lr: 0.002500, 'loss_cls': '0.079204', 'loss_bbox': '0.025323', 'loss_rpn_cls': '0.019523', 'loss_rpn_bbox': '0.004204', 'loss': '0.132759', time: 1.051, eta: 8 days, 8:30:30
    2019-12-08 11:48:23,153-INFO: iter: 33060, lr: 0.002500, 'loss_cls': '0.090260', 'loss_bbox': '0.036725', 'loss_rpn_cls': '0.027528', 'loss_rpn_bbox': '0.006311', 'loss': '0.179389', time: 1.095, eta: 8 days, 17:01:20
    2019-12-08 11:48:44,463-INFO: iter: 33080, lr: 0.002500, 'loss_cls': '0.093552', 'loss_bbox': '0.034076', 'loss_rpn_cls': '0.018074', 'loss_rpn_bbox': '0.007104', 'loss': '0.152736', time: 1.065, eta: 8 days, 11:12:37
    2019-12-08 11:49:06,337-INFO: iter: 33100, lr: 0.002500, 'loss_cls': '0.136960', 'loss_bbox': '0.034778', 'loss_rpn_cls': '0.019313', 'loss_rpn_bbox': '0.004328', 'loss': '0.193764', time: 1.087, eta: 8 days, 15:29:17
    2019-12-08 11:49:27,727-INFO: iter: 33120, lr: 0.002500, 'loss_cls': '0.096826', 'loss_bbox': '0.027477', 'loss_rpn_cls': '0.027583', 'loss_rpn_bbox': '0.003562', 'loss': '0.162150', time: 1.071, eta: 8 days, 12:25:21
    2019-12-08 11:49:48,957-INFO: iter: 33140, lr: 0.002500, 'loss_cls': '0.111899', 'loss_bbox': '0.039398', 'loss_rpn_cls': '0.017612', 'loss_rpn_bbox': '0.005294', 'loss': '0.181398', time: 1.044, eta: 8 days, 7:10:46
    2019-12-08 11:50:09,371-INFO: iter: 33160, lr: 0.002500, 'loss_cls': '0.114998', 'loss_bbox': '0.040129', 'loss_rpn_cls': '0.021477', 'loss_rpn_bbox': '0.005812', 'loss': '0.186288', time: 1.042, eta: 8 days, 6:47:26
    2019-12-08 11:50:30,408-INFO: iter: 33180, lr: 0.002500, 'loss_cls': '0.088878', 'loss_bbox': '0.026715', 'loss_rpn_cls': '0.027719', 'loss_rpn_bbox': '0.007981', 'loss': '0.171643', time: 1.040, eta: 8 days, 6:19:52
    2019-12-08 11:50:51,979-INFO: iter: 33200, lr: 0.002500, 'loss_cls': '0.127711', 'loss_bbox': '0.037682', 'loss_rpn_cls': '0.028521', 'loss_rpn_bbox': '0.004866', 'loss': '0.217983', time: 1.069, eta: 8 days, 11:54:39
    2019-12-08 11:51:13,415-INFO: iter: 33220, lr: 0.002500, 'loss_cls': '0.135208', 'loss_bbox': '0.038123', 'loss_rpn_cls': '0.029232', 'loss_rpn_bbox': '0.005568', 'loss': '0.215429', time: 1.088, eta: 8 days, 15:28:55
    2019-12-08 11:51:34,174-INFO: iter: 33240, lr: 0.002500, 'loss_cls': '0.124688', 'loss_bbox': '0.035213', 'loss_rpn_cls': '0.030661', 'loss_rpn_bbox': '0.007142', 'loss': '0.207017', time: 1.044, eta: 8 days, 7:04:49
    2019-12-08 11:51:55,334-INFO: iter: 33260, lr: 0.002500, 'loss_cls': '0.107978', 'loss_bbox': '0.040368', 'loss_rpn_cls': '0.029550', 'loss_rpn_bbox': '0.005072', 'loss': '0.180974', time: 1.060, eta: 8 days, 10:09:37
    2019-12-08 11:52:16,021-INFO: iter: 33280, lr: 0.002500, 'loss_cls': '0.117390', 'loss_bbox': '0.036064', 'loss_rpn_cls': '0.022095', 'loss_rpn_bbox': '0.004055', 'loss': '0.184817', time: 1.032, eta: 8 days, 4:57:10
    2019-12-08 11:52:39,507-INFO: iter: 33300, lr: 0.002500, 'loss_cls': '0.099554', 'loss_bbox': '0.028488', 'loss_rpn_cls': '0.030340', 'loss_rpn_bbox': '0.004249', 'loss': '0.173735', time: 1.168, eta: 9 days, 6:50:47
    2019-12-08 11:53:00,719-INFO: iter: 33320, lr: 0.002500, 'loss_cls': '0.091982', 'loss_bbox': '0.025819', 'loss_rpn_cls': '0.026813', 'loss_rpn_bbox': '0.004227', 'loss': '0.173283', time: 1.058, eta: 8 days, 9:50:57
    /home/admin/.local/lib/python3.6/site-packages/paddle/fluid/executor.py:774: UserWarning: The following exception is not an EOF exception.
      "The following exception is not an EOF exception.")
    Traceback (most recent call last):
      File "tools/train.py", line 340, in <module>
        main()
      File "tools/train.py", line 246, in main
        outs = exe.run(compiled_train_prog, fetch_list=train_values)
      File "/home/admin/.local/lib/python3.6/site-packages/paddle/fluid/executor.py", line 775, in run
        six.reraise(*sys.exc_info())
      File "/opt/conda/lib/python3.6/site-packages/six.py", line 693, in reraise
        raise value
      File "/home/admin/.local/lib/python3.6/site-packages/paddle/fluid/executor.py", line 770, in run
        use_program_cache=use_program_cache)
      File "/home/admin/.local/lib/python3.6/site-packages/paddle/fluid/executor.py", line 829, in _run_impl
        return_numpy=return_numpy)
      File "/home/admin/.local/lib/python3.6/site-packages/paddle/fluid/executor.py", line 669, in _run_parallel
        tensors = exe.run(fetch_var_names)._move_to_list()
    paddle.fluid.core_avx.EnforceNotMet:
    
    --------------------------------------------
    C++ Call Stacks (More useful to developers):
    --------------------------------------------
    0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
    1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
    2   void paddle::operators::GPUGather<float, int>(paddle::platform::DeviceContext const&, paddle::framework::Tensor const&, paddle::framework::Tensor const&, paddle::framework::Tensor*)
    3   paddle::operators::GatherOpCUDAKernel<float>::Compute(paddle::framework::ExecutionContext const&) const
    4   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 0ul, paddle::operators::GatherOpCUDAKernel<float>, paddle::operators::GatherOpCUDAKernel<double>, paddle::operators::GatherOpCUDAKernel<long>, paddle::operators::GatherOpCUDAKernel<int>, paddle::operators::GatherOpCUDAKernel<paddle::platform::float16> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
    5   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_,boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&, paddle::framework::RuntimeContext*) const
    6   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_,boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) const
    7   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&)
    8   paddle::framework::details::ComputationOpHandle::RunImpl()
    9   paddle::framework::details::FastThreadedSSAGraphExecutor::RunOpSync(paddle::framework::details::OpHandleBase*)
    10  paddle::framework::details::FastThreadedSSAGraphExecutor::RunOp(paddle::framework::details::OpHandleBase*, std::shared_ptr<paddle::framework::BlockingQueue<unsigned long> > const&, unsigned long*)
    11  std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<void>, std::__future_base::_Result_base::_Deleter>, void> >::_M_invoke(std::_Any_data const&)
    12  std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&)
    13  ThreadPool::ThreadPool(unsigned long)::{lambda()#1}::operator()() const
    
    ------------------------------------------
    Python Call Stacks (More useful to users):
    ------------------------------------------
      File "/home/admin/.local/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2459, in append_op
        attrs=kwargs.get("attrs", None))
      File "/home/admin/.local/lib/python3.6/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
        return self.main_program.current_block().append_op(*args, **kwargs)
      File "/home/admin/.local/lib/python3.6/site-packages/paddle/fluid/layers/nn.py", line 10806, in gather
        attrs={'overwrite': overwrite})
      File "/home/admin/.local/lib/python3.6/site-packages/paddle/fluid/layers/detection.py", line 428, in rpn_target_assign
        predicted_cls_logits = nn.gather(cls_logits, score_index)
      File "/data/nas/workspace/jupyter/PaddleDetection-release-0.1/ppdet/core/workspace.py", line 113, in partial_apply
        return op(*args, **kwargs_)
      File "/data/nas/workspace/jupyter/PaddleDetection-release-0.1/ppdet/modeling/anchor_heads/rpn_head.py", line 227, in get_loss
        im_info=im_info)
      File "/data/nas/workspace/jupyter/PaddleDetection-release-0.1/ppdet/modeling/architectures/faster_rcnn.py", line 100, in build
        rpn_loss = self.rpn_head.get_loss(im_info, gt_box, is_crowd)
      File "/data/nas/workspace/jupyter/PaddleDetection-release-0.1/ppdet/modeling/architectures/faster_rcnn.py", line 196, in train
        return self.build(feed_vars, 'train')
      File "tools/train.py", line 128, in main
        train_fetches = model.train(feed_vars)
      File "tools/train.py", line 340, in <module>
        main()
    
    ----------------------
    Error Message Summary:
    ----------------------
    PaddleCheckError: Expected index.dims()[0] > 0, but received index.dims()[0]:0 <= 0:0.
    The index of gather_op should not be empty when the index's rank is 1. at [/paddle/paddle/fluid/operators/gather.cu.h:82]
      [operator < gather > error]
    terminate called without an active exception
    W1208 11:53:03.514448   366 init.cc:205] *** Aborted at 1575777183 (unix time) try "date -d @1575777183" if you are using GNU date ***
    W1208 11:53:03.517763   366 init.cc:205] PC: @                0x0 (unknown)
    W1208 11:53:03.519001   366 init.cc:205] *** SIGABRT (@0x1f90000013d) received by PID 317 (TID 0x7f0f2b4bf700) from PID 317; stack trace: ***
    W1208 11:53:03.525950   366 init.cc:205]     @     0x7f139613d100 (unknown)
    W1208 11:53:03.542176   366 init.cc:205]     @     0x7f1395da15f7 __GI_raise
    W1208 11:53:03.555480   366 init.cc:205]     @     0x7f1395da2ce8 __GI_abort
    W1208 11:53:03.578831   366 init.cc:205]     @     0x7f137646d84a __gnu_cxx::__verbose_terminate_handler()
    W1208 11:53:03.585741   366 init.cc:205]     @     0x7f137646bf47 __cxxabiv1::__terminate()
    W1208 11:53:03.608299   366 init.cc:205]     @     0x7f137646bf7d std::terminate()
    W1208 11:53:03.621994   366 init.cc:205]     @     0x7f137646bc5a __gxx_personality_v0
    W1208 11:53:03.625728   366 init.cc:205]     @     0x7f1388dd3b97 _Unwind_ForcedUnwind_Phase2
    W1208 11:53:03.629989   366 init.cc:205]     @     0x7f1388dd3e7d _Unwind_ForcedUnwind
    W1208 11:53:03.634865   366 init.cc:205]     @     0x7f139613bd60 __GI___pthread_unwind
    W1208 11:53:03.639715   366 init.cc:205]     @     0x7f1396136dd5 __pthread_exit
    W1208 11:53:03.664770   366 init.cc:205]     @     0x559fb4fe2289 PyThread_exit_thread
    W1208 11:53:03.670189   366 init.cc:205]     @     0x559fb4e7447a PyEval_RestoreThread.cold.736
    W1208 11:53:03.674211   366 init.cc:205]     @     0x7f13427e65b9 pybind11::gil_scoped_release::~gil_scoped_release()
    W1208 11:53:03.675829   366 init.cc:205]     @     0x7f134279af23 _ZZN8pybind1112cpp_function10initializeIZN6paddle6pybindL22pybind11_init_core_avxERNS_6moduleEEUlRNS2_9operators6reader22LoDTensorBlockingQueueERKSt6vectorINS2_9framework9LoDTensorESaISC_EEE58_bIS9_SG_EINS_4nameENS_9is_methodENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNESY_
    W1208 11:53:03.678143   366 init.cc:205]     @     0x7f13427fa6e6 pybind11::cpp_function::dispatcher()
    W1208 11:53:03.705709   366 init.cc:205]     @     0x559fb4f23fd4 _PyCFunction_FastCallDict
    W1208 11:53:03.722385   366 init.cc:205]     @     0x559fb4fb1d3e call_function
    W1208 11:53:03.749676   366 init.cc:205]     @     0x559fb4fd619a _PyEval_EvalFrameDefault
    W1208 11:53:03.776196   366 init.cc:205]     @     0x559fb4fac8c8 PyEval_EvalCodeEx
    W1208 11:53:03.791987   366 init.cc:205]     @     0x559fb4fad456 function_call
    W1208 11:53:03.819322   366 init.cc:205]     @     0x559fb4f23dde PyObject_Call
    W1208 11:53:03.846752   366 init.cc:205]     @     0x559fb4fd7994 _PyEval_EvalFrameDefault
    W1208 11:53:03.861977   366 init.cc:205]     @     0x559fb4fab7db fast_function
    W1208 11:53:03.877820   366 init.cc:205]     @     0x559fb4fb1cc5 call_function
    W1208 11:53:03.905719   366 init.cc:205]     @     0x559fb4fd619a _PyEval_EvalFrameDefault
    W1208 11:53:03.921274   366 init.cc:205]     @     0x559fb4fab7db fast_function
    W1208 11:53:03.937258   366 init.cc:205]     @     0x559fb4fb1cc5 call_function
    W1208 11:53:03.964187   366 init.cc:205]     @     0x559fb4fd619a _PyEval_EvalFrameDefault
    W1208 11:53:03.989130   366 init.cc:205]     @     0x559fb4fabe4b _PyFunction_FastCallDict
    W1208 11:53:04.013437   366 init.cc:205]     @     0x559fb4f2439f _PyObject_FastCallDict
    W1208 11:53:04.037919   366 init.cc:205]     @     0x559fb4f28ff3 _PyObject_Call_Prepend
    
    

    What is the cause of this error?
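    For context (not an official diagnosis): a frequently reported trigger for this empty-index `gather` error inside `rpn_target_assign` is training images whose ground-truth boxes are missing or degenerate, so the RPN samples zero anchors. A minimal sketch that scans a COCO-style annotation file for such images — the helper name is illustrative, and the field layout assumed is the standard COCO one:

```python
import json

# Hypothetical helper: flag images with no valid ground-truth boxes, a common
# cause of empty-index gather errors during RPN target assignment.
def find_bad_annotations(ann_file):
    with open(ann_file) as f:
        coco = json.load(f)
    images_with_boxes = set()
    bad = []
    for ann in coco.get("annotations", []):
        x, y, w, h = ann["bbox"]          # COCO bbox: [x, y, width, height]
        if w <= 0 or h <= 0:
            bad.append(("degenerate box", ann["image_id"], ann["bbox"]))
        else:
            images_with_boxes.add(ann["image_id"])
    for img in coco.get("images", []):
        if img["id"] not in images_with_boxes:
            bad.append(("no valid boxes", img["id"], None))
    return bad
```

    If this reports offending images, removing or fixing those annotations is worth trying before digging into the C++ stack.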

    paddle1.6:
    2019-12-05 15:32:27,163-WARNING: Your reader has raised an exception!
    Exception in thread Thread-11:
    Traceback (most recent call last):
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/threading.py", line 916, in _bootstrap_inner
        self.run()
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/threading.py", line 864, in run
        self._target(*self._args, **self._kwargs)
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/site-packages/paddle/fluid/reader.py", line 488, in thread_main
        six.reraise(*sys.exc_info())
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/site-packages/six.py", line 693, in reraise
        raise value
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/site-packages/paddle/fluid/reader.py", line 468, in thread_main
        for tensors in self._tensor_reader():
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/site-packages/paddle/fluid/reader.py", line 542, in tensor_reader_impl
        for slots in paddle_reader():
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/site-packages/paddle/fluid/data_feeder.py", line 454, in reader_creator
        for item in reader():
      File "/home/vis/wangping/workspace/food/code/PaddleDetection-master/ppdet/data/reader.py", line 103, in _reader
        for _batch in batched_ds:
      File "/home/vis/wangping/workspace/food/code/PaddleDetection-master/ppdet/data/dataset.py", line 30, in next
        return self.next()
      File "/home/vis/wangping/workspace/food/code/PaddleDetection-master/ppdet/data/transform/transformer.py", line 44, in _proxy_method
        return func(*args, **kwargs)
      File "/home/vis/wangping/workspace/food/code/PaddleDetection-master/ppdet/data/transform/transformer.py", line 57, in next
        sample = self._ds.next()
      File "/home/vis/wangping/workspace/food/code/PaddleDetection-master/ppdet/data/transform/transformer.py", line 44, in _proxy_method
        return func(*args, **kwargs)
      File "/home/vis/wangping/workspace/food/code/PaddleDetection-master/ppdet/data/transform/transformer.py", line 99, in next
        out = self._ds.next()
      File "/home/vis/wangping/workspace/food/code/PaddleDetection-master/ppdet/data/transform/transformer.py", line 44, in _proxy_method
        return func(*args, **kwargs)
      File "/home/vis/wangping/workspace/food/code/PaddleDetection-master/ppdet/data/transform/parallel_map.py", line 191, in next
        raise ValueError("all consumers exited, no more samples")
    ValueError: all consumers exited, no more samples

    2019-12-05 15:32:48,322-INFO: iter: 3320, lr: 0.000001, 'loss': '11.461334', time: 2.233, eta: 7:14:41
    Traceback (most recent call last):
      File "tools/train.py", line 340, in <module>
        main()
      File "tools/train.py", line 246, in main
        outs = exe.run(compiled_train_prog, fetch_list=train_values)
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/site-packages/paddle/fluid/executor.py", line 775, in run
        six.reraise(*sys.exc_info())
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/site-packages/six.py", line 693, in reraise
        raise value
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/site-packages/paddle/fluid/executor.py", line 770, in run
        use_program_cache=use_program_cache)
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/site-packages/paddle/fluid/executor.py", line 829, in _run_impl
        return_numpy=return_numpy)
      File "/home/vis/wangping/envs/env_py3.6_torch1.0/lib/python3.6/site-packages/paddle/fluid/executor.py", line 669, in _run_parallel
        tensors = exe.run(fetch_var_names)._move_to_list()
    paddle.fluid.core_avx.EOFException: There is no next data. at [/paddle/paddle/fluid/operators/reader/read_op.cc:90]

    Issue: adding more data augmentation methods in the config causes training errors and training hangs

    Training a Mask R-CNN model with ResNet50+FPN hangs without progress; console output:

    Done (t=0.07s)
    creating index...
    index created!
    {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6}
    {'sky': 1, 'building': 2, 'terrain': 3, 'person': 4, 'vegetation': 5, 'car': 6}
    2019-12-12 11:44:38,129-INFO: 139 samples in file /home/aistudio/data/data17467/cococo/annotations/instance_train.json
    2019-12-12 11:44:38,131-INFO: places would be ommited when DataLoader is not iterable
    I1212 11:44:38.156224 4327 parallel_executor.cc:421] The number of CUDAPlace, which is used in ParallelExecutor, is 1. And the Program will be copied 1 copies
    I1212 11:44:38.196641 4327 graph_pattern_detector.cc:96] --- detected 28 subgraphs
    I1212 11:44:38.215721 4327 graph_pattern_detector.cc:96] --- detected 25 subgraphs
    I1212 11:44:38.257105 4327 build_strategy.cc:363] SeqOnlyAllReduceOps:0, num_trainers:1
    I1212 11:44:38.307612 4327 parallel_executor.cc:285] Inplace strategy is enabled, when build_strategy.enable_inplace = True
    I1212 11:44:38.331254 4327 parallel_executor.cc:368] Garbage collection strategy is enabled, when FLAGS_eager_delete_tensor_gb = 0

    Error exporting YOLOv3 model: weight shape mismatch

    Command:
    python tools/export_model.py -c configs/yolov3_darknet.yml --output_dir=./inference_model -o weights=../model/car_p/vehicle_yolov3_darknet YoloTestFeed.image_shape=[3,608,608]

    Logs: 2019-11-27 15:09:45,628-INFO: Loading parameters from ../model/car_p/vehicle_yolov3_darknet...
    Traceback (most recent call last):
      File "tools/export_model.py", line 120, in <module>
        main()
      File "tools/export_model.py", line 107, in main
        checkpoint.load_params(exe, infer_prog, cfg.weights)
      File "./ppdet/utils/checkpoint.py", line 118, in load_params
        fluid.io.load_vars(exe, path, prog, predicate=_if_exist)
      File "/software/conda/envs/super_mask/lib/python3.6/site-packages/paddle/fluid/io.py", line 682, in load_vars
        filename=filename)
      File "/software/conda/envs/super_mask/lib/python3.6/site-packages/paddle/fluid/io.py", line 741, in load_vars
        format(orig_shape, each_var.name, new_shape))

    RuntimeError: Shape not matching: the Program requires a parameter with a shape of ((255, 1024, 1, 1)), while the loaded parameter (namely [ yolo_output.0.conv.weights ]) has a shape of ((33, 1024, 1, 1))

    The weights work correctly for image inference, but exporting the model fails with this error. Is the code or config file out of date?

    Reference from: https://github.com/PaddlePaddle/PaddleDetection/blob/release/0.1/docs/EXPORT_MODEL.md
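    For context on the two numbers in the error: each YOLOv3 output conv has num_anchors_per_scale * (num_classes + 5) channels, so this mismatch usually means the class count in the config differs from the one the checkpoint was trained with. A quick sanity check (the helper name is illustrative):

```python
# Channels of a YOLOv3 head conv: anchors_per_scale * (num_classes + 5),
# where 5 = 4 box offsets + 1 objectness score.
def yolo_head_channels(num_classes, anchors_per_scale=3):
    return anchors_per_scale * (num_classes + 5)

print(yolo_head_channels(80))  # 255 -> COCO's 80 classes, what the config expects
print(yolo_head_channels(6))   # 33  -> 6 classes, what the loaded checkpoint has
```

    So the export config here appears to assume the default 80 COCO classes while the checkpoint was trained for 6; pointing the export at the same config used for training should resolve the shape mismatch.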

    How can a YOLOv3 model exported from PaddleDetection be used in Paddle-Lite? The model's input/output interface is completely undocumented, which is frustrating.

    Hello, a question please.
    I trained a network with PaddleDetection and want to use it in Paddle-Lite, but I cannot find any documentation on the model's input/output interface. The process was as follows:
    After exporting the model trained with PaddleDetection, I obtained the __model__ and __params__ files. To use them on an embedded system, I ran Paddle-Lite's optimization tool (model_optimize_tool) on the model and obtained the __model__.nb and param.nb files. I then wrote a program following the Paddle-Lite examples and ran into the following problems:
    1. Regarding the input interface:
    (1) What is the input interface of the model exported by PaddleDetection? The original image, or the YOLO network's input?
    (2) The original YOLOv3 model can take images of any size, but the PaddleDetection config fixes the shape to [3,608,608]. Do I need to resize the images myself?
    (3) The network input needs to be normalized with a mean and standard deviation. Do I do this myself, or does the model handle it? Does the exported model store the mean and std information?
    2. Regarding the output interface:
    (1) Is the exported model's output the same as the original YOLOv3 model's output? When I tested my own Paddle-Lite inference program, it crashed (core dump) because one original-image input was missing. Does the original YOLOv3 model need the original image information?
    (2) Do I need to do NMS myself?
    (3) The original YOLOv3 model's box predictions are relative to cell positions. Do I need to convert the predicted boxes to absolute coordinates myself?
    I cannot find documentation answering this series of questions in either PaddleDetection or Paddle-Lite. Paddle-Lite is supposed to be the step that turns a trained model into a deployable one; following the docs it should be quick to finish, but in practice there are pitfalls everywhere and two weeks were not enough. The experience has been very frustrating.
    Could you please help answer these questions? Thanks.
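    As a partial illustration of the input-interface questions (not an official answer): if the exported graph does not embed preprocessing, a caller typically resizes to the fixed [3,608,608] shape, normalizes, and feeds NCHW float32 data. A numpy-only sketch, where the mean/std values are the common ImageNet ones — an assumption, not values read from the exported model:

```python
import numpy as np

# Assumed preprocessing for a fixed [3,608,608] YOLOv3 input; the mean/std
# values are the common ImageNet ones, not taken from the exported model.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def nearest_resize(img, size=608):
    # naive nearest-neighbor resize in pure numpy (use cv2/PIL in practice)
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[ys][:, xs]

def preprocess(img_hwc_uint8):
    img = nearest_resize(img_hwc_uint8).astype(np.float32) / 255.0
    img = (img - MEAN) / STD
    return img.transpose(2, 0, 1)[np.newaxis]  # NCHW, shape (1, 3, 608, 608)
```

    Whether NMS and box decoding are already inside the exported graph depends on the export path; verifying the graph's actual inputs/outputs (e.g. by inspecting the __model__ program) is the safest way to answer the output-interface questions.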

    Is there any code for model evaluation metrics?

    I have checked the contents of eval.py but did not find the code I need. Is there any code for calculating precision and recall for a single category, one that works with the exported YOLOv3 model and accepts XML- or JSON-format validation data?
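    There is no single-category precision/recall script referenced above; as a starting point, a minimal sketch (an assumed implementation, not PaddleDetection's own eval code) at a fixed IoU threshold for one category might look like:

```python
import numpy as np

# Hedged sketch: per-category precision/recall at one IoU threshold, for
# boxes in [x1, y1, x2, y2] format. Not PaddleDetection's evaluation code.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def precision_recall(preds, gts, iou_thr=0.5):
    # preds: list of (score, box); gts: list of boxes, all one category
    matched = [False] * len(gts)
    tp = fp = 0
    for _, box in sorted(preds, key=lambda p: -p[0]):  # greedy, high score first
        best, best_i = 0.0, -1
        for i, g in enumerate(gts):
            if not matched[i] and iou(box, g) > best:
                best, best_i = iou(box, g), i
        if best >= iou_thr:
            matched[best_i] = True
            tp += 1
        else:
            fp += 1
    precision = tp / max(tp + fp, 1)
    recall = tp / max(len(gts), 1)
    return precision, recall
```

    Feeding it the exported model's detections for one class, against boxes parsed from either XML or JSON annotations, gives the per-category numbers asked about.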

    Rounding during image resize makes the x and y scales inconsistent

    Specifically:

    im_scale_x = im_scale
    im_scale_y = im_scale
    resize_w = np.round(im_scale_x * float(im_shape[1]))
    resize_h = np.round(im_scale_y * float(im_shape[0]))
    im_info = [resize_h, resize_w, im_scale]

    After rounding to resize_w and resize_h, im_scale_x and im_scale_y are no longer guaranteed to equal im_scale. It should be changed to:

    im_scale_x = selected_size / float(im_shape[1])
    im_scale_y = selected_size / float(im_shape[0])
    im_info = [resize_h, resize_w, im_scale_x, im_scale_y]

    Other ops that consume im_info need corresponding changes.
    In addition, when cv2 is not used, before

    im = im.resize((resize_w, resize_h), self.interp)

    the sizes must first be cast to integers:

    resize_w = int(resize_w)
    resize_h = int(resize_h)
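    The rounding drift described above can be seen numerically; the image size and target size below are made up for illustration:

```python
import numpy as np

# Made-up sizes demonstrating the rounding drift described above.
im_shape = (427, 640)                        # (height, width)
target_size = 800
im_scale = target_size / float(min(im_shape))

resize_w = np.round(im_scale * im_shape[1])  # rounds 1199.06... to 1199.0
resize_h = np.round(im_scale * im_shape[0])  # 800.0

# After rounding, the effective per-axis scales no longer agree:
eff_scale_x = resize_w / im_shape[1]         # 1199/640 = 1.8734375
eff_scale_y = resize_h / im_shape[0]         # 800/427  = 1.87353...
assert eff_scale_x != eff_scale_y            # the x/y inconsistency

# Hence the suggestion to compute and carry per-axis scales explicitly:
im_info = [resize_h, resize_w, eff_scale_x, eff_scale_y]
```

    Any op that reconstructs coordinates from im_info must then use the per-axis scales rather than the single im_scale.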

    Faster R-CNN based on PaddleDetection reports an error when predicting with the Python API

    Training environment: paddle1.5.1, cuda7, cudnn9;
    Test environment: paddle1.6.0, cuda7, cudnn9;
    Code: Faster R-CNN based on PaddleDetection. Since the targets to be detected are small, the parameters were changed to "anchor_sizes": [16, 32, 64, 128, 256] and "anchor_start_size": 16.
    Problem: training and evaluation both work fine, but when the model is deployed as a service, CPU prediction with enable_mkldnn raises an error; with enable_mkldnn turned off, prediction works but is far too slow to be acceptable.
    Model loading code:

    def load_inference_model(args):
        use_gpu = False
        # set up AnalysisConfig
        config = AnalysisConfig(os.path.join(args.weights, "model"), os.path.join(args.weights, "params"))
        if use_gpu:
            print("use gpu infer")
            config.enable_use_gpu(memory_pool_init_size_mb=3000)
        else:
            print("use cpu infer")
            config.disable_gpu()
            thread_num = 20
            config.set_cpu_math_library_num_threads(thread_num)
            config.enable_mkldnn()
        # create the PaddlePredictor
        predictor = create_paddle_predictor(config)
        return predictor
    
    
    #####
    def infer(args):
        # (fragment: data, im_height, im_width, postprocess_conf and predictor
        # are defined elsewhere in the original script)
        data_list = []
        for dd in data:
            dd = np.array(dd)
            image = PaddleTensor()
            image.name = "data"
            image.shape = [1] + list(dd.shape)
            print("image.shape", image.shape)
            image.dtype = PaddleDType.FLOAT32
            image.data = PaddleBuf(dd.astype("float32").flatten().tolist())
            data_list.append(image)
        print("data_list", data_list)
        print("shapes", [dd.shape for dd in data_list])
        postprocess_conf["im_height"] = im_height
        postprocess_conf["im_width"] = im_width
        outputs = predictor.run(data_list)
        print("outputs", outputs[0].shape)
    
    

    The error:

    W1204 02:16:48.881902  2325 naive_executor.cc:43] The NaiveExecutor can not work properly if the cmake flag ON_INFER is not set.
    W1204 02:16:48.881958  2325 naive_executor.cc:45] Unlike the training phase, all the scopes and variables will be reused to save the allocation overhead.
    W1204 02:16:48.881970  2325 naive_executor.cc:48] Please re-compile the inference library by setting the cmake flag ON_INFER=ON if you are running Paddle Inference
    Traceback (most recent call last):
      File "infer_demo_mkldnn.py", line 233, in <module>
        infer(args)
      File "infer_demo_mkldnn.py", line 166, in infer
        outputs = predictor.run(data_list)
    paddle.fluid.core_avx.EnforceNotMet:
    
    --------------------------------------------
    C++ Call Stacks (More useful to developers):
    --------------------------------------------
    0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
    1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
    2   paddle::operators::ConcatMKLDNNOpKernel<float>::Compute(paddle::framework::ExecutionContext const&) const
    3   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::ConcatMKLDNNOpKernel<float>, paddle::operators::ConcatMKLDNNOpKernel<signed char>, paddle::operators::ConcatMKLDNNOpKernel<unsigned char> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
    4   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&, paddle::framework::RuntimeContext*) const
    5   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) const
    6   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&)
    7   paddle::framework::NaiveExecutor::Run()
    8   paddle::AnalysisPredictor::Run(std::vector<paddle::PaddleTensor, std::allocator<paddle::PaddleTensor> > const&, std::vector<paddle::PaddleTensor, std::allocator<paddle::PaddleTensor> >*, int)
    
    ------------------------------------------
    Python Call Stacks (More useful to users):
    ------------------------------------------
      File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/framework.py", line 1771, in append_op
        attrs=kwargs.get("attrs", None))
      File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/layer_helper.py", line 43, in append_op
        return self.main_program.current_block().append_op(*args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/layers/tensor.py", line 210, in concat
        attrs={'axis': axis})
      File "/tmp/code/user_custom.py", line 822, in __call__
        roi_feat_shuffle = fluid.layers.concat(roi_out_list)
      File "/tmp/code/user_custom.py", line 194, in create_model
        roi_feat = roi_extractor(body_feats, rois, spatial_scale)
      File "command.py", line 103, in main
        wfw_obj = wfw_cls(workflow_conf)
      File "command.py", line 108, in <module>
        main()
    
    ----------------------
    Error Message Summary:
    ----------------------
    PaddleCheckError: Expected input->layout() == DataLayout::kMKLDNN, but received input->layout():NCHW != DataLayout::kMKLDNN:MKLDNNLAYOUT.
    Wrong layout set for Input tensor at [/paddle/paddle/fluid/operators/mkldnn/concat_mkldnn_op.cc:34]
      [operator < concat > error]
    
    [pid: 3791][tid: 139684194371328][INFO][2019-12-03 15:43:02,095][handler.py:171] Failed infering, EnforceNotMet:
    

    Problems with test results after training with PaddleDetection

    frankfurt_000001_005184_leftImg8bit
    frankfurt_000001_055387_leftImg8bit
    The test results below are from checkpoints at 50k, 40k, and 10k iterations respectively. The configuration is:

    architecture: MaskRCNN
    train_feed: MaskRCNNTrainFeed
    eval_feed: MaskRCNNEvalFeed
    test_feed: MaskRCNNTestFeed
    use_gpu: true
    max_iters: 180000
    snapshot_iter: 10000
    log_smooth_window: 20
    save_dir: output
    pretrain_weights: https://paddlemodels.bj.bcebos.com/object_detection/mask_rcnn_r50_1x.tar
    metric: COCO
    weights: output/mask_rcnn_r50_1x/50000/
    num_classes: 7
    finetune_exclude_pretrained_params: ['cls_score','bbox_pred','mask_fcn_logits']

    MaskRCNN:
      backbone: ResNet
      rpn_head: RPNHead
      roi_extractor: RoIAlign
      bbox_assigner: BBoxAssigner
      bbox_head: BBoxHead
      mask_assigner: MaskAssigner
      mask_head: MaskHead

    ResNet:
      norm_type: affine_channel
      norm_decay: 0.
      depth: 50
      feature_maps: 4
      freeze_at: 2

    ResNetC5:
      depth: 50
      norm_type: affine_channel

    RPNHead:
      anchor_generator:
        anchor_sizes: [32, 64, 128, 256, 512]
        aspect_ratios: [0.5, 1.0, 2.0]
        stride: [16.0, 16.0]
        variance: [1.0, 1.0, 1.0, 1.0]
      rpn_target_assign:
        rpn_batch_size_per_im: 256
        rpn_fg_fraction: 0.5
        rpn_negative_overlap: 0.3
        rpn_positive_overlap: 0.7
        rpn_straddle_thresh: 0.0
      train_proposal:
        min_size: 0.0
        nms_thresh: 0.7
        pre_nms_top_n: 6000
        post_nms_top_n: 2000
      test_proposal:
        min_size: 0.0
        nms_thresh: 0.7
        pre_nms_top_n: 1000
        post_nms_top_n: 1000

    RoIAlign:
      resolution: 14
      spatial_scale: 0.0625
      sampling_ratio: 0

    BBoxHead:
      head: ResNetC5
      nms:
        keep_top_k: 100
        nms_threshold: 0.5
        normalized: false
        score_threshold: 0.05

    MaskHead:
      dilation: 1
      conv_dim: 256
      resolution: 14

    BBoxAssigner:
      batch_size_per_im: 512
      bbox_reg_weights: [0.1, 0.1, 0.2, 0.2]
      bg_thresh_hi: 0.5
      bg_thresh_lo: 0.0
      fg_fraction: 0.25
      fg_thresh: 0.5

    MaskAssigner:
      resolution: 14

    LearningRate:
      base_lr: 0.0015
      schedulers:
      - !PiecewiseDecay
        gamma: 0.1
        milestones: [120000, 160000]
        #values: [1.0 , 0.5 , 0.1]
      - !LinearWarmup
        start_factor: 0.3333333333333333
        steps: 500

    OptimizerBuilder:
      optimizer:
        momentum: 0.9
        type: Momentum
      regularizer:
        factor: 0.0001
        type: L2

    MaskRCNNTrainFeed:
      batch_size: 4
      dataset:
        dataset_dir: /home/shuxsu/models-develop/PaddleCV/PaddleDetection/dataset/coco
        annotation: /home/shuxsu/models-develop/PaddleCV/PaddleDetection/dataset/coco/annotations/instance_train.json
        image_dir: train
      num_workers: 10

    MaskRCNNEvalFeed:
      batch_size: 1
      dataset:
        dataset_dir: /home/shuxsu/models-develop/PaddleCV/PaddleDetection/dataset/coco
        annotation: /home/shuxsu/models-develop/PaddleCV/PaddleDetection/dataset/coco/annotations/instance_val.json
        image_dir: val

    MaskRCNNTestFeed:
      batch_size: 1
      dataset:
        annotation: /home/shuxsu/models-develop/PaddleCV/PaddleDetection/dataset/coco/annotations/instance_val.json
      image_shape: [3, 2048, 1024]
    The training command used:

    python train.py -c=/home/shuxsu/models-develop/PaddleCV/PaddleDetection/configs/mask_rcnn_r50_1x.yml -d=/home/shuxsu/models-develop/PaddleCV/PaddleDetection/dataset/coco

    The dataset JSON produced by the modified script:
    instance_train.zip
    The results are not good. This dataset was converted from the Cityscapes dataset to COCO format; the conversion script was provided by Paddle staff. The original script:
    cityscape1coco.zip
    Since the dataset has too many classes and the project does not need all of them, I modified the script myself to extract only specific classes. The modified script:
    cityscape2coco.zip

    The current training results are poor: many of the predicted bboxes cluster at the top of the image and no class labels are shown. I don't know what causes this and would greatly appreciate some guidance.

    iou_similarity does not support a box_normalized parameter

    As the title says, iou_similarity computes incorrect results for inputs with integer pixel coordinates.
    For the box_normalized=False case,
    the area computations in iou_similarity should all use (ymax - ymin + 1) * (xmax - xmin + 1).
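A minimal numeric sketch of the proposed convention (hypothetical helper, not the actual Paddle OP): for integer pixel coordinates a box spans (xmax - xmin + 1) pixels per axis, so the +1 keeps even a one-pixel box from having zero area.

```python
def iou_pixel(box_a, box_b, box_normalized=False):
    """IoU of two [xmin, ymin, xmax, ymax] boxes.

    With box_normalized=False (integer pixel coordinates) the box
    covers (xmax - xmin + 1) * (ymax - ymin + 1) pixels, so a +1
    offset is required; with normalized coordinates it must be 0.
    """
    off = 0.0 if box_normalized else 1.0
    iw = max(min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]) + off, 0.0)
    ih = max(min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]) + off, 0.0)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0] + off) * (box_a[3] - box_a[1] + off)
    area_b = (box_b[2] - box_b[0] + off) * (box_b[3] - box_b[1] + off)
    return inter / (area_a + area_b - inter)

# A one-pixel box matched against itself: with the +1 convention the
# IoU is 1.0; without it both areas would be 0 and the ratio undefined.
print(iou_pixel([10, 10, 10, 10], [10, 10, 10, 10]))  # 1.0
```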

    Error exporting a model with tools/export_model.py

    Hi, I used PaddleDetection's tools/export_model.py to export a model and hit the error below; I could not find the cause.
    Environment: Ubuntu 16.04, CUDA 9 + cuDNN 7.3.
    Training with PaddleDetection YOLOv3 + MobileNetV1 works fine.
    Running
    python tools/export_model.py -c configs/fruveg/yolov3_mobilenet_v1_voc.yml --output_dir=./inference_model -o weights=output/yolov3_mobilenet_v1_voc/80000 YoloTestFeed.image_shape=[3,608,608]
    produces the following error:
    Traceback (most recent call last):
      File "tools/export_model.py", line 118, in <module>
        main()
      File "tools/export_model.py", line 88, in main
        test_feed = create(cfg.test_feed)
      File "/home/ryg/FruVeg/PaddleDetection/ppdet/core/workspace.py", line 160, in create
        "the module {} is not registered".format(name)
    AssertionError: the module YoloTestFee is not registered
    Running
    python tools/export_model.py -c configs/fruveg/yolov3_mobilenet_v1_voc.yml --output_dir=./inference_model -o weights=output/yolov3_mobilenet_v1_voc/80000
    fails with the same error:
    Traceback (most recent call last):
      File "tools/export_model.py", line 118, in <module>
        main()
      File "tools/export_model.py", line 88, in main
        test_feed = create(cfg.test_feed)
      File "/home/ryg/FruVeg/PaddleDetection/ppdet/core/workspace.py", line 160, in create
        "the module {} is not registered".format(name)
    AssertionError: the module YoloTestFee is not registered
    Checking with python tools/configure.py list gives:
    Available modules in the category 'op':

    AnchorGenerator Wrapper for anchor_generator OP
    RPNTargetAssign Wrapper for rpn_target_assign OP
    GenerateProposals Wrapper for generate_proposals OP
    MaskAssigner Wrapper for generate_mask_labels OP
    MultiClassNMS Wrapper for multiclass_nms OP
    BBoxAssigner Wrapper for generate_proposal_labels OP
    RoIAlign Wrapper for roi_align OP
    RoIPool Wrapper for roi_pool OP
    MultiBoxHead Wrapper for multi_box_head OP
    SSDOutputDecoder Wrapper for detection_output OP
    RetinaTargetAssign Wrapper for retinanet_target_assign OP
    RetinaOutputDecoder Wrapper for retinanet_detection_output OP
    BoxCoder Wrapper for box_coder OP

    Available modules in the category 'module':

    MultiClassSoftNMS
    RPNHead RPN Head
    FPNRPNHead RPN Head that supports FPN input
    YOLOv3Head Head block for YOLOv3 network
    RetinaHead Retina Head
    ResNet Residual Network, see https://arxiv.org/abs/1512.03385
    ResNetC5 Residual Network, see https://arxiv.org/abs/1512.03385
    ResNeXt ResNeXt, see https://arxiv.org/abs/1611.05431
    ResNeXtC5 ResNeXt, see https://arxiv.org/abs/1611.05431
    DarkNet DarkNet, see https://pjreddie.com/darknet/yolo/
    MobileNet MobileNet v1, see https://arxiv.org/abs/1704.04861
    SENet Squeeze-and-Excitation Networks, see https://arxiv.org/abs/1709.01507
    SENetC5 Squeeze-and-Excitation Networks, see https://arxiv.org/abs/1709.01507
    FPN Feature Pyramid Network, see https://arxiv.org/abs/1612.03144
    VGG VGG, see https://arxiv.org/abs/1409.1556
    BlazeNet BlazeFace, see https://arxiv.org/abs/1907.05047
    FaceBoxNet FaceBoxes, see https://https://arxiv.org/abs/1708.05234
    CBResNet CBNet, see https://arxiv.org/abs/1909.03625
    FPNRoIAlign RoI align pooling for FPN feature maps
    XConvNormHead RCNN head with serveral convolution layers
    TwoFCHead RCNN head with two Fully Connected layers
    BBoxHead RCNN bbox head
    MaskHead RCNN mask head
    CascadeBBoxHead Cascade RCNN bbox head
    CascadeXConvNormHead RCNN head with serveral convolution layers
    CascadeTwoFCHead RCNN head with serveral convolution layers
    CascadeBBoxAssigner

    Available modules in the category 'architecture':

    FasterRCNN Faster R-CNN architecture, see https://arxiv.org/abs/1506.01497
    MaskRCNN Mask R-CNN architecture, see https://arxiv.org/abs/1703.06870
    CascadeRCNN Cascade R-CNN architecture, see https://arxiv.org/abs/1712.00726
    CascadeMaskRCNN Cascade Mask R-CNN architecture, see https://arxiv.org/abs/1712.00726
    CascadeRCNNClsAware Cascade R-CNN architecture, see https://arxiv.org/abs/1712.00726
    YOLOv3 YOLOv3 network, see https://arxiv.org/abs/1804.02767
    SSD Single Shot MultiBox Detector, see https://arxiv.org/abs/1512.02325
    RetinaNet RetinaNet architecture, see https://arxiv.org/abs/1708.02002
    BlazeFace BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs,
    FaceBoxes FaceBoxes: Sub-millisecond Neural Face Detection on Mobile GPUs,

    Available modules in the category 'optim':

    LearningRate Learning Rate configuration
    OptimizerBuilder Build optimizer handles

    Available modules in the category 'data':

    TrainFeed DataFeed encompasses all data loading related settings
    EvalFeed DataFeed encompasses all data loading related settings
    TestFeed DataFeed encompasses all data loading related settings
    FasterRCNNTrainFeed DataFeed encompasses all data loading related settings
    FasterRCNNEvalFeed DataFeed encompasses all data loading related settings
    FasterRCNNTestFeed DataFeed encompasses all data loading related settings
    MaskRCNNTrainFeed DataFeed encompasses all data loading related settings
    MaskRCNNEvalFeed DataFeed encompasses all data loading related settings
    MaskRCNNTestFeed DataFeed encompasses all data loading related settings
    SSDTrainFeed DataFeed encompasses all data loading related settings
    SSDEvalFeed DataFeed encompasses all data loading related settings
    SSDTestFeed DataFeed encompasses all data loading related settings
    YoloTrainFeed DataFeed encompasses all data loading related settings
    YoloEvalFeed DataFeed encompasses all data loading related settings
    YoloTestFeed DataFeed encompasses all data loading related settings
    so the module is not actually missing from the registry.
    Hoping for a reply 🙏

    Running C++ inference fails in a Windows 10 test environment

    In a development environment with Windows 10 Pro + VS2015 + CUDA 9.2 + cuDNN 7.6.1 + PaddlePaddle 1.5.2, I can build and run the C++ inference program and predict in CPU mode (GPU installation failed in the dev environment). But in a test environment with Windows 10 Pro + the VC2015 (Update 3) runtime + the .NET 4.5 (4.6) runtime, it crashes with error 0xc000007b in both CPU and GPU mode.

    No module named 'ppdet'

    No module named 'ppdet'
    I get this error right after installing according to the documentation, even though the ppdet folder is clearly there.
    How can I fix this?
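The usual cause is that the PaddleDetection checkout directory is not on Python's import path: the `ppdet` folder exists, but the interpreter never looks inside its parent directory. A self-contained simulation (temporary stand-in directories, not the real repo):

```python
import os
import subprocess
import sys
import tempfile

# Simulate the layout: a 'ppdet' package folder inside a repo checkout.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "PaddleDetection", "ppdet")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()

# Without the repo root on PYTHONPATH, 'import ppdet' fails even though
# the folder exists on disk ...
env = dict(os.environ, PYTHONPATH="")
bad = subprocess.run([sys.executable, "-c", "import ppdet"], env=env,
                     capture_output=True)

# ... and succeeds once the repo root is added, which is what running
# export PYTHONPATH=`pwd`:$PYTHONPATH from the checkout does.
env["PYTHONPATH"] = os.path.join(root, "PaddleDetection")
good = subprocess.run([sys.executable, "-c", "import ppdet; print('ok')"],
                      env=env, capture_output=True, text=True)
```

From a real checkout the equivalent fix is `cd PaddleDetection && export PYTHONPATH=\`pwd\`:$PYTHONPATH` before running the tools.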

    Bbox classes are wrong when testing after training

    Testing after 500 training iterations on a single image gave fairly good results.
    As shown:
    aachen58

    aachen155

    But after I enlarged the dataset from that single image to 150 images,
    some of the predicted classes after training are wrong.
    Convergence is also slow, and --eval shows AP and AR are tiny, in the 0.01-0.1 range.
    I'd like to know what could cause this.

    Test results after 40k iterations are shown below:

    aachen143
    aachen88
    The person class is correct, but car and terrain are wrong. What could cause this?

    How to specify batch size

    Hi guys, sorry if this is obvious but how do I specify what batch size to use in order to run for example mask rcnn on GPU? I don't see that flag anywhere.

    Error when with_extra_blocks is disabled in the config while training mobilenet_v1_ssd

    Setting with_extra_blocks to true works fine.

    The trace is:

    Traceback (most recent call last):
      File "tools/train.py", line 340, in <module>
        main()
      File "tools/train.py", line 128, in main
        train_fetches = model.train(feed_vars)
      File "$paddlepaddle/work/PaddleDetection/ppdet/modeling/architectures/ssd.py", line 94, in train
        return self.build(feed_vars, 'train')
      File "$paddlepaddle/work/PaddleDetection/ppdet/modeling/architectures/ssd.py", line 82, in build
        inputs=body_feats, image=im, num_classes=self.num_classes)
      File "$paddlepaddle/work/PaddleDetection/ppdet/core/workspace.py", line 113, in partial_apply
        return op(*args, **kwargs_)
      File "/usr/local/lib/python3.5/dist-packages/paddle/fluid/layers/detection.py", line 2154, in multi_box_head
        'aspect_ratios should be list or tuple, and the length of inputs '
      File "/usr/local/lib/python3.5/dist-packages/paddle/fluid/layers/detection.py", line 2131, in _is_list_or_tuple_and_equal
        raise ValueError(err_info)
    ValueError: aspect_ratios should be list or tuple, and the length of inputs and aspect_ratios should be the same.

    Also, could you explain what the "extra blocks" are here? Is there a paper or writeup describing the idea?
    Thanks
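The ValueError above comes from a plain length check: fluid.layers.multi_box_head wants one aspect_ratios entry per input feature map, and disabling with_extra_blocks shrinks the list of feature maps the backbone returns. A paraphrased sketch of that check (not the actual Paddle source):

```python
def check_aspect_ratios(inputs, aspect_ratios):
    # One aspect_ratios entry is required per input feature map; the
    # error message mirrors the one quoted in the traceback above.
    if not isinstance(aspect_ratios, (list, tuple)) or \
            len(inputs) != len(aspect_ratios):
        raise ValueError(
            'aspect_ratios should be list or tuple, and the length of '
            'inputs and aspect_ratios should be the same.')

# After dropping the extra blocks fewer feature maps remain, so the
# original six-entry aspect_ratios list in the SSD config no longer
# matches; the config list must be shortened to the same length.
check_aspect_ratios(['c4', 'c5'], [[2.0], [2.0, 3.0]])  # consistent
```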

    Logic error in x2coco.py

    version: release/0.1

    When the loop reaches line 267, if test_proportion is 0.0 the test directory has never been created, so the copy fails:

    255 count = 1
    256 for img_name in os.listdir(args.image_input_dir):
    257     if count <= train_num:
    258         shutil.copyfile(
    259             osp.join(args.image_input_dir, img_name),
    260             osp.join(args.output_dir + '/train/', img_name))
    261     else:
    262         if count <= train_num + val_num:
    263             shutil.copyfile(
    264                 osp.join(args.image_input_dir, img_name),
    265                 osp.join(args.output_dir + '/val/', img_name))
    266         else:
    267             shutil.copyfile(
    268                 osp.join(args.image_input_dir, img_name),
    269                 osp.join(args.output_dir + '/test/', img_name))
    270     count = count + 1
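A sketch of a fix (hypothetical function wrapping the loop above; the name and signature are illustrative): create all three output directories before copying, so the test branch can never hit a missing directory, even when test_proportion is 0.0.

```python
import os
import os.path as osp
import shutil

def split_images(image_input_dir, output_dir, train_num, val_num):
    """Copy images into train/val/test subsets (sketch of the x2coco.py
    loop; remaining images after train_num + val_num go to test)."""
    # Creating all three directories up front is the fix: the copy on
    # the 'test' branch no longer fails when that subset is empty.
    for sub in ('train', 'val', 'test'):
        os.makedirs(osp.join(output_dir, sub), exist_ok=True)
    count = 1
    for img_name in sorted(os.listdir(image_input_dir)):
        if count <= train_num:
            sub = 'train'
        elif count <= train_num + val_num:
            sub = 'val'
        else:
            sub = 'test'
        shutil.copyfile(osp.join(image_input_dir, img_name),
                        osp.join(output_dir, sub, img_name))
        count += 1
```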

    x2coco.py cannot find the label file for images named like xxx.xxx.jpg

    Version: 0.1.0

    Converting my own dataset with x2coco.py, type labelme.
    The label directory contains files named like
    1569478414.73587.json

    Running it produces the following error:

    Traceback (most recent call last):
      File "../PaddleDetection/ppdet/data/tools/x2coco.py", line 300, in <module>
        main()
      File "../PaddleDetection/ppdet/data/tools/x2coco.py", line 276, in main
        args.dataset_type, args.output_dir + '/train', args.json_input_dir)
      File "../PaddleDetection/ppdet/data/tools/x2coco.py", line 142, in deal_json
        with open(label_file) as f:
    FileNotFoundError: [Errno 2] No such file or directory: 'json_annotation/1569478414.json'

    The cause: when extracting the stem of a .jpg file name, only names of the form xxx.jpg are handled; names like xxx.xxx.jpg are not, so the corresponding label file cannot be found.

    --- a/ppdet/data/tools/x2coco.py
    +++ b/ppdet/data/tools/x2coco.py
    @@ -132,7 +132,8 @@ def deal_json(ds_type, img_path, json_path):
         image_num = -1
         object_num = -1
         for img_file in os.listdir(img_path):
    -        img_label = img_file.split('.')[0]
    +        img_label = os.path.splitext(img_file)[0]

    Error when using YOLOv3; the error message is below

    Traceback (most recent call last):
      File "tools/train.py", line 340, in <module>
        main()
      File "tools/train.py", line 149, in main
        fetches = model.eval(feed_vars)
      File "/data/project/PaddleDetection/ppdet/modeling/architectures/yolov3.py", line 83, in eval
        return self.build(feed_vars, mode='test')
      File "/data/project/PaddleDetection/ppdet/modeling/architectures/yolov3.py", line 56, in build
        body_feats = self.backbone(im)
      File "/data/project/PaddleDetection/ppdet/modeling/backbones/mobilenet.py", line 157, in __call__
        input, 3, int(32 * scale), 2, 1, name=self.prefix_name + "conv1")
      File "/data/project/PaddleDetection/ppdet/modeling/backbones/mobilenet.py", line 83, in _conv_norm
        bias_attr=False)
      File "/usr/local/lib/python3.6/dist-packages/paddle/fluid/layers/nn.py", line 2783, in conv2d
        default_initializer=_get_default_param_initializer())
      File "/usr/local/lib/python3.6/dist-packages/paddle/fluid/layer_helper_base.py", line 330, in create_parameter
        **attr._to_kwargs(with_initializer=True))
      File "/usr/local/lib/python3.6/dist-packages/paddle/fluid/framework.py", line 2384, in create_parameter
        param = Parameter(global_block, *args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/paddle/fluid/framework.py", line 4480, in __init__
        self, block, persistable=True, shape=shape, dtype=dtype, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/paddle/fluid/framework.py", line 673, in __init__
        "matched.".format(self.name, old_shape, shape))
    ValueError: Variable conv1_weights has been created before. the previous shape is (32, 3, 3, 3); the new shape is (32, 608, 3, 3). They are not matched.

    Training fails with the configs/dcn/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms config

    Reproduction details:
    /home/user/anaconda3/envs/mmlab/bin/python /home/user/PaddleDetection-release-0.1/tools/train.py -c ../configs/dcn/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms.yml --use_tb=True --tb_log_dir=../tb_1206_1452_cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms/scalar --eval
    CascadeBBoxAssigner:
      class_aware: true
      batch_size_per_im: 512
      bbox_reg_weights:
      - 10
      - 20
      - 30
      bg_thresh_hi:
      - 0.5
      - 0.6
      - 0.7
      bg_thresh_lo:
      - 0.0
      - 0.0
      - 0.0
      fg_fraction: 0.25
      fg_thresh:
      - 0.5
      - 0.6
      - 0.7
      num_classes: 81
      shuffle_before_sample: true
    CascadeBBoxHead:
      head: CascadeTwoFCHead
      nms: MultiClassSoftNMS
      num_classes: 81
    CascadeRCNNClsAware:
      backbone: ResNet
      rpn_head: FPNRPNHead
      bbox_assigner: CascadeBBoxAssigner
      bbox_head: CascadeBBoxHead
      fpn: FPN
      roi_extractor: FPNRoIAlign
    CascadeTwoFCHead:
      mlp_dim: 1024
    FPN:
      freeze_norm: false
      has_extra_convs: false
      max_level: 6
      min_level: 2
      norm_type: null
      num_chan: 256
      spatial_scale:
      - 0.03125
      - 0.0625
      - 0.125
      - 0.25
    FPNRPNHead:
      rpn_target_assign:
        rpn_batch_size_per_im: 256
        rpn_fg_fraction: 0.5
        rpn_negative_overlap: 0.3
        rpn_positive_overlap: 0.7
        rpn_straddle_thresh: 0.0
      test_proposal:
        min_size: 0.0
        nms_thresh: 0.7
        post_nms_top_n: 1000
        pre_nms_top_n: 1000
      train_proposal:
        min_size: 0.0
        nms_thresh: 0.7
        post_nms_top_n: 2000
        pre_nms_top_n: 2000
      anchor_generator:
        anchor_sizes:
        - 32
        - 64
        - 128
        - 256
        - 512
        aspect_ratios:
        - 0.5
        - 1.0
        - 2.0
        stride:
        - 16.0
        - 16.0
        variance:
        - 1.0
        - 1.0
        - 1.0
        - 1.0
      anchor_start_size: 32
      max_level: 6
      min_level: 2
      num_chan: 256
      num_classes: 1
    FPNRoIAlign:
      box_resolution: 14
      sampling_ratio: 2
      canconical_level: 4
      canonical_size: 224
      mask_resolution: 14
      max_level: 5
      min_level: 2
    FasterRCNNEvalFeed:
      batch_transforms:
      - !PadBatch
        pad_to_stride: 32
      dataset:
        annotation: annotations/instances_val2017.json
        dataset_dir: dataset/coco
        image_dir: val2017
      sample_transforms:
      - !DecodeImage
        to_rgb: true
        with_mixup: false
      - !NormalizeImage
        is_channel_first: false
        is_scale: true
        mean:
        - 0.485
        - 0.456
        - 0.406
        std:
        - 0.229
        - 0.224
        - 0.225
      - !ResizeImage
        interp: 1
        max_size: 2000
        target_size:
        - 1200
        use_cv2: true
      - !Permute
        channel_first: true
        to_bgr: false
      batch_size: 1
      drop_last: false
      enable_aug_flip: false
      enable_multiscale: false
      fields:
      - image
      - im_info
      - im_id
      - im_shape
      - gt_box
      - gt_label
      - is_difficult
      image_shape:
      - null
      - 3
      - null
      - null
      num_scale: 1
      num_workers: 2
      samples: -1
      shuffle: false
      use_padded_im_info: true
    FasterRCNNTestFeed:
      batch_transforms:
      - !PadBatch
        pad_to_stride: 32
      dataset:
        annotation: dataset/coco/annotations/instances_val2017.json
      batch_size: 1
      drop_last: false
      fields:
      - image
      - im_info
      - im_id
      - im_shape
      image_shape:
      - null
      - 3
      - null
      - null
      num_workers: 2
      sample_transforms:
      - !DecodeImage
        to_rgb: true
        with_mixup: false
      - !NormalizeImage
        is_channel_first: false
        is_scale: true
        mean:
        - 0.485
        - 0.456
        - 0.406
        std:
        - 0.229
        - 0.224
        - 0.225
      - !ResizeImage
        interp: 1
        max_size: 1333
        target_size: 800
        use_cv2: true
      - !Permute
        channel_first: true
        to_bgr: false
      samples: -1
      shuffle: false
      use_padded_im_info: true
    FasterRCNNTrainFeed:
      batch_transforms:
      - !PadBatch
        pad_to_stride: 32
      dataset:
        annotation: annotations/instances_train2017.json
        dataset_dir: dataset/coco
        image_dir: train2017
      sample_transforms:
      - !DecodeImage
        to_rgb: true
        with_mixup: false
      - !RandomFlipImage
        is_mask_flip: false
        is_normalized: false
        prob: 0.5
      - !NormalizeImage
        is_channel_first: false
        is_scale: true
        mean:
        - 0.485
        - 0.456
        - 0.406
        std:
        - 0.229
        - 0.224
        - 0.225
      - !ResizeImage
        interp: 1
        max_size: 1800
        target_size:
        - 416
        - 448
        - 480
        - 512
        - 544
        - 576
        - 608
        - 640
        - 672
        - 704
        - 736
        - 768
        - 800
        - 832
        - 864
        - 896
        - 928
        - 960
        - 992
        - 1024
        - 1056
        - 1088
        - 1120
        - 1152
        - 1184
        - 1216
        - 1248
        - 1280
        - 1312
        - 1344
        - 1376
        - 1408
        use_cv2: true
      - !Permute
        channel_first: true
        to_bgr: false
      batch_size: 1
      bufsize: 10
      class_aware_sampling: false
      drop_last: false
      fields:
      - image
      - im_info
      - im_id
      - gt_box
      - gt_label
      - is_crowd
      image_shape:
      - null
      - 3
      - null
      - null
      memsize: null
      num_workers: 2
      samples: -1
      shuffle: true
      use_process: false
    LearningRate:
      schedulers:
      - !PiecewiseDecay
        gamma: 0.1
        milestones:
        - 340000
        - 440000
        values: null
      - !LinearWarmup
        start_factor: 0.1
        steps: 1000
      base_lr: 0.01
    MultiClassSoftNMS:
      background_label: 0
      keep_top_k: 300
      normalized: false
      score_threshold: 0.01
      softnms_sigma: 0.5
    OptimizerBuilder:
      optimizer:
        momentum: 0.9
        type: Momentum
      regularizer:
        factor: 0.0001
        type: L2
    ResNet:
      dcn_v2_stages:
      - 3
      - 4
      - 5
      depth: 200
      nonlocal_stages:
      - 4
      norm_type: bn
      variant: d
      feature_maps:
      - 2
      - 3
      - 4
      - 5
      freeze_at: 2
      freeze_norm: true
      norm_decay: 0.0
      weight_prefix_name: ''
    architecture: CascadeRCNNClsAware
    eval_feed: FasterRCNNEvalFeed
    log_smooth_window: 20
    max_iters: 460000
    metric: COCO
    num_classes: 81
    pretrain_weights: https://paddle-imagenet-models-name.bj.bcebos.com/ResNet200_vd_pretrained.tar
    save_dir: output
    snapshot_iter: 10000
    test_feed: FasterRCNNTestFeed
    train_feed: FasterRCNNTrainFeed
    use_gpu: true
    weights: output/cascade_rcnn_cls_aware_r200_vd_fpn_dcnv2_nonlocal_softnms/model_final

    Traceback (most recent call last):
      File "/home/user/PaddleDetection-release-0.1/tools/train.py", line 341, in <module>
        main()
      File "/home/user/PaddleDetection-release-0.1/tools/train.py", line 129, in main
        train_fetches = model.train(feed_vars)
      File "/home/user/PaddleDetection-release-0.1/ppdet/modeling/architectures/cascade_rcnn_cls_aware.py", line 178, in train
        return self.build(feed_vars, 'train')
      File "/home/user/PaddleDetection-release-0.1/ppdet/modeling/architectures/cascade_rcnn_cls_aware.py", line 88, in build
        body_feats = self.backbone(im)
      File "/home/user/PaddleDetection-release-0.1/ppdet/modeling/backbones/resnet.py", line 432, in __call__
        res = self.layer_warp(res, i)
      File "/home/user/PaddleDetection-release-0.1/ppdet/modeling/backbones/resnet.py", line 383, in layer_warp
        nonlocal_name + '_{}'.format(i), int(dim_in / 2) )
      File "/home/user/PaddleDetection-release-0.1/ppdet/modeling/backbones/nonlocal_helper.py", line 152, in add_space_nonlocal
        conv = space_nonlocal(input, dim_in, dim_out, prefix, dim_inner)
      File "/home/user/PaddleDetection-release-0.1/ppdet/modeling/backbones/nonlocal_helper.py", line 101, in space_nonlocal
        t_re = fluid.layers.reshape(t, shape=list(theta_shape), actual_shape=theta_shape_op )
      File "/home/user/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/layers/nn.py", line 8976, in reshape
        attrs["shape"] = get_attr_shape(shape)
      File "/home/user/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/layers/nn.py", line 8949, in get_attr_shape
        "be -1. But received shape[%d] is also -1." % dim_idx)
    AssertionError: Only one dimension value of 'shape' in reshape can be -1. But received shape[2] is also -1.

    Process finished with exit code 1

    [For reference] Python inference and visualization with an exported YOLOv3 model

    cd PaddleDetection
    export PYTHONPATH=`pwd`:$PYTHONPATH
    

    Export the model:

    python tools/export_model.py -c configs/dcn/yolov3_r50vd_dcn.yml \
            -o weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_r50vd_dcn_imagenet.tar \
            --output_dir=inference_model \
    

    The exported model is saved under inference_model/yolov3_r50vd_dcn.

    The file infer.py is shown below; running python infer.py produces test_demo.jpg

    import paddle.fluid as fluid
    import numpy as np
    import cv2
    from PIL import Image, ImageDraw
    from ppdet.utils.coco_eval import get_category_info
    
    def Permute(im, channel_first=True, to_bgr=False):
        if channel_first:
            im = np.swapaxes(im, 1, 2)
            im = np.swapaxes(im, 1, 0)
        if to_bgr:
            im = im[[2, 1, 0], :, :]
        return im
    
    
    def DecodeImage(im_path):
        with open(im_path, 'rb') as f:
            im = f.read()
        data = np.frombuffer(im, dtype='uint8')
        im = cv2.imdecode(data, 1)  # BGR mode, but need RGB mode
        im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
        return im
    
    
    def ResizeImage(im, target_size=608, max_size=0):
        if len(im.shape) != 3:
            raise ValueError('image is not 3-dimensional.')  # ImageError is undefined in this standalone script
        im_shape = im.shape
        print(im_shape)
        im_size_min = np.min(im_shape[0:2])
        im_size_max = np.max(im_shape[0:2])
        if float(im_size_min) == 0:
            raise ZeroDivisionError('min size of image is 0')
        if max_size != 0:
            im_scale = float(target_size) / float(im_size_min)
            # Prevent the biggest axis from being more than max_size
            if np.round(im_scale * im_size_max) > max_size:
                im_scale = float(max_size) / float(im_size_max)
            im_scale_x = im_scale
            im_scale_y = im_scale
        else:
            im_scale_x = float(target_size) / float(im_shape[1])
            im_scale_y = float(target_size) / float(im_shape[0])
        
        im = cv2.resize(
                 im,
                 None,
                 None,
                 fx=im_scale_x,
                 fy=im_scale_y,
                 interpolation=2)
        return im
    
    
    def NormalizeImage(im,mean = [0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], is_scale=True):
        """Normalize the image.
        Operators:
            1.(optional) Scale the image to [0,1]
            2. Each pixel minus mean and is divided by std
        """
        im = im.astype(np.float32, copy=False)
        mean = np.array(mean)[np.newaxis, np.newaxis, :]
        std = np.array(std)[np.newaxis, np.newaxis, :]
        if is_scale:
            im = im / 255.0
        im -= mean
        im /= std
        return im
    
    
    def Prepocess(img_path):
        test_img = DecodeImage(img_path)
        img_shape = test_img.shape[:2]
        test_img = ResizeImage(test_img)
        test_img = NormalizeImage(test_img)
        test_img = Permute(test_img)
        test_img = test_img[np.newaxis,:]#.reshape(1, 3, 608, 608)
        return test_img, img_shape
    
    def colormap(rgb=False):
        """
        Get colormap
        """
        color_list = np.array([
            0.000, 0.447, 0.741, 0.850, 0.325, 0.098, 0.929, 0.694, 0.125, 0.494,
            0.184, 0.556, 0.466, 0.674, 0.188, 0.301, 0.745, 0.933, 0.635, 0.078,
            0.184, 0.300, 0.300, 0.300, 0.600, 0.600, 0.600, 1.000, 0.000, 0.000,
            1.000, 0.500, 0.000, 0.749, 0.749, 0.000, 0.000, 1.000, 0.000, 0.000,
            0.000, 1.000, 0.667, 0.000, 1.000, 0.333, 0.333, 0.000, 0.333, 0.667,
            0.000, 0.333, 1.000, 0.000, 0.667, 0.333, 0.000, 0.667, 0.667, 0.000,
            0.667, 1.000, 0.000, 1.000, 0.333, 0.000, 1.000, 0.667, 0.000, 1.000,
            1.000, 0.000, 0.000, 0.333, 0.500, 0.000, 0.667, 0.500, 0.000, 1.000,
            0.500, 0.333, 0.000, 0.500, 0.333, 0.333, 0.500, 0.333, 0.667, 0.500,
            0.333, 1.000, 0.500, 0.667, 0.000, 0.500, 0.667, 0.333, 0.500, 0.667,
            0.667, 0.500, 0.667, 1.000, 0.500, 1.000, 0.000, 0.500, 1.000, 0.333,
            0.500, 1.000, 0.667, 0.500, 1.000, 1.000, 0.500, 0.000, 0.333, 1.000,
            0.000, 0.667, 1.000, 0.000, 1.000, 1.000, 0.333, 0.000, 1.000, 0.333,
            0.333, 1.000, 0.333, 0.667, 1.000, 0.333, 1.000, 1.000, 0.667, 0.000,
            1.000, 0.667, 0.333, 1.000, 0.667, 0.667, 1.000, 0.667, 1.000, 1.000,
            1.000, 0.000, 1.000, 1.000, 0.333, 1.000, 1.000, 0.667, 1.000, 0.167,
            0.000, 0.000, 0.333, 0.000, 0.000, 0.500, 0.000, 0.000, 0.667, 0.000,
            0.000, 0.833, 0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 0.167, 0.000,
            0.000, 0.333, 0.000, 0.000, 0.500, 0.000, 0.000, 0.667, 0.000, 0.000,
            0.833, 0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 0.167, 0.000, 0.000,
            0.333, 0.000, 0.000, 0.500, 0.000, 0.000, 0.667, 0.000, 0.000, 0.833,
            0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 0.143, 0.143, 0.143, 0.286,
            0.286, 0.286, 0.429, 0.429, 0.429, 0.571, 0.571, 0.571, 0.714, 0.714,
            0.714, 0.857, 0.857, 0.857, 1.000, 1.000, 1.000
        ]).astype(np.float32)
        color_list = color_list.reshape((-1, 3)) * 255
        if not rgb:
            color_list = color_list[:, ::-1]
        return color_list
    
    def draw_bbox(image, catid2name, bboxes, threshold):
        """
        Draw bbox on image
        """
        draw = ImageDraw.Draw(image)
    
        catid2color = {}
        color_list = colormap(rgb=True)[:40]
        for dt in np.array(bboxes):
            catid, bbox, score = dt['category_id'], dt['bbox'], dt['score']
            if score < threshold:
                continue
    
            xmin, ymin, w, h = bbox
            xmax = xmin + w
            ymax = ymin + h
    
            if catid not in catid2color:
                idx = np.random.randint(len(color_list))
                catid2color[catid] = color_list[idx]
            color = tuple(catid2color[catid])
    
            # draw bbox
            draw.line(
                [(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin),
                 (xmin, ymin)],
                width=2,
                fill=color)
    
            # draw label
            text = "{} {:.2f}".format(catid2name[catid], score)
            tw, th = draw.textsize(text)
            draw.rectangle(
                [(xmin + 1, ymin - th), (xmin + tw + 1, ymin)], fill=color)
            draw.text((xmin + 1, ymin - th), text, fill=(255, 255, 255))
    
        return image
    
    def clip_bbox(bbox):
        xmin = max(min(bbox[0], 1.), 0.)
        ymin = max(min(bbox[1], 1.), 0.)
        xmax = max(min(bbox[2], 1.), 0.)
        ymax = max(min(bbox[3], 1.), 0.)
        return xmin, ymin, xmax, ymax
    
    def bbox2out(results, clsid2catid, is_bbox_normalized=False):
        """
        Args:
            results: request a dict, should include: `bbox`, `im_id`,
                     if is_bbox_normalized=True, also need `im_shape`.
            clsid2catid: class id to category id map of COCO2017 dataset.
            is_bbox_normalized: whether or not bbox is normalized.
        """
        xywh_res = []
        for t in results:
            bboxes = t['bbox'][0]
            lengths = t['bbox'][1][0]
            # check for None first, otherwise .shape raises AttributeError
            if bboxes is None or bboxes.shape == (1, 1):
                continue
    
            k = 0
            for i in range(len(lengths)):
                num = lengths[i]
                for j in range(num):
                    dt = bboxes[k]
                    clsid, score, xmin, ymin, xmax, ymax = dt.tolist()
                    catid = (clsid2catid[int(clsid)])
    
                    if is_bbox_normalized:
                        xmin, ymin, xmax, ymax = \
                                clip_bbox([xmin, ymin, xmax, ymax])
                        w = xmax - xmin
                        h = ymax - ymin
                        im_height, im_width = t['im_shape'][i].tolist()
                        xmin *= im_width
                        ymin *= im_height
                        w *= im_width
                        h *= im_height
                    else:
                        w = xmax - xmin + 1
                        h = ymax - ymin + 1
    
                    bbox = [xmin, ymin, w, h]
                    coco_res = {
                        'category_id': catid,
                        'bbox': bbox,
                        'score': score
                    }
                    xywh_res.append(coco_res)
                    k += 1
        return xywh_res
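    For reference, the two coordinate conversions inside `bbox2out` can be sketched in isolation with hypothetical numbers (the `+ 1` follows the inclusive pixel-coordinate convention used above):

    ```python
    # Hypothetical detection in absolute pixel coordinates (xyxy).
    xmin, ymin, xmax, ymax = 10.0, 20.0, 50.0, 80.0
    pixel_xywh = [xmin, ymin, xmax - xmin + 1, ymax - ymin + 1]  # inclusive endpoints

    # The same idea for a box normalized to [0, 1], scaled back to a
    # hypothetical 640x480 image, as in the is_bbox_normalized branch.
    im_w, im_h = 640, 480
    nxmin, nymin, nxmax, nymax = 0.25, 0.25, 0.75, 0.75
    norm_xywh = [nxmin * im_w, nymin * im_h,
                 (nxmax - nxmin) * im_w, (nymax - nymin) * im_h]
    print(pixel_xywh)  # [10.0, 20.0, 41.0, 61.0]
    print(norm_xywh)   # [160.0, 120.0, 320.0, 240.0]
    ```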
    
    def test():
        infer_prog = fluid.Program()
        startup_prog = fluid.Program()
        
        place = fluid.CUDAPlace(0)
        exe = fluid.Executor(place)
        exe.run(startup_prog)
        
        path = "inference_model/yolov3_r50vd_dcn"
        img_path = "demo/000000014439.jpg"
        
        test_img, img_shape = Prepocess(img_path)
        print("shape of test_img:", test_img.shape)
        img_shape = np.array(img_shape).reshape(1, 2)
        img_shape = img_shape.astype('int32')
        print(img_shape.dtype)
        #exit()
        [inference_program, feed_target_names, fetch_targets] = (fluid.io.load_inference_model(
            dirname=path, executor=exe, model_filename='__model__', params_filename='__params__'))
        print(feed_target_names, test_img.shape, img_shape.shape)
        outs = exe.run(inference_program,
                  feed={feed_target_names[0]: test_img, feed_target_names[1]: img_shape},
                  fetch_list=fetch_targets,
                  return_numpy=False)
        print(img_shape)
        res = {
                 'bbox': (np.array(outs[0]), outs[0].recursive_sequence_lengths()),
                  'im_shape': img_shape
              }
    
        clsid2catid, catid2name = get_category_info(None, False, True)
        bbox_results = bbox2out([res], clsid2catid, False)
        print(bbox_results)
    
        image = Image.open(img_path).convert('RGB')
        image = draw_bbox(image, catid2name, bbox_results, 0.5)
        image.save('test_demo.jpg', quality=95)
    
    if __name__ == '__main__':
        test()
    
    

    Open Images object detection: shape is set incorrectly

    The OIDV5 object detection model reports an error when running inference directly:

    Traceback (most recent call last):
      File "tools/infer.py", line 271, in <module>
        main()
      File "tools/infer.py", line 131, in main
        test_fetches = model.test(feed_vars)
      File "/Users/admin/projects/PaddleDetection/ppdet/modeling/architectures/cascade_rcnn_cls_aware.py", line 184, in test
        return self.build(feed_vars, 'test')
      File "/Users/admin/projects/PaddleDetection/ppdet/modeling/architectures/cascade_rcnn_cls_aware.py", line 88, in build
        body_feats = self.backbone(im)
      File "/Users/admin/projects/PaddleDetection/ppdet/modeling/backbones/resnet.py", line 432, in __call__
        res = self.layer_warp(res, i)
      File "/Users/admin/projects/PaddleDetection/ppdet/modeling/backbones/resnet.py", line 383, in layer_warp
        nonlocal_name + '_{}'.format(i), int(dim_in / 2))
      File "/Users/admin/projects/PaddleDetection/ppdet/modeling/backbones/nonlocal_helper.py", line 152, in add_space_nonlocal
        conv = space_nonlocal(input, dim_in, dim_out, prefix, dim_inner)
      File "/Users/admin/projects/PaddleDetection/ppdet/modeling/backbones/nonlocal_helper.py", line 101, in space_nonlocal
        t_re = fluid.layers.reshape(t, shape=list(theta_shape), actual_shape=theta_shape_op)
      File "/Users/admin/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/layers/nn.py", line 8976, in reshape
        attrs["shape"] = get_attr_shape(shape)
      File "/Users/admin/anaconda3/envs/paddle/lib/python3.7/site-packages/paddle/fluid/layers/nn.py", line 8949, in get_attr_shape
        "be -1. But received shape[%d] is also -1." % dim_idx)

    Only one dimension value of 'shape' in reshape can be -1. But received shape[2] is also -1.

    It is probably a misconfigured parameter. How should I fix it?

    I am using the CPU version of Paddle on macOS.

    Thanks.
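    For context, the rule behind this message can be reproduced with plain NumPy, which enforces the same constraint on `reshape` (a sketch, not PaddleDetection code):

    ```python
    import numpy as np

    x = np.zeros((2, 3, 4))

    # A single -1 is allowed: that dimension is inferred from the rest.
    print(x.reshape(2, -1).shape)  # (2, 12)

    # Two -1 entries are ambiguous and rejected, the same rule that
    # Paddle's reshape enforces in the traceback above.
    try:
        x.reshape(-1, -1, 4)
    except ValueError as err:
        print("rejected:", err)
    ```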

    generate_proposal reports an error

    When training the cascade_rcnn_cls_aware_r101_vd_fpn model, I ran into this problem:

    Traceback (most recent call last):
      File "tools/train.py", line 340, in <module>
        main()
      File "tools/train.py", line 246, in main
        outs = exe.run(compiled_train_prog, fetch_list=train_values)
      File "/opt/_internal/cpython-3.6.0/lib/python3.6/site-packages/paddle/fluid/executor.py", line 774, in run
        six.reraise(*sys.exc_info())
      File "/opt/_internal/cpython-3.6.0/lib/python3.6/site-packages/six.py", line 693, in reraise
        raise value
      File "/opt/_internal/cpython-3.6.0/lib/python3.6/site-packages/paddle/fluid/executor.py", line 769, in run
        use_program_cache=use_program_cache)
      File "/opt/_internal/cpython-3.6.0/lib/python3.6/site-packages/paddle/fluid/executor.py", line 828, in _run_impl
        return_numpy=return_numpy)
      File "/opt/_internal/cpython-3.6.0/lib/python3.6/site-packages/paddle/fluid/executor.py", line 668, in _run_parallel
        tensors = exe.run(fetch_var_names)._move_to_list()
    paddle.fluid.core_avx.EnforceNotMet:
    
    --------------------------------------------
    C++ Call Stacks (More useful to developers):
    --------------------------------------------
    0   std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&&&, char const*, int)
    1   paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
    2   void paddle::operators::GPUGather<float, int>(paddle::platform::DeviceContext const&, paddle::framework::Tensor const&, paddle::framework::Tensor const&, paddle::framework::Tensor*)
    3   paddle::operators::CUDAGenerateProposalsKernel<paddle::platform::CUDADeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
    4   _ZNSt17_Function_handlerIFvRKN6paddle9framework16ExecutionContextEEZNKS1_24OpKernelRegistrarFunctorINS0_8platform9CUDAPlaceELb0ELm0EJNS0_9operators27CUDAGenerateProposalsKernelINS7_17CUDADeviceContextEfEEEEclEPKcSF_iEUlS4_E_E9_M_invokeERKSt9_Any_dataS4_
    5   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
    6   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
    7   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
    8   paddle::framework::details::ComputationOpHandle::RunImpl()
    9   paddle::framework::details::FastThreadedSSAGraphExecutor::RunOpSync(paddle::framework::details::OpHandleBase*)
    10  paddle::framework::details::FastThreadedSSAGraphExecutor::RunOp(paddle::framework::details::OpHandleBase*, std::shared_ptr<paddle::framework::BlockingQueue<unsigned long> > const&, unsigned long*)
    11  std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()(), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<void>, std::__future_base::_Result_base::_Deleter>, void> >::_M_invoke(std::_Any_data const&)
    12  std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()()>&, bool&)
    13  _ZZN10ThreadPoolC1EmENKUlvE_clEv
    
    ------------------------------------------
    Python Call Stacks (More useful to users):
    ------------------------------------------
      File "/opt/_internal/cpython-3.6.0/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2508, in append_op
        attrs=kwargs.get("attrs", None))
      File "/opt/_internal/cpython-3.6.0/lib/python3.6/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
        return self.main_program.current_block().append_op(*args, **kwargs)
      File "/opt/_internal/cpython-3.6.0/lib/python3.6/site-packages/paddle/fluid/layers/detection.py", line 2810, in generate_proposals
        'RpnRoiProbs': rpn_roi_probs})
      File "/ssd1/xiege/PaddleDetection/ppdet/core/workspace.py", line 113, in partial_apply
        return op(*args, **kwargs_)
      File "/ssd1/xiege/PaddleDetection/ppdet/modeling/anchor_heads/rpn_head.py", line 438, in _get_single_proposals
        variances=self.anchor_var)
      File "/ssd1/xiege/PaddleDetection/ppdet/modeling/anchor_heads/rpn_head.py", line 462, in get_proposals
        fpn_feat, im_info, lvl, mode)
      File "/ssd1/xiege/PaddleDetection/ppdet/modeling/architectures/cascade_rcnn_cls_aware.py", line 95, in build
        rpn_rois = self.rpn_head.get_proposals(body_feats, im_info, mode=mode)
      File "/ssd1/xiege/PaddleDetection/ppdet/modeling/architectures/cascade_rcnn_cls_aware.py", line 178, in train
        return self.build(feed_vars, 'train')
      File "tools/train.py", line 128, in main
        train_fetches = model.train(feed_vars)
      File "tools/train.py", line 340, in <module>
        main()
    
    ----------------------
    Error Message Summary:
    ----------------------
    Error: The index of gather_op should not be empty when the index's rank is 1.
      [Hint: Expected index.dims()[0] > 0, but received index.dims()[0]:0 <= 0:0.] at (/ssd3/xiege/paddle/Paddle/paddle/fluid/operators/gather.cu.h:82)
      [operator < generate_proposals > error]
    terminate called without an active exception
    W1202 12:22:18.564357  5013 init.cc:205] *** Aborted at 1575289338 (unix time) try "date -d @1575289338" if you are using GNU date ***
    W1202 12:22:18.566856  5013 init.cc:205] PC: @                0x0 (unknown)
    W1202 12:22:18.567122  5013 init.cc:205] *** SIGABRT (@0x12c2) received by PID 4802 (TID 0x7f25b6bfd700) from PID 4802; stack trace: ***
    W1202 12:22:18.569178  5013 init.cc:205]     @     0x7f2c6a0647e0 (unknown)
    W1202 12:22:18.571429  5013 init.cc:205]     @     0x7f2c694604f5 __GI_raise
    W1202 12:22:18.573428  5013 init.cc:205]     @     0x7f2c69461cd5 __GI_abort
    W1202 12:22:18.574733  5013 init.cc:205]     @     0x7f2bd822aa8d __gnu_cxx::__verbose_terminate_handler()
    W1202 12:22:18.575976  5013 init.cc:205]     @     0x7f2bd8228be6 (unknown)
    W1202 12:22:18.577111  5013 init.cc:205]     @     0x7f2bd8228c13 std::terminate()
    W1202 12:22:18.578280  5013 init.cc:205]     @     0x7f2bd82288a6 __gxx_personality_v0
    W1202 12:22:18.579454  5013 init.cc:205]     @     0x7f2c154bb1ce (unknown)
    W1202 12:22:18.580602  5013 init.cc:205]     @     0x7f2c154bb2b4 _Unwind_ForcedUnwind
    W1202 12:22:18.582444  5013 init.cc:205]     @     0x7f2c6a062f60 __GI___pthread_unwind
    W1202 12:22:18.584290  5013 init.cc:205]     @     0x7f2c6a05d175 __pthread_exit
    W1202 12:22:18.586236  5013 init.cc:205]     @     0x7f2c6a426c5f PyThread_exit_thread
    W1202 12:22:18.588135  5013 init.cc:205]     @     0x7f2c6a3d32aa PyEval_RestoreThread
    W1202 12:22:18.590368  5013 init.cc:205]     @     0x7f2baac61dd9 pybind11::gil_scoped_release::~gil_scoped_release()
    W1202 12:22:18.591140  5013 init.cc:205]     @     0x7f2baac14643 _ZZN8pybind1112cpp_function10initializeIZN6paddle6pybindL22pybind11_init_core_avxERNS_6moduleEEUlRNS2_9operators6reader22LoDTensorBlockingQueueERKSt6vectorINS2_9framework9LoDTensorESaISC_EEE58_bIS9_SG_EINS_4nameENS_9is_methodENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNESY_
    W1202 12:22:18.593189  5013 init.cc:205]     @     0x7f2baac83fb1 pybind11::cpp_function::dispatcher()
    W1202 12:22:18.595247  5013 init.cc:205]     @     0x7f2c6a33cfbd _PyCFunction_FastCallDict
    W1202 12:22:18.597225  5013 init.cc:205]     @     0x7f2c6a3d4138 (unknown)
    W1202 12:22:18.599215  5013 init.cc:205]     @     0x7f2c6a3d7acc _PyEval_EvalFrameDefault
    W1202 12:22:18.601167  5013 init.cc:205]     @     0x7f2c6a3d3fce (unknown)
    W1202 12:22:18.603116  5013 init.cc:205]     @     0x7f2c6a3d45f3 PyEval_EvalCodeEx
    W1202 12:22:18.605052  5013 init.cc:205]     @     0x7f2c6a3163f3 (unknown)
    W1202 12:22:18.606988  5013 init.cc:205]     @     0x7f2c6a2e434a PyObject_Call
    W1202 12:22:18.608937  5013 init.cc:205]     @     0x7f2c6a3d7e55 _PyEval_EvalFrameDefault
    W1202 12:22:18.610893  5013 init.cc:205]     @     0x7f2c6a3d3660 (unknown)
    W1202 12:22:18.612833  5013 init.cc:205]     @     0x7f2c6a3d4584 (unknown)
    W1202 12:22:18.614792  5013 init.cc:205]     @     0x7f2c6a3d7acc _PyEval_EvalFrameDefault
    W1202 12:22:18.616705  5013 init.cc:205]     @     0x7f2c6a3d3660 (unknown)
    W1202 12:22:18.618620  5013 init.cc:205]     @     0x7f2c6a3d4584 (unknown)
    W1202 12:22:18.620504  5013 init.cc:205]     @     0x7f2c6a3d7acc _PyEval_EvalFrameDefault
    W1202 12:22:18.622380  5013 init.cc:205]     @     0x7f2c6a3d3660 (unknown)
    W1202 12:22:18.624182  5013 init.cc:205]     @     0x7f2c6a3dcc66 _PyFunction_FastCallDict
    

    I ran training 10 times and hit this once; I am not sure whether it is a problem with the generate_proposal op.

    Is there no concept of an epoch in Detection?

    Is the number of training epochs computed like this?
    If my maximum number of iterations is 180K, batch_size=4, and I have about 300 training images, does that mean 300/4 = 75 epochs? Or is it the max iteration count divided by the image count, 180000/300 = 600 epochs?
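    For reference, a minimal sketch of the usual iteration/epoch arithmetic, assuming one iteration consumes one batch (the 300-image and batch_size=4 numbers come from the question above):

    ```python
    num_images = 300
    batch_size = 4
    max_iters = 180_000

    # One epoch = one full pass over the data.
    iters_per_epoch = num_images // batch_size   # 75 iterations per epoch
    epochs = max_iters / iters_per_epoch         # 180000 / 75 = 2400 epochs
    print(iters_per_epoch, epochs)
    ```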

    Loading a model for inference fails with "it holds double, but desires to be float"

    The model can be loaded now:

        [inference_program, feed_target_names, fetch_targets] = fluid.io.load_inference_model(
            dirname=path, executor=exe, model_filename="model", params_filename="params")

    But after setting im_info = np.array([800., 800., 1.]) and then running

        batch_outputs = exe.run(inference_program,
                                feed={feed_target_names[0]: tensor_img,
                                      feed_target_names[1]: im_info,
                                      feed_target_names[2]: image_shape[np.newaxis, :]},
                                fetch_list=fetch_targets,
                                return_numpy=False)

    it reports the error

        PaddleCheckError: Tensor holds the wrong type, it holds double, but desires to be float at [D:\1.6.1\paddle\paddle/fluid/framework/tensor_impl.h:30]

    @qingqing01
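    A likely cause, sketched with plain NumPy: float literals make `np.array` default to float64 ("double"), while the inference program expects float32 inputs. The exact feed names and shapes depend on the exported model; the explicit cast below is a hedged suggestion, not confirmed PaddleDetection usage:

    ```python
    import numpy as np

    im_info = np.array([800., 800., 1.])
    print(im_info.dtype)   # float64, which triggers "holds double, but desires to be float"

    # Cast explicitly to float32 and keep a leading batch dimension.
    im_info = np.array([[800., 800., 1.]], dtype=np.float32)
    print(im_info.dtype, im_info.shape)   # float32 (1, 3)
    ```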

    Training works, but evaluating during training as well as standalone evaluation/testing fail

    The error message is:

    2019-12-06 09:52:00,229-INFO: Load categories from /home/shuxsu/models-develop/PaddleCV/PaddleDetection/dataset/coco/annotations/instance_val.json
    loading annotations into memory...
    Done (t=0.00s)
    creating index...
    index created!
    2019-12-06 09:52:00,692-INFO: Infer iter 0
    Traceback (most recent call last):
      File "infer.py", line 270, in <module>
        main()
      File "infer.py", line 194, in main
        bbox_results = bbox2out([res], clsid2catid, is_bbox_normalized)
      File "/home/shuxsu/models-develop/PaddleCV/PaddleDetection/ppdet/utils/coco_eval.py", line 217, in bbox2out
        catid = (clsid2catid[int(clsid)])
    KeyError: 6
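    The `KeyError: 6` means the model predicted a class id that is missing from `clsid2catid`, typically because `num_classes` in the config does not match the category count in the annotation file. A minimal sketch of the failure, using a hypothetical 5-entry map:

    ```python
    # Hypothetical map built from an annotation file with only 5 categories.
    clsid2catid = {i: i + 1 for i in range(5)}   # keys 0..4

    clsid = 6                                    # the model's num_classes is larger
    print(clsid in clsid2catid)                  # False -> clsid2catid[6] raises KeyError: 6
    ```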
