
yolov4-ncnn-raspberry-pi-4's Introduction

YoloV4 Raspberry Pi 4

output image

YoloV4 with the ncnn framework.

License

Paper: https://arxiv.org/pdf/2004.10934.pdf

Specially made for a bare Raspberry Pi 4; see the Q-engineering deep learning examples.


Benchmark.

All numbers are FPS and reflect only the inference timing; grabbing frames, post-processing and drawing are not taken into account. A minimal timing sketch follows the table.

Model | size | mAP | Jetson Nano | RPi 4 @ 1950 MHz | RPi 5 @ 2900 MHz | Rock 5 | RK3588 ¹ NPU | RK3566/68 ² NPU | Nano TensorRT | Orin TensorRT
----- | ---- | --- | ----------- | ---------------- | ---------------- | ------ | ------------ | --------------- | ------------- | -------------
NanoDet | 320x320 | 20.6 | 26.2 | 13.0 | 43.2 | 36.0 | | | |
NanoDet Plus | 416x416 | 30.4 | 18.5 | 5.0 | 30.0 | 24.9 | | | |
PP-PicoDet | 320x320 | 27.0 | 24.0 | 7.5 | 53.7 | 46.7 | | | |
YoloFastestV2 | 352x352 | 24.1 | 38.4 | 18.8 | 78.5 | 65.4 | | | |
YoloV2 ²⁰ | 416x416 | 19.2 | 10.1 | 3.0 | 24.0 | 20.0 | | | |
YoloV3 tiny ²⁰ | 352x352 | 16.6 | 17.7 | 4.4 | 18.1 | 15.0 | | | |
YoloV4 tiny | 416x416 | 21.7 | 16.1 | 3.4 | 17.5 | 22.4 | | | |
YoloV4 full | 608x608 | 45.3 | 1.3 | 0.2 | 1.82 | 1.5 | | | |
YoloV5 nano | 640x640 | 22.5 | 5.0 | 1.6 | 13.6 | 12.5 | 58.8 | 14.8 | 19.0 | 100
YoloV5 small | 640x640 | 22.5 | 5.0 | 1.6 | 6.3 | 12.5 | 37.7 | 11.7 | 9.25 | 100
YoloV6 nano | 640x640 | 35.0 | 10.5 | 2.7 | 15.8 | 20.8 | 63.0 | 18.0 | |
YoloV7 tiny | 640x640 | 38.7 | 8.5 | 2.1 | 14.4 | 17.9 | 53.4 | 16.1 | 15.0 |
YoloV8 nano | 640x640 | 37.3 | 14.5 | 3.1 | 20.0 | 16.3 | 53.1 | 18.2 | |
YoloV8 small | 640x640 | 44.9 | 4.5 | 1.47 | 11.0 | 9.2 | 28.5 | 8.9 | |
YoloV9 comp | 640x640 | 53.0 | 1.2 | 0.28 | 1.5 | 1.2 | | | |
YoloX nano | 416x416 | 25.8 | 22.6 | 7.0 | 38.6 | 28.5 | | | |
YoloX tiny | 416x416 | 32.8 | 11.35 | 2.8 | 17.2 | 18.1 | | | |
YoloX small | 640x640 | 40.5 | 3.65 | 0.9 | 4.5 | 7.5 | 30.0 | 10.0 | |

¹ The Rock 5 and the Orange Pi 5 have the RK3588 on board.
² The Rock 3, Radxa Zero 3 and Orange Pi 3B have the RK3566 on board.
²⁰ Recognizes 20 objects (VOC) instead of 80 (COCO).
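
For reference, inference-only timing of the kind reported above can be measured by clocking just the extractor call. The sketch below is a minimal assumption of how that looks with ncnn and OpenCV: the blob names "data" and "output" follow the ncnn yolov4 sample and may differ for other models, and the plain resize skips the letterboxing the real code uses.

#include <chrono>
#include <cstdio>
#include <opencv2/opencv.hpp>
#include "net.h"                 // ncnn

int main()
{
    ncnn::Net net;
    if (net.load_param("yolov4-tiny-opt.param") != 0) return -1;
    if (net.load_model("yolov4-tiny-opt.bin") != 0) return -1;

    cv::Mat bgr = cv::imread("busstop.jpg");
    if (bgr.empty()) return -1;

    // plain resize to the 416x416 network input (letterboxing omitted for brevity)
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(bgr.data, ncnn::Mat::PIXEL_BGR2RGB,
                                                 bgr.cols, bgr.rows, 416, 416);
    const float norm_vals[3] = {1 / 255.f, 1 / 255.f, 1 / 255.f};
    in.substract_mean_normalize(0, norm_vals);   // scale pixels to 0..1

    auto t0 = std::chrono::steady_clock::now();
    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);
    ncnn::Mat out;
    ex.extract("output", out);                   // inference happens here
    auto t1 = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    printf("inference only: %.2f ms  (%.1f FPS)\n", ms, 1000.0 / ms);
    return 0;
}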


Dependencies.

To run the application, you need:

  • A Raspberry Pi 4 with a 32 or 64-bit operating system. It can be the Raspberry Pi 64-bit OS, or Ubuntu 18.04 / 20.04. Install 64-bit OS
  • The Tencent ncnn framework installed. Install ncnn
  • OpenCV 64-bit installed. Install OpenCV 4.5
  • Code::Blocks installed. ($ sudo apt-get install codeblocks)

Installing the app.

To extract and run the network in Code::Blocks
$ mkdir MyDir
$ cd MyDir
$ wget https://github.com/Qengineering/YoloV4-ncnn-Raspberry-Pi-4/archive/refs/heads/master.zip
$ unzip -j master.zip
Remove master.zip, LICENSE and README.md as they are no longer needed.
$ rm master.zip
$ rm LICENSE
$ rm README.md

Your MyDir folder must now look like this:
parking.jpg
busstop.jpg
YoloV4.cbp
yolov4-tiny-opt.bin
yolov4-tiny-opt.param

If you want to run the full YoloV4 version, you also need:
yolov4.bin (download this 245 MB file from Mega)
yolov4.param
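
How the code switches between the tiny and the full model is not spelled out here. The issues further down mention a YOLOV4_TINY define, so the selection presumably looks something like the hedged sketch below (the file names are the ones listed above; the function and its organisation are assumptions).

#include "net.h"                 // ncnn

#define YOLOV4_TINY              // comment out to run the full 608x608 model

static int load_yolov4(ncnn::Net& net, int& target_size)
{
#ifdef YOLOV4_TINY
    target_size = 416;
    if (net.load_param("yolov4-tiny-opt.param") != 0) return -1;
    if (net.load_model("yolov4-tiny-opt.bin") != 0) return -1;
#else
    target_size = 608;
    if (net.load_param("yolov4.param") != 0) return -1;
    if (net.load_model("yolov4.bin") != 0) return -1;   // the 245 MB Mega download
#endif
    return 0;
}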


Running the app.

To run the application, load the project file YoloV4.cbp in Code::Blocks. For more information, or if you want to connect a camera to the app, follow the instructions at Hands-On.
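
As a rough idea of what the camera variant looks like, a bare-bones capture loop is sketched below. detect_yolov4() is the function named in the issues on this repo; the Object struct and draw_objects() follow the ncnn yolov4 example and are assumed to be defined in the same file.

#include <opencv2/opencv.hpp>

// Assumes this main() lives in yolov4.cpp, where the Object struct,
// detect_yolov4() and draw_objects() are defined.
int main()
{
    cv::VideoCapture cap(0);                 // 0 = first camera device
    if (!cap.isOpened()) return -1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        std::vector<Object> objects;
        detect_yolov4(frame, objects);       // inference
        draw_objects(frame, objects);        // draw boxes and labels
        cv::imshow("YoloV4", frame);
        if (cv::waitKey(1) == 27) break;     // press Esc to quit
    }
    return 0;
}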

Many thanks to nihui again!

output image



yolov4-ncnn-raspberry-pi-4's People

Contributors

qengineering


yolov4-ncnn-raspberry-pi-4's Issues

Segmentation fault

I cloned the repository and opened the YoloV4.cbp file, but I am getting a "Segmentation fault" error.
I am using a Raspberry Pi 4 with Code::Blocks.
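
A common cause of this crash is that Code::Blocks runs the binary from a working directory that does not contain the image and model files, so the code then works on empty data. A hedged fragment for the start of main() in yolov4.cpp that makes such a failure visible (file names as in this repo, everything else an assumption):

    cv::Mat img = cv::imread("busstop.jpg");
    if (img.empty())
    {
        fprintf(stderr, "image not found - check the Code::Blocks working directory\n");
        return -1;
    }

    ncnn::Net yolov4;
    if (yolov4.load_param("yolov4-tiny-opt.param") != 0 ||
        yolov4.load_model("yolov4-tiny-opt.bin") != 0)
    {
        fprintf(stderr, "model files not found or corrupt\n");
        return -1;
    }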

How to deal with scaled YoloV4-csp or YoloV4-tiny models?

Hi, I have two custom-trained scaled YoloV4 models (YoloV4-csp & Darknet YoloV4-Tiny) that detect 10 classes. I have reformatted them to work with your examples: the csp version via Pytorch->ONNX->MNN, and the tiny version via Darknet->tflite and Darknet->ncnn. I'm having a hard time figuring out how to adapt your yolov4.cpp code to set confidence or IOU thresholds for my tiny model. I'm not going to get into the YoloV4-csp model here because I couldn't convert it to ncnn, and the mnn/onnx versions are another can of worms. I don't even know where to begin...

Anyway, I couldn't rework your SSD code to get the tflite tiny model to run, but with some tweaking I did manage to get the ncnn version going with a webcam feed. I compared the topology of this repo's tiny .param with my custom-trained one, and the only thing that stands out are the csp feature shortcuts. For example, the topology of my model's last few layers looks like this:

Split 27_192_bn_leaky_split 1 2 27_192_bn_leaky 27_192_bn_leaky_split_0 27_192_bn_leaky_split_1 -23330=8,3,20,20,256,3,20,20,256
Convolution 28_200 1 1 27_192_bn_leaky_split_0 28_200_bn_leaky -23330=4,3,20,20,512 0=512 1=3 4=1 5=1 6=1179648 9=2 -23310=1,1.0e-01
Convolution 29_208 1 1 28_200_bn_leaky 29_208 -23330=4,3,20,20,45 0=45 1=1 5=1 6=23040
Yolov3DetectionOutput detection_out 1 1 29_208 yolo0 -23330=4,2,6,1,1 0=10 1=3 2=2.5e-01 -23304=12,1.0e+01,1.4e+01,2.3e+01,2.7e+01,3.7e+01,5.8e+01,8.1e+01,8.2e+01,1.35e+02,1.69e+02,3.44e+02,3.19e+02 -23305=3,1077936128,1082130432,1084227584 -23306=2,3.36e+01,3.36e+01
Convolution 32_236 1 1 27_192_bn_leaky_split_1 32_236_bn_leaky -23330=4,3,20,20,128 0=128 1=1 5=1 6=32768 9=2 -23310=1,1.0e-01
Interp 33_244 1 1 32_236_bn_leaky 33_244 -23330=4,3,40,40,128 0=1 1=2.0e+00 2=2.0e+00
Concat 34_247 2 1 33_244 23_167_bn_leaky_split_1 34_247 -23330=4,3,40,40,384
Convolution 35_250 1 1 34_247 35_250_bn_leaky -23330=4,3,40,40,256 0=256 1=3 4=1 5=1 6=884736 9=2 -23310=1,1.0e-01
Convolution 36_258 1 1 35_250_bn_leaky 36_258 -23330=4,3,40,40,45 0=45 1=1 5=1 6=11520
Yolov3DetectionOutput detection_out 1 1 36_258 yolo1 -23330=4,2,6,38,1 0=10 1=3 2=2.5e-01 -23304=12,1.0e+01,1.4e+01,2.3e+01,2.7e+01,3.7e+01,5.8e+01,8.1e+01,8.2e+01,1.35e+02,1.69e+02,3.44e+02,3.19e+02 -23305=3,1065353216,1073741824,1077936128 -23306=2,1.68e+01,1.68e+01

In the yolov4.cpp code, when I set ex.extract("yolo0", out), the confidence of my detected object is ~80%, whereas if I use the last layer's output ("yolo1") I get ~20-30% for the same object. I can't figure out how I should be merging the results of these two parallel output layers, or how to set their detection thresholds like you do in the YoloV5 example. Any advice with code would be greatly appreciated!
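
The two Yolov3DetectionOutput layers are the two detection heads of the tiny model (20x20 and 40x40 grids), so the same object scoring differently on each is expected; the usual approach is to extract both blobs and merge the detections. The per-head confidence threshold is the "2=" entry in the param file (2=2.5e-01, i.e. 0.25). Below is a hedged sketch that reuses the [label, prob, x1, y1, x2, y2] row layout of the ncnn yolov4 example; the helper name and the 0.4 threshold are illustrative, not from this repo.

static void extract_head(ncnn::Extractor& ex, const char* blob_name,
                         int img_w, int img_h, float prob_threshold,
                         std::vector<Object>& objects)
{
    ncnn::Mat out;
    ex.extract(blob_name, out);
    for (int i = 0; i < out.h; i++)
    {
        const float* values = out.row(i);    // label, prob, x1, y1, x2, y2 (0..1)
        Object object;
        object.label = (int)values[0];
        object.prob  = values[1];
        if (object.prob < prob_threshold)
            continue;
        object.rect.x      = values[2] * img_w;
        object.rect.y      = values[3] * img_h;
        object.rect.width  = values[4] * img_w - object.rect.x;
        object.rect.height = values[5] * img_h - object.rect.y;
        objects.push_back(object);
    }
}

// inside detect_yolov4(), instead of a single extract:
//     extract_head(ex, "yolo0", img_w, img_h, 0.4f, objects);
//     extract_head(ex, "yolo1", img_w, img_h, 0.4f, objects);
//     // optionally run NMS over the merged list if boxes from both heads overlap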

Program crash (Segmentation fault)

When "not" using YOLOV4_TINY program generates a segmentation fault in

static int detect_yolov4(const....
.
.
objects.push_back(object); <--

Reproducible under Ubuntu as well.
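
Without the source at hand this is only a guess: with YOLOV4_TINY disabled the code expects yolov4.param and yolov4.bin (the latter is the separate 245 MB download, not in the zip), and running the full 608x608 model can also exhaust memory on a 2 GB board. Checking the load return values, as in the hedged fragment below, at least rules out missing files before the loop that fills objects runs.

    ncnn::Net yolov4;
    if (yolov4.load_param("yolov4.param") != 0 ||
        yolov4.load_model("yolov4.bin") != 0)        // the separate 245 MB download
    {
        fprintf(stderr, "full YoloV4 model files missing or corrupt\n");
        return -1;
    }

    // later, after ex.extract(...):
    if (out.empty())                                 // wrong blob name or failed inference
        return 0;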
