
deepstream-yolo's People

Contributors

anvarnazar, faizan1234567, marcoslucianops, pieris98, satchelwu


deepstream-yolo's Issues

How to rename the functions used for a model?

Hello @marcoslucianops. There seem to be a lot of changes in the repo since I last used it to convert my yolov4-tiny model. Back then, about 3 months ago, I used your nvdsparsebbox_Yolo.cpp file, which you had edited to add support for yolov4-tiny; that file is now completely different. My question: I want to run a yolov4-tiny model in DeepStream, but under a different name instead of yolov4.

As of now, in the nvdsparsebbox_Yolo.cpp file, I see a function named NvDsInferParseCustomYoloV4 and related functions such as convertBBoxYoloV4, addBBoxProposalYoloV4, decodeYoloV4Tensor, and NvDsInferParseYoloV4. In the config file, we give the function name NvDsInferParseCustomYoloV4.

If I rename all these functions with some other name, say model (e.g. NvDsInferParseCustommodel, convertBBoxmodel, addBBoxProposalmodel, etc.) and give the function name as NvDsInferParseCustommodel in the config file, will it work?
So basically, I am replacing the name yolov4 with some other name and calling the renamed function from the config file. Will this work?
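For what it's worth, the parser is resolved by the name string given in the config, so a consistent rename on both sides should work. A minimal self-contained sketch of that name-to-function lookup idea (hypothetical names; this is not the actual DeepStream loading code, which resolves the symbol from the .so):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Hypothetical stand-in for a bbox-parsing routine; DeepStream resolves the
// real one (e.g. NvDsInferParseCustomYoloV4) from the .so by symbol name.
static bool parseCustomModel() { return true; }

// Registry mapping a name string to a function, mimicking name-based lookup.
static std::map<std::string, std::function<bool()>> parserRegistry = {
    {"NvDsInferParseCustommodel", parseCustomModel},
};

// Simulates load time: the parse-bbox-func-name value from the config file
// is looked up; it succeeds only if the renamed function matches the config.
bool resolveAndRun(const std::string& configName) {
    auto it = parserRegistry.find(configName);
    return it != parserRegistry.end() && it->second();
}
```

The point: as long as the exported function name and the name written in the config file agree, the rename itself is harmless.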

How can I get NMS_THRESH and CONF_THRESH read from the config files?

@marcoslucianops

Hi, I want to make nvdsparsebbox_Yolo more flexible by removing:

static const int NUM_CLASSES_YOLO = 80;
#define NMS_THRESH 0.45
#define CONF_THRESH 0.25

Right now I just cannot get NMS_THRESH from the config file "config_infer_primary_yoloV5s.txt":
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25

Do you have any suggestions?

Thanks,
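One way to avoid the hard-coded #defines is to pass the thresholds into the clustering code as parameters. (DeepStream itself applies nms-iou-threshold and pre-cluster-threshold from [class-attrs-all] in its own clustering stage, and if I remember the header correctly, the parser also receives per-class thresholds via NvDsInferParseDetectionParams.) A self-contained sketch of threshold-parameterized NMS:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct BBox { float left, top, width, height, confidence; };

// Intersection-over-union of two axis-aligned boxes.
float iou(const BBox& a, const BBox& b) {
    float x1 = std::max(a.left, b.left);
    float y1 = std::max(a.top, b.top);
    float x2 = std::min(a.left + a.width, b.left + b.width);
    float y2 = std::min(a.top + a.height, b.top + b.height);
    float w = std::max(0.0f, x2 - x1), h = std::max(0.0f, y2 - y1);
    float inter = w * h;
    float uni = a.width * a.height + b.width * b.height - inter;
    return uni > 0 ? inter / uni : 0;
}

// NMS with both thresholds passed as parameters instead of #defines.
std::vector<BBox> nms(std::vector<BBox> boxes, float confThresh, float nmsThresh) {
    std::vector<BBox> kept;
    std::sort(boxes.begin(), boxes.end(),
              [](const BBox& a, const BBox& b) { return a.confidence > b.confidence; });
    for (const BBox& c : boxes) {
        if (c.confidence < confThresh) continue;  // CONF_THRESH replacement
        bool suppressed = false;
        for (const BBox& k : kept)
            if (iou(c, k) > nmsThresh) { suppressed = true; break; }  // NMS_THRESH replacement
        if (!suppressed) kept.push_back(c);
    }
    return kept;
}
```

The thresholds could then be fed from whatever mechanism reads the config file, instead of being baked in at compile time.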

Triton Server Integration with DeepStream

Hi @marcoslucianops,

Thanks for your projects; they have honestly helped me a lot.
I have run a Yolov3 model (trained on my custom dataset) on a Jetson Nano using DeepStream with 4 cameras. Next, I want to integrate Triton Server with DeepStream for the same model.
So, my questions are:
1.) How do I do the integration? What extra steps do I need?
2.) Can I serve the TRT models with the Triton Server integrated with DeepStream?

Thanks

Request

@marcoslucianops Can you please check your email when you have time? I sent you a request about a DeepStream app.

Question: Deepstream source code

Thanks for creating this repo. I was under the impression that the Nvidia DeepStream SDK has to be edited in a C++ file, but here it seems that is not how the interaction with DeepStream takes place. Any information to clear up my misconception would be helpful.

Yolov3-tiny-prn failing at launch of custom app

I keep getting this error.
deepstream-app: yolo.cpp:141: NvDsInferStatus Yolo::buildYoloNetwork(std::vector&, nvinfer1::INetworkDefinition&): Assertion `m_ConfigBlocks.at(i).at("activation") == "linear"' failed.
Aborted

Things i have done :

  1. My weights and cfg files are PRN and have been correctly referenced in the "infer config files.txt" file.
  2. I have changed the yolo.cpp file to the one in your repo.
  3. I have installed DeepStream 5.0 correctly, as I have run other default Nvidia apps on it.
  4. My camera is CSI and it's working.

Yolov5 Performance Metrics - Jetson Nano

Hi,
I noticed your FPS table entry for Yolov5s on the Jetson Nano is empty. Have you gotten any FPS results lately? I followed your Yolov5s example and am getting ~13 FPS for Yolov5s (pretrained weights; 608 resolution) on the Nano 4GB using the default video file in the main config.

Also, under your "NVIDIA GTX 1050 (4GB Mobile)" section, you have 3 tables: TensorRT, Darknet, and PyTorch. What's the difference between the TensorRT table and the Darknet table? Doesn't deepstream-app automatically convert your cfg and weights files into a TensorRT engine anyway? So essentially you'll be using TensorRT whether you point directly to a .engine file or to .cfg/.weights files. I understand why you would have the PyTorch table, because you're starting with a different architecture configuration, but doesn't Yolov4 in the TensorRT table have the same architecture in the end as Yolov4 in the Darknet table? Hope that makes sense, just looking for clarity.

Generate .so shared library for Cuda 10.1

Hi,
I am using a Jetson Nano and am able to generate .so shared libraries for pre-trained and custom yolov4 models, and they work perfectly. The CUDA version on my Jetson Nano is 10.2.

I am using the Nvidia DeepStream docker with yoloV4/yoloV4-tiny models on the Jetson Nano and it works just fine.

I also use docker on an AWS VM for better performance. The yoloV3/yoloV3-tiny models work without issue, but when I tried a yoloV4 model with your solution, it didn't work. I checked my AWS VM's CUDA version and it is 10.1. I do not have DeepStream installed on the VM, so I couldn't generate the .so shared library for CUDA 10.1.

I believe the problem comes from the different CUDA versions, because docker runs yoloV3 there without any issue.

If it is a version issue, how could I generate the .so lib file for CUDA 10.1, or is there any other solution to get around the issue?

Your help would be appreciated!

Performance YOLOv5 deepstream

Hi, I was wondering if the performance I am getting with DeepStream yolov5 is normal on the Jetson Nano 4GB?

I run inference on 2 video cameras (1280*720) and I get a very laggy preview.
I have to set drop-frame-interval=5 to obtain real-time inference; it takes 0.15 s to run inference on each camera.

Maybe it's my config?

Environment :
jetpack 4.5.1
deepstream 5.1

Model used: YOLOv5s 3.0

By the way: nice tutorial!

deepstream_app_config.txt :

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[tiled-display]
enable=1
rows=1
columns=2
width=1920
height=1080
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=rtsp://192.168.1.19:554/1/h264major
num-sources=1
gpu-id=0
cudadec-memtype=0
#latency=200
#drop-frame-interval=5

[source1]
enable=1
type=3
uri=rtsp://192.168.1.20:554/1/h264major
num-sources=1
gpu-id=0
cudadec-memtype=0
#latency=200
#drop-frame-interval=5

[sink0]
enable=1
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
type=2
sync=0
source-id=1
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=1
batch-size=2
batched-push-timeout=40000
width=1280
height=720
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tests]
file-loop=0

YoloV5

How can i use yolov5?

change number of classes to detect

I want only "person" to be detected by yolov4,

so I modified labels.txt and set num-detected-classes=1 in config_infer_primary.txt.

I should also change something in nvdsparsebbox_Yolo.cpp, right? Where is the variable to change?

about main program

Hi, could you please tell me where the main program that links the various plugins is?

Custom engine model name not changing

Hi, firstly thanks for your great work. Small issue, when I create an engine with a custom engine name using your native folder, the engine doesn't have the same name as that specified in the config file.

For example, if I set model-engine-file=model_b1_gpu0_fp32_custom.engine, the engine is saved as model_b1_gpu0_fp32.engine.

use camera with yolo and deepstream

Hi @marcoslucianops ,

I want to test Yolov3-tiny with a plug-and-play camera. In deepstream_app_config I changed the type to 1 and got some errors; it says it failed in create_camera_source_bin.

Do you know how to use yolov3-tiny with a simple USB camera?

For now I have just started to use DeepStream. Do you know if there is some tutorial to begin with DeepStream?

Sincerely,

Image size in config file for training yolov4

First of all, thanks for such a great repo explaining how to run yolov3-tiny with DeepStream. I am training a yoloV4, and the images I have for training range from 150(width)x80(height) to 600(width)x150(height). My config file contains height and width = 416x416. Is that a correct size, or shall I change the size in the config file based on my image sizes? Does yolo resize the image while keeping the aspect ratio constant during training?

thanks once again
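On the aspect-ratio point: as far as I recall, Darknet's default resize stretches the image to the network size, and the aspect ratio is only preserved when letterboxing is enabled in the cfg. The aspect-ratio-preserving fit can be sketched like this (illustrative helper, not repo code):

```cpp
#include <algorithm>
#include <cassert>

struct Dims { int w, h; };

// Aspect-ratio-preserving fit of a srcW x srcH image into a netW x netH
// network input (letterboxing); the remaining area would be padded.
Dims letterboxFit(int srcW, int srcH, int netW, int netH) {
    double scale = std::min(static_cast<double>(netW) / srcW,
                            static_cast<double>(netH) / srcH);
    return {static_cast<int>(srcW * scale + 0.5),   // round to nearest pixel
            static_cast<int>(srcH * scale + 0.5)};
}
```

For example, a 600x150 training image letterboxed into a 416x416 input occupies only a 416x104 band, with the rest padded.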

save bounding box data and show the cropped object detection

Hi, Thank you for this repository,
I am not a professional in C/C++, and I use 3 yolo models back to back for car plate recognition.
How can I save and show only the objects detected in each step of DeepStream? And for the last model, how can I sort the detected objects from left to right (to read the plate numbers)? Sample code would be appreciated.
Many thanks.

How to use DLA to build engine

https://forums.developer.nvidia.com/t/how-to-use-dla-in-deepstream-yolov5/161550/25

Hi @marcoslucianops ,
I used deepstream-yolov4 and checked the engine: it was built for the GPU. I saw the article above. How do I fix the following program to build a DLA engine?

// Build the engine
std::cout << "Building the TensorRT Engine" << std::endl;
nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
if (engine) {
    std::cout << "Building complete\n" << std::endl;
} else {
    std::cerr << "Building engine failed\n" << std::endl;
}

Building Custom Yolov5 model

Hello @marcoslucianops,
I recently trained a yolov5 model with one class, and after following your instructions here() on how to configure a custom model, I got this error when I ran the command sudo ./yolov5 -s:
Loading weights: ../yolov5s.wts
[03/19/2021-00:46:32] [E] [TRT] (Unnamed Layer* 17) [Convolution]: kernel weights has count 0 but 2048 was expected
[03/19/2021-00:46:32] [E] [TRT] (Unnamed Layer* 17) [Convolution]: count of 0 weights in kernel, but kernel dimensions (1,1) with 64 input channels, 32 output channels and 1 groups were specified. Expected Weights count is 64 * 1*1 * 32 / 1 = 2048
[03/19/2021-00:46:32] [E] [TRT] Parameter check failed at: ../builder/Network.cpp::addScale::482, condition: shift.count > 0 ? (shift.values != nullptr) : (shift.values == nullptr)
yolov5: /home/george/Desktop/vibever/tensorrtx/yolov5/common.hpp:189: nvinfer1::IScaleLayer
addBatchNorm2d(nvinfer1::INetworkDefinition*, std::map<std::__cxx11::basic_string, nvinfer1::Weights>&, nvinfer1::ITensor&, std::__cxx11::string, float): Assertion `scale_1' failed.
Aborted
Is there any other modification I need to make? Thanks.

Use deep stream for custom applications

Hi @marcoslucianops
In your opinion, Is it possible to use deep stream for custom app? for example : face recognition
That's mean I want to use decode multi-stream and one detector of deep stream for face detection, but for face recognition, the deep stream doesn't support any model for this task, I want to know How I can integrated the face rec system to deep steam? I want to get outputs like counting and coordinates of deep stream, Is it possible?

Assertion `scale_1' failed

[E] [TRT] Parameter check failed at: ../builder/Network.cpp::addScale::434, condition: shift.count > 0 ? (shift.values != nullptr) : (shift.values == nullptr)
0
common.hpp:190: nvinfer1::IScaleLayer* addBatchNorm2d(nvinfer1::INetworkDefinition*, std::map<std::__cxx11::basic_string, nvinfer1::Weights>&, nvinfer1::ITensor&, std::__cxx11::string, float): Assertion `scale_1' failed.

WARNING: Num classes mismatch. Configured: 80, detected by network: 0

Hello sir! Thanks for your work.

I am trying to run YoloV4 on DeepStream 5.0.1 using your repository on my Jetson Nano. I started with this. Everything went okay: I successfully compiled with CUDA and tested the TRT inference. However, when the stream starts, the terminal displays this message:

WARNING: Num classes mismatch. Configured: 80, detected by network: 0
I followed your instructions, but got this. The number of classes in my labels.txt is 80.

My system's info:

  • CUDA 10.2
  • JetPack 4.4.1
  • DeepStream 5.0.1
  • TensorRT 7.1.3
  • cuDNN 8.0
  • OpenCV 4.1.1

Please help me!


multiple inference problem

Hi, thanks for sharing!
I ran multiple inferences on a Jetson Xavier (Jetpack 4.4), but no results are detected; the terminal prints as follows.
I tested the 2 models used; each of them works well standalone.

Using winsys: x11
Deserialize yoloLayer plugin: yolo_99
Deserialize yoloLayer plugin: yolo_108
Deserialize yoloLayer plugin: yolo_117
0:00:03.522306324 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 2]: deserialized trt engine from :/home/admin123/deepstream/DeepStream-Yolo/native/model_b16_gpu0_fp16_helmet.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT data 3x416x416
1 OUTPUT kFLOAT yolo_99 24x52x52
2 OUTPUT kFLOAT yolo_108 24x26x26
3 OUTPUT kFLOAT yolo_117 24x13x13

0:00:03.522553823 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 2]: Use deserialized engine model: /home/admin123/deepstream/DeepStream-Yolo/native/model_b16_gpu0_fp16_helmet.engine
0:00:03.533651338 30756 0x7f3c002380 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 2]: Load new model:/home/admin123/deepstream/DeepStream-Yolo/examples/multiple_inferences/sgie1/config_infer_secondary1.txt sucessfully
Deserialize yoloLayer plugin: yolo_51
Deserialize yoloLayer plugin: yolo_59
0:00:03.886455896 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/admin123/deepstream/DeepStream-Yolo/native/model_b1_gpu0_fp16_personv3.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT data 3x416x416
1 OUTPUT kFLOAT yolo_51 18x13x13
2 OUTPUT kFLOAT yolo_59 18x26x26

0:00:03.886608479 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /home/admin123/deepstream/DeepStream-Yolo/native/model_b1_gpu0_fp16_personv3.engine
0:00:03.888024542 30756 0x7f3c002380 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/admin123/deepstream/DeepStream-Yolo/examples/multiple_inferences/pgie/config_infer_primary.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:181>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running

WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
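A side note on reading that warning: for a standard Darknet YOLO head, the output tensor has (numClasses + 5) * anchorsPerScale channels per grid cell, so (assuming the usual 3 anchors per scale) the 18-channel outputs in the log above imply a 1-class model while the 24-channel outputs imply 3 classes, not the 1 that was configured. The arithmetic, as a sketch:

```cpp
#include <cassert>

// For a Darknet-style YOLO detection head, the output tensor has
// (numClasses + 5) * anchorsPerScale channels per grid cell
// (4 box coords + 1 objectness + class scores, for each anchor).
int yoloChannels(int numClasses, int anchorsPerScale) {
    return (numClasses + 5) * anchorsPerScale;
}

// Inverse: recover the class count from an engine's output channel dim.
int classesFromChannels(int channels, int anchorsPerScale) {
    return channels / anchorsPerScale - 5;
}
```

Comparing this derived class count with num-detected-classes is a quick way to see which side of the mismatch is wrong.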

yolov4 tiny generates bboxes on non-targeted regions

Hello,

I'm running two yolov4-tiny models in deepstream-app: the first detector works on the full frame to detect cars, then the second works on the detected car boxes to extract windshields. The first detection works fine, but with the second some bboxes are detected outside the car's region, as shown in the images below (the green box is from the first detector and the red box is from the second detector):

(images omitted)

These are the yolo make folders and configuration files
nvdsinfer_custom_impl_Yolo.zip
nvdsinfer_custom_impl_Yolo_ws.zip
vehicle_detection_config.txt
ws_detection_config.txt

nvdsinfer_custom_impl.h missing

Hi, I have tried to compile your yolov5 app on a Jetson, but it looks like the code is missing nvdsinfer_custom_impl.h. Could you please take a look at it?

How do I create an engine file?

Hello.
I need an engine file to run in the DeepStream SDK. How do I create a model.engine file?

I use a Jetson Nano and DeepStream SDK 5.0.1.

Problem with a custom Yolo

Hi,

I have some trouble with deepstream.

I trained a Yolov3-tiny on darknet with specific classes:

person
wheelchair
bicycle
motorcycle
car
bus
truck
ambulance
traffic light
stop sign
cedez le passage
shoes
sports ball
traffic cones

As you can see it contains some classes of the COCO dataset but not only them, and not in the same order.

I trained on darknet and it worked; I took some pictures to verify it: (picture omitted)

Then I took the .cfg, .weights and .names to DeepStream (I use DeepStream 4.0).

I changed the number of classes in nvdsparsebbox_Yolo.cpp and compiled it.

I also created a config_infer_primary... and a deepstream_app_config... and configured them properly (right number of classes, right sources).

I renamed my .names file to labels.txt.

And I tried Yolo_Deepstream.

I don't understand my results:

The video I used shows these objects:

person
car
bicycle
wheelchair

but I detect this:

person is detected as person
car is detected as bicycle
bicycle is detected as wheelchair
wheelchair is detected as...wheelchair.

I really don't know where the problem is. It works on darknet, and I didn't modify the order.
At first I thought some classes were shifted, with car->bicycle and bicycle->wheelchair, but wheelchair->wheelchair!

Do you know where the problem can be in deepstream?

Sincerely,

How to resize the input video stream in deepstream-app?

Hello @marcoslucianops. I have followed your repo and am able to run a yolov4-tiny model in DeepStream. Now I want to run a video with a frame size of 2464x1440 in DeepStream, but I get a log saying DeepStream supports a maximum resolution of 2048x2048. So I want to resize my original input video frame size to less than 2048x2048. How do I do this resizing, and where do I add it? Your help would be appreciated.
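If the limit is hit at the mux/inference stage, setting [streammux] width/height below 2048 may be enough; if the decoder itself rejects the file, re-encoding at a lower resolution would be the fallback. Picking target dimensions that fit the limit while keeping the aspect ratio is simple integer math (illustrative helper, not DeepStream code):

```cpp
#include <cassert>

struct Dims { int w, h; };

// Largest size that fits within maxW x maxH while keeping the aspect
// ratio, using integer math; results are rounded down to even values,
// which video pipelines generally require.
Dims fitWithin(int srcW, int srcH, int maxW, int maxH) {
    long long w = srcW, h = srcH;
    if (w > maxW) { h = h * maxW / w; w = maxW; }  // clamp width first
    if (h > maxH) { w = w * maxH / h; h = maxH; }  // then height
    return {static_cast<int>(w) & ~1, static_cast<int>(h) & ~1};
}
```

For a 2464x1440 source and a 2048x2048 limit this gives 2048x1196, which would be the natural value pair for the mux or a re-encode.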

Why is **PERF different on a monitor and in MobaXterm?

I changed INPUT_H and INPUT_W in yololayer.h, but I found something strange.
When INPUT_H and INPUT_W equal 320, connecting to the Jetson via MobaXterm, the display is as follows:
(screenshot omitted)

But on a monitor connected to the Jetson, it displays as below:
(screenshot omitted)
Why is that?
Also, what are the ways I can speed up inference?

Use custom yolov4-tiny model which has 6 classes on Deepstream

Hi, thanks for your hard work and sharing it with us !
I'm able to use pre-trained yoloV4 and yoloV4-tiny with Deepstream, but had problem on custom yoloV4-tiny model. I would like to use my custom yoloV4-tiny model which has 6 classes.

For the original Deepstream API, I was just changing the "static const int NUM_CLASSES_YOLO = 6" on "nvdsparsebbox_Yolo.cpp" file , then make it and then able to use the generated "libnvdsinfer_custom_impl_Yolo.so" file with my custom-yoloV3 weight file to inference on Deepstream 5.

Please guide me to use my custom yoloV4-tiny model on Deeepstream 5

Your help would be appreciated!

How to do inference for multiple images?

Hello @marcoslucianops, thank you for sharing your work. In the MULTIPLE-INFERENCES.md file, what is meant by primary inference and secondary inference? What's the difference between them? Also, I want to run my tiny-yolov4 model on multiple images. How do I do that? Thanks in advance.

How to extract stream metadata.

Hello @marcoslucianops,
Thanks for the awesome work; I truly appreciate it. I want to extract the stream metadata, which contains useful information about the frames in the batched buffer, for the YOLOv3-Tiny-PRN model. How can I obtain the metadata?

Recipe for target 'yolo.o' failed

Using the new deepstream-5.1 triton docker to build the nvdsinfer_custom_impl_Yolo I get the following error in make:

yolo.cpp: In member function 'NvDsInferStatus Yolo::buildYoloNetwork(std::vector&, nvinfer1::INetworkDefinition&)':
yolo.cpp:298:48: error: 'createReorgPlugin' was not declared in this scope
nvinfer1::IPluginV2* reorgPlugin = createReorgPlugin(2);
^~~~~~~~~~~~~~~~~
yolo.cpp:298:48: note: suggested alternative: 'reorgPlugin'
nvinfer1::IPluginV2* reorgPlugin = createReorgPlugin(2);
^~~~~~~~~~~~~~~~~
reorgPlugin
Makefile:61: recipe for target 'yolo.o' failed

Checking cuda version using nvcc --version:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Tue_Sep_15_19:10:02_PDT_2020
Cuda compilation tools, release 11.1, V11.1.74
Build cuda_11.1.TC455_06.29069683_0

I have done sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-5.1/sources/ and ran CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo.

There were no issues previously using 5.0.1 and the old instructions.

app run failed

Hi, could you please tell me how to solve this issue? (screenshot omitted)

API

Hi, great job. I'm trying to deploy a model as an API service, and I'd like to know if it is possible to do so using your repo?
If yes, could you help me with this?

Tks!

How to edit reference deepstream-app for custom functions?

Hi @marcoslucianops, I have followed your tutorial and am able to run a tiny-yolov4 model on the Nano. I understand that a pipeline in DeepStream can be created using config files. Now, however, I want to edit the reference deepstream-app to add some custom functionality. Which files do I edit? I have seen some deepstream-test samples, and all of them have .c/.cpp files for editing the pipeline. Since I followed your tutorial, which files are used in this process? Are they the files located in /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-app? If I edit the files there and then run from your directory, will they work? Which file should I edit? Please help me with this.

get metadata from deepstream-Yolo

Hi @marcoslucianops ,

We talked a few days ago; you told me to go to your repository to learn how to get metadata from DeepStream.
I read this section: https://github.com/marcoslucianops/DeepStream-Yolo/#custom-functions-in-your-model

However, I'm still lost. I understand that I can get metadata with NvDsObjectMeta, NvDsFrameMeta and NvOSD_RectParams, but:

- I don't know where these structures are in the analytics_done_buf_prob function.

- I don't understand how to use this function: I suppose I have to write code in analytics_done_buf_prob that saves the metadata, or directly use the metadata there, but I don't know where.

Could you help me understand, for example, how to get the coordinates of a specific bounding box and write those coordinates to a file?
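In the real pipeline those coordinates come from the batch metadata (each NvDsFrameMeta carries a list of NvDsObjectMeta, whose rect_params holds the box). Since those types live in the DeepStream headers, here is a self-contained mock with hypothetical struct names mirroring the relevant fields, showing the "iterate objects and serialize each box" pattern one would use in a probe before writing to a file:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Minimal mock of the fields found in NvOSD_RectParams (the real struct
// lives in the DeepStream headers and is reached via obj_meta->rect_params).
struct RectParams { float left, top, width, height; };
struct ObjectMeta { int class_id; RectParams rect_params; };

// The pattern for a pad-probe callback: walk the objects of a frame and
// serialize each bounding box, one line per object, ready to be written
// to a file.
std::string dumpBoxes(const std::vector<ObjectMeta>& objects) {
    std::ostringstream out;
    for (const ObjectMeta& obj : objects)
        out << obj.class_id << ' ' << obj.rect_params.left << ' '
            << obj.rect_params.top << ' ' << obj.rect_params.width << ' '
            << obj.rect_params.height << '\n';
    return out.str();
}
```

In a real probe the vector would be replaced by walking frame_meta's object list, and the resulting string written with an std::ofstream.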

Way too many bounding boxes, misclassifications

Hello, I followed your instructions for yolov5 and was able to run up to the last step. The only change for my custom model was changing the number of labels from 80 to 6 and updating labels.txt. My model was fine-tuned on custom data (using the ultralytics yolov5 repo). I updated yololayer.h accordingly. The yolov5.engine/wts file also has the same number of labels. However, when I run deepstream-app on a test video, I get way too many bounding boxes and labels all over the frame. There is no problem when I run the yolov5 test without DeepStream. Are there any knobs I may have missed changing for a custom yolov5s model while following the entire set of steps?

Non-square models support

Hi Marcos,
I noticed that your modified code for nvdsinfer_custom_impl_Yolo doesn't support non-square/asymmetric models (width!=height).

This was unexpected as I saw your discussion about attempts to get that working on the default implementation here:
https://forums.developer.nvidia.com/t/trouble-in-converting-non-square-grid-in-yolo-network-to-tensorrt-via-deepstream/107541/19
so I thought you would have included the functionality in your implementation. Is there a way to change some code (similar to eh-steve's patch from the above link) to enable asymmetric model input sizes for your implementation? I'm eager to make this work on your implementation as it already supports arbitrary custom yolo models based on alexeyAB's darknet fork.
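For reference, the core of such a change is usually to stop assuming a single square grid size and index the output tensor with separate width and height dimensions (my reading of the approach in the linked thread; this is an illustrative sketch, not the actual patch):

```cpp
#include <cassert>

// Flat index into a YOLO output tensor laid out as
// [anchor][attribute][gridH][gridW]; keeping gridW and gridH separate
// (instead of a single "grid" value) is the essential change for
// non-square (width != height) network inputs.
int tensorIndex(int anchor, int attr, int y, int x,
                int numAttrs, int gridH, int gridW) {
    return ((anchor * numAttrs + attr) * gridH + y) * gridW + x;
}
```

With a square grid the two variants coincide, which is why the bug only shows up once width and height differ.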
