itsnine / yolov5-onnxruntime
YOLOv5 ONNX Runtime C++ inference code.
How to install OpenCV 4.x and ONNX Runtime 1.7+ on Windows 11?
After building, it runs successfully on the CPU but not on the GPU.
It generates the following errors:
```
root@4dbec2b03d4e:/ssd/liuhao/yolov5-onnxruntime/build# ./yolo_ort --model_path ../models/yolov5m.onnx --image ../images/bus.jpg --class_names ../models/coco.names --gpu
Inference device: GPU
/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true]
/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:115 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true]
CUDA failure 101: invalid device ordinal ; GPU=0 ; hostname=4dbec2b03d4e ; expr=cudaSetDevice(info_.device_id);
```
My environment is:
ubuntu 18.04
cuda 11.03
onnxruntime x64-gpu-1.8.0
Hello, when I used the same ONNX model to run detection with the original YOLOv5 project and with this project, I got different results. The original YOLOv5 project's class predictions are correct, but only some of this project's classes are correct; the bounding boxes are the same, but the confidence scores also differ. How can I solve this problem?
The official YOLOv5 PyTorch repo uses half precision. I tried the ONNX model with half precision in Python, and the speed increased. Can this repo support half precision?
```
--model_path ../models/yolov5s.onnx --image ../images/bus.jpg --class_names ../models/coco.names --gpu
ERROR: Failed to access class name path:
```
There seems to be an issue handling inputs other than 640x640. When I try to feed a 320x1296 input, it throws an error:

```
Got invalid dimensions for input: images for the following indices
 index: 2 Got: 1296 Expected: 640
 index: 3 Got: 320 Expected: 640
```

I think it has to do with the code's dynamic input shape checking, which does not seem to be doing its job correctly.
Can someone point me to where I should look to make it execute images of multiple input shapes?
Thanks!
Are there any plans to add support for ONNX Runtime 1.12+ (replacing session.GetInputName with session.GetInputNameAllocated, and replacing the const char* input and output name vectors with Ort::AllocatedStringPtr vectors)?
Hello,
I am trying to run this example, but when I run `cmake --build .` in the terminal, I always get this error:

```
LINK : fatal error LNK1104: cannot open file "onnxruntime-win-x64\onnxruntime-win-x64\lib\onnxruntime.lib.lib" [...\build\yolo_ort.vcxproj]
```

Maybe the doubled ".lib" extension in onnxruntime.lib.lib is the problem, but I don't know how to solve it.
I am using the win-x64-1.10.0 onnxruntime version.
Thanks for your help.
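The doubled ".lib.lib" usually means a path that already ends in .lib was handed to the linker with an extra .lib suffix appended. A hedged CMake sketch of the usual fix (the ONNXRUNTIME_DIR variable name and the yolo_ort target are assumptions based on the error message, not verified against this project's CMakeLists):

```cmake
# Point CMake at the extracted onnxruntime-win-x64 folder, e.g.:
#   cmake .. -DONNXRUNTIME_DIR="C:/path/to/onnxruntime-win-x64"
# Note the doubled "onnxruntime-win-x64\onnxruntime-win-x64" in the error:
# the archive extracts into a folder of the same name, so the configured
# path may also point one level too high.

# Link against the import library WITHOUT appending a second ".lib":
target_link_libraries(yolo_ort "${ONNXRUNTIME_DIR}/lib/onnxruntime.lib")

# Or let CMake resolve the extension itself (CMake 3.13+):
# target_link_directories(yolo_ort PRIVATE "${ONNXRUNTIME_DIR}/lib")
# target_link_libraries(yolo_ort onnxruntime)
```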
This is a great reference for C++.
Question:
At line https://github.com/itsnine/yolov5-onnxruntime/blob/master/src/detector.cpp#L112, why are we considering only the first element of outputTensors? It has 4 output arrays.
We could request all 4 outputs by changing the parameters at https://github.com/itsnine/yolov5-onnxruntime/blob/master/src/detector.cpp#L190.
Is there any particular reason to go this way?
I could not find any reference covering anchor boxes; could you please add one?
Thanks.
What is the inference speed? I tried it on an RTX 2060 and got only 10 FPS.
Hello, in my experiment all the images were high resolution, so scaling to 640x640 made the targets too small. I tried modifying the C++ files in the src folder to change 640 to 1280, but after compiling it still requires a 640 input. How should I modify the project?
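Changing 640 to 1280 in the C++ source only changes the preprocessing; a statically exported ONNX graph still has its input size baked in. Assuming the model came from the official YOLOv5 export script, re-exporting at the larger size (or with dynamic axes) is likely what's needed, along with the C++ change:

```shell
# In the ultralytics/yolov5 repo: re-export the model at 1280x1280
python export.py --weights yolov5s.pt --include onnx --imgsz 1280

# or export once with dynamic axes instead:
python export.py --weights yolov5s.pt --include onnx --dynamic
```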
Hi, I have built onnxruntime on macOS, but under the MacOS/Release folder there is no onnxruntime_cxx_api.h file at all.
There is also no lib subfolder under it; the library is just MacOS/Release/libonnxruntime.dylib.
How should I set these paths under macOS?
I found only two such files in the source tree:
/libs/onnxruntime//cmake/external/onnxruntime-extensions/includes/onnxruntime/onnxruntime_cxx_api.h
/libs/onnxruntime//include/onnxruntime/core/session/onnxruntime_cxx_api.h
Environment:
platform: Windows 10 x64
onnxruntime version: GPU 1.7.0

When execution reaches the AppendExecutionProvider_CUDA function, an exception occurs, and I found that the cudaOption object's member variables are not right; for example, device_id holds an uninitialized value like -858993460. Can you help me? Thank you very much!
How do you deploy instance segmentation models, for example yolov5n-seg.onnx?
In preprocessing:
```cpp
void YOLODetector::preprocessing(cv::Mat &image, float*& blob, std::vector<int64_t>& inputTensorShape)
{
    cv::Mat resizedImage, floatImage;
    cv::cvtColor(image, resizedImage, cv::COLOR_BGR2RGB);
    utils::letterbox(image, resizedImage, this->inputImageShape,
                     cv::Scalar(114, 114, 114), this->isDynamicInputShape,
                     false, true, 32);

    inputTensorShape[2] = resizedImage.rows;
    inputTensorShape[3] = resizedImage.cols;

    resizedImage.convertTo(floatImage, CV_32FC3, 1 / 255.0);
    blob = new float[floatImage.cols * floatImage.rows * floatImage.channels()];
    cv::Size floatImageSize {floatImage.cols, floatImage.rows};

    // hwc -> chw
    std::vector<cv::Mat> chw(floatImage.channels());
    for (int i = 0; i < floatImage.channels(); ++i)
    {
        chw[i] = cv::Mat(floatImageSize, CV_32FC1, blob + i * floatImageSize.width * floatImageSize.height);
    }
    cv::split(floatImage, chw);
}
```
(CUDA 11.3 + cuDNN 8.2)

```
han@han:~/Desktop/hxb_projects/CPP_Instance/10-30/git_3/yolov5-onnxruntime/build$ ./yolo_ort --model_path /home/han/Desktop/hxb_projects/CPP_Instance/10-30/git_3/yolov5-onnxruntime/models/yolov5s.onnx --image /home/han/Desktop/hxb_projects/CPP_Instance/10-30/git_3/yolov5-onnxruntime/images/bus.jpg --class_names /home/han/Desktop/hxb_projects/CPP_Instance/10-30/git_3/yolov5-onnxruntime/coco.names
Inference device: CPU
Input shape: 1
Input shape: 3
Input shape: 640
Input shape: 640
Input name: images
Output name: images
Model was initialized.
Invalid input name: �����U
```
I tried out your sample - very cool! I get 110 FPS with YOLOv5s running CUDA 11.5 on my 1080 Ti. I am curious what it would take to evaluate performance with TensorRT. Have you tried this? Any pointers?
Thanks.
First of all, thanks for the working C++ code that uses onnxruntime with yolov5 👍
Could you please add a license file to the repository so that it is clear how your code can be used in other projects.
Hello, I have tested the ORT C++ inference successfully, but I couldn't make batch inference work. Could you please give a batch inference C++ example?
Thank you very much!