
hailo-application-code-examples's Introduction

Hailo Application Code Examples


A set of stand-alone canned examples for compiling NN models and exercising different tasks on Hailo devices.

Disclaimer:
The code examples in this github repository are provided by Hailo solely on an “AS IS” basis and “with all faults”. No responsibility or liability is accepted or shall be imposed upon Hailo regarding the accuracy, merchantability, completeness or suitability of the code example. Hailo shall not have any liability or responsibility for errors or omissions in, or any business decisions made by you in reliance on this code example or any part of it. If an error occurs when running one of the repository's examples, please open a ticket in the "Issues" tab.
Please note that the examples were tested on specific versions, and we can only guarantee the expected results when using the exact versions in the exact environment. The specific version and environment are listed in the README.md of each example.

Under the runtime directory you will find

  • Examples in different programming languages (Python, C, C#, C++) performing different tasks
  • Each example was tested on the specified environment
  • The README for each example highlights any external dependencies
  • Rich GStreamer pipelines, including Bash and C++ implementations
  • Specific platform guides (e.g. TDA4)

Under the model_compilation directory you will find

  • Examples for converting a native model in ONNX or TFLite format to a Hailo executable (HEF)
  • Complete optimization flow, including quantization

Under the resources directory you will find

  • Documents addressing specific issues
  • Miscellaneous files and general resources

Under the tools directory you will find

  • A basic optimization diagnostic tool, which helps diagnose common optimization issues and mistakes

hailo-application-code-examples's People

Contributors

batsheva-knecht, erangur, giladnah, giladnahor, hailocs, hidant, jj1972kim, nadaved1, nina-vilela, omaz-ai, omerwer, ronithailo, sporky42, yanivbot


hailo-application-code-examples's Issues

Query: Is there any way to use hailo_scheduler with raw streams?

As per my understanding, Hailo provides two types of APIs:

  1. Raw streams: allow low-level interaction with the device and support async read/write.
  2. Virtual streams: provide a high-level interface for interacting with the device and support HAILO_SCHEDULER.

Unfortunately, I couldn't find anything about async read/write with HAILO_SCHEDULER.

I have gone through all of the sample code provided in Hailo-Application-Code-Examples and the example code in the hailort repository. I also checked the HailoRT user guide (Release 4.15.0).

Is it possible to use raw streams async read/write with HAILO_SCHEDULER?
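
For reference, a minimal sketch of the virtual-stream path with the model scheduler enabled, since that is the high-level route the question contrasts with raw streams. The parameter and enum names (VDevice.create_params, HailoSchedulingAlgorithm.ROUND_ROBIN) are assumed from the HailoRT 4.x Python API and should be verified against the user guide for your version; this sketch does not settle whether raw-stream async I/O can be combined with the scheduler.

import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, FormatType,
                            HailoSchedulingAlgorithm, HailoStreamInterface,
                            InferVStreams, InputVStreamParams, OutputVStreamParams)

# Assumed API names from pyhailort 4.x; verify against the HailoRT user guide.
params = VDevice.create_params()
params.scheduling_algorithm = HailoSchedulingAlgorithm.ROUND_ROBIN

with VDevice(params) as target:
    hef = HEF("model.hef")  # hypothetical HEF path
    cfg = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, cfg)[0]

    in_params = InputVStreamParams.make_from_network_group(
        network_group, quantized=True, format_type=FormatType.UINT8)
    out_params = OutputVStreamParams.make_from_network_group(
        network_group, quantized=False, format_type=FormatType.FLOAT32)

    in_info = hef.get_input_vstream_infos()[0]
    frame = np.zeros(in_info.shape, dtype=np.uint8)  # dummy input frame
    with InferVStreams(network_group, in_params, out_params) as pipeline:
        # With the scheduler enabled, the network group is activated on demand,
        # so no explicit activate() call is made here.
        results = pipeline.infer({in_info.name: np.expand_dims(frame, 0)})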

fastsam result image?

Can you provide a rendering of the fastsam result? The post-processing effect I implemented myself is not as good as the official one.

'PcieDevice is deprecated'

When I tried to run a sample application I got this error. I just ran 'PcieDevice()' after importing 'hailo_platform' in the Hailo Python venv, and the output was:

PcieDevice()
HailoHWObject is deprecated! Please use VDevice/Device object.
PcieDevice is deprecated! Please use VDevice/Device object.
PcieHcpControl is deprecated! Please Use Control object
<hailo_platform.pyhailort.hw_object.PcieDevice object at 0x7fe46afbb4c0>
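
The deprecation warnings point at the replacement objects. A minimal sketch using the newer API instead (assuming a pyhailort version where VDevice is exported from hailo_platform):

from hailo_platform import VDevice

# The deprecated PcieDevice/HailoHWObject wrappers are superseded by VDevice
# (or Device for direct control of a single physical device).
with VDevice() as device:
    print(device)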

Performing YOLOv8 inference using my custom model

Hello, I have modified the YOLOv8 example code on Windows and successfully performed inference on specific images using the yolov8s_nms_on_hailo.hef model, which was trained on the COCO dataset. I want to replace it with a model trained on my dataset, which only includes one class. Therefore, I changed the class name in common.h and adjusted print_boxes_coord_per_class to class_idx < 1, but I encountered errors when running the modified code.
-I- Running network. Input frame size: 1228800
[HailoRT] [error] CHECK_AS_EXPECTED failed - Optional buffer size must be equal to pool buffer size. Optional buffer size = 160320, buffer pool size = 409600
[HailoRT] [error] CHECK_EXPECTED failed with status=HAILO_INVALID_OPERATION(6) - HwReadElement14_yolov8n/conv41 (D2H) failed with status=HAILO_INVALID_OPERATION(6)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_INVALID_OPERATION(6) - Failed reading with status = 6
Read failed with status: 6

The error is related to std::vector<float32_t> vstream_output_data(160320) in the code. I would like to know how the value 160320 was determined. How should I appropriately set this value to suit my own model?
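
For what it's worth, 160320 is consistent with HailoRT's by-class NMS output layout for an 80-class model, assuming the common configuration of 100 boxes per class and float32 output; a quick back-of-the-envelope check (the 100-box limit is an assumption, not something read from this specific HEF):

# HailoRT's NMS-on-device output is laid out per class as:
#   1 float (detection count) + max_bboxes_per_class * 5 floats (y_min, x_min, y_max, x_max, score)
num_classes = 80
max_bboxes_per_class = 100   # assumed default; check the NMS config JSON used for the HEF
bytes_per_float = 4

print(num_classes * (1 + max_bboxes_per_class * 5) * bytes_per_float)  # 160320
print(1 * (1 + max_bboxes_per_class * 5) * bytes_per_float)            # 2004 for a 1-class model

The 409600 on the other side of the comparison happens to equal 80x80x64 bytes, i.e. the raw conv41 output reported in the error, which hints that the custom HEF may not have the NMS attached the way the example's HEF does. Rather than hardcoding the number, it is safer to size the buffer from the output vstream's reported frame size at runtime.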

yolov8 cpp example error:

Here is the error:

./build/x86_64/vstream_yolov8_example_cpp -hef=yolov8m.hef -input=../../async_yolov5/640.mp4
-I-----------------------------------------------
-I- Network Name
-I-----------------------------------------------
-I- IN: yolov8m/input_layer1
-I-----------------------------------------------
-I- OUT: yolov8m/conv57
-I- OUT: yolov8m/conv58
-I- OUT: yolov8m/conv70
-I- OUT: yolov8m/conv71
-I- OUT: yolov8m/conv82
-I- OUT: yolov8m/conv83
-I-----------------------------------------------

-I- Started write thread: yolov8m/input_layer1 (640, 640, 3)
-I- Started read thread: yolov8m/conv57 (80, 80, 64)
-I- Started read thread: yolov8m/conv83 (20, 20, 80)

-I- Starting postprocessing

-I- Started read thread: yolov8m/conv70 (40, 40, 64)
-I- Started read thread: yolov8m/conv58 (80, 80, 80)
-I- Started read thread: yolov8m/conv71 (40, 40, 80)
-I- Started read thread: yolov8m/conv82 (20, 20, 64)
[HailoRT] [error] CHECK failed - Got HAILO_TIMEOUT while waiting for descriptors in write_buffer (channel_id=0:2)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4) - HwWriteElement3_yolov8m/input_layer1 (H2D) failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK failed - Got HAILO_TIMEOUT while waiting for descriptors in write_buffer (channel_id=0:2)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4) - HwWriteElement3_yolov8m/input_layer1 (H2D) failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK failed - Got HAILO_TIMEOUT while waiting for descriptors in write_buffer (channel_id=0:2)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4) - HwWriteElement3_yolov8m/input_layer1 (H2D) failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK failed - Got HAILO_TIMEOUT while waiting for descriptors in write_buffer (channel_id=0:2)

Issues with Converting YOLOv5 ONNX Model to HEF Format with and without NMS

I have encountered some issues while modifying the official inference codes for (windows/yolov5) and (windows/yolov8) to support image inference. I tested the downloaded yolov5m_nms.hef and yolov5m_no_nms.hef files (both provided by Hailo), and I would greatly appreciate your assistance with the following concerns:

  1. (windows/yolov8) Code:
    This code is capable of inferring the yolov8_nms.hef model, so I wondered if it could also infer the yolov5_nms.hef model. After replacing the model, I found that this code can indeed work with both models. However, I am not sure how to convert my trained yolov5m.onnx model into an HEF model with NMS. The official yolov5m.yaml configuration file seems to be for an older model, and even after modifying the nodes in yolov5m.yaml, I am still unsure if the provided yolov5m_nms.json is compatible with my model. During the conversion, I encountered the error The layer named conv84 doesn't exist in the HN. Could you kindly guide me on how to generate a yolov5m_nms.hef model with NMS? My model conversion command is:
    ‘hailomz compile --ckpt yolov5m.onnx --calib-path my_data/chunxin_image --model-script yolov5m.alls --classes 1 yolov5m’
    (screenshot of the conversion error)

  2. (windows/yolov5) Code:
    I noticed that this example uses a yolov5.hef model without NMS, and this model type is Multi Context. To convert the yolov5m.onnx model for a custom dataset, I modified the yolov5m.alls and removed the content related to using the nms.json file. The modified yolov5m.alls content is as follows.
    (screenshot of the modified yolov5m.alls)
    However, when I performed inference with the code, the drawn bounding boxes were incorrect, as shown below:
    (screenshot showing incorrect bounding boxes)
    The bounding boxes for the generic model yolov5m_no_nms.hef were correct, as shown below.
    (screenshot showing correct bounding boxes)

Could you kindly advise me on how to generate a yolov5m_no_nms.hef model without NMS?

Thank you very much for your assistance.

Best regards

Do not hardcode number of classes being 80

In runtime/python/yolo_general_inference/yolo_inference.py, 80 classes are hardcoded throughout, despite the number of classes being available from the command line.
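
A minimal sketch of the kind of change being requested; the flag name and the spot where the literal 80 is used are hypothetical, since the actual script's argument handling may differ:

import argparse
import numpy as np

parser = argparse.ArgumentParser()
parser.add_argument("--num-classes", type=int, default=80,
                    help="number of classes the model was trained on")
args = parser.parse_args()

# Use args.num_classes wherever the literal 80 currently appears,
# e.g. when sizing per-class arrays in the postprocessing step.
class_scores = np.zeros(args.num_classes, dtype=np.float32)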

Cannot compile yolov5_yolov7_detection_cpp

I cannot compile yolov5_yolov7_detection_cpp because the "common" folder is missing from this example. I tried to copy it from yolov8_cpp/x86_64, but then I ran into another compilation error. Am I missing something?

Here is the dump:

bilkosem@C11-BRVZ1U1DFIE:~/git/Hailo-Application-Code-Examples/yolov5_yolov7_detection_cpp$ ./build.sh
-I- Building x86_64
-- Found OpenCV: /usr/local/include/opencv4
-- Configuring done
-- Generating done
-- Build files have been written to: /home/bilkosem/git/Hailo-Application-Code-Examples/yolov5_yolov7_detection_cpp/build/x86_64
[ 25%] Building CXX object CMakeFiles/vstream_yolov7_example_cpp.dir/yolo_output.cpp.o
In file included from /home/bilkosem/git/Hailo-Application-Code-Examples/yolov5_yolov7_detection_cpp/yolo_output.cpp:8:
/home/bilkosem/git/Hailo-Application-Code-Examples/yolov5_yolov7_detection_cpp/yolo_output.hpp:6:10: fatal error: common/hailo_objects.hpp: No such file or directory
 #include "common/hailo_objects.hpp"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
CMakeFiles/vstream_yolov7_example_cpp.dir/build.make:75: recipe for target 'CMakeFiles/vstream_yolov7_example_cpp.dir/yolo_output.cpp.o' failed
make[2]: *** [CMakeFiles/vstream_yolov7_example_cpp.dir/yolo_output.cpp.o] Error 1
CMakeFiles/Makefile2:82: recipe for target 'CMakeFiles/vstream_yolov7_example_cpp.dir/all' failed
make[1]: *** [CMakeFiles/vstream_yolov7_example_cpp.dir/all] Error 2
Makefile:90: recipe for target 'all' failed
make: *** [all] Error 2

Here is the error I got after I copied the "common" folder:

/home/bilkosem/git/Hailo-Application-Code-Examples/yolov5_yolov7_detection_cpp/yolo_output.cpp: In member function ‘virtual uint YoloOutputLayer::get_class_prob(uint, uint, uint, uint)’:
/home/bilkosem/git/Hailo-Application-Code-Examples/yolov5_yolov7_detection_cpp/yolo_output.cpp:45:25: error: ‘using element_type = class HailoTensor {aka class HailoTensor}’ has no member named ‘get_uint16’; did you mean ‘get_uint8’?
         return _tensor->get_uint16(row, col, channel);
                         ^~~~~~~~~~
                         get_uint8
CMakeFiles/vstream_yolov7_example_cpp.dir/build.make:75: recipe for target 'CMakeFiles/vstream_yolov7_example_cpp.dir/yolo_output.cpp.o' failed
make[2]: *** [CMakeFiles/vstream_yolov7_example_cpp.dir/yolo_output.cpp.o] Error 1
CMakeFiles/Makefile2:82: recipe for target 'CMakeFiles/vstream_yolov7_example_cpp.dir/all' failed
make[1]: *** [CMakeFiles/vstream_yolov7_example_cpp.dir/all] Error 2
Makefile:90: recipe for target 'all' failed
make: *** [all] Error 2

Questions about YOLOv8 inference

Version:
hailort 4.16, OpenCV 4.9.0, CMake 3.24.
The C++ code compiled without errors.

(1) When I ran YOLOv8 inference under Windows, I ran into a problem: the program did not show any inference results. But when I tested YOLOv5, it worked. I don't know how to solve it.
yolov8_example.exe -hef=F:\Hailo\Hailo-Application-Code-Examples\runtime\windows\yolov8\hefs\yolov8s.hef -video=F:\Hailo\Hailo-Application-Code-Examples\data\yolo5_test.mp4
(screenshot of the program output)
The content of hailort.log is:
(screenshot of hailort.log)

(2) Is there any difference between the cpp code under runtime\windows\yolov8 and the code under runtime\cpp\detection\yolov8\x86_64? runtime\cpp\detection\yolov8\x86_64 works on Linux, but can it also work on Windows?

custom hef model

I compiled an ONNX YOLOv5m model to HEF; the model was retrained on my own data for letter recognition.
Output from the compile screen:
[info] input_layer1: Pass
[info] conv3: Pass
[info] conv5: Pass
[info] normalization1: Pass
[info] conv7: Pass
[info] space_to_depth1: Pass
[info] conv12: Pass
[info] conv26: Pass
[info] batch_norm1_dc: Pass
[info] context_0_to_context_1_context_2_0: Pass
[info] auto_reshape_from_input_layer1_to_normalization1: Pass
[info] ew_add3: Pass
[info] ew_add8: Pass
[info] conv2_sd0: Pass
[info] conv20: Pass
[info] shortcut_from_conv1_to_conv2_sd0-3: Pass
[info] conv11_sd1: Pass
[info] conv10: Pass
[info] conv11_sd0: Pass
[info] conv8_sd1: Pass
[info] conv2_sd3: Pass
[info] ew_add5: Pass
[info] conv19_dc: Pass
[info] batch_norm1_d1: Pass
[info] conv23_d0: Pass
[info] concat2: Pass
[info] conv24: Pass
[info] conv21_d1: Pass
[info] conv2_sdc: Pass
[info] ew_add2: Pass
[info] concat_from_conv2_sd0-3_to_conv2_sdc: Pass
[info] conv1_sd0: Pass
[info] conv25_dc: Pass
[info] conv17_d0: Pass
[info] smuffers_shortcut_conv7_to_conv8: Pass
[info] conv27: Pass
[info] conv23_d1: Pass
[info] conv13: Pass
[info] conv11_sd3: Pass
[info] conv19_d1: Pass
[info] conv16: Pass
[info] conv22: Pass
[info] conv2_sd1: Pass
[info] conv15_dc: Pass
[info] conv17_dc: Pass
[info] conv18: Pass
[info] ew_add6: Pass
[info] conv23_dc: Pass
[info] conv2_sd2: Pass
[info] batch_norm1_d0: Pass
[info] smuffers_shortcut_conv5_to_conv6: Pass
[info] ew_add4: Pass
[info] conv9: Pass
[info] conv19_d0: Pass
[info] conv11_sdc: Pass
[info] batch_norm1_fs: Pass
[info] conv25_d0: Pass
[info] conv8_sd0: Pass
[info] concat1: Pass
[info] conv14: Pass
[info] conv21_dc: Pass
[info] conv2_sd4: Pass
[info] conv4: Pass
[info] conv1_sd2: Pass
[info] conv1_sd1: Pass
[info] conv15_d1: Pass
[info] conv6_sd0: Pass
[info] batch_norm2: Pass
[info] conv25_d1: Pass
[info] conv1_sdc: Pass
[info] conv21_d0: Pass
[info] conv11_sd2: Pass
[info] conv15_d0: Pass
[info] conv17_d1: Pass
[info] conv8_sdc: Pass
[info] ew_add1: Pass
[info] ew_add7: Pass
[info] conv6_sdc: Pass
[info] conv6_sd1: Pass
[info] ew_add12: Pass
[info] conv33: Pass
[info] conv31: Pass
[info] conv29: Pass
[info] context_1_to_context_2_5: Pass
[info] conv34_dc: Pass
[info] context_1_to_context_2_7: Pass
[info] concat_from_conv45_d0-2_to_conv45_dc: Pass
[info] concat_from_conv46_d3-5_to_conv46_dc: Pass
[info] conv46_d2: Pass
[info] conv36_d0: Pass
[info] conv49: Pass
[info] conv28_d3: Pass
[info] conv51: Pass
[info] conv35: Pass
[info] conv36_dc: Pass
[info] conv42_d0: Pass
[info] conv34_d0: Pass
[info] shortcut_from_shortcut_from_conv44_to_conv45_d0-3_to_conv45_d0-2: Pass
[info] conv32_d1: Pass
[info] conv39: Pass
[info] shortcut_from_conv44_to_conv45_d0-3: Pass
[info] conv46_dc: Pass
[info] conv47: Pass
[info] conv52_dc: Pass
[info] conv48: Pass
[info] conv52_d0: Pass
[info] concat_from_conv45_d0-1_to_concat_from_conv45_d0-2_to_conv45_dc: Pass
[info] conv50_dc: Pass
[info] conv44: Pass
[info] context_0_to_context_1_in_1: Pass
[info] conv32_d0: Pass
[info] conv46_d3: Pass
[info] conv45_d3: Pass
[info] conv40_d1: Pass
[info] shortcut_from_conv27_to_conv28_d0-3: Pass
[info] conv45_d1: Pass
[info] conv38_d1: Pass
[info] conv32_dc: Pass
[info] concat_from_conv28_d3-5_to_conv28_dc: Pass
[info] conv46_d5: Pass
[info] concat_from_conv46_d0-2_to_conv46_dc: Pass
[info] conv46_d1: Pass
[info] conv45_d0: Pass
[info] conv42_d1: Pass
[info] conv30: Pass
[info] ew_add13: Pass
[info] ew_add11: Pass
[info] conv50_d1: Pass
[info] conv34_d1: Pass
[info] conv40_dc: Pass
[info] conv28_d4: Pass
[info] conv36_d1: Pass
[info] conv41: Pass
[info] conv38_d0: Pass
[info] conv43: Pass
[info] ew_add14: Pass
[info] conv45_dc: Pass
[info] conv52_d1: Pass
[info] conv28_d5: Pass
[info] concat_from_conv28_d0-2_to_conv28_dc: Pass
[info] conv38_dc: Pass
[info] context_1_to_context_2_9: Pass
[info] conv28_d1: Pass
[info] conv42_dc: Pass
[info] conv45_d2: Pass
[info] ew_add10: Pass
[info] batch_norm3: Pass
[info] shortcut_from_conv45_to_conv46_d0-3: Pass
[info] conv46_d4: Pass
[info] conv28_dc: Pass
[info] conv37: Pass
[info] conv28_d0: Pass
[info] concat3: Pass
[info] conv50_d0: Pass
[info] shortcut_from_shortcut_from_conv45_to_conv46_d0-3_to_conv46_d0-2: Pass
[info] conv28_d2: Pass
[info] ew_add9: Pass
[info] conv40_d0: Pass
[info] conv53: Pass
[info] conv46_d0: Pass
[info] output_layer2: Pass
[info] output_layer1: Pass
[info] output_layer3: Pass
[info] batch_norm4: Pass
[info] concat4: Pass
[info] conv55: Pass
[info] resize2: Pass
[info] conv54: Pass
[info] conv74: Pass
[info] context_1_to_context_2_in_10: Pass
[info] context_1_to_context_2_in_8: Pass
[info] conv58: Pass
[info] conv78_dc: Pass
[info] auto_reshape_from_conv93_to_output_layer3: Pass
[info] conv59_d1: Pass
[info] context_1_to_context_2_in_6: Pass
[info] conv93: Pass
[info] conv61_dc: Pass
[info] auto_reshape_from_conv74_to_output_layer1: Pass
[info] conv59_dc: Pass
[info] conv80_d0: Pass
[info] conv67: Pass
[info] context_0_to_context_2_in_4: Pass
[info] conv71: Pass
[info] conv79: Pass
[info] conv59_d0: Pass
[info] conv69: Pass
[info] conv86: Pass
[info] resize1: Pass
[info] conv77: Pass
[info] concat9: Pass
[info] conv56: Pass
[info] concat7: Pass
[info] concat6: Pass
[info] conv80_d1: Pass
[info] conv70_d1: Pass
[info] conv82: Pass
[info] conv83_d1: Pass
[info] batch_norm7: Pass
[info] conv91: Pass
[info] conv81: Pass
[info] conv66: Pass
[info] conv73_sd0: Pass
[info] conv84: Pass
[info] conv62: Pass
[info] conv61_d1: Pass
[info] conv63: Pass
[info] conv92: Pass
[info] conv64: Pass
[info] conv73_sdc: Pass
[info] conv87: Pass
[info] conv78_d0: Pass
[info] conv90_d1: Pass
[info] conv78_d1: Pass
[info] concat8: Pass
[info] concat10: Pass
[info] conv88_d1: Pass
[info] conv89: Pass
[info] conv90_d0: Pass
[info] conv65: Pass
[info] batch_norm8: Pass
[info] conv80_dc: Pass
[info] conv83_d0: Pass
[info] conv75: Pass
[info] conv72: Pass
[info] concat12: Pass
[info] conv76: Pass
[info] conv61_d0: Pass
[info] conv85: Pass
[info] batch_norm5: Pass
[info] concat11: Pass
[info] conv90_dc: Pass
[info] conv70_dc: Pass
[info] conv68_d0: Pass
[info] conv68_dc: Pass
[info] batch_norm6: Pass
[info] conv88_d0: Pass
[info] conv88_dc: Pass
[info] conv68_d1: Pass
[info] conv83_dc: Pass
[info] auto_reshape_from_conv84_to_output_layer2: Pass
[info] conv73_sd1: Pass
[info] conv70_d0: Pass
[info] concat5: Pass
[info] conv60: Pass
[info] conv57: Pass
[info] Solving the allocation (Mapping), time per context: 59m 59s
Model Details

Input Tensors Shapes: 640x640x3
Operations per Input Tensor: 52.24 GOPs
Operations per Input Tensor: 26.17 GMACs
Pure Operations per Input Tensor: 52.24 GOPs
Pure Operations per Input Tensor: 26.17 GMACs
Model Parameters: 23.52 M


But when I run it with the yolov5_yolov7_detection example, I get this error:
[HailoRT] [error] CHECK failed - Got HAILO_TIMEOUT while waiting for descriptors in write_buffer (channel_id=0:2)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4) - HwWriteElement1_yolov5m/input_layer1 (H2D) failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK failed - Got HAILO_TIMEOUT while waiting for descriptors in write_buffer (channel_id=0:2)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4) - HwWriteElement1_yolov5m/input_layer1 (H2D) failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK failed - Got HAILO_TIMEOUT while waiting for descriptors in write_buffer (channel_id=0:2)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4) - HwWriteElement1_yolov5m/input_layer1 (H2D) failed with status=HAILO_TIMEOUT(4)

I can run the models provided by Hailo successfully.
Please help me to solve this problem.

C++ import error in windows/yolov5/

The build fails on Windows while testing the example C++ code provided in windows/yolov5/.

[main] Building folder: yolov5 all
[build] Starting build
[proc] Executing command: "C:\Program Files\CMake\bin\cmake.EXE" --build d:/code_practice/Hailo-Application-Code-Examples/runtime/windows/yolov5/build --config Debug --target all -j 6 --
[build] [ 33%] Building CXX object CMakeFiles/cpp_yolov5_win_standalone_example.dir/yolov5_windows_example.cpp.obj
[build] In file included from C:/PROGRA~1/HailoRT/include/hailo/hailort.h:19,
[build]                  from D:\code_practice\Hailo-Application-Code-Examples\runtime\windows\yolov5\common.h:23,
[build]                  from D:\code_practice\Hailo-Application-Code-Examples\runtime\windows\yolov5\yolov5_windows_example.cpp:32:
[build] C:/PROGRA~1/HailoRT/include/hailo/platform.h:42:10: fatal error: sys/socket.h: No such file or directory
[build]  #include <sys/socket.h>
[build]           ^~~~~~~~~~~~~~
[build] compilation terminated.

System information:

OS: Microsoft Windows 11 IoT Enterprise 
cmake version 3.28.0-rc3
OpenCV 4.8.0


Hailo version: 
Executing on device: 0000:04:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.14.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8
Serial Number: HLLWM2B223701649
Part Number: HM218B1C2FAE
Product Name: HAILO-8 AI ACC M.2 M KEY MODULE EXT TEMP

Is the Python instance_segmentation example FP32?

I plan to write a C++ post-processor version of FastSAM.
I found that there is an implementation under the Python package, and it seems that the difference from yolov8seg is the number of categories.

But the strange thing is that when I use the HEF from the Python package, yolov8seg (my own C++ implementation) infers correctly, but FastSAM fails.

I later noticed:

input_vstreams_params = InputVStreamParams.make_from_network_group(network_group, quantized=False, format_type=FormatType.FLOAT32)
output_vstreams_params = OutputVStreamParams.make_from_network_group(network_group, quantized=False, format_type=FormatType.FLOAT32)

Is the model used in this Python example FP32?

Bugs in runtime/python/yolo_general_inference/yolo_inference.py

Getting errors while running runtime/python/yolo_general_inference/yolo_inference.py.

  File "./yolo_inference.py", line 56
    'yolox': 
    ^
SyntaxError: invalid syntax
Given input data dtype (uint8) is different than inferred dtype (float32). conversion for every frame will reduce performance
Traceback (most recent call last):
  File "./yolo_inference.py", line 449, in <module>
    img = letterbox_image(batch_images[j], (width, height))
TypeError: list indices must be integers or slices, not tuple

Unexpected HailoRT Warning Message

Dear team,

Thank you for your attention. I wonder why there is always this warning message when I run our compiled and converted .hef inference engine. Will it affect performance?

[HailoRT] [warning] HEF was compiled assuming clock rate of 400 MHz, while the device clock rate is 200 MHz. FPS calculations might not be accurate.

Best,
Hui

multistream_lpr

Hi,

Generally, recording to a file is necessary for debugging the results later.

How can we add a filesink to multistream_lpr?

yolov5_postprocess_python

My Hailo-8 chip is installed on an aarch64 edge device running Ubuntu 20.04. So I can't use hailo_model_zoo's YoloPostProc?

from hailo_model_zoo.core.postprocessing.detection.yolo import YoloPostProc


Looking forward to your reply

YOLO inference example gives ``No module named 'hailo_platform.pyhailort._pyhailort'``

Hi, I've tried running some of the examples with pyHailoRT. Once I run the yolo_inference.py script, I get a ModuleNotFoundError: No module named 'hailo_platform.pyhailort._pyhailort' error. I've installed hailort==4.14.0 and the other outlined dependencies, and I tried installing hailort using both the wheel and the pre-built Docker image.

Is there any solution for C++ inference on the Hailo-15H with /dev/video0 input?

Hi, I am investigating the Hailo-15H.
I want to run inference on images coming from /dev/video0 on the 15H device.
I have tested the 'runtime/cpp/detection/yolov8_cross_compilation_h15' code, but it did not work.
Is there any way to run inference on images from /dev/video0 on the 15H device?

root@hailo15:~/ws/Test_v1/resources# ./yolov8_cross_compilation_h15 -input=/dev/video0 -hef=hefs/h15/yolov8s_h15.hef
-I-----------------------------------------------
-I-  Network  Name                                     
-I-----------------------------------------------
-I-  IN:  yolov8s/input_layer1
-I-----------------------------------------------
-I-  OUT: yolov8s/conv41
-I-  OUT: yolov8s/conv42
-I-  OUT: yolov8s/conv52
-I-  OUT: yolov8s/conv53
-I-  OUT: yolov8s/conv62
-I-  OUT: yolov8s/conv63
-I-----------------------------------------------

[ WARN:[email protected]] global /usr/src/debug/opencv/4.5.5-r0/git/modules/videoio/src/cap_gstreamer.cpp (2401) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module source reported: Could not read from resource.
[ WARN:[email protected]] global /usr/src/debug/opencv/4.5.5-r0/git/modules/videoio/src/cap_gstreamer.cpp (1356) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:[email protected]] global /usr/src/debug/opencv/4.5.5-r0/git/modules/videoio/src/cap_gstreamer.cpp (862) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
terminate called after throwing an instance of 'char const*'
Aborted
root@hailo15:~/ws/Test_v1/resources# 

The model I would like to run inference with is YOLOv8m object detection.
Thanks,
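
Before touching the inference code, it can help to confirm that the capture pipeline opens at all. A small sanity-check sketch, assuming OpenCV was built with GStreamer support and using a hypothetical caps string that must be adapted to what /dev/video0 actually provides on the board:

import cv2

# Hypothetical pipeline string; the caps (format/width/height/framerate) must
# match what the camera actually provides. If this already fails, the yolov8
# example will fail in the same way inside OpenCV.
pipeline = ("v4l2src device=/dev/video0 ! videoconvert ! "
            "video/x-raw, format=BGR ! appsink drop=true max-buffers=1")
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("GStreamer pipeline could not open /dev/video0")
ok, frame = cap.read()
print("frame read:", ok, frame.shape if ok else None)
cap.release()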

multistream_app too much CPU usage

Hi,

We are testing multistream_app and compiled it successfully.

We tested with five 2 MP sources and it is using almost all the CPU of our i9 server. Is this normal?

(screenshot of CPU usage)

cpp async_yolov5

Hi,

I successfully compiled and ran async_yolov5 and got processed_video.mp4.

The resulting video is not in the original resolution (it is 640x640) and there are no bounding boxes.

Can we get the output at the original video resolution with the bounding boxes drawn?

Best

post_processing undefined reference

Hi,
@erangur
I tried to compile in the Hailo SW Suite env and got the error below:

Ubuntu Release: 18.04

(hailo_venv) alp2080@alp2080:/data/hailo/hailo_sw_suite/yolov5_v7_det$ ./build.sh
-I- Building x86_64
-- Found OpenCV: /usr/local/include/opencv4
-- Configuring done
-- Generating done
-- Build files have been written to: /data/hailo/hailo_sw_suite/yolov5_v7_det/build/x86_64
[ 50%] Linking CXX executable vstream_yolov7_example_cpp
/usr/bin/ld: CMakeFiles/vstream_yolov7_example_cpp.dir/yolov5_yolov7_inference.cpp.o: in function post_processing_all(std::vector<std::shared_ptr, std::allocator<std::shared_ptr > >&, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::chrono::duration<double, std::ratio<1l, 1l> >&)':
/data/hailo/hailo_sw_suite/yolov5_v7_det/yolov5_yolov7_inference.cpp:84: undefined reference to post_processing(unsigned char*, float, float, unsigned char*, float, float, unsigned char*, float, float)'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/vstream_yolov7_example_cpp.dir/build.make:148: vstream_yolov7_example_cpp] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/vstream_yolov7_example_cpp.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

Issue about yolox_streaming_inference

Hello
I'm testing 2I640HL equipment.

Currently, I've even finished converting my custom yolo model to hef format and running it.
For the model execution, I referred to 'object_detection' guide in the hailo application code examples.

However, since the guide uses an image file as the input source, I'd like to connect a webcam to check the model results while streaming, so I tried to run the 'yolox_streaming_inference' guide.

I modified only the model name, path, and labeling part in the file provided by default and ran it, but the following error occurs.

I'm attaching the revised file and error message as below,
and I'd appreciate it if you could help me.

(screenshot of the error message)

yolo_streaming_inference.zip

Inquiring About Further Examples for YOLOv8-seg Inference on Windows

Hello, I have successfully run inference with a custom dataset using the YOLOv8 model on Windows, and the results were impressive. I am interested in further exploring how to perform inference with the YOLOv8-seg model on Windows, but I am unsure about how to obtain the segmentation branch results. The examples provided for Windows are still too limited. Will there be new code examples provided for this in the future?

Any more examples for Windows?

Hi, my dev environment is Win10 / C++ / Visual Studio, not Linux and GCC.

We need more examples for the Win10 / C++ / Visual Studio environment, not only the yolov5_windows example.

That example is very difficult for a beginner Hailo user to understand.

We need a basic and simple example covering the whole classification process:
e.g. read an image with OpenCV, feed that data to the Hailo device, read the result from the Hailo device, and show that result.

The hailort examples do not run in the Win10 / C++ / Visual Studio environment.

Thank you
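
Language aside, the end-to-end flow described here (read an image with OpenCV, push it to the device, read the result back, show it) looks roughly like the following with the Python API. This is a sketch assuming pyhailort 4.x and a hypothetical classification HEF, not a verbatim copy of any repository example:

import cv2
import numpy as np
from hailo_platform import (HEF, VDevice, ConfigureParams, HailoStreamInterface,
                            InferVStreams, InputVStreamParams, OutputVStreamParams,
                            FormatType)

hef = HEF("resnet_v1_50.hef")                      # hypothetical classification HEF
image = cv2.cvtColor(cv2.imread("cat.jpg"), cv2.COLOR_BGR2RGB)

with VDevice() as target:
    cfg = ConfigureParams.create_from_hef(hef, interface=HailoStreamInterface.PCIe)
    network_group = target.configure(hef, cfg)[0]
    ng_params = network_group.create_params()

    in_params = InputVStreamParams.make_from_network_group(
        network_group, quantized=True, format_type=FormatType.UINT8)
    out_params = OutputVStreamParams.make_from_network_group(
        network_group, quantized=False, format_type=FormatType.FLOAT32)

    in_info = hef.get_input_vstream_infos()[0]
    h, w, _ = in_info.shape
    frame = cv2.resize(image, (w, h))

    with InferVStreams(network_group, in_params, out_params) as pipeline:
        with network_group.activate(ng_params):
            results = pipeline.infer({in_info.name: np.expand_dims(frame, 0)})

    # "Show the result": for a classifier the output is a score vector per image.
    for name, output in results.items():
        scores = np.asarray(output)[0]
        print(name, "top-1 class index:", int(np.argmax(scores)))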

Not able to run my own yolov7 inference model

Dear team,

Thank you for your time. I downloaded the HEFs with get_hefs_and_video.sh, and they work fine.

Then I tried with my own inference model; here is the error message:

[HailoRT] [warning] HEF was compiled assuming clock rate of 400 MHz, while the device clock rate is 200 MHz. FPS calculations might not be accurate.
[HailoRT] [warning] HEF was compiled assuming clock rate of 400 MHz, while the device clock rate is 200 MHz. FPS calculations might not be accurate.
-I-----------------------------------------------
-I-  Network  Name                                     
-I-----------------------------------------------
-I-  IN:  yolov7_tiny/input_layer1
-I-----------------------------------------------
-I-  OUT: yolov7_tiny/conv43
-I-  OUT: yolov7_tiny/conv51
-I-  OUT: yolov7_tiny/conv58
-I-----------------------------------------------

-I- Started write thread: yolov7_tiny/input_layer1 (640, 640, 3)
-I- Started read thread: yolov7_tiny/conv43 (80, 80, 21)
-I- Started read thread: yolov7_tiny/conv58 (20, 20, 21)
-I- Started read thread: yolov7_tiny/conv51 (40, 40, 21)

-I- Starting postprocessing

Config file doesn't exist, using default parameters
[HailoRT] [error] CHECK failed - yolov7_tiny/input_layer1 (H2D) failed with status=HAILO_TIMEOUT(4), timeout=10000ms
[HailoRT] [error] CHECK_SUCCESS_AS_EXPECTED failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK_EXPECTED failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4) - HwWriteElement15_yolov7_tiny/input_layer1 (H2D) failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK failed - yolov7_tiny/input_layer1 (H2D) failed with status=HAILO_TIMEOUT(4), timeout=10000ms
[HailoRT] [error] CHECK_SUCCESS_AS_EXPECTED failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK_EXPECTED failed with status=HAILO_TIMEOUT(4)
[HailoRT] [error] CHECK_SUCCESS failed with status=HAILO_TIMEOUT(4) - HwWriteElement15_yolov7_tiny/input_layer1 (H2D) failed with status=HAILO_TIMEOUT(4)

To be more specific, there is no detection while running.

Do you have any idea how to address this issue?

Best,
Hui

Does not work if the class number is not 80

I retrained a YOLOv8 model with 20 classes, and it doesn't work with the example.
Since I'm not familiar with xtensor, we did a workaround to solve this issue. I'm not sure if this change is correct, but at least it works for us.
Below is the diff for your info:
yolov8_cpp/x86_64/yolov8_postprocess.cpp
(-) auto output_scores = xt::view(dequantized_output_s, xt::all(), xt::all(), xt::all());
(+) auto output_scores = xt::view(dequantized_output_s, xt::all(), xt::all(), xt::range(0, NUM_CLASSES));

Inference Issues with Converted Models on Windows - No Detection on my own model.

Recently, I have been attempting to perform image inference on Windows using models that I have converted myself. To verify the feasibility of my conversion method, I tested three models: person_v8n.onnx (from my own dataset with only a "person" category), yolov8n.onnx (converted from a pt model provided by the official ultralytics project), and yolov8n_hailo.onnx (a model provided by the official hailo_mz).

Following the instructions from the model_zoo, I used the same commands to convert the onnx models into hef models. Notably, person_v8n.onnx differs from the other two models in that it contains only one category, whereas both yolov8n.onnx and yolov8n_hailo.onnx encompass 80 categories. The calibration dataset for person_v8n.onnx consists of images from specific scenes, while the datasets for yolov8n.onnx and yolov8n_hailo.onnx are from coco_val2017. All three models were successfully converted to hef models, and I have integrated the NMS into the model files using yolov8n_nms_config.json.

However, during testing, I observed that while yolov8n.hef and yolov8n_hailo.hef were able to display bounding boxes on the images, person_v8n.hef seemed to detect no targets, as no bounding boxes were drawn on the images (no other errors occurred during the process, and the categories and vstream_output_data were modified according to the model specifications). Could anyone advise on potential reasons for this issue?
(result screenshots attached)

After running the code, there are no objects on the image... so sad!
