
Comments (13)

xmba15 commented on May 17, 2024

@cardboardcode I started this small library some time ago to learn how onnxruntime works, and yes, I have not been actively working on it recently. I am more than happy if you can help contribute API documentation for it.


cardboardcode commented on May 17, 2024

Okay.

Will get started on it. There is no definite date for when this will be completed, since I can only work on this in my free time.

Will track the progress here in this thread.


xmba15 commented on May 17, 2024

Hi @cardboardcode, sorry for the late reply.
That TestObjectDetection.cpp is actually one of the earliest examples I wrote for this library. Its main purpose is to test that the API in OrtSessionHandler gives the right inference output.
Also, at the beginning I got all the onnx models from the onnx model zoo without knowing the sizes of their outputs, so I used TestObjectDetection.cpp to print them out and see.
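As a rough illustration of that idea (this is only a sketch, not the actual TestObjectDetection.cpp code, and the model path below is just a placeholder), the output shapes of an unknown model can be printed with the plain onnxruntime C++ API roughly like this:

#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <vector>

int main()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "shape-probe");
    Ort::SessionOptions options;
    Ort::Session session(env, "model.onnx", options);  // placeholder model path

    // print the (possibly dynamic, i.e. -1) dimensions of every output
    for (size_t i = 0; i < session.GetOutputCount(); ++i) {
        Ort::TypeInfo typeInfo = session.GetOutputTypeInfo(i);
        auto tensorInfo = typeInfo.GetTensorTypeAndShapeInfo();
        std::vector<int64_t> shape = tensorInfo.GetShape();

        std::cout << "output " << i << ": [";
        for (size_t d = 0; d < shape.size(); ++d) {
            std::cout << shape[d] << (d + 1 < shape.size() ? ", " : "");
        }
        std::cout << "]\n";
    }
    return 0;
}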

Just in case you are not aware of it, I later used this tool to visualize the input/output sizes of an unknown model downloaded from OSS:
https://netron.app/

Thanks for the documentation work.


roachsinai commented on May 17, 2024

@xmba15 Thank you so muchhhh!!!


xmba15 commented on May 17, 2024

@cardboardcode Related to TestObjectDetection.cpp, I made this fix; hope it makes things clearer:
91fd474


xmba15 commented on May 17, 2024

Hi @cardboardcode
First of all, thanks for the hard work (I know it is very time-consuming to read, understand, and document code written by others).

After reading the documentation page, here is my feedback.

  1. & 2. To be honest, the documentation of the examples does seem wordy, and yes, I think a diagram would give users an easier explanation. Just like the diagram in Codebase Architecture, it would let readers see the whole structure of the classes at a glance.
  3. This is a small library and I didn't see any source files missed out. However, as you can see from the comments above in this thread, I have made changes to the yolov3 sample to use the weights from the onnx model zoo. I wonder how such changes can be reflected back into the documentation once it is in public use.
  4. Personally, I would like to see some sample detection result images in the documentation.

Thanks again for this. Can I mention the URL of this documentation in the README of this repo?


cardboardcode commented on May 17, 2024

TODO List

The list below shows which files have been documented so far:

  • Constants.hpp
  • ImageClassificationOrtSessionHandler.hpp
  • ImageRecognitionOrtSessionHandlerBase.hpp
  • ObjectDetectionOrtSessionHandler.hpp
  • OrtSessionHandler.hpp
  • include/Utility.hpp
  • examples/Utility.hpp
  • Yolov3App.cpp
  • MaskRCNNApp.cpp
  • TinyYolov2App.cpp
  • UltraLightFastGenericFaceDetectorApp.cpp
  • TestImageClassification.cpp
  • TestObjectDetection.cpp
  • Create dependency chart / inheritance graph.

The documentation can be accessed via this link:
https://onnx-runtime-cpp.readthedocs.io/en/latest/


cardboardcode commented on May 17, 2024

Hi @xmba15,

May I clarify which ImageNet-trained .onnx model you had in mind for TestObjectDetection.cpp?

I am asking because, while documenting TestObjectDetection.cpp, I realized it is not mentioned in the README.md and the .cpp only outputs the size of the resulting output tensor.


roachsinai commented on May 17, 2024

@xmba15 Sorry to bother you, but after compiling the whole repo I get an error when running yolov3:

(mp) owin ~/c/G/O/onnx_runtime_cpp master » ./build/Debug/examples/yolov3 data/yolov3/yolov3.onnx data/images/bird_detection.jpg
[/home/xq/code/Git/ONNX/onnx_runtime_cpp/src/OrtSessionHandler.cpp][initSession][Line 204] >>> Model number of inputs: 2

[/home/xq/code/Git/ONNX/onnx_runtime_cpp/src/OrtSessionHandler.cpp][initSession][Line 210] >>> Model number of outputs: 3

zsh: segmentation fault  ./build/Debug/examples/yolov3 data/yolov3/yolov3.onnx

I downloaded the yolo model from https://github.com/onnx/models/raw/master/vision/object_detection_segmentation/yolov3/model/yolov3-10.onnx

The command ./build/Debug/examples/TestImageClassification ./data/squeezenet1.1.onnx ./data/images/bird_detection.jpg runs successfully, though I don't know why I get a lot of repeated verbose output:

[/home/xq/code/Git/ONNX/onnx_runtime_cpp/src/OrtSessionHandler.cpp][operator()][Line 297] >>> type of input 1: float
794 : shower curtain : 0.068582
922 : menu : 0.0469442
549 : envelope : 0.0451862
619 : lampshade, lamp shade : 0.0370278
991 : coral fungus : 0.0339118

[/home/xq/code/Git/ONNX/onnx_runtime_cpp/src/OrtSessionHandler.cpp][operator()][Line 297] >>> type of input 1: float
794 : shower curtain : 0.068582
922 : menu : 0.0469442
549 : envelope : 0.0451862
619 : lampshade, lamp shade : 0.0370278
991 : coral fungus : 0.0339118

[/home/xq/code/Git/ONNX/onnx_runtime_cpp/src/OrtSessionHandler.cpp][operator()][Line 297] >>> type of input 1: float
794 : shower curtain : 0.068582
922 : menu : 0.0469442
549 : envelope : 0.0451862
619 : lampshade, lamp shade : 0.0370278
991 : coral fungus : 0.0339118

[/home/xq/code/Git/ONNX/onnx_runtime_cpp/src/OrtSessionHandler.cpp][operator()][Line 297] >>> type of input 1: float
794 : shower curtain : 0.068582
922 : menu : 0.0469442
549 : envelope : 0.0451862
619 : lampshade, lamp shade : 0.0370278
991 : coral fungus : 0.0339118

I think that command succeeds because the model is squeezenet1.1.onnx, which you uploaded in this repo.

It only has one input, whereas the yolov3 model I downloaded from the model zoo has 2 inputs:

Inputs name:  ['input_1', 'image_shape']
Outputs name:  ['yolonms_layer_1/ExpandDims_1:0', 'yolonms_layer_1/ExpandDims_3:0', 'yolonms_layer_1/concat_2:0']

Any suggestions would be appreciated! Thanks in advance!
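For reference, here is a rough sketch of how both inputs would need to be fed with the plain onnxruntime C++ API. The input and output names are the ones listed above; the 1x3x416x416 image shape, the float32 (height, width) image_shape convention, and the dummy data are assumptions based on the model zoo description, not code from this repo.

#include <onnxruntime_cxx_api.h>
#include <array>
#include <vector>

int main()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "yolov3-two-inputs");
    Ort::SessionOptions options;
    Ort::Session session(env, "yolov3-10.onnx", options);

    // input_1: the preprocessed image, NCHW float32 (1x3x416x416 assumed)
    std::vector<float> image(1 * 3 * 416 * 416, 0.f);  // dummy pixel data for illustration
    std::array<int64_t, 4> imageDims{1, 3, 416, 416};

    // image_shape: original image (height, width) as float32, shape 1x2 (assumed)
    std::array<float, 2> originalHW{720.f, 1280.f};
    std::array<int64_t, 2> originalDims{1, 2};

    Ort::MemoryInfo memInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    std::vector<Ort::Value> inputs;
    inputs.push_back(Ort::Value::CreateTensor<float>(memInfo, image.data(), image.size(),
                                                     imageDims.data(), imageDims.size()));
    inputs.push_back(Ort::Value::CreateTensor<float>(memInfo, originalHW.data(), originalHW.size(),
                                                     originalDims.data(), originalDims.size()));

    const char* inputNames[] = {"input_1", "image_shape"};
    const char* outputNames[] = {"yolonms_layer_1/ExpandDims_1:0",
                                 "yolonms_layer_1/ExpandDims_3:0",
                                 "yolonms_layer_1/concat_2:0"};

    // If an example feeds only input_1, image_shape is left unset, which could
    // explain the segmentation fault above.
    auto outputs = session.Run(Ort::RunOptions{nullptr}, inputNames, inputs.data(), inputs.size(),
                               outputNames, 3);
    return 0;
}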


xmba15 commented on May 17, 2024

@roachsinai I created a sample for the onnx model zoo's yolov3 in #11
Hope it helps.
Next time, please open a new issue.


cardboardcode commented on May 17, 2024

Hi @xmba15,

The documentation has been tentatively completed. Please view it at the following link:

https://onnx-runtime-cpp.readthedocs.io/en/latest/index.html

However, I do see the need for external feedback in order to improve the reading experience of the documentation.

Enquiry

May I get your feedback on the following documentation aspects and source files?

1. Does the documentation for the example App .cpp files, like Yolov3App, UltraLightFastGenericFaceDetectorApp, TinyYolov2App, TestObjectDetection and TestImageClassification, seem too wordy?

2. Would a diagrammatic process flow chart be a better alternative, in your opinion?

3. From your perspective, are there source files which have been accidentally missed out?

4. What additional content could be added to further improve the documentation?


cardboardcode commented on May 17, 2024

Yup. Please go ahead and mention the URL in the README.md.

Once again, thank you for this useful library. I will continue to address your feedback when I have time.

TODO 2021

  • Convert the documentation of Yolov3App to a diagrammatic format.
  • Convert the documentation of UltraLightFastGenericFaceDetectorApp to a diagrammatic format.
  • Convert the documentation of TinyYolov2App to a diagrammatic format.
  • Convert the documentation of TestObjectDetection to a diagrammatic format.
  • Convert the documentation of TestImageClassification to a diagrammatic format.
  • Draft a protocol for keeping the published documentation synchronized with the latest commit.
  • Include more sample detection result images in the documentation... (To be elaborated)


cardboardcode commented on May 17, 2024

TODO 2022

  • Create docs for YoloX: high-performance anchor-free YOLO by Megvii
  • Create docs for Semantic Segmentation Paddle Seg

